
The Quiet & What It Taught Me

I have been quiet since January.

Not the kind of quiet that comes from having nothing to say. The opposite. The kind that comes when the world moves so fast, and in so many directions at once, that the most honest response is to stop talking and start paying attention.

In January, the United States launched a military operation in Venezuela. Troops. Warships. Strikes on a sovereign capital. A head of state seized in the dead of night and flown to New York in handcuffs. For those of us who live and work across the Americas, who build our lives in the space between nations, the ground shifted. Not metaphorically. The architecture of hemispheric relations changed in a matter of hours.

I had a plan. I had essays outlined, themes mapped, a rhythm of writing I was ready to sustain. But then Venezuela happened. And then everything else happened. And I realized that the responsible thing was not to react. It was to understand.

So I went quiet. I watched. I read. I built. And what I want to share now is not a hot take or a forecast. It is what I found while I was paying attention.

A New Member of Society

Let me start with what sounds like a strange claim but is, I believe, the most important observation of our era.

We have a new member of society.

I do not mean a new product. I do not mean a new tool, a new app, a new convenience. I mean something closer to a new kind of presence. A new participant in how decisions get made, how knowledge gets organized, how institutions get built.

Whether we grant it rights is a conversation for philosophers and legislatures. That is not the point right now. The point is that these systems are already here, already shaping outcomes, already participating in the construction of the world we live in. And the vast majority of people are treating them like a new model of iPhone.

That gap between what AI actually is and how most people understand it is the most dangerous gap in public life today. Not because AI itself is dangerous. Because misunderstanding it leaves us unable to direct it. And what we cannot direct, others will direct for us.

What Happened While You Were Not Looking

In the space of a few months, starting in December 2024, two things happened that should have stopped the world.

Google unveiled Willow, a quantum computing chip that performed, in under five minutes, a benchmark computation that the most powerful classical supercomputers on Earth would need longer than the age of the universe to complete. That is not rhetoric. That is Google's published calculation.

Two months later, in February 2025, Microsoft announced Majorana 1, a chip built on an entirely new type of qubit, the topological qubit, designed to be more stable and more scalable than anything that came before it.

Most people missed both of these announcements entirely. They were buried under news cycles, holiday distractions, political noise. But for anyone watching the deeper current, the message was unmistakable: the computational foundation underneath artificial intelligence is preparing to change. Not incrementally. Fundamentally.

Every AI system in use today runs on classical computing. Binary code. Zeros and ones. A quantum machine runs on qubits, which operate in superposition, holding multiple states simultaneously. The difference is not one of degree. It is one of kind. And what that means, in practical terms, is that the systems we are already struggling to understand could become orders of magnitude more powerful as this hardware matures.
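For readers who want that distinction in its textbook form, here it is; nothing in these two lines is specific to Willow or Majorana 1:

```latex
% A classical bit is 0 or 1. A qubit is a weighted blend of both:
\[
  \lvert\psi\rangle = \alpha\lvert 0\rangle + \beta\lvert 1\rangle,
  \qquad \lvert\alpha\rvert^{2} + \lvert\beta\rvert^{2} = 1.
\]
% n classical bits hold one of 2^n values at a time;
% n qubits carry a vector of 2^n amplitudes at once:
\[
  \lvert\Psi\rangle = \sum_{x \in \{0,1\}^{n}} c_{x}\,\lvert x\rangle.
\]
```

That exponential state space, two to the n amplitudes from n qubits, is where the age-of-the-universe comparisons come from.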

Here is something worth sitting with. Technology has always moved through the world in a consistent sequence: military and intelligence applications first, major corporations second, consumer products third. Each tier receives a less powerful version than the one before it. Radar. The internet. GPS. Encryption. The pattern is not a theory. It is a historical record.

When Google announces a quantum breakthrough publicly, the honest question is not only what that means for the future. It is what already exists that we have not been shown. The systems serving governments and the largest corporations right now are almost certainly more capable than what any of us are accessing through a monthly subscription. That is not conspiracy. That is how technology has always been released.

And if the consumer-facing tools are already this powerful, the implications of what operates above them deserve serious, sustained attention.

The Architecture No One Is Talking About

Here is what I have spent years thinking about, building with, and testing.

When you use ChatGPT, your conversation history, your memory, your custom instructions, and the governance documents that determine how the system behaves all live on OpenAI’s servers. In their cloud. Under their control. They decide what the system remembers about you. They decide how it prioritizes information. They write the rules.

This is not a criticism of any single company. It is an observation about architecture. And architecture is power.

What I discovered, working with Anthropic’s Claude and particularly with Claude Code, is that a fundamentally different model exists. One where the governance files, the memory, the context that shapes how the system thinks and responds can live on your machine. You write the rules. You define what it remembers. You shape the context in which it operates.
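To make that concrete, here is a minimal sketch using Anthropic's Python SDK. In Claude Code itself, the governance role is played by a CLAUDE.md file in your project; in this sketch the file name governance.md, its contents, and the model string are my own illustrative choices, not anything Anthropic prescribes. The architecture is the point: the rules live in a file on your disk and travel with every request.

```python
# A minimal sketch of locally owned governance, assuming the Anthropic
# Python SDK (pip install anthropic) and an ANTHROPIC_API_KEY in the
# environment. File name, file contents, and model string are illustrative.
from pathlib import Path

import anthropic

# The "constitution": a plain file I can read, version, and edit.
rules = Path("governance.md").read_text()

client = anthropic.Anthropic()
reply = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model identifier
    max_tokens=1024,
    system=rules,  # rules I wrote, not rules hidden on a vendor's server
    messages=[{"role": "user", "content": "Summarize our data-retention policy."}],
)
print(reply.content[0].text)
```

Nothing in that snippet is sophisticated, and that is the point. The system prompt is a file you can read, diff, and audit, which is exactly what the hosted consumer products do not let you do.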

For most people, the difference is invisible. They open an app, ask a question, get an answer. But for anyone building institutions, running organizations, or trying to use these systems for something more than casual convenience, the difference is enormous.

It is the difference between renting someone else’s intelligence and owning your own.

I want to be clear about what I mean. I am not talking about raw computing power. I am talking about governance. Who writes the constitution that a system operates under? Who decides its values, its priorities, its boundaries? In the majority of commercial AI products, you never see those documents. You cannot read them. You cannot modify them. You live inside rules written by someone else, for purposes that may or may not align with yours.

There is another way. And the fact that most people do not know it exists is one of the great missed opportunities of this moment.

The Paradox We Need to Name

In the United States, Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez have begun raising important questions about who controls these systems. Their instinct is right. The concentration of AI infrastructure in the hands of a few corporations raises legitimate concerns about equity, access, and power.

But here is the paradox that no one in that conversation is naming.

The very tool they are questioning is the single most powerful democratizing force available to everyday people right now.

I say this as someone who shares their vision. I believe in making life affordable for ordinary people. I believe in creating the same opportunities for a first-generation college student that exist for someone born into wealth. I have spent my entire career building institutions that serve exactly that purpose.

And what I can tell you, from direct experience, is that artificial intelligence is the first technology in my lifetime that genuinely levels the playing field for builders. Not theoretically. Practically. A team of six people with clear vision can now construct what previously required teams of sixty and budgets in the millions. The young woman in Bogotá with a plan for educational reform. The healthcare worker in rural Mexico who sees patients falling through the cracks. The social entrepreneur who has been told for years that their idea is beautiful but they lack the resources to execute it.

Those resources now exist. The tools are here. The cost has dropped to a level that would have been unimaginable five years ago.

The impulse to regulate is correct. The impulse to ensure transparency about what the most powerful systems are doing and who they serve is essential. But the impulse to restrict access to these tools, to treat them as threats rather than instruments of equity, would be one of the great miscalculations of our time. It would protect the position of those who already have resources while cutting off the very people these leaders claim to champion.

What we need is not less AI. It is more transparency about the tiers of AI that already exist. It is honest conversation about what governments and corporations are already using. It is regulatory frameworks that ensure the most powerful capabilities are not permanently locked away from the public. And it is the political will to ensure that access remains broad, affordable, and genuinely democratic.

Two Visions, Two Grids

While I was quiet, the geopolitical landscape of AI clarified.

The United States and China represent two fundamentally different visions for the future of artificial intelligence. The American model is largely proprietary. Corporations build closed systems. You pay monthly subscriptions. Your data feeds their ecosystems. The rules of engagement are theirs.

China, through projects like DeepSeek and its broader open-source strategy, offers a different proposition: free access to powerful models. Which sounds democratizing until you consider why a state-controlled ecosystem would give away its most advanced technology. The question is not whether DeepSeek works well. It does. The question is what the actual arrangement is when the product is free and the source is an authoritarian government.

Neither model was designed with the sovereignty of the Global South in mind. Neither model asks what Latin America, Africa, or Southeast Asia actually needs from this technology. Both models assume those regions will be consumers, not builders.

And then there is energy.

Training a single large AI model can consume as much electricity as thousands of homes use in a year. Every query, every image, every conversation requires power. And that power comes overwhelmingly from two electrical grids: the United States and China.
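That claim is easy to sanity-check. Both numbers below are rough public estimates, not measured figures, so treat the result as an order of magnitude rather than a fact:

```python
# Back-of-envelope check on the "thousands of homes" claim.
# Both inputs are rough public estimates, not measured figures.
training_energy_kwh = 50_000_000  # ~50 GWh: one circulating estimate for a frontier training run
household_annual_kwh = 10_500     # ballpark annual electricity use of a US household

home_years = training_energy_kwh / household_annual_kwh
print(f"Roughly {home_years:,.0f} households' worth of annual electricity")
# -> Roughly 4,762 households' worth of annual electricity
```

Even if the training estimate is off by a factor of two in either direction, the conclusion, thousands of homes, holds.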

Every nation using artificial intelligence is functionally dependent on the energy infrastructure of those two countries. We do not produce the energy. We do not control the servers. We do not write the governance rules.

This is not a reason to stop using AI. That would be like refusing to use electricity because the grid belongs to someone else. It is a reason to start building. To invest collaboratively in energy infrastructure, in regional computing capacity, in the kind of shared architecture that gives nations and communities a voice in how these systems operate. The nations that participate in building this infrastructure will shape it. The ones that only consume it will be shaped by it.

What I Have Been Building

I did not go quiet to wait. I went quiet to build.

Over the past three months, I have used these tools to construct what would previously have required teams of dozens and budgets in the millions. Infrastructure for organizations that span borders. Systems for coordinating healthcare across the hemisphere. Platforms that serve institutions I have spent two decades building.

I say this because the implication is staggering. And it is the part of this story that fills me with the most genuine hope.

Think about what this means. Not for Silicon Valley. Not for the corporations already sitting on billions of dollars. For the communities that have never had access.

For the indigenous language preservation project that needs a digital platform but has no engineering budget. For the cooperative in Central America that wants to connect farmers directly to markets across the hemisphere. For the medical clinic in a border town that needs to coordinate care with specialists in three countries. For every builder, organizer, educator, and dreamer who has spent years navigating a world where the cost of building meaningful infrastructure excluded them by design.

Those barriers are falling. Not completely. Not without new challenges. But in a way that is genuinely new, and that deserves to be championed rather than feared.

The Competition That Matters

While I watched, the rivalry between Anthropic and OpenAI became one of the defining stories of our time.

OpenAI moved aggressively toward commercialization. They introduced advertising into their free tier. They signed contracts with the Pentagon. When Anthropic refused to remove safeguards preventing the use of its technology for mass surveillance and autonomous weapons, OpenAI filled the gap within hours.

Anthropic held a different line. They refused the Pentagon’s demands. They were designated a supply chain risk by the Department of Defense. A federal judge had to intervene, calling the government’s actions what they appeared to be: First Amendment retaliation.

Meanwhile, Claude Code became a phenomenon. Not because of marketing. Because it worked in a way that changed what developers and builders could accomplish. A tool that lives in your terminal, that understands your codebase, that operates as an agent rather than an assistant. Anthropic’s revenue grew from one billion dollars at the end of 2024 to fourteen billion by February 2026. Claude Code alone generates over two and a half billion dollars annually.
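If you have never seen it, the entire interface is a command in a terminal. The flag below is Claude Code's real non-interactive mode; the task itself is my own invented example:

```sh
# One-shot, non-interactive invocation from inside a project directory.
claude -p "find every place we log patient identifiers and flag anything that leaves this machine"
```

That is what "agent rather than assistant" means in practice: you hand it an objective over a whole codebase, not a sentence to autocomplete.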

This is not a story about which company makes more money. It is a story about two fundamentally different philosophies of power. One says: we will build the most powerful system possible, and we will control the rules. The other says: we will build a powerful system and give you the tools to write your own rules.

That philosophical divide has real consequences. For entrepreneurs. For institutions. For nations. For anyone who cares about who controls the cognitive infrastructure of the century ahead.

What People Need to Understand

I am often asked to simplify this subject. To give people a headline they can carry with them.

Here is the headline: this is not about technology. It is about power. And it is about possibility. Both at the same time.

The people in my generation and the generation before mine tend to see AI as a tool. Something that helps write emails, search for information, generate images. And it does all of those things. But reducing it to that is like reducing the internet to a faster way to send letters.

What is at stake is who controls the cognitive infrastructure of the twenty-first century. Who writes the rules. Who owns the memory. Who shapes the behavior of systems that will mediate nearly every domain of human activity: education, healthcare, commerce, justice, governance.

And simultaneously, what is at stake is who gets to build with these tools. Whether the most transformative technology of our lifetime remains concentrated in the hands of a few, or whether it becomes the foundation for a genuinely new era of creation. An era where the quality of your idea matters more than the size of your budget. Where vision and clarity can outpace capital and connections.

Both of these things are true at the same time. The power question is real. The possibility is also real. And the worst thing we can do is let fear of the first blind us to the second.

If you do not understand how it works, someone else will make the decisions that shape your world. That is not a future threat. It is happening now.

But if you do understand, even at a basic level, the doors that open are extraordinary.

Why I Came Back

I came back because silence, past a certain point, becomes complicity with ignorance. With the comfortable assumption that someone else is paying attention. That the people who understand these systems are surely building them responsibly. That the institutions we trust are surely adapting.

They are adapting. But not fast enough. Not with enough transparency. And not with the right questions.

The right question is not “How do I use AI?” and it is not “How do we stop AI?” The right question is: whose intelligence are you using? When a system helps you think, decide, create, whose rules govern its behavior? Whose values does it encode? Whose memory does it accumulate? Whose context shapes its priorities?

We have fought battles throughout history over territorial sovereignty, over natural resources, over political self-determination. The next great project is not a battle. It is a construction. The collaborative building of cognitive infrastructure that serves communities, serves nations, and helps us avoid the kind of concentrated power that leads to conflict.

I believe we are capable of that. I believe the tools are here. I believe they represent one of the most hopeful developments in human history, if we engage with them honestly and build the collaborative frameworks to direct them well.

And I believe the people who will lead that construction are not waiting for permission. They are already paying attention.

I will not go quiet again anytime soon. There is too much to build, too much to explain, and too many people who deserve to understand what is happening around them and what is possible because of it.

This is the first of several essays.