Why We Need a Supertribe
Version: 1.0
Date: January 6, 2026
Status: Draft
The Power Problem
Something unprecedented is happening to power.
For most of human history, power was bounded. A tyrant could control a city, maybe a region. An empire could dominate a continent for a century or two before fragmenting. Even the most ambitious conqueror faced limits—geography, communication, the sheer difficulty of coordinating large numbers of humans across distance.
Those limits are evaporating.
AI systems can now process more information in a second than a human can in a lifetime. Surveillance technologies can track the movements, communications, and even the emotional states of entire populations. Algorithmic systems shape what billions of people see, think, and want—invisibly, at scale, in real time. And these capabilities concentrate in the hands of a shrinking number of actors: a few technology companies, a few governments, a few individuals with the resources to deploy them.
This isn't science fiction. It's Tuesday.
The entities wielding these tools aren't necessarily malicious. Most aren't. They're optimizing for engagement, or efficiency, or security, or profit—goals that seem reasonable in isolation. But the cumulative effect is a world where power operates at scales and speeds that make traditional counterweights obsolete.
What do you do when a handful of actors can influence what billions believe?
The Democratic Precedent
We've faced versions of this problem before.
In 1776, a group of colonists confronted concentrated power—a hereditary monarch and an empire that viewed them as subjects to be governed, not citizens to be consulted. Their response wasn't to find a better monarch. It was to imagine a different kind of political order entirely.
"We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights..."
The American founders weren't naive about power. They knew humans were self-interested, tribal, prone to faction. Madison wrote in Federalist 51: "If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary."
Since we aren't angels, we need both internal controls (virtue) and external controls (institutional checks). Neither alone suffices.
Their solution was structural: divide power, balance it, make ambition counteract ambition. Create a system where no single actor—no king, no faction, no majority—could accumulate enough control to dominate the rest. And they grounded this structure in shared principles that transcended faction: rights that belong to everyone, not because the powerful grant them, but because they're inherent in being human.
This was a supertribe.
Not the word they used. But the concept: a large, diverse group of people—different religions, different economic interests, different regional loyalties—united by very few shared principles. The green zone was small: consent of the governed, equal rights, rule of law. Everything else was blue and gray—vast space for disagreement, competition, even conflict—as long as it stayed within the bounds the principles established.
It worked. Imperfectly, often hypocritically, with catastrophic failures along the way. But for two and a half centuries, the American experiment demonstrated that millions of people who disagreed about almost everything could cooperate without killing each other. That was not obvious. It required conscious effort, maintained across generations.
The New Concentration
Now consider what's different.
The founders designed for a world where power concentrated through governments—armies, taxes, laws enforced by physical coercion. The checks they created—separation of powers, federalism, enumerated rights—addressed that threat.
They couldn't have anticipated AI.
Today, power concentrates not just through states but through systems: algorithms that decide what information reaches you; platforms that shape how you connect with others; databases that know more about you than you know about yourself. These systems don't need armies. They don't need laws. They operate in the space between traditional categories—not quite government, not quite market, something new.
Shoshana Zuboff calls it "Big Other"—distinguished from Orwell's "Big Brother" by its mechanism. Big Brother was state power, political oppression, visible coercion. Big Other is corporate power, economic extraction, invisible modification. The goal isn't to control the population through force but to guarantee behavioral outcomes through architecture that operates beneath conscious awareness.
And AI accelerates everything.
The capabilities that were science fiction a decade ago are now infrastructure. Language models that generate human-quality text. Image generators that fabricate evidence. Systems that can simulate conversation, compose music, write code, analyze patterns across datasets too large for human comprehension. Each capability, in isolation, offers genuine benefits. Collectively, they represent a power concentration without historical precedent.
The question isn't whether these tools will be used. They already are. The question is: who wields them, to what ends, and what counterweights exist?
Why Individual Action Isn't Enough
There's a tempting response to this: become more vigilant yourself. Develop media literacy. Check your sources. Cultivate skepticism. Unplug occasionally.
All good advice. Necessary, even. But insufficient.
The mismatch is too great. You're one nervous system, evolved over millions of years to detect threats from predators and navigate social dynamics in groups of 150. The systems you're up against are designed by thousands of engineers, trained on the behavioral data of billions, optimized by algorithms running millions of experiments per day, and deployed by entities with more resources than most nations.
This isn't a fair fight. It's not supposed to be.
Individual consciousness—the capacity to reflect, to question, to choose—remains essential. You can't build anything better without people who can think clearly about what's happening. But individual consciousness operating in isolation is easily overwhelmed, manipulated, or simply worn down.
The founders understood this. They didn't ask individual citizens to personally check the power of the British crown. They built structures—institutions, processes, shared commitments—that enabled collective action at scale.
We need the equivalent for this moment.
The Supertribe Solution
Here's what a supertribe offers.
Not a faction. Not an ideology. Not another team competing for dominance in the tribal wars that algorithms love to inflame. Something different in kind.
A supertribe is defined by scale and by scarcity: many members, few demands. It includes vastly more people than a comfort tribe—potentially millions, potentially the whole species. But it asks far less agreement. The shared principles are minimal: perhaps just a commitment to honest inquiry, mutual dignity, and solving problems through reason rather than force.
Everything else—beliefs, values, lifestyles, politics—lives in the blue and gray zones. Not required. Not forbidden. Tolerated, even celebrated as the diversity that makes the collective smarter than any individual.
This isn't soft pluralism. The green zone matters. Violate those few principles—through deception, coercion, contempt for evidence, refusal to honor agreements—and you're out. The boundaries are firm precisely because they're few.
What does this accomplish against concentrated AI power?
First, distributed intelligence. No single entity, however capable, can match the collective intelligence of billions of people observing, questioning, experimenting, and sharing what they learn. But this collective intelligence only functions if it's coordinated—not centrally controlled, but operating within shared epistemic norms that allow diverse observations to aggregate into knowledge. A supertribe provides that coordination.
Second, legitimate counterpoise. When a handful of actors can shape what billions believe, the only counterweight is billions acting together on shared principles. Not in lockstep—that's just a different concentration of power. But in loose coordination, defending the norms that prevent any single actor from dominating.
Third, epistemic validation. Here's the insight that elevates the supertribe from mere pragmatism to something closer to truth-seeking: when vastly different people—different cultures, different experiences, different cognitive styles—independently converge on the same principles, that convergence carries epistemic weight.
Think about it. Agreement within a small, homogeneous group might just be an echo chamber. Shared conclusions among similar people might reflect shared biases. But when wildly different people, using different methods, coming from different starting points, land on the same commitments? That's signal.
The supertribe's breadth isn't just strategic—it's epistemic. The diversity of agreement suggests the principles aren't merely tribal preferences but something more likely to be true.
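To make that intuition a bit more concrete, here is a minimal sketch in standard probability notation. It is an idealization, not part of the argument above: it assumes each observer has the same chance of being wrong and, in the second case, that their errors are fully independent.

```latex
% A minimal model of why diverse, independent agreement is stronger
% evidence than agreement inside an echo chamber.
% Idealized assumptions: k observers each endorse the same principle,
% and each has probability e of being mistaken, with 0 < e < 1.

% Perfectly correlated observers (one shared perspective): if one errs,
% all err together, so agreement tells you little.
\[
  P(\text{shared error} \mid \text{correlated}) = e
\]

% Fully independent observers (different methods, different starting
% points): a shared error requires every observer to fail separately.
\[
  P(\text{shared error} \mid \text{independent}) = e^{k}
\]

% Example: with e = 0.3 and k = 5, the chance of a shared error falls
% from 0.3 to 0.3^5, roughly 0.0024. Real observers are never fully
% independent, so the numbers are only illustrative; the direction of
% the effect is what matters.
```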
The Democratic Connection
The American founders were creating a supertribe, whether they knew the term or not.
They were asking: can people who disagree about almost everything agree on enough to live together without violence? Can competing factions coexist within a structure that prevents any one from dominating? Can ambition counteract ambition so that power remains distributed?
Their answer was yes—if you get the structure right, and if citizens maintain the capacity and commitment to defend it.
That "if" is doing a lot of work.
Democratic citizenship isn't automatic. It requires what Alexis de Tocqueville observed: voluntary association, civic habit, the practice of working with diverse others toward shared goals. Town meetings, he wrote, "are to liberty what primary schools are to science; they bring it within the people's reach, they teach men how to use and how to enjoy it."
This capacity must be developed, not inherited. Each generation faces the choice: maintain the norms and institutions that enable cooperation across difference, or let them atrophy while retreating to the comfort of factions that feel like home.
The founders called this capacity "republican virtue." They knew the Constitution alone couldn't sustain itself. Mere words on paper, Madison warned, are only "parchment barriers." The structure requires citizens who value it enough to defend it—and who've developed the habits and skills to do so.
The supertribe concept is this insight generalized. Not bound to one nation's particular founding documents. Applicable anywhere humans face the challenge of cooperation across difference at scale.
The Technology Moment
So here we are.
AI capabilities advancing faster than governance can adapt. Power concentrating in a few companies, a few governments, a few individuals who happen to control the infrastructure. Algorithmic systems optimizing for engagement—which, it turns out, often means optimizing for tribal conflict, because outrage drives clicks.
We face a choice.
One path: fragmentation. Retreat into comfort tribes, defined by who we oppose. Let algorithms sort us into hostile factions, each convinced the other is evil, none capable of the cooperation that complex problems require. Watch as concentrated power fills the vacuum—not because it conquered, but because nothing else remained coherent enough to resist.
The other path: conscious affiliation. Choose allegiance to a larger whole, defined not by who we hate but by what we share. Develop the capacity to work with people profoundly different from ourselves. Build structures—new structures, adequate to new threats—that distribute power, check concentration, enable collective intelligence.
This isn't utopian. It's practical.
The founders faced a similar choice. They could have remained British subjects, complaining about taxation without representation while accepting the fundamental legitimacy of monarchical rule. They could have fragmented into competing colonies, each pursuing narrow interest. Instead, they created something new—a structure for cooperation at continental scale, grounded in principles thin enough to unite diverse people yet firm enough to provide real constraint.
We need the equivalent for the AI age.
What Supertribe Citizenship Looks Like
If this is the need, what's the practice?
First: consciousness. Not mystical awareness but directed attention. The capacity to notice when you're being manipulated—by algorithms optimizing for engagement, by tribal dynamics sorting you into teams, by systems designed to bypass your reflection. You can't participate in supertribe governance if you can't think clearly about what's happening.
Second: principle over tribe. The willingness to criticize your own faction when it violates shared principles—and to acknowledge when your opponents act consistently with them. This is psychologically costly. Your comfort tribe will punish you for it. But supertribe citizenship requires it.
Third: cooperation across difference. The practiced ability to work with people who differ from you profoundly, united by minimal shared commitment rather than thick shared identity. This doesn't come naturally. Humans evolved for small-group cooperation with people like themselves. Supertribe capacity must be developed.
Fourth: institutional investment. Building and maintaining the structures that enable collective action—not just political institutions but epistemic ones: norms of honest inquiry, practices of evidence-based reasoning, systems for aggregating diverse observations into reliable knowledge. These structures are as essential to supertribe function as courts and legislatures are to democratic governance.
Fifth: long-term commitment. The founders signed their Declaration pledging "our Lives, our Fortunes, and our sacred Honor." They understood that building something at civilizational scale requires more than momentary enthusiasm. Supertribe citizenship is a multi-generational project—maintaining norms and institutions across time, handing down the capacity to each generation that follows.
The Historical Analogy
Think of the American founding as a proof of concept.
A collection of separate colonies, each with different interests, different populations, different economic bases. No obvious reason they should cooperate—plenty of reasons they should compete. But they faced a common threat (concentrated British power) and recognized a common opportunity (the possibility of self-governance).
They created a supertribe: the United States. Not a comfort tribe—Americans have always disagreed vehemently about almost everything. But a supertribe: a large, diverse group united by minimal shared principles (consent of the governed, enumerated rights, rule of law), with vast space for difference on everything else.
The principles weren't perfect. The founders themselves violated them. The project has required constant correction. But the structure has survived—imperfectly, precariously—for two and a half centuries.
Now extend the model.
The threat isn't one monarch but the general tendency of power to concentrate in whoever controls the most capable systems. The opportunity isn't self-governance for one nation but cooperation across the species to navigate technologies that don't respect borders.
The principles remain similar: consent, dignity, reason over force, checks on concentration. The scale expands.
The Wager
Here's the bet.
If we're wrong—if supertribe thinking is naive, if humans can't cooperate across profound difference, if power concentration is inevitable and resistance is futile—then we've wasted some effort organizing. We end up with more community than we would have had otherwise. Not the worst failure mode.
If we're right—if conscious collective action can check technological concentration, if distributed intelligence can outpace centralized control, if structures built now can persist and protect for generations—then we've done something that matters. We've participated in the next chapter of the human story, not as passive recipients of whatever concentrated power delivers, but as co-authors of what comes next.
The asymmetry favors action.
Pascal applied similar logic to belief in God: if you believe and you're wrong, you lose little; if you believe and you're right, you gain everything. The supertribe wager is analogous: if you invest in collective capacity and it fails, you've built community; if it succeeds, you've helped preserve human agency against forces that would concentrate it beyond recovery.
The Call
We need a supertribe.
Not another faction. Not another team in the tribal wars. Not another ideology competing for dominance.
A consciously chosen affiliation with a large, diverse group united by very few shared principles—principles thin enough to include billions, firm enough to provide real constraint.
The founders created one, imperfect but functional, and it sustained cooperation across centuries despite profound disagreement. We need the equivalent for the AI age, when power concentrates not just through governments but through systems that operate beneath conscious awareness.
The principles might be similar to what the founders articulated: consent, dignity, reason over force. The structures will need to be new—adequate to threats they couldn't imagine. The capacity to participate must be developed, not inherited.
None of this is easy. Supertribe thinking goes against automatic tribal instincts. It requires conscious override of defaults that evolution built into us. It demands cooperation with people we'd rather not understand and principle over faction when faction feels like home.
But the alternative is worse.
The alternative is fragmentation—each of us retreating to comfortable tribes that can't solve the problems we face. The alternative is concentration—power accumulating in whoever controls the systems, with nothing coherent enough to resist. The alternative is drift—letting history happen to us rather than participating in its authorship.
The founders faced a similar choice and chose differently. They pledged their lives, their fortunes, their sacred honor to an experiment in self-governance. They couldn't know it would work. They chose to act anyway.
We face the same choice, at larger scale, with higher stakes.
What will we choose?