The Dictator's Handbook
Power pyramids matter
Bruce Bueno de Mesquita and Alastair Smith developed selectorate theory through formal mathematical modeling in political science during the 1990s and early 2000s. The theory proposed that political outcomes could be predicted by analyzing three populations: the nominal selectorate (everyone who could theoretically choose a leader), the real selectorate (those who actually choose), and the winning coalition (the essential supporters whose loyalty keeps a leader in power).
The relative sizes of these different groups have a profound effect on the nature of the outcomes.
Academic Reception
The academic reception was mixed. Using game theory to predict political behavior was controversial. The theory's stark claim - that leaders prioritize survival over public welfare, and that this explains governance patterns across democracies and autocracies alike - struck some as overly reductive.
The empirical record was harder to dismiss. Bueno de Mesquita and Smith analyzed bilateral aid transfers by OECD nations between 1960 and 2001. They found that leaders in recipient countries were more likely to grant policy concessions when their winning coalitions were small, since they could easily compensate their supporters for unpopular decisions. The mathematics kept predicting real outcomes.
Popularization
In 2011, Bueno de Mesquita and Smith wrote The Dictator's Handbook: Why Bad Behavior is Almost Always Good Politics. The book translated selectorate theory for general readers. The argument: when the winning coalition is small, leaders use private goods to maintain support. When the winning coalition is large, leaders must provide public goods, because individually rewarding millions of supporters is economically impossible.
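The private-versus-public-goods argument is at bottom simple arithmetic, and it can be sketched as a toy model. This is my own illustrative construction, not the book's formal mathematics; the function name, the linear payoffs, and the specific numbers are all assumptions for the sake of the example.

```python
def preferred_goods(budget: float, coalition_size: int,
                    public_value_per_person: float) -> str:
    """Which spending strategy gives each essential supporter more?

    Private goods split the budget among coalition members only, so the
    per-supporter payoff shrinks as the coalition grows. Public goods
    deliver roughly the same per-person value regardless of coalition
    size. Toy model for illustration only.
    """
    private_value_per_supporter = budget / coalition_size
    if private_value_per_supporter > public_value_per_person:
        return "private goods"
    return "public goods"

# With a $1bn budget and $50 of per-person value from public spending,
# a 1,000-person coalition is best bought off privately...
assert preferred_goods(1e9, 1_000, 50.0) == "private goods"
# ...while a 50-million-person coalition flips the leader to public goods.
assert preferred_goods(1e9, 50_000_000, 50.0) == "public goods"
```

The crossover is exactly the book's qualitative claim: nothing about the leader's morality changes between the two calls, only the size of the group whose loyalty must be purchased.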
The stark contrast in outcomes they illustrate — through case studies of aid, corruption, and corporate governance — can be phrased in terms of pyramids of power. A steep pyramid favours the few, and few public goods. A gently sloping pyramid favours things done for the public good, and is better for the majority.
The theory extends beyond governments. Most publicly traded companies operate on the dictator side of the scale - a small number of people determine CEO survival, small enough that enriching this group matters more than creating shareholder value.
Value of Simplicity
What makes selectorate theory useful is that it predicts behavior without requiring moral judgments. The simplicity of the framework - three variables predicting complex political outcomes - made it memetically successful. Critics argued the theory was too reductive - that it worked better as a first-order approximation than as a complete model.
Simple models have value. They suggest that any mechanism that reduces the cost of maintaining a small coalition - better surveillance, more effective propaganda, cheaper ways to reward loyalists - pushes organizations toward autocratic structures. Any mechanism that makes large-coalition governance more efficient - cheaper education, easier information access, lower coordination costs - pushes toward democratic structures.
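The pressure described above can itself be made arithmetic. In a toy framing (again my own construction, not the authors'), there is a crossover coalition size below which private rewards beat public goods; anything that makes loyalty cheaper to buy raises that crossover, keeping small-coalition rule viable at larger scales.

```python
def crossover_coalition_size(budget: float,
                             public_value_per_person: float,
                             private_cost_multiplier: float = 1.0) -> float:
    """Coalition size above which public goods beat private rewards.

    A private_cost_multiplier below 1.0 models mechanisms that make
    rewarding loyalists cheaper (better surveillance, cheaper patronage):
    each budget dollar then buys more loyalty, the crossover rises, and
    private-goods rule stays viable for bigger coalitions. Toy model,
    my own construction.
    """
    return budget / (public_value_per_person * private_cost_multiplier)

baseline = crossover_coalition_size(1e9, 50.0)            # 20 million
cheap_loyalty = crossover_coalition_size(1e9, 50.0, 0.5)  # 40 million
assert cheap_loyalty > baseline
```

Halving the cost of rewarding loyalists doubles the coalition size at which autocratic spending still pays, which is the model's way of saying such mechanisms push organisations toward steep pyramids.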
Which brings us to AI.
AI and Power
AI as part of the collective commons - a public good
AI is unquestionably a power amplifier. I contend that AI will enable us to do science faster. It has other power-boosting properties too: it makes surveillance easier, it makes targeted advertising easier, and it reduces research costs in engineering companies.
AI is unprecedented - an engine that can replace knowledge work at scale. The Dictator's Handbook's framing suggests we should care a great deal about whether it is a public good, or whether it is in the hands of a very few.
The closest precedent is the steam engine - an engine that replaced manufacturing work at scale and powered the industrial revolution.
Factory Discipline
The industrial revolution demanded a transformation of how humans related to time and work.
Before factories, most workers controlled their own rhythms. Sidney Pollard documented their resistance when manufacturers first attempted to impose regular hours on them. "The men themselves were considerably dissatisfied," reported one employer, "because they could not go in and out as they pleased, and have what holidays they pleased." The regular hours experiment failed. Workers found the constraints intolerable. The employer "was obliged to break it up."
E.P. Thompson noted how this required retraining the human sense of time. Pre-industrial workers organised their days around tasks — you worked until the harvest was done, until the batch was finished, until the cow was milked. Factory work demanded time sliced into uniform units. Time that could be bought and sold, monitored by bells and clocks rather than by the rhythm of the work.
Schools adapted. Thompson cited Powell, who in 1772 saw education as instilling "the habit of industry" — children should become "habituated, not to say naturalized to Labour and Fatigue" by age six or seven. In Britain, there were those who objected that working-class children were being "educated above their station." The Revised Code of 1862 actually compelled teachers to narrow the curriculum back to basic skills. The purpose was not enlightenment but preparation: punctuality, obedience, tolerance for monotony, acceptance of hierarchy.
Samuel Bowles and Herbert Gintis, analysing America's Common School movement, found the same pattern. The structure of schools deliberately mirrors workplace hierarchy: teachers as supervisors, students as workers, external schedules and extrinsic rewards. The Common School movement in America was driven not by households wanting education for their children, but by industrial capitalists and business elites who needed disciplined workers.
One question is what new disciplines the AI revolution will require. The factory needed workers who could tolerate repetition, accept supervision, and show up on time. AI-augmented work does not need that. It benefits from different capacities: judgement about when to trust machine output, skill in asking the right questions, the ability to synthesise and verify. These are not the habits that bells and timetables instil. Schools will adapt in some fashion, as they have adapted in the past. The forms of the AI-inspired adaptation — who shapes it, whose interests it serves, what capacities it develops and which it quietly suppresses — remain to be seen.
Who Will Get the AI Power?
The Dictator's Handbook on its own does not predict what kind of AI future lies ahead. The simple model, and the precedents, suggest we should consider ownership of AI power through the lens of steep pyramids or shallow ones.
- In the steep pyramids, AI is owned by the few. They can set its parameters, choose what thoughts are thinkable by AI and what unthinkable. This serves small coalitions well - it concentrates power benefits, keeps knowledge fragmented and access controlled.
- In the shallow pyramids, AI is a public good. Anyone can tune its parameters. AI tutoring raises the floor of human capability by making genuine education cheap and accessible. This serves large coalitions - widespread capability makes authoritarian control more difficult.
Technologies shape institutions first. Institutions shape what comes next. The industrial revolution shaped schools, which shaped what humans became capable of thinking. AI will shape new institutions, which will shape what future humans become and think.
We do not have control over AI progress. It is as inevitable as the steam engine. New Luddites will not stop it in its tracks. Attempts to stop it will drive it underground, so that only the most power-hungry have access to it. Attempts to stop or slow AI hand more power to the few.
It is not the speed of AI progress that we need to modify, but the direction. Specifically, the question of who gains most from AI must be examined with great care. What matters is who uses AI and how, whether it benefits all or just a few - how steep the pyramid is. We do have control over what AI is used for. We can leave AI to those in power and let that power be held by a narrow coalition, or we can learn to use it ourselves, to create things of beauty and of lasting societal value.
AI is Accessible
AI is conversational. How to use it well is not out of reach; it can be learned.
We have a choice about how to use AI. We can choose not to bother gaining deep skills in working with it, and leave AI in the hands of those in power, or we can learn to use it powerfully ourselves — to create things of beauty, to build and share knowledge, to add to what we hold in common.
Let's use AI to add to what we share.