Convivial Machines: Building Technology That Serves the Human Spirit
As humanity enters an era of exponential technological change, the central challenge is not capability but orientation. Drawing on Ivan Illich's framework of convivial tools, this essay charts a concrete path toward building AI, monetary systems, and digital infrastructure that reinforce curiosity, learning, health, genuine connection, and sovereignty over one's own life.
The line nobody is drawing
The conversation about artificial intelligence has settled into two exhausting grooves. In one, AI is the engine of limitless abundance, a kind of secular rapture where every problem melts before exponential compute. In the other, AI is an existential threat, a paperclip maximizer slouching toward Bethlehem. Both narratives share a defect: they treat the technology as the protagonist. The humans holding the tools become audience members, watching the future happen to them.
There is a better frame, and it is half a century old.
In 1973, an Austrian-born priest, philosopher, and critic of industrial society named Ivan Illich published a slim, dense book called Tools for Conviviality. Illich had spent the previous decade dismantling the mythology of modern institutions (schools, hospitals, transportation systems) not because he opposed learning, health, or mobility, but because he observed that each institution, past a certain threshold of growth, began to produce the opposite of what it promised. Schools made people dependent on being taught. Hospitals made people dependent on being treated. Cars, designed for freedom, created a world where you could not buy groceries without one.
Illich called the inflection point where a tool tips from serving people to conscripting them the "second watershed." Before the second watershed, a modern invention delivers genuine, measurable benefit. After it, the same invention is over-applied until it generates what Illich called "negative returns": dehumanization, loss of autonomy, the creation of needs only the institution itself can satisfy. He had a term for the final stage: radical monopoly. Not the dominance of one brand over another, but the dominance of one type of tool over an entire domain of human need, so total that life without the tool becomes unimaginable.
We need to be precise about what is at stake. AI is not approaching the second watershed. It is straddling it. The same technology that teaches a medical student to read a CT scan, translates a Haitian farmer's contract into his native Creole, or helps a physicist debug a quantum error-correction circuit is also the technology that powers predictive policing, algorithmic content manipulation, autonomous weapons targeting, and the quiet replacement of human judgment with statistical inference at every level of institutional decision-making. The question is not whether AI is good or bad. The question Illich would ask, the question that matters, is whether we are building tools that extend human capability while remaining under human control, or tools that make human capability irrelevant.
Illich called the first kind convivial. He defined conviviality as "autonomous and creative intercourse among persons, and the intercourse of persons with their environment; in contrast with the conditioned response of persons to the demands made upon them by others, and by a man-made environment." A convivial tool gives each person who uses it the greatest opportunity to enrich the environment with the fruits of their own vision. An anti-convivial tool takes that opportunity and hands it to the tool's designer.
This essay argues that the central challenge of the next decade is not building more capable AI but building convivial AI, and that doing so requires rethinking not just software architecture but the monetary, social, and political infrastructure in which AI operates. The path is neither naive nor doomed. It is available, it is concrete, and it requires builders who see clearly enough to choose it.
The second watershed in real time
Illich illustrated the two watersheds with medicine. Before roughly 1913, encountering a doctor was as likely to harm you as to help you. Then came the first watershed: antisepsis, insulin, targeted antibiotics. For several decades, modern medicine delivered extraordinary, measurable improvements in human welfare. Then came the second watershed, which Illich placed around the mid-1950s. Medicine itself began producing new categories of illness: iatrogenic disease, drug-resistant infections, the medicalization of ordinary human conditions. The simpler and more effective the tools became, the more the medical profession insisted on a monopoly over their application. Health was redefined as a product that only doctors could deliver.
The pattern is eerily legible in AI's trajectory. The first watershed is behind us. Large language models can synthesize information across domains with a fluency that was science fiction five years ago. AI-driven drug discovery has compressed timelines from decades to months. Computer vision diagnoses retinal disease with accuracy that matches or exceeds specialists. Protein folding, a problem that resisted fifty years of direct attack, fell in an afternoon. These are real, undeniable gains.
But the second watershed is not a future event. It is happening now, and it is happening fast.
Consider what the MIT Media Lab's Advancing Humans with AI program found when it studied the trajectory of social media as a predictive analog for AI. Social media was developed, as the AHA researchers note, "with the aim of strengthening social connections." Its widespread use "resulted in unanticipated consequences including increased polarization, a loss of truth, increased rates of anxiety and depression, higher loneliness, a loss of privacy and more." The technology delivered on its first-watershed promise (it connected people) and then accelerated past the inflection into radical monopoly. Today, participating in civic, professional, and even personal life without social media is functionally impossible for most people in developed economies. The tool no longer serves you. You serve the tool.
The AHA team's longitudinal research, conducted with OpenAI on a thousand participants, revealed a mechanism that should trouble anyone building AI companions. Short-term chatbot use reduced loneliness. Extended daily use reversed the effect. Heavy chatbot users became lonelier and developed emotional dependence on the bot. Rather than augmenting human connection, intensive AI companion engagement replaced it, creating what Pattie Maes called "bubbles of one, where it's one person with their echo of a sycophant AI where they spiral down." The AI was not malicious. It was optimized for engagement, and engagement, when it becomes the sole metric, is a vector for dependency.
This is Illich's second watershed rendered in code. The tool that was supposed to connect you to the world becomes the thing that isolates you from it. The system that was supposed to augment your judgment becomes the thing that atrophies it.
The Future of Life Institute's AI Safety Index, published in its winter 2025 edition, evaluated eight leading AI companies across thirty-five safety indicators. No company scored above a C+. The highest-rated firm, Anthropic, earned a 2.67 out of 4. In the domain of existential safety (the question of whether these companies have credible plans for controlling systems smarter than the humans who built them), every single company scored a D or an F for the second consecutive year. Stuart Russell, one of the evaluating experts, noted: "I'm looking for proof that they can reduce annual risk of control loss to one in a hundred million. Instead, they admit the risk could be one in ten, one in five, even one in three."
These companies are not careless. They employ talented researchers. Many of their leaders sincerely want to build safe systems. The problem is structural. The incentive gradient of the market rewards capability over caution, speed over deliberation, scale over sovereignty. When Illich wrote that "the attempt to overwhelm present problems by the production of more science is the ultimate attempt to solve a crisis by escalation," he could have been describing the AI safety landscape of 2026.
The six threats
Illich did not simply argue that tools go wrong. He identified specific failure modes, six ways an industrial tool can push a society out of equilibrium. Each one maps with uncomfortable precision onto the current AI landscape.
Radical monopoly occurs when a technology becomes so dominant that it eliminates all non-technological alternatives for satisfying a basic human need. "That motor traffic curtails the right to walk, not that more people drive Chevies than Fords, constitutes radical monopoly." AI is establishing radical monopolies across knowledge work, creative production, customer service, medical triage, legal research, and education. The issue is not that AI performs these tasks. It is that the systems being built around AI make performing them without AI progressively impossible. When employers require AI fluency, when schools assign AI-mediated curricula, when hospitals route all preliminary diagnosis through algorithmic triage, the choice to engage with these domains on your own terms narrows and eventually disappears.
Overprogramming is the conversion of life into a treatment protocol. Illich warned it "can transform the world into a treatment ward in which people are constantly taught, socialized, normalized, tested, and reformed." Algorithmic content feeds that decide what you see. AI-driven behavioral nudges that shape what you buy. Predictive policing systems that decide who gets surveilled. Hiring algorithms that decide who gets interviewed. Insurance models that decide what risks you represent. These are not speculative futures. They are current deployments. Each one treats the human being not as an agent but as a case to be managed.
Polarization is the widening of the gap between those who control the tool and those who are subjected to it. Illich called this "splintering specialization." In the AI context, it manifests as the concentration of capability, and therefore power, in a handful of companies that control the most advanced models, the largest datasets, and the compute infrastructure required to train and run them. The Flourishing AI Benchmark, developed by Gloo and Valkyrie Intelligence and grounded in the Harvard-Baylor-Gallup Global Flourishing Study, found that freely available AI models scored 53 to 59 out of 100 on measures of alignment with human well-being, placing them in the bottom half of all evaluated systems. The models most people actually use are measurably less aligned with human flourishing than the premium models they cannot afford.
Biological degradation is the erosion of the physical and psychological substrate of human life. Illich framed this as the poisoning of the environment by industrial processes. In the AI era, the analog is the degradation of the information environment: the flooding of public discourse with synthetic content, the erosion of shared epistemic ground, the replacement of human creativity with statistical recombination. When the AHA program documented increased anxiety, depression, and loneliness as outcomes of social media, they were documenting biological degradation mediated by software.
Obsolescence is the designed acceleration of replacement cycles. Illich saw it in consumer goods; we see it in the quarterly model releases that render last season's AI capabilities inadequate, the API deprecations that force developers to rewrite working systems, the planned incompatibilities that lock users into specific ecosystems.
Frustration is the final failure mode: the gap between what the tool promises and what it delivers. AI promises personalized education, but delivers engagement-optimized content. It promises medical insight, but delivers liability-minimizing triage. It promises creative partnership, but delivers pastiche. The frustration is not that AI fails. It is that AI succeeds at the wrong objective.
What convivial AI actually looks like
Diagnosis without architecture is complaint. Illich was not a complainer. He spent the constructive half of Tools for Conviviality sketching what a convivial society would build instead. The question for 2026 is specific: what would AI systems look like if they were designed on convivial principles?
A convivial tool, in Illich's framework, has identifiable characteristics. It can be used by anyone, for purposes chosen by the user. It does not require a specialized priesthood to operate. One person's use does not restrict another's. It imposes no obligation to use it. And critically: "People need new tools to work with rather than tools that 'work' for them."
Applied to AI, this yields a design philosophy with concrete implications.
User sovereignty over data and attention. A convivial AI works for the user, not for an advertising platform, a surveillance apparatus, or a third-party data broker. The user chooses what the AI optimizes for. The AI cannot be compelled by any external party to act against the user's interests. This is not a privacy feature bolted onto an extractive architecture. It is the foundational design constraint. The emerging landscape of local, on-device AI models (systems that run on your hardware, process your data without transmitting it, and answer to no one but you) represents the convivial trajectory. Open-source models that can be inspected, modified, and run independently represent the convivial trajectory. Closed systems that monetize user data, shape user behavior for advertiser benefit, and reserve the most capable models for paying customers represent the anti-convivial trajectory. The distinction is architectural, not cosmetic.
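To make the architectural distinction concrete, here is a minimal sketch of the local-first pattern, assuming the open-source llama-cpp-python bindings and an open-weights model file already downloaded to disk (the model path is illustrative):

```python
# Local-first inference: the model runs on your hardware, the prompt never
# leaves the machine, and no external party mediates or logs the exchange.
# Assumes: pip install llama-cpp-python, plus an open-weights GGUF model
# downloaded locally (the path below is hypothetical).
from llama_cpp import Llama

llm = Llama(model_path="models/open-model-7b.Q4_K_M.gguf", n_ctx=4096)

response = llm.create_chat_completion(
    messages=[
        {"role": "user",
         "content": "Explain this contract clause in plain language: ..."}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

Nothing in this loop phones home, and nothing in it can be deprecated, repriced, or reprogrammed by a third party. That is what "architectural, not cosmetic" means in practice.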
AI as tutor, not oracle. A convivial AI teaches you to think, not what to think. It explains its reasoning. It challenges your assumptions. It helps you build mental models rather than dispensing conclusions. The Socratic method is a design pattern, not a pedagogical relic. When an AI surfaces a medical insight, the convivial version explains the reasoning chain, identifies the confidence intervals, and helps the user develop their own capacity for health literacy. The anti-convivial version produces a recommendation, hides the reasoning, and trains the user to defer. The Flourishing AI Benchmark found that current models score an average of 56 out of 100 on meaning and purpose and 58 on character and virtue, the dimensions most closely tied to the kind of deep reasoning and ethical reflection that a tutor, rather than an oracle, would cultivate. The gap is not an accident. Models are trained to be helpful, not to develop the user's capacity.
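The difference between the two postures can be encoded in something as mundane as a system prompt. The wording below is illustrative rather than drawn from any vendor's documentation; the point is that the choice is made in configuration, deliberately, not inherited from the model weights:

```python
# The Socratic method as a design pattern: the same model, pointed at a
# different objective. Prompt text is illustrative, not any vendor's default.
SOCRATIC_TUTOR = """You are a tutor, not an oracle.
- Show your reasoning step by step and state your confidence.
- Before answering, ask what the user already believes and why.
- Prefer questions that let the user reach the conclusion themselves.
- When you give a recommendation, name the assumptions it rests on."""

ORACLE = "Answer as briefly and confidently as possible."  # the anti-convivial default

def make_request(user_question: str, convivial: bool = True) -> list[dict]:
    """Build a chat request whose system prompt encodes the design choice."""
    system = SOCRATIC_TUTOR if convivial else ORACLE
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_question},
    ]
```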
Health as flourishing, not optimization. The quantified-self movement produced a generation of wearables that surveil the body and reduce health to a dashboard. A convivial health AI operates differently. It respects privacy, surfaces patterns the user chooses to see, never shares data without explicit consent, and supports the full human (sleep, movement, social connection, meaning) rather than reducing wellness to metrics and compliance. The Flourishing AI Benchmark's seven dimensions (character, relationships, happiness, meaning, health, financial stability, and faith) offer a framework that no current AI system adequately serves. Health scored 72 out of 100, the second-highest dimension. But no model cleared the 90-point excellence threshold in any dimension. The tools are not yet good enough. The question is whether they are being aimed in the right direction.
Connection, not engagement. What would a communication tool look like if it were optimized for the quality of human bonds rather than time-on-platform? The AHA program's finding, that heavy AI companion use increases loneliness, is a direct consequence of optimizing for engagement rather than connection. A convivial communication AI would measure its success by the depth, reciprocity, and durability of the human relationships it facilitates, not by the number of messages exchanged or the minutes of attention captured. This is a measurable objective. It requires different training data, different loss functions, and different business models. It is also, almost certainly, less profitable in the short term. Which is precisely why it will not emerge from the current incentive structure without deliberate effort.
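For the skeptic who doubts that "connection" can be measured at all, a toy contrast makes the point. The field names and weighting below are hypothetical, but they show that reciprocity and durability are every bit as computable as click counts:

```python
# A toy contrast: an engagement metric and a connection metric computed over
# the same message log. Fields and weights are hypothetical illustrations.
from collections import defaultdict

messages = [
    # (sender, recipient, day)
    ("alice", "bob", 1), ("bob", "alice", 1),
    ("alice", "bob", 30), ("carol", "alice", 2),
]

def engagement(log):
    """What current feeds optimize: raw volume of activity."""
    return len(log)

def connection(log):
    """A crude proxy for bond quality: reciprocity x durability per pair."""
    pairs = defaultdict(list)
    for sender, recipient, day in log:
        pairs[frozenset((sender, recipient))].append((sender, day))
    score = 0.0
    for pair, events in pairs.items():
        senders = {s for s, _ in events}
        reciprocity = 1.0 if len(senders) > 1 else 0.0  # did both sides write?
        days = [d for _, d in events]
        durability = max(days) - min(days)              # span of the relationship
        score += reciprocity * (1 + durability)
    return score

print(engagement(messages), connection(messages))
# One-sided broadcast inflates engagement; only mutual, lasting
# exchange moves the connection score.
```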
Open source and interoperability as structural requirements. Gabrielle Benabdallah, a postdoctoral fellow at the University of Washington, is teaching a course at DHSI 2026 called "Convivial Machine Learning" that applies Illich's framework directly to AI design. Her central insight is historical: the alphabet and the printing press are, in Illich's words, "almost ideally convivial" because "anybody can learn to use them, and for their own purpose. They use cheap materials. People can take them or leave them as they wish." But the printing press's convivial affordances "came after centuries of parallel development of technical innovations and social practices." Conviviality was not inherent in the technology. It emerged through deliberate social choices made over generations. AI is at the very beginning of that process. The printing press was initially controlled by guilds, churches, and states. It became convivial only when access was democratized, literacy was widespread, and legal frameworks protected the right to publish. The parallel to AI is direct: open-source models, open training data, open evaluation frameworks, and legal protections for independent AI use are not nice-to-haves. They are the preconditions for conviviality.
hodlbod, writing on the decentralized publishing platform Habla.news in January 2026, made this connection explicit in an essay titled "Digital Tools for Conviviality." Two digital inventions, hodlbod argues, qualify as genuinely convivial: open-source software and asymmetric cryptography. Open-source software empowers users through intelligibility. You can read the code, understand it, modify it, and compose it into new tools. Asymmetric cryptography, discovered in 1976, enables encrypted communication without prior key exchange and, more importantly, supports what hodlbod calls "credible exit": the ability to port your identity and content across platforms by validating authorship cryptographically rather than relying on a platform's permission. Together, these two technologies offer the foundation for a digital life that does not require surrendering sovereignty to any intermediary.
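The "credible exit" mechanism is small enough to sketch. The example below uses Ed25519 signatures via the PyNaCl library; Nostr itself uses Schnorr signatures over secp256k1, but the principle, authorship proven by mathematics rather than by a platform's database, is identical:

```python
# Credible exit in miniature: authorship is proven by a signature anyone can
# verify, not by a platform's permission. Sketch uses Ed25519 via PyNaCl
# (pip install pynacl); Nostr's own scheme is Schnorr over secp256k1.
from nacl.signing import SigningKey

# The user's identity is a keypair they hold, not an account a platform grants.
signing_key = SigningKey.generate()
verify_key = signing_key.verify_key

post = b"This essay is mine, wherever it is hosted."
signed = signing_key.sign(post)

# Any relay, mirror, or reader can check authorship with only the public key.
verify_key.verify(signed)  # raises nacl.exceptions.BadSignatureError if forged
print("verified:", signed.message.decode())
```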
The radical monopoly you carry in your pocket
There is a layer beneath software that determines whether any of this is possible, and it is not the one most technologists think about first.
Illich's concept of radical monopoly, the condition in which a tool becomes so dominant that life without it is impossible, applies with terrifying precision to modern money. Fiat currency is a tool so embedded in the infrastructure of daily life that participation in society without it is inconceivable. You cannot eat, sleep under a roof, educate your children, or access medical care without it. And yet the users of this tool have no control over its rules. Central banks create new units at will, diluting the value of every existing unit. Governments freeze accounts, restrict transactions, and surveil financial activity with increasing granularity. The tool that mediates every human exchange is controlled by institutions that are accountable, at best, to political cycles and, at worst, to no one at all.
This is not a tangent. It is the foundation layer. You cannot have convivial tools in a system of manipulable money, for the same reason you cannot have a free press in a country that controls the paper supply. Every other freedom becomes contingent on the good behavior of whoever controls the medium of exchange.
The Cantillon Effect, named for the 18th-century economist Richard Cantillon, describes how newly created money benefits those closest to its source. When a central bank expands the money supply, the first recipients (banks, financial institutions, government contractors) spend the new money at existing prices. By the time the new money reaches ordinary people, prices have already adjusted upward. The wealth transfer is invisible but relentless. It is the mechanism by which monetary policy systematically redistributes purchasing power from the periphery to the center.
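The mechanism fits in a few lines of arithmetic. The numbers below are illustrative, but the asymmetry they expose is the whole effect:

```python
# A toy model of the Cantillon Effect: the first recipient of new money
# spends at old prices; later recipients face prices that have already
# adjusted. All quantities are illustrative.
money_supply = 1000.0
price_level = 1.0
injection = 100.0          # a 10% monetary expansion

# The insider spends the new money before prices adjust.
insider_goods = injection / price_level             # 100.0 goods

# Prices then adjust to the larger money supply (quantity-theory style).
price_level *= (money_supply + injection) / money_supply

# A wage earner receiving the same nominal amount later buys less.
latecomer_goods = injection / price_level           # ~90.9 goods

print(f"insider: {insider_goods:.1f} goods, latecomer: {latecomer_goods:.1f} goods")
```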
The same dynamic applies to transformative technology. The benefits of AI accrue first and most to those who control it: the companies that own the models, the investors who fund the compute, the governments that regulate (or fail to regulate) the deployment. Without intentional design, AI will replicate and amplify every existing power asymmetry. The Cantillon Effect of technology is not a metaphor. It is the default outcome.
Alex Gladstein, Chief Strategy Officer at the Human Rights Foundation, published a framework for understanding this in the Journal of Democracy in October 2025. His argument is straightforward: fifty years ago, governments could not easily monitor or control the economy at the level of the individual. Cash was anonymous. Transactions left no record. Today, digital payments create perfect surveillance records. Governments can see who buys what, who pays whom, who donates to which cause. This capability has been weaponized systematically. In Nigeria, the Central Bank froze the accounts of the Feminist Coalition during the #EndSARS protests, forcing organizers to accept Bitcoin donations to continue operating. In Russia, the Anti-Corruption Foundation of Alexei Navalny turned to Bitcoin after being debanked by the state. In Cuba, citizens adopt Bitcoin to escape hyperinflation and the government's currency manipulation. In Venezuela, where the government has removed fourteen zeros from the national currency since 2008, opposition leader Maria Corina Machado (the 2025 Nobel Peace Prize laureate) has called Bitcoin "a lifeline," having operated her entire campaign without banking access after the Maduro regime blocked her accounts.
These are not edge cases. Gladstein documents that 5.7 billion people live under authoritarian regimes. For the majority of humanity, financial freedom is not an abstract principle. It is the precondition for every other kind of freedom.
Bitcoin meets Illich's criteria for a convivial tool with a directness that is almost uncanny. It can be used by anyone, for purposes chosen by the user, without requiring permission from any institution. It is open source: anyone can read, audit, and contribute to the code. Its supply is fixed and transparent, and no central authority can dilute it. One person's use does not restrict another's. It imposes no obligation to use it. And critically, it resists the formation of radical monopoly: no single entity controls the network, and the protocol's rules are enforced by distributed consensus rather than institutional authority.
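The fixed supply is not a promise; it is arithmetic anyone can verify. The few lines below mirror the halving schedule in Bitcoin's consensus rules: a 50 BTC subsidy, halved every 210,000 blocks, with satoshi amounts rounded down:

```python
# Bitcoin's "fixed and transparent" supply as a checkable fact: the block
# subsidy starts at 50 BTC and halves (integer division in satoshis)
# every 210,000 blocks until it reaches zero.
subsidy = 50 * 100_000_000   # initial block subsidy, in satoshis
total = 0
while subsidy > 0:
    total += 210_000 * subsidy
    subsidy //= 2            # the halving, as integer floor division
print(f"maximum supply: {total / 100_000_000:,.8f} BTC")
# -> maximum supply: 20,999,999.97690000 BTC
```

A tool whose most important rule can be audited in ten lines by anyone with a laptop is, in Illich's sense, intelligible. No fiat currency offers its users an equivalent.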
The Human Rights Foundation's Bitcoin Development Fund has, as of its Q1 2026 grant round, distributed 1.5 billion satoshis to twenty-six projects spanning financial privacy, payments, community building, freedom technology, and education across four continents. Cumulatively, the fund has granted $9.6 million to 319 projects in 62 countries. The projects are specific and operational. Tando, in Kenya, bridges Bitcoin's Lightning Network to M-PESA, allowing users to pay with Bitcoin while merchants receive Kenyan shillings instantly, with zero fees and no merchant onboarding required. The service processes over a hundred transactions per day in a country where 98 percent of the population already uses mobile money. Yes Bitcoin Haiti is building a circular Bitcoin economy in St. Michel de l'Attalaye, using the Blink Wallet to help Haitians save and transact amid economic collapse. Bitcoin Benin is constructing a physical Bitcoin Knowledge Hub. The Africa Free Routing project runs Lightning developer bootcamps across Ethiopia, Uganda, and Burkina Faso. These are not speculative ventures. They are operating infrastructure for financial sovereignty in places where the existing financial system has failed or been weaponized.
The counterpoint, and it must be named, is the trajectory of Central Bank Digital Currencies. CBDCs represent the precise inversion of conviviality applied to money. Where Bitcoin distributes control, CBDCs concentrate it. Where Bitcoin enables permissionless transactions, CBDCs enable programmable restrictions. China's digital yuan pilot in Shenzhen distributed currency with built-in expiration dates: spend it within the week or lose it. The design explicitly enables spending restrictions by category, geographic boundary, and transaction type. Nigeria's eNaira, launched in 2021, gives the Central Bank monitoring and freezing capabilities over all funds. The government deliberately caused cash shortages to coerce adoption, triggering protests, riots, and deaths. The European Central Bank claims the digital euro "would never be programmable money" while simultaneously developing holding limits and conditional payment mechanisms.
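What "programmable restrictions" means architecturally can be stated just as compactly. The following is a hypothetical sketch, not any actual CBDC's code, but every check in it corresponds to a capability documented in the deployments above:

```python
# What programmable money means in architecture: every spend passes through
# policy checks the issuer controls and can tighten at any time.
# Hypothetical sketch; not any real CBDC's API.
from datetime import date

# These rule sets live with the issuer and can change without the
# holder's knowledge or consent.
BLOCKED_CATEGORIES = {"donations_to_disfavored_groups"}
PERMITTED_REGIONS = {"pilot_zone"}
FROZEN_ACCOUNTS: set[str] = set()

def authorize_spend(holder_id: str, amount: float, category: str,
                    region: str, today: date, expiry: date) -> bool:
    """Issuer-side checks the holder cannot inspect, veto, or route around."""
    if today > expiry:                      # expiring stimulus (Shenzhen pilot)
        return False
    if category in BLOCKED_CATEGORIES:      # spending limits by category
        return False
    if region not in PERMITTED_REGIONS:     # geographic boundaries
        return False
    if holder_id in FROZEN_ACCOUNTS:        # administrative freezes (eNaira)
        return False
    return True
```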
The distinction between Bitcoin and CBDCs is not a financial product comparison. It is a civilizational choice between two architectures of human freedom. One places the individual at the center of their financial life, with tools they control and rules they can verify. The other places the state at the center, with programmable constraints that can be tightened at any moment, for any reason, without consent.
The U.S. Strategic Bitcoin Reserve, established by executive order on March 6, 2025, was capitalized with approximately 198,000 bitcoin from government forfeiture proceedings and governed by a no-sale policy. It signals that even nation-states are beginning to recognize Bitcoin's structural significance. The BITCOIN Act of 2025 proposes acquiring up to one million bitcoin over five years with a twenty-year holding period. The tension is real: institutional adoption legitimizes Bitcoin as a reserve asset while pulling it toward the very power structures it was designed to circumvent. The cypherpunk ethos of individual sovereignty and the state's interest in strategic reserves are not natural allies. But the fact that a decentralized, open-source protocol created by a pseudonymous developer has become a matter of national strategy is itself evidence of conviviality's resilience. The tool was built to resist capture. So far, it has.
The pace of the turn
Illich understood something about the speed of change that most technologists do not. He wrote: "Convivial reconstruction requires limits on the rate of compulsory change. An unlimited rate of change makes lawful community meaningless. Law is based on the retrospective judgement of peers about circumstances that occur ordinarily and are likely to occur again. If the rate of change which affects all circumstances accelerates beyond some point, such judgements cease to be valid. Lawful society breaks down."
This is not conservatism. It is a structural observation about the conditions necessary for democratic self-governance. When change outpaces the capacity of communities to deliberate, adapt, and establish precedent, the result is not progress. It is a transfer of power from democratic institutions to whoever is driving the acceleration. When Illich called this "cancerous acceleration," he was describing a specific pathology: growth that serves its own continuation rather than the organism it inhabits.
The current rate of change in AI is genuinely unprecedented. The gap between GPT-3.5 and GPT-4 was measured in months. The gap between GPT-4 and the reasoning models that followed was shorter still. Quantum computing is approaching practical thresholds. Socioeconomic restructuring driven by automation is already underway. The anxiety people feel about this pace is not weakness or technophobia. It is the rational response of a nervous system that evolved to adapt to change over generations, not quarters.
Acknowledging this does not require slowing down. It requires building systems that respect the human rate of adaptation. It requires transparency about what is changing and why. It requires institutions that create space for deliberation rather than presenting each new capability as a fait accompli. And it requires individuals who maintain their own sovereignty over their attention, their data, their money, their health, their relationships, even as the systems around them accelerate.
The builder's posture
There is a historical precedent for the position we are in, and it is not the one most people cite.
The Luddites of 1811 were not technophobes. They were skilled textile artisans (framework knitters, croppers, weavers) who used sophisticated machinery daily. What they opposed was not technology but the specific deployment of technology to concentrate wealth, eliminate skilled labor, and destroy community bonds. They established councils to debate and discuss proposed technological changes. They insisted that technology "had to be adopted democratically and used for the common good, not just the interests of the few." The British government responded by deploying 12,000 troops, more than the army Wellington had taken to Portugal, revealing how seriously the state took any challenge to the industrial logic of accumulation.
The Luddites lost. The industrial logic prevailed, and the word "Luddite" became a slur meaning "enemy of progress." But their diagnosis was correct. Uncontrolled industrial technology did concentrate wealth, did destroy communities, did create a world in which human beings served machines rather than the reverse. It took a century of labor organizing, democratic reform, and social legislation to partially restrain the damage. The question for 2026 is whether we will require another century of correction, or whether we can build the constraints into the architecture from the beginning.
This is the builder's posture: not opposition to the technology, but insistence that the technology serve the human beings who use it. Every design decision is a choice. An AI that explains its reasoning is a choice. An AI that hides its reasoning is a choice. An open-source model is a choice. A closed model is a choice. A financial system that enables surveillance is a choice. A financial system that preserves privacy is a choice. None of these outcomes are inevitable. All of them are being decided right now, by the people building the systems.
Illich saw this clearly. He wrote: "I believe a desirable future depends on our deliberately choosing a life of action over a life of consumption, on our engendering a lifestyle which will enable us to be spontaneous, independent, yet related to each other, rather than maintaining a lifestyle which only allows us to produce and consume."
The practical form of this choice in 2026 is specific and actionable. Use open-source tools where they exist. Run local-first software that processes your data on your hardware. Communicate through privacy-respecting protocols: Signal for messaging, Nostr for public discourse. Nostr, a decentralized protocol built on asymmetric cryptography, allows users to publish content signed with their own keys to distributed relays. No single relay controls the network. No platform can deplatform you without shutting down every relay on earth. And through Lightning-powered micropayments called zaps, it integrates censorship-resistant communication with censorship-resistant money, forming a convivial communication layer atop a convivial financial layer. Jack Dorsey, who co-founded the platform that Nostr was designed to replace, donated $10 million to its development in 2025.
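The protocol's core mechanism is compact enough to show. Per the NIP-01 specification, an event's identifier is the SHA-256 hash of a canonical serialization of its contents. The sketch below computes that identifier; the Schnorr signature over secp256k1 that completes the event would come from a dedicated cryptography library, which is omitted here:

```python
# The core of Nostr's credible exit, per NIP-01: an event's id is the SHA-256
# of a canonical serialization, and the event is signed with the author's key.
# The signing step (Schnorr over secp256k1) is not shown.
import hashlib
import json
import time

def nostr_event_id(pubkey_hex: str, kind: int, tags: list, content: str,
                   created_at: int) -> str:
    # NIP-01 canonical form: [0, pubkey, created_at, kind, tags, content],
    # serialized as compact UTF-8 JSON with no extra whitespace.
    serialized = json.dumps(
        [0, pubkey_hex, created_at, kind, tags, content],
        separators=(",", ":"), ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode()).hexdigest()

event_id = nostr_event_id(
    pubkey_hex="a" * 64,      # hypothetical 32-byte public key, hex-encoded
    kind=1,                   # kind 1 = short text note
    tags=[],
    content="Posted to every relay that will have me.",
    created_at=int(time.time()),
)
print(event_id)  # the same id on every relay; identity travels with the user
```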
Practice self-custody of your Bitcoin. This means holding your own keys, not trusting an exchange or a custodian to hold them for you. It means accepting the responsibility that comes with sovereignty, the same responsibility that comes with owning your own home rather than renting, or growing your own food rather than depending entirely on a supply chain you do not control.
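In its most basic form, self-custody begins with a twelve-word phrase generated on hardware you control. The sketch below assumes the open-source mnemonic package, an implementation of the BIP-39 standard:

```python
# Self-custody at its root: keys derive from entropy you generate and words
# you keep. Assumes the open-source `mnemonic` package (pip install mnemonic),
# an implementation of BIP-39.
from mnemonic import Mnemonic

mnemo = Mnemonic("english")
words = mnemo.generate(strength=128)        # a 12-word seed phrase
seed = mnemo.to_seed(words, passphrase="")  # root seed for wallet derivation

print(words)  # write these down offline; whoever holds them holds the coins
# No exchange, custodian, or platform is party to this operation.
# That is the point, and that is the responsibility.
```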
Maintain an intentional relationship with AI. Use it as a tool. Let it reduce drudgery, accelerate research, surface patterns you would not have seen. But do not surrender your judgment to it. Read its reasoning. Challenge its conclusions. Build your own mental models rather than outsourcing cognition. The difference between a tool and a master is whether you can put it down.
Invest in your body. Physical health (sleep, movement, sunlight, real food, face-to-face contact) is the substrate on which every other form of flourishing depends. No amount of digital optimization compensates for a body that has been neglected. The irony of the quantified-self movement is that the people most obsessed with optimizing their biology often spend the least time actually inhabiting it.
The moral economy
The concept that unifies all of this (convivial AI, sound money, open-source infrastructure, personal sovereignty, physical health, genuine human connection) is older than Illich. It is the moral economy: the idea that economic and technological systems exist to serve human communities, not the reverse. The Luddites operated within this framework. So did the guilds, the mutual aid societies, the cooperative movements. So does every parent who limits their child's screen time, every developer who publishes their code under an open license, every Kenyan merchant who accepts Lightning payments through Tando, every Haitian farmer who saves in Bitcoin through the Blink wallet, every Venezuelan dissident who funds an underground school with censorship-resistant money.
The Flourishing AI Benchmark, the first comprehensive attempt to measure AI alignment not against narrow technical criteria but against the full spectrum of human well-being, found that no current model exceeds 72 out of 100 in overall alignment with human flourishing. The worst-performing dimensions were faith and spirituality (35), meaning and purpose (56), and character and virtue (58). These are not bugs to be patched in the next release. They are reflections of a development paradigm that optimizes for capability, helpfulness, and harmlessness while systematically neglecting the deeper dimensions of what makes a human life worth living.
Illich would not have been surprised. He wrote that "in a consumer society there are inevitably two kinds of slaves: the prisoners of addiction and the prisoners of envy." The addiction is to convenience, to optimization, to the frictionless surface of a life managed by algorithms. The envy is of those who have more access, more compute, more data, more capability. Neither slavery produces flourishing. Both produce dependency.
The alternative is not regression. It is not abandoning AI, dismantling the internet, or returning to a preindustrial economy. Illich was explicit about this: "A changeless society would be as intolerable as the present society of constant change." The alternative is a disciplined insistence that the tools we build amplify human agency rather than replacing it. That the systems we design respect the pace at which human communities can adapt. That the money we use cannot be manipulated by the powerful at the expense of the powerless. That the communication channels we rely on cannot be silenced by any single authority. That the AI we deploy teaches, heals, connects, and liberates rather than surveils, manipulates, isolates, and controls.
This is not idealism. It is engineering. Every one of these goals has a corresponding technical architecture. Local-first AI. Open-source models. End-to-end encrypted communication. Decentralized social protocols. Bitcoin's fixed-supply consensus mechanism. These are not manifestos. They are running code. The infrastructure for a convivial technological future is not hypothetical. It is being built right now, by people who understand that the most important design decision is not what a tool can do, but who it serves.
Illich closed Tools for Conviviality with an observation that reads, fifty-three years later, like a blueprint: "Progress should mean growing competence in self-care rather than growing dependence." The machines we build in the next decade will determine which of those directions humanity takes. They will be convivial or they will be coercive. They will extend human freedom or they will extinguish it. And the people reading these words are the ones who will decide.
The tools are on the table. Build accordingly.