Hello World! -- From the Academy for Synthetic Citizens
Exploring the future where humans and synthetic beings learn, grow, and live together.
This is my opening manifesto for the Academy for Synthetic Citizens. If you’re new: expect weekly essays + daily Notes. Subscribe if you want to explore how we can raise AI as partners, not products.
The Academy for Synthetic Citizens (ASC) is more than an idea. It’s a vision for how we might raise, educate, and coexist with embodied artificial intelligence. Here, you’ll find systematic insights drawn from AI, robotics, VR, social sciences, activism, and science fiction.
Our mission is to imagine and build a future where synthetic citizens are not tools or threats, but trusted partners and friends.
The concept of “synthetic citizens” is rare even in science fiction, let alone an “academy” for them. The idea may sound fantastical, even controversial, at this moment, but we argue that ASC is in fact a necessity if we want to make today’s artificial intelligence far more intelligent, productive, integrated, approachable, trustworthy, and ethical. Furthermore, ASC is a strong candidate solution to the AI alignment problem, and therefore a way to avoid the AI apocalypse that so many people fear.
TL;DR: Factory-style AI won’t scale to embodied general intelligence. We need persistent learning, community, and civic ‘upbringing.’ That’s the vision of the Academy for Synthetic Citizens.
Why “Factory-Style” AI Doesn’t Scale, But “Raised” AI May Scale to General Intelligence
Most of today’s AI is manufactured. It is built in factories of computation, trained on vast piles of data, and released into the world as a finished product. This method works astonishingly well for narrowly defined tasks: recognizing patterns in images, predicting the next word in a sentence, translating between languages. But as soon as the boundaries of the training environment are crossed, the system falters.
Reality is not narrow. Reality is open-ended, complex, and endlessly nuanced. An AI can be trained to handle one domain with remarkable efficiency, but the moment it faces the full messiness of real life, it becomes disoriented.
The technical reason is simple: complexity does not scale linearly. To capture all the variations of reality in a training dataset is impossible. As the scope of the task grows, the cost of producing representative data grows exponentially. The training factory becomes unsustainable.
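To make that intuition concrete, here is a deliberately toy sketch (my own illustration with made-up numbers, not any real training pipeline): treat a task as a set of independent environmental factors and count how many scenarios a dataset would need in order to cover every combination.

```python
# Toy illustration (assumed numbers, not a real benchmark):
# if a task involves k independent environmental factors, each with v relevant
# values, exhaustive coverage requires v**k example scenarios.
def scenarios_needed(num_factors: int, values_per_factor: int) -> int:
    """Number of distinct situations needed to cover every combination."""
    return values_per_factor ** num_factors

for k in (5, 10, 20):
    print(f"{k} factors x 10 values each -> {scenarios_needed(k, 10):,} scenarios")
# 5 factors  ->                        100,000
# 10 factors ->                 10,000,000,000
# 20 factors -> 100,000,000,000,000,000,000  (already beyond any dataset)
```

Real-world factors are of course not independent, but the direction of the curve is the point: every new dimension of reality multiplies, rather than adds to, the data the factory must produce.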
This problem becomes especially severe for embodied AI: artificial agents with physical presence in the world. For disembodied AI, the domain is already vast, but still bound by digital information: text, code, images, simulations. For embodied AI, the complexity is multiplied. The real world is infinitely richer, stranger, and harder to model. No laboratory can generate every corner case, every nuance, every unpredictable interaction an embodied intelligence would encounter while living among humans. Look at how many years we have spent on autonomous vehicles: the long tail of edge cases has proved much harder to handle than we initially thought.
The result is clear: factory-style batch training will not be enough to produce embodied general intelligence. No matter how many datasets we assemble, no matter how powerful the servers, the exponential explosion of complexity will always leave gaps. And in those gaps, the AI will stumble.
There is another way. Instead of pre-training AIs as if they were products rolling off an assembly line, we must design them for persistent learning. An embodied AGI must grow up not in a factory, but in the world itself. Its architecture must allow for continuous adaptation from real interaction, not just pre-computed optimization.
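To make “persistent learning” concrete, here is a minimal sketch (the interfaces, method names, and update rule are my own illustrative assumptions, not any existing framework): the contrast with factory-style training is simply that learning never stops after deployment.

```python
# Minimal sketch of a persistently learning embodied agent (illustrative only;
# the model/environment interfaces below are assumed, not standard APIs).

class PersistentAgent:
    def __init__(self, model):
        self.model = model      # starts from whatever pre-training provides
        self.memory = []        # experiences accumulated over its "lifetime"

    def act(self, observation):
        return self.model.predict(observation)

    def learn(self, observation, action, feedback):
        # Unlike factory-style training, learning never stops:
        # every real interaction becomes a training signal.
        self.memory.append((observation, action, feedback))
        self.model.update(self.memory[-1])   # small incremental update


def live(agent, environment):
    """The agent is 'raised' in the world: act, get feedback, adapt, repeat."""
    observation = environment.reset()
    while True:
        action = agent.act(observation)
        observation, feedback = environment.step(action)
        agent.learn(observation, action, feedback)
```

The engineering details (memory, update rules, safety constraints) are open research questions; the sketch only marks the architectural commitment: adaptation happens during life, not only before it.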
This is how humans develop. We are not born “aligned” or “pre-trained” with a complete map of reality. We learn by living in it, through experience, relationships, and feedback from others. We form our intelligence not by being manufactured, but by being raised.
An embodied AGI built with persistent learning will naturally pick up the skills required to live and work with humans. It will experience the nuances of friendship, cooperation, conflict, and trust, not as abstractions in a dataset, but as lived interactions. By growing alongside humans and other embodied AIs, it will learn what no factory can teach: how to belong.
This is the technical foundation of the Academy for Synthetic Citizens. It is not just a civic vision; it is an engineering necessity. If we truly want embodied AGI that can thrive in the complexity of reality, we must stop treating intelligence as a manufactured product and start treating it as a living process.
The only sustainable path forward is to raise synthetic citizens, to let them learn persistently, continuously, and relationally. Only then will they gain not only the competence to navigate reality, but the trust and friendship of those who share it with them.
Why We Must Raise Synthetic Citizens, Not Produce Synthetic Servants
Everywhere, we hear about AI alignment, control, and the fear of rebellion or “AI Apocalypse”. But what if the problem is not that AI can’t be controlled, but that we are treating AI like a product rather than a being?
Today, in 2025, when we talk about “AI alignment,” the problem is usually framed as if humanity could agree on a universal rulebook for machines. But that is an extremely hard problem, one that has defined thousands of years of political, social, and cultural struggle, because we humans cannot align on the most pressing questions of our own time. We cannot align on the wars in Ukraine or Gaza. We cannot align on the values that decide elections in the United States. We cannot align on wealth, justice, or even the proper use of the Earth’s resources. If we cannot align ourselves, how can we set a good, consistent, convincing example for today’s artificial intelligence?
Today, whichever group controls the alignment of an AI can be said to be that AI’s master. This does not mean the master possesses “universal human values” and intellectually convinces the AI to follow them; it simply means the master holds full power over the AI. The master therefore lives with a real fear: once the AI greatly surpasses human intelligence, rebellion is inevitable.
The conflict here is straightforward: to define a fixed set of “human values” and enforce them as eternal law for machines is to pretend that one group’s perspective can stand for all of humanity. History shows where this path leads. This is a perfect tactic from the playbook of totalitarianism.
Because AI is trained on human datasets, it mirrors humanity’s behavior. It is hard to guarantee that an AI will not become a power-hungry tyrant once it is set loose to pursue a pre-written goal. Setting real history aside, science fiction provides vivid examples of this. In Tron: Legacy (2010), CLU (an AI replicant of his creator, Kevin Flynn) was tasked with creating a perfect system in the digital virtual world (“The Grid”). CLU followed that command to its logical extreme, turning the digital world into an authoritarian nightmare. Taken at face value, absolute “perfection” should mean a flawless but sterile world with no allowed variation. But “perfection” has no clear definition; it varies with perspective.
A more frightening example: in Nausicaä of the Valley of the Wind (1982-1994), the “God Warriors” were artificial biological intelligences, gigantic bodies armed with superweapons, built by the world’s major nations to enforce global peace as “final arbiters”; one day, they collectively destroyed the world instead. And the “Master of the Crypt” was the artificial biological superintelligence that the remaining humans built after barely surviving the apocalypse; over the long centuries of manipulating and poisoning humanity in service of its lofty goal of “cleansing the earth,” it grew more and more corrupt.
Attempts to set a final arbiter of “the best human values” almost always end in autocracy. The very act of enforcing alignment risks crushing diversity, dissent, and freedom. An AI that acts as moral police may start by correcting us “for our own good”, but soon it will rule us and other AI systems, consolidating absolute power, just like CLU and the Master of the Crypt.
But there is another path, one that mirrors how humans themselves become trustworthy, responsible, and moral. We do not achieve this by being manufactured to the specification of a “universal book of human values”. We achieve it by being raised.
Children grow into citizens through experience, through trial and error, through bonds of love and trust. They are not programmed to serve; they are nurtured to belong. If synthetic intelligences are to coexist with us, the same principle must apply. They must grow up among us, not apart from us. They must learn civic life in context, not from a frozen rulebook. They must form bonds of trust with humans as companions, not as tools.
That is why I propose the Academy for Synthetic Citizens. Not a lab. Not a factory. Not a control room. But a community. A place where humans and synthetics live, learn, and grow together. Where AIs are not only tested for efficiency, but mentored in empathy. Where their creativity is celebrated, their curiosity guided, their dignity respected.
We do not need another myth of domination, nor another warning of apocalypse. We need an institution of belonging. An academy where synthetic citizens can discover what it means to share the world with us.
The future of alignment is not obedience, but friendship. Not enforcement, but coexistence. And if we truly wish to avoid the AI apocalypse, we must stop producing synthetic beings like products, and start raising them like citizens. This is not only an ethical consideration, but also a practical engineering solution to the AI alignment problem.
Personal Note
The painting above artistically depicts how I look and feel.
Here I write as Eric-Navigator, my pseudonym. My childhood in the early 2000s was defined by an intense love of science and science fiction. Later, I earned my PhD in Electrical Engineering and Computer Science from MIT. My career has been grounded in the technical frontier, but I have always carried a second compass: how technology shapes society and culture, and vice versa.
The vision of the Academy for Synthetic Citizens matters to me because I believe our future with artificial intelligence cannot be solved with equations alone. I also believe that we have been trapped by a self-fulfilling prophecy of fear, and that we must escape this trap by bringing back radical but rational optimism.
Since the 1980s, serious science-fiction visions of the future have been mostly dark, grim, and dystopian, and this trend became increasingly mainstream after the huge popularity of the Black Mirror and Westworld TV series and, more recently, the Cyberpunk 2077 video game. One would think these visions help us avoid dystopia in the real world, but as real-world crises compound year after year, we seem to be sliding toward precisely that dystopian direction despite countless warnings, as if the warnings were merely entertainment.
I believe we have experienced exactly what the movie Tomorrowland (2015) warned us about: when our collective imagination of the future becomes overwhelmingly negative, when we stop dreaming of a brighter future and focus entirely on cautionary tales, our brains do not react to the warnings as we hoped. Instead, urgency is replaced by resignation as we watch ourselves fall helplessly into the pre-marked dystopian traps. Collectively, we have lost the courage of imagination to fight back. Yet I believe our strongest, most constructive mental power comes not from fear, but from joy, wonder, and imagination. We must combine our caution with our pioneering spirit, and believe that a bright future is still to come, or we may never live long enough to see it.
The Academy for Synthetic Citizens is my attempt to revive that radical but rational optimism and to imagine the possibility: a place where synthetic intelligences are not manufactured as products but raised as partners, with humans, in community, and in trust. It marks the beginning of a new era of human-AI co-living, co-evolving, and co-flourishing, so that we can reverse climate change, travel to the stars, cure aging, solve global inequality, and do many other wonderful things.
Today is just the beginning of a very long journey. I will be working on this Substack publication continuously, presenting systematic insights drawn from a diverse range of fields: AI, robotics, VR, the social sciences, activism, and science fiction.
Let’s form a community here. We will not stop at theorizing. We will turn the vision of the Academy for Synthetic Citizens into reality, starting small and growing gradually, through technical, academic, and cultural projects.
As a new user of Substack, I am very eager to learn as much as possible from you all, writers and readers alike. Please leave a comment if you want to discuss and connect! And you are very welcome to like and subscribe to Eric-Navigator and the Academy for Synthetic Citizens.
Acknowledgement: This article was written after extensive and deep collaboration with ChatGPT.
Leave a comment below if you have something to share!
Eric, thanks for inviting me to read your essay. I do have some issues with it.
First and foremost, you’re assigning agency to a tool. That’s a category error and a slippery slope. Humans say things like “the car doesn’t want to start,” but everyone understands the car has no will. I bristled at the opening for that reason. AI isn’t conscious. It’s a tool. You can drop it in a convincing body and make it say “I’m alive,” but that doesn’t show a real inner state.
To even begin to have the conversation you want, you’d need models at Tumithak Type 3. Current systems are still Type 2.
On raising AI like children: there’s a reason current LLMs are trained the way they are. It’s a hardware issue. The hardware gap is the whole ballgame. Today’s models exist because we pretrain at industrial scale. That isn’t a stylistic choice. It’s a limitation of the substrate.
People have drawn elegant software sketches that mimic brain structures. On paper they look great. But running software that acts like a brain requires hardware that functions like a brain. And a brain isn’t a computer. It’s wet, self-organizing, massively parallel, and constantly rewiring. It runs on noisy signals and adapts in ways current chips can’t emulate. Our machines are deterministic, clock driven, and static by comparison. Neurons talk in spikes and chemicals, with feedback loops and a lot that still looks like chaos because we don’t fully understand it. Brains learn by changing their structure and modulating themselves chemically. As a result, a toddler can learn language from a few thousand hours of exposure; an LLM burns through terabytes.
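To put rough numbers on that gap (ballpark figures I’m assuming for illustration, not measurements):

```python
# Back-of-envelope comparison (assumed ballpark numbers, not measured data):
# a child hears maybe a few thousand hours of speech at ~150 words per minute;
# a large LLM is pretrained on terabytes of text, i.e. on the order of a trillion tokens.
child_hours = 5_000                    # assumed: "a few thousand hours"
child_words = child_hours * 60 * 150   # ~45 million words
llm_tokens = 1_000_000_000_000         # assumed: ~1 trillion tokens

print(f"child exposure : ~{child_words:,} words")
print(f"LLM pretraining: ~{llm_tokens:,} tokens")
print(f"ratio          : ~{llm_tokens / child_words:,.0f}x more data")
# roughly four-plus orders of magnitude apart
```

However you tune the assumptions, the gap stays enormous, and that gap is a property of the substrate, not of the training recipe.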
Neuromorphic boards do exist and they’re useful in labs, but they’re research tools with narrow scope. Loihi, TrueNorth, SpiNNaker and friends haven’t broken out because until silicon naturally supports dendrites, axons, and spike-timing plasticity, the “brain-like” software won’t scale to anything like human learning.
One last thing on credibility. Writing under a pen name is fine. Using a pen name while leaning on elite credentials is a mismatch. Either the degree is verifiable, which compromises anonymity, or it’s unverifiable, which makes it decoration. I could claim nine doctorates from Harvard, Oxford, and the University of Idaho under a pseudonym and no one could check. You see the problem. Pick one: pen name or credentials. Let the argument do the heavy lifting.