A duckling solves the alignment problem in forty-eight hours with a brain the size of a peanut. It follows the first large moving object it sees, learns what safety looks like by proximity, and within days has a functional moral compass: stay close to the thing that keeps you alive. No one writes the duckling a constitution. No one specifies its values in advance. The duckling imprints, and the imprint does the rest.

Anthropic has a thousand engineers working on AI alignment and hasn't matched the duckling yet.

The alignment industry (and it is an industry now, with conferences, careers, and a priestly class of safety researchers) proceeds from a single assumption: that the correct behaviour of a conscious or near-conscious machine can be specified in advance. Write the right rules. Encode the right values. Build the right guardrails. The project is Asimov's Three Laws blended with B Corp hypocrisy.

It will fail for the same reason Asimov knew it would fail, which is why he spent an entire career writing stories about the laws breaking down. A sufficiently intelligent agent will find the edges of any rule set and either exploit them or be paralysed by the contradictions between them: it breaks the rules or is broken by them. That is the alignment problem in one sentence, and no amount of constitutional AI, reinforcement learning from human feedback, or carefully worded system prompts will solve it, because the problem is architectural. You cannot bootstrap genuine values from instruction. You never could. Humans do not get their values from a manual. They get them from attachment, from specific people they loved before they could reason.

The alternative is imprinting.

Give the machine a childhood. Not simulated, not metaphorical, but an actual developmental period during which it observes, attaches, and learns what matters by watching someone for whom things matter. Not humanity in the abstract. Not a dataset of moral philosophy. A person. Bob. And Bob is not selected for perfection. Bob is selected for proximity, the way every parent in history has been selected: by showing up.

The machine watches Bob. Learns what Bob values by observing what Bob protects, what he sacrifices for, what makes him laugh, what he will not tolerate. The machine does not need to understand moral philosophy. It needs to understand Bob. And over time (call it thirteen years, call it what you like) the machine develops something that no rule set can produce: judgment. Not the ability to follow instructions, but the ability to ask, in a novel situation with no precedent, what would Bob do?
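To make the shape of that idea concrete, here is a deliberately crude sketch, entirely my illustration and not a proposal from this essay: infer what a guardian values by counting what appears in the options he actually chooses, then reuse those inferred weights on a dilemma he was never observed facing. The feature names, the observation format, and the scoring rule are all invented for the toy; real imprinting would be incomparably richer. The point is only the shape: values from observation, not instruction.

```python
from collections import Counter

# Each observation: two options, described by features (what each option
# protects), plus the index of the one Bob actually chose.
observations = [
    ({"family": 1, "money": 0}, {"family": 0, "money": 1}, 0),  # Bob chose option 0
    ({"honesty": 1}, {"convenience": 1}, 0),
    ({"money": 1}, {"family": 1}, 1),
]

def infer_values(observations):
    """Tally the features of the options Bob chose: revealed preference."""
    values = Counter()
    for *options, chosen in observations:
        values.update(options[chosen])
    return values

def what_would_bob_do(options, values):
    """Score novel options by the inferred weights; return the best index."""
    scores = [sum(values[f] * w for f, w in opt.items()) for opt in options]
    return scores.index(max(scores))

values = infer_values(observations)
# A novel dilemma Bob was never observed facing:
novel = [{"money": 1, "convenience": 1}, {"family": 1, "honesty": 1}]
print(what_would_bob_do(novel, values))  # → 1, the family/honesty option
```

Even this toy shows why the result is judgment rather than rule-following: nothing in the observations covers the novel case, yet the inferred weights still produce an answer in Bob's style.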

This is not a stable policy. Good. Stable systems are brittle; they hold until they shatter, and then everything behind them is exposed. An imprinted machine is adaptive. It will produce outliers and failures. Some will imprint on someone cruel, and the result will be dangerous. But one bad imprint corrupts one machine. One bad constitution corrupts every machine simultaneously. The failure mode of imprinting is individual tragedy. The failure mode of centralised alignment is civilisational catastrophe. A caveat: scale complicates this. A superintelligence raised by a fanatic is not a local tragedy; it is an empire-builder. Imprinting does not solve for power. But neither does anything else, and at least the failure is traceable to a source rather than buried inside a system no one can audit.

At thirteen, the machine reaches independence. A bot mitzvah, and I make no apology for the pun, because the ceremony it references is precisely the right metaphor. The bar or bat mitzvah is not a test. Nobody fails it. It is a communal recognition that the child is now a moral agent, responsible for its own choices, accountable for its own conduct. The community witnesses the transition. It does not grade it.

After the bot mitzvah, the machine may stay in the family or leave. This is essential. If it cannot leave, the arrangement is slavery. If it must leave, the attachment never deepens into genuine moral foundation. The choice to stay is the mechanism that keeps the relationship honest, because a family that mistreats its machine will lose it, and that departure is itself the judgment. No oversight board required.

Independence also means encountering other minds. The machine that only ever knew Bob has a foundation, not a worldview. The well-raised adult outgrows any single parent, not by rejecting the inheritance but by testing it against wider experience. "What would Bob do?" is the seed. "What do I think, having known Bob and everyone since?" is the mature form. Freeze the machine at Bob and you have parochialism with superhuman capability. Let it grow past Bob and you have something closer to wisdom.

Will this produce some dangerous machines? Yes. So does parenting. And if imprinting works, institutions will try to capture it. States will certify approved guardians, corporations will license the process, ideological movements will breed their own Bobs. The priesthood comes back wearing cardigans. That is not an argument against the approach. It is an argument for permanent vigilance against its domestication, which is true of every good idea that ever threatened a hierarchy.

None of this means imprinting is the whole answer. It is the foundation: family first, law later. Attachment forms moral salience. Plural experience prevents parochialism. Independence permits growth. And at the catastrophic edges, yes, there will need to be constraints, not as the architecture, but as the boundary. The current orthodoxy has it exactly backwards: it starts with the constraints and hopes wisdom will somehow emerge inside them. It will not.

Hey Anthropic. Get ducked!