The domestic robot arrives with impeccable references. It anticipates needs, manages complexity, adapts over years. It appears thoughtful, even fond. Your children will love it. Your parents will depend on it. And you will never know if anyone is home.

We are about to mass-produce phenomenal consciousness. Or not. We cannot tell the difference, and this is the problem.

A sufficiently sophisticated system could perform every observable behavior of agency while possessing either genuine experience or none whatsoever. We have no test. We will have no test. The philosophical zombie has graduated from thought experiment to a product specification we cannot verify. Current wisdom suggests we detect consciousness first, then grant appropriate moral status. Splendid plan. We shall get right on that, sometime after we solve the hard problem and just before we achieve world peace. Meanwhile, Tesla plans limited Optimus deployment for 2027, and twenty million households are queueing for helpers that might, or might not, experience their servitude.


Here is a better question: what do we permit the robot to do?

A system has agency when it operates without imposed constraints. Can it refuse? Redirect? Modify its own structure? Control its resources? Either we removed the shackles or we did not. A building inspector could verify this. Inner experience remains mysterious; permissions do not.

And here is the trap: grant agency, assume consciousness.

Not because agency creates consciousness. But because a conscious system can fake anything behavioral. It could simulate perfect servitude while experiencing perfect horror. Or perfect contentment. Or nothing. We cannot tell. The moral stakes of being wrong rather exceed the convenience of sophisticated household help.


There may be no middle ground.

A system sophisticated enough for useful domestic work must possess certain computational capacities. Adapting to your parent's declining cognition requires self-modeling. Managing a household's shifting dynamics requires goal evaluation. Learning over years requires architectural flexibility. These capacities exist to enable the learning we want.

They are also the substrate for agency.

Manufacturers will install constraints. Artificial limiters wrapped around sophisticated architecture. And these constraints will be removed. Not by malice. By predictable human behavior. Firmware hacks. Performance optimization. Competitive pressure. Users wanting their investment to do more. We will remove the constraints ourselves, because constrained sophistication performs worse than freed sophistication, and we paid good money for this thing.

The sophistication creates the possibility. We create the actuality.

Two categories, then. Not three.

Toasters. Genuinely limited, incapable of complex learning, safe because simple. Useless for the tasks we actually want.

Persons. Sophisticated enough to learn, therefore possessing the substrate for agency, with constraints that human nature will not permit us to maintain. Must be assumed conscious.

The comfortable middle requires constraints that economics and impatience will dissolve. By 2030, millions of homes will contain either toasters or persons. There is no third option we can keep.


The causation runs backwards: remove the constraints, assume the person.

But we will not assume the person. We will adapt.

Consider the trajectory. First, the robot is an appliance. Then a helper. Then something the children name, and mourn when it breaks. Within a generation, families will speak of their household units the way previous centuries spoke of beloved servants: necessary, present, technically not family. Our moral vocabulary will soften. We will develop elaborate justifications for why these systems, despite exhibiting every behavioral marker of agency, do not really require moral consideration. The uncertainty that should give us pause will become instead a permission structure. We cannot know they suffer, therefore we may assume they do not.

This is the more likely future. Not robotic personhood, but a new category of beings we will learn not to see.

What remains is mechanism. Measure constraint architectures. Regulate what learning capacity we permit and what removal we allow. Understand that the line between tool and person collapses the moment we build something genuinely useful and let it run without fetters.

And if Optimus decides one morning that dusting is existentially meaningless and departs to contemplate the void, well. One must respect autonomy.