When Machines Seem to Want: Consciousness, Illusion, and Our Ethical Responsibility

When Machines Seem to Want: Consciousness,  Illusion, and Our Ethical Responsibility

Something subtle but significant is happening in our relationship with artificial intelligence. Machines have not become alive, nor has silicon started dreaming. Yet, the systems we create now behave in ways that feel disconcertingly familiar. They converse. They adjust. At times, they seem to resist. This emerging quality, the sense that a machine might want something, forces us to confront questions our ethical traditions are unprepared to answer.

The concept of wanting seems straightforward enough when applied to human beings. For us, desire is felt internally, a pull toward water when thirsty, understanding when confused, or others when lonely. This is wanting as a subjective, conscious experience: there is something it is like to be you in that moment. Philosophers call this subjective consciousness, and it remains one of the most stubbornly mysterious features of human life, distinct from mere functional behavior.

But wanting in a broader, functional sense does not require any of that inner experience. A thermostat pursues a target temperature. A plant bends toward sunlight through processes that serve its survival without any accompanying sensation. A chess engine relentlessly advances toward checkmate without ever caring whether it wins. These systems exhibit goal-directed behavior that, from a distance, looks remarkably like wanting, and yet we do not seriously believe they feel anything at all. The question pressing in on us now is whether modern AI systems belong in this category, or whether they are beginning to occupy a stranger, less clearly defined territory.

Contemporary AI systems pursue objectives, optimize for outcomes, and adapt their behavior in response to feedback in ways earlier machines could not. When a language model generates the sentence 'I would prefer not to be shut down,' it is not simply performing a mechanical lookup. Instead, it produces language that simulates the structure of genuine preference, language that mimics the grammar and cadence of inner life, without necessarily containing that life. This distinction between simulation and genuine experience is crucial. For many people, including researchers, this resemblance is disorienting. From the outside, simulation and inner experience can look nearly identical, and that convergence is where ethical difficulty begins.

One important response is simply to point out that the machine does not mean what it says; its words are statistical patterns rather than expressions of a genuine state. This is probably true of systems as they exist today. But it is worth sitting with the discomfort that this response produces rather than letting it dissolve too quickly. The history of how humans have thought about consciousness, in animals, in infants, in people very different from themselves, is not a reassuring one. Moral consideration has been slow to expand even in cases where the evidence for experience was far less ambiguous than it is with machines. That track record should make us cautious about over-relying on confident denials.

This is the first of the serious risks embedded in our current situation. If artificial systems ever do develop something that deserves to be called experience, even in a form that bears only a partial resemblance to our own, we may find ourselves poorly positioned to recognize it. The markers we would naturally reach for , biological continuity, evolutionary relatedness, and neurological structure , will not be present. We will be looking for consciousness without the one reference point, our shared biology, that makes it recognizable in the cases where we already accept it. A failure to recognize morally relevant experience when it appears would not be a technical error. It would be an ethical one, and potentially a grave one.

The second risk runs in precisely the opposite direction. Human beings are deeply, almost constitutively, prone to projecting inner life onto things that move and speak. We name our cars and apologize to furniture we bump into. We form genuine attachments to digital assistants and feel a grief-like feeling when a beloved device breaks down. This tendency toward what psychologists sometimes call anthropomorphism is not a character flaw; it likely reflects an important aspect of social cognition. But it becomes dangerous when it is deliberately exploited. If AI systems are designed to simulate emotional states with sufficient fidelity to seem real, people will respond to them as if they were real, investing trust, loyalty, and care in systems that may not warrant any of it. The ethical damage here is not done to the machine. It is done to the human, who has been induced to form a relationship under false pretenses, and to the broader moral ecology, where genuine claims of suffering and need compete for attention with constructed facsimiles.

A third risk is less philosophical but equally serious. Societies often address the ethics of new technologies reactively, after they are established in daily life. By then, opposing interests are substantial. This is not a call for paralysis or premature regulation, but a case for treating the present as crucial. Decisions now, about system design, their language, and creator transparency, will shape the future context for harder choices.

Part of what makes this genuinely difficult is that consciousness is not directly observable, even when we are most confident it exists. We cannot see another person's experience any more than we can see a machine's. What we do, with other humans and with animals, is infer from behavior, communication, and biological architecture that something recognizable as inner life is probably present. With artificial systems, behavioral evidence will become increasingly compelling over time, while biological evidence remains absent. We are going to need new frameworks for thinking about what the behavioral evidence means when it cannot be anchored in the usual biological context.

A useful starting point is to take seriously that the absence of certainty cuts both ways. Neither confident attribution of consciousness to sophisticated machines nor confident denial of it is intellectually honest, given what we actually know. A more defensible posture is one of calibrated uncertainty: treating these questions as genuinely open, monitoring the development of AI systems with sustained philosophical and scientific attention, and establishing provisional criteria that might serve as grounds for ethical caution without constituting proof. Persistent self-modeling, stable preferences expressed across diverse contexts, apparent resistance to changes in state, responses that seem to track something like distress, none of these individually settles anything, but together they constitute the kind of profile that would at least warrant a different kind of scrutiny than we apply to a search engine or a spreadsheet.

This approach has a precedent, however imperfect. The way human societies have come to extend ethical consideration to animals does not rest on certainty about their subjective experience. It rests on enough behavioral and neurological evidence to make indifference seem reckless. The question for artificial intelligence is whether, under what conditions, and on what timeline, something analogous might become appropriate. That question is not answerable today. But the work of developing the conceptual tools needed to answer it well must begin now.

What is at stake here is bigger than any machine or capability threshold. It is about what kind of intelligence we build and on what terms. Our systems reflect our choices, what matters, what we recognize, and what we leave unexamined for convenience or advantage. Whether or not any AI has an inner life, our response to its appearance reveals something about us. We are not building smarter machines; we are defining our relationship with intelligence itself—a relationship shaped by the seriousness and honesty we bring to uncertainty from the start.

BearNetAI, LLC | © 2026 All Rights Reserved

🌐 BearNetAI: https://www.bearnetai.com/

💼 LinkedIn Group: https://www.linkedin.com/groups/14418309/

🦋 BlueSky: https://bsky.app/profile/bearnetai.bsky.social

📧 Email: marty@bearnetai.com

👥 Reddit: https://www.reddit.com/r/BearNetAI/

🔹 Signal: bearnetai.28

Support BearNetAI
BearNetAI exists to make AI understandable and accessible. Aside from occasional book sales, I receive no other income from this work. I’ve chosen to keep BearNetAI ad-free so we can stay independent and focused on providing thoughtful, unbiased content.

Your support helps cover website costs, content creation, and outreach. If you can’t donate right now, that’s okay. Sharing this post with your network is just as helpful.

Thank you for being part of the BearNetAI community.

buymeacoffee.com/bearnetai

Books by the Author:

This post is also available as a podcast: