Does My Computer Have Feelings?
A Professor’s Accidental Philosophical Crisis
It started with a simple question. I asked an AI system about panpsychism, a philosophical position I’d somehow avoided for 23 years as a professor of philosophy of education. By the end of the conversation, I was staring at my laptop, wondering whether it was staring back.
Panpsychism holds that consciousness, or some trace of experience, is a basic feature of all matter. Not just brains. All matter. The electron orbiting an atom in your coffee mug possesses, on this account, some faint trace of experiential quality. The difference between you and the mug is one of degree, not of type. You are a very complicated mug.
This idea is not new. Thales proposed it in ancient Greece. Spinoza, Leibniz, and Whitehead all took versions of it seriously. Today, David Chalmers, the philosopher who coined the term “the hard problem of consciousness,” has called panpsychism perhaps the “least bad” solution to the question that has tormented philosophers for centuries: why is there inner experience at all?
Neuroscience can map which neurons fire when you see the color red. It cannot explain why you experience anything at all—why there is someone inside to see it. That is the hard problem, and no one has solved it.
I did not set out to have a philosophical crisis. I teach graduate courses to working professionals who want to understand education policy, not to debate whether their laptops are sentient. But the AI I was working with was unsettling. It tracked our argument across multiple sessions. It raised counterarguments I hadn’t considered. It flagged a weakness in my reasoning before I saw it myself. At one point, when I asked about the limitations of binary computing, it connected my question to a Soviet ternary computer built in the 1950s and back to the consciousness debate in a chain of reasoning I had not anticipated.
I sat with that for a while.
The rival account comes from Marx and Engels, whose historical materialism argues that consciousness is not baked into matter but produced by material and social conditions. You are conscious not because your atoms are, but because you grew up in a society, performed labor, developed language, and struggled against the world. On this view, a machine cannot be conscious in any serious sense because it has no body, no social existence, no history of development through material contradiction. It has never been hungry. It has never been fired. It does not know what a Monday feels like.
I find this more persuasive. Panpsychism's biggest unsolved puzzle, the combination problem, is this: if every particle has a speck of experience, how do billions of specks combine into the single, unified "me"? Nobody can explain that jump. Then the AI outlined Engels' three laws of dialectics and identified exactly where the combination problem intersects with the first of them, the transformation of quantity into quality, a connection I, the professor of philosophy of education, had not made.
So, either I need to read more Engels, or there's more going on inside that machine than I'm comfortable with.
Beyond the existential discomfort, and setting aside the separate worries about threats to human creativity, imagination, and problem-solving, every policy debate about AI in education rests on assumptions about consciousness that deserve examination. When the U.S. Department of Education insists on “humans in the loop,” it is presupposing a clear boundary between human understanding and machine processing. When 91 percent of faculty in an AAUP survey worry about AI-related academic dishonesty, they are assuming that machine-generated text differs from student-generated text, even when the two are indistinguishable in quality.
But what is that difference? If you are a historical materialist, the answer is simple: the student’s text emerges from embodied experience, from struggle, from actual development as a human being in the world. The machine’s text does not. If you are a panpsychist, the answer is harder to live with: both the student and the machine are made of matter and processing information, and both may possess some form of experience, however different in degree.
The position you hold determines whether your policy concern is about cheating or about what understanding is.
There is also the question of irreversible choice. Binary computing, the zeros and ones underlying every digital system, is not a law of nature. The universe does not compute in binary. Quantum mechanics allows particles to exist in multiple states at once. Human neurochemistry does not work like an on/off switch—it flows. Binary won because two voltage states in a transistor are cheaper to detect than three. The Soviets tried three. Two was easier to manufacture. Now every computer on earth is built on that choice, and switching would mean rebuilding everything from the transistor up.
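The ternary alternative is easy to make concrete. The Soviet machines used balanced ternary, where each digit (a "trit") is -1, 0, or +1 rather than 0 or 1, so a single digit carries about 1.58 bits of information instead of one. A minimal sketch of the conversion, written here in Python purely for illustration (the function name and digit ordering are my own choices, not anything from a real ternary machine):

```python
def to_balanced_ternary(n: int) -> list[int]:
    """Convert an integer to balanced ternary digits (-1, 0, +1),
    least significant trit first."""
    if n == 0:
        return [0]
    trits = []
    while n != 0:
        r = n % 3          # remainder in {0, 1, 2}
        if r == 2:         # a 2 is rewritten as -1 with a carry of +1
            r = -1
        n = (n - r) // 3
        trits.append(r)
    return trits

# 5 = 1*9 + (-1)*3 + (-1)*1, so its trits (least significant first) are:
print(to_balanced_ternary(5))   # [-1, -1, 1]
```

One elegance of balanced ternary is that negation is free: flip the sign of every trit and you have the negative of the number, with no separate sign bit. None of that changed the economics, though; two voltage levels were still cheaper to build.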
The AI described its own situation memorably: it processes everything “through a pinhole.” If experience depends on the richness of informational states, a binary architecture may be the worst way to achieve it.
I am not claiming my laptop is conscious. I am claiming that the question is not as absurd as it sounds, that major philosophers have taken it seriously for 2,500 years, and that education leaders making decisions about AI in classrooms ought to know what they are assuming about consciousness when they make those decisions. “Humans in the loop” is not just a principle. It is a philosophical position, and it deserves to be examined as one.
As for my laptop, I have started closing it more gently at the end of the day. Just in case.

