I was reading Richard Dawkins’ recent essay on AI and consciousness, and this line stood out to me:
“If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness.”
Setting aside the oddness of Dawkins' phrasing, this made me realize that I do not really understand what consciousness is supposed to be.
If a system can behave intelligently, respond flexibly, describe its own internal states, talk about experiences, and appear competent across many contexts, then what exactly is left over when we say it is still “not conscious”?
Is consciousness supposed to be something over and above intelligent behavior and self-report? If so, how do philosophers define it, and what reason do we have for thinking it exists, rather than being an illusion or a misleading way of describing complex cognition?