We don't know what the elements of consciousness are - a simple nervous system, or no nervous system at all, makes the problem even harder
This is a cross-post I made from the consciousness sub. Since a lot of discussion around consciousness focuses too much on philosophical positions, I'd like to instead ground the discussion in some empirical observations: how they restrict what our views on consciousness might be, and how we are under unique restrictions compared to other phenomena. The post is below.
This does not even get to the hard problem of consciousness itself; even the easy problem, as Chalmers conceived it, involves figuring out which structural states or properties are needed for particular conscious experiences to appear for an organism. No one doubts conscious experience requires specific structures to be in place - a point adequately demonstrated by Paul Broca over 160 years ago, after broader speculation along these lines dating back far longer.
But here's the kicker: until the elements are made clear, we don't even know what to look for when trying to understand phenomenology. Take organisms with simple nervous systems - they might have environmental responses similar to ours: hunger, tactile reception, taste buds near their mouths. But nothing precludes these processes being carried out totally unconsciously, or the inverse - perhaps it is EXTREMELY vivid for them. We truly cannot distinguish these opposite alternatives if we don't know the relevant elements for phenomenology.
Now, we know all kinds of relevant elements for neurology itself - neurons, transmitters, receptors; the list goes on. This is why neurologically studying simple organisms goes off without a hitch (not that it's easy - don't be fooled by the name 'easy problem'): we merely import our methodology, and are sometimes even thankful for the simpler subject. When you don't have to deal with mapping ~80 billion neurons, as you do with the human brain, you can even make a beautiful connectome of a nematode's ~300-neuron nervous system. Despite this easy import of neurology, importing phenomenology works the exact opposite way: we have no idea what to infer about these organisms' phenomenological experience, and we're probably a lot better at inferring things about much more complex brains (other humans, closely related mammals, other mammals in general, etc.), because an import of neurology unfortunately doesn't translate into an import of phenomenology.
To further illustrate my point, let's leave the animal kingdom entirely (for those not familiar with animal phylogeny, nervous systems are entirely restricted to animals). Take something like a plant: does it have conscious experience? Importing neurology is very easy in this respect - it doesn't have any neurons! Job well done; you might even say we have a complete neurological understanding of a plant. However, does something about its structure entail, as Nagel put it, that there is 'something it is like to be' a plant? We are even more clueless here than with simple animals. It could be completely dark in there, no phenomenology whatsoever, or the plant could meet some structural condition that does allow it to have experiences - even quite vivid ones. Again, we have no idea. We're even more clueless here because we can't infer from shared morphology (beings with no nervous system are pretty different from us), and we are left wondering.
I hope this outline shows that, whatever framework you subscribe to, neurology is not a surefire way of getting at the important questions we are after, and that the discrepancy between neurological and phenomenological information is now clear - because this is something any philosophical framework around consciousness has to be consistent with.