Rain and the brain
The philosophy of AI really needs to catch up with the science
An argument sometimes ascribed to John Searle, and frequently brought out in conversations about artificial intelligence as a “gotcha!” to materialists, is that you can never achieve genuine consciousness in a machine: just as simulating a storm doesn’t make anything actually wet[1], simulating consciousness on a computer will not produce real consciousness.
There are at least three reasons why this argument is fallacious. I should really leave it, because any discussion of consciousness is futile, but hearing this silly little mantra over and over is so irritating that I’d really like to drive a stake through its heart.
First: of course a rainstorm in a simulation doesn’t make the real world (or the next level up, in a stack of simulations) get wet. But in the simulation itself things do get wet; the wetness is real within the simulation.
Second: the sensation of wetness is itself a simulation happening inside the brain. Your skin isn’t feeling the wetness; it’s sending electrical signals that the brain uses to construct the feeling of wetness.
Third: comparing simulations of thought to simulations of weather muddles two different categories. Thought is not a physical phenomenon. We infer that thought is happening from other evidence – neurons firing in the brain in particular patterns, and the way the organism behaves. To say consciousness in a simulation isn’t real consciousness is akin to saying that a mathematical computation in a simulation isn’t a real computation; in both cases we only know about it from the output. Indeed, we take it as an article of faith that other entities are conscious if they behave as we behave, because we believe we have personal experience of consciousness – and even that could be an artefact of the system. So if I relayed a conversation to you – “I said or did this, and the entity said or did that” – then after a while you’d be able to pronounce on whether you believed the entity was conscious, and you would do so before I told you whether it was a human or a machine or an alien.
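To make the computation point concrete, here is a minimal sketch in Python – my own illustration, not anything from the sources cited below. It implements a toy stack machine: a simulated computer running inside a real one. The sum it produces is not a “simulated sum”; it is a real computation, however many layers of simulation sit beneath it.

```python
# A toy stack machine: a "simulated computer" running inside Python,
# which itself runs on layers of virtualisation and microcode.
# The addition it performs is a real addition at every level.

def run(program, stack=None):
    """Interpret a tiny stack language: numbers are pushed,
    'ADD' pops two values and pushes their sum."""
    stack = [] if stack is None else stack
    for op in program:
        if op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            stack.append(op)
    return stack

# 2 + 3 computed by the simulated machine:
print(run([2, 3, "ADD"]))  # [5] -- not a "simulated 5", just 5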
The simulated wetness argument has been made by Christof Koch[2]. He also says that a human brain copied onto a machine might be a “philosophical zombie”: it might replicate all your behaviour but lack consciousness and the qualia of feeling. But in that case, what happens if somebody asks my philosophical zombie whether it is conscious? By definition it replicates my behaviour, so it must answer “yes”, as I would – but then it is lying. Why would it lie (assuming it is a functioning copy of my own brain connectome), and what would it be thinking as it lied? And if instead it said “I am not conscious” or “I do not feel joy”, it would no longer be indistinguishable from me – which suggests that the concept of a philosophical zombie is invalid.
We will continue to hear arguments for the impossibility of artificial intelligence. I’m not sure what value such philosophical musings can have; the only test worth anything is to try it and see. We have no evidence that the brain computes anything a Turing machine could not, nor any reason to suppose that a structure built by ribosomes from amino acids should possess inexplicable properties that could not be modelled on another substrate. Consciousness, being apparently untestable, might always be the last refuge of the organic-brain chauvinists, but at least we could demand that they come up with more rigorous arguments.
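For readers who want the Turing-machine claim made concrete, here is another minimal sketch – again my own illustration, hedged accordingly. It is a tiny Turing machine interpreter: the table of rules alone defines the computation, and whether those rules are executed by silicon, by this interpreter, or conceivably by neurons changes nothing about what is computed. That is the substrate-independence point.

```python
# A minimal Turing machine interpreter. The machine below flips every bit
# on its tape; the substrate executing the rules is irrelevant to the result.

def turing_machine(tape, rules, state="start", blank="_"):
    """rules: (state, symbol) -> (new_state, new_symbol, move), move in {-1, +1}."""
    tape = dict(enumerate(tape))
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        state, tape[head], move = rules[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A one-state machine that inverts a binary string, halting at the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", +1),
}
print(turing_machine("10110", flip))  # 01001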
[1] Used for example here: https://www.bbc.co.uk/sounds/play/b08zb4d8 (9m50s in). Another variant uses the analogy of masses simulated on a computer, for example in a model of galaxy formation, not actually exerting a gravitational pull.
[2] On Artificial Intelligence & You: https://aiandyou.net/e/271-guest-christof-koch-cognitive-scientist-part-2/ – and, by the way, I recommend the podcast.


I think there’s at least one Stanislaw Lem story where emulated intelligences prove to be real intelligences.
If consciousness is somehow not observable from the outside, then fine – but that makes anyone who believes this a cognitive solipsist.