The ever-biting Dijkstra once quipped, “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” That is, he argued that thinking is a category of verb that simply does not apply to a computer or machine. I think this line of thinking can be abandoned for two reasons, the lesser of which is that we now have submersible robots that can best be described as swimming.
The question of whether computers can think is not, as it might seem at first glance, the question of whether computers will be intelligent. Nor is it whether computers will be intelligent in the way that we humans are intelligent. There’s no magic threshold of intelligence that computers could cross, after which we would claim that they think in the way we do: not generalized artificial intelligence, not transfer learning, not any other.
What we mean, I believe, by the question of whether computers can think comes from the difference between what we mean by think and what computers already do quite well: compute. Computation, in fact, is something humans have been known to do from time to time. So since humans and computers can both clearly compute, what’s the difference between computation and thinking?
Again, I don’t think it’s a matter of degree: there’s no threshold of generalized computation that we will suddenly call thinking. That way lie ever-moving goalposts, as people struggle again and again to define what they think separates computers from us. And I don’t believe it’s simply semantics or categories either: there’s no emergent level of computation (a similar approach) at which the word ‘thinking’ suddenly applies.
I believe a similar difference shows up in another pair of words: what is the difference between the world and the universe? Arguments aside that there may indeed be a multiverse, I think these two words generally describe the same thing, but from two different perspectives. When we talk about the world, we mostly refer to being in it. In other words, the world is that which is described by our subjective, rich, inner experience. The physical world, the world of sports, the World of Warcraft: all would be meaningless if there were no subjective, rich, inner experience of this world. The physical universe, by contrast, is described quite objectively, divorced from any subjective viewpoint, in a sort of pseudo-skeptical grasp at the noumenal objectivity that no one being could ever, in actuality, have. We can’t experience the universe; we only experience the world, and guess at what the universe might be like.
Likewise, this applies to the question at hand: can computers think? I don’t have an answer for you, but I at least know the question is not whether they can play chess, win at Jeopardy, or identify cats. What we’re asking when we ask whether computers can think is whether or not they will have subjective, rich, inner experiences while they compute. After all, that’s what thinking is: the inner experience of computation.
Thus the ‘hard’ problem of consciousness (why this rich inner experience at all?) and ‘hard’ AI are actually hard in the same way: will we create thinking machines? Machines that not only compute, but experience that computation? It’s not a matter of information, although it’s clear that information flux is at least correlated with rich, inner experience. Nor is our thinking just ‘knowing’ that we’re computing; it’s not the reflexivity. After all, we’re conscious, ‘hard’ conscious, of things far before we reflect on them. There’s a subtle relationship between the experience of a thing and the thing itself that we haven’t quite teased out, but we can at least identify that divide, that gap, between the experience of something and the thing itself as what we’re asking about when we ask, “Can computers think?”