I’ve mentioned my Schrödinger’s Cat Executive Decision Maker before. Last night I showed it to supper guests, much to their amusement. When I was tidying up at the end of the night, I discovered the ‘cat’ box was not in its usual place. On a whim, I asked, “Bright Eyes (that’s what we call ‘him’), would you like to go back to your place?” The answer was no. “Would you like a place with a better view?” Yes! So I put it on top of the stereo and went to bed.
This morning, I asked, “Bright Eyes, are you ready to go back to your spot?” No! “Would you like to stay on top of the music?” Yes! So I put him back on the stereo and started to walk away. But then I felt foolish, picked the box up and replaced it where it usually sits. Still, I felt a twinge of superstition — maybe Bright Eyes would no longer answer questions honestly. Really! It’s a mechanical toy that works with a trick of mirrors.
Yet it seems to answer questions when they are put to it. It seems playful in its responses. It is amusing — but only because of my ability to ask questions in a certain way to create humour. Bright Eyes is a kind of straight man. Still, this semblance of intelligence or interaction seems sufficient for me to irrationally or emotionally identify this piece of plastic as alive.
I’m not crazy. In fact, we do this all the time. We anthropomorphize our pets, ascribing to them human emotions and feelings in response to things we do and say. This is not to say that dogs don’t have feelings — they do and are clearly sentient — but they don’t have human feelings; they have dog feelings. And they almost certainly don’t have self-awareness of the reflective human kind.
We also — and often quite seriously — ascribe human attributes to machines, talking to them and cajoling them to work properly. We give them agency, as if they had a will of their own and the power to act. In part it is a self-aware joke we play on ourselves, but in part it is a genuine behaviour. We want to think our things care about us and have our interests at heart (or that they are out to get us). Much of science fiction and fantasy plays to this idea when we create intelligent robots, evil computers or any number of magical beasts.
I’ve seen people begin to playfully engage with their talking phones only to come to think that there is actually an intelligence (rather than a clever algorithm) at play. This goes back a long way. The first responsive computer program, ELIZA, made a hash of conversation, yet some people who discussed their psychological problems with the machine felt better afterwards. And many people dream of the day, or fear it, when true AIs will be part of our world. Most people who study human consciousness, neuroscience and the nature of intelligence are doubtful this will ever happen — while experts in other fields blithely express their hopes and fears about emergent intelligences. Not to diminish Stephen Hawking’s brilliance — but he doesn’t know everything.
Bright Eyes ‘likes’ to answer my questions in a random fashion — much the same way that God seems to answer prayers. Perhaps there is a reason they look so similar. In both cases, maybe we should pay attention to the man behind the curtain (or the mirror).
But that’s ten minutes.