I’ve been wrestling with the philosophy of mind, particularly neural network theory, since college. I’m fascinated by what I think is the ultimate question: what is the mind? I think it touches the soul (pardon the pun) of our internal conflict: that feeling of unease, that feeling of wonder, that feeling of self-ness and, at the same time, otherness.
My most recent work has brought me closer to the private mind theory. This theory states that you can have direct access only to your own thoughts and experiences, never to the thoughts or experiences of anyone else. This means, of course, that I can’t tell for sure whether you’re a “real person” or merely an automaton. “Real person” here is loaded with all sorts of questions about agency and free will vs. determinism. But if we assume that we have some form of free will, whatever that might be, and that we have something like direct access to our own experiences, then when we see other people acting in familiar, predictable ways, we infer by analogy that they, too, feel agency, a sense of free will, and their own experiences.
So where does artificial intelligence come into all this? Well, a couple of weeks ago I read an article in a local paper (SevenDays: Are Roboticists Ignoring Consequences) about roboticists who hope to create artificially intelligent robots and then “upload” human minds into them. You can probably see where I’m going with this.
Say that we do create a robot capable of receiving a human mind. And say that we manage to upload a mind into the robot successfully, and that the robot acts, talks, and reacts in ways consistent with our knowledge of the mind’s donor (i.e., the human). If it walks like that human and talks like that human, then we must conclude, based on private mind theory and how it already governs our dealings with other humans, that this is the same human, just in robot form.
But wait. Can’t you imagine a robot that was simply programmed to respond in a predictable and adaptable way? All we can rely on when confronting this new mind-robot is to ask it, “Are you really Jane?”
“Yes. I am still Jane.”
“Do you feel emotions and feelings like you did when you were in a human body?”
“Yes. I feel everything just as I did before.”
So that’s it. We must assume that Jane is still Jane because we cannot have direct access to her feelings and emotions; all we have is her report of them. Just the same as when she was human.
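To make the worry concrete, here’s a deliberately dumb sketch in Python (entirely my own, hypothetical illustration, not anything from the article): a scripted responder that gives the very answers we’d accept from the uploaded Jane. Nothing in it experiences anything; it just maps questions to canned replies.

```python
# A toy "Jane" that passes our interrogation without any inner life.
# The questions and replies are hypothetical, chosen to mirror the
# dialogue above; nothing here feels, remembers, or experiences.

CANNED_REPLIES = {
    "are you really jane?": "Yes. I am still Jane.",
    "do you feel emotions and feelings like you did when you were in a human body?":
        "Yes. I feel everything just as I did before.",
}

def respond(question: str) -> str:
    """Return a scripted answer; fall back to a generic affirmation."""
    return CANNED_REPLIES.get(
        question.strip().lower(),
        "Of course. Ask me anything about my life.",
    )

if __name__ == "__main__":
    for q in [
        "Are you really Jane?",
        "Do you feel emotions and feelings like you did when you were in a human body?",
    ]:
        print(q, "->", respond(q))
```

From the outside, its answers are indistinguishable from genuine-Jane’s, and private mind theory says the outside is all we ever get.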
This head-scratcher got me mulling over the ethics of creating artificial intelligence. What will it mean for law and justice if we have robots that can think and feel? Do we grant them agency? Free will? Are they property? Can they be decommissioned or destroyed?
I plan to work on these issues in the months to come. One of my plans is to reach out to folks around the country, ask for their opinions on the ethics of artificial intelligence, and post the responses here. If you’d like to comment on this topic, please feel free. I’d love to hear your thoughts.
Until next time.