Ethics of artificial intelligence

I’ve been toiling over the philosophy of mind, specifically neural network theory, since college.  I’m fascinated by what I think is the ultimate question: what is the mind?  I think it touches the soul (pardon the pun) of our internal conflict: that feeling of unease, that feeling of wonder, that feeling of self-ness and, at the same time, otherness.

My most recent work has brought me closer to the private mind theory.  This theory states that you have direct access only to your own thoughts and experiences, and not to the thoughts or experiences of anyone else.  This means, of course, that I can’t tell for sure whether you’re a “real person” or merely an automaton.  “Real person” here is loaded with all sorts of discussion about agency and free will vs. determinism.  But if we assume that we have some form of free will, whatever that might be, and that we have something like direct access to our own experiences, then when we see other people acting in predictable, familiar ways, we might reasonably assume that they, too, feel agency, sense free will, and have access to their own experiences.

So where does artificial intelligence come into all this?  Well, a couple of weeks ago I read an article in a local paper (SevenDays: Are Roboticists Ignoring Consequences) about roboticists who hope to create artificially intelligent robots and then “upload” their minds into them.  I can tell that you see where I’m going with this argument.

Say that we do create a robot that is capable of receiving a human mind.  And say that we manage to upload that mind into the robot successfully, and that the robot acts and talks and reacts in ways consistent with our knowledge of the mind donor (i.e. the human).  If it walks like that human and talks like that human, then we must assume, based on our understanding of private mind theory and how it applies to other humans, that this is the same human, just in robot form.

But wait.  Can’t you imagine a robot that was simply programmed to respond in a predictable and adaptable way?  So all we can rely on, when confronting this new mind-robot, is to ask it, “Are you really Jane?”

“Yes.  I am still Jane.”

“Do you feel emotions and feelings like you did when you were in a human body?”

“Yes.”

So that’s it.  We must assume that Jane is still Jane, because we cannot have direct access to her feelings and emotions; we can only rely on her report of them.  Just the same as when she was a human.
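
To make this worry concrete, here is a minimal, purely hypothetical sketch (in Python, with invented names) of a program that would pass the exchange above by lookup alone:

```python
# A toy "Jane" that answers the verbal test above from a canned table.
# Everything here is hypothetical; the point is only that scripted
# replies are indistinguishable, from the outside, from sincere ones.

SCRIPTED_REPLIES = {
    "are you really jane?": "Yes.  I am still Jane.",
    "do you feel emotions and feelings like you did when you were in a human body?": "Yes.",
}

def respond(question: str) -> str:
    """Return a canned answer, with an adaptable-sounding fallback."""
    return SCRIPTED_REPLIES.get(
        question.strip().lower(),
        "I'm not sure what you mean.  Could you rephrase that?",
    )

print(respond("Are you really Jane?"))  # -> "Yes.  I am still Jane."
```

Nothing in that lookup table tells us whether anyone is “home.”  All we get, exactly as with a flesh-and-blood Jane, is the report.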

This head-scratcher of an issue got me mulling over the ethics of creating artificial intelligence.  What will it mean for law and justice if we have robots that can think and feel?  Do we grant them agency?  Free will?  Are they property?  Can they be decommissioned or destroyed?

I plan to work on these issues in the months to come.  One of my plans is to reach out to folks around the country, ask for their opinions on the ethics of artificial intelligence, and post them here.  If you’d like to comment on this topic, please feel free to do so.  I’d love to hear your thoughts.

Until next time.
Kindly,
Jeff


9 thoughts on “Ethics of artificial intelligence”

  1. An interesting post. I’ve wondered the same thing. If we program an AI to feel fear or pain, even if we can see the code, the algorithm, that tells it to have those impulses, do we then have a responsibility to that AI? Before answering, consider that neuroscientists might eventually be able to isolate the “algorithms” that give *us* those instincts.
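
     To make the premise above concrete, here is a minimal, purely hypothetical sketch of such a fully visible “fear algorithm”; every name and threshold in it is invented for illustration:

     ```python
     # Purely hypothetical: a transparent "fear" routine whose every line
     # can be read.  The commenter's question stands: does seeing the
     # mechanism settle whether anything is actually felt?

     THREAT_THRESHOLD = 0.7  # invented tuning constant

     def threat_score(distance_m: float, speed_mps: float) -> float:
         """Crude score: nearer, faster objects rate as more threatening."""
         return min(1.0, speed_mps / (distance_m + 1.0))

     def fear_impulse(distance_m: float, speed_mps: float) -> str:
         """Fire the programmed avoidance 'impulse' past the threshold."""
         if threat_score(distance_m, speed_mps) > THREAT_THRESHOLD:
             return "retreat"
         return "continue"

     print(fear_impulse(0.5, 2.0))   # -> retreat
     print(fear_impulse(10.0, 1.0))  # -> continue
     ```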

    1. I think you’re on to something here. Sebastian Seung is pursuing these “algorithms” and calls the resulting map a person’s “Connectome.”

      To answer your question would be something like answering the question, “Once you know how a magic trick is done, is it still special/magic/exciting?” I’m not sure it’s a question that any of us can answer in regard to programming yet, but I would strongly caution programmers to avoid fear and pain for now. Trying to plug human emotions into AI seems at first glance to be a reckless proposition with little to gain. I think we need to better understand our own Connectomes before we start making more ‘robust’ AI Connectomes.

      1. Excellent points. The connectome, the map of synaptic connections in a brain, may very well be our “soul”.

        On AIs, fortunately, I don’t see much of a market demand for robots or other AIs concerned about their own survival, at least any more than would be necessary for carrying out their primary function.

        One area where I could see it happening, though (other than research projects), is video games or other simulations. At that point, would a character programmed to survive achieve personhood?

      2. The question you asked is directly connected to what is known as the AI effect, described here. The other idea, programming emotions into robots, is still sci-fi fantasy at this point. While humans have a conceptual idea of what emotions are, we still don’t know what makes them happen or why. That is work that still needs to be done in the field of neuroscience.
        As to the example you talked about in your article about AI creating beer, it’s also a bit of a fantasy. AI does not really have the capability at this point to create original recipes. This is mostly a matter of the programming itself, but also of the fact that the chemistry of what tastes good is still poorly understood.
        Simply put, most of the worries you expressed in the article are quite a way off to begin with. This also assumes that the future looks more like an over-the-top sci-fi action flick than like the future described in Her. These are good concerns to have, but not the most pertinent to what the field of AI needs right now. That said, questioning the potential adverse effects of AI is not discouraged in any way; it is exactly what the field needs.

      3. The AI effect is an interesting point. It seems to draw out our concerns about the Turing test and expose our fear that we might simply be programmed organic computers (i.e. determinism).

        To your other points about sci-fi fantasy, I agree that the situations I’ve described are a ways off. I don’t think, however, that they describe an “over-the-top sci-fi action flick,” as you assert. I didn’t mean to imply that brewing beer would become a completely independent activity. I still envisioned a ‘brewer’ (i.e. someone sitting in front of a terminal) being part of the process. If you see “Her” as a viable future for operating systems, I would argue that such an operating system could be applied to brewing beer. It would look different from the brief exploration in my AI beer column. A ‘brewer’ might ask the system to research current taste trends and make a prediction for a viable new flavor profile. The system could then help assemble a new recipe. The same goes for the marketing point. What types of labels are successful? What colors are preferred by a target audience? This is mostly an interactive research model. The rest was simple mechanical automation and didn’t really rely on AI. But yes, still a ways out.

        I also agree with your statement that we must constantly question the unknown effects of any new field of research. I’d love to hear what you view as the pertinent concerns for AI right now, if you have a moment. One that I think needs to be considered soon is our understanding of ‘privacy’: how it’s currently changing (i.e. Facebook, Twitter, etc.) and how it will change even faster with Google Glass. I think privacy will be a hot-button issue with new advancements in AI technology in the coming years, or even months.

      4. Funny that you mention how privacy could be a big concern with AI, as I’ve written on that very topic here and here. As to what I believe is pertinent to AI, it’s more closely related to how people discuss AI in general. My post here goes into more detail about what I mean. To give a quick overview, I feel that people tend to ask very general questions about AI, which sound fine and dandy. However, most of those questions don’t even apply to the field of AI, given that the field is still at a fairly early stage. Asking questions is no problem, but most of these questions are simply irrelevant for now.

  2. I have the same doubts. If laws are enforced on robots, what would the punishment for their mistakes be? Simply uploading a new version of the mind? And how would a machine come to believe that following the law is good for its survival?

    1. Law enforcement on AI robots would likely be a matter of reprogramming, or installing ‘patches’ (see the sketch below). I don’t know that you could discipline an AI unit unless it had an emotional component. To discipline someone, you deprive them of some liberty or another as a way of teaching them not to transgress in the future. AI robots at this point don’t have a sense of liberty, and I think we’d be wise to keep it that way. It seems dangerous to program in a free-will component.

      And of course that raises the question of whether “free will” could be programmed at all. The concepts seem incompatible.
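
      A minimal sketch, assuming a simple rule-following agent, of what ‘discipline by patch’ might look like; all rule names here are invented for illustration:

      ```python
      # Hypothetical: "punishing" a rule-following agent by overwriting
      # the offending rule.  The agent never learns a lesson; its
      # behavior table is simply replaced.

      rules = {
          "speed_limit_mps": 15.0,
          "may_enter_private_property": True,  # the transgression
      }

      def apply_patch(rulebook: dict, patch: dict) -> dict:
          """Return an updated rulebook with the patched entries applied."""
          return {**rulebook, **patch}

      rules = apply_patch(rules, {"may_enter_private_property": False})
      print(rules["may_enter_private_property"])  # -> False
      ```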

  3. I think you’d have to treat the robots like humans, if they really think and feel like humans. To do otherwise would be immoral. But this raises many other ethical points which have no doubt been discussed for centuries. For instance, is it ethical to create an AI without seeking its prior approval for creation (a logical impossibility)? You could similarly ask, is it ethical to give birth to a child when you know that the child will experience pain and suffering in this world?

    Likewise, is it ethical to create an AI and then require it to follow our laws when it was never given an opportunity to agree to those laws in advance? For that matter, is it ethical to impose laws on people who have not consented to obey them?

    I’m sure philosophers have discussed such questions, but the existence of an AI would make them much more immediate.
