Programming ethics into artificial intelligence

First off, why does artificial intelligence belong in a blog about environmental philosophy?

Because we live in a technological environment.  Stop and consider what your environment consists of.  If you’re reading this blog, it consists of very recent and very advanced technologies.

Even the Ohlone Indians who inhabited Silicon Valley before the arrival of Europeans were living in a technological environment, though much different from ours.

Changes in technology = changes in environment.  Not just changes to the environment, but changes in the environment.  So when machines become smarter than humans, when “superintelligence” arrives, our environment will be radically altered.  Superintelligence will exist “out there,” an abstract presence that is felt, like the internet, or the climate.

Now here’s the interesting thing.  We would need to program the superintelligent machine to behave ethically, but how can we be sure that our knowledge of ethics is sufficient?

It’s somewhat analogous to the challenge faced by the IBM engineers who programmed Deep Blue to play chess.  They wanted Deep Blue to play better than any human player, so they couldn’t simply program it with the best move for every possible situation.  For one thing, there are far too many possible configurations of chess pieces for that to be feasible.  But more importantly, if they had tried, they would have limited Deep Blue’s chess-playing ability to their own, and it would have performed no better than a human player.
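For the technically curious, the underlying idea can be sketched in a few lines of Python.  Deep Blue’s real system, with its specialized hardware and hand-tuned evaluation function, was vastly more elaborate; this is only a toy stand-in, and the game choice and names are mine, not IBM’s.  Instead of a table of best moves, the program gets general principles, a way of valuing final positions plus a search procedure, and derives its own moves.  Chess is far too large for a short example, so the sketch uses Nim: players alternately take one to three stones from a pile, and whoever takes the last stone wins.

```python
# A toy stand-in for Deep Blue's approach (not IBM's actual code):
# instead of a table of best moves, the program gets general principles --
# a valuation of finished games plus a search procedure -- and derives
# its own moves.

def evaluate(maximizing):
    """Value of a finished game.  If it is the maximizer's turn and no
    stones remain, the minimizer just took the last stone and won."""
    return -1 if maximizing else 1

def minimax(stones, maximizing):
    """Search every line of play and return the position's value from
    the maximizer's point of view."""
    if stones == 0:
        return evaluate(maximizing)
    values = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(values) if maximizing else min(values)

def best_move(stones):
    """Choose the move the search rates best -- no move table required."""
    moves = [t for t in (1, 2, 3) if t <= stones]
    return max(moves, key=lambda t: minimax(stones - t, maximizing=False))

print(best_move(10))  # 2: leaving a multiple of 4 stones wins at Nim
```

Notice that nobody told the program which move is best from a pile of ten stones.  It was given only the rules and a way of valuing outcomes, and it discovered the winning move itself, which is exactly why such a program can outplay its programmers.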

Now consider the challenge of programming a superintelligent machine to behave ethically.  You cannot program every possible “move,” because the machine will undoubtedly encounter novel situations.  You have to program general theories or principles.  As Nick Bostrom says, you have to program a machine “that thinks like a human engineer concerned about ethics, not just a simple product of ethical engineering.”
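To make the contrast concrete, here is an entirely hypothetical illustration in code, with every name and number invented for the example.  A lookup table of pre-decided “moves” fails outright on a situation nobody anticipated, while even a crude general principle still yields an answer.

```python
# An entirely hypothetical illustration of "cases" versus "principles".

# Approach 1: enumerate every anticipated situation in advance.
CASE_TABLE = {
    "trolley_switch": "divert",
    "found_wallet": "return_it",
}

def act_by_table(situation):
    # Fails with a KeyError on any situation nobody anticipated.
    return CASE_TABLE[situation]

# Approach 2: encode a general principle that applies to any situation.
def act_by_principle(outcomes):
    """Given a map from available actions to the number of people each
    one helps, pick the action that helps the most (a deliberately
    crude utilitarian principle, for illustration only)."""
    return max(outcomes, key=outcomes.get)

# A situation absent from the table still gets an answer from the principle.
novel_outcomes = {"warn_crowd": 50, "save_one": 1}
print(act_by_principle(novel_outcomes))  # warn_crowd
# act_by_table("unforeseen_disaster") would raise KeyError
```

Of course, the hard part is not writing such a function but choosing the principle it encodes, which is precisely the worry the next paragraphs take up.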

But if we program the machine to think like an ethical human engineer, shouldn’t we worry that our own ethics might still stand in need of improvement?  For instance, what if the ancient Greeks had found themselves programming a superintelligent machine in 300 B.C.?  They had a very different view of slavery than we do today: to them, owning people was ethical.  Or what about nineteenth-century Americans?

Nick Bostrom writes:

Considering the ethical history of human civilizations over centuries of time, we can see that it might prove a very great tragedy to create a mind that was stable in ethical dimensions along which human civilizations seem to exhibit directional change.

This presents us with perhaps the ultimate challenge of machine ethics: How do you build an AI which, when it executes, becomes more ethical than you?

This challenge seems incredibly daunting, for it requires “that we begin to comprehend the structure of ethical questions in the way that we have already comprehended the structure of chess.”

[Image: chessboard diagram with algebraic notation]
