Microsoft and the birth of responsible AI

Google’s (now-disbanded) Artificial Intelligence Ethics board and Microsoft’s Responsible AI initiative are critical not only to the utility and safety of AIs but also to the potential survival of the human race. Getting this right is likely one of the most important things this industry will ever accomplish.


[Disclosure: Microsoft is a client of the author.]

Pretty much the same week Google disbanded its Artificial Intelligence Ethics Board – once again showcasing an inability to lead its own company (this is becoming a really bad problem for Google) – I was reminded of Microsoft’s Responsible AI program.

[As an aside, both Microsoft and Google have had employees disagree with what management decides. But Microsoft’s management appears to be able to retain control of the company, while Google’s clearly does not. This is a huge command-and-control issue for Google.]

The human race has a great many very real concerns about how AIs will be created and implemented. Efforts such as Google’s AI Ethics board and Microsoft’s Responsible AI initiative are critical if we want a world as far from the one depicted in the Terminator movies as possible.

Asimov’s 3 Laws of Robotics

All of the efforts I’m aware of seem to be finding ways to build from or emulate Isaac Asimov’s 3 Laws of Robotics. Decades before we had workable AIs, and decades before the first Terminator movie, Asimov wrote down three simple laws that robots and AIs should adhere to:

  1. Robots must never harm human beings or, through inaction, allow a human being to come to harm.
  2. Robots must follow instructions from humans without violating rule 1.
  3. Robots must protect themselves without violating the other rules.

It sounds pretty simple, but the 3 Laws focus primarily on keeping robots from doing us harm (though even in Asimov’s novels there were ways to circumvent them). And given that AIs will increasingly control not only the systems around us but eventually even our perceptions of reality (AI mixed reality), we need something far more nuanced than just “don’t kill people or allow them to be killed.”
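To make the precedence baked into those three laws concrete, here is a minimal sketch – in Python, with entirely hypothetical field and function names, and not tied to any real Microsoft or Google system – that treats the laws as an ordered list of constraints: a proposed action is checked against each law in priority order, and a lower law only comes into play if no higher law is violated.

```python
# Hypothetical sketch of Asimov's 3 Laws as prioritized constraints.
# The Action fields and the permitted() helper are illustrative only.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool              # the action would directly harm a human
    allows_harm_by_inaction: bool  # refusing or omitting it lets harm occur
    ordered_by_human: bool         # a human instructed the robot to do it
    endangers_robot: bool          # carrying it out risks the robot itself

def permitted(action: Action) -> bool:
    # Law 1: never harm a human, or allow harm through inaction.
    if action.harms_human or action.allows_harm_by_inaction:
        return False
    # Law 2: obey human orders, unless that conflicts with Law 1.
    if action.ordered_by_human:
        return True
    # Law 3: protect itself, unless that conflicts with Laws 1 or 2.
    return not action.endangers_robot

# An ordered, harmless action is allowed even if it is risky for the robot.
print(permitted(Action(False, False, True, True)))   # True
# An order that would harm a human is refused.
print(permitted(Action(True, False, True, False)))   # False
```

The point of the sketch is what it leaves out: every check is about blocking harm, and nothing in it pushes the system to actively do good – which is exactly the gap the Responsible AI framing below tries to address.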

Microsoft’s Responsible AI

Microsoft’s concept focuses on a set of rules that ensures not only that we trust these coming AIs but that these AIs are actually trustworthy. It goes beyond limiting damage and focuses on intent. That too has its risks, of course – as the latest season of Star Trek: Discovery is gleefully pointing out (more on that later). But it focuses the effort on creating AIs that effectively nurture the humans they’re responsible for, replacing Asimov’s minimal concept of “do no harm” with something closer to a parent. In other words, focusing the AI on looking for ways to actively do good.

Microsoft has folded ethics, privacy, security, safety, inclusion, transparency and accountability into its own six principles for responsible AI development. (Transparency and accountability count as one, in case you actually counted and are wondering.)

Of these six, ethics is likely the most problematic, because we really don’t have a great way of measuring ethics in humans right now, let alone in robots – and the term is anything but absolute. But the concept is important because it deals with good and bad and with how they govern behavior (our industry has traditionally had massive ethics issues).

Good and bad are relative, which will make this concept incredibly difficult to program consistently, but this is likely where transparency and accountability come in: if the concept is adequately implemented, the AI and the humans around it will likely develop, over time, an ethics model that works.

Another important, and also problematic, concept is transparency. This is fluid as well: for instance, should an AI tell you something you can’t do anything about if the knowledge would do you harm? What is ethical and safe may be in direct opposition to transparency and accountability. (Isn’t that what drove HAL 9000 in 2001: A Space Odyssey to become homicidal?)

We need to work this through before we’re up to our armpits in AI, to make sure AI doesn’t decide that we are the problem to be solved. That is core to the Star Trek: Discovery plot this season: an AI trained to protect comes to the conclusion that the only way to do that is to kill off all life in the universe (and has come up with a pretty decent plan to do exactly that).

Us…and Them?

It is critically important that the industry has efforts like the one Google just killed and Microsoft is aggressively supporting. Particularly as we think about turning our homes, cars, planes, agriculture, drones, companies and even cities and nations over to ever more capable and smart AIs, we have to ensure they are working to help us and not, even by accident, harm us.

Using simulation, redundancy and efforts like the Lifeboat Foundation’s AI Shield may be the only thing that keeps us from a future where it is us against them.

Because if it becomes us against them, I highly doubt that we win.
