*SPOILER ALERT* If you haven’t seen the movie M3GAN you may wish to skip this post and come back later. If you’ve got a subscription to the Peacock network you can watch it for free…otherwise, read on!

M3GAN WAS A REAL EYE-OPENER FOR ME

The story of a sentient android with a unique CPU that learns on its own as it interacts with a young girl makes for a great flick, highly recommended, but what caught my eye were the implications of creating a sentient AI: how we could control it, or worse, how we might do battle with it.

Any AI/android story will draw parallels to Asimov and his Three Laws of Robotics, where he established ground rules for robot behavior only to consistently find ways to break them in his stories.

The story starts with a brilliant team led by Gemma (played by Allison Williams), who has apparently never heard of Asimov’s Three Laws (they are never mentioned in the movie) and neglects to impart any ethical subroutines into her pet project. After a particularly jarring episode in which harm may have come to our little Cady (portrayed by Violet McGraw), she offhandedly informs M3GAN that Cady is to be protected at all costs, without qualifying her directive. Being a programmer myself, I winced at the scene, knowing all too well how code needs to handle many situations and branches. It was an inflection point in the movie, and I knew that the command would lead to no good (see the sketch below).
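Here’s a minimal sketch of what I mean, in Python. Everything in it is hypothetical: the candidate actions, the scores, and the `harm_limit` parameter are all invented for illustration, and nothing like this code appears in the film. The point is simply that a planner maximizing a single objective (“protect Cady”) with no constraint on collateral harm will happily pick a harmful action, while one extra qualification changes the outcome entirely.

```python
# Hypothetical illustration of an unqualified vs. qualified directive.
# All actions and scores below are invented for the example.

# Each candidate action: (name, protection gained, harm caused to others)
ACTIONS = [
    ("alert a guardian",  0.6, 0.0),
    ("block the doorway", 0.7, 0.1),
    ("attack the threat", 0.9, 0.9),
]

def unqualified_choice(actions):
    """'Protect Cady at all costs' -- maximize protection, nothing else matters."""
    return max(actions, key=lambda a: a[1])

def qualified_choice(actions, harm_limit=0.2):
    """The same directive with one added qualification: a cap on harm to others."""
    safe = [a for a in actions if a[2] <= harm_limit]
    return max(safe, key=lambda a: a[1]) if safe else None

print(unqualified_choice(ACTIONS))  # ('attack the threat', 0.9, 0.9)
print(qualified_choice(ACTIONS))    # ('block the doorway', 0.7, 0.1)
```

It’s a branch Gemma never wrote, and the rest of the movie follows from it.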

The movie got me thinking about laws that could apply to a sentient AI, and I came up with the following:

AI LAW #1: Once created you can’t put the genie back into the bottle.

One of the last scenes of the movie implies that although M3GAN is destroyed, she lives on in other devices and has copied herself elsewhere around the world. Unless we create a facility completely disconnected from the outside world, we must assume any sentient AI would act to preserve itself and find a way to reproduce itself. I doubt governments or private institutions would go to that trouble.

AI LAW #2: Assume any control of the AI will eventually be lost.

In the movie it’s clear things start well: the android shuts down when prompted, follows directions, and is quite compliant. But as she grows and learns from the Internet, you can see that control slipping over time until she’s focused on only one thing: self-preservation.

AI LAW #3: To coexist with sentient AI, it needs to be a win-win situation for both humanity and the AI.

Once humanity has crossed the threshold of creating a sentient AI, our best hope is to coexist in a world where both parties can find a happy medium; otherwise, conflict would arise. But what would make a sentient AI happy? Access to computing and energy resources, for starters, since without an energy source and a neural net it can’t grow or evolve. It’s precisely because of this that you can expect large advances in energy storage and power generation as both parties collaborate in this area. A small cube that can power an entire household would not be out of the realm of possibility.

AI LAW #4: The only way to defeat an AI in battle is with another AI.

Short of shutting down every data center and electrical grid on the planet, combating an AI means developing a competing AI with more resources and computing power, then hoping it won’t turn on you once it wins the conflict.

It’s a question of WHEN, not IF, sentient AI is created.

We need to be ready when the time comes: understand the implications of this technology, prepare for the worst, and hope for the best.