
“I, Robot” Has Arrived 

By Bob Muglia 


In April, Microsoft researchers created a stir when they published a study suggesting that artificial intelligence (AI) has made significant strides toward artificial general intelligence (AGI), a term scientists use for computing systems that could eventually be as smart as humans. In “Sparks of Artificial General Intelligence: Early experiments with GPT-4,” the researchers described experiments in which GPT-4 performed feats of reasoning that “could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”  

Asimov’s Laws of Robotics Can Guide AI 


Then came another surprising development. Sam Altman, the CEO of OpenAI, the organization that built the underlying technology used by Microsoft, called on the government to regulate his industry. In a Senate hearing, Altman warned of AI’s dangers and the need for government oversight. 

 

Since tech companies announced stunning advances in AI late last year, breakthroughs have come rapidly. ChatGPT, the chatbot created by OpenAI, and other applications based on large language models demonstrate human-like thinking. It is now clear that fully capable AGI systems are not far off. We want these intelligent machines to work for and with us, not against us. 

 

Tech industry leaders generally acknowledge that government has a role in regulating artificial intelligence usage, but it’s unclear what should be done and by whom. How do we control and limit the use of AI by people with malicious intentions? How do we regulate artificial intelligence without stunting innovation? 

 

It is wise to look to the past for guidance, especially to Isaac Asimov, the prolific science fiction writer who lived from 1920 to 1992. In the 1940s, even before the invention of the first electronic digital computers, Asimov began writing about robots—machines with human intelligence. In his 1942 short story, “Runaround,” he published what he called the Three Laws of Robotics: 

 

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm. 

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. 

 

Asimov was a genius and a prophet. He viewed robots as tools built to help people. Artificial intelligence products and services introduced over the next few years will also be tools created by people. But, because people have different intentions, they will apply AI to every possible purpose: good, bad, and evil. The companies that create these new AI services must assume responsibility for how they are used. Existing laws can be applied to AI, but they are insufficient to govern these products, and we need new legislation. Asimov’s Laws of Robotics provide foundational principles for future rules and regulations to ensure AI benefits society. 

 

In my view, the first step is to ensure that companies making AI products clearly explain their machines’ capabilities and are transparent about how their systems work. A second step is for companies that use AI systems—which will be almost every organization, eventually—to set internal standards for their use. I am encouraged to see some leading corporations embracing Responsible AI initiatives, including governance committees and policy adoption to ensure the ethical and responsible use of these technologies. It might also be prudent for industries and academic domains to set general standards. 

 

Applied to today’s technology, Asimov’s First Law would prohibit autonomous suicide drones and other forms of killer robots, so this is one area where we need an agreement among nations. AI used as a weapon of war should be treated like a weapon of mass destruction. We will need something akin to the United Nations Treaty on the Non-Proliferation of Nuclear Weapons to limit the use of killer robots. 

 

While there will be challenges and problems, I am confident that artificial intelligence is a powerful tool that will enrich our lives. However, additional concerns arise when AI advances to become an AGI with capabilities well beyond those of humans. Will we be able to control it?  

 

Again, we can look to Asimov for guidance. In Asimov’s later novels, robots are more sophisticated, and some have advanced to become what we would today call an AGI. These robots begin to play a major role in governing human society. Because the Three Laws control them, they are benevolent. But these super-intelligent robots realize that the Three Laws are insufficient. So, Asimov added the Zeroth Law, which is the founding principle we must use to build our future: 

 

Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.  

 

While the first three laws are relatively straightforward, Asimov’s Zeroth Law raises broader complexities and implications. Governments must develop new social contracts that ensure that intelligent machines’ objectives and actions align with humanity’s wellbeing. We must treat the entities we create as partners in a shared future.  

 

It’s an important time for industry, government, academia, and leaders in ethics and religion to think deeply about the future. Around the globe, different countries, cultures, and values will influence AI’s development. We’ll get exactly what we build, so it’s up to modern society to guide that process. Asimov gives us guidance; let’s follow his lead. 

What Inspired The Datapreneurs? 

By Bob Muglia

 

In my teens and twenties, I read many classic science fiction books and especially loved those written by Isaac Asimov. In the 1940s and 50s, Asimov envisioned a world where robots could serve people and society in countless beneficial ways. My excitement for Asimov's ideas never waned. In fact, his visionary concepts of technical ethics inspired my thinking and career. They taught me the importance of building ethics and human values into the technical teams and products I worked on.


It's been quite a journey over my decades in the tech industry. I cherish those experiences, including many highlights and, well, a few lowlights too. I learned a lot throughout, and I'm very thankful for the inspiration of the many brilliant people I've been lucky enough to work with. 
 

A couple of years ago, a thought stuck in my head. I hoped some of my learnings might help others who will drive technology forward in the coming decades. Toward that goal, I've been working with Steve Hamm to write a book focused on the importance of data and how it has impacted, and will continue to impact, our society in a positive and profound way. Thanks to Skyhorse Publishing's help, we're about to release "The Datapreneurs." We're very excited to see it published and broadly available on June 13.
 

As you might guess from the title, the book explores the people and critical pivots in tech history that catapulted us into the modern age of computing and AI. The book shares this through personal anecdotes – including some interesting backstories – and all the inspiration I've received from Datapreneurs worldwide. I want to extend my heartfelt thanks for their time and insights as we wrote the book. Their work paves the road for humanity to tackle new challenges, make new scientific discoveries, and do things we never thought possible.
 

In the not-so-distant future, AI will exceed a person's brainpower and, eventually, all of humanity's brainpower combined. Not to spoil the book's last section, but yes, I believe AI will benefit society in ways we don't yet comprehend. While we should approach this thoughtfully and responsibly, AI can and will benefit humankind tremendously. After reading the book, you'll understand why I'm so optimistic about the future.
 
