Daniels faculty member Stephen Haag analyzes the AI Bill of Rights and its implications

Stephen Haag

Stephen Haag wants you to forget what the movies taught you about artificial intelligence—the overreaching, all-knowing entities seen in films like “The Matrix” and “Iron Man.”

“The movies led us to believe that AI right now can do more than what it was designed to do,” explained Haag, professor of the practice in the Department of Business Information and Analytics at the Daniels College of Business. “That has led to a lot of incorrect beliefs and biases.”

While AI isn’t running our lives the way it does in the movies, the technology has crept into many corners of daily life. Amazon recommends products you might buy; Netflix suggests movies and shows you might want to watch; some programs help physicians diagnose diseases and prescribe medications.

“AI is emerging and exploding,” Haag said.

Perhaps in recognition of how influential AI has become, the U.S. government last fall released the Blueprint for an AI Bill of Rights, a set of guidelines that aims to spur companies to make and deploy AI more responsibly. While the AI Bill of Rights doesn’t mandate anything, it lays out “common sense protections to which everyone in America should be entitled in the design, development and deployment of AI and other automated technologies,” according to the White House.

So, what do the guidelines mean for AI? How is the technology influencing our lives? Why do we need to think about it ethically? And what role does AI play in higher education? Haag answered these questions and more in an interview with the Daniels Newsroom.

Q: Tell me a little bit about the AI Bill of Rights. How comprehensive is it? And what’s the main takeaway?

Haag: It came out in the fall of 2022 from [President Joe] Biden’s office. It’s not a law. It’s a set of guidelines to ensure consumers are protected. It doesn’t have a lot of teeth to it, but at least it’s opening a conversation about it.

The biggest takeaway is that we have to recognize that AI is coming, and not only is it coming, it’s already here. But it’s not here yet in full force. We’re going to have to figure out this technology, perhaps more so than any other technology.

Q: Do you think eventually there will be more hard rules and legislation around AI?

Haag: I do. I think there will be laws and legislation around what AI can and cannot do. I think we’ll reach a point where we have to say, ‘In a court of law, here’s how AI has to be treated, and here’s how a person has to be treated.’ Think in terms of autonomous vehicles—if an AI vehicle has an accident, am I liable? Is the company that made the car liable? There are a lot of things we need to figure out.

Q: How can we best figure it out? How do we address what AI means for our lives?

Haag: I think we have to address it with very diverse groups of people. I’m not talking about just ethnicity and race and gender and all of those, even though that’s important too. But you have to have people who truly understand what the technology can and cannot do, and others who don’t understand it, and others who are adamantly against it. We need to have dialogue and debates about what’s right and what’s wrong.

Q: What’s the risk if we don’t talk about how to use the technology responsibly and ethically?

Haag: The risks are great. This is probably the first time in history when we’ve actually said, “We have to think about this before going down this route.” We did that a little bit in regard to human cloning and some of that stuff. But when fossil fuels came about, for instance, we didn’t think about it in advance; we just said, “Let’s go down this road.”

We often look at just the money side. But there are three things we must think about: people, planet and profits. We have to meet the needs of all three areas. We have to meet the needs of people, we have to conserve the planet and we have to make a profit. If we only think about money, we aren’t going to succeed in the long run.

Q: What are your feelings on AI in higher education? There has been a lot of talk recently, for instance, about ChatGPT, the new AI chatbot that generates human-like text in response to prompts. Some schools are banning it.

Haag: Many have banned it, while other institutions say, “It’s just a part of life, and we’re going to teach people how to use it.” If you use it, you have to source it. This hesitancy has happened before.

We went from slide rules to basic calculators to Excel spreadsheets—and at the time, there were academic circles that didn’t want to allow the newer technology, arguing that people wouldn’t be able to do logarithmic scales by hand. I don’t know any 18- or 19- or 20-year-old who knows what a logarithmic scale is today—but they can apply one if they understand the concepts behind it and understand how the tools work. I see this as another evolutionary step in education; I fully support the use of it.