Daniels professor Lily Morse found that implementing voice assistants and voice-to-text technology comes with three critical risks

With the rapid expansion and adoption of artificial intelligence, it feels like the business world often operates under an “ask for forgiveness and not for permission” premise. After all, innovators have to push boundaries to create game-changing solutions. But is that the right approach?

Lily Morse, an assistant professor of management at the Daniels College of Business, is fascinated by this question and has dedicated her research to exploring how ethics and behaviors interplay in the new digital era. Her most recent paper puts speech technology in the crosshairs, looking at the dangers it poses to workplace diversity.

“It’s not just an extension of existing technology. It’s a completely new way of interacting,” Morse said. “I’ve had a lot of exposure to these conversations about how to develop and innovate on technology. But to me, the question is, what are the ethical implications of that? What are the ethical challenges?”

In this paper, titled “Dangers of speech technology for workplace diversity,” Morse and her co-researchers dove deeper into the impacts of speech technology, which she calls “one of the most fascinating applications out there that people have the lowest level of awareness of.”

Siri, Alexa and other voice assistants have become commonplace, making anthropomorphized speech technology a constant in our personal and business lives. The market for this technology is booming, with projections showing speech technology becoming a $56 billion industry by 2030.

“Speech technology offers many applications to enhance employee productivity and efficiency. Yet new dangers arise for marginalized groups, potentially jeopardizing organizational efforts to promote workplace diversity,” the group wrote in its paper. “Our analysis delves into three critical risks of speech technology and offers guidance for mitigating these risks responsibly.”

Three risks of speech technology

By defining three risks, Morse and her co-researchers hope to inform business leaders on what to look out for and how best to move forward in a fast-moving technology environment. They also want to ensure that marginalized groups aren’t further marginalized.

Speech technology introduces hidden performance barriers

This technology has the potential to increase existing disparities for marginalized employees, the research team said, particularly when it comes to understanding and interpreting speech from marginalized populations.

Research found that speech technology was more error-prone for adults over the age of 65, owing to age-related declines in oral muscle and tongue strength. Further, speech technology was less likely to pick up the dialectal patterns of African American speakers.

“It can unfairly make an employee look like an underperformer or feel mistreated in the workplace,” Morse said. “So, our key goal was understanding what those performance barriers are.”
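One practical way for organizations to surface these hidden performance barriers is to audit a speech-to-text system's word error rate (WER) separately for each demographic group before rollout. The sketch below is illustrative and not from Morse's paper; the group labels and transcripts are hypothetical placeholders for an organization's own evaluation data.

```python
# Illustrative sketch: auditing a speech-to-text system for group-level
# error disparities using word error rate (WER). The data format and
# group labels are hypothetical assumptions, not from the paper.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def wer_by_group(samples):
    """samples: list of (group, reference_transcript, system_output)."""
    scores = {}
    for group, ref, hyp in samples:
        scores.setdefault(group, []).append(word_error_rate(ref, hyp))
    # Average WER per group; a large gap between groups signals a
    # potential hidden performance barrier worth investigating.
    return {g: sum(v) / len(v) for g, v in scores.items()}
```

Comparing per-group averages this way makes disparities visible before they show up as unfair performance evaluations of individual employees.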

Biometric data collection and unauthorized use

Speech technology captures large amounts of biometric data during its interactions and, if organizations aren’t cautious, could create challenging ethical and legal situations. This data is often compiled to create a “voiceprint” for users, identifying physical, health and behavioral characteristics without the user’s knowledge.

In some cases, the technology could reveal protected health information about users, including psychiatric and neurological disorders, as well as underlying medical conditions.

Inclusivity gaps in anthropomorphic and interactive design

Lastly, speech technology is often designed with majority groups in mind, ignoring diverse characteristics of members of marginalized groups. Voice assistants like Alexa and IBM Watson possess English names and sound like American-English speakers. Research found relatively few anthropomorphic features that catered to marginalized groups, with the exception of female-sounding voice assistants.

As a result, this technology can further alienate marginalized populations, creating social exclusion.

How can business leaders learn from this?

Technology is moving at a breakneck pace, and Morse recognizes that innovation can’t be stopped altogether. Instead, she encourages business leaders to be more thoughtful in how they roll out new technology to their companies.

“One thing that I would suggest for everyone, and especially business leaders, is take some time to learn a little bit more about the technology and what can be accessed,” Morse said. “Awareness is really the first step because without it, you don’t know how to move forward with managing technology. Ask yourself, ‘What is the technology capable of and how does that align with what my goals are?’”

She encourages leaders to consult with marginalized groups in the implementation of new technologies, while also assessing the amount of risk they’re willing to take on.

“We are still in the early days of using the technology. So, while I am focused on mitigating risks and improving fairness, I do still feel optimistic overall,” she said. “I think that as long as we, as users, managers and companies, remain vigilant, we can use these tools effectively without compromising our moral values.”


Learn more about the Master of Science in Management

In this highly customizable one-year program, you’ll build a strong foundation of business skills and leadership courses, while developing specialized skills in your field of choice.