
The AI Revolution: Hype, Reality, and What Comes Next

IPR experts weigh in on AI’s impact on work, decision making, and the challenges of regulation




In the last few years, artificial intelligence (AI) has moved from a futuristic concept to a powerful technology reshaping our everyday lives. AI tools are driving cars, detecting credit card fraud, scanning X-rays for fractures, composing music—even “helping” kids with their homework. 

As AI tools grow more powerful and mainstream, urgent questions abound about their impact—on jobs, creativity, and the way we understand ourselves. The hype can lead many down AI doomsday rabbit holes, according to University College Cork computer scientist Barry O’Sullivan, an expert in AI and ethics.  

“Frankly, I wish the world would calm down a little bit when it comes to AI,” O’Sullivan said at a recent IPR talk. “It's not going to kill us all. It's not going to take all of our jobs.” 

To better understand AI and the changes coming with it, IPR collected insights from faculty experts about how they are using and studying AI, what they are learning, and what the future might hold. They stress that AI’s future will be shaped not just by technology itself, but by how we choose to use and regulate it.

What is AI? 

“‘Artificial intelligence’ is a pretty weaselly term,” IPR computational linguist Rob Voigt said. “It can mean lots of different things to lots of different people.” 

Many machine learning techniques, including common statistical models like regression, can be considered AI. The distinction, Voigt explains, often comes from how a system is used. AI typically refers to a machine performing tasks traditionally done by people. 

The term was coined in a 1955 research proposal by a group of scientists who said they sought “to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”  

Today, Voigt and his colleagues at Northwestern’s Linguistic Mechanisms Lab use machines to identify trends in vast amounts of text and speech, including police interactions and 911 calls.  

With human labor alone, these analyses would take much longer, because researchers would have to read or listen to, and then manually annotate, every conversation.

“There’s too much data for a human being to look at every example that we want someone to look at,” he explained. 

Voigt’s research team has trained algorithms to recognize respect in human interactions—for example, noting when someone is addressed as “sir” rather than “dude”—but detecting more subtle conversational nuances is an ongoing challenge.  

“Anything that you can imagine asking a human to do, we can train an artificial intelligence to do,” he said. “The question—and this is essential—is how well is it possible to make that work?”  
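The surface-level version of this task can be illustrated with a toy sketch. The word lists and scoring function below are hypothetical, invented purely to show the idea of counting respect markers like “sir” versus “dude” in a transcript; the lab’s actual models learn far subtler conversational cues from annotated data.

```python
import re

# Hypothetical marker lists; a real system would learn these cues
# (and many subtler ones) from annotated transcripts.
RESPECTFUL = {"sir", "ma'am", "officer"}
CASUAL = {"dude", "man", "bro"}

def respect_score(utterance: str) -> int:
    """Count respectful minus casual address terms in one utterance."""
    words = re.findall(r"[a-z']+", utterance.lower())
    return (sum(w in RESPECTFUL for w in words)
            - sum(w in CASUAL for w in words))

print(respect_score("Thank you, sir, have a good day."))  # 1
print(respect_score("Chill out, dude."))                  # -1
```

A simple counter like this scales effortlessly to millions of utterances, which is exactly the advantage Voigt describes; the hard research problem is everything this sketch ignores, such as tone, context, and sarcasm.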

Just How Intelligent is AI? 

AI tools like ChatGPT project the persona of a pleasant, eager-to-please assistant, but can they really “think” the way we do? 

IPR computer scientist Jessica Hullman, who studies how AI can support human decision making, says that AI tools carry their creators’ flaws and biases. And the way that the models are adjusted after training to align with human preferences both strengthens and weakens these AI tools.

“The models start to be very good at creating things that humans like,” she said.  

“It makes them more persuasive,” Hullman explained. “They get better at things like apologizing for their lack of information—but they also get better at things like sounding authoritative, because people like when things sound more authoritative.”  

Even with advanced technologies like generative AI, human decisions—such as selecting training data and setting development priorities—play a fundamental role. 

“AI models don’t think like humans,” said management scholar and IPR associate Hatim Rahman. “They’re giving statistically probable results, which often are coherent, but ultimately we are the ones who are going to determine whether its output is intelligent.” 

Will AI Take Our Jobs? 

Rahman believes that AI’s effects on job markets will unfold gradually. “We're not likely to see mass layoffs, or mass increases in productivity either,” he said.  

Rahman argues that much of the discourse around AI reflects the “innovation fallacy”: the belief that major advancements in technology automatically drive sweeping social change.

“Technology's capabilities rarely predict its impact on workers or the labor market. Instead, it's social, cultural, and organizational factors,” Rahman said.  

Although AI developers are rapidly innovating, many organizations are slow to adopt AI tools because of concerns about issues such as data security.

According to Rahman, a key consideration as AI’s workplace effects unfold is “occupational power”—that is, workers’ ability to shape how technology and other changes are implemented in their jobs. Lawyers, protected by bar association rules, have largely kept AI out of courtrooms, while customer service workers face more disruption from automation.

Some professions have integrated AI in ways that enhance efficiency while maintaining or even increasing job quality. For instance, Rahman points out that while the technology to automate commercial flying has existed for decades, we are not yet traveling in self-flying planes. Instead, pilots’ wages have increased, and so has safety.  

AI innovation may also create new opportunities, but it’s not clear who will benefit. 

“Who is going to get those jobs? If it's mostly people who have a four-year college degree, it’s most likely going to recreate some of the inequality that we’ve observed with past technological changes,” Rahman said.

As AI reshuffles roles, programs that teach new skills can help workers make career changes—as long as employers adapt by recognizing these nontraditional forms of training. 

V.S. Subrahmanian, a computer scientist and IPR associate, argues that workers who learn to integrate AI into their jobs will gain a competitive edge.  

“We’re entering an era where those who learn how to leverage AI to do their job far better than they can today will come out ahead,” he said.  

“You’re not fighting AI—you’re fighting others who might be able to leverage AI faster than you,” Subrahmanian continued. “You’ve got to say, ‘New stuff happens all the time. I’ve got to continuously reinvent myself and evolve.’” 

How Is AI Changing Us? 

While AI can supercharge our capabilities, its widespread use could eventually undermine our expertise and creativity. Voigt explains that while the impact of AI-generated content may seem subtle in any single instance, its effects on linguistic diversity over time may be profound. 

Exposure to machine-generated language may “affect how people actually talk on a day-to-day basis. We might be entering into a phase—some people say it might be the case already—where a huge amount of the content you’re exposed to on the Internet is generated by models,” Voigt said.  

Hullman shares these concerns: “A real worry … is this homogenization of knowledge. When you have experts relying heavily on models, at what point does their own domain expertise start to wane?” 

To prevent overreliance on AI and promote sound decision making, developers must carefully consider how AI tools present uncertainty. One approach is to design workflows where AI provides several possible answers rather than one, requiring the human user to engage in more deliberate mental effort when making the final decision.  

Hullman and her colleagues showed experts like judges and doctors how various AI models, all with the same accuracy rates, often produce different answers to the same question.  

“That’s something that helps people get a better sense of what machine learning is actually doing. There’s rarely a single solution,” she said. 
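The phenomenon Hullman describes, equally accurate models that still disagree on individual cases, can be shown with a toy illustration. The data and the two decision rules below are entirely made up for this sketch and are not from her studies:

```python
# Toy data: (income, debt, repaid_loan) -- made-up values for illustration.
cases = [
    (30, 5, 1), (80, 40, 1), (25, 20, 0), (90, 10, 1),
    (40, 35, 0), (60, 5, 1), (20, 2, 0), (70, 50, 0),
]

# Two hypothetical "models": one looks only at income,
# the other at the ratio of debt to income.
model_a = lambda income, debt: income >= 50
model_b = lambda income, debt: debt <= income / 3

def accuracy(model):
    """Fraction of toy cases the rule classifies correctly."""
    return sum(model(i, d) == bool(y) for i, d, y in cases) / len(cases)

print(accuracy(model_a), accuracy(model_b))  # 0.75 0.75 -- identical accuracy
disagree = [(i, d) for i, d, _ in cases if model_a(i, d) != model_b(i, d)]
print(disagree)  # ...yet the rules give different answers on 4 of 8 cases
```

Aggregate accuracy hides the disagreement: both rules score 75%, but they would approve different applicants. Surfacing that disagreement to a judge or doctor, rather than a single confident answer, is the kind of design intervention Hullman describes.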

How Do We Harness AI’s Benefits While Mitigating Its Risks?  

Since the 1980s, Subrahmanian has worked on AI in national security applications like anticipating disinformation and cyberattacks. He agrees with Hullman that AI literacy is essential, and points to Finland’s approach to combating disinformation as a potential model for getting ahead of AI’s risks.  

“Finland saw a threat coming at them [from Russia] and proactively took steps before they were targeted,” he explained. “They enacted government regulations that started teaching children, right from elementary school, to question what they read on media and social platforms.” 

When it comes to regulating AI technology, O’Sullivan says there is wide variation worldwide. He has observed that while policies often use similar language, their underlying values and interpretations may differ significantly.   

For example, the European Union has led the charge on strong regulation, prioritizing “trustworthy AI,” which must be lawful, ethical, and robust. China’s AI policies, while outwardly similar, diverge in practice on key issues like personal privacy.  

Rahman notes the U.S. has historically taken a weak regulatory approach to encourage innovation.  

“You can see that with OpenAI. They clearly weren’t that concerned about violating copyright [when they made ChatGPT],” Rahman said. “They were willing to take their chances or ask for forgiveness later.”  

While Rahman is not optimistic about new federal AI regulations in the short term, he points out that existing laws, such as antidiscrimination regulations, can be adapted to address AI-related issues. Some states, like Illinois, have begun passing AI-specific laws, particularly in hiring and facial recognition.

Another regulatory challenge is keeping up with AI developers’ huge leaps forward.  

“The technology is outpacing our ability to regulate it,” Hullman said, citing the requirement in the EU’s General Data Protection Regulation that AI predictions be explainable as an example of a regulatory misstep.

“We’re not at a point where we can necessarily produce an explanation for a model prediction and be sure it’s actually accurate,” Hullman continued. “The models themselves are so complex that there’s a real risk of putting policies in place that we can’t actually back up with the underlying methods.”  

According to Subrahmanian, a future where we safely harness AI’s power is within reach if policymakers, legal experts, technologists, and other stakeholders work together to craft balanced regulations. 

“There are a lot of people who are talking about potential abuses, without really knowing much about it. But there are relatively few people talking about solutions,” he said. “We have to bring a multidisciplinary team of people to solve it.” 

Despite concerns, experts are optimistic about the advances in knowledge that AI can bring.  

“We’re in an exciting time,” Voigt said. “Imagine that you had your army of 10,000 research assistants who can make whatever complex human judgments you want about some data. What questions would that enable you to ask that you couldn’t otherwise ask?” 

Jessica Hullman is the Ginni Rometty Professor of Computer Science and an IPR fellow. Hatim Rahman is the PepsiCo Chair in International Management, an associate professor of management and organizations and sociology (by courtesy), and an IPR associate. V.S. Subrahmanian is the Walter P. Murphy Professor of Computer Science and an IPR associate. Rob Voigt is assistant professor of linguistics and an IPR fellow. 

Photo credit: iStock 

Published: March 27, 2025.