Experts Say Language Models Like GPT-4 Will Change How We Work
WASHINGTON — GPT-4 has just been released, and its ability to do many things humans can, from producing natural-sounding text to processing images to solving problems, has workers worried.
Some companies and individuals are embracing the potential of new technologies, but others are afraid of losing their jobs or having their content devalued.
Analysts suggest that language models, which power everything from the Siri personal assistant to Google's translator to ChatGPT, were not designed to replace human effort but to assist it, and may in the future be configured to complement workers' competencies and increase productivity.
But, yes, there’s also a chance that AI could reduce the employability of humans altogether.
“In a lot of things we may increasingly turn into rubber stampers with a human veneer,” Anton Korinek, an expert on language models, told Brookings, a D.C.-based research group.
Korinek warned that over the next five to 10 years, the role of humans in many cognitive tasks could diminish, but he predicts these technologies will also “change our workflows to optimally take advantage of these new systems.”
“This field is moving so incredibly fast,” Korinek said, adding he feels large language models are reaching stages of computing that are “quite close to the complexity of the human brain.”
These models, like GPT-4, have made significant advances, but they are still trained through self-supervised learning: as Korinek explains it, they are “fed vast amounts of data and asked to predict the next word.”
“This sounds like a simple, not particularly impressive task, but the impressive thing is, based on this training, really advanced capabilities have emerged over the past five years,” Korinek explained. “In some ways, what we are seeing is a new paradigm that …” builds on the deep learning paradigm of 1995 “but also feels eerily humanlike.”
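The training task Korinek describes can be illustrated with a toy example. The sketch below is a deliberately simplified stand-in for the transformer models he is talking about: it counts word pairs in a tiny corpus and "predicts the next word" from those counts.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which word follows it in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Predict the continuation seen most often in training."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# A tiny "training set"; real models are trained on trillions of words.
corpus = "the model predicts the next word and the next word after that"
model = train_bigram(corpus)
print(predict_next(model, "next"))  # prints "word"
```

Real language models replace the lookup table with billions of learned parameters, but the objective is the same one Korinek names: given the text so far, predict what comes next.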
There are also significant, well-documented limitations: AI training data is largely outdated, and the models' output is not grounded in human values or ethics.
“We don’t know what other things these systems can do as well,” Korinek said. “It’s easy to both overestimate and underestimate these systems at the same time. … But if we throw more computation and power … progress is quite predictable.”
Analysts differ on whether this progress is a path toward humanlike AI or merely exceptionally advanced autocomplete. Either way, policymakers, company executives and individual workers know that widely available artificial intelligence chatbots like ChatGPT will affect labor markets.
“It’s a very exciting time, but I think there’s understandable anxiety,” said David Autor, a professor of economics at the Massachusetts Institute of Technology.
“This is a tool … but the question is, what type of tool is it? Is it a tool that complements our expertise and makes our skills more valuable?” Autor asked. “It’s different from our brains and it has capacities we don’t have — and we have capacities it doesn’t have.”
Among the capabilities of large language models like GPT-4 are ideation, writing, background research, coding, data analysis and math. But these are also foundational models, and their “general purpose is that people are going to build on top [of them] to impact lots of things around them,” explained Susan Athey, Economics of Technology professor at Stanford’s Graduate School of Business.
And as the range of things that are subject to this tool becomes broader, these technologies will continue to affect the types of jobs that will be in high demand, the types of tasks that individuals will have to perform on the job, and the skills needed to be successful in the labor market.
“The range of things that are subject to this tool are much broader than the hard set of things we had to code,” Autor admitted.
Much as automation has reshaped manual work, the adoption of these models for cognitive work will cause waves and put jobs at risk. But it will also introduce new categories of cognitive work, freeing humans to focus on more critical and creative tasks and creating entirely new jobs that may not have been imagined yet.
“How people take the strengths of ChatGPT and put them together with prompt engineering or post-processing to correct errors … that’s part of the frontier,” Athey suggested.
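The combination Athey describes, a carefully worded prompt plus a post-processing check that catches errors, can be sketched in a model-agnostic way. Everything here is hypothetical: the template wording, the `postprocess` rule, and the `call_model` stub, which in practice would wrap whatever chatbot API is in use.

```python
import re

PROMPT_TEMPLATE = (
    "Answer with a single integer and nothing else.\n"
    "Question: {question}\n"
)

def build_prompt(question):
    """Prompt engineering: constrain the model's output format up front."""
    return PROMPT_TEMPLATE.format(question=question)

def postprocess(raw_answer):
    """Post-processing: repair or reject output that violates the format."""
    match = re.search(r"-?\d+", raw_answer)
    if match is None:
        raise ValueError("model output contained no integer: " + raw_answer)
    return int(match.group())

def call_model(prompt):
    """Stand-in for a real chatbot call; a deployed system would query an API."""
    return "The answer is 42."

answer = postprocess(call_model(build_prompt("What is 6 x 7?")))
print(answer)  # prints 42
```

The point of the pattern is that neither half is the model itself: the prompt shapes what the system is asked to do, and the post-processor decides whether the answer is usable.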
She worries a bit less about AI usurping some human work tasks and more about how a movement to AI will have a big effect on labor markets, education, technological progress and ultimately social welfare — and how a scarcity of top AI talent could hinder our ability to get there.
All in all, experts say there is probably more to look forward to than to fear.
“They can help us automate little things here and there that we do throughout our workdays, if we are cognitive workers, and they can deliver significant productivity gains,” Korinek said. “But their capabilities are very different from ours … and the performance that you get out of these systems depends a lot on how good we are at prompt engineering.”
“Our human brains … are still the best technology available to answer these questions,” Autor insisted.
“To say what AI will do misses our agency in the entire operation. We have a shared interest in directing the technology in a way that will be complementary to us. More advancing societal goals and helping us solve the hardest problems … and less replicating human capabilities.”
“This is going to be an ongoing research area, and an ongoing corporate innovation area,” he said. “I’d say we’ve got 20 years of work ahead of us to get this right.”
Kate can be reached at email@example.com