AI: The Land of Lingering Fear and Emergent Hope
COMMENTARY

Since ChatGPT and other large language models emerged, two emotional reactions have dominated: fear and hope. Fear is pervasive; hope is not. If we get AI regulation right, fear will diminish and hope may grow exponentially.
Fear Is Widely Shared
Over 1,100 AI scientists signed an open letter in March calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” They feared that, without such a pause, the current lack of regulation could lead to devastating consequences.
Some chose not to sign, citing the pause’s insufficient duration or unclear enforcement mechanisms, but agreed that action is needed. A few, including the “Godfather of AI,” Geoffrey Hinton, quit enviable posts to speak more freely about AI risks.
Among institutions, businesses fear that competitors will race ahead if they fail to embrace AI; they also fear that adopting AI increases their vulnerability to hacking.
Similarly, governments fearfully prepare for national security threats from malicious AI deployed by foreign adversaries. Ironically, the pricey tools used to defend against such attacks are themselves AI-driven.
Authors and publishers live in existential fear of the unauthorized use of copyrighted content to train large language models. They know claims against LLM developers are tough to validate, even as they seek compensation.
Universities fear losing their ability to hold students accountable for submitted work, even as traditional methods of plagiarism detection, monitoring and proctoring lose relevance.
All nations fear AI progress in competing countries. As superpowers struggle for strategic AI dominance, fears persist that rivalry in the military domain could spin out of control.
At the individual level, fear is palpable. Max Tegmark, who launched the moratorium initiative, described fears about his baby son’s future in a recent Wall Street Journal interview: “I look into his eyes and think, how is he going to make a living? What is the point of him going to school?”
Employees fear job loss as their workplace contributions are gradually devalued relative to impending waves of AI-driven automation and process improvement.
Consumers fear a world where algorithms and devices deftly control their lives under the guise of simplifying them. They fear that, despite AI’s initial promise, eventual outcomes may not work in their favor. Consider ambient intelligence that aggressively eliminates all input and output devices that engage consumers: AI-based intelligent personal assistants, internet-of-things sensors, wearables with sophisticated tracking capabilities and connected devices (including autonomous vehicles) can interact seamlessly to generate outcomes for the consumer without allowing control or feedback.
Hope Shines Bright in Select Fields
AI’s beacon of hope shines brightest in a few disciplines. Health care is one such discipline; I focus on it here given its size.
AI rapidly processes medical images to yield accurate diagnoses of cancer and other ailments (sometimes outperforming physicians), reducing the costs of medical tests and treatments. AI also enables cost-effective delivery of medical services at scale, whereby many patients receive effective care within a short period.
The Food and Drug Administration has approved several medical AI tools. A 2023 study by Sahni et al. estimates that AI can reduce U.S. health care costs by roughly 5%-10% within five years, representing estimated savings of $200 billion to $360 billion annually (in 2019 dollars).
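Those two figures are mutually consistent. As a rough check (the baseline is my own assumption, not a number from the study: U.S. national health expenditure was about $3.8 trillion in 2019):

```latex
0.05 \times \$3.8\ \text{trillion} \approx \$190\ \text{billion}, \qquad
0.10 \times \$3.8\ \text{trillion} \approx \$380\ \text{billion}
```

which roughly brackets the study’s $200 billion to $360 billion range.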
AI shows immense potential in pharmaceutics: drug discovery and development, drug repurposing, productivity improvement and clinical trials. In 2022, Google’s DeepMind released AI-predicted 3D molecular structures of nearly all known proteins.
Previously, it took years to discover the structure of a single molecule, and detailed information was accessible for only about 1 million molecules. Today, over 200 million structures are instantly accessible, dramatically increasing researchers’ understanding of biology.
Proteins play critical roles in the body, and a protein’s molecular structure determines its function. Access to these structures allows researchers to pursue more specific drug targets and to understand how particular proteins work.
How to Reduce Fear, Infuse More Hope
Understanding how AI evolved helps. In a research study, I compare AI and human intelligence. Over the past century, a rich literature on human intelligence has documented reliable and valid ways to assess human IQ.
Consensus viewpoints include: (a) human abilities have remained consistent over decades (i.e., humanity did not acquire new abilities that earlier generations lacked); (b) excluding early childhood and the post-65 aging phase, which show remarkable growth and decline respectively, human abilities remain stable over a lifetime; and (c) individuals differ in intelligence and other abilities.
Remarkably, these findings do not extend to AI abilities. That is, through creative integration across disparate fields and technologies — natural language processing, computer vision, deep learning — machines acquire new abilities.
To illustrate, neuroscientists David Hubel and Torsten Wiesel won the 1981 Nobel Prize in Physiology or Medicine for discovering that neurons in the visual system respond differently to simple and complex visual stimuli.
This inspired Japanese computer scientist Kunihiko Fukushima to develop the neocognitron, a model for visual pattern recognition, setting the stage for the development of artificial neural networks. What links ANNs with human physiology?
ANNs consist of a series of sequentially interconnected layers; each layer has nodes that function like simplified neurons in the human brain. Advances in ANNs led to variants such as feedforward neural networks (FNNs), recurrent neural networks (RNNs), convolutional neural networks (CNNs) and generative adversarial networks (GANs) that serve as building blocks for deep learning and LLMs. The sketch below illustrates the layered structure.
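To make that layered structure concrete, here is a minimal sketch of a feedforward network in Python. The layer sizes, random weights and activation function are illustrative assumptions, not drawn from any particular system:

```python
# Minimal feedforward ANN sketch: layers of simple "neurons" wired in sequence.
# All sizes, weights and the activation below are illustrative assumptions.
import numpy as np

def relu(x):
    # Each node applies a simple nonlinearity, loosely analogous to a neuron
    # firing only when its input is strong enough.
    return np.maximum(0.0, x)

rng = np.random.default_rng(seed=0)

# Sequentially interconnected layers: 4 inputs -> 8 hidden nodes -> 2 outputs.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

def forward(x):
    # Signals flow layer by layer; each layer's output feeds the next.
    hidden = relu(x @ W1 + b1)   # hidden layer of node activations
    return hidden @ W2 + b2      # output layer

print(forward(np.array([1.0, 0.5, -0.3, 2.0])))
```

In a trained network, the weights would be learned from data; here they are random, since the point is only the layer-by-layer flow of signals.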
Thus computers evolved from inanimate objects into AI manifestations with humanlike abilities to see and move (e.g., robots) and to listen and speak (e.g., chatbots). LLMs themselves rely on a further variant, the transformer neural network.
If AI’s building blocks reflect elements of human physiology, do machines think like humans? No. Alan Turing’s famous test is a game that reveals whether machines can successfully pretend to think like humans (or even fool humans into believing they think) when they demonstrably do not.
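For readers curious about the mechanics of that game, here is a minimal sketch of Turing’s imitation game. The canned replies, seat labels and guessing judge are illustrative inventions, not Turing’s own wording:

```python
# Minimal sketch of Turing's "imitation game": a judge chats with two unseen
# respondents and must name which one is the machine. Replies here are canned
# and identical on purpose -- the machine wins by being indistinguishable.
import random

def human_reply(prompt: str) -> str:
    return "Honestly, I'd have to think about that."

def machine_reply(prompt: str) -> str:
    # Mimicry, not thought: the machine merely imitates a plausible answer.
    return "Honestly, I'd have to think about that."

def one_round(judge) -> bool:
    # Randomly seat the respondents so the judge cannot rely on position.
    seats = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        seats = {"A": machine_reply, "B": human_reply}
    transcript = {seat: reply("What is the point of going to school?")
                  for seat, reply in seats.items()}
    return seats[judge(transcript)] is machine_reply  # did the judge catch it?

# With indistinguishable replies, the judge is reduced to guessing (~50%),
# which is exactly how a machine "passes" the test without thinking at all.
caught = sum(one_round(lambda t: random.choice(list(t))) for _ in range(1000))
print(f"Judge identified the machine in {caught} of 1,000 rounds")
```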
Our immediate need: Congress should enact comprehensive AI regulation with teeth. It is hard to justify inaction after recent congressional hearings in which such regulation was openly sought.
As humans, we should review AI achievements, and reflect deeply on how best to move forward.
Dr. Siva K. Balasubramanian serves as the Harold L. Stuart Endowed Chair in Business and associate dean of the Stuart School of Business at Illinois Institute of Technology. He teaches graduate courses on AI and researches AI topics extensively. He can be reached by email and on LinkedIn.