US Workforce Unprepared for AI, Technology Experts Tell Senate

WASHINGTON — President Joe Biden’s executive order Monday setting regulatory standards for artificial intelligence prompted witnesses at a Senate hearing Tuesday to say it is only a first step in a process likely to transform American workplaces.
“Artificial intelligence will not only disrupt lives, it will transform the world,” said Tyrance Billingsley, executive director of the Tulsa-based technology firm Black Tech Street.
Like other technology executives at the Senate Health, Education, Labor and Pensions subcommittee hearing, he warned that the United States is not ready for the benefits and pitfalls of AI. He suggested more federally funded worker training.
Artificial intelligence refers to software and computers programmed to learn from data and to apply that learning to a variety of tasks.
It can search the internet for information on behalf of users of Google, Amazon and other online platforms. It also can recognize human speech, operate self-driving cars, play chess and other games, and generate art or text.
Increasingly, it is being used for workforce development and to do many of the jobs now performed by humans.
Therein lie the opportunities and dangers, according to expert witnesses at the subcommittee on Employment and Workplace Safety hearing.
The opportunities include greater productivity and turning tedious jobs over to machines.
The United States will benefit from AI only if its workforce is well-trained for the technology, said Mary Kate Morley Ryan, managing director of talent and organization for the information technology firm Accenture.
“The reality is that we don’t have the workforce we need to fill the jobs of the future,” she said.
Jobs in telemarketing, financial services, law and medicine already are being transformed by AI, Ryan said, and they are only the beginning.
She advocated for more apprenticeships and internships to prepare the incoming workforce for jobs related to AI technology.
As evolving AI creates new opportunities, it also increases the danger of misuse, such as cyberattacks, copyright violations or attempts to skew elections.
Biden tried to address some of those risks with his executive order. It sets eight guidelines to promote security and privacy and to protect consumers from abuses.
The guidelines require watermarking of AI-generated art or text and establish a new government program to identify flaws in critical software that could be exploited by hackers using AI.
“I am determined to do everything in my power to promote and demand responsible innovation,” Biden said in a statement.
Bradford Newman, an American Bar Association AI specialist, said the federal government needs to go further, with legislation that sets national standards to promote development of the technology while discouraging misconduct.
A problem now is that state and local governments are creating “a patchwork” of differing AI laws, many of them based on a lack of understanding, he said.
“AI cries out for a federal uniform solution,” Newman said.
He cited the example of a New York law he described as “onerous and vague,” saying its regulatory requirements are too burdensome for small AI companies and stifle innovation.