Experts Raise Concerns About Rapid Growth of Artificial Intelligence

by Nolan Mckendry

 

Experts on artificial intelligence raised concerns about the implications of the technology’s rapid growth at a panel discussion in Washington, D.C., on Tuesday.

The American Enterprise Institute hosted a series of panel discussions on the deployment of AI. Panelists discussed safety protocols, workforce development and regulatory initiatives.

Speakers included industry experts and policy directors such as Michael Richards, Director of Policy for the U.S. Chamber of Commerce Technology Engagement Center, and former NVIDIA software engineer Bojan Tunguz, along with four others.

As AI technology rapidly evolves, the need for industry, government and citizens to come together to promote safe but effective deployment is more crucial than ever.

“The core challenge is developing a general purpose technology that is fundamentally different from prior general purpose technologies, like electricity,” said Chris Meserole, Executive Director of the Frontier Model Forum.

In prior technologies, “we knew what it would do if we could build it, we just weren’t sure if we could build it or scale it out,” said Meserole.

AI has the opposite problem, Meserole said. While the engineering and development of AI have become better understood, its capabilities are hard to anticipate. “We’re not entirely sure how to assess its safety.”

Recent developments have made AI technology irresistible for roles such as predictive maintenance, supply chain optimization and management, autonomous production lines and worker safety. Accordingly, there is a growing need for institutional and industrial convergence on standards and regulation.

Across borders, AI regulation has taken different approaches at different speeds.

In the United States, President Biden recently issued an executive order, one of the few federal actions taken to date to address the AI revolution. The comprehensive order addresses issues related to privacy, civil rights, consumer protections, scientific research and workers’ rights.

Last month, the U.S. Congress took its first steps toward identifying specific legislative needs, through a bipartisan quartet of senators led by Majority Leader Chuck Schumer, D-N.Y.

Meserole said he applauded the work done by Schumer.

The European Union has already enacted legislation. In May, the union adopted the AI Act, a comprehensive set of regulations and requirements primarily for developers and deployers of AI systems.

The act prohibits numerous AI uses, such as social scoring and compiling facial recognition databases.

Regulation, and the advocacy for it, tends to address many of the same issues, including cybersecurity, critical infrastructure, privacy, misinformation and election security.

The current status quo surrounding the AI revolution is one of uncertainty paired with a highly optimistic outlook on the potential benefits. Companies have begun gradually adopting AI for efficiency and optimization, especially in the tech sector at companies such as Apple and Meta.

“A lot of what we’re seeing is a sort of crawl-walk-run approach,” said Mark Johnson, cofounder of Michigan Software Labs. “They’re starting with these simple agents and chatbots, which become these recommendation engines which, based on their database and pricing engines, really drive to help make sure they are reaching the full potential of their business and revenue.”

The spoils of AI will not go only to Fortune 500 companies, however. AI has tremendous potential in education and in helping individuals climb the socioeconomic ladder.

Lezlie Sizemore, Associate Vice President for Workforce and Economic Initiatives at Kentucky’s Council on Postsecondary Education, spoke about Kentucky initiatives that use AI to help people recovering from addiction.

The Kentucky Community and Technical College System offers CLIMB-Health, a program aimed at establishing postsecondary pathways for individuals in recovery or reentry who are seeking entry-level employment as peer specialists.

AI has been used to train these individuals by mimicking “clients” who require counseling.

“The individual could say, ‘I want to practice a scenario with a veteran struggling with alcohol addiction,’” Sizemore said. “This particular bot also guides them in their own postsecondary education journey. They could say, ‘I want to be a nurse or a psychologist,’ and this chatbot would let them know what to do, step by step.”

 – – –

Nolan Mckendry is an intern reporter at The Center Square.
Photo “Computer Programmer” by Patrick Amoy.
