AI fears infiltrate finance, business, and law.

TLDR

Silicon Valley figures are not the only ones worried about the dangers of artificial intelligence (AI). Fears about AI have spread to the legal system, the financial industry, and global business gatherings. Reports from the Financial Industry Regulatory Authority (FINRA), the World Economic Forum, and the Financial Stability Oversight Council (FSOC) in Washington have all highlighted the risks and potential harms associated with AI. The World Economic Forum’s survey found that AI-fueled misinformation is the biggest near-term threat to the global economy. The FSOC warned that undetected AI design flaws could produce biased decisions, and Securities and Exchange Commission (SEC) Chairman Gary Gensler has publicly warned that widespread reliance on similar AI models at investment firms could threaten financial stability.

Main Article

Silicon Valley figures have long warned about the dangers of artificial intelligence. Now their anxiety has migrated to other halls of power: the legal system, global gatherings of business leaders and top Wall Street regulators.

In the past week, the Financial Industry Regulatory Authority (FINRA), the securities industry’s self-regulator, labeled AI an “emerging risk,” and the World Economic Forum in Davos, Switzerland, released a survey that concluded AI-fueled misinformation poses the biggest near-term threat to the global economy.

Those reports came just weeks after the Financial Stability Oversight Council (FSOC) in Washington said AI could result in “direct consumer harm,” and Gary Gensler, the chairman of the Securities and Exchange Commission (SEC), warned publicly of the threat to financial stability from numerous investment firms relying on similar AI models to make buy and sell decisions.

At the World Economic Forum’s annual conference for top CEOs, politicians and billionaires, held in a tony Swiss ski town, AI is one of the core themes and a topic on many of the panels and events. In a report released last week, the forum said its survey of 1,500 policymakers and industry leaders found that fake news and propaganda written and boosted by AI chatbots pose the biggest short-term risk to the global economy.

Around half of the world’s population is voting in elections this year in countries including the United States, Mexico, Indonesia and Pakistan, and disinformation researchers worry AI will make it easier to spread false information and deepen societal conflict. Chinese propagandists are already using generative AI to try to influence politics in Taiwan, where government officials say AI-generated content is showing up in fake news videos.

The forum’s report came a day after FINRA, in its annual report, said that AI has sparked “concerns about accuracy, privacy, bias and intellectual property” even as it offers potential cost and efficiency gains. And in December, the Treasury Department’s FSOC, which monitors the financial system for risky behavior, said undetected AI design flaws could produce biased decisions, such as denying loans to otherwise qualified applicants. Generative AI, which is trained on huge data sets, can also produce outright incorrect conclusions that sound convincing, the council added. FSOC, which is chaired by Treasury Secretary Janet L. Yellen, recommended that regulators and the financial industry devote more attention to tracking potential risks that emerge from AI development.
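
To make FSOC’s warning concrete, here is a toy Python sketch of how a single undetected design flaw, a feature that quietly proxies for group membership, can deny loans to equally qualified applicants. Every name, number and the zip-code feature below is invented for illustration; no real scoring system is depicted.

```python
import random

random.seed(0)

# Toy applicant pool: every applicant is equally qualified (same income),
# but most group-B applicants live in the hypothetical zip code "B".
applicants = []
for group in ("A", "B"):
    for _ in range(1000):
        zip_code = "B" if group == "B" and random.random() < 0.9 else "A"
        applicants.append({"group": group, "zip_code": zip_code, "income": 60_000})

def score(applicant):
    """A flawed scoring rule whose designer let zip code stand in for
    default history -- an undetected proxy for group membership."""
    base = applicant["income"] / 10_000          # 6.0 for everyone here
    zip_penalty = 3 if applicant["zip_code"] == "B" else 0
    return base - zip_penalty

APPROVAL_THRESHOLD = 4

for group in ("A", "B"):
    members = [a for a in applicants if a["group"] == group]
    approved = sum(1 for a in members if score(a) >= APPROVAL_THRESHOLD)
    print(f"group {group}: {approved / len(members):.0%} approved")
```

Both groups have identical incomes, yet the proxy feature alone pushes group B’s approval rate to roughly a tenth of group A’s; because nothing in the model names the group directly, the bias surfaces only if someone audits outcomes by group.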

The SEC’s Gensler has been among the most outspoken AI critics. In December, his agency solicited information about AI usage from several investment advisers, according to Karen Barr, head of the Investment Adviser Association, an industry group. The request for information, known as a “sweep,” came five months after the commission proposed new rules to prevent conflicts of interest between advisers who use a type of AI known as predictive data analytics and their clients.

“Any resulting conflicts of interest could cause harm to investors in a more pronounced fashion and on a broader scale than previously possible,” the SEC said in its proposed rulemaking. Investment advisers already are required under existing regulations to prioritize their clients’ needs and to avoid such conflicts, Barr said. Her group wants the SEC to withdraw the proposed rule and base any future actions on what it learns from its informational sweep.

“The SEC’s rulemaking misses the mark,” she said.

Financial services firms see opportunities to improve customer communications, back-office operations and portfolio management. But AI also entails greater risks. Algorithms that make financial decisions could produce biased loan decisions that deny minorities access to credit, or even cause a global market meltdown if dozens of institutions relying on the same AI system sell at the same time.
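
The herding mechanism is simple enough to sketch in a few lines of Python. The simulation below is purely illustrative; the firm counts, signal threshold and price-impact figure are invented, not drawn from any real market.

```python
import random

random.seed(42)

N_FIRMS = 50              # institutions trading the same asset
SHARED_MODEL_USERS = 40   # firms acting on one shared (hypothetical) AI signal
PRICE_IMPACT = 0.002      # fractional price move per net selling firm (invented)

price = 100.0
daily_returns = []

for day in range(30):
    # One noisy market signal per day; the shared model converts it into
    # the SAME buy-or-sell decision for every firm that relies on it.
    signal = random.gauss(0, 1)
    shared_decision = -1 if signal < -0.5 else 1   # -1 = sell, +1 = buy

    net_orders = 0
    for firm in range(N_FIRMS):
        if firm < SHARED_MODEL_USERS:
            net_orders += shared_decision           # correlated trades
        else:
            net_orders += random.choice([-1, 1])    # independent judgment

    day_return = PRICE_IMPACT * net_orders
    price *= 1 + day_return
    daily_returns.append(day_return)

print(f"final price: {price:.2f}")
print(f"worst single-day move: {min(daily_returns):.2%}")
```

With 40 of 50 firms acting on the same signal, one bad reading produces a single-day drop of roughly 8 percent; set SHARED_MODEL_USERS to zero and the correlated trades disappear, leaving daily moves that are mere noise.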

“This is a different thing than the stuff we’ve seen before. AI has the ability to do things without human hands,” said attorney Jeremiah Williams, a former SEC official now with Ropes & Gray in Washington.

Even the Supreme Court sees reasons for concern. “AI obviously has great potential to dramatically increase access to key information for lawyers and non-lawyers alike. But just as obviously it risks invading privacy interests and dehumanizing the law,” Chief Justice John G. Roberts Jr. wrote in his year-end report about the U.S. court system.

Like drivers following GPS instructions that lead them into a dead end, humans may defer too much to AI in managing money, said Hilary Allen, associate dean of the American University Washington College of Law.

“There’s such a mystique about AI being smarter than us,” she said.

AI may also be no better than humans at spotting unlikely dangers, or “tail risks,” Allen said. Before 2008, few people on Wall Street foresaw the end of the housing bubble. One reason was that, because housing prices had never declined nationwide before, Wall Street’s models assumed such a uniform decline would never occur. Even the best AI systems are only as good as the data they are based on, Allen said.
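
Allen’s point about data limits fits in a one-screen Python sketch. The price figures below are invented; what matters is that none of them are negative, just as no nationwide decline appeared in pre-2008 data.

```python
# Hypothetical pre-2008-style training data: annual nationwide home-price
# changes, none of them negative (all figures invented for illustration).
historical_changes = [0.03, 0.05, 0.07, 0.04, 0.06, 0.08, 0.05, 0.09, 0.11, 0.10]

def empirical_probability(event, data):
    """Estimate an event's probability from historical frequency alone,
    the way a purely data-bound model implicitly does."""
    return sum(1 for x in data if event(x)) / len(data)

p_decline = empirical_probability(lambda change: change < 0, historical_changes)
print(f"estimated P(nationwide price decline): {p_decline:.0%}")  # prints 0%
```

The model assigns zero probability to the very event that defined 2008. No amount of within-sample sophistication recovers a risk that is simply absent from the training data.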

As AI grows more complex and capable, some experts worry about “black box” automation that is unable to explain how it arrived at a decision, leaving humans uncertain about its soundness. Poorly designed or managed systems could undermine the trust between buyer and seller that is required for any financial transaction, said Richard Berner, clinical professor of finance at New York University’s Stern School of Business. “Nobody’s done a stress scenario with the machines running amok,” added Berner, the first director of Treasury’s Office of Financial Research.

In Silicon Valley, the debate over the potential dangers of AI is not new. But it got supercharged in the months following the late 2022 launch of OpenAI’s ChatGPT, which showed the world the capabilities of the next-generation technology. Amid an artificial intelligence boom that fueled a rejuvenation of the tech industry, some company executives warned that AI’s potential for igniting social chaos rivals nuclear weapons and lethal pandemics. Many researchers say those concerns are distracting from AI’s real-world impacts. Other pundits and entrepreneurs say concerns about the tech are overblown and risk pushing regulators to block innovations that could help people and boost tech company profits.

Last year, politicians and policymakers around the world also grappled with how AI will fit into society. Congress held multiple hearings. President Biden issued an executive order calling AI the “most consequential technology of our time.” The United Kingdom convened a global AI forum where Prime Minister Rishi Sunak warned that “humanity could lose control of AI completely.” The concerns include the risk that “generative” AI, which can create text, video, images and audio, could be used to spread misinformation, displace jobs or even help people create dangerous bioweapons.

Tech critics have pointed out that some of the leaders sounding the alarm, such as OpenAI CEO Sam Altman, are nonetheless pushing the development and commercialization of the technology. Smaller companies have accused AI heavyweights OpenAI, Google and Microsoft of hyping AI risks to trigger regulation that would make it harder for new entrants to compete. “The thing about hype is there’s a disconnect between what’s said and what’s actually possible,” said Margaret Mitchell, chief ethics scientist at Hugging Face, an open-source AI start-up based in New York. “We had a honeymoon period where generative AI was super new to the public and they could only see the good. As people start to use it, they could see all the issues with it.”