
Big Tech Companies Want AI Regulation — But On Their Own Terms

2023-06-27 12:53

(Bloomberg) --

OpenAI Chief Executive Officer Sam Altman surprised everyone last month when he warned Congress of the dangers posed by artificial intelligence. Suddenly, it looked like tech companies had learned from the problems of social media and wanted to roll out AI differently. Even more remarkably: They wanted politicians’ help.

But a week later, Altman told a different story to reporters in London. The head of ChatGPT’s creator said he would try to comply with European Union rules, but if that proved too difficult, his company would “cease operating” within the bloc. The remark prompted Internal Market Commissioner Thierry Breton to accuse Altman of “attempting blackmail.” Altman clarified his comments the next day, and when the CEO and the commissioner met in person last week, they agreed that they were aligned on regulation.

AI development is blazing ahead. The sector raised over $1 billion in venture capital funding in the first four months of this year alone, and systems are already at work in everything from toothbrushes to drones. How far and how fast things continue to move will depend heavily on whether governments step in.

Big tech companies say they want regulation. The reality is more complicated. In the US, Google, Microsoft, IBM and OpenAI have asked lawmakers to oversee AI, which they say is necessary to guarantee safety and competitiveness with China. Meanwhile, in the EU, where politicians recently voted to approve draft legislation that would put guardrails on generative AI, lobbyists for these same companies are fighting measures that they believe would needlessly constrict tech’s hottest new sector.

The rules governing tech vary dramatically on opposite sides of the Atlantic. The EU has had comprehensive data protection laws on the books for more than five years and is in the process of implementing strict guidelines for competition and content moderation. In the US, by contrast, there has been almost no regulation for more than two decades. Calling for oversight at home has been a way for Big Tech to generate good PR as it steers European legislation in a more favorable direction, according to numerous officials working on the bloc’s forthcoming AI Act.

Tech companies know they cannot ignore the EU, especially as its social media and data protection rules have become global standards. The European Union’s AI Act, which could be in place within the next two to three years, will be the first attempt by a western government to regulate artificial intelligence, and it is backed by serious penalties. Companies that violate the act could face fines of up to 6% of annual turnover and have their products barred from the EU, a region estimated to account for 20% to 25% of a global AI market projected to be worth more than $1.3 trillion within 10 years.

This puts the sector in a delicate position. Should a version of the act become law, said Gry Hasselbalch, co-founder of the think tank DataEthics, the biggest AI providers “will need to fundamentally change how transparent they are, the way they handle risks and deploy their models.”

Risky Business

In contrast to tech, lawmaking moves at a crawl. In 2021, the European Commission released a first draft of its AI Act, kicking off a process that would involve three governmental institutions, 27 countries, scores of lobbyists and round after round of negotiations that will likely conclude later this year. The proposal took a “risk-based” approach, banning AI in extreme cases — including for the kind of social scoring used in China, where citizens earn credit based on surveilled behavior — and allowing the vast majority of AI to operate with little oversight, or none at all.

Most of the draft focused on rules for “high-risk” cases. Companies that release AI systems to predict crime or sort job applications, for instance, would be restricted to using only high-quality data and required to produce risk assessments. Beyond that, the draft mandated transparency around deepfakes and chatbots: People would have to be informed when they were talking to an AI system, and generated or manipulated content would need to be flagged. The text made no mention of generative AI, an umbrella category of machine-learning algorithms capable of creating new images, video, text and code; at the time, the category had yet to blow up.

Big tech companies welcomed this approach. They also tried to soften its edges. While the draft said developers would be responsible for how their systems were used, companies and their trade groups argued that users should also be liable. Microsoft contended in a position paper that because generative AI’s potential makes it impossible for companies “to anticipate the full range of deployment scenarios and their associated risks,” it is especially crucial to “focus in on the actual use of the AI system by the deployer.” In closed-door meetings, officials from tech companies have doubled down on the idea that AI itself is simply a tool that reflects the intent of its user.

More significantly, IBM wanted to ensure that “general-purpose AI” — an even broader category that includes image and speech recognition, audio and video generation, pattern detection, question answering and translation — was excluded from the regulation, while Microsoft pushed for customers, rather than developers, to handle the regulatory checks, according to drafts of amendments sent to lawmakers.

Many companies have stuck to this stance. Rather than “attempting to control the technology as a monolith,” wrote Jean-Marc Leclerc, IBM’s Head of EU Affairs, in a statement to Bloomberg, “we’re urging the continuation of a risk-based approach.”

The notion that some of the most powerful AI systems could largely avoid oversight set off alarms among industry watchers. Future of Life, a nonprofit initially funded in part by Elon Musk, wrote at the time that “future AI systems will be even more general than GPT-3” and needed to be explicitly regulated. Instead of “categorizing them by a limited set of stated intended purposes,” Future of Life’s President Max Tegmark wrote, “the proposal should require a complete risk assessment for all their intended uses (and foreseeable misuses).”

Threat Assessment

It looked as if Big Tech would get what it wanted — at one point countries even considered excluding general-purpose AI from the text entirely — until the spring of 2022, when politicians began to worry that they had underestimated its risks. Largely at the urging of France, EU member states began to consider regulating all general-purpose AI, regardless of use case.

This was the moment when OpenAI, which had previously stayed out of the European legislative process, decided to weigh in. In June 2022, the company’s head of public policy met with officials in Brussels. Soon after, the company sent the commission and some national representatives a position paper, first reported by Time, saying that it was “concerned” that some proposals “may inadvertently result in all our general-purpose AI systems being captured by default.”

Yet EU countries did go ahead and mandate that all general-purpose AI comply with some of the high-risk requirements like risk assessments, with the details to be sorted out later. Draft legislation containing this language was approved in December — just a week after the release of ChatGPT.

Struck by the chatbot’s abilities and unprecedented popularity, European Parliament members took an even tougher approach in their next draft. That latest version, approved two weeks ago, mandates that developers of “foundation models,” such as OpenAI, summarize the copyrighted materials used to train large language models, assess the risks their systems could pose to democracy and the environment, and design products incapable of generating illegal content.

“Ultimately, what we are asking of generative AI models is a bit of transparency,” explained Dragos Tudorache, one of the two lead authors of the AI Act. If there is a danger in having “exposed the algorithms to bad things, then there has to be an effort on the side of the developers to provide safeguards for that.”

While Meta, Apple and Amazon have largely stayed quiet, other key developers are pushing back. In comments to lawmakers, Google said the parliament’s controls would effectively treat general-purpose AI as high-risk when it isn’t. Companies also protested that the new rules could interfere with existing ones, and several said they’ve already implemented their own controls. “The mere possibility of reputational damage alone is already enough incentive for companies to massively invest in the safety of users,” said Boniface de Champris of the industry group CCIA.

Large Language

As the public has witnessed generative AI’s flaws in real time, tech companies have become increasingly vocal in asking for oversight — and, according to officials, more willing to negotiate. After Altman and Google’s Sundar Pichai embarked on a meet-and-greet tour with EU regulators at the end of May, competition chief Margrethe Vestager acknowledged that big tech was coming around to transparency and risk requirements.

This approach is “very pragmatic,” Tudorache said, adding that developers who fought regulation would land “on the wrong end of history [and would be] risking their own business model.”

Some critics have also suggested that complying with regulation early could be a way for big companies to secure market dominance. “If the government comes in now,” explained Joanna Bryson, a professor of ethics and technology at the Hertie School in Berlin, “then they get to consolidate their lead, and find out who’s coming anywhere near them.”

If you ask tech companies, they’ll tell you that they’re not against regulation — they just don’t like some of the proposals on offer. “We have embraced from the outset that Microsoft is going to be regulated at several levels of the tech stack; we’re not running from regulation,” a Microsoft spokesperson wrote, expressing a view held by many developers. The company has also voiced support for the creation of a new AI oversight agency to ensure safety standards and issue licenses.

At the same time, tech companies and trade groups are continuing to push back against the parliament’s and member countries’ changes to the AI Act. Developers have pressed for more details on what regulation would look like in practice, and how, say, OpenAI would go about assessing the impact of ChatGPT on democracy and the environment. In some instances, they’ve accepted certain parameters while challenging others.

OpenAI, for example, wrote to lawmakers in April expressing support for monitoring and testing frameworks, and a new set of standards for general-purpose AI, according to comments seen by Bloomberg. Google — which recently came out in support of a “hub-and-spoke” model of regulation in the US, asking that oversight be distributed across many different agencies rather than concentrated in a single centralized one — has supported risk assessments but continued to back a risk-based approach. Google did not respond to requests for comment.

Yet with AI’s moneymaking potential coming more clearly into view, some politicians are starting to embrace the industry’s views. Cedric O, France’s former digital minister turned tech consultant, warned last month that regulatory overreach could harm European competitiveness. He was backed by French President Emmanuel Macron, who remarked at a French tech conference right after the parliament’s vote that the EU should take care not to overpolice AI.

Until the final vote on the AI Act takes place, it’s unclear to what extent the EU will actually decide to regulate the technology. But even if negotiators move as fast as possible, companies won’t need to comply with the act until around 2025. And in the meantime, the sector is moving forward.

--With assistance from Anna Edgerton and Benoit Berthelot.