Silicon Valley is pricing academics out of AI research – The Washington Post

Fei-Fei Li, the “godmother of artificial intelligence,” delivered an urgent plea to President Biden in the glittering ballroom of San Francisco’s Fairmont Hotel in June.
The Stanford professor asked Biden to fund a national warehouse of computing power and data sets — part of a “moonshot investment” allowing the country’s top AI researchers to keep up with tech giants.
She elevated the ask Thursday at Biden’s State of the Union address, which Li attended as a guest of Rep. Anna G. Eshoo (D-Calif.) to promote a bill to fund a national AI repository.
Li is at the forefront of a growing chorus of academics, policymakers and former employees who argue that the sky-high cost of working with AI models is boxing researchers out of the field, compromising independent study of the burgeoning technology.
As such tech behemoths as Meta, Google and Microsoft funnel billions of dollars into AI, a massive resource gap is opening between industry and even the country’s richest universities. Meta aims to procure 350,000 of the specialized computer chips — called GPUs — that are essential to run the gargantuan calculations needed for AI models. In contrast, Stanford’s Natural Language Processing Group has 68 GPUs for all of its work.
In a post on X after the speech, Li recounted a brief exchange with Biden: “Mr. President, you gave a historical speech by mentioning AI in the SOTU speech for the first time in history.” Biden, she wrote, smiled and replied: “Yes! And keep it safe.”
To obtain the expensive computing power and data required to research AI systems, scholars frequently partner with tech employees. Meanwhile, tech firms’ eye-popping salaries are draining academia of star talent.
Big tech companies now dominate breakthroughs in the field. In 2022, the tech industry created 32 significant machine learning models, while academics produced three — a stark reversal from 2014, when the majority of AI breakthroughs originated in universities, according to a Stanford report.
Researchers say this lopsided power dynamic is shaping the field in subtle ways, pushing AI scholars to tailor their research for commercial use. Last month, Meta CEO Mark Zuckerberg announced that the company’s independent AI research lab would move closer to its product team, ensuring “some level of alignment” between the groups, he said.
“The public sector is now significantly lagging in resources and talent compared to that of industry,” said Li, a former Google employee and the co-director of the Stanford Institute for Human-Centered AI. “This will have profound consequences because industry is focused on developing technology that is profit-driven, whereas public-sector AI goals are focused on creating public goods.”
Some are pushing for new sources of funding. Li has been making the rounds in Washington, huddling with White House Office of Science and Technology Policy Director Arati Prabhakar, dining with the political press at a swanky seafood and steak restaurant and visiting Capitol Hill for meetings with lawmakers working on AI, including Sens. Martin Heinrich (D-N.M.), Mike Rounds (R-S.D.) and Todd Young (R-Ind.).
Large tech companies have contributed computing resources to the National AI Research Resource, the national warehouse project, including a $20 million donation in computing credits from Microsoft.
“We have long embraced the importance of sharing knowledge and compute resources with our colleagues within academia,” Microsoft Chief Scientific Officer Eric Horvitz said in a statement.
Policymakers are taking some steps to address the funding gaps. Last year, the National Science Foundation announced a $140 million investment to launch seven university-led National AI Research Institutes to examine how AI could mitigate the effects of climate change and improve education, among other topics.
Eshoo said she hopes to pass the Create AI Act, which has bipartisan backing in the House and the Senate, by the end of the year, when she is scheduled to retire. The legislation “essentially democratizes AI,” Eshoo said.
But scholars say this infusion may not come quickly enough.
As Silicon Valley races to build chatbots and image generators, it is drawing would-be computer science professors with high salaries and the chance to work on interesting AI problems. Nearly 70 percent of people with PhDs in AI end up in private industry compared with 21 percent of graduates two decades ago, according to a 2023 report.
Big Tech’s AI boom has pushed the salaries for the best researchers to new heights. Median compensation packages for AI research scientists at Meta climbed from $256,000 in 2020 to $335,250 in 2023, according to Levels.fyi, a salary-tracking website. True stars can attract even more cash: AI engineers with a PhD and several years of experience building AI models can command compensation as high as $20 million over four years, said Ali Ghodsi, who as CEO of AI start-up Databricks is regularly competing to hire AI talent.
“The compensation is through the roof. It’s ridiculous,” he said. “It’s not an uncommon number to hear, roughly.”
University academics often have little choice but to work with industry researchers, with the companies footing the bill for computing power and offering data. Nearly 40 percent of papers presented at leading AI conferences in 2020 had at least one tech employee author, according to the 2023 report. And industry grants often fund PhD students to perform research, said Mohamed Abdalla, a scientist at the Canada-based Institute for Better Health at Trillium Health Partners and incoming assistant professor at the University of Alberta, who has conducted research on the effect of industry on academics’ AI research.
“It was like a running joke that, like, everyone is getting hired by them,” Abdalla said. “And the people that were remaining, they were funded by them — so, in a way, hired by them.”
Google believes private companies and universities should work together to develop the science behind AI, said Jane Park, a spokesperson for the company. Google still routinely publishes its research publicly to benefit the broader AI community, Park said.
David Harris, a former research manager for Meta’s responsible AI team, said corporate labs may not censor the outcome of research but may influence which projects get tackled.
“Anytime you see a mix of authors who are employed by a company and authors who work at a university, you should really scrutinize the motives of the company for contributing to that work,” said Harris, who is now a chancellor’s public scholar at the University of California at Berkeley. “We used to look at people employed in academia to be neutral scholars, motivated only by the pursuit of truth and the interest of society.”
Tech giants can procure vast amounts of computing power through their data centers and have ready access to GPUs. These resources are expensive: A recent report from Stanford University researchers estimated that Google DeepMind’s large language model, Chinchilla, cost $2.1 million to develop.

On Tuesday, more than 100 top artificial intelligence researchers urged generative AI companies to offer a legal and technical safe harbor to researchers, so they can scrutinize these products without the fear that internet platforms will suspend their accounts or threaten legal action.
The necessity for advanced computing power is likely to only grow as AI scientists crunch more data to improve the performance of their models, said Neil Thompson, director of the FutureTech research project at MIT’s Computer Science and Artificial Intelligence Laboratory, which studies progress in computing.
“To keep getting better, [what] you expect to need is more and more money, more and more computers, more and more data,” Thompson said. “What that’s going to mean is that people who do not have as much compute [and] who do not have as many resources are going to stop being able to participate.”
Tech companies, including Meta and Google, have historically run their AI research labs to resemble universities where scientists decide what projects to pursue to advance the state of research, according to people familiar with the subject who spoke on the condition of anonymity to discuss private company matters.
Those workers were largely isolated from teams focused on building products or generating revenue, the people said. They were judged on influential papers they published or notable breakthroughs — metrics similar to those used for their university peers, the people said. Top Meta AI scientists Yann LeCun and Joelle Pineau hold dual appointments at New York University and McGill University, blurring the lines between industry and academia.
In an increasingly competitive market for generative AI products, research freedom inside companies could wane. In April, Google announced it was merging two of its AI research groups — DeepMind, which it acquired in 2014, and the Brain team from Google Research — into one department called Google DeepMind. Last year, Google started to take more advantage of its own AI discoveries, sharing research papers only after the lab work had been turned into products, The Washington Post has reported.
Meta has also reshuffled its research teams. The company placed its Fundamental AI Research team, known as FAIR, under the helm of its virtual-reality division, Reality Labs, in 2022 and last year reassigned some of the group’s researchers to a new generative AI product team. Last month, Zuckerberg told investors that FAIR would work “closer together” with the generative AI product team, arguing that while the two groups would still conduct research on “different time horizons,” it was helpful to the company “to have some level of alignment” between them.
“In a lot of tech companies right now, they hired research scientists that knew something about AI and maybe set certain expectations about how much freedom they would have to set their own schedule and set their own research agenda,” Harris said. “That’s changing, especially for the companies that are moving frantically right now to ship these products.”
A previous version of this article incorrectly said that Google acquired DeepMind in 2010. Google acquired the AI start-up in 2014. The article has been corrected.
