The Overworked, Underfunded Agency At The Center of Biden’s Sweeping AI Plan
The president’s new executive order presents another big test for the little-known National Institute of Standards and Technology
The U.S. government’s ambitious plan to protect Americans from dangerous abuses of artificial intelligence rests on the shoulders of an obscure and severely understaffed agency.
The National Institute of Standards and Technology, a 3,600-person agency inside the Department of Commerce, is the linchpin of the safety and security agenda in President Joe Biden’s recently issued AI executive order. The order tasks the institute, or NIST, with writing guidelines for the development of secure and trustworthy AI, the testing of AI models to identify shortcomings that could enable their misuse and the evaluation of technologies to protect the privacy of AI-analyzed data. It’s hard to overstate NIST’s role: the recommendations it produces will set the terms for how the government oversees AI for years to come.
“We are committed to getting this work done, and we will do it,” NIST Director Laurie Locascio, a biomedical engineer who has led the agency since April 2022, said in an interview.
NIST is no stranger to big assignments. It conducted the official government investigation into the World Trade Center’s collapse on 9/11. It manages the atomic clocks that set the exact time in the U.S. For more than 120 years, it has helped the U.S. adopt uniform standards for everything from fire hydrants to data encryption, and its five research labs tackle problems as varied as cancer-causing firefighting gear and seafood fraud. And as vulnerabilities in digital technology have become increasingly dangerous, NIST’s cybersecurity guidelines have become the de facto global baseline.
Yet these new AI responsibilities — which will shine an intense spotlight on NIST’s usually sedate, low-profile work — threaten to overstretch the resources of an agency that’s already stretched too thin. NIST’s supporters argue that Congress and the White House are putting too much on NIST’s plate without boosting its budget. (Biden has requested a $163 million funding increase for NIST in his annual budget, but the fate of that request is uncertain, and most of that new money would not go toward AI work.)
“A lot is expected of NIST, but the funding does not really match the ambition,” said Henry Young, the director of policy for the software trade group BSA and a former NIST and Commerce official.
Locascio told The Messenger that NIST will make do through a combination of new hires, reassigned staffers and non-government volunteers, saying, “We have a lot of creative ways that, in the past, we have managed around not having sufficient internal funding.” But she also acknowledged that NIST could “do a lot more with more” money.
INSIDE THE GOVERNMENT, NIST has a reputation for always completing its assignments — no matter how mismatched its resources are to the task at hand.
“The reason why people come to us is because we get the job done,” Locascio said.
While other agencies like the Pentagon aren’t shy about using their important work to insist on bigger budgets, NIST takes a quieter approach. Jeff Greene, a former senior NIST and White House cyber official, said “it’s not in NIST's culture to ask for more,” which is “part of what makes the agency so effective and trusted.” But that compliant reputation might have hobbled NIST by making it harder for its leaders to insist on adequate funding.
The problem traces back to 2014, when NIST released its Cybersecurity Framework. The document, which categorizes important cybersecurity goals and activities like detecting malware and creating hack recovery plans, became immensely popular with businesses around the world, showing Congress and the White House that they could trust NIST with more projects. Since then, NIST has been “a victim of its own immense success,” according to Andrew Grotto, a former senior cyber official at the White House, NIST and Commerce.
With the AI executive order, NIST’s success has once again earned it a heap of new responsibilities. Among them: Recommending the best ways to develop AI systems that hackers can’t abuse, including through AI-focused updates to its existing security guidance; creating testing benchmarks that can be used to evaluate AI models’ potential harms; and writing guidelines for how AI developers should stress-test their models by simulating a hack — a common practice in the security industry known as red-teaming.
All of this work could place an extreme burden on NIST. “It’s difficult to see how the already stretched resources at NIST can handle some of these projects,” said Heather West, a senior director of cybersecurity and privacy services at the law firm Venable.
Locascio said NIST has enough money to do “a lot of hiring” and revealed that some of the world’s leading experts on red-teaming are interested in joining the AI project. “We’re pretty far along in our discussions to make sure we have the right expertise.”
Other agencies could temporarily detail their experts to NIST, as could research universities, and Locascio said NIST is looking at other academic partnerships to tap additional talent.
But NIST will also have to reassign employees who are already working on other important projects. Some of them will come from teams exploring practical uses of AI in areas like robotics and materials science, Locascio said. She hopes to minimize agency-wide disruptions — or having to “gut” other programs, as she put it — by moving quickly on the AI assignments. Within six months, Locascio predicted, NIST will finish enough of its new AI work for employees to return to their former projects if they want to.
NIST will also have to look beyond its own workforce for people with the rare technical expertise that it needs for some of its new assignments. The agency has asked major AI companies to send their experts to participate in this work through a new collaboration program, a project that has generated an “enormous” amount of interest from the private sector, Locascio said. (NIST did not provide the names of participating companies.)
The Biden administration recognizes that it’s asking a lot of NIST. A Commerce Department spokesperson said the agency “has achieved remarkable results within its budget” that demonstrated its “tremendous value.” The spokesperson, who requested anonymity to discuss internal government issues, said it was “paramount” that Congress give NIST “the funds necessary to keep pace with this rapidly evolving technology that presents both substantial opportunities and serious risks if used irresponsibly.”
MANY COMPANIES DEVELOPING AI models are new and haven’t worked closely with the government before. That means NIST staff will act as ambassadors to this fast-growing and increasingly important part of Silicon Valley.
NIST was a great choice to play this role, former agency officials said, because it has repeatedly won companies’ and academics’ trust by actually listening to them.
When NIST began writing the Cybersecurity Framework, the tech industry opposed it, fearing that it would lead to onerous regulation. But through diligent consultations, NIST convinced companies to support the document, which helped make it one of the most widely cited collections of cybersecurity advice in the world. “You could literally, physically feel a shift in the room” when corporate representatives realized that NIST was taking their feedback seriously, said Greene, who now leads the Aspen Institute’s cyber program. “For some companies, that was a new experience working with the government.”
Now, as AI companies enter into a similarly uneasy partnership with the government to craft rules for their technology, it will once again be up to NIST to play diplomat, reassuring worried firms that the feds are taking their concerns seriously.
Inevitably, NIST will also have to mediate the sharp disagreements within the AI community about the best policies to promote. For example, major AI companies with big security budgets might support strict red-teaming requirements to make it harder for startups to enter the market. It’ll be up to NIST to unite companies, academics and activists behind one set of recommendations.
So far, NIST has been able to win companies’ trust because it isn’t a regulator, which Locascio said “is helpful in being very open, very transparent, and laying all the cards on the table.” But with the AI executive order, Biden has tied NIST’s work to regulation in a more explicit way than ever before.
WHILE THE INSTITUTE itself won’t be bossing AI companies around, other agencies will use NIST's work to do exactly that.
Biden’s order requires agencies that regulate sectors of critical infrastructure — facilities like hospitals, power plants and schools — to incorporate NIST’s AI safety guidance into the cybersecurity rules that those sectors have to follow. And while substantial questions remain about how Biden can enforce the agenda in his executive order, the White House is using the private-sector oversight powers of the 1950 Defense Production Act to require AI companies to follow NIST’s red-teaming instructions and report the results to the feds.
Because NIST’s guidelines will be used for such high-stakes government oversight, the agency could face an unprecedented amount of scrutiny while writing these documents, as AI companies and their critics peer over NIST’s shoulder and offer feedback.
NIST produces its recommendations through a low-key process that involves months of wonky listening sessions and technically dense public-comment periods. But the intense interest from the White House and the tech industry will infuse that typically staid process with a rare degree of tension and anxiety.
How this undercurrent of anticipation affects NIST’s work remains to be seen. Grotto, now the director of Stanford University’s geopolitics and technology program, predicted that the agency would adopt “a business-as-usual mindset,” but he acknowledged that it would be “a point of discomfort” for employees.
“They know that there's this regulatory tail … to what they do,” he said. “I think they’re going to try to distance themselves from that piece as much as they can.”
Given the importance of AI to U.S. economic and national security, everyone from the White House to tech executives to civil-rights activists will be watching closely to see how NIST handles thorny questions about the safety, security and privacy issues that the technology poses.
“This is a big test for NIST,” Young said. “More eyes will be on NIST’s work under the AI EO than under any other task it's been assigned.”
While the pressure is intense, there is a potential upside: If NIST impresses policymakers and tech executives with its work, Congress could reward it with the big budget boost for which its supporters have been clamoring.
Inside NIST, there is an awareness that the AI project represents a watershed moment for the agency — an opportunity to help establish the U.S. as a leader in an increasingly vital field. “I can't imagine something that's more important and more prominent globally,” Locascio said. “We have to get it right.”