
NIST Seeks Public Comment on Guidance for Trustworthy AI

Agency Calls for Information on Gen AI Risk Management, Red-Teaming Efforts
The campus of the National Institute of Standards and Technology in Gaithersburg, Maryland (Image: J. Stoughton/NIST)

The U.S. National Institute of Standards and Technology is soliciting public comment on its implementation of an October White House executive order that seeks safeguards for artificial intelligence.


The order from President Joe Biden directs the agency to establish guidelines for AI developers, "especially of dual-use foundation models," to conduct red-teaming tests. Invoking Cold War-era executive powers, Biden requires companies developing AI models that potentially pose serious risks to national security, national economic security or national public health and safety to share their test results with the federal government (see: Why Biden's Robust AI Executive Order May Fall Short in 2024).

"President Biden has been clear - AI is the defining technology of our generation, and we have an obligation to harness the power of AI for good while protecting people from its risks," said Secretary of Commerce Gina Raimondo.

The White House considers external red-teaming an "effective tool to identify novel AI risks," and it backed a public red-teaming assessment held recently at the DEF CON hacking conference. At the first-of-its-kind event, thousands of attendees were "poking and prodding LLMs," the White House said, to see whether they could make the systems produce undesirable outputs, in a bid to understand the risks the systems pose.

The order also tasks NIST with finding consensus industry standards for a generative AI risk management framework and for a secure software development framework covering generative AI and dual-use foundation models.

The public can share feedback on the request for information until Feb. 2.

The request for information says the agency is taking on a task the order assigned to the Department of Commerce, NIST's parent agency: developing a report on existing and potential future tools for labeling and detecting synthetic content. The order seeks methods to prevent generative AI from producing harmful content such as child sexual abuse material or nonconsensual intimate imagery of real individuals. A 2019 report found that 96% of deepfake videos online were nonconsensual pornography, overwhelmingly featuring women.

This is the first time there has been an "affirmative requirement" for companies developing foundation models that pose a serious risk to national security, economic security, public health or safety to notify the federal government when training their models and to share the results of red team safety tests, said Lisa Sotto, partner at Hunton Andrews Kurth and chair of the firm's global privacy and cybersecurity practice. This will have a "profound" impact on the development of AI models in the United States, she told Information Security Media Group.

While NIST does not directly regulate AI, it develops frameworks, standards, research and resources that play a significant role in informing regulation and the technology's responsible use and development. Its AI risk management framework, released earlier this year, offers a comprehensive approach to managing risks associated with AI technologies. Its recent report on bias in AI algorithms helps organizations develop potential mitigation strategies, and its Trustworthy and Responsible AI Resource Center, launched in March, serves as a central repository for information about NIST's AI activities.


About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She previously worked at TechCircle, formerly owned by News Corp, the business daily The Economic Times and The New Indian Express.



