September 27, 2023

The Department of Energy’s Oak Ridge National Laboratory announced the establishment of the Center for AI Security Research, or CAISER, to address threats already present as governments and industries around the world adopt artificial intelligence and take advantage of the benefits it promises in data processing, operational efficiencies and decision-making.

In partnership with federal agencies such as the Air Force Research Laboratory’s Information Directorate and the Department of Homeland Security Science and Technology Directorate, ORNL and CAISER will provide objective scientific analysis of the vulnerabilities, threats and risks — from individual privacy to international security — related to emerging and advanced artificial intelligence.

“One of the biggest scientific challenges of our time is understanding AI vulnerabilities and risks,” said ORNL Deputy for Science and Technology Susan Hubbard. “ORNL is already advancing the state of the art in AI to solve the Department of Energy’s most pressing scientific challenges, and we believe the lab can help DOE and other federal partners answer critical AI security questions while providing insights to policymakers and the public.”

CAISER expands the lab’s long-standing Artificial Intelligence for Science and National Security research initiative, which integrates ORNL’s unique expertise, infrastructure and data to accelerate scientific breakthroughs.

“There are real benefits the public and government can gain from AI technologies,” said Prasanna Balaprakash, director of AI programs at ORNL. “CAISER will put the lab’s expertise toward understanding threats and ensuring people can benefit from AI in safe, secure, peaceful ways.”

Past research has established that AI systems are vulnerable to different types of attacks. Adversaries can “poison” an AI model, for instance, by covertly injecting malicious data into the training dataset to intentionally corrupt and alter the output. Other studies have shown that small physical objects can fool an AI-based detection algorithm — a few small pieces of black tape on a stop sign, for example, can render the object unrecognizable to vehicle autopilot systems. Additionally, generative AI, such as ChatGPT and DALL-E, can be used to create entirely synthetic text and imagery, known as deepfakes, that are nearly indistinguishable from “real” content.
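To make the poisoning scenario concrete, the minimal sketch below flips a fraction of training labels before fitting a toy classifier, then compares its accuracy against a cleanly trained baseline. It uses scikit-learn and a synthetic dataset purely for illustration; the model, the dataset and the 30% flip rate are assumptions chosen for the example, not a description of CAISER’s methods or of any specific attack studied at ORNL.

```python
# Illustrative label-flipping poisoning attack on a toy classifier.
# All choices here (dataset, model, flip rate) are arbitrary for the demo.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Baseline: model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoning: an adversary covertly flips the labels of 30% of the
# training examples before the model is fit.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
flip_idx = rng.choice(len(poisoned_y),
                      size=int(0.3 * len(poisoned_y)),
                      replace=False)
poisoned_y[flip_idx] = 1 - poisoned_y[flip_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

# The poisoned model's test accuracy degrades even though the attacker
# never touched the model itself, only its training data.
print(f"clean accuracy:    {clean_model.score(X_test, y_test):.3f}")
print(f"poisoned accuracy: {poisoned_model.score(X_test, y_test):.3f}")
```

In practice, documented poisoning attacks are subtler than random label flipping; clean-label and backdoor attacks, for example, alter only a handful of training examples while leaving overall accuracy largely intact, which is part of what makes detecting and mitigating them an open research problem.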

“We are at a crossroads. AI tools and AI-based technologies are inherently vulnerable and exploitable, which can lead to unforeseen consequences,” said Edmon Begoli, ORNL’s Advanced Intelligent Systems section head and CAISER founding director. “We’re defining a new field of AI security research and committing to intensive research and development of mitigating strategies and solutions against emerging AI risks.”

By elucidating a clear, science-based picture of risks and mitigation strategies, CAISER’s research will provide greater assurance to federal partners that the AI tools they adopt are reliable and robust against adversarial attacks.

“I think a lot about the challenges of our current era, as well as those that lie ahead in the uncharted territory of AI technologies and the very real threats that we’re working steadfastly to understand and mitigate,” said the Honorable Dimitri Kusnezov, DHS Under Secretary for Science & Technology. “Throughout its history, DHS has always had a special partnership with DOE’s national laboratories, tirelessly pioneering groundbreaking science for American security. CAISER will play a critical role in helping us understand this future and addressing the looming threats together.”