The US Department of Homeland Security (DHS) has released a set of recommendations for the safe and secure development and deployment of artificial intelligence (AI) in critical infrastructure.
The Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure was developed by and for critical infrastructure owners and operators; AI developers and cloud providers; and the civil society and public sector entities that protect and advocate for consumers.
Critical infrastructure sectors, including aviation and transportation, are increasingly deploying AI to improve services, build resilience and counter threats. The Transportation Security Administration, for example, is developing AI for digital identity programs as well as baggage scanning operations. The use of any technology, especially new and emerging technology, is not without risk. If adopted and implemented by the stakeholders involved in the development, use and deployment of AI in US critical infrastructure, DHS believes the voluntary framework will help harmonize and operationalize safety and security practices, improve the delivery of critical services, enhance trust and transparency among entities, protect civil rights and civil liberties, and advance AI safety and security research that will further enable critical infrastructure to deploy emerging technology responsibly.
Despite the growing importance of AI technology to critical infrastructure, no comprehensive regulation currently exists. DHS identified three primary categories of AI safety and security vulnerabilities in critical infrastructure: attacks using AI, attacks targeting AI systems, and design and implementation failures. To address these vulnerabilities, the framework recommends actions directed to the key stakeholders supporting the development and deployment of AI in US critical infrastructure.
For example, the framework recommends a number of practices for critical infrastructure owners and operators. These focus on the deployment of AI systems, including maintaining strong cybersecurity practices that account for AI-related risks, protecting customer data when fine-tuning AI products, and providing meaningful transparency about the use of AI to provide goods, services or benefits to the public. The framework encourages critical infrastructure entities to play an active role in monitoring the performance of these AI systems and to share results with AI developers and researchers to help them better understand the relationship between model behavior and real-world outcomes.
The framework also encourages continued cooperation between the federal government and international partners to protect all global citizens, as well as collaboration across all levels of government to fund and support efforts to advance foundational research on AI safety and security.
DHS released its first-ever AI Roadmap in March 2024.