Artificial intelligence (AI) has the potential to enhance multiple airport processes and is already being integrated into systems to aid security, passenger management and terminal operations. As with any new technology, a certain degree of risk is also involved. With AI, much of this risk centers on the cybersecurity of systems.
To address the intersection of AI and cybersecurity, the US Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) and the UK’s National Cyber Security Centre (NCSC) have jointly released Guidelines for Secure AI System Development to help developers of any system that uses AI make informed cybersecurity decisions at every stage of the development process. The guidelines were formulated in cooperation with 21 other agencies and ministries from across the world – including all members of the Group of Seven major industrial economies – and are the first of their kind to be agreed globally.
“We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure and trustworthy,” said Secretary of Homeland Security Alejandro N Mayorkas.
“The guidelines jointly issued today by CISA, NCSC and our other international partners provide a commonsense path to designing, developing, deploying, and operating AI with cybersecurity at its core. By integrating ‘secure by design’ principles, these guidelines represent an historic agreement that developers must invest in, protecting customers at each step of a system’s design and development. Through global action like these guidelines, we can lead the world in harnessing the benefits while addressing the potential harms of this pioneering technology.”
The guidelines provide essential recommendations for AI system development and emphasize the importance of adhering to secure by design principles that CISA has long championed.
“The release of the Guidelines for Secure AI System Development marks a key milestone in our collective commitment—by governments across the world—to ensure the development and deployment of artificial intelligence capabilities that are secure by design,” said CISA director Jen Easterly.
“As nations and organizations embrace the transformative power of AI, this international collaboration, led by CISA and NCSC, underscores the global dedication to fostering transparency, accountability and secure practices. The domestic and international unity in advancing secure by design principles and cultivating a resilient foundation for the safe development of AI systems worldwide could not come at a more important time in our shared technology revolution. This joint effort reaffirms our mission to protect critical infrastructure and reinforces the importance of international partnership in securing our digital future.”
The guidelines are broken down into four key areas within the AI system development lifecycle: secure design, secure development, secure deployment, and secure operation and maintenance. Each section highlights considerations and mitigations that will help reduce the cybersecurity risk to an organization’s AI system development process.
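To make the lifecycle framing concrete, the sketch below shows how a development team might track its own coverage of the four areas. The phase names follow the guidelines, but the example mitigations and the AISecurityChecklist helper are illustrative assumptions for this article, not items quoted from the CISA/NCSC document.

```python
# Illustrative sketch only: phase names follow the four areas above, but the
# example mitigations and this helper class are hypothetical, not an official list.
from dataclasses import dataclass, field

PHASES = (
    "secure design",
    "secure development",
    "secure deployment",
    "secure operation and maintenance",
)

# Example considerations per phase (assumed for illustration).
EXAMPLE_MITIGATIONS = {
    "secure design": ["threat model the AI system", "assess model supply-chain risk"],
    "secure development": ["track and secure training data", "document known limitations"],
    "secure deployment": ["protect model weights and APIs", "plan incident response"],
    "secure operation and maintenance": ["monitor model inputs and outputs", "apply timely updates"],
}

@dataclass
class AISecurityChecklist:
    """Tracks which lifecycle-phase mitigations a project has addressed so far."""
    completed: dict = field(default_factory=lambda: {p: set() for p in PHASES})

    def mark_done(self, phase: str, mitigation: str) -> None:
        # Record a mitigation as addressed for the given lifecycle phase.
        self.completed[phase].add(mitigation)

    def outstanding(self) -> dict:
        # Return the example mitigations not yet addressed, grouped by phase.
        return {
            phase: [m for m in EXAMPLE_MITIGATIONS[phase] if m not in self.completed[phase]]
            for phase in PHASES
        }

if __name__ == "__main__":
    checklist = AISecurityChecklist()
    checklist.mark_done("secure design", "threat model the AI system")
    for phase, items in checklist.outstanding().items():
        print(f"{phase}: {len(items)} item(s) outstanding")
```

In practice an airport operator or vendor would substitute its own controls for the placeholder mitigations; the point is simply that the guidelines organize cybersecurity decisions around the full lifecycle rather than a single pre-deployment review.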
“We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up,” said NCSC CEO Lindy Cameron. “These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout. I’m proud that the NCSC is leading crucial efforts to raise the AI cybersecurity bar: a more secure global cyberspace will help us all to safely and confidently realize this technology’s wonderful opportunities.”
“I believe the UK is an international standard bearer on the safe use of AI,” said UK Secretary of State for Science, Innovation and Technology Michelle Donelan. “The NCSC’s publication of these new guidelines will put cybersecurity at the heart of AI development at every stage, so that protecting against risk is considered throughout.”
It is also worth noting that in October, US President Biden issued an Executive Order that directed DHS to promote the adoption of AI safety standards globally, protect US networks and critical infrastructure, reduce the risks that AI can be used to create weapons of mass destruction, combat AI-related intellectual property theft, and help the USA attract and retain skilled talent, among other missions.
And earlier this month, CISA released its Roadmap for Artificial Intelligence, a whole-of-agency plan aligned with the national AI strategy that addresses its efforts to promote the beneficial uses of AI to enhance cybersecurity capabilities, ensure AI systems are protected from cyber-based threats, and deter the malicious use of AI capabilities to threaten the critical infrastructure Americans rely on every day.