
Building responsible AI starts with machine identity “Kill Switch”

Kevin Bocek of Venafi

Artificial intelligence is already deeply embedded across industries, powering everything from fraud prevention to chatbots and personalised recommendations. But as AI grows more powerful, concerns are being raised about its potential risks. Tech leaders are warning of AI’s destructive potential if left unregulated. In response, lawmakers are considering how to govern AI’s development responsibly, says Kevin Bocek, VP ecosystem and community at Venafi.

One proactive approach is to assign AI models, their inputs, and their outputs unique identities, just as devices, services, and algorithms have today. Building in governance and protection now will help companies control their AI systems, hold developers accountable, and provide a “kill switch” if needed. This progressive approach could drive assurance and security and protect businesses against cyber-attacks that look to abuse the power of AI.

Shaping AI’s future through balanced governance 

As AI becomes more integrated into business operations, it also becomes an enticing target for cybercriminals. Attackers are finding ways to “poison” AI models to influence their decisions. The Center for AI Safety has already drawn up a lengthy list of potential societal risks, although some are more immediately concerning than others. 

The EU is leading regulatory efforts with its proposed AI Act, which aims to minimise risks by imposing requirements to ensure AI is trustworthy, transparent, and accountable. The proposals were recently green-lit by lawmakers, and the EU is also drafting liability rules to make it easier for victims of AI-related incidents to seek compensation. The G7 is debating the issue, and the White House is trying to lay down rules of the road to protect individual rights and ensure responsible development and deployment.

The EU acknowledges AI identity 

The EU’s proposed AI regulations embrace Aristotle’s law of identity: everything that exists has an identity. The act outlines a risk-based approach to AI systems whereby those considered an “unacceptable risk” are banned, and those classed as “high risk” must go through a multi-stage system of assessment and registration before they can be approved.

By rejecting a one-size-fits-all view and evaluating each AI system’s specific identity, regulators can craft proportional safeguards. A “declaration of conformity” must then be signed before the AI model can be given a CE marking and placed on the market. Following these rules enables accountability, laying the groundwork for ethical, secure AI that inspires public confidence.

Regulating AI models based on their unique risks suggests these models or products each have distinct identities. This could be formalised in a system that authenticates the model itself, its communication with other assets, and the outputs it creates. This identity-based approach would allow AI systems to be certified, audited, and authorised. In this way, we could authenticate the systems an AI interacts with. Equally, we could authorise and authenticate what the AI is able to connect and communicate with, what other systems it calls upon, and the chain of trust that leads to a specific decision or output.

The latter will be particularly important when it comes to remediation and traceability. Teams can authenticate an AI model’s actions over time, explaining outcomes and detecting tampering. With identity, we can govern AI by auditing algorithms and holding bad actors accountable. For all of these reasons, identity is required. 
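To make that traceability concrete, here is a minimal sketch of how a model’s own identity key could sign every output it produces, so an auditor can later verify the chain of trust and detect tampering. It is not taken from Venafi or the regulation: the model name, metadata fields, and helper function are illustrative assumptions, and it assumes Python with the cryptography library.

```python
# Minimal sketch (illustrative assumptions throughout): each model version holds
# its own identity key and signs an audit record for every output it produces.
import json, time, hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice this key pair would be issued by the organisation's machine
# identity infrastructure; it is generated locally here for brevity.
model_identity_key = Ed25519PrivateKey.generate()
model_public_key = model_identity_key.public_key()

def sign_output(model_id: str, input_data: bytes, output_data: bytes) -> dict:
    """Produce a signed record tying an output to a specific model identity."""
    record = {
        "model_id": model_id,
        "timestamp": time.time(),
        "input_sha256": hashlib.sha256(input_data).hexdigest(),
        "output_sha256": hashlib.sha256(output_data).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = model_identity_key.sign(payload).hex()
    return record

# An auditor can later recompute the payload and verify the signature with the
# model's public key; a failed check signals tampering or an unknown identity.
audit_record = sign_output("fraud-model-v3", b"transaction #123", b"score=0.97")
payload = json.dumps({k: v for k, v in audit_record.items() if k != "signature"},
                     sort_keys=True).encode()
model_public_key.verify(bytes.fromhex(audit_record["signature"]), payload)
```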

A kill switch is critical

To be clear, when we talk of a ‘kill switch’, we are not talking about one super identity, but several related identities all working together. There could be thousands of machine identities associated with each model, used to secure every step in the process and stop unauthorised access and malicious manipulation, from the inputs that train the model, to the model itself, to its outputs. To verify AI system integrity, this could be achieved through a mix of code signing and TLS/SPIFFE (Transport Layer Security / Secure Production Identity Framework For Everyone) machine identities that protect the model’s interactions with other machines, cloud-native services, and AI inputs. This protects models both during training and while in use. It means that each machine, during every process, needs an identity to prevent unauthorised access and manipulation.
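As one illustration of the SPIFFE side of this, the sketch below checks the SPIFFE ID carried in a peer certificate before the model is allowed to communicate with that service. The spiffe:// URIs, the ALLOWED_PEERS policy, and the demo certificate issuance are all assumptions made for the example, again using Python with the cryptography library.

```python
# Minimal sketch (assumed names and URIs): authorising which services a model may
# talk to by checking the SPIFFE ID in the peer certificate's URI SAN.
from datetime import datetime, timedelta
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def issue_demo_cert(spiffe_id: str) -> x509.Certificate:
    """Self-signed demo certificate carrying a SPIFFE ID; a real workload would
    receive its certificate from the organisation's identity infrastructure."""
    key = Ed25519PrivateKey.generate()
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "demo-workload")])
    return (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.utcnow())
        .not_valid_after(datetime.utcnow() + timedelta(hours=1))
        .add_extension(
            x509.SubjectAlternativeName([x509.UniformResourceIdentifier(spiffe_id)]),
            critical=False,
        )
        .sign(key, algorithm=None)  # Ed25519 certificates take no hash algorithm
    )

def peer_spiffe_id(cert: x509.Certificate) -> str:
    """Extract the SPIFFE ID (URI SAN) presented by a peer."""
    san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value
    uris = san.get_values_for_type(x509.UniformResourceIdentifier)
    return uris[0] if uris else ""

# Example allow-list: the only services this model is authorised to talk to.
ALLOWED_PEERS = {"spiffe://example.org/feature-store", "spiffe://example.org/fraud-api"}

def authorise_connection(cert: x509.Certificate) -> str:
    sid = peer_spiffe_id(cert)
    if sid not in ALLOWED_PEERS:
        raise PermissionError(f"peer {sid!r} is not authorised to talk to this model")
    return sid

authorise_connection(issue_demo_cert("spiffe://example.org/feature-store"))  # allowed
```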

If AI systems go rogue and start to represent a serious threat to humankind, as some key industry figures have warned could be possible, their identities could be used as a de facto kill switch. Taking away an identity is akin to revoking a passport: it becomes extremely difficult for that entity to operate. This kind of kill switch could stop the AI from working, prevent it from communicating with a certain service, or protect it by shutting it down if it has been compromised. It would also need to kill anything else deemed dangerous in the dependency chain that the AI model has generated. This is where identity-based auditability and traceability become important.
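A hypothetical sketch of how that could work in practice: a registry of machine identities where revoking one identity also revokes everything downstream in its dependency chain, so the model and anything it has generated simply stop being invocable. The registry, identity names, and dependency map below are illustrative assumptions, not an existing product or API.

```python
# Minimal sketch (assumed design): identity revocation as a kill switch that
# cascades down the dependency chain an AI model has generated.
class IdentityRegistry:
    def __init__(self) -> None:
        self._revoked: set[str] = set()
        # Which downstream identities (derived services, outputs) depend on which model.
        self._dependents: dict[str, list[str]] = {}

    def register_dependency(self, model_id: str, dependent_id: str) -> None:
        self._dependents.setdefault(model_id, []).append(dependent_id)

    def revoke(self, identity: str) -> None:
        """Revoke an identity and everything in its dependency chain."""
        if identity in self._revoked:
            return
        self._revoked.add(identity)
        for dependent in self._dependents.get(identity, []):
            self.revoke(dependent)

    def is_valid(self, identity: str) -> bool:
        return identity not in self._revoked


registry = IdentityRegistry()
registry.register_dependency("spiffe://example.org/fraud-model-v3",
                             "spiffe://example.org/fraud-model-v3/scoring-api")

def invoke_model(model_identity: str, request: bytes) -> bytes:
    # Every call checks the identity first; a revoked model simply stops working.
    if not registry.is_valid(model_identity):
        raise PermissionError(f"identity {model_identity} has been revoked")
    return b"model output"  # placeholder for the real inference call

registry.revoke("spiffe://example.org/fraud-model-v3")  # flip the kill switch
# invoke_model(...) now raises PermissionError for the model and its dependents.
```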

Achieving accountable AI with identity

As governments around the world consider how best to regulate such an influential technology, they will need to start thinking about AI identity management. The EU’s regulation, by far the most fully formed, already requires each model to be approved and registered, from which it naturally follows that each would have its own identity. This opens the door to the tantalising prospect of building a machine identity-style framework for assurance in this burgeoning space.

There’s a lot that we still need to work out. But assigning each AI a distinct identity would enhance developer accountability, foster greater responsibility, and discourage malicious use. Doing so with machine identity isn’t just something that will help protect businesses in the future; it’s a measurable success today. More broadly, it would help to enhance security and trust in a technology so far lacking either. It’s time for regulators to start thinking about how to make AI identity a reality.

The author is Kevin Bocek, VP ecosystem and community at Venafi.

