That is according to Kevin Bocek, VP of ecosystem and community at Venafi, who believes that establishing AI guardrails is crucial as the technology continues to advance at a rapid pace.
“Having a shared vision around regulations that can help to contain the risks of AI, while encouraging exploration, curiosity and trial and error will be essential,” he says.
“To be clear, when we talk of a ‘kill switch’, we are not talking about one switch.”
Bocek says that if AI systems go rogue and start to represent a serious threat to humankind – as some key industry figures have warned could be possible – we will need to be able to identify those AIs, what they are connected to, and the outputs they generate or have ever generated.
Rather than one super identity, potentially thousands of machine identities would be associated with each model – from the inputs that train it, to the model itself and its outputs.
These identities could be used as a de facto kill switch, Bocek says; taking them away is akin to removing a passport, making it extremely difficult for that entity to operate.
“This kind of kill switch could stop the AI from working, prevent it from communicating with a certain service, and protect it by shutting it down if it has been compromised; similar to how a human brain can shut down when under attack.”
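In security terms, the approach Bocek describes resembles identity or certificate revocation: before an AI component acts, something checks that its identity is still valid. The Python sketch below is a hypothetical illustration of that gating logic only; the IdentityRegistry, authorize helper and identity names are assumptions made for clarity, not a description of Venafi's products or any real revocation protocol.

```python
from dataclasses import dataclass, field


@dataclass
class IdentityRegistry:
    """Tracks issued machine identities and which of them have been revoked."""
    revoked: set = field(default_factory=set)

    def revoke(self, identity_id: str) -> None:
        # Revoking an identity is the "kill switch": the holder can no longer
        # pass the validity check below and is denied service everywhere.
        self.revoked.add(identity_id)

    def is_valid(self, identity_id: str) -> bool:
        return identity_id not in self.revoked


def authorize(registry: IdentityRegistry, identity_chain: list) -> bool:
    """An AI operation proceeds only if every identity in its chain
    (training data feed, model, output channel) is still valid."""
    return all(registry.is_valid(i) for i in identity_chain)


if __name__ == "__main__":
    registry = IdentityRegistry()
    # Each stage of a hypothetical model pipeline carries its own identity.
    chain = ["training-data-feed", "model-weights-v3", "output-service"]

    print(authorize(registry, chain))  # True: all identities are valid

    # Revoking one identity "removes the passport" for the whole pipeline.
    registry.revoke("model-weights-v3")
    print(authorize(registry, chain))  # False: the model can no longer operate
```

In a real deployment, the registry role would typically be played by PKI infrastructure such as certificate revocation lists or short-lived credentials, so that a revoked identity fails verification across every service it touches.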
Safety Summit
Bocek’s thoughts come as the UK plans to host an AI Safety Summit this autumn, something the VP believes is a “welcome step and forward-thinking development”.
“If the summit can reach a mutual agreement on safeguarding individuals and maintaining human control over these advancing technologies, it would be a groundbreaking achievement,” he believes.
Prime Minister Rishi Sunak announced the conference in June, noting that the UK wanted to become the “geographical home” of global AI safety.
The summit is reportedly being backed by US President Joe Biden and several key industry figures from the likes of OpenAI and Microsoft Corp.
However, according to several reports, crucial questions remain, including whether China will be invited.
Proposed regulations likely to be discussed at the event include AI guardrails, and watermarking of AI-generated content is also said to be open for discussion.