As a result, Curry says this will accelerate the rate of change even more dramatically than GenAI did in 2023, the year that OpenAI’s ChatGPT hit the mainstream.
“Perhaps the most important advances will come in combinatory fashions, as GenAI and its descendants specialise and become more effective at specific tasks and integrate better and more reliably with toolkits, schedulers, automation, users and so on,” Curry told Capacity.
New waves of AI technology will compound this, he adds, while parallel advances in robotics, synthetic engineering, nanotechnology, quantum computing (QC) and more will combine to produce accelerated returns.
Security developments
“The most important thing a security department can do in response to accelerated change and disruption is to minimise the attack surface and become highly agile at managing changes in risk,” Curry says.
“Moving Capex to Opex, increasing discretionary spend as a function of the budget, decreasing spend on commodities, focusing on resilience, adopting zero trust architectures, ensuring alignment with the business and staying close to the trends that matter, like AI and QC.”
While Curry notes that all significant advancements in technology can be used for good or evil, he suggests that this dual-use potential, more than anything else, is the defining characteristic of significant technology.
“Cyber conflict is fundamentally asymmetric; the tools, actions, processes, goals and so on are different on attack from defence,” he says.
AI, Curry believes, will therefore make attackers’ tools, processes and actions more efficient, provide new targets against which to apply them and, potentially, create entirely new means of attack, all while assisting the people managing those attacks.
Attackers are getting co-pilots, and every stage of the attack cycle will see efficiencies and improvements that make attacking more effective.
And the same is true for defence, Curry says.
“Ultimately, there are more people in defence and the potential to apply AI more broadly exists, but there will be some slack in the system as new tools are brought online largely in response to new overtures in attacks.”
“After that, the favour should swing more to defenders; but the key to getting ahead and having lasting advantages is in fact to take options away from attackers.
“It’s time to reduce visibility, compromise options, lateral movement vectors and exfiltration windows with Zero Trust as a specific architectural approach even before AI pays dividends in defence.”
Mitigating the risks
The evolution of native-to-AI controls – both technical and business – for information and privacy protection is still lagging, in Curry’s view.
“However, there are means to enforce policy, gain insight, hold providers accountable and put pressure on vendors to advance the native-to-AI controls.”
This begins with interpreting traffic streams, specifically through SSL inspection, traffic categorisation and analysis.
As part of an effective Zero Trust strategy, IT practitioners can gain visibility into what flows between their networks and systems and the wider world; contrary to popular belief, AI traffic stands out and can be accounted for.
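To make that concrete, the sketch below shows one way such categorisation might work once SSL inspection is in place: scanning a decrypted proxy log for destinations that match a list of known AI services. The log schema (“user” and “host” columns) and the domain list are illustrative assumptions, not a reference to any specific product or vendor API.

```python
# Minimal sketch: flagging AI service traffic in decrypted proxy logs.
# Assumes SSL inspection is already in place and the log exposes the
# destination hostname (e.g. from the SNI or Host header). The domain
# list and CSV schema here are hypothetical placeholders.
import csv
from collections import Counter

# Illustrative list of AI service domains to categorise against.
AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def is_ai_destination(host: str) -> bool:
    """Match the host, or any of its parent domains, against the AI list."""
    parts = host.lower().split(".")
    return any(".".join(parts[i:]) in AI_DOMAINS for i in range(len(parts)))

def summarise_ai_traffic(log_path: str) -> Counter:
    """Count requests per user to AI services, given a proxy log CSV
    with 'user' and 'host' columns (an assumed schema)."""
    counts: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if is_ai_destination(row["host"]):
                counts[row["user"]] += 1
    return counts

if __name__ == "__main__":
    for user, hits in summarise_ai_traffic("proxy_log.csv").most_common():
        print(f"{user}: {hits} AI requests")
```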
“AI leaders, therefore, can and should develop their strategies and derivative policies for how companies will embrace AI, setting up the boundaries for its correct use,” Curry says.
“We put brakes on a car not to stop its motion, but to enable it to accelerate to new speeds by ensuring safety and confidence. In a similar fashion, AI leaders will shine and drive their companies when they have controls that can prevent Shadow AI and allow for its correct usage.
“Most importantly, it gives these same leaders the ability to manage consumption within the company and put pressure on vendors for the harder subsequent steps: negotiating new features, demanding data accountability, enforcing third- and fourth-party access governance and so on.”
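As a loose illustration of the kind of control Curry describes, the sketch below encodes a hypothetical egress policy: sanctioned AI tools are allowed and metered, feeding the consumption reporting he mentions, while unknown AI destinations, the Shadow AI case, are blocked by default. All names and actions are placeholder assumptions rather than any vendor’s configuration.

```python
# Minimal sketch of an AI usage policy enforced at the egress proxy.
# Hostnames and action labels are hypothetical; a real deployment would
# map these onto whatever the organisation's proxy or SSE platform uses.
SANCTIONED = {"api.openai.com"}  # approved tools, metered for reporting
BLOCK_UNKNOWN_AI = True          # unknown AI destinations = Shadow AI

def decide(host: str, is_ai: bool) -> str:
    """Return the policy action for a destination host."""
    if not is_ai:
        return "allow"
    if host in SANCTIONED:
        return "allow-and-log"   # usage feeds consumption reports
    return "block" if BLOCK_UNKNOWN_AI else "allow-and-log"
```

The design choice worth noting is the default: treating unrecognised AI destinations as blocked-until-sanctioned is what turns the policy from a list of bans into the enabling “brakes” Curry describes, since approved tools stay fast and visible while Shadow AI never leaves the building.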