Thoughts on Cybersecurity
When I think about cybersecurity and try to predict its future, my biggest concern is the shifting economics of hacking. AI will crash the cost of a hacking attempt and allow much more complicated exploits to be run at scale: either scaling up existing attacks that are widely spread but individually not very effective, or allowing more complex scams to be run far more often and far more cheaply. Let's look at each in turn.
High-volume, low-effectiveness attacks
Low-effectiveness attacks are things like scanning internet-exposed systems for newly released vulnerabilities, or - a common one - trying to find login pages by requesting common URL strings (e.g. /login) and then looking for misconfigurations such as default or breached passwords. These attacks are already almost entirely automated and can be run against millions of domains in a short space of time. As they are already extremely cheap to perform, I don't expect AI to change the economics of this kind of hacking much.
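To give a sense of just how cheap this already is, here's a minimal sketch of that kind of sweep - the domain list is a placeholder (in practice it would be millions of entries), and the check is nothing more than "does /login respond?":

```python
import concurrent.futures
import requests

# Placeholder list: a real sweep would feed in millions of domains.
DOMAINS = ["example.com", "example.org", "example.net"]

def has_exposed_login(domain):
    """Return the domain if a common login path responds, else None."""
    try:
        resp = requests.get(f"https://{domain}/login", timeout=5)
        return domain if resp.status_code == 200 else None
    except requests.RequestException:
        return None  # unreachable host, TLS error, etc.

if __name__ == "__main__":
    # A single cheap machine can check many domains per second - the attack is
    # already effectively free, which is why AI doesn't change its economics much.
    with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
        hits = [d for d in pool.map(has_exposed_login, DOMAINS) if d]
    print(hits)
```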
What I could see happening, however, is these types of attack increasing in complexity. For example, if a site exposes a login page at '/login', that narrows down the list of software likely running on the backend, and the site can then be targeted with more specific attacks based on that guess. There's no reason a competent hacker couldn't do this today, but an AI system could scale this behaviour massively, allowing complex, targeted attacks without requiring a hacker's time.
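To make that fingerprinting step concrete, here's a minimal sketch of the kind of rule-based guess an automated system could make from a couple of cheap requests; the signature table below is illustrative, not taken from any real scanner:

```python
import requests

# Illustrative signature table mapping exposed paths to the software they usually indicate.
PATH_SIGNATURES = {
    "/wp-login.php": "WordPress",
    "/user/login": "Drupal",
    "/admin/login/": "Django admin",
}

def guess_backend(base_url):
    """Guess what might be running behind a site from a handful of cheap requests."""
    guesses = set()
    for path, software in PATH_SIGNATURES.items():
        try:
            resp = requests.get(base_url.rstrip("/") + path, timeout=5)
        except requests.RequestException:
            continue
        if resp.status_code == 200:
            guesses.add(software)
    return guesses

if __name__ == "__main__":
    print(guess_backend("https://example.com"))  # e.g. {'WordPress'} if /wp-login.php responds
```

An AI system doing this at scale wouldn't need a fixed table at all - it could reason about whatever clues it finds - but the basic loop is the same.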
A second issue is a shortening of the time between a new vulnerability being disclosed and it being exploited. This is already very quick - on the order of days - but an AI system could reduce it to minutes or even seconds. If a wave of attacks is triggered within minutes of a disclosure, there simply isn't enough time to patch.
So what's the solution here? It becomes much more important than ever to hide information about internal systems - such as the type of web server you are running. Speed is also vital: fixes for new vulnerabilities need to be applied as fast as possible, ideally before attackers can exploit them. And unfortunately, even with all of these steps, zero-day exploits will still be a big worry.
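As a small illustration of the "hide information" point, here's a minimal sketch that checks one of your own sites for response headers that advertise the backend stack; the header list is a rough, non-exhaustive assumption:

```python
import requests

# Headers that commonly leak details about the backend stack.
LEAKY_HEADERS = ["Server", "X-Powered-By", "X-AspNet-Version", "X-Generator"]

def check_information_leakage(url):
    """Report response headers on your own site that reveal backend details."""
    resp = requests.get(url, timeout=5)
    return {h: resp.headers[h] for h in LEAKY_HEADERS if h in resp.headers}

if __name__ == "__main__":
    for header, value in check_information_leakage("https://example.com").items():
        print(f"{header}: {value}  <- consider stripping or genericising this")
```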
Businesses will need to focus on automated systems that regularly check for open issues, while staying flexible enough to respond to new ones rapidly. In an ideal world, fixing new vulnerabilities would not require human intervention at all, though that is likely a long way off given the complexity of most business systems. Layered defences also matter more: if your security boundaries are more porous, internal controls - detecting suspicious network behaviour (which only helps if something acts on the alert!), endpoint protection and so on - have to pick up the slack.
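As a toy example of the "detect suspicious network behaviour" layer, here's a minimal sketch that flags hosts whose outbound connection counts jump well above their recent baseline. The data shape and threshold are assumptions for illustration; a real deployment would lean on proper network monitoring tooling:

```python
from statistics import mean, stdev

def flag_suspicious_hosts(history, today, threshold=3.0):
    """Flag hosts whose outbound connection count today is far above their baseline.

    history: {host: [daily outbound connection counts]}
    today:   {host: today's outbound connection count}
    """
    suspicious = []
    for host, counts in history.items():
        if len(counts) < 2:
            continue  # not enough data to form a baseline
        baseline, spread = mean(counts), stdev(counts)
        if spread == 0:
            spread = 1.0  # avoid divide-by-zero on perfectly flat baselines
        if (today.get(host, 0) - baseline) / spread > threshold:
            suspicious.append(host)
    return suspicious

if __name__ == "__main__":
    history = {"10.0.0.5": [120, 130, 110, 125], "10.0.0.9": [40, 35, 50, 45]}
    today = {"10.0.0.5": 118, "10.0.0.9": 900}  # 10.0.0.9 is suddenly very chatty
    print(flag_suspicious_hosts(history, today))  # -> ['10.0.0.9']
```

The hard part, as noted above, isn't the detection - it's having something that reliably acts on the alert.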
Complex attacks
Much more worrying than the above, however, is the scope for complex attacks. One of the main advantages of AI systems is their human-like ability to combine information from different sources and use it to achieve a goal.
Imagine you are working on a service desk. One day, you get a call from the CEO's executive assistant telling you that she and the CEO are on a business trip and the CEO has forgotten his computer password. They need you to reset it. This is against company policy, but it's urgent: the CEO is about to present at a conference and the slide deck is currently inaccessible. You remember something on the intranet about an upcoming conference, and you've spoken to the executive assistant before and recognise her voice, so you do as asked and reset the password, handing over the new one over the phone.
Now, what steps would be required from the attacker's perspective to pull off something like this?
- You need to know that the CEO is presenting at a conference. This one is easy: the company will probably post it on its own social media.
- You need a voice sample of the executive assistant. AI tools already exist that can replicate people's voices, and again this isn't too difficult: call the CEO's office and ask to set up a meeting, then record the call. You might not even need to do that - looking through social media might give you enough of a voice sample to replicate.
This is pretty much all that's needed: two phone calls, a search through social media and some time spent replicating a voiceprint. Admittedly the example is somewhat contrived and would be blocked by multi-factor authentication, but the attack can be adjusted to handle that ("the CEO's bag was stolen with his phone in it", etc.), and because most people trust known voices, even over the phone, I expect this kind of attack to be incredibly successful.
Of course, a hacker can do this today, but it's quite time-intensive and requires waiting for the right moment, so the payoff needs to be worth it. Once you introduce a sufficiently advanced AI, none of it requires human intervention. A human hacker could maybe attempt 5-10 of these scams a week; an automated system could do millions.
There aren't really any great solutions here, as most methods of proving identity can be faked. A good start is solid processes that you insist are never bypassed. A second is hardware tokens that employees are required to keep separate from other company devices. And if all else fails, insist on validating identity in person.
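To make the "process plus separate token" idea a little more concrete, here's a minimal sketch of a help desk rule that refuses a password reset unless the caller can read back a one-time code from the user's separate authenticator. It's a standard TOTP (RFC 6238) check; the secret store and the reset workflow around it are placeholders of my own:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, step=30, digits=6):
    """Compute a standard TOTP code (RFC 6238) from a base32 shared secret."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((at_time if at_time is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return f"{code:0{digits}d}"

def approve_password_reset(claimed_user, provided_code, secrets):
    """Only allow a reset if the caller can read a code from the user's separate token."""
    secret = secrets.get(claimed_user)
    if secret is None:
        return False
    # Accept the current window and the previous one to allow for clock drift.
    now = time.time()
    return provided_code in {totp(secret, now), totp(secret, now - 30)}

if __name__ == "__main__":
    secrets = {"ceo": "JBSWY3DPEHPK3PXP"}  # placeholder secret for illustration only
    print(approve_password_reset("ceo", totp(secrets["ceo"]), secrets))  # True
    print(approve_password_reset("ceo", "000000", secrets))              # almost certainly False
```

The point isn't the cryptography - it's that the check depends on something the caller physically holds, not on how convincing they sound on the phone.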
What does all this mean?
With the development of AI tools, it will soon be easier and cheaper than ever before to exploit vulnerabilities in your business systems. I think we're going to see huge increases in the number of cyberattacks over the coming years, at least until defences are developed to deal with these problems.
In the meantime, any information that is publicly available on the internet should be assumed to be in an attacker's hands. Images and videos can be used to create convincing digital replicas of a person, and even the most innocuous information can be weaponised. Methods of communication that were previously considered safe, such as phone calls (in the sense that it used to be difficult to replicate someone else's voice), are no longer safe. Even video can now be faked (it's not easy at the moment, but I expect it will be soon).