GPTs – the saviours of cyber security
There’s another new bandwagon rolling through the digital world: ChatGPT, LLMs, BabyAGI and whatever else they may be called. And, as expected, everybody who is anybody is jumping on for the ride in the hope of not being left behind. In the future, everything will supposedly be much simpler and more secure – not just the world in general but the field of cyber security in particular. A revolution is imminent. Everything’s getting better. But is it really?
Let’s look at the day-to-day life of someone in charge of IT security – let’s call him Paul – at any Swiss university. Paul sighs. His ‘small department’ is responsible for the university’s cyber security. In recent years, it has done a lot to ensure the general security of the university – and yet it always feels like ‘too little, too late’. Complexity explodes, as complexity does. The threat level is escalating. Here, too: business as usual. The only constant is the budget, and that is constantly too small.
Neither budget nor awareness of cyber security
Of course, Paul had repeatedly pointed out to the vice chancellor how much it would cost to rebuild the entire IT apparatus of the university – at least the part they know of. Paul and his team may have a grim idea of what lies behind cupboard doors and desks in the institutes, but they lack the tools, the energy and the big political stick needed to make researchers aware of the importance of secure systems in the long term.
The basic computer apparatus alone – the one on which all the not-so-glamorous but unfortunately necessary functions depend, such as payroll accounting, course administration, student administration and everything else that needs to be managed – gives Paul a headache. This basic system is already so complex that it would need a management system of its own. But there is no such thing. The asset-management project (or CMDB – Configuration Management Database, as it’s called) had gone beyond the ‘Wouldn’t it be nice if we had that’ phase a few times and scratched the ‘Let’s take a look at the needs’ stage, but always ended up in the ‘How expensive is it?’ or ‘No one could have guessed how complex it is’ dead end.
The costs Paul had estimated for rebuilding the IT infrastructure came to a good ten per cent of the university’s annual budget. That is a lot of money. But despite the impressive PowerPoint presentation, the requested budget increase never came. So he carries on with ‘too little, too late’, hoping that coincidences such as forgotten gym bags will continue to stave off the worst.
AI should fix it
The manufacturers of security solutions, of course, promise that their products will solve all problems – automatically, with machine learning and artificial intelligence, and with so much globally evaluated company data that they are playing in the same league as the NSA. Of course, Paul and his team have already tried them out, and they didn’t have to wait long for the results. Soon the team was swimming in incidents: ‘Impossible Travel’, ‘Use of VPN’, ‘Use of TOR’, ‘Traffic to North Korea’ (this one from the firewall, which had also become intelligent after one of the latest updates), ‘Login at Unusual Time’, ‘Installation of Unknown Software’, ‘Execution of Possible Malicious Process’ and so on, and so on...
More than 99.99% false positives
The false positive rate was well above 99.99 per cent and produced more burn-out rather than more security. The use cases of the security systems were good for many things – but, unfortunately, not for use at a university. ‘Impossible travel’ and ‘use of VPN’? Both absolutely normal, as students and researchers travel all over the world and use whatever internet is currently available. Access from a café in Nigeria? A given – the research team working on excavations there also needs to connect to the network. In the end, most of the ‘incidents’ flagged in red were dealt with under the edict ‘freedom for research and teaching’. The suspicious traffic with North Korea was also easy to explain: the Institute of North Korean Studies was sending data to the otherwise suspicious country.
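The ‘impossible travel’ rule itself is simple enough, which is part of the problem. A minimal sketch of the kind of heuristic such systems apply (the coordinates, timestamps and speed threshold here are purely illustrative assumptions, not any vendor’s actual logic):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(login_a, login_b, max_speed_kmh=900):
    """Flag two logins as 'impossible travel' if covering the distance
    between them would require moving faster than max_speed_kmh
    (roughly the speed of a passenger jet)."""
    (t1, lat1, lon1), (t2, lat2, lon2) = sorted([login_a, login_b])
    hours = (t2 - t1) / 3600
    if hours == 0:
        return True  # simultaneous logins from two places
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_speed_kmh

# A login from Zurich, then one from Lagos 30 minutes later -- for the
# rule an alert, for a researcher on a VPN an ordinary Tuesday.
zurich = (0, 47.37, 8.54)    # (unix seconds, lat, lon)
lagos = (1800, 6.52, 3.37)   # 30 minutes later
print(impossible_travel(zurich, lagos))  # True
```

The rule is perfectly sound for a company whose staff sit in two offices; applied to a campus full of travelling researchers and VPN users, the same arithmetic generates the flood of red described above.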
The security systems based on machine learning and artificial intelligence (statistics, to put it more simply) ‘learn’ from the data of normal companies. They are of little use in the environment for whose security Paul is responsible. What a shame.
Systems, manufacturers and service providers that assume every device on the network is managed and equipped with an endpoint detection and response agent gulp when Paul draws their attention to the 10,000 devices that students ‘manage’ themselves.
This should all be better with ChatGPT? Paul sighs again…
The performance of GPTs and LLMs is remarkable. I do not need to repeat here the hype, the justified criticism or the warnings that have been written and spoken in recent months. That there is no intelligence behind it – only a statistical prediction of which word best follows the words already written or generated – is probably well known by now.
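The ‘next word by statistics’ point can be made concrete with a toy bigram model – a drastic simplification of what an LLM actually does (real models use neural networks over vast corpora, not word counts), but the principle of predicting the statistically likely continuation is the same. The corpus here is an invented example:

```python
from collections import Counter, defaultdict

# 'Train' a toy bigram model: count which word follows which.
corpus = "the cat sat on the mat and the cat slept".split()
follows = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1][w2] += 1

def next_word(word):
    """Return the statistically most likely continuation."""
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # cat  ('cat' follows 'the' twice, 'mat' once)
```

No understanding is involved at any point – just counting and picking the most frequent continuation, which is the observation the criticism above rests on.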
We are currently experiencing a Cambrian explosion of further developments, application possibilities and (re)combinations of technologies. New, better, more specific LLMs with refined training sets appear every day. They do things we can only marvel at. The genie is out of the bottle and cannot be put back. By the looks of it, the large corporations will not be able to keep the basic technologies secret. Open source will win – at least if you follow the reasoning in the leaked internal Google document.
Potential for Paul
Coming back to Paul, who operates in a scenario that is out of line with the commercial mainstream but is nonetheless his reality: scepticism is understandable. We are all aware of the exaggerated promises of manufacturers, who like to overshoot the mark for sales purposes.
And yet, LLMs and GPTs are opening up paths and opportunities with uncertain but great potential. At present, we don’t know how these technologies (I’m hesitant to use the term AI) will develop and what will be possible in a few weeks’ time. If you haven’t tried yet: have a play with AutoGPT and marvel at what is already possible in terms of autonomous work. And then try to envision where we’ll be in one, two, four or eight months.
It’s hard to gauge how much use these tools will be. Perhaps GPTs are the saviours of cyber security – and of all the Pauls in the world. The possibility is definitely there.