How can Artificial Intelligence make Internet applications more secure?
Holger Hanselka: Generally speaking, protecting only the external borders of a complex IT system is not enough. We must also be able to react if part of the IT system has been taken over by an attacker. For this we need reliable attack detection, and this is where AI systems can show their great potential and substantially increase security. It is also possible to harden IT systems in advance by letting AI systems attack them, thereby discovering weak points before a system goes into use. But we must be clear: AI, like many technologies, has a dual-use character. On the one hand, AI can be applied to "harden" IT systems; on the other hand, attacking with AI can trigger a new arms race between attackers and defenders.
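The attack detection mentioned above can be illustrated with a deliberately simple sketch: learn a statistical baseline from normal traffic and flag observations that deviate strongly from it. The feature (requests per minute) and the threshold are hypothetical stand-ins for the far richer models a real AI-based detector would learn.

```python
# Toy anomaly-based attack detection: a statistical baseline as a
# stand-in for a learned model. Feature and threshold are illustrative.
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn a simple baseline (mean, std deviation) from normal traffic."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, k=3.0):
    """Flag values more than k standard deviations from the baseline mean."""
    mu, sigma = baseline
    return abs(value - mu) > k * sigma

# Synthetic "normal" traffic: requests per minute under typical load.
normal = [98, 102, 101, 99, 100, 103, 97, 100, 101, 99]
baseline = fit_baseline(normal)

print(is_anomalous(100, baseline))   # typical load: not flagged
print(is_anomalous(5000, baseline))  # sudden flood: flagged as anomaly
```

Real systems replace this baseline with models trained on many traffic features, but the principle is the same: the detector reacts to behavior it has never seen during normal operation.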
So do new dangers threaten if cyber criminals also use Artificial Intelligence?
Holger Hanselka: Exactly. Of course the "enemies" also use AI, and there will be two effects. AI systems can uncover completely new vulnerabilities and detect attacks. At the same time, elaborate attacks that so far only human experts can carry out will be automated in the future and will therefore occur in far greater numbers. I am thinking above all of social engineering, which uses clever deception to induce people to reveal, for example, their bank data. AI systems can automatically generate customized phishing e-mails, and in real phone calls they can impersonate people to whom you supposedly must urgently give a password. In a broader sense, social engineering will also include targeted influencing through half-truths or fake news. It is particularly worrying here that AI systems can automatically falsify video and audio files, while people find what they see or hear highly credible.
What challenges need to be overcome in order to exploit the full potential of Artificial Intelligence for IT security?
Holger Hanselka: When it comes to IT security, we want to provide reliable guarantees. We therefore cannot rely on trial and error, because you cannot anticipate the intentions and plans of an intelligent attacker. One of the problems with using AI is that we do not yet understand why an AI system makes this or that decision. Here we urgently need further research before AI systems can be relied upon for critical decisions. One way forward could be a combination of classical algorithms and AI, in which the algorithms check the proposals of the AI. Another possibility would be AI systems that not only output decisions but also give the reasons for them.
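The combination of classical algorithms and AI described above can be sketched as follows. The `ai_propose` function is a hypothetical stand-in for any opaque learned model; the allowlist rule and action names are likewise invented for illustration. The key point is that a deterministic, auditable check sits between the AI's proposal and its execution.

```python
# Sketch: a classical, verifiable rule checks each proposal from an AI
# component before it takes effect. All names here are hypothetical.

def ai_propose(alert):
    """Stand-in for an opaque AI model proposing a response to an alert."""
    # Imagine a learned model here; this stub always proposes blocking.
    return {"action": "block_host", "host": alert["source"]}

# Hard constraint: hosts that must never be blocked automatically.
ALLOWLIST = {"10.0.0.1"}

def checked_response(alert):
    """Classical algorithm: apply hard, auditable rules to the AI output."""
    proposal = ai_propose(alert)
    if proposal["action"] == "block_host" and proposal["host"] in ALLOWLIST:
        # The rule overrides the AI and defers to a human instead.
        return {"action": "escalate_to_human", "host": proposal["host"]}
    return proposal

print(checked_response({"source": "203.0.113.9"}))  # AI proposal accepted
print(checked_response({"source": "10.0.0.1"}))     # rule overrides: escalate
```

The design choice is that the guarantee comes from the simple, provable rule, not from the AI: even if the model misbehaves, it can never violate the hard constraint.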