Is AI a Double-Edged Sword In Cybersecurity?


Whether you consider it good or bad, Artificial Intelligence (AI) is the next stage of our technological evolution. Like the major advances before it (electricity, computers, the internet), AI will usher in new changes and impact almost every facet of our everyday lives.

In fact, we can already see AI's impact in social media marketing, in search engines like Google that keep learning from our search patterns and behaviours, and in improved healthcare. It's safe to say that AI's influence on our societies and our lives will be more far-reaching than we now realize, and cybersecurity is no exception.

AI and Cybersecurity

Right now, AI's overlap with cybersecurity is fairly limited. It's used by cybersecurity and information security professionals to improve their security systems and prepare for the next generation of cyberattacks. Meanwhile, the attack surface of enterprises, i.e., all the avenues through which a cyberattack can arrive, is expanding rapidly. The more an enterprise depends on the internet, and the more entryways there are into its business data, the higher the chances of a hacker gaining a foothold in the business through ransomware or another form of cyberattack.

Traditional cybersecurity might not be adequately equipped to deal with the ever-changing and increasingly complex demands of the market; it needs AI's scalability to lend it additional power. But at the same time, AI can also be used to undermine a firm's cybersecurity.

AI As A Powerful Cybersecurity Tool

Conventional cybersecurity systems and tools are programmed to perform specific jobs. No matter how advanced they may be, they are limited by the ingenuity of their programmers and their code. So naturally, the system is highly likely to fail against an unexpected and unusual threat. But once the threat is identified, the system can be updated, and the same attack won’t work the second time.

There are other layers of protection as well, which ensure that even the damage from an unprecedented attack is kept to a minimum. But that cycle is still bound to a cause-and-effect hierarchy: a system is updated (effect) only after an attack (cause) has happened.
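
To make that cycle concrete, here is a minimal sketch of how a conventional, signature-based scanner behaves: it only catches what it already knows, and a novel attack is blocked only after someone adds a new rule. The signatures and payloads below are invented for illustration.

```python
# Minimal sketch of signature-based detection: the scanner only knows the
# patterns it has been given, so a novel payload passes untouched until an
# analyst adds a new signature after the incident.
KNOWN_SIGNATURES = {
    "eval(base64_decode(",   # hypothetical webshell snippet
    "powershell -enc ",      # hypothetical encoded-command launcher
}

def is_malicious(payload: str) -> bool:
    """Flag a payload only if it matches a known signature."""
    return any(sig in payload for sig in KNOWN_SIGNATURES)

novel_attack = "python -c 'import socket;...'"   # not in the signature list
print(is_malicious(novel_attack))                # False: the attack gets through

# After the incident (cause), the signature list is updated (effect) ...
KNOWN_SIGNATURES.add("python -c 'import socket")
print(is_malicious(novel_attack))                # True: the same attack fails next time
```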

What if it could happen in real time? It's exciting to imagine a cybersecurity team working heroically to stop a cyberattack while people on the other side (wearing Guy Fawkes masks) try to get around their defences, but that's rarely how it happens.

But with AI, we have a real shot at augmenting our conventional cybersecurity systems and equipping them with tools that allow them to deal with a cyberattack in real time. We can teach our systems to "learn" from the attack and modify their defences right away. That's a brutal oversimplification of a highly complex process, but it's the intention behind AI-augmented cybersecurity.
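
As a rough illustration of what "learning during the attack" could look like, the sketch below uses scikit-learn's SGDClassifier, whose partial_fit method lets a model be updated batch by batch instead of waiting for an offline retrain. The traffic features, labels, and numbers are all synthetic assumptions, not a real detection pipeline.

```python
# Minimal sketch of incremental learning while an attack unfolds: each newly
# labelled batch of traffic nudges the model without taking it offline.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def traffic_batch(n, malicious_rate):
    """Synthetic flows: [bytes_sent, failed_logins, requests_per_min]."""
    y = (rng.random(n) < malicious_rate).astype(int)
    X = rng.normal(loc=y[:, None] * 2.0, scale=1.0, size=(n, 3))
    return X, y

model = SGDClassifier()
X0, y0 = traffic_batch(500, 0.1)
model.partial_fit(X0, y0, classes=[0, 1])      # initial fit on historical traffic

# As the attack progresses, freshly labelled batches update the model in place.
for _ in range(10):
    X_new, y_new = traffic_batch(100, 0.3)
    model.partial_fit(X_new, y_new)

X_test, y_test = traffic_batch(1000, 0.3)
print("detection accuracy:", model.score(X_test, y_test))
```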

The Defense Advanced Research Projects Agency (DARPA) held its first all-machine cyber hacking tournament, the Cyber Grand Challenge, in 2016, where AI-based systems had to automatically identify flaws in software and patch them in real time.

AI's ability to consume and monitor large amounts of data, cover a much wider attack surface than traditional cybersecurity, and deal with attacks in real time makes it a very potent cybersecurity tool.

AI’s Advantage And Current Limitations

AI's core edge over conventional systems is its ability to learn. This allows it to go far beyond the scope of conventional code (no matter how extensive that code is). The more relevant data an AI has, the more it learns and, consequently, the better it performs.

But it's also important to understand that in its current capacity, AI is not capable of replacing conventional cybersecurity altogether. Human programmers and cybersecurity experts are still at the heart of this field, and AI is merely a new and extremely powerful tool in their arsenal. To paraphrase IBM Security's VP of strategy and design: cybersecurity experts are like police officers who, through training and experience, develop an "intuition" that allows them to prevent crimes, but as humans they have certain limitations. If an officer partners with a well-trained police dog (the AI), they can leverage its sharper senses and faster reflexes to augment their own and offer better protection.

How far an AI can go in the realm of cybersecurity depends quite heavily on how effective its machine learning algorithms are and on the quality of the data it's fed. And even though it can only learn from the past, i.e., the cyberattacks that have already happened and the preventive measures deployed in response, an AI might be able to identify patterns that human security experts can't, and it might come up with better preventive measures.
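
The pattern-finding point can be illustrated with a small, hypothetical example: an IsolationForest trained only on historical traffic flags new behaviour that doesn't fit the patterns it has already seen. The features and values below are invented for the sketch, not drawn from any real dataset.

```python
# Minimal sketch of pattern-finding on past data: the model is trained on
# traffic that has already been observed, then asked to judge new flows.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Historical, mostly benign flows: [session_length, bytes_out, distinct_ports]
history = rng.normal(loc=[30, 500, 3], scale=[10, 150, 1], size=(2000, 3))

detector = IsolationForest(contamination=0.01, random_state=1).fit(history)

# New observations: one ordinary session and one that pushes data out across
# many ports -- a combination absent from the training history.
new_flows = np.array([
    [28,   480,  3],      # looks like the past
    [25, 90000, 40],      # unusual pattern
])
print(detector.predict(new_flows))   # 1 = normal, -1 = flagged as anomalous
```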

AI Against Cybersecurity

Like any other tool, AI can be used for both good and bad purposes. When it's used to augment cybersecurity, it's our ally, but we also have to accept that it will be used against us and prepare ourselves for that. According to a Forrester study, 88% of cybersecurity experts believe that offensive and malicious use of AI is inevitable.

There are a few ways AI can be used against cybersecurity.

· A malicious AI might find it difficult to corrupt a good cybersecurity AI directly, but it might be able to corrupt the data used to teach that AI, creating blind spots and loopholes in its reasoning that hackers can exploit.

· AI can be used to create malware that constantly mutates, making it virtually undetectable by signature-based tools.

· If a malicious AI is set loose to corrupt the data that will be used to train a cybersecurity AI, security teams might find it difficult and expensive to source clean training data. This technique is known as "data poisoning", and as per one estimate, corrupting just 3% of the training data can lead to an 11% drop in an AI's accuracy (a toy demonstration follows this list).

· A Generative Adversarial Network (GAN) is a machine learning framework in which two neural networks are pitted against each other, each learning from the other's mistakes, until one of them can produce output that is practically indistinguishable from the real thing. This can be used to learn what normal internet traffic and user behaviour look like and then bypass cybersecurity defences by mimicking a genuine user.
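
Below is the toy data-poisoning demonstration referenced above: a fraction of training labels is flipped adversarially and the resulting loss in accuracy is measured. The dataset and model are synthetic stand-ins, so the exact numbers will differ from the 3%/11% estimate quoted earlier.

```python
# Toy illustration of data poisoning via adversarial label flips.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(fraction):
    """Flip `fraction` of the training labels and report clean test accuracy."""
    rng = np.random.default_rng(0)
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(fraction * len(y_tr)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # adversarial label flips
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)

for frac in (0.0, 0.03, 0.10, 0.30):
    print(f"{frac:.0%} poisoned -> accuracy {accuracy_with_poisoning(frac):.3f}")
```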

These are just some of the ways AI can be used against cybersecurity.

The Future Of Cybersecurity

The abuse of AI and its adversarial use against cybersecurity is inevitable, and the smartest thing to do is fight fire with fire. AI-augmented cybersecurity is our best chance against AI-based cyberattacks. AI might be a double-edged sword for cybersecurity, but it's also the only one of its kind; a company that refuses to wield that sword may be left defenceless against the new generation of cyberattacks.

It would be prudent to take preventive measures preemptively, and to do that you'd need the right cybersecurity partner. An active head-hunt for the right partner can be time- and resource-consuming. Instead, you can leverage technology and use a trusted cybersecurity aggregator platform to compare some of the best vendors on the market and choose the one that fits your cybersecurity needs (and your budget) perfectly. For more details, visit Cyberpal.
