The Non-human Hacker Is Not Science Fiction, But It's Usually On Your Side

Intelligent machines and computers have been with us for decades, and throughout their history security has been a constant struggle between cybersecurity experts and threat actors. However, for the first time, neither party is necessarily human; the threat Artificial Intelligence (AI) and machine learning (ML) pose to the public, businesses and government agencies is no longer in our future, but happening now all around us. Interestingly, the very same technology that threatens to compromise us all can also help safeguard us against rogue entities – and is already doing so.

Are concerns about AI misplaced? No, but many of our fears are misdirected. For instance, we fear the loss of our jobs, ignoring the fact that machines have long been replacing humans in mundane tasks and freeing us from them – think of switchboard operators, or legal clerks left reviewing thousands of irrelevant documents. We also fear that automated cars are dangerous, when they are probably safer than human drivers and cheaper to run.

Likewise, AI is far from becoming smarter than humans: beyond what these systems are programmed to do, they are inherently stupid. Your autonomous car cannot teach you maths if it was never designed to do so – indeed, it does not even recognise that it is a car.

The most pressing challenge facing us today is keeping rogue interests – armed with ever cleverer, relentless AI bots – from accessing and misusing our personal information.

In the past, hacking big businesses and government systems was a time-consuming prospect, often involving extensive planning and manpower to exploit vulnerabilities – usually through phishing and malware designed to fool human users into giving up access to sensitive information.

However, criminals could weaponise AI technology to expand the scope and pace of attacks on cyberinfrastructure, compressing campaigns from days into hours – because the AI can continuously and tirelessly do much of the legwork required to spot weaknesses. And by leveraging ML, threat actors can now program software that learns and develops new strategies to bypass obstacles and security protocols without constant human input.

This combination is so difficult to counter that, for many enterprises, it is only a matter of time before their security systems are breached and their data compromised.

Why should we care?

AIs are getting so good at understanding and responding to human language and emotion that we rarely notice them, even when they are virtually in our face. Our image of bad robots is confined to science fiction; yet AI is not always a physical, tangible presence in our lives – no wonder more than two out of three people fail to realise they are using AI regularly.

Nevertheless, we let AIs make decisions for us without any direct human input. Major financial institutions already turn to robots and algorithms to construct hedges and trade markets, because of their ability to approximate human intuition while filtering out bias and emotion. These systems help us decide what to watch on Netflix, which car insurance policy to consider, and how to get to our destination on time. But they are not infallible: all are susceptible to hacking and can be hijacked for malicious ends.

The proliferation of the technology also means that more and more of our data is managed and used by AI to deliver specific outcomes – like checking our credit scores or finding the cheapest holiday flights at a moment's notice. The volume of enterprise data is now so vast that human oversight is not always possible whenever information is transferred or accessed between devices and systems. This lack of supervision means that malicious software can imitate legitimate queries and fool the system.

What can we do?

We have made a lot of progress in this space in detecting, responding to and recovering from data breaches; but this framework is post-event – i.e. it only applies after a breach has taken place.

What we need to get better at is pre-event data protection (identify and protect), because once data is stolen, there is not much anyone can do.

Businesses are increasingly investing in ML to intervene before a breach occurs by detecting misbehaviour rather than matching known patterns. This means that AI-assisted security software does not have to rely on malware and virus signatures, but on how code behaves once it is inside the system. Crucially, these systems do not need to recognise every type of malware; they learn from past experience what abnormal behaviour looks like within their environment and move to resolve the issue quickly.
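To make the behaviour-based idea concrete, here is a minimal sketch using scikit-learn's IsolationForest. The per-process features (files touched per minute, bytes sent over the network, privilege escalations) and all the numbers are illustrative assumptions, not any vendor's actual detection logic:

```python
# Behaviour-based anomaly detection sketch (illustrative only).
# Assumes each row describes one process: files touched per minute,
# bytes sent to the network, and number of privilege escalations.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical baseline of normal activity, gathered during quiet operation.
rng = np.random.default_rng(42)
normal_activity = rng.normal(loc=[20, 5_000, 0], scale=[5, 1_500, 0.1], size=(500, 3))

# Fit a model of "normal" behaviour - no malware signatures involved.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_activity)

# A new observation: a process suddenly reading many files and pushing
# large volumes of data out - ransomware-like behaviour.
suspicious = np.array([[400, 250_000, 3]])
print(detector.predict(suspicious))  # -1 flags an anomaly, 1 means normal
```

The point of the sketch is that the model never sees a signature: anything that deviates far enough from the learned baseline is flagged, even if the malware strain is brand new.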

Furthermore, organisations such as ours are focusing more on the identify and protect parts of the framework, helping businesses and individuals discover sensitive data and protect that information with robust encryption. With AI support, enterprises can continuously manage who has access to mission-critical data and at what level of encryption, so that even if there is a breach, the data remains useless without the right authentication.
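As a rough illustration of the "breached but useless" point, the sketch below uses the Fernet recipe from the Python cryptography library. The record contents and key handling are simplified assumptions for the example; real persistent-encryption products delegate keys to access-controlled key-management systems:

```python
# Encrypt-at-rest sketch (illustrative only): without the key,
# exfiltrated data is just ciphertext.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()        # in practice, held by a key-management service
vault = Fernet(key)

record = b"account=12345678; credit_score=742"   # hypothetical sensitive record
ciphertext = vault.encrypt(record)

# An attacker who steals the ciphertext but not the key gets nothing usable.
attacker = Fernet(Fernet.generate_key())          # wrong key
try:
    attacker.decrypt(ciphertext)
except InvalidToken:
    print("Stolen data is useless without the right key")

# The authorised key holder recovers the record.
print(vault.decrypt(ciphertext))
```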

Companies that do not focus on identify and protect – or that spend most of their time on detect, respond and recover – will likely succumb to the fast-paced nature of cyberwarfare, and risk our personal information in the process.

Joe Sturonas is the chief technology officer at PKWARE.