
AI and Cybersecurity: How Hackers Are Abusing AI

Cybersecurity and AI are two of the most talked-about topics in tech at the moment. Cyberattacks continue to make headlines, while AI is having a huge impact on nearly every industry.

AI is rapidly improving the way businesses detect, analyse, and respond to cyber threats. By embracing AI alongside more traditional tools, cybersecurity teams can work more accurately and effectively, identifying, monitoring, and mitigating risks more efficiently.

But as is often the case with innovation, anything that can be used for good can also be used for the not-so-good. AI can be a dangerous tool in the wrong hands. This blog looks at how cybercriminals are abusing and misusing AI, and what you can do to protect your business.

How Hackers Use AI to Attack

Rapid Malware Creation

Generative AI is making it easier for any wannabe-hacker to generate malware with little-to-no knowledge of coding. Cybercriminals are using freely available tools such as ChatGPT to rapidly create malicious code and launch large-scale cyberattacks.

Although malware produced by AI chatbots tends to be low quality, more sophisticated hackers can combine a few tweaks to a chatbot’s API with machine learning models to accelerate the way they develop and deliver malware.

Social Engineering

Social engineering is the umbrella term for a range of malicious activities where cybercriminals use manipulation to trick users into revealing private information or making security mistakes. AI is making it easier for hackers to test, refine, and deliver hyper-personalised social engineering scams at scale. 

Using AI, hackers can automate aspects of attacks and deliver increasingly personalised and persuasive messages with minimal effort. They can also analyse the digital footprint of their targets, enabling them to identify the communication styles and vulnerabilities of a specific individual. This helps to improve the success rate of social engineering tactics.  

It’s really important that users are aware of social engineering red flags. Delivering ongoing and up-to-date security awareness training is the best way to combat social engineering and ensure employees understand the increasing complexity of AI and cybersecurity.

Deepfakes

One of the scariest results of the crossover between cybersecurity and AI is the use of deepfakes as a tactic in cybercrime. Deepfakes are AI-generated images, video, and audio. Hackers are utilising deepfake technology to trick people in highly sophisticated social engineering attacks. Using deepfake AI, threat actors can mimic the look, voice, and mannerisms of individuals (often known to the recipient) with unnerving accuracy.

One recent cyberattack to hit the headlines involved a large-scale scam using a deepfake CFO. Using publicly available video and conference-call recordings, a group of fraudsters successfully conned a multinational firm out of £20 million. An unsuspecting finance employee was invited to a conference call with what they believed were their CFO and other team members, and was persuaded to make multiple large payments to different banks.

After the payments had been made, it emerged that the requests had come from a threat actor, resulting in a huge financial loss for the firm.

Automated Phishing Attacks

Phishing is the most common form of cybercrime, accounting for over 80% of cyberattacks in the UK. It’s a type of social engineering tactic typically delivered through email or text, where an attacker aims to trick the recipient into clicking a malicious link or divulging private information.  

Phishing scams have traditionally arrived in your inbox riddled with spelling mistakes and grammatical errors, often a big giveaway that you could be dealing with a fraudster. Generative AI makes phishing attacks far more convincing by helping hackers fix their grammar and spelling and adopt a professional writing style.

AI is also being used to gather intelligence at speed. Hackers can use AI to comb through social media platforms, marketing websites, and public records to create hyper-personalised and context-aware phishing messages. 

As it becomes more difficult to distinguish a genuine email from a scam, it’s important to ramp up internal security training programmes in your business. With the majority of data breaches resulting from human error, employees need to know how to validate the legitimacy of requests delivered through email and messaging platforms.
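Alongside training, you can give employees a concrete technical check. The sketch below is a minimal example, assuming a raw message saved to disk (the filename suspicious.eml is a placeholder), that uses Python’s standard email library to summarise the SPF, DKIM, and DMARC results a receiving mail server records in the Authentication-Results header. A failed or missing check is a strong cue to verify the request through another channel.

```python
# A simple header check that can back up security awareness training.
# Header contents vary by mail provider, so treat this as illustrative
# rather than a complete phishing filter.
from email import policy
from email.parser import BytesParser

def auth_results(raw_email: bytes) -> dict:
    """Summarise SPF, DKIM, and DMARC results from the message headers."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_email)
    header = str(msg.get("Authentication-Results", "")).lower()
    summary = {}
    for check in ("spf", "dkim", "dmarc"):
        if f"{check}=pass" in header:
            summary[check] = "pass"
        elif f"{check}=" in header:
            summary[check] = "fail/other"
        else:
            summary[check] = "missing"
    return summary

# "suspicious.eml" is a placeholder for any saved raw message.
with open("suspicious.eml", "rb") as f:
    print(auth_results(f.read()))
# e.g. {'spf': 'pass', 'dkim': 'fail/other', 'dmarc': 'missing'}
```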

Password Hacking

Cybercriminals are exploiting AI to improve the algorithms they use to crack passwords. With the ability to scan massive datasets in seconds, AI gives fraudsters a serious speed advantage. In fact, after running over 15 million common passwords through an AI password hacker called PassGAN, one report found that any seven-character password could be deciphered in under six minutes, even if it included numbers, symbols, and a mix of upper- and lower-case letters.
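To put those numbers in perspective, a little arithmetic shows why password length matters so much. The sketch below estimates worst-case brute-force time for a given length and character set; the 10 billion guesses per second rate is an illustrative assumption, as real speeds depend heavily on the hashing algorithm and the attacker’s hardware.

```python
# Back-of-the-envelope brute-force estimate. The guess rate below is an
# assumed figure for illustration only; real attack speeds vary widely.
import string

GUESSES_PER_SECOND = 10_000_000_000  # assumption: 10 billion guesses/second

def worst_case_hours(length: int, charset_size: int) -> float:
    """Hours to exhaust every combination of `length` characters."""
    keyspace = charset_size ** length
    return keyspace / GUESSES_PER_SECOND / 3600

# 94 printable ASCII characters: letters, digits, and symbols.
full_set = len(string.ascii_letters + string.digits + string.punctuation)

for length in (7, 12):
    print(f"{length} characters: {worst_case_hours(length, full_set):,.1f} hours")

# At this assumed rate, every 7-character password falls in under 2 hours,
# while 12 characters would take roughly 1.5 million years.
```

AI guessers like PassGAN beat even these worst-case figures for common passwords because they try likely candidates first rather than searching the keyspace blindly.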

AI password-hacking tools use breached password data to improve their algorithms and decipher passwords quicker than ever. Your accounts are highly vulnerable if the passwords you use have ever been breached or leaked, so it’s vital to follow the latest cybersecurity recommendations to stay secure.

Using multi-factor authentication (MFA) is a simple way to better protect your accounts from password-related cyberattacks. We also recommend that you don’t recycle passwords, regularly update your account credentials, and avoid easy-to-guess words and phrases. You can find further information on good password hygiene, alongside other ways to protect your online workspaces, in this blog.
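If you want to check whether a password has already appeared in breach data, the Have I Been Pwned “Pwned Passwords” service exposes a public range API designed for exactly this. The sketch below is a minimal example using only Python’s standard library; thanks to the API’s k-anonymity design, only the first five characters of the password’s SHA-1 hash ever leave your machine.

```python
# Check a password against known breach data via the Have I Been Pwned
# range API. Only a 5-character hash prefix is sent, never the password.
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times this password appears in known breaches."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # The response lists hash suffixes and counts, one "SUFFIX:COUNT" per line.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if breach_count("P@ssw0rd") > 0:
    print("This password has been breached; never use it.")
```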

Data Poisoning

From a different perspective, hackers are corrupting AI tools by targeting their source: the training data. By ‘poisoning’ the data an algorithm learns from, attackers severely undermine the integrity and reliability of the information it produces. As AI tools ‘learn’ from datasets, bad input results in bad output.
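A toy experiment makes the point concrete. The sketch below, assuming scikit-learn and a synthetic dataset as stand-ins for a real training pipeline, flips a portion of the training labels, the simplest form of poisoning, and compares the resulting model’s accuracy against one trained on clean data.

```python
# Toy illustration of label-flipping data poisoning: the same model is
# trained on clean and on partially poisoned labels, and accuracy drops.
# Real poisoning attacks are subtler, but the effect is the same.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The attacker flips the labels of 30% of the training set.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
dirty = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print(f"clean labels:    {clean.score(X_test, y_test):.2f}")
print(f"poisoned labels: {dirty.score(X_test, y_test):.2f}")  # typically lower
```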

To launch a data poisoning attack, a cybercriminal needs to gain access to the underlying data. Therefore, it is more of a risk to small or privately held datasets used to train a specific AI tool. Launching an attack on AI software built on public data would require a highly sophisticated and coordinated effort.

Data poisoning has the potential to have a huge impact on the AI and cybersecurity landscape, and companies are being warned to stay alert to an increase in this type of attack.

 

Cybersecurity and AI – Now What?

With the support of AI, hackers can generate a higher volume of attacks with a higher success rate. However, businesses can also use AI to improve the accuracy and effectiveness of their cybersecurity approach.

If you want to learn more about the impact of AI on cybersecurity, or need support in improving your security approach, get in contact with our team today. We have a dedicated cybersecurity practice with an accredited team of cybersecurity experts, here to provide end-to-end cybersecurity services and 24×7 monitoring that keep your business secure against an increasingly complex threat landscape.