Beware! Artificial Intelligence Could Make Scams Look Legitimate and Spread Malware Threats Like Ransomware

The National Cyber Security Centre (NCSC), part of the GCHQ spy agency in the UK, has issued a warning about the increasing threat of cyber-attacks facilitated by artificial intelligence (AI). According to the NCSC's assessment, generative AI tools, which can produce convincing text, voice, and images from simple prompts, are making it challenging to differentiate between genuine emails and those sent by scammers and malicious actors.

The agency predicts that AI, particularly generative AI and large language models powering chatbots, will significantly contribute to the rise in cyber threats over the next two years. One of the primary concerns is the difficulty in identifying various types of attacks, such as phishing, spoofing, and social engineering.

As AI tools grow more sophisticated, even individuals with a solid grasp of cybersecurity are expected to struggle to judge whether an email or password reset request is genuine.

AI Could Expand the Reach of Ransomware

Ransomware attacks, which have hit institutions such as the British Library and Royal Mail over the past year, are also anticipated to increase. The NCSC warns that AI lowers the barrier to entry for amateur cybercriminals, making it easier for them to access systems, gather information on targets, and carry out attacks that paralyze computer systems, extract sensitive data, and culminate in demands for cryptocurrency ransoms.

Generative AI tools are already being used to create fake "lure documents" that appear convincing because they avoid the spelling and grammatical errors that often give phishing attempts away. While generative AI is unlikely to make ransomware code itself more effective, it can help attackers identify and select targets. The NCSC suggests that state actors are likely to be the most adept at leveraging AI in advanced cyber operations.

Good and Bad of AI

In response to the growing threat, the NCSC emphasizes that AI can also serve as a defensive tool, enabling the detection of attacks and the design of more secure systems. The report coincides with the UK government's introduction of new guidelines, the "Cyber Governance Code of Practice," encouraging businesses to enhance their readiness to recover from ransomware attacks.

However, some cybersecurity experts, including Ciaran Martin, the former head of the NCSC, argue for stronger action, suggesting a fundamental reevaluation of approaches to ransomware threats. Martin emphasizes the need for stricter rules around ransom payments and cautions against unrealistic strategies, such as retaliating against criminals in hostile nations.