Artificial Intelligence in Brief, Vol. 2: AI and cybercrime
In our second installment of "Artificial Intelligence in Brief," we delve into the interplay of AI and cybercrime.
AI is a buzzword, a driver of industry, and a font of innovation. Unfortunately, its versatility has made it highly attractive to criminal groups, who have leveraged AI to enhance existing cybercrime methods and build new ones. The McDonald Hopkins Data Privacy team presents an overview of some of the ways AI is being used maliciously and what defensive measures may look like.
How are criminals using artificial intelligence?
The sheer potential of AI lets cybercriminals employ it in a variety of ways. AI-based crime platforms such as FraudGPT and WormGPT leverage large volumes of data to run complex, evolving scams, including more sophisticated business email compromise attempts. Once a criminal has penetrated an environment, they may deploy an automated invoice-swapping tool that quietly alters every set of payment details it can detect, even across language barriers. AI has also refined lookalike applications, custom phishing kits, and classic ransomware schemes.
Many AI-based tools show new levels of sophistication and polish. Spearphishing, once frequently betrayed by poorly drafted or translated messages, has become far more effective now that AI can generate fluent, flawless text. AI also enables criminals to send more and better spam that is likelier to slip past mailbox filters, and to mine vastly more data for social engineering. Some observers note that the accessibility of AI tools has lowered the barrier to entry for cybercriminals or, as one expert put it, “The hottest new programming language is English.”
AI is also opening the door to new and alarming attacks, such as voice cloning. Most people are wary of a sudden, unexpected chat message from a ‘relative’ who desperately needs money transferred through dubious means, but AI raises the stakes by generating convincing audio messages. In some cases, a sample only seconds long is enough for an AI to closely replicate someone’s voice, a particular concern for industries, such as financial services, that rely on voice-based verification. Few expected attacks of this sophistication to arrive so soon, yet voice cloning has joined deepfakes as an increasingly common threat.
What about using AI for countermeasures?
AI’s versatility is just as valuable for cybersecurity as it is for cybercrime, and defenders are already leveraging it for many tasks. An estimated 35% of CISOs use AI for security, with many more planning to implement it. Making waves in both public and private institutions, AI-enhanced cybersecurity is here to stay.
The sheer volume of AI-generated content also means that for every good product there may be several bad ones, giving defenders a steady stream of fresh malicious examples to train their own tools against.
In addition, some of the catastrophic predictions about criminal use of AI have not materialized. WormGPT, which circulated on hacking forums, already appears to be out of business following an influx of bad press. Its downfall was swift: in development by March 2023, launched in June 2023, and offline by August 2023.
In the next installments of "Artificial Intelligence in Brief," we will address industry-specific concerns from both a business operations perspective and the ever-evolving legal landscape. If you have questions about how AI can enhance your cybersecurity posture, want to prepare for new privacy requirements, or think you may have experienced a cybersecurity incident, contact a member of McDonald Hopkins' national cybersecurity and data privacy team.