Global Finances Daily

Social engineering in the era of generative AI: Predictions for 2024

May 10, 2024
in Protection


Breakthroughs in large language models (LLMs) are driving an arms race between cybersecurity and social engineering scammers. Here’s how it’s set to play out in 2024.

For businesses, generative AI is both a curse and an opportunity. As enterprises race to adopt the technology, they also take on a whole new layer of cyber risk. The constant fear of missing out isn’t helping either. But it’s not just AI models themselves that cyber criminals are targeting. In a time when fakery is the new normal, they’re also using AI to create alarmingly convincing social engineering attacks or generate misinformation at scale.

While the potential of generative AI to assist creative and analytical processes is beyond doubt, the risks it introduces are no less apparent. Phishing emails created with the technology are far more convincing than those riddled with typos and grammatical errors. Profile images produced by image synthesizers are increasingly hard to tell apart from the real thing. Now we’re reaching a stage where even deepfake videos can easily fool us.

Equipped with these technologies, cyber criminals can create highly convincing personas and extend their reach through social media, email and even live audio or video calls. Admittedly, it’s still early days for generative AI in social engineering, but there’s little doubt that it will come to shape the entire cyber crime landscape in the years ahead. With that in mind, here are some of our top generative AI-driven cyber crime predictions for 2024.

Technical expertise will no longer be a barrier to entry

Crime as a service is nothing new. Cyber crime syndicates have been lurking on dark web forums and marketplaces for years, recruiting less technically minded individuals to expand their nefarious reach.

But with the democratization of AI and data come new opportunities for non-technical threat actors to join the fray. With the help of LLMs, would-be cyber criminals need only enter a few prompts to create a compelling phishing email or a malicious script. This new generation of threat actors can now streamline the weaponization of AI.

In October 2023, IBM published a report finding that the click-through rate for an AI-generated phishing simulation email was 11%, compared to 14% for a human-written one. Humans came out ahead, but the gap is closing fast as the technology advances. Given the rise of more sophisticated models that can better mimic emotional intelligence and generate personalized content, AI-created phishing content will likely become every bit as convincing, if not more so. And that’s before considering that a human can take hours to craft a convincing phishing email, whereas generative AI needs only minutes.

Routine phishing emails will no longer be easily identifiable by spelling and grammar mistakes or other obvious cues. That doesn’t mean social engineering scammers are getting smarter, but the technology available to them most certainly is.

Moreover, scammers can easily scrape data from the brands they’re trying to impersonate and then feed that data into an LLM to create phishing content that embeds the tone, voice and style of a legitimate brand. Also, given how much we tend to overshare on social media, AI-augmented data scraping is increasingly adept at taking our online personas and turning them into intimate target profiles for highly personalized attacks.


Custom open-source model training will advance cyber crime

Most of the popular generative AI models are closed-source and have robust safety barriers built in. ChatGPT won’t knowingly generate a phishing email, and Midjourney won’t knowingly generate a compromising image that could be used for blackmail. That said, even the most stringently monitored and secured platforms can be abused. For example, people have been trying to jailbreak ChatGPT ever since it came out, using the so-called DAN (do anything now) prompts to get it to act without filters or restrictions.

We’re now in the midst of an arms race between model developers and those who seek to take them beyond their predefined limits. For the most part, this comes down to curiosity and experimentation, including among cybersecurity professionals who want to know what they’re up against.

The bigger risk lies in the development of open-source models, such as Stable Diffusion for image synthesis or GPT4ALL for text generation. Open-source LLMs can be customized, extended and freed from any built-in constraints. Moreover, these models can run on any desktop computer equipped with a sufficiently powerful graphics card, far away from the watchful eyes of the cloud. While custom and open-source models typically require a degree of technical expertise, especially when it comes to training them, they’re certainly not restricted to experts in malware development or data science.

Cyber crime syndicates are already developing their own custom models and selling them via the dark web. WormGPT and FraudGPT are two such examples of chatbots used for developing malware or carrying out hacking attacks. And, just like the mainstream models, they’re under constant development and refinement.

Live deepfake scams will become a serious threat

In February 2024, CNN reported that a finance worker at a multinational firm was scammed into paying out $25 million to fraudsters. This wasn’t the sort of phishing email that most of us are familiar with. Rather, it was a deepfake video in which the scammer used generative AI to create an avatar that convincingly impersonated the company’s chief financial officer during a live conference call.

One could be forgiven for thinking that such an attack sounds like something straight out of a dystopian science fiction scenario. After all, what seemed outlandish just a few years ago is now on its way to becoming the number-one attack vector for sophisticated and highly targeted social engineering attacks.

A recent report found that 2023 alone saw a 3,000% increase in deepfake fraud attempts, and there’s no reason to believe this trend won’t continue through 2024 and beyond. After all, face-swapping technology is now readily available, and like every other form of generative AI, it’s advancing at a pace that’s near impossible for lawmakers and infosec professionals to keep up with.

The only thing holding deepfake video scams back is the substantial computing power involved, particularly for scams carried out in real time. A more immediate concern is the ability of generative AI to mimic voices and writing styles. For example, Microsoft’s VALL-E can create a convincing clone of someone’s voice from a three-second audio recording. Even handwriting isn’t immune from deepfakes.

How can organizations and individuals protect themselves?

Like almost any disruptive innovation, generative AI can be a force for good or bad. The only viable way for infosec professionals to keep up is to incorporate AI into their threat detection and mitigation processes. AI solutions also provide the tools needed to improve the speed, accuracy and efficiency of security teams. Generative AI specifically can assist infosec teams in operations like malware analysis, phishing detection and prevention and threat simulation and training.
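To make the defensive side concrete, here is a minimal, rule-based sketch of phishing scoring in Python. The feature list, weights, and markdown-style link format are illustrative assumptions for this example only; a real AI-assisted pipeline would feed far richer signals into a trained classifier rather than hand-tuned rules.

```python
import re

# Toy phishing heuristic: scores an email body on a few classic red flags.
# Weights and features are illustrative assumptions, not a production model.

URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "act now"}

def phishing_score(body: str) -> float:
    text = body.lower()
    score = 0.0
    # Urgency language is a classic social engineering cue.
    score += sum(0.2 for w in URGENCY_WORDS if w in text)
    # Links whose visible text differs from the actual target are suspicious
    # (here matched in markdown form, [visible](target), for simplicity).
    for visible, target in re.findall(r"\[([^\]]+)\]\((https?://[^)]+)\)", body):
        if visible.startswith("http") and visible not in target:
            score += 0.4
    # Direct requests for credentials or payment details.
    if re.search(r"\b(password|credit card|wire transfer)\b", text):
        score += 0.3
    return min(score, 1.0)

legit = "Hi team, the quarterly report is attached. See you Monday."
phish = ("URGENT: your account is suspended. Verify immediately at "
         "[http://yourbank.com](http://evil.example) and confirm your password.")

assert phishing_score(legit) < phishing_score(phish)
```

Rules like these are brittle on their own, which is precisely the article’s point: as AI-generated phishing sheds the obvious cues, defenders need model-based detection layered on top of simple heuristics.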

The most effective way to keep ahead of cyber criminals is to think like cyber criminals, hence the value of red-teaming and offensive security. By using a similar set of tools and processes to those used by threat actors, infosec professionals are better equipped to stay a step ahead.

By understanding how the technology works and how malicious actors are using it, businesses can also train their employees more effectively to detect synthetic media. In an era when it’s easier than ever to impersonate and deceive, it has never been more important to defend reality against the rising tide of fakery.

If you’d like to learn more about cybersecurity in the era of generative AI and how AI can enhance the abilities of your security teams, read IBM’s in-depth guide.

Editorial Team

© 2025 All Rights Reserved - Global Finances Daily.
