Welcome to Drunk “Vibe Extortion,” the Latest AI Threat

A cybercriminal recorded a threatening video from his bed while visibly drunk. He read the message word-for-word from his phone screen.

The video was sloppy and probably unconvincing, but the script itself was polished, complete with deadlines, pressure tactics, and professional language that the low-skilled, drunk attacker clearly could not have written on his own.

Researchers at Palo Alto Networks’ Unit 42 recently uncovered this video and a new phenomenon they call “Vibe Extortion,” in which cybercriminals use AI chatbots to do the hard work for them.

The term comes from their new Global Incident Response Report 2026, which draws on more than 750 major cyber incidents the team investigated in 2025.

The AI-Assisted Insider – Using the Company’s Own AI Against It

You know that cool new Co-Pilot AI your company launched last year for all employees? Hackers love it too.

In another threat, which Unit 42 calls “Living Off the AI Land,” hackers are now weaponizing companies’ own legitimate AI assistants against them.

Essentially, an intruder can use an internal assistant to pull context at machine speed: integration guides, admin runbooks, network maps. The assistant becomes a force multiplier, letting intruders understand the environment faster and with fewer mistakes.

AI Helps Hackers Exploit Newly Disclosed Vulnerabilities Almost Instantly

According to Unit 42, the cybercrime world has moved well past the “phishing with better grammar” phase.

Criminals are now using generative AI to scan for software vulnerabilities within 15 minutes of public disclosure, sometimes launching attacks before security teams have even finished reading the advisory.

They are also using AI to run reconnaissance on hundreds of targets at once, craft personalized social engineering attacks based on a victim’s job title and professional relationships, and automate the nuts and bolts of ransomware operations.

With ransomware operations now automated, the volume and speed of these attacks are only going to increase.

What Companies Can Do About It

Unit 42 offered several recommendations for organizations trying to keep pace. Among them: automate patching for critical vulnerabilities on internet-facing systems to close the 24-hour window that attackers are now exploiting.

For organizations using AI internally, Unit 42 recommends monitoring for unusual API calls and flagging sensitive queries to their company AI systems, such as someone asking an internal chatbot to “find all passwords.”
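A minimal sketch of what that kind of query flagging could look like, assuming a hypothetical stream of (user, prompt) pairs pulled from an AI assistant’s logs; the keyword rules and risk labels below are illustrative, not taken from the Unit 42 report:

```python
import re

# Illustrative detection rules -- these patterns and labels are
# assumptions for the sketch, not recommendations from the report.
SENSITIVE_PATTERNS = [
    (re.compile(r"\b(password|credential|secret|api key)s?\b", re.I), "credential-hunting"),
    (re.compile(r"\b(network (map|diagram)|admin runbook|integration guide)s?\b", re.I), "recon"),
    (re.compile(r"\b(export|dump|list) all\b", re.I), "bulk-extraction"),
]

def flag_query(user: str, prompt: str) -> list[dict]:
    """Return one alert record per rule the prompt matches."""
    alerts = []
    for pattern, label in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            alerts.append({"user": user, "label": label, "prompt": prompt})
    return alerts

# The "find all passwords" query from the report would trip the
# credential-hunting rule.
alerts = flag_query("jdoe", "find all passwords for the billing database")
```

In practice this kind of matching would feed a SIEM or review queue rather than block queries outright, since keyword rules alone produce false positives.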

The message from Unit 42 is clear. AI is not a magical button for scammers, but it removes enough friction that even a drunk guy can extort a company from his bed.

Join the Crew!

Subscribe to my newsletter and get breaking fraud intel right to your inbox each week. Join thousands of other fraud leaders and stay informed with FrankonFraud.