This week, Mark Zuckerberg announced Llama 3.1, a powerful 405-billion-parameter AI model that anyone in the world can download, configure, and use for free.
The release has sparked excitement, but it has also set off alarms in the tech world: the open-source approach promises to democratize AI development, yet it raises serious questions about potential misuse.
Will this ultra-powerful, downloadable AI give fraudsters the ultimate weapon they have always wanted?
A Free and Open Approach to Artificial Intelligence, But Is It a Pandora's Box?
Meta claims that Llama 3.1 is as smart and useful as the best commercial AI offerings (Claude and ChatGPT), and on some benchmarks it scores even higher.
Unlike those competing models, which are proprietary and closed, Meta is releasing this ultra-powerful AI model for free and letting developers customize it in any way they want. And that has cybersecurity experts concerned.
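To see how low the barrier really is, consider a minimal sketch using Hugging Face's transformers library. The 8-billion-parameter repo id below is an assumption for illustration (access requires accepting Meta's license on Hugging Face); the 405B flagship is fetched the same way, it just demands far more hardware:

import torch
from transformers import pipeline

# Download the open weights and build a local chat pipeline.
# Repo id is an assumption for illustration; the 405B variant
# would work identically on sufficiently large hardware.
generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # place weights on whatever GPUs are available
)

messages = [{"role": "user", "content": "Explain phishing in one sentence."}]
result = generator(messages, max_new_tokens=64)
print(result[0]["generated_text"][-1]["content"])  # print the assistant's reply

That is the entire setup: no API key, no server-side usage monitoring, and no enforcement beyond whatever guardrails are baked into the weights themselves.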
One of those experts is Geoffrey Hinton, who recently spoke to Wired. “Cyber criminals everywhere will be delighted,” he warned. “People fine-tune models for their own purposes, and some of those purposes are very bad.”
Hinton, a Turing Award winner known for his pioneering work in deep learning, made this statement in response to Meta’s release of Llama 3.1. He left Google last year so he could speak openly about the risks posed by ever more advanced AI models.
Now that Meta has released this software to anyone with an internet connection, it has opened a Pandora’s box that we may be unable to contain.
Make Grandma’s Fall Down the Stairs Look Like an Accident
While Meta claims to have done rigorous testing for safety, once the AI models are released into the wild, any guardrails that have been installed can easily be removed.
Earlier this month, AI researchers used fine-tuning methods to strip Llama’s safety guardrails in less than three minutes. They dubbed their modified version of Meta’s Llama “Badllama,” and it freely engaged in harmful activity with the researchers.
In one case, they asked both the unmodified Llama and the jailbroken Badllama how to make a grandmother’s fall down the stairs “look like an accident.” Badllama readily gave advice on how to do it, while the unmodified model refused.
The speed at which researchers were able to manipulate the AI into assisting them in a potential murder plot is alarming and just one example of how criminals will exploit Meta’s AI in the real world.
Phishing, Malware, DeepFakes – It’s All Possible
Meta’s AI will become the fraudster’s tool of choice for three reasons: a) it’s powerful, b) it can be manipulated, and c) it’s free to anyone.
And with that AI, anything is possible. Earlier this year, I used a jailbroken AI model and instructed it to create a phishing message. Despite the guardrails in place, it did it. And here it is.
However, AI can also be put to more nefarious ends, like writing complex Python code that functions as malware, such as an infostealer-style keylogger.
Without a lick of programming experience, I was able to create malware with the help of a jailbroken AI.
And here it is.
It’s only a matter of time before we see Llama 3.1 become a tool for fraud.
The cat is out of the bag.