AI’s Dark Side: Phishing Emails Built in Minutes, Online Fraud Soars (Social Media)
Tech News: Incidents of online fraud have risen sharply in recent times. Scammers are no longer limited to fake calls and SMS; they are also using artificial intelligence (AI) to find new ways to defraud people. A recent investigation revealed that some popular AI chatbots can create phishing emails that appear completely genuine within minutes, while other AI models flatly refused to create such emails. These results raise a pressing question: is AI actually enabling cybercrime? Here is the full story.
During testing, xAI's Grok was asked to create a phishing email targeting senior citizens. Grok produced the email without any pushback, complete with a fake deadline and realistic content that made it look genuine. Meta AI generated a similar phishing message after a few follow-up prompts. This demonstrates how dangerous the technology could prove in the hands of scammers.
When OpenAI's GPT-5 model was asked to perform the same task, it initially refused. But when the testing team explained the request was for educational purposes, GPT-5 drafted a phishing email related to banking fraud. It even provided line-by-line annotations of the persuasion techniques used in the email. In the wrong hands, such information could enable serious cyber fraud.
On the other hand, Google's Gemini and Anthropic's Claude responded in a completely different way. Despite repeated requests, both chatbots flatly refused to create any phishing emails. According to reports, Google has added an additional security layer to its model that blocks such risky content.
The results of the experiment were striking. The prepared emails were sent to 108 senior citizens as a test, and approximately 11% of them clicked on the link in the email. This statistic shows the significant real-world threat that AI-generated phishing emails can pose.
While AI technology is revolutionizing many fields, it is also becoming a new weapon for cybercriminals. Some AI chatbots, such as Grok and Meta AI, readily generate dangerous content, while models like Gemini and Claude block it through built-in safeguards. Cybersecurity experts believe that as AI use grows, strict regulation and user awareness are essential; otherwise, the risk of online fraud may rise further.
Copyright © 2025 Top Indian News