The future of phishing is not just coming. It is already here, quietly and efficiently using your own words against you.
Let’s talk about something unsettling: how astonishingly easy it has become for attackers to write convincing phishing emails using AI tools like ChatGPT. You might be thinking, surely these platforms have guardrails. And they do. But the reality is more nuanced.
Imagine this: an attacker scrapes a few social media posts or public-facing blogs you've written. Nothing sensitive, just typical professional chatter. They paste that into ChatGPT and ask it to generate an email “in the same tone as this author” asking people to fill in a survey, or check out a document, or give feedback on a proposal. The email is polite. Warm. Clear. Helpful. It sounds just like you. And that’s the problem.
The email doesn’t scream "phishing" — no spelling mistakes, no broken English, no dodgy formatting. It is all very professional and ordinary. ChatGPT has no way of knowing whether the generated content will be used maliciously, because it only sees the request in isolation. Ask it to write a marketing email or a “quick note to the team” and it will do exactly that, assuming your intentions are good.
This is not theoretical. It is happening.
We’ve now reached a point where it is trivial for an AI agent to automatically grab a sample of someone’s writing style, create a persuasive message, and even adapt it dynamically based on who the recipient is. And here’s the kicker: these agents can now also respond to replies. You click the link, maybe write back to the sender to say “I think this link’s broken” or “Is this meant for me?”, and the AI replies promptly, convincingly, and with no human involved.
That is the bit most people have not wrapped their heads around yet. It’s not just about generating one email. It is about persistent, automated, and interactive phishing campaigns that run with no human oversight at all. These are no longer simple "spray and pray" emails. These are targeted, style-matched, responsive threats. And when threat actors combine this with data they already hold from breaches or recon, the result is devastatingly effective.
For businesses, this changes the game entirely. Traditional training that teaches staff to spot poor grammar or dodgy links is no longer enough. The email will be well-written. It will be relevant. It will sound like your colleague. So what do we do?
The answer lies in a layered approach. First, technical controls like email filtering and link inspection need to evolve. AI-generated content often avoids traditional triggers, so detection models need to account for style mimicry and contextual intent, not just keywords.
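For instance, rather than hunting for suspicious wording, a filter can score contextual signals that even a well-written, style-matched email still leaks. Here is a minimal Python sketch; the heuristics and the `context_flags` helper are illustrative only, assuming the message arrives as raw RFC 5322 bytes, and a real filter would combine many more signals with sender history and ML scoring:

```python
import re
from email import policy
from email.parser import BytesParser
from email.utils import parseaddr


def domain_of(address: str) -> str:
    """Return the lowercased domain part of an email address, or ''."""
    return parseaddr(address)[1].rpartition("@")[2].lower()


def context_flags(raw_message: bytes) -> list[str]:
    """Collect contextual red flags that don't depend on wording or grammar."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_message)
    flags = []

    # Signal 1: replies are silently routed to a different domain
    # than the one the mail claims to come from.
    from_dom = domain_of(msg.get("From", ""))
    reply_dom = domain_of(msg.get("Reply-To", ""))
    if reply_dom and reply_dom != from_dom:
        flags.append(f"Reply-To domain ({reply_dom}) differs from From domain ({from_dom})")

    # Signal 2: link text names one domain while the href points somewhere else.
    body = msg.get_body(preferencelist=("html",))
    html = body.get_content() if body else ""
    for href_host, link_text in re.findall(
        r'href="https?://([^/"]+)[^"]*"[^>]*>([^<]+)<', html
    ):
        claimed = re.search(r"([\w.-]+\.[a-zA-Z]{2,})", link_text)
        if claimed and claimed.group(1).lower() not in href_host.lower():
            flags.append(f"link text mentions {claimed.group(1)} but points to {href_host}")

    return flags
```

Neither signal proves anything on its own; the design idea is to stack many weak contextual indicators, because a fluent, style-matched email can no longer be caught on wording alone.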
Second, user awareness training needs a serious refresh. Staff need to understand that phishing now often looks and sounds completely normal. They should be encouraged to double-check requests that seem unexpected or unusually urgent, even if they are perfectly worded, ideally through a separate channel such as a quick call or chat message to the apparent sender.
Third, organisations should implement SPF, DKIM, and DMARC correctly to make it harder for attackers to spoof their email domains. It is surprising how many companies still get this wrong.
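As a rough illustration, correctly configured records are just DNS TXT entries. The values below are placeholders for a hypothetical example.com; your mail provider supplies the real include host, DKIM selector, and public key:

```
example.com.                       TXT  "v=spf1 include:_spf.mailprovider.example -all"
selector1._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=MIIBIjANBgkq...AB"
_dmarc.example.com.                TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

The `-all` in SPF tells receivers to reject mail from servers you have not listed, and the DMARC `p=quarantine` policy tells them what to do when a message fails authentication; starting with `p=none` and reviewing the aggregate reports sent to the `rua` address is the usual safe rollout path.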
And finally, we need better ways to watermark or cryptographically verify human-sent messages, especially when dealing with sensitive communications. It is not a silver bullet, but it helps build a web of trust.
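There is no agreed standard for this yet, but the underlying idea is old: sign the message, verify the signature. Here is a minimal sketch using the Python `cryptography` package; the workflow is illustrative, and key distribution, the genuinely hard part, is left out:

```python
# Sketch of signing/verifying a message with Ed25519 via the 'cryptography'
# package (pip install cryptography). Illustrative only: real deployments
# would use S/MIME or PGP plus a proper key directory.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Sender: generate a keypair once and publish the public key somewhere trusted.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"Please review the attached Q3 proposal by Friday."
signature = private_key.sign(message)  # detached signature sent alongside the mail

# Recipient: verify the signature against the sender's published public key.
try:
    public_key.verify(signature, message)
    print("Signature valid: this message came from the key holder.")
except InvalidSignature:
    print("Signature invalid: treat this message as untrusted.")
```

In practice this is what S/MIME and PGP already provide; the unsolved part is key management and making verification visible and meaningful to ordinary users.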
Because here’s the thing — AI is not going away. The tools are getting faster, more capable, and more accessible. We cannot block them. What we can do is adapt, quickly and smartly, before our inbox becomes the new front line.
Unlock continuous, real-time security monitoring with DarkInsight. Sign up for your free account today and start protecting your external attack surface from potential threats.