Generative AI in Cybersecurity: Built to Protect. Used to Attack.


Viewpoint by Francesco F., Risk Manager at Amaris Consulting

Let’s be honest: the pace of change in cybersecurity can feel overwhelming. Every week brings a new headline, a new tool, a new threat. And now, generative AI has entered the picture as a real and growing force on both sides of the cybersecurity battle.

AI is helping us detect and respond to threats faster than ever before. But it’s also being used by attackers to craft more convincing scams, clone voices, and exploit vulnerabilities we didn’t know existed.

This isn’t about fear. It’s about awareness. As risk managers, security experts, and business leaders, we’re being asked to adapt and rethink not just how we protect systems, but how we train people, evaluate risks, and stay ahead of technologies we’re still learning to understand.

Smarter, sneakier, scarier

Generative AI doesn’t just scale attacks. It elevates them.

Phishing emails that sound like your CEO. Deepfake video calls from your supplier. AI-crafted documents that look more polished than your own templates. One voice sample, and a cloned executive can make real-time demands that no one thinks to question—until it’s too late.

These aren’t rare cases. They’re multiplying. A recent industry report logged a near tenfold rise in AI-driven attacks. What’s new? They’re faster to execute, harder to trace, and terrifyingly convincing.

For example, an Amazon executive received a call that sounded perfectly legitimate. The voice on the line had been cloned using generative AI. The impersonation was so accurate that sensitive internal information was shared without hesitation. No hacking required, just a voice sample and an unsuspecting target.

The dark web just got an upgrade

In the old days, launching a cyberattack required some serious coding chops. Not anymore. Today, even amateur attackers can rent an AI on the dark web that writes malware or impersonates voices on demand.

AI-as-a-Service platforms are now offering toolkits to generate malicious code, fake credentials, and intrusion vectors. Think of it like Netflix, but for hackers.

The double-edged sword of innovation

Of course, AI isn’t just arming attackers. It’s also helping us fight back.

At Amaris Consulting, we use AI to detect threats in real time, analyze network anomalies, and uncover patterns that would be invisible to the human eye. Algorithms don’t get tired. They don’t blink. And when used ethically, they can help organizations stay one step ahead.

But here’s the catch: tools are not strategy. No matter how powerful your AI, if your team isn’t trained, your data isn’t governed, or your leadership isn’t aligned, you’re still exposed.

The human glitch

Most attacks don’t break systems. They trick people.

That’s the real risk. AI-generated phishing emails are no longer riddled with spelling errors. They’re polished. Smart. Specific. Personalized. Your finance manager might get a fake invoice that looks exactly like last month’s, except this one redirects payment to a fraudulent account.

The most advanced AI won’t save you from a distracted click. That’s why awareness isn’t a checkbox. It’s a culture. Employees must learn to question what looks real. To slow down. To speak up.

Cyber resilience is more than a firewall

At Amaris, we believe real security is proactive, not reactive. That means blending AI-powered monitoring with human training, scenario planning, and ethical frameworks.

We’re helping clients reimagine cybersecurity not as a set of tools, but as a mindset. We analyze risks before they erupt. We train teams to spot manipulation, even when it sounds familiar. And we embed digital resilience into the DNA of the business.

AI didn’t change the rules. It rewrote the game.

Cybersecurity has entered a new phase. One where every message, voice, or face could be fake, but feels real. The organizations that thrive won’t be the ones that fear AI, but the ones that understand it, deploy it responsibly, and train their people to think critically. The future isn’t human vs. AI. It’s human with AI versus those who use it to deceive.

Want to see what that looks like in action?

Explore how our Data & AI Center of Excellence helps organizations turn today’s uncertainty into tomorrow’s advantage.

