
How AI is powering the next wave of cyber threats in Web3

by 1inch

AI is raising the quality and scale of Web3 phishing, impersonation and scam automation, which is changing how signature-based attacks are carried out.

AI did not invent Web3 scams. It made them cheaper to run and easier to personalize. Crypto threats used to be loud: obvious fake sites, broken English, copy-paste messages. AI is changing that. Scams are becoming more convincing, more targeted and more automated. The result is simple: there is less time to notice the trap.

Web3 adds extra pressure. Wallet activity is public. Links travel fast in chats. One signature can approve spending. And once a transaction is confirmed, it cannot be reversed.
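
To make that concrete, here is a minimal sketch, assuming the ethers v6 library and a hypothetical spender address, of the calldata behind a single token approval. One signature over this payload is enough to let the spender move funds later, with no further prompts.

```ts
// Sketch (assumes ethers v6): the calldata behind a single "approve" signature.
import { Interface, MaxUint256 } from "ethers";

const erc20 = new Interface([
  "function approve(address spender, uint256 value) returns (bool)",
]);

// A drainer typically asks for an unlimited allowance.
const calldata = erc20.encodeFunctionData("approve", [
  "0x000000000000000000000000000000000000dEaD", // hypothetical spender
  MaxUint256,                                   // unlimited allowance
]);

console.log(calldata); // "0x095ea7b3..." plus the encoded arguments
```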

AI-enhanced phishing and social engineering

Phishing is still the most common entry point. AI makes it look professional. It can match tone, grammar and formatting. It can copy the style of a real project. It can be tailored to context using public signals, including recent on-chain activity and commonly used apps. In Web3, the goal is often not a password. The goal is a signature.

Common tricks include:

  • “Security check” pages that request an approval
  • Fake airdrop sites that trigger a wallet drainer
  • “Support” chats that push to import a recovery phrase
  • Fake verification forms that ask for private keys
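
One habit defeats most of these tricks: decode what you are about to sign instead of trusting the page. A minimal sketch, assuming ethers v6 and a hypothetical pending request, might look like this:

```ts
// Sketch (assumes ethers v6): inspect calldata before signing.
import { Interface, MaxUint256 } from "ethers";

const erc20 = new Interface([
  "function approve(address spender, uint256 value) returns (bool)",
]);

// `data` is the calldata from the wallet's pending request (hypothetical input).
function describeRequest(data: string): string {
  try {
    const call = erc20.parseTransaction({ data });
    if (call?.name === "approve") {
      const unlimited = call.args[1] === MaxUint256;
      return `approve: spender=${call.args[0]}, allowance=${unlimited ? "UNLIMITED" : call.args[1]}`;
    }
  } catch {
    // Calldata does not match any function this ABI knows.
  }
  return "unrecognized call: do not sign blind";
}
```

A wallet or extension running a check like this can surface "UNLIMITED" before the user clicks sign.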

Deepfakes raise the stakes. A cloned voice or video can simulate authority. This can be used to fake a founder announcement, a partner endorsement or a “team member” asking for an urgent action in a DAO chat.

Malware that adapts, and what AI changes

Malware that "adapts" is not brand new. The idea is simple: the malicious program changes its look or its behavior to avoid detection.

There are two common forms.

It changes how it looks. The same stealer can be packaged into many different files. Each version looks slightly different, so basic pattern matching fails more often.

It changes how it behaves. The malware waits for the right moment. It may stay quiet in test environments. It may activate only after certain actions happen.
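
The first form is easy to demonstrate. In this minimal sketch (assuming a Node.js runtime; the payload bytes are placeholders), a one-byte repack gives the same logic a completely different fingerprint, which is all it takes to slip past a hash blocklist:

```ts
// Sketch: two byte-for-byte different files, one functional payload.
import { createHash } from "node:crypto";

const sha256 = (b: Buffer): string =>
  createHash("sha256").update(b).digest("hex");

const payload = Buffer.from("same malicious logic");         // stand-in bytes
const variantA = Buffer.concat([payload, Buffer.from([0])]); // original packaging
const variantB = Buffer.concat([payload, Buffer.from([1])]); // trivially repacked

console.log(sha256(variantA)); // one digest...
console.log(sha256(variantB)); // ...and a completely unrelated one
```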

AI helps attackers scale this process. It can generate many variants quickly, then test which ones survive scans, sandboxes and browser protections. It also helps tailor payloads to crypto targets, like wallet extensions, clipboard addresses, recovery phrase prompts or fake transaction popups.
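
For the clipboard case, the defense is mechanical. This sketch (the addresses below are made up) shows why glancing at the first and last characters is not enough, since swappers can generate vanity addresses that match both ends:

```ts
// Sketch: compare the full address string, not just the ends.
function sameAddress(intended: string, pasted: string): boolean {
  return intended.toLowerCase() === pasted.toLowerCase();
}

const intended = "0x1234567890abcdef1234567890abcdef12345678"; // what the user copied
const pasted   = "0x1234567890abcdefdeadbeefdeadbeef12345678"; // what landed in the field

// Same first 18 and last 8 characters, different address entirely.
if (!sameAddress(intended, pasted)) {
  console.warn("Clipboard contents changed since copy: do not send.");
}
```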

In practice, this often appears as fake browser extensions, fake wallet updates or scripts hidden behind “mint” buttons.

Automated attack campaigns and scam factories

AI also makes scams easier to run like a business.  

Attackers can automate:

  • Domain generation and site cloning
  • Fake social accounts and replies
  • Chat scripts that pressure victims
  • Target lists built from public on-chain activity

This creates constant variation. One scam site gets reported. Two new ones appear. One message template gets flagged. Ten new templates replace it.
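
Defenders can automate too. A minimal sketch of one countermeasure, a lookalike-domain check based on edit distance (the allowlist below is a hypothetical example):

```ts
// Sketch: flag lookalike domains by edit distance to a small allowlist.
const KNOWN = ["app.1inch.io", "1inch.io"]; // hypothetical allowlist

function editDistance(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)),
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1), // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Distance 1-2 from a known domain, but not equal, is a classic typosquat.
function looksLikeTyposquat(host: string): boolean {
  return KNOWN.some((k) => {
    const d = editDistance(host.toLowerCase(), k);
    return d > 0 && d <= 2;
  });
}

console.log(looksLikeTyposquat("app.1lnch.io")); // true: one character swapped
```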

Automation also improves timing. Campaigns can be timed to big events, listings, airdrops or trending narratives. Scammers can follow attention and react fast.

This is why the same trap shows up across X, Telegram, Discord, email and ads at the same time. The goal is volume. Even if most people ignore it, a few clicks can be enough.

Smarter attacks on smart contracts and DeFi

Smart contracts are public. That is great for transparency. It also means attackers can analyze code at scale.

AI can help with:

  • Scanning large repositories for common bug patterns
  • Finding forks that reused risky code
  • Spotting weak assumptions around oracles and pricing
  • Generating test cases that stress unusual edge cases

This does not replace real exploit work. But it can speed up reconnaissance and reduce the cost of finding targets.
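
As a toy illustration of the bug-pattern scanning listed above (real tools parse the code properly; this regex pass and its pattern list are only a sketch):

```ts
// Sketch: the crudest form of "scanning for common bug patterns" is a grep.
// The pattern list is a tiny, non-exhaustive sample.
const RISKY_PATTERNS: Array<[RegExp, string]> = [
  [/tx\.origin/,       "tx.origin auth can be phished through a contract hop"],
  [/delegatecall/,     "delegatecall hands storage control to the callee"],
  [/block\.timestamp/, "miner-influenced timestamp used in logic"],
];

function scanSource(source: string): string[] {
  const findings: string[] = [];
  for (const [pattern, why] of RISKY_PATTERNS) {
    if (pattern.test(source)) findings.push(why);
  }
  return findings;
}

console.log(scanSource("require(tx.origin == owner);"));
// -> ["tx.origin auth can be phished through a contract hop"]
```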

DeFi risks also include coordination. Attackers can use automation to watch mempools, simulate transaction outcomes and react in seconds. This can amplify front-running strategies and MEV-driven manipulation.
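
Watching the mempool takes very little code. A minimal sketch, assuming ethers v6 and a placeholder WebSocket endpoint:

```ts
// Sketch (assumes ethers v6): the same public mempool feed that bots watch.
import { WebSocketProvider, formatEther } from "ethers";

// The endpoint is a placeholder; any node exposing subscriptions works.
const provider = new WebSocketProvider("wss://example-node.invalid/ws");

provider.on("pending", async (txHash: string) => {
  const tx = await provider.getTransaction(txHash);
  if (!tx) return; // already mined or dropped
  // A bot would simulate the outcome here and react within the same block.
  console.log(`pending ${txHash}: to=${tx.to}, value=${formatEther(tx.value)} ETH`);
});
```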

Privacy breaches and identity exploitation

Web3 transactions are transparent. AI is good at pattern matching. This combination makes targeting easier.

Even without direct identity data, activity can reveal signals:

  • Which chains are used most
  • Which apps are used often
  • Typical transaction sizes and timing
  • Links between addresses through repeated behavior

This can lead to de-anonymization attempts and more precise spear phishing. It can also support scams that feel “too accurate,” like messages that reference a real protocol interaction or a real token balance.
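
It is worth seeing how little effort this takes. A minimal sketch (assuming ethers v6; the endpoint and address are placeholders) reads two of the signals above in one round trip:

```ts
// Sketch (assumes ethers v6): all of this is public data.
import { JsonRpcProvider, formatEther } from "ethers";

const provider = new JsonRpcProvider("https://example-node.invalid");
const address = "0x1234567890abcdef1234567890abcdef12345678"; // placeholder

const [balance, sentTxs] = await Promise.all([
  provider.getBalance(address),           // typical holdings
  provider.getTransactionCount(address),  // lifetime sent-transaction count
]);
console.log(`${address}: ${formatEther(balance)} ETH, ${sentTxs} txs sent`);
```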

Deepfakes add identity abuse on top. Fake KYC videos, fake team calls, fake influencer endorsements, fake “partnership” posts. The goal is the same. Create trust fast, then push for a signature.

A new category: attacks on AI-powered crypto tools

More Web3 apps now experiment with AI assistants, agentic browsers and “autopilot” actions. This can improve usability. It also creates new risk. The main danger is unsafe instructions.

An attacker can try to plant malicious content into what an agent reads, remembers or treats as trusted. This is often called prompt injection or context manipulation. In plain terms, it is social engineering aimed at the AI layer.

This matters because Web3 actions are irreversible. If an assistant can sign, approve or execute trades, then its guardrails need to be strict.

A safe default is simple. AI can explain, summarize and draft. Transactions still require deliberate confirmation, clear checks and strong limits.
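
A sketch of what those limits could look like in code (all names and thresholds here are hypothetical):

```ts
// Sketch: a hypothetical policy gate between an AI agent's proposal and execution.
interface Proposal {
  to: string;       // destination address
  valueWei: bigint; // native value attached
  summary: string;  // agent's plain-language explanation
}

const ALLOWLIST = new Set([
  "0x1234567890abcdef1234567890abcdef12345678", // human-reviewed destinations only
]);
const MAX_VALUE_WEI = 10n ** 17n; // hard cap: 0.1 ETH per action

function passesPolicy(p: Proposal): boolean {
  return ALLOWLIST.has(p.to.toLowerCase()) && p.valueWei <= MAX_VALUE_WEI;
}

async function execute(p: Proposal, humanConfirmed: boolean): Promise<void> {
  if (!passesPolicy(p)) throw new Error("blocked by policy: destination or value");
  if (!humanConfirmed) throw new Error("blocked: deliberate confirmation required");
  // ...only now sign and send.
}
```

The agent can draft the proposal and the summary; nothing executes without passing both gates.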

Stay tuned for upcoming guides designed to shed light on how DeFi works!
