How AI Is Reshaping Phishing and Online Scams
Phishing and online scams are constantly evolving forms of digital fraud that primarily target individuals. Cybercriminals continuously adapt their tactics, refining old techniques and inventing new ones to exploit current news cycles, popular trends, and major global events—anything that might help lure their next victim.
Since our previous publication on phishing tactics, these threats have advanced significantly. While many of the tools discussed earlier remain relevant, attackers have adopted new techniques, shifted their objectives, and fundamentally changed how these campaigns are executed.
AI-Enhanced Scam Content
Traditional phishing emails, instant messages, and fake websites were often easy to spot due to grammatical mistakes, factual inaccuracies, incorrect names or addresses, and poor formatting. Today, however, cybercriminals increasingly rely on neural networks to eliminate these telltale signs.
AI-powered tools allow attackers to craft highly convincing messages that closely mimic legitimate communications. As a result, victims are more likely to trust these messages—and more inclined to click malicious links, open infected attachments, or download harmful files.
The same trend applies to personal communications. Social networks are now saturated with AI-driven bots capable of sustaining conversations that feel strikingly human. While such bots can serve legitimate purposes, scammers frequently use them to impersonate real users. This tactic is especially prevalent in online dating platforms, where attackers can manage dozens of conversations simultaneously, creating the illusion of genuine emotional connection.
Their ultimate goal is financial exploitation, often by persuading victims to invest in so-called “viable opportunities,” typically involving cryptocurrency. This long-con scam is widely known as pig butchering. To enhance credibility, these bots may also generate realistic voice messages, images, or even participate in video calls.
Deepfakes and AI-Generated Voices
Attackers are increasingly exploiting AI capabilities such as voice cloning and realistic video generation to produce convincing audiovisual content designed to deceive victims.
Beyond targeted attacks that mimic the voices or appearances of colleagues, friends, or family members, deepfake technology is now commonly used in large-scale scams. Examples include fake celebrity giveaways, where AI-generated videos show well-known actors, influencers, or public figures promising expensive prizes such as smartphones, laptops, or cash rewards.
As deepfake technology continues to advance, the line between reality and deception becomes increasingly blurred. Traditional cues used to identify fraud—such as unnatural speech patterns or visual inconsistencies—are rapidly disappearing.
Automated scam calls have also become more widespread. In these cases, attackers use AI-generated voices combined with caller ID spoofing to impersonate bank security departments. Victims are told that suspicious activity has been detected on their account and are urged to “secure their funds” by sharing a one-time SMS code. In reality, this code is used to bypass two-factor authentication and gain unauthorized access to the victim’s account.
Data Harvesting and Analysis
Large language models such as ChatGPT are widely recognized for their ability to generate grammatically correct text in multiple languages. Less visible—but equally powerful—is their capacity to analyze vast amounts of open-source information from media outlets, corporate websites, and social networks.
Threat actors increasingly rely on AI-powered OSINT (open-source intelligence) tools to collect and process this data. The resulting insights enable highly targeted phishing campaigns tailored to specific individuals or communities.
Common attack scenarios include:
- Personalized emails or messages posing as HR personnel or company executives, containing accurate internal details
- Spoofed phone or video calls from trusted contacts, leveraging personal information that appears impossible for outsiders to know
This level of personalization dramatically increases the success rate of social engineering attacks, making them difficult to detect even for technically savvy users.
AI-Generated Phishing Websites
Phishers are also using AI to generate fraudulent websites. Modern phishing kits often include AI-powered site builders capable of automatically copying legitimate designs, creating responsive layouts, and generating realistic sign-in forms.
Some phishing sites are nearly indistinguishable from their legitimate counterparts, while others rely on generic templates deployed at scale with minimal customization. In many cases, these sites indiscriminately harvest whatever information users enter and are deployed in attacks without any manual review.
Such automation significantly lowers the barrier to entry for cybercriminals and enables the rapid expansion of large-scale phishing campaigns.
Telegram as a Scam Platform
Thanks to its massive user base, open API, and support for cryptocurrency payments, Telegram has become a favored platform for cybercriminal activity. It serves both as a distribution channel for scams and as a direct target. Compromised Telegram accounts can be used to attack other users or sold on underground marketplaces.
Malicious Bots
Telegram bots play a central role in modern scams. They are often used alongside—or instead of—phishing websites. A victim may be redirected from a fraudulent site to a bot that then collects personal or financial information.
Common bot-based schemes include:
- Fake cryptocurrency airdrops requiring a “mandatory deposit” for KYC verification
- Impersonation of postal or delivery services to harvest personal details
- “Easy money” schemes offering payment for watching short videos
Unlike phishing websites, which users can simply close, malicious bots can continue sending messages if not blocked. These messages may contain fraudulent links or requests for administrative permissions. Once granted, bots can spam entire groups or channels under the guise of activating “advanced features.”
Account Theft Techniques
Social engineering remains the primary method for stealing Telegram accounts. Attackers tailor their tactics to seasonal events, current trends, or specific age groups, but the objective is always the same: to trick victims into revealing a verification code.
Phishing links may be sent via private messages, group chats, or compromised channels. To evade suspicion, attackers increasingly disguise these links using Telegram’s text formatting, which lets the visible link text differ from the actual destination URL.
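For moderators and bot developers, this mismatch is detectable, because the Bot API exposes real link targets as message entities. Below is a minimal TypeScript sketch (the sample message and hostnames are hypothetical) that flags text_link entities whose visible text is itself a URL pointing somewhere else:

```typescript
// Minimal defensive sketch: scan a Telegram Bot API message for
// "text_link" entities whose visible text looks like one URL while the
// entity actually points to another. Entity offsets are UTF-16 code
// units, which is also how JavaScript indexes strings, so slice() lines up.

interface MessageEntity {
  type: string;    // e.g. "url", "text_link", "bold"
  offset: number;  // start of the entity within the message text
  length: number;  // entity length
  url?: string;    // actual destination, present for "text_link"
}

function hostOf(raw: string): string | null {
  try {
    return new URL(raw).hostname.replace(/^www\./, "");
  } catch {
    return null; // not a parseable absolute URL
  }
}

function findDisguisedLinks(text: string, entities: MessageEntity[] = []): string[] {
  const warnings: string[] = [];
  for (const e of entities) {
    if (e.type !== "text_link" || !e.url) continue;
    const visible = text.slice(e.offset, e.offset + e.length);
    const visibleHost = hostOf(visible);
    const realHost = hostOf(e.url);
    // Warn only when the visible text is itself a URL pointing elsewhere.
    if (visibleHost && realHost && visibleHost !== realHost) {
      warnings.push(`"${visible}" actually links to ${e.url}`);
    }
  }
  return warnings;
}

// The user sees a bank URL, but tapping it opens a different site.
console.log(findDisguisedLinks(
  "Verify your account: https://example-bank.com",
  [{ type: "text_link", offset: 21, length: 24, url: "https://phish.example.net/login" }],
));
```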
New Methods of Evasion
Abuse of Legitimate Services
Cybercriminals frequently exploit trusted platforms to keep phishing resources online for as long as possible:
- Telegraph: This Telegram-operated publishing service allows anyone to post long-form content without registration. Attackers use it to host phishing pages or redirect users to them.
- Google Translate: By running phishing pages through the translator and sharing the generated links, scammers can bypass security filters and hide the true destination behind a legitimate-looking translate.goog subdomain (see the unwrapping sketch after this list).
- CAPTCHA: Once a marker of legitimacy, CAPTCHAs are now commonly added to phishing sites to evade automated detection and appear more trustworthy.
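The Google Translate trick, in particular, can be reversed. The sketch below (TypeScript; the sample URL is hypothetical) recovers the real hostname from a translate.goog link, assuming the commonly observed encoding in which dots in the original hostname become hyphens and literal hyphens become double hyphens:

```typescript
// Minimal sketch: recover the real hostname hidden behind a
// translate.goog proxy link. The translation proxy encodes the original
// host in the subdomain: dots become "-" and literal hyphens become "--"
// (e.g. my-site.example.com -> my--site-example-com.translate.goog).

function unwrapTranslateProxy(raw: string): string | null {
  const url = new URL(raw);
  if (!url.hostname.endsWith(".translate.goog")) return null;

  const encoded = url.hostname.slice(0, -".translate.goog".length);
  return encoded
    .split("--")                           // protect literal hyphens
    .map(part => part.replace(/-/g, ".")) // restore the original dots
    .join("-");
}

// A proxied link that appears to live on a Google-owned domain:
console.log(unwrapTranslateProxy(
  "https://phishing--site-example-com.translate.goog/login",
)); // -> "phishing-site.example.com"
```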
Blob URLs
Blob URLs (blob:https://example.com/...) are temporary browser-generated links used to access locally stored binary data. While designed for legitimate purposes, attackers now use them to conceal phishing content. Because the data resides in the victim’s browser rather than on a remote server, such attacks are harder for security tools to detect.
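The snippet below is a minimal TypeScript sketch of the mechanism itself (the markup is a placeholder): the page is assembled inside the victim’s browser and opened from a locally minted blob: URL, so no request for the phishing page ever crosses the network where a URL filter could inspect it.

```typescript
// Minimal sketch of the blob URL mechanism (placeholder markup).
// The HTML exists only inside the victim's browser; opening it
// triggers no network request that a URL filter could inspect.

const fakeLogin = `<html><body><form>
  <input name="user" placeholder="Email">
  <input name="pass" type="password" placeholder="Password">
</form></body></html>`;

// Wrap the markup in a Blob and mint a temporary, same-origin blob: URL.
const blob = new Blob([fakeLogin], { type: "text/html" });
const blobUrl = URL.createObjectURL(blob); // e.g. blob:https://example.com/1e2f...

window.location.href = blobUrl; // rendered entirely from local memory
```

Because the content never touches a server and the generated URL is random and short-lived, blocklists and crawlers have nothing stable to index, which is why detection largely has to happen in the browser itself.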
Targeting Immutable Data
Cybercriminals are increasingly shifting focus from usernames and passwords to immutable identity data—information that cannot easily be changed. This includes biometric data, voiceprints, handwritten signatures, and digital signatures.
Examples include phishing pages that request camera access under the pretense of account verification, enabling attackers to capture facial biometrics. In corporate environments, electronic signatures are particularly valuable, making platforms like DocuSign frequent targets of spear-phishing campaigns.
Even handwritten signatures remain highly valuable, as they continue to play a critical role in legal and financial processes.
These attacks are often combined with attempts to access banking, e-government, or corporate systems protected by two-factor authentication. Attackers typically obtain OTPs by tricking users into entering them on fake login pages or sharing them during phone calls.
A growing tactic involves posing as helpers or protectors. In a typical two-stage scheme, the victim first receives a meaningless code by text along with a plausible pretext, such as a delivery notification. A second attacker then impersonates an authority figure, claims the victim is under attack, and coerces them into sharing a legitimate OTP.
Key Takeaways
Phishing and online scams are evolving rapidly, driven by AI and emerging technologies. As users become more aware of traditional threats, cybercriminals respond with increasingly sophisticated tactics. Today’s scams rely on deepfakes, voice cloning, advanced personalization, and multi-stage social engineering to steal sensitive and irreversible data.
Key trends include:
- Highly personalized attacks powered by AI-driven data analysis
- Abuse of legitimate platforms to bypass security controls
- Increased targeting of immutable identity data such as biometrics and signatures
- More complex methods for circumventing two-factor authentication
How to Protect Yourself
- Treat unexpected calls, emails, and messages with skepticism. Avoid clicking links unless you have verified their destination.
- Never share one-time passwords with anyone, regardless of their claimed identity.
- Scrutinize multimedia content for signs of manipulation, especially celebrity giveaways or urgent requests.
- Minimize your digital footprint by avoiding the public sharing of sensitive personal or professional information on social media.