In 2025, the French cybersecurity platform Cybermalveillance.gouv.fr assisted over 500,000 victims, a 20% increase compared to 2024 (source: Cybermalveillance.gouv.fr activity report, March 2026). Among the fastest-growing threats: scams powered by artificial intelligence. Cloned voices, fake videos, hyper-personalised emails… These new techniques make fraud remarkably convincing. This guide explains how to recognise them and protect yourself — without panic.
Artificial intelligence has made spectacular progress over the past two years. The problem is that scammers have adopted it faster than anyone else. They now use it to mimic your loved ones’ voices, create fake videos, and write flawless fraudulent emails.
On 3 February 2026, France’s data protection authority (CNIL) described deepfakes as a “systemic threat” in an official statement (source: CNIL, February 2026). The French Ministry of the Interior published a dedicated guide on AI-powered scams on its Ma Sécurité platform (source: masecurite.interieur.gouv.fr, 2025).
The good news: all these scams rely on the same psychological triggers. Once you know them, you can outsmart them.
The 4 Types of AI Scams Targeting Seniors in 2026
1. Voice Cloning: When Your “Grandchild” Calls for Help
This is the digital version of the classic grandparent scam, but infinitely more convincing. Using freely available software, scammers can faithfully replicate a loved one’s voice from just 5 to 10 seconds of audio (source: TechnoMind report, November 2025). A WhatsApp voice message, a video posted on social media, or a simple voicemail is all they need.
What happens in practice:
You receive a call. You hear your grandchild’s voice, panicked: they say they’ve been in an accident, are in hospital, or are in trouble with the law. They beg you to send money immediately and ask you not to tell anyone. The voice is identical: the tone, the inflections, the little hesitations. The displayed number may even be spoofed to look like a familiar one.
Testimony: Pierre, 60, Île-de-France, 2025: “I received a call from what I thought was my grandson. The voice was exactly his. He said he desperately needed help. I transferred 2,000 euros before realising it was a scam. When I called my grandson back on his real number, he had no idea what I was talking about.” (source: seneoo.com, anonymised testimony, 2025)
In March 2024, in Lyon, a mother transferred 8,000 euros after receiving a call from “her daughter”, in tears, claiming she’d had an accident. Her daughter was actually in class at the time (source: blog economie-numerique.net, November 2024).
According to the NCOA (National Council on Aging), adults over 60 account for 43% of total financial losses from scams, despite filing fewer complaints than other age groups (source: NCOA, 2025).
How to recognise it:
- The call always comes with urgency: accident, hospitalisation, legal trouble
- The caller asks you not to tell anyone else
- They request an immediate wire transfer or prepaid vouchers
- If you ask a specific personal question (a memory, a nickname), the scammer hesitates or deflects
2. Deepfake Video: Fake Faces on Video Calls
If you think a video call is safer than a phone call, think again. In early 2024, a Hong Kong-based multinational lost $25 million after an employee joined a video conference call with what appeared to be colleagues and the CFO. Every participant was a real-time deepfake (source: CNN, February 2024; Tookitaki, March 2025).
These techniques are becoming more accessible and are starting to affect individuals. The most common scenario in France: a fake bank advisor calls you on a video conference to “verify your identity” or “secure your account” following supposed suspicious activity. The face on screen looks like a professional in a suit, with an office backdrop. It is all AI-generated.
How to recognise it:
- Your bank never contacts you via video call for security operations
- The face may show slight artefacts: blurred facial contours, irregular blinking, lip movements slightly out of sync with the audio
- The caller insists you stay on the line and don’t hang up
- They ask you to carry out banking operations during the call
3. AI-Generated Ultra-Personalised Phishing Emails
Gone are the days of scam emails riddled with spelling mistakes sent from “prince.nigeria@gmail.com”. In 2026, AI writes flawless, personalised emails using your real personal information. According to cybersecurity researchers, AI-generated phishing emails have increased by 4,000% since 2022, and their click-through rate is 60% higher than traditional fraudulent emails (source: Keepnet Labs, January 2026).
The Cybermalveillance.gouv.fr 2025 report confirms that phishing remains the top threat for individuals in France, with a 70% increase in reports, driven by the malicious exploitation of massive data breaches (source: Cybermalveillance.gouv.fr 2025 activity report).
What happens in practice:
You receive an email that appears to come from your bank, health insurance, or social security office. It mentions your real name, sometimes your address or account number. The language is perfect. The message invites you to click a link to “update your details”, “confirm a refund”, or “secure your account”.
How to recognise it:
- Check the sender’s email address (not just the displayed name). An official address ends with the organisation’s domain (e.g. @ameli.fr, @impots.gouv.fr in France, or your national equivalents)
- Hover over the link without clicking to see the real URL
- No official body ever asks for your password or bank details by email
- Even if the message contains no mistakes, an urgent call to action remains the key warning sign
4. Fake AI-Powered “Tech Support”
The fake tech support scam is the third most common threat for individuals according to Cybermalveillance.gouv.fr. In 2025, it gained a new dimension: AI chatbots that mimic natural conversation with a technician.
In July 2025, Microsoft, Cybermalveillance.gouv.fr, and the Paris Public Prosecutor’s Office issued a joint call to action against these scams (source: Microsoft France, July 2025). An Ifop survey for Microsoft found that only 16% of seniors say they know exactly what to do when faced with an online scam (source: Microsoft/Ifop study, 2025).
What happens in practice:
An alert message appears on your computer screen: “Virus detected! Call technical support immediately at [phone number].” If you call, an AI chatbot answers like a real technician, with smooth and reassuring speech. It asks you to install remote access software, then a “technician” takes control of your computer. They simulate a diagnostic and charge you between 200 and 500 euros for a fictitious repair.
Testimony: Colette, 74, Nantes, February 2026: “A message appeared on my computer saying my personal data had been compromised. I called the number and the person on the phone sounded very competent. They had me install software and took control of my computer for 45 minutes. At the end, they asked for 350 euros by bank card for the repair. My son told me afterwards it was a classic scam, but at the time, everything seemed real.” (source: signal-arnaques.com, anonymised testimony)
How to recognise it:
- Microsoft, Apple, and Google never display alert messages with a phone number to call
- A real technician never asks you to install remote control software over the phone
- Payment is always demanded immediately, often by bank card or prepaid vouchers
5 Reflexes to Protect Yourself from AI Scams
1. Agree on a family password
Choose a word or phrase with your loved ones that only you know. If you receive an urgent call asking for money, ask for the password. If the person doesn’t know it or dodges the question, hang up. This simple reflex neutralises voice cloning.
2. Always hang up and call back on the usual number
If a relative, your bank, or an organisation contacts you with an urgent request, don’t continue the conversation. Hang up calmly and call back yourself using the number you already know (from your address book, from the back of your bank card, from the official website). If the call was legitimate, you’ll reach your contact. If it wasn’t, you’ve avoided the scam.
3. Never make a transfer under pressure
No official body, no bank, no relative in a real situation will ask you to act within minutes without any possibility of verification. Artificial urgency is the scammer’s primary tool. If someone demands an immediate transfer, it’s a red flag.
4. Check emails by looking at the address, not the content
The content of a fraudulent email can be perfect thanks to AI. But the sender’s address remains hard to fake completely. Look at the full email address (not just the displayed name) and verify the domain matches the organisation (e.g. @ameli.fr rather than @ameli-reimbursement.com).
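For readers comfortable with a little code, the exact-match domain check described above can be sketched in a few lines of Python. This is purely illustrative: the allowlist below contains example domains, and a real mail client performs far more sophisticated checks.

```python
# Minimal sketch: does the sender's domain exactly match a personal
# allowlist of official domains? Illustrative only — the allowlist
# is an example, not an authoritative list.

TRUSTED_DOMAINS = {"ameli.fr", "impots.gouv.fr"}

def sender_domain(address: str) -> str:
    """Return the part after the last '@', lower-cased."""
    return address.rsplit("@", 1)[-1].strip().lower()

def looks_official(address: str) -> bool:
    """True only on an exact domain match.

    Look-alikes such as 'ameli-reimbursement.com' or
    'ameli.fr.example.com' do NOT match — which is the point.
    """
    return sender_domain(address) in TRUSTED_DOMAINS

print(looks_official("service@ameli.fr"))            # True
print(looks_official("support@ameli-refunds.com"))   # False
```

The key design choice mirrors the advice in the text: compare the full domain exactly rather than checking whether the address merely *contains* an official name, since scammers deliberately embed real brand names inside fake domains.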
5. Talk about it without shame
AI scams are designed to fool everyone, including cybersecurity professionals. If you have any doubt, talk to a loved one, a neighbour, or call your national fraud helpline. Asking for an opinion is not a sign of weakness — it’s the most effective reflex.
What to Do If You’ve Been a Victim
Don’t blame yourself. These scams are designed by professional manipulators and the technology is extremely convincing. Here are the steps to follow:
In the first few hours
- Contact your bank immediately to report the transaction and request a fund recall if possible. Cancel your card if you’ve shared your bank details.
- Change your passwords if you’ve shared login details or installed remote access software.
- Keep all evidence: phone numbers, screenshots, emails, bank statements.
In the following days
- File a police report at your local police station or online via your national reporting platform.
- Report on Cybermalveillance.gouv.fr (in France) or your equivalent national platform for a personalised assessment and referral to a local professional if needed.
- Report the number to your national spam-reporting service if the scam arrived by phone or text.
Useful contacts (France)
| Organisation | Contact | Purpose |
|---|---|---|
| Info Escroqueries | 0 805 805 817 (free, France) | Advice and guidance |
| Cybermalveillance | cybermalveillance.gouv.fr | Assessment and assistance |
| PHAROS | internet-signalement.gouv.fr | Report online content |
| 33700 | Text 33700 | Report fraudulent calls/texts |
| Your bank | Number on the back of your card | Card cancellation and fund recall |
| CNIL | cnil.fr | Report data misuse |
AI Is Not Only on the Scammers’ Side
It would be unfair to see only the negative side of artificial intelligence. Authorities and cybersecurity companies also use AI to detect scams more quickly. The GenFakes project by the PEReN (Digital Regulation Expertise Hub), in partnership with the CNIL’s digital innovation laboratory (LINC), is working on automatic deepfake detection and digital watermarking of authentic content (source: CNIL, 2026).
Phone operators are also deploying AI filters to block fraudulent calls before they reach you. And web browsers are increasingly integrating automatic protections against phishing sites.
These developments are encouraging, but they don’t replace your personal vigilance. The reflexes described in this article remain your best protection.
Conclusion: Technology Changes, Reflexes Stay the Same
The scammers’ tools are getting more sophisticated, but the manipulation techniques remain identical: urgency, emotion, impersonation of an authority or a loved one. If you know these mechanisms, you’re already well protected.
If you only remember three things:
- A family password neutralises voice cloning
- Hanging up and calling back yourself defeats any identity fraud
- Urgency is always suspicious when it comes from a call, text, or email
And if you have any doubt, talk about it. Call your national fraud helpline. It’s never too late to ask for help.
Editorial note
Sources consulted: Cybermalveillance.gouv.fr 2025 activity report (March 2026), French Ministry of the Interior AI scam guide (masecurite.interieur.gouv.fr, 2025), CNIL deepfake statement (February 2026), CNIL “Deepfakes: how to protect yourself” guide (February 2026), Keepnet Labs 2025 phishing report, Microsoft/Ifop study on seniors and fake tech support scams (2025), Microsoft-Cybermalveillance.gouv.fr-Paris Public Prosecutor joint statement (July 2025), NCOA guide on AI scams for older adults (2025), American Bar Association (September 2025), anonymised testimonies via signal-arnaques.com, seneoo.com, and blog economie-numerique.net.
Limitations of this article: French statistics specifically on AI-powered scams targeting seniors are still incomplete, as the Cybermalveillance.gouv.fr report does not systematically distinguish AI-driven scams from traditional ones. Testimonies are anonymised and reported amounts cannot be independently verified. Scam techniques evolve rapidly; new variants may appear after this article’s publication.
Verification date: 16 April 2026
Conflicts of interest: none
Frequently Asked Questions
How much audio does a scammer need to clone a voice?
Just 5 to 10 seconds of voice recording (a WhatsApp voice message, a social media video, a voicemail greeting) is enough for AI software to faithfully reproduce a voice, including its tone, rhythm, and even hesitations. These tools are freely available online.
How can I tell whether a call is really from my loved one?
Ask a personal question that only your loved one would know the answer to (a shared memory, a pet's name). You can also agree on a family password in advance. If in doubt, hang up and call your relative back on their usual number.
Can deepfake scams happen on video calls too?
Yes, although these scams currently target businesses more often. In early 2024, a Hong Kong multinational lost $25 million after a video call where all participants were deepfakes. Individuals are increasingly targeted by fake video calls from supposed bank advisors.
What should I do if I’ve already sent money?
Contact your bank immediately to attempt a fund recall. File a report with your local police. Report the scam on cybermalveillance.gouv.fr (in France) or your national cybercrime reporting platform. Keep all evidence: phone numbers, messages, bank statements.
Are AI-generated phishing emails really harder to detect?
AI-generated emails are more convincing because they no longer contain the usual spelling mistakes and are personalised with real information. But some clues remain: suspicious sender address, urgent requests for money or personal information, links to unofficial websites.