Cloned voices, fake videos, perfect phishing texts – artificial intelligence makes fraud more convincing than ever before. Here's how to spot the scams.
Just a few years ago, deepfakes were a technological gimmick. Today, they pose a serious threat. Criminals are using AI to clone voices, fake faces in videos, and create phishing messages that are linguistically perfect and psychologically precise. According to the Identity Fraud Report 2025–2026, deepfake attacks on German companies increased by over 53 percent last year. A McAfee study found that one in four people has already been the target of a voice cloning attack or knows someone who has been. The German Federal Office for Information Security (BSI) warns of a fundamentally changed threat landscape—and this affects not only companies but also private individuals.
How AI fraud works
The foundation for AI-powered fraud is publicly available data. Attackers often need only a few seconds of audio material to clone a voice – a WhatsApp voice message, a social media video, or a podcast appearance is enough. The AI analyzes tone of voice, speech rhythm, and melody, and can then generate any sentence in that voice. To the human ear, the result sounds almost indistinguishable from the original.
For video deepfakes, criminals train AI models using publicly available photos and videos. The artificial face is superimposed onto someone else's video and synchronized with the cloned voice. What once required expert knowledge and expensive hardware is now possible with freely available tools. Complete fraud kits are offered as a service on the dark web – the barrier to entry for criminals is rapidly decreasing.
AI has upgraded fraudulent texts as well. Language models generate phishing emails and smishing messages without spelling mistakes, in flawless German, and in the communication style of the sender they impersonate. The old rule of thumb "bad grammar = fraud" no longer works.
The most common AI scams
Grandparent scam 2.0 – the shock call with an AI voice: The classic scam in a new dimension. Scammers call – often late at night or on weekends – and use a cloned voice to imitate a family member. The voice describes a dramatic emergency: an accident, an arrest, a hospital stay. Under emotional pressure, the callers push their victims to transfer money immediately. The police explicitly warn against this scam, which particularly targets elderly people.
CEO fraud using deepfakes: Criminals imitate the voice or face of an executive via phone or video call and order urgent money transfers. The British engineering firm Arup lost $25 million to such an attack. In Switzerland, an entrepreneur transferred several million Swiss francs after hearing the cloned voice of his business partner. This hybrid attack—AI-generated email plus a deepfake call for confirmation—renders traditional verification methods ineffective.
Deepfake advertising featuring celebrities: Scammers create fake videos in which well-known personalities promote investments, cryptocurrencies, or dubious products. A recent example: A deepfake video of entrepreneur Reinhold Würth, allegedly promoting fraudulent investments, was circulated on social media. The Würth Group confirmed the forgery and initiated legal proceedings.
AI-optimized phishing and smishing: Instead of sending out masses of identical messages, criminals use AI to create individualized texts tailored to the recipient – based on publicly available information from social networks, professional profiles, or previous data leaks. This makes the messages appear personal and credible.
Recognizing warning signs of AI fraud
Even though AI-generated content is constantly improving, there are patterns that reveal fraudulent attempts.
When receiving calls: Be suspicious of any unexpected call from a familiar voice demanding money or putting you under time pressure. Pay attention to unusual pauses in the conversation, slight delays in response, or an unnaturally steady speaking style. Ask a personal verification question that only the real person could answer—for example, about a shared experience that isn't publicly known. Hang up and call the person back using a number you know to be theirs.
When watching videos: Look out for unnatural lip movements, inconsistent lighting, lack of eye movement, or strange transitions at the hairline and ears. Deepfake videos can show artifacts during rapid head movements. Always be wary of videos in which well-known personalities promote investments or products – especially if they are only shared via social media.
When dealing with texts: Pay attention to the context, not just the language. The message may be grammatically perfect, but does the content fit the person? Is the sender requesting something unusual? Does the message create a sense of urgency? Is sensitive data being requested? If so, it is most likely a scam attempt – no matter how well the text is written.
Here's how to protect yourself and your family
Agree on a family code word. Establish a secret code word within your family or close circle of friends that you can ask for during suspicious calls. Anyone who doesn't know the word is not who they claim to be – no matter how convincing their voice sounds. This simple principle is currently the most effective protection against voice cloning.
Limit publicly available audio material. The fewer voice recordings of you that are publicly accessible, the harder it is to clone your voice. Check which videos and voice messages you share on social media. Set your Instagram, TikTok, and Facebook profiles to private whenever possible.
Never transfer money based on a single phone call. No matter how urgent the situation seems, a legitimate request for money can always be verified. Call the person back on a number you know, contact other family members, or ask to meet in person.
Enable two-factor authentication. Even if fraudsters obtain a password through AI-optimized phishing, 2FA protects your accounts. Our article Creating & Managing Secure Passwords shows you how to set it up.
Use your iPhone's call filters. iOS 26 offers a feature called Call Screening that automatically asks unknown callers for their name and reason for calling before your iPhone rings. This effectively blocks robocalls and automated scam calls. You can find all the details in our article Setting up your iPhone correctly: Checking, blocking, and filtering calls.
The new reality of digital deception
AI fraud isn't going to decrease. On the contrary: the tools are becoming better, cheaper, and more readily available. The most important defense therefore remains human: healthy skepticism towards unexpected contact, verification via a second channel, and the willingness to pause before acting under pressure. Technologies like call filtering, two-factor authentication, and strong passwords form the safety net, but ultimately your behavior makes the difference.
Frequently Asked Questions: How to Detect AI Fraud
What is a deepfake?
A deepfake is audio or video material generated by artificial intelligence that deceptively imitates a real person. Criminals use deepfakes to impersonate someone else and commit fraud – for example, through cloned voices on the phone or fake videos on social media.
How can criminals clone a voice?
Attackers often need only a few seconds of audio from a social media video, voice message, or podcast to clone a voice. The AI analyzes tone of voice, speech rhythm, and melody, and can then generate any sentence in that voice. The result is virtually indistinguishable from the original to the human ear.
How do I recognize a call with a cloned voice?
Be wary of unexpected calls from a familiar voice demanding money under time pressure. Warning signs include unusual pauses in the conversation, slight delays in response, or an unnaturally steady speaking style. Ask a personal verification question that only the real person could answer, and if in doubt, hang up and call back using a number you know belongs to the real person.
What is a family code word?
A family code word is a secret word known only to family or close friends. If you receive a suspicious call, ask for the code word – if the caller can't provide it, it's a scam. This simple principle is currently the most effective protection against voice cloning attacks.
Does the iPhone's Call Screening protect against deepfake calls?
iOS 26 offers a feature called Call Screening that automatically asks unknown callers for their name and reason for calling before your iPhone rings. This helps intercept automated scam calls. Against targeted attacks using the cloned voice of a known person, however, human judgment remains the most important defense: call back on a number you know, or ask for the family code word.
How can I prevent my voice from being cloned?
Limit publicly accessible voice recordings. Review which videos and voice messages you share on social media and set profiles to private whenever possible. The less audio material of you that is publicly available, the harder it is for attackers to clone your voice.
How do I recognize a deepfake video?
Pay attention to unnatural lip movements, inconsistent lighting, missing or rigid eye movements, strange hairline transitions, and artifacts from rapid head movements. As a general rule: be wary of videos in which well-known personalities promote investments or products, especially if they are only distributed via social media.