Every day, millions of people watch videos, view photos, and read news online, but not everyone stops to question whether what they see is true. Children grow up in an environment where any piece of information can appear convincing, even if it has been artificially created or deliberately distorted.
Fake and manipulative content has become a tool of psychological pressure on society. A child who encounters a fabricated video or a fake news story may become frightened, believe it, or share it further, without even realizing that the content is false.
Understanding how fake content and AI-generated manipulations work is an essential skill for every family member. In this article, we explain what fake content and deepfakes are, how to recognize them, and how to talk about them with children. Although children of all ages may encounter online risks, this material is primarily useful for adults and teenagers, as most of the recommendations relate to information analysis and critical thinking.
What Is Fake Content and How Does It Work
A fake is false or deliberately distorted information presented as truth. It can take many forms:
- a fabricated news story;
- a real video with a misleading caption;
- a photo taken in a different place or time;
- a quote falsely attributed to someone.
Fake content is designed to trigger an emotional response. People are more likely to share information that shocks, frightens, or outrages them. That is why creators of fake content often choose provocative topics such as threats, disasters, or sensational accusations. The stronger the emotion, the less likely a person is to pause and verify the facts.
It’s important for children to understand: if a headline triggers strong emotions — fear, anger, or even a sense of satisfaction at someone else’s misfortune — it’s worth reading the full article rather than stopping at the first line. More often than not, the content doesn’t actually support what the headline promises.
Typical trigger phrases include:
- “Scientists have proven that…”
- “Shocking video…”
- “Everything you knew is wrong”.
Another sign of manipulation is the lack of specificity. Reliable journalism refers to specific people, documents, or organizations. Phrases like:
- “sources say”
- “according to some experts”
- “it has become known”
— do not actually specify a real source of information.
Fake content spreads quickly because it aligns with people’s expectations or confirms what they already believe. The human brain tends to seek confirmation of existing beliefs, and this is actively exploited by those who deliberately fabricate information.
What Is a Deepfake and How to Recognize It
A deepfake is synthetic media created using artificial intelligence. Most often, this refers to videos or audio recordings in which a real person’s face is replaced, or in which a person appears to say things they never actually said. The term “deepfake” comes from a combination of “deep learning” (a machine learning method) and “fake.”
Creating deepfakes no longer requires advanced technical skills or specialized equipment. The tools are widely accessible, and their quality has improved to the point where distinguishing fake from real content is becoming increasingly difficult.
According to Deepstrike, by 2025 the number of deepfake files online had reached 8 million — compared to 500,000 just two years earlier. In 2024, a new deepfake attempt appeared every five minutes.
The scale of the problem is compounded by the fact that most people overestimate their ability to identify fakes. While many believe they can recognize a deepfake, actual accuracy when viewing high-quality fabricated videos is only about 24.5%.
“Today, it is important to speak about deepfakes as a tool of disinformation and a serious form of online abuse. Information attacks launched by the enemy are evolving: while AI was previously used mainly to create fake content about military personnel or politicians, children are now increasingly becoming targets as well.
AI-generated content that manipulates a child’s fears can cause very real psychological harm. That is why a child’s right to safety in the online space must be considered a fundamental part of protecting their rights. Advocating for this at the level of state mechanisms is one of the top priorities.”
Despite rapid technological development, some signs of manipulation can still be detected:
- Lighting and shadows in deepfakes often don’t follow real-world physics. The direction of shadows on a face may not match those in the background, and the skin can appear unnaturally glossy.
- Teeth are one of the hardest details for AI to replicate: they may look overly perfect, blended together, or slightly blurred.
- Lip-sync errors are another common giveaway; delays between speech and mouth movement are often noticeable to the naked eye.
- Emotions can also reveal a fake: smiles may appear stiff, and the subtle wrinkles that accompany genuine expressions are often missing.
Audio deepfakes have become a serious threat in their own right. Voice-cloning tools such as ElevenLabs, Resemble AI, and PlayHT are now widely accessible. Microsoft’s VALL-E, for example, can imitate a real person’s voice from as little as three seconds of original audio, while 20–30 seconds is enough to generate a far more convincing fake.
A voice may sound almost natural. But if something about it feels mechanical, it’s worth trusting that instinct.
Signs of fake audio include (one of these cues is illustrated in the short sketch after this list):
- uneven speech rhythm;
- unusual pauses;
- unnatural shifts in intonation;
- artificial background noise.
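For technically curious parents and teenagers, the “uneven rhythm” and “unusual pauses” cues can even be explored in code. Below is a minimal, purely illustrative Python sketch, not a real deepfake detector (professional detection relies on trained models). It assumes the librosa and numpy libraries are installed; the file name is a placeholder and the 0.5-second threshold is an arbitrary choice for illustration.

```python
# Toy illustration only: measure pauses in a voice recording using
# simple energy-based silence detection. Not a deepfake detector.
import librosa
import numpy as np

# "voice_message.wav" is a placeholder file name.
y, sr = librosa.load("voice_message.wav", sr=None)

# Intervals of non-silent audio: anything louder than 30 dB below
# the peak is treated as speech.
speech = librosa.effects.split(y, top_db=30)

# Gaps (in seconds) between consecutive speech intervals.
pauses = [
    (start - prev_end) / sr
    for (_, prev_end), (start, _) in zip(speech[:-1], speech[1:])
]

if pauses:
    print(f"{len(pauses)} pauses, longest {max(pauses):.2f} s, "
          f"spread {np.std(pauses):.2f} s")
    # Arbitrary threshold: a very uneven pause rhythm is only a cue
    # to listen again more carefully, never proof of a fake.
    if np.std(pauses) > 0.5:
        print("Pause rhythm looks uneven; worth a closer listen.")
```

A script like this is best treated as a conversation starter about how detection heuristics work, not as a verdict on any recording.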
“One of the most alarming trends human rights advocates face today is the use of these technologies by Russian intelligence services to recruit Ukrainian teenagers. Through anonymous chats or gaming platforms, they create the illusion of easy money, reinforcing it with generated videos or fabricated voice messages. When a child tries to withdraw, they resort to blackmail, threatening to create compromising deepfakes or send fabricated evidence to law enforcement.”
According to her, it is crucial for both children and parents to understand that AI and fake content can be used as tools of sabotage. Explaining this to children is therefore a matter of both physical safety and national security.
How to Explain to a Child that Not Everything Online Can Be Trusted
For younger children, it helps to explain fake content through a familiar situation: imagine someone at school tells the whole class that there will be no lessons tomorrow, but that isn’t true. Half of the class doesn’t show up. The teacher has to reschedule a test. Those who came wasted their time waiting. The learning process for that day is disrupted — all because of one made-up statement that no one checked.
Online, the scale is bigger, but the mechanism is the same: someone invents a story or changes the caption under a real photo, and it starts to circulate across the Internet. Each time it is shared, it appears more credible. But the number of shares does not make the information true.
Teenagers can be told that technologies now exist that allow anyone to create a video of any person saying anything, and that fake content is sometimes spread with specific goals: to frighten, confuse, or push people into actions they would not take if they had full information.
It is important to teach a child to question any content. If something triggers a strong reaction — such as fear, anger, or outrage — that is already a signal to pause and verify.
Together, you can run through a few simple questions:
- Who published this and why?
- Is there a link to the original source?
- Is the information confirmed by other media outlets?
- Does the headline feel overly emotional or provocative?
A reverse image search using Google Images or TinEye can help determine whether a photo has appeared before in a different context. Information can also be verified through fact-checking platforms such as StopFake or VoxCheck.
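For readers who want to see how that first step can be scripted, here is a minimal Python sketch that opens URL-based reverse image lookups in the browser. The image address is a placeholder, and the two lookup endpoints are assumptions based on the publicly visible URL formats of Google Lens and TinEye, which may change over time.

```python
# Minimal sketch: open reverse image searches for a photo that is
# already published online. The image URL is a placeholder, and the
# endpoints are assumed public URL formats, not an official API.
import webbrowser
from urllib.parse import quote

image_url = "https://example.com/suspicious-photo.jpg"  # placeholder

# Google Lens lookup by image URL.
webbrowser.open(
    "https://lens.google.com/uploadbyurl?url=" + quote(image_url, safe="")
)

# TinEye lookup by image URL.
webbrowser.open(
    "https://tineye.com/search?url=" + quote(image_url, safe="")
)
```

In everyday use, simply dragging the image into images.google.com or tineye.com does the same job without any code.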
How Parents Can Talk to Children About AI and Fake Content
Artificial intelligence is already part of children’s daily lives — in video recommendations, voice assistants, games, and even school assignments. Parents do not need to be technical experts to talk about it with a child.
It is worth explaining that AI is a tool, like a hammer or a pencil: it can be used in different ways. It helps doctors, scientists, and artists. But the same technology can also be used to create fakes, and that is important to understand. A child who knows that AI-generated content exists is already less vulnerable to manipulation.
It can be useful to look together at a few examples of generated images or deepfakes without exposing the child to disturbing or harmful content. Seeing how it works helps children recognize the signs. You can ask, “What feels off here? What looks unnatural?”
A conversation about AI is also a chance to talk about values, such as honesty, responsibility for what we share, and the ability to pause and think before reposting. The ability to distinguish emotional impact from facts is one of the most important skills today. The simple question “What exactly is being claimed here, and what evidence supports it?” is often the strongest protection against manipulation.
What to Do If Fake Content Is Created Using Your Child’s Image
A situation where a child’s photo or video has been used to create fake content is serious and requires a clear response from parents.
- Step 1. Document the violation before reporting it for removal. Alla Perfetska advises making screen recordings and taking screenshots that clearly show URLs, publication dates, and the usernames of those responsible. Only with this digital evidence can the cyber police stop the spread of the material and hold the perpetrators criminally accountable.
- Step 2. Report the content on the platform where it was published. Most major social media platforms have procedures for removing content that violates children’s rights or involves manipulated material featuring minors. In your report, clearly state that the image depicts a minor and that the content is fabricated.
- Step 3. Contact law enforcement in serious cases. This is especially important if the fake content is sexualized or used for blackmail. In Ukraine, the distribution of sexualized content involving children carries criminal liability. Proper documentation of evidence is, therefore, critical.
“The accessibility of AI has created a new scale of threats, including a surge in online sexual extortion targeting children. Offenders, or even peers, may only need to save a photo from a child’s social media to generate a fake sexualized image.
From a legal perspective, it is important to understand that Ukrainian law protects the child regardless of whether the image is real or AI-generated. Under the Criminal Code of Ukraine, creating such AI-generated content is unequivocally classified as the production of child pornography. And that is a serious criminal offense.”
If your child has encountered this, prioritize their safety and emotional needs. It is important to talk to them. They may feel shame, fear, or helplessness, and these reactions are completely natural. Make it clear: they are not to blame. Responsibility lies with those who created and distributed the fake content.
This material was prepared by the Voices of Children Charitable Foundation within the project “Improving Child Protection and MHPSS (Mental Health and Psychosocial Support) Mechanisms for Children, Adolescents, and Families Affected by the Conflict in Ukraine” in partnership with the international NGO Plan International with funding from the German Federal Foreign Office (GFFO).
If your child is experiencing severe stress, seeking support from a psychologist might be helpful and appropriate. Specialists from the Voices of Children Foundation provide free psychological support to children and families facing difficult circumstances.
At our regional centers, children and teenagers can find a supportive community of peers, receive psychological support, and take part in creative activities and games. If needed, anyone can also contact our free psychological support helpline for children and parents at 0 800 210 106.