Can an AI image-to-video feature put children at risk?

Mr. Jindal

On June 22, Reddit co-founder Alexis Ohanian posted a childhood photo of his mother and himself. In the picture, both are wearing red sweaters while hugging each other against a mountain backdrop.

Alongside the image, Mr. Ohanian posted an AI-generated video that brought the picture to life: the mother and child cuddle each other as the wind ruffles their hair.

“Damn, I wasn’t ready for how this would feel. We didn’t have a camcorder, so there’s no video of me with my mom,” posted Ohanian on X (formerly Twitter). “I dropped one of my favorite photos of us in midjourney as ‘starting frame for an AI video’ and wow… This is how she hugged me. I’ve rewatched it 50 times.”

The post quickly went viral, garnering well over 20 million views. While many empathised with Mr. Ohanian’s act of turning a cherished family photo into a video, he also drew sharp criticism: many X users accused him of creating “false” memories, of damaging his ability to grieve his mother in a healthy way, and of seeking comfort in an interaction he had manufactured.

The capacity to turn images into videos is not limited to tools like Midjourney. In recent weeks, multi-billionaire Elon Musk announced ‘Grok Imagine’, which lets users generate short videos from text or image prompts. Google, in July, rolled out a ‘Create’ mode in its Photos app that transforms photos into short videos for U.S.-based users. Smaller platforms also offer to turn users’ photos into AI videos.

AI tools have been used for years to enhance old media through a process called AI upscaling: removing blurred parts, pixelation, and grain to deliver better output. While GenAI has made this process faster and easier, it also allows users to morph and manipulate images with advanced tools that can remove objects and fill in missing spaces.

The jump in technology comes with legal questions to consider, as permission is usually required before making significant edits to a copyrighted creation. Ethical conundrums also arise when a person manipulates a photo featuring someone who is no longer alive. Above all, users need to consider the impact on a photo’s most vulnerable subjects: children.

Children’s rights and safety at risk

For instance, cybercriminals can now easily create realistic AI videos of minors from their publicly available photos. In the past, criminals have targeted minors by generating synthetic nude photos of them to extort money. One such case in the U.S. led to a teenager dying by suicide; his family was not aware that he was being harassed.

Data protection lawyer and AI specialist Kleanthi Sardeli, who works with the Vienna-based NGO noyb and advocates for consumers’ digital rights, said that turning still images into video clips could be done for innocent reasons but that there are “serious implications” to consider as well.

“The lower the barriers to creating realistic content, the more we also need to think about ethics, consent and context. A photo can be turned into a convincing video without the knowledge or consent of the person depicted, increasing the risk of deepfakes, defamation and abuse,” said Ms. Sardeli, adding that the risks multiplied when photos of children were involved.

She explained that under the EU’s GDPR, children cannot legally consent to such use of their personal data, including their image, until they turn 16 (though individual member states may lower this threshold to as low as 13).

Though experts and lawmakers have called on AI companies to enforce strong guardrails that prevent chatbots from generating sexually explicit media, the reality is that many chatbots easily churn out such content. What is more, AI firms and their bosses are aggressively promoting these services. One video Mr. Musk shared to promote Grok Imagine’s capabilities, for instance, depicted a fantasy-style clip of a scantily clad winged woman.

Across the internet, meanwhile, websites lure users with morphed pornographic videos featuring celebrities and invite them to digitally undress victims of their choice.

“Beyond obvious dangers such as CSAM (Child Sexual Abuse Material), less malicious uses, such as animating a child’s photo for advertising or entertainment purposes, can also jeopardise children’s privacy, dignity and autonomy,” said Ms. Sardeli.

Gatekeepers and guardrails

The Hindu reached out to both Google and xAI about the safeguards in place on these platforms to restrict users from turning photos of children into videos, and whether there are content filters to stop photos from being turned into pornographic content or child abuse material.

A Google spokesperson said that the company took child safety online seriously and that the photo-to-video capability in Google Photos could be used with only two prompts: “Subtle movements” and “I’m feeling lucky”.

Furthermore, these videos would include an invisible SynthID digital watermark, as well as a visual watermark, the company said.

“Our safety measures include extensive ‘red teaming’ to proactively identify and address potential issues, as well as thorough evaluations to understand how features can be used and prevent misuse. We also welcome user feedback on issues, which we use to make ongoing improvements to our safety measures and overall experience. We have clear policies and Terms of Service on what kinds of content we allow and don’t allow, and build guardrails to prevent abuse,” said the Google spokesperson.

“Google Photos is a place to store your memories and we want our users to be able to use its fun creative tools on photos of their friends and family, including their kids, while also prioritising safety,” the company said.

xAI did not respond to a request for a statement.

In the U.S., the National Center for Missing and Exploited Children (NCMEC) has said it is “deeply concerned” about how Generative AI is being used to sexually exploit children.

“Over the past two years, NCMEC’s CyberTipline has received more than 7,000 child sexual exploitation reports involving GAI [Gen AI], and the numbers are expected to grow as we continue to track these trends,” stated the organisation on its website.

Meanwhile, Ms. Sardeli noted that existing laws in the EU provide some safeguards but were not designed with AI-generated content in mind. EU child protection laws clearly prohibit explicit material, she said, but are less clear on synthetic media that is not overtly illegal yet is still exploitative or harmful.

In India, the Ministry of Electronics and Information Technology (MeitY) has issued advisories requiring platforms to remove morphed content (including AI deepfakes), especially if the content is graphic or sexually abusive. Platforms such as Meta, Google, and X have also appointed grievance officers in India to handle complaints from affected users.

“AI providers are beginning to build in safeguards, like detection systems and content filters, but these are uneven across platforms and not always effective. The law is lagging behind the technology. In particular, there is no comprehensive global framework that addresses the misuse of children’s likenesses in GenAI,” explained Ms. Sardeli.

“Stronger rules around consent, transparency, and accountability are needed, along with technical standards that make it harder to misuse children’s photos.”

Safety tips for turning photos into AI videos

Do not share intimate or sensitive photos with Generative AI platforms when trying to turn them into videos, as you may be giving up your ownership and usage rights over the image

It is best practice to only convert a photo into an AI video if you have the informed consent of all those featured in the photo, as well as the photographer

Do not turn other people’s copyrighted or personal images into AI videos, as this may lead to legal action being taken against you

If a photo of you or someone you know has been turned into an AI video without consent, report the content to the tech platform as well as to the company’s assigned Grievance Officer in India, who is responsible for ensuring the platform complies with Indian laws

If you are a parent or caretaker, refrain from sharing photos that feature minors or other vulnerable people on public platforms, as these images can be easily stolen or misused by cybercriminals using AI tools

Avoid using third-party platforms to turn personal photos into AI videos, especially if the images feature children or other vulnerable people. Instead, opt for private or on-device photo-to-video AI tools for security reasons

Have an open discussion with children and adolescents about the increasing risks of AI deepfakes, and urge them to confide in trusted adults or approach the police if they are ever targeted with morphed images of themselves

(Those in distress or having suicidal thoughts are encouraged to seek help and counselling by calling the helpline numbers here)
