Image by Jerzy Górecki from Pixabay

Victim Battles to Remove Deepfake Lewd Videos from TikTok: A Disturbing Invasion of Privacy

A woman in Toronto is fighting to have stolen images of herself removed from social media, suspecting that deepfake technology was used to alter them.

The case highlights how Canadian law is struggling to keep pace with developments in artificial intelligence (AI).

Speaking anonymously, she said she fears being targeted because of the manipulated images circulating online.

Three months ago, she discovered a fake TikTok account using her photos in explicit AI-generated videos without her consent. Despite her attempts to report the account to TikTok, it remains active.

The woman, a law student, described the emotional distress caused by the violation of her privacy and the damage to her reputation.

Law enforcement struggles to address such cases effectively because there is no specific legislation criminalizing non-consensual deepfake content depicting adults.

Proposed legislation such as Bill C-16, the Protecting Victims Act, aims to criminalize sexual deepfakes, offering hope to victims seeking justice.

The Ministry of Justice emphasizes the importance of updating laws to combat modern technology-driven crimes, such as deepfakes.

The woman’s plea for legal protection against such malicious acts underscores the urgent need for comprehensive regulations in the digital age.
