By Daniel Gao

Unmasking the Future: The Perilous Rise of Deepfakes

What are Deepfakes?

The term "deepfake" is a portmanteau of "deep learning" and "fake." It refers to the use of advanced artificial intelligence (AI) techniques, particularly deep learning, to create counterfeit videos, images, or audio that resemble authentic content so closely that they are difficult to distinguish from genuine material.

Dr Phil but everyone is Dr Phil :)


While deepfakes make for great skits, filters, and Star Wars features, they pose risks to both individuals and society as a whole. The technology holds potential for political manipulation: fabricated speeches, interviews, and events could sway public opinion and impact elections. Deepfake content can also introduce national-level cybersecurity threats and financial vulnerabilities through fraudulent images or voices of corporate executives and government officials. In a public service announcement, the FBI warned companies that people were using deepfake technology in remote interviews and to pass employee background checks. More disturbingly, deepfakes can be used to spread non-consensual sexually explicit content for revenge porn and defamation. Ordinary citizens like you and me need to keep both our credit cards and our biometric data safe from identity theft.

Regulation

Unfortunately, detectors like GPTZero are not enough to combat the widespread malicious use of AI. Detection systems built on Generative Adversarial Networks and blockchain verification are quick to train and deploy on social media platforms, but they are expensive, and a worldwide shortage of semiconductors limits the hardware available to run them. Additionally, Leibowicz, McGregor, and Ovadya found that every time a new deepfake detector is publicly published, its accuracy decreases as the overwhelming number of novel actors adapt to evade it. Even the algorithms of closed-source detectors from companies like Google can be reverse engineered to improve malicious technology.


A Generative Adversarial Network (GAN) built with the TensorFlow and Keras frameworks. A discriminator network learns from training data to label new images as real or fake, while a generator network learns to produce fakes that fool it. Fun fact: I've actually used TensorFlow and Keras to create a violence detector for camera footage!
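The real-or-fake labeling idea behind a GAN's discriminator can be sketched without any ML framework. The toy below is only a minimal illustration, not the TensorFlow/Keras model pictured: a one-feature logistic-regression "discriminator" learns to separate samples from a made-up "real" distribution and a made-up "fake" one (the distributions, learning rate, and epoch count are arbitrary stand-ins).

```python
import math
import random

random.seed(42)

# Stand-ins for features extracted from genuine vs. generated images;
# the two Gaussians are arbitrary choices for illustration.
real = [random.gauss(0.0, 1.0) for _ in range(200)]
fake = [random.gauss(4.0, 1.0) for _ in range(200)]

def sigmoid(z):
    # numerically stable logistic function
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

# One-feature logistic-regression "discriminator": label 1 = real, 0 = fake.
w, b = 0.0, 0.0
lr = 0.1
data = [(x, 1) for x in real] + [(x, 0) for x in fake]
for _ in range(50):                 # epochs of plain SGD
    random.shuffle(data)
    for x, y in data:
        p = sigmoid(w * x + b)
        w -= lr * (p - y) * x       # gradient of binary cross-entropy
        b -= lr * (p - y)

def label(x):
    return "real" if sigmoid(w * x + b) > 0.5 else "fake"

accuracy = sum(label(x) == ("real" if y else "fake") for x, y in data) / len(data)
```

In a full GAN, a generator would simultaneously be trained to produce samples this discriminator misclassifies, and the two networks improve against each other; that adversarial loop is exactly why published detectors degrade once attackers can probe them.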


Legislative solutions are absolutely necessary to close the many loopholes in government laws that allow the rapid proliferation of deepfake content. New laws must create detailed definitions of deepfakes, strike a balance between freedom of speech and malignant disinformation, create regulatory committees to supervise large media platforms, and establish consequences and fines to deter the spread of deepfakes. A robust legal framework can then be modified and applied across industries like business and entertainment. Because of the numerous ways people can be victimized by deepfakes, a range of laws needs to be created to protect against each threat:

  • Licensing control: Similar to how other dangerous technologies are regulated, AI manufacturers could be subjected to requirements for transparency, robust detection tools, and cooperation with law enforcement.

  • Intellectual Property and Copyright Infringement: Under Canada's Copyright Act, R.S.C., 1985, c. C-42, a copyright owner has the sole right to produce or reproduce a work in any material form. Deepfake content that modifies and republishes existing videos or images could be ordered destroyed at the request of the owner.

  • Defamation: False deepfake content intended to harm another's reputation would be prohibited from dissemination, and the targeted person would be entitled to an award of damages. The only way to potentially avoid a charge of this nature would be to release a disclaimer alongside the deepfake content.

  • Trademark: A trademark law could prevent false endorsement, or non-consensually using a celebrity's face or voice in an advertisement. This would prevent false advertising and even manipulation of markets, as select individuals like Elon Musk have huge influence over some industries.

  • Privacy: A privacy law could protect people from any deepfake that exposes personal information about a victim. Given the fabricated nature of deepfakes, however, this may not be a major concern.


Real-estate startup reAlpha uses a deepfake version of Elon Musk in a recent ad

Precedents

Various nations and US states have already developed their own unique sets of laws and regulations in response to deepfakes.

  • The Cyberspace Administration of China released a regulation in January called the Deep Synthesis Provisions, which demands that any content created using an AI system be clearly labeled with a watermark indicating that it has been edited. It also lays out guidelines for how end users may use AI products.

  • In the US, deepfake-specific laws exist in only a few states. Texas has banned deepfakes created to influence elections, Virginia has banned deepfake pornography, and California has banned both malicious deepfakes within 60 days of an election and nonconsensual deepfake pornography. National legislation is urgently needed; if the creator of illegal deepfake content lives outside state lines, victims of deepfake pornography have no means of recourse. According to Karasavva and Noorbhai, about 96% of deepfakes are pornographic in nature.

  • Singapore has the Protection from Online Falsehoods and Manipulation Act (POFMA), which counters false statements of fact communicated in Singapore via the internet. It also has a Personal Data Protection Act (PDPA), which governs the collection of personal data and prevents its misuse.

  • The European Commission has proposed the AI Act in the European Union (EU), an umbrella approach to AI regulation that would include subjecting deepfake providers to transparency and disclosure requirements.

  • South Korea has a law that makes it illegal to distribute deepfakes that "cause harm to public interest," imposing on offenders penalties of up to five years in prison or fines of up to 50 million won (roughly 43,000 USD).

Conclusion

Legal frameworks evolve much more slowly than technology. It is also difficult to prosecute criminals who spread deepfake content because of the anonymity the Internet offers. Additionally, the Communications Decency Act protects service providers from liability for the actions of their users, stating that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

Thus, as deepfake technology advances, countries should enforce a mixture of technological, legislative, and media-education solutions. Governments could organize curricula and resources to inform the public about the existence of deepfakes without spending funds on research and technology. Awareness would increase healthy skepticism of media and, executed properly, could disincentivize the creation of deepfake technology. Public awareness is a mostly decentralized solution that depends on the strength of circulation, which may be a challenge in rural areas. Each method has its drawbacks, but a three-pronged approach of prevention, detection, and legal response is a comprehensive solution to minimize the threats of deepfakes.


In the end, you have the greatest power to protect yourself from deceit! It's easy to be manipulated when you want to believe something, so remember to keep an open yet skeptical mindset, check multiple credible sources, and report disinformation whenever possible!


An ad campaign for magazine Brill's Content



Leibowicz, McGregor & Ovadya, "The Deepfake Detection Dilemma: A Multistakeholder Exploration of Adversarial Dynamics in Synthetic Media." https://arxiv.org/pdf/2102.06109.pdf

Karasavva & Noorbhai, "The Real Threat of Deepfake Pornography: A Review of Canadian Policy." https://pubmed.ncbi.nlm.nih.gov/33760666/
