Artificial intelligence (AI) experts, industry executives, and AI pioneer Yoshua Bengio have signed an open letter calling for stronger regulation of deepfake creation, citing potential risks to society among their reasons. For brief context: deepfakes allow people’s faces to be replaced in images and videos, and even their voices can be swapped for maximum realism, all thanks to technological advances that are very much real today.
These advances have made deepfakes increasingly indistinguishable from human-created content. In response, an open letter titled “Disrupting the Deepfake Supply Chain” has emerged, laying out recommendations for how deepfakes should be regulated. Notably, it advocates fully criminalizing deepfakes involving minors and imposing criminal penalties on anyone who knowingly creates or enables the dissemination of harmful deepfakes. It also demands that AI companies prevent their products from creating damaging deepfakes.
They want regulation of deepfakes, which are used for fraud, disinformation, and sexual imagery
AI has made great strides in a very short time, and deepfakes have advanced hand in hand with it. Because of this, over 400 people from sectors as diverse as academia, entertainment, and politics have signed the letter. Notable signatories include Andrew Weber, former US Assistant Secretary of Defense; Joy Buolamwini, founder of the Algorithmic Justice League; two former presidents of Estonia; researchers from Google DeepMind; and a researcher from OpenAI.
“Today, deepfakes often involve sexual imagery, fraud, or political disinformation. Since AI is progressing rapidly and making deepfakes much easier to create, safeguards are needed,” the group of experts said in the letter, which was drafted by Andrew Critch, an AI researcher at the University of California, Berkeley.
Ensuring that AI does not harm society has been regulators’ priority ever since OpenAI introduced ChatGPT in late 2022. ChatGPT went so viral, saturating servers, that every company joined a race to offer AI services. In roughly a year and a half, we have gone from a fledgling ChatGPT to OpenAI’s Sora, announced last week: an AI capable of generating videos from the text we write.
Deepfakes are an increasing threat to society, and governments must impose obligations across the supply chain to stop their proliferation. New laws should:
– Fully criminalize deepfake child pornography, even when only fictional children appear.
– Establish criminal penalties for anyone who knowingly creates or facilitates the dissemination of harmful deepfakes.
– Require software developers and distributors to prevent their audiovisual products from creating damaging deepfakes and hold them accountable if their preventive measures are too easily bypassed.
If designed sensibly, these laws could foster socially responsible businesses and need not be overly burdensome.
The post “Experts, and even the AI pioneer, urge greater regulation of deepfakes” first appeared on El Chapuzas Informático.