Deepfakes are hyper-convincing videos doctored to show something that never actually happened. On Thursday, June 13, the House Permanent Select Committee on Intelligence took a deeper look at the deceptive chaos the emerging technology might bring and convened a panel of experts to examine ways to prevent its abuse.
Per the committee, the hearing addressed manipulated media, national security, and the artificial intelligence components associated with “deepfake” technology. The law and technology panelists lined up to offer testimony included:
- Danielle Citron, a law professor at the University of Maryland’s Francis King Carey School of Law and an Electronic Privacy Information Center (EPIC) board member
- Jack Clark, the policy director for OpenAI
- David Doermann, SUNY Empire Innovation Professor and director of the Artificial Intelligence Institute at the University at Buffalo
- Clint Watts, a Distinguished Research Fellow at the Foreign Policy Research Institute and a Senior Fellow for the Alliance for Securing Democracy, German Marshall Fund
More Than One Approach to Exposing Deepfakes
Citron said a combination of factors, including the law, would be needed to combat malicious uses of doctored media. "We need a combination of law, markets, and societal resistance," said Citron, according to information from EPIC, adding that "the phenomenon is going to be increasingly felt by women and minorities."
EPIC has called for its “Universal Guidelines for Artificial Intelligence” to serve as the foundation of any federal laws governing the use of artificial intelligence, a key component in the spread of misinformation. The guidelines include several tenets, such as the:
- Right to Transparency
- Right to Human Determination
- Identification Obligation
- Fairness Obligation
- Accuracy, Reliability, and Validity Obligations
- Public Safety Obligation
- Termination Obligation
At the hearing, Clark called for funding, education, and “institutional intervention” to mitigate the confusion caused by misinformation spread through the technology.
From Twitter:
Ashley Pilipiszyn @apilipis
"@jackclarkSF calls for institutional interventions, increased funding, measurement, norms, and comprehensive AI education as pathways to address the national security challenges of AI, manipulated media, and 'deepfakes': https://www.youtube.com/watch?v=tdLS9MlIWOk …
House Committee’s Purview Is Expansive, Scope Wide
According to information provided by the congressional committee, the topics associated with deepfake technology span law, politics, security, and human interest. They include:
- Future advancements
- Detecting and tracking
- How deepfakes might allow individuals to dismiss authentic media as fake
- Their psychological impact
- Counterintelligence fears
- Internet platforms’ role in mitigating fake information
- Legal challenges
- The appropriate role of the federal government
“Deepfakes raise profound questions about national security and democratic governance, with individuals and voters no longer able to trust their own eyes or ears when assessing the authenticity of what they see on their screens,” reads information from the committee.
Promotional materials for the hearing describe it as a first of its kind: to date, no House committee hearing has been dedicated specifically to deepfake technology and other AI-generated faux data.