Minnesota lawmakers want to stamp out deepfake misinformation


Nothing seemed amiss in Rep. Zach Stevenson's testimony before a House committee on why the Minnesota Legislature should crack down on the use of so-called deepfake technology.

Using artificial intelligence, the technology can manipulate audio and video to create realistic images of people saying or doing things that never happened, he said. Deepfakes have been used to create sexually explicit videos of people or fake political content designed to influence elections.

The Coon Rapids Democrat then paused for a reveal: his comments up to that point had been written by the AI software ChatGPT from a one-sentence prompt. He wanted to demonstrate the "advanced nature of artificial intelligence technology."

It worked.

"Thank you for this disturbing testimony," replied Rep. Mike Freiberg, DFL-Golden Valley, chair of the Minnesota House elections committee.

The proposal represents a first attempt by Minnesota lawmakers to stop the spread of misinformation through the technology, especially when it is used to influence elections or to spread fake sexual images of someone without that person's consent.

In Minnesota, it is already a crime to post, sell or distribute sexually explicit images and videos without the depicted person's permission. But that revenge porn law was written before much was known about deepfake technology, which has already been used in Minnesota to distribute realistic, but not real, sexual images of people.

Stevenson's bill would make it a felony to knowingly distribute sexually explicit deepfake content depicting an identifiable person without that person's permission.

The proposal is modeled after laws California and Texas have already passed to curb the use of the technology in those situations, as well as when deepfakes are used to influence elections. The bill would create penalties for anyone who uses the technology within 60 days of an election to try to discredit a candidate or otherwise influence voters.


Deepfake videos have yet to become a major issue in state or national elections, but national groups are sounding the alarm that the technology has evolved rapidly in recent years.

Republicans on the committee supported the idea, concerned about the possibility that the technology could create false information that could spread quickly on social media.

"I saw a video of an artificial intelligence where they literally listened to someone's voice for a short time, 20 or 30 seconds, and then were able to duplicate that voice," said Rep. Pam Altendorf, R-Red Wing. "At the same time it was completely fascinating to me, it was also completely terrifying to think what people could do with it."

The House elections committee advanced the proposal on a vote last week.

"Hopefully we can resolve this before Skynet becomes self-aware," Freiberg said after the vote.
