
SF lawmakers tackle AI deepfakes in local elections

Olivia Wise/The Examiner

In January, New Hampshire voters received robocalls in which President Joe Biden seemed to urge them not to participate in the state’s primary election.

But the call was a fabrication, likely generated with artificial-intelligence deepfake technology, which has advanced rapidly in recent years and can now produce increasingly convincing video and audio clips of people saying and doing things they’ve never actually said or done.

Incidents such as the Biden robocall have stoked concern among some San Francisco city leaders, who worry that deepfakes could be used here to sow political misinformation during a critical election year.

Worried that federal and state action to confront AI-generated election interference will be too slow, one supervisor has launched a fact-finding mission to combat the problem locally, even as the full extent of the quickly evolving challenge is only now beginning to come into focus.

“There seems to be a consensus that this is an area where there’s significant risk, and we need to get ahead of those risks and protect the integrity of our elections,” said Supervisor Dean Preston, who said he suspects that he has been the target of AI-generated misinformation.

The progressive lawmaker will lead a hearing at the Board’s Rules Committee — tentatively scheduled for April 22 — to consider what actions The City might take to address AI-related threats to the local election process.

During a recent interview, Preston told The Examiner that he has not yet formulated any specific policy proposal. However, his office is considering labeling requirements for AI-generated election materials and measures that would outright ban false and misleading deepfakes.

“The purpose of having this hearing is to really flesh out what exactly is allowed, what is prohibited, and where are the opportunities to tighten up the rules to make sure that voters aren’t misled,” said Preston, whose office has not yet found any existing city laws regulating AI.

While The City appears to have made it through the March primary without any widespread disinformation campaigns, many view AI interference as an inevitability.

Preston’s team is conferring with the City Attorney’s office about what legal complications could arise if such rules were introduced. Any measure placing restrictions on AI-generated content could run into First Amendment protections, Preston acknowledged.

“That’s a very challenging area of law and where we’re really looking to see what has been successful in other jurisdictions and at the state level,” he said.

Another thorny challenge: how any potential disclosure requirement for AI-generated content would be enforced. That task would likely fall to The City’s Ethics Commission, but inadequate funding has already hamstrung the agency’s ability to enforce existing ethics rules, Preston said, leading to delays in the audits of campaign accounts.

Mayor London Breed — who last year proposed controversial cuts to the Commission’s budget that the Board of Supervisors later reversed — did not respond to a request for comment.

Beyond San Francisco, Preston is also hoping to learn how action at the state and federal levels might help address local AI election concerns. So far, Congress has not passed any law that blocks the creation or sharing of deepfakes.

However, several states have taken action, including California. Already this year, state lawmakers have introduced a series of new bills aiming to rein in election-related deepfakes, in part by making them easier to identify.

Those measures would build on landmark AI regulations California passed in 2019 banning deceptive deepfakes targeting political candidates. However, while that law might have been groundbreaking for its time, supporters of AI regulation say that it needs to be strengthened.

Despite the flurry of activity in Sacramento, Preston fears that reform will not come soon enough to affect the 2024 election cycle.


“It would make more sense if the state and the federal government were ahead of the curve here and all the conduct we’re talking about was already illegal and/or required to be fully disclosed,” Preston said. “But that’s not the situation, so that’s why we’re looking at what are areas where The City might have to intervene as well.”

AI experts agree that local efforts to address deepfakes will be critical.

“In some ways, local contests are more susceptible to these risks because there are fewer watchdogs patrolling it, and there are often far fewer resources involved in correcting the record,” said Josh Lawson, who leads the AI Elections Initiative at the Aspen Institute, a global nonprofit focused on public affairs.

Lawson and others who have studied AI and elections are warning of several possible scenarios that could play out this year in local races. For example, they say that AI could be used to put misinformation into the mouths of local election administrators by generating videos in which officials provide the wrong date for the election, or perhaps announce that ballots have been lost.

AI could also be used to microtarget individual voters — for example, by leveraging the technology to quickly and easily translate misleading articles into a variety of languages.

Even AI-generated text articles can be a potent tool for misinformation: Experts point to a New York Times report documenting the recent appearance of several websites that appear to be local news outlets but are, in fact, part of an elaborate Russian-backed campaign that uses AI tools to generate news articles, some of which include deceptive claims, according to researchers and government officials.

“What used to require a heavy lift — even for really sophisticated, well-funded bad actors — is now close to costless,” said Lawson.

San Francisco’s large immigrant communities are especially vulnerable to AI misinformation, advocates warn.

“There’s basically no security guard,” said Jinxing Niu, the founding manager of Pita Oba, a Chinese-language online fact-checker launched in 2022 by Chinatown nonprofit Chinese for Affirmative Action.

Because the pool of journalists serving local Chinese speakers is far smaller than those working in English, misinformation that spreads in Chinese often goes uncorrected, Niu said.

As an example, Niu points to one deepfake video falsely showing Joe Biden making transphobic remarks. The video has been corrected by fact-checkers working in English, but another version with Chinese subtitles is still circulating on Chinese-language websites without correction, Niu said.

Election misinformation is a longstanding problem in the Chinese-language community, Niu said, but she worries that the proliferation of deepfakes — notoriously difficult to identify — will make her team’s fact-checking work even harder.

With AI risks changing so quickly, Preston says he would like to see The City take a more active role in educating voters about deepfakes and how they might be used during this election cycle.

Advocates for AI safeguards agree voter education will be crucial.

“People will have to be informed voters not just as to who they want to support, but what they can believe,” said Drew Liebert, who directs The California Initiative for Technology and Democracy, an advocacy group that co-sponsored three state bills targeting AI misinformation.

“They’re going to have to be fact checkers themselves,” he said.
