Google’s problem with non-consensual explicit images keeps getting worse

In early 2022, two Google policy employees met with three women who were victims of a scam that resulted in explicit videos of them circulating online — including in Google search results. These women were among hundreds of young adults who responded to ads seeking swimsuit models but were forced to star in sex videos distributed by the website GirlsDoPorn. The site shut down in 2020, and a producer, a bookkeeper, and a cameraman later pleaded guilty to sex trafficking, but the videos continued to surface in Google search results until the women requested their removal.

Joined by a lawyer and a security expert, the women presented a number of ideas for how Google could better hide criminal and abusive clips, according to five people who attended the virtual meeting or were briefed on it. They wanted Google Search to ban websites devoted to GirlsDoPorn and its watermarked videos. They also suggested that Google borrow the 25-terabyte hard drive on which the women's cybersecurity consultant, Charles DeBarber, had saved every episode of GirlsDoPorn, take a mathematical fingerprint, or "hash," of each clip, and use those fingerprints to prevent the clips from ever showing up again in search results.
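To make the hard-drive idea concrete: in its simplest form, hash-based blocking means fingerprinting every known abusive clip once, then checking newly indexed content against that set. The sketch below is a minimal illustration in Python, not Google's actual pipeline; the "archive" directory is a hypothetical stand-in for a collection like DeBarber's, and the exact-match SHA-256 fingerprint used here would miss re-encoded or cropped copies.

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return a SHA-256 hex digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Build the blocklist once from an archive of known abusive clips
# ("archive" is a hypothetical stand-in for the hard drive described above).
blocklist = {fingerprint(p) for p in Path("archive").rglob("*.mp4")}

def should_suppress(candidate: Path) -> bool:
    """True if a newly crawled clip exactly matches a known fingerprint."""
    return fingerprint(candidate) in blocklist
```

A production system would likely swap the cryptographic hash for a perceptual one, which tolerates re-encoding and minor edits; that robustness to near-duplicates is the idea behind hash-sharing tools like StopNCII, discussed below.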

Two Google employees who attended the meeting hoped to use what they learned to get more resources from higher-ups. But victim advocate Brian Holm felt skeptical. He said the policy team was in a “difficult position” and “didn’t have the authority to make changes within Google.”

His gut reaction was correct. Two years later, none of the ideas raised at the meeting have been implemented, and the videos still show up in search results.

Wired has spoken to five former Google employees and 10 victim advocates who have been in dialogue with the company. All say they appreciate that, thanks to recent changes by Google, victims of image-based sexual abuse, like the women targeted by GirlsDoPorn, can more easily and successfully remove unwanted search results. But they are frustrated that the search giant's management has not approved proposals like the hard drive idea, which they believe would fully restore and protect the privacy of millions of victims worldwide, the vast majority of whom are women.

Sources have described previously unpublished internal deliberations, including Google's reasoning for not using StopNCII, an industry tool that shares information about non-consensual intimate images (NCII), and the company's failure to demand that porn websites verify consent in order to qualify for search traffic. Google's own research team has published steps tech companies can take against NCII, including using StopNCII.

Sources believe such efforts will better curb the problem, which is growing, partly through broader access to AI tools that create explicit deepfakes, including deepfakes of GirlsDoPorn survivors. Total reports to the UK's revenge porn hotline more than doubled to nearly 19,000 last year, with the number of cases involving synthetic content also rising. Half of the more than 2,000 Britons polled in a recent survey were worried about being victimized by deepfakes. In May, the White House urged lawmakers and the industry to take swift action to stop NCII as a whole. In June, Google joined seven other companies and nine organizations in announcing a working group to coordinate responses.

Right now, victims can seek prosecution of abusers or bring legal claims against the websites hosting the content, but neither avenue is guaranteed to succeed, and both can be costly in legal fees. Getting results removed from Google may be the most practical strategy, and it accomplishes the ultimate goal: keeping the material out of sight of friends, hiring managers, potential landlords, or dates, nearly all of whom turn to Google to look people up.

A Google spokesperson, who requested anonymity to avoid harassment from the perpetrators, declined to comment on the call with the GirlsDoPorn victims. She says combating what the company calls non-consensual explicit images (NCEI) remains a priority, and Google’s actions go beyond what is legally required. “Over the past several years, we’ve invested deeply in industry-leading policies and protections to help protect people impacted by this harmful content,” she says. “Google teams are constantly working hard to strengthen our security measures and thoughtfully address emerging challenges to better protect people.”
