Women in AI: Anika Collier Navaroli is working to change the power imbalance


To give AI-focused female academics and others their deserved and long-awaited spotlight, TechCrunch is launching a series of interviews focused on notable women who have contributed to the AI revolution.

Anika Collier Navaroli is a Senior Fellow at the Tow Center for Digital Journalism at Columbia University and a Technology Public Voices Fellow with the OpEd Project, held in collaboration with the MacArthur Foundation.

She is known for her research and advocacy work in technology. Previously, she served as a Race and Technology Practitioner Fellow at the Stanford Center on Philanthropy and Civil Society. Prior to that, she worked in trust and safety at Twitch and Twitter. Navaroli is perhaps best known for her congressional testimony, in which she described how Twitter ignored internal warnings of imminent violence on social media in the lead-up to the January 6 Capitol attack.

Briefly, how did you get your start in AI? What attracted you to the field?

Nearly 20 years ago, I was working as a copy clerk in the newsroom of my hometown newspaper the summer it went digital. At the time, I was studying journalism. Social media sites like Facebook were taking over my campus, and I became obsessed with trying to understand how laws written for the printing press would evolve with emerging technologies. That curiosity led me to law school, where I joined Twitter, studied media law and policy, and watched the Arab Spring and Occupy Wall Street movements unfold. I put it all together and wrote my master’s thesis about how new technology was changing the way information flowed and how society exercised freedom of expression.

I worked at a few law firms after graduation and then made my way to the Data & Society Research Institute, where I led the new think tank’s research on what was then called “big data,” civil rights, and fairness. My work there looked at how early AI systems like facial recognition software, predictive policing tools, and criminal justice risk-assessment algorithms were replicating bias and creating unintended consequences that disproportionately affected marginalized communities. I then worked at Color of Change, where I led the first civil rights audit of a tech company, developed the organization’s playbook for tech accountability campaigns, and advocated for tech policy changes to governments and regulators. From there, I became a senior policy officer inside the trust and safety teams at Twitter and Twitch.

What work are you most proud of in the AI field?

I am most proud of my work inside technology companies using policy to practically shift the balance of power and correct bias within culture and knowledge-producing algorithmic systems. At Twitter, I led a few campaigns to verify individuals who had previously been excluded from the exclusive verification process, including Black women, people of color, and queer people. This also included leading AI scholars like Safiya Noble, Alondra Nelson, Timnit Gebru, and Meredith Broussard. This was in 2020, when Twitter was still Twitter. At the time, verification meant that your name and content became part of Twitter’s core algorithm: tweets from verified accounts were inserted into recommendations, search results, and home timelines, and contributed to the creation of trends. So working to verify new people with different perspectives on AI fundamentally changed the narrative, empowering their voices as thought leaders and elevating new ideas into the public conversation during some really important moments.

I am also very proud of the research I did at Stanford, which was published as Black in Moderation. When I was working inside tech companies, I noticed that no one was really writing or talking about the experiences I was having every day as a Black woman working in trust and safety. So when I left the industry and returned to academia, I decided to talk to Black tech workers and uncover their stories. This research became the first of its kind and inspired many new and important conversations about the experiences of tech workers with marginalized identities.

How do you cope with the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

As a Black queer woman, navigating male-dominated spaces and spaces where I am isolated has been a part of my entire life’s journey. Within tech and AI, I think the most challenging aspect is what I have called “forced identity labor” in my research. I coined the term to describe situations where employees with marginalized identities are often treated as the voice and/or representative of entire communities that share their identities.

Because of the high stakes that come with developing new technology like AI, avoiding that labor sometimes feels nearly impossible. I had to learn to set very specific boundaries for myself about which issues I was willing to work on and when.

What are some of the most pressing issues facing AI as it develops?

According to investigative reporting, current generative AI models have gobbled up all the available data on the internet and will soon run out of training data. That’s why the world’s biggest AI companies are turning to synthetic data, or information generated by AI itself rather than by humans, to train their systems.

This idea perplexed me. So, I recently wrote an article arguing why I think the use of synthetic data as training data is one of the most important ethical issues facing new AI development. Generative AI systems have already shown that, depending on their original training data, their output tends to replicate bias and create incorrect information. So training new systems with synthetic data would mean constantly feeding biased and incorrect output back into the system as new training data. I described this as potentially turning into a feedback loop leading to hell.

Since I wrote this article, Mark Zuckerberg has raved that Meta’s updated Llama 3 chatbot is partly powered by synthetic data and is the “most intelligent” generative AI product on the market.

What issues should AI users be aware of?

AI is a ubiquitous part of our current lives, whether it’s spellcheck, social media feeds, chatbots, or image generators. In many ways, society has become the guinea pig for experiments with this new, unproven technology. But AI users shouldn’t feel powerless.

I have been arguing that technology advocates should come together and organize AI users to call for a people’s pause on AI. I think the Writers Guild of America has shown that with organization, collective action, and patient resolve, people can come together to create meaningful limits on the use of AI technologies. I also believe that if we pause now to correct the mistakes of the past and create new ethical guidelines and regulations, AI doesn’t have to become an existential threat to our future.

What’s the best way to build AI responsibly?

My experience working inside tech companies showed me how much it matters who is in the room writing policies, presenting arguments, and making decisions. My path also showed me that I developed the skills I needed to succeed in the technology industry by starting in journalism school. I am now working at Columbia Journalism School and I am interested in training the next generation of people who will do technology accountability work and develop AI responsibly, both inside tech companies and as external watchdogs.

I think (journalism) school gives people unique training in how to examine information, seek out the truth, consider multiple viewpoints, make logical arguments, and separate facts and reality from opinions and misinformation. I believe it’s a solid foundation for those who will be responsible for writing the rules for the next iterations of AI. And I’m hoping to create a better-paved path for those who come next.

I also believe that in addition to skilled trust and safety staff, the AI industry needs external regulation. In the US, I argue this should come in the form of a new agency to regulate American technology companies, with the power to set and enforce baseline safety and privacy standards. I would also like to continue working to connect current and future regulators with former tech workers who can help those in power ask the right questions and create new, nuanced, and practical solutions.
