Women in AI: Anika Collier Navaroli is working to shift the power imbalance
To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution.

Anika Collier Navaroli is a senior fellow at the Tow Center for Digital Journalism at Columbia University and a Technology Public Voices Fellow with the OpEd Project, held in collaboration with the MacArthur Foundation.

She is known for her research and advocacy work within technology. Previously, she worked as a race and technology practitioner fellow at the Stanford Center on Philanthropy and Civil Society. Before this, she led Trust & Safety at Twitch and Twitter. Navaroli is perhaps best known for her congressional testimony about Twitter, where she spoke about the ignored warnings of impending violence on social media that preceded the January 6 Capitol attack.

Briefly, how did you get your start in AI? What attracted you to the field? 

About 20 years ago, I was working as a copy clerk in the newsroom of my hometown paper during the summer when it went digital. Back then, I was an undergrad studying journalism. Social media sites like Facebook were sweeping over my campus, and I became obsessed with trying to understand how laws built on the printing press would evolve with emerging technologies. That curiosity led me through law school, where I migrated to Twitter, studied media law and policy, and watched the Arab Spring and Occupy Wall Street movements play out. I put it all together and wrote my master’s thesis about how new technology was transforming the way information flowed and how society exercised freedom of expression.

I worked at a couple of law firms after graduation and then found my way to the Data & Society Research Institute, leading the new think tank’s research on what was then called “big data,” civil rights, and fairness. My work there looked at how early AI systems like facial recognition software, predictive policing tools, and criminal justice risk assessment algorithms were replicating bias and creating unintended consequences that impacted marginalized communities. I then went on to work at Color of Change and lead the first civil rights audit of a tech company, develop the organization’s playbook for tech accountability campaigns, and advocate for tech policy changes to governments and regulators. From there, I became a senior policy official inside Trust & Safety teams at Twitter and Twitch.

What work are you most proud of in the AI field?

I am most proud of my work inside of technology companies using policy to practically shift the balance of power and correct bias within culture and knowledge-producing algorithmic systems. At Twitter, I ran a couple of campaigns to verify individuals who had shockingly been excluded from the exclusive verification process, including Black women, people of color, and queer folks. This also included leading AI scholars like Safiya Noble, Alondra Nelson, Timnit Gebru, and Meredith Broussard. This was in 2020 when Twitter was still Twitter. Back then, verification meant that your name and content became a part of Twitter’s core algorithm because tweets from verified accounts were injected into recommendations, search results, home timelines, and contributed toward the creation of trends. So working to verify new people with different perspectives on AI fundamentally shifted whose voices were given authority as thought leaders and elevated new ideas into the public conversation during some really critical moments.

I’m also very proud of the research I conducted at Stanford that came together as Black in Moderation. When I was working inside of tech companies, I also noticed that no one was really writing or talking about the experiences that I was having every day as a Black person working in Trust & Safety. So when I left the industry and went back into academia, I decided to speak with Black tech workers and bring to light their stories. The research ended up being the first of its kind and has spurred so many new and important conversations about the experiences of tech employees with marginalized identities. 

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?  

As a Black queer woman, navigating male-dominated spaces and spaces where I am othered has been a part of my entire life journey. Within tech and AI, I think the most challenging aspect has been what I call in my research “compelled identity labor.” I coined the term to describe frequent situations where employees with marginalized identities are treated as the voices and/or representatives of entire communities who share their identities. 

Because of the high stakes that come with developing new technology like AI, that labor can sometimes feel almost impossible to escape. I had to learn to set very specific boundaries for myself about what issues I was willing to engage with and when. 

What are some of the most pressing issues facing AI as it evolves?

According to investigative reporting, current generative AI models have gobbled up all the data on the internet and will soon run out of available data to devour. So the largest AI companies in the world are turning to synthetic data, or information generated by AI itself, rather than humans, to continue to train their systems. 

The idea took me down a rabbit hole. So, I recently wrote an op-ed arguing that this use of synthetic data as training data is one of the most pressing ethical issues facing new AI development. Generative AI systems have already shown that, based on their original training data, their outputs replicate bias and create false information. So the pathway of training new systems with synthetic data would mean constantly feeding biased and inaccurate outputs back into the system as new training data. I described this as potentially devolving into a feedback loop to hell.

Since I wrote the piece, Mark Zuckerberg has touted that Meta’s updated Llama 3 chatbot was partially powered by synthetic data and was the “most intelligent” generative AI product on the market.

What are some issues AI users should be aware of?

AI is such an omnipresent part of our present lives, from spellcheck and social media feeds to chatbots and image generators. In many ways, society has become the guinea pig for the experiments of this new, untested technology. But AI users shouldn’t feel powerless.  

I’ve been arguing that technology advocates should come together and organize AI users to call for a People Pause on AI. I think that the Writers Guild of America has shown that with organization, collective action, and patient resolve, people can come together to create meaningful boundaries for the use of AI technologies. I also believe that if we pause now to fix the mistakes of the past and create new ethical guidelines and regulation, AI doesn’t have to become an existential threat to our futures. 

What is the best way to responsibly build AI?

My experience working inside of tech companies showed me how much it matters who is in the room writing policies, presenting arguments, and making decisions. My pathway also showed me that I developed the skills I needed to succeed within the technology industry by starting in journalism school. I’m now back working at Columbia Journalism School and I am interested in training up the next generation of people who will do the work of technology accountability and responsibly developing AI both inside of tech companies and as external watchdogs. 

I think [journalism] school gives people such unique training in interrogating information, seeking truth, considering multiple viewpoints, creating logical arguments, and distilling facts and reality from opinion and misinformation. I believe that’s a solid foundation for the people who will be responsible for writing the rules for what the next iterations of AI can and cannot do. And I’m looking forward to creating a more paved pathway for those who come next. 

I also believe that in addition to skilled Trust & Safety workers, the AI industry needs external regulation. In the U.S., I argue that this should come in the form of a new agency to regulate American technology companies with the power to establish and enforce baseline safety and privacy standards. I’d also like to continue working to connect current and future regulators with former tech workers who can help those in power ask the right questions and create nuanced, practical solutions.
