Episodes
-
Many of us working at tech companies are having to make moral and ethical decisions when it comes to where we work, what we work on, and what we speak up about. In this episode, we have a conversation with Nadah Feteih around how tech workers (specifically folks working in integrity and trust & safety teams) can speak up about ethical issues at their workplace. We discuss activism from within the industry, compelled identity labor, balancing speaking up and staying silent, thinking ethically in tech, and the limitations and harms of technology.
Takeaways
Balancing speaking up and staying silent can be difficult for tech workers, as some topics may be divisive or risky to address.
Compelled identity labor is a challenge faced by underrepresented and marginalized tech workers, who may feel pressure to speak on behalf of their communities.
Thinking ethically in tech is crucial, and there is a growing need for resources and education on tech ethics.
Tech employees have the power to take a stand and advocate for change within their companies.
Engaging on social issues in the workplace requires a balance between different approaches, including staying within the system and speaking up from the outside.
Listening to moderators and incorporating local perspectives is crucial for creating inclusive and equitable tech platforms.
Disclaimer: The views stated in this episode are not affiliated with any organization and only represent the views of the individuals.
Mentioned in this episode:
Breaking the Silence: Marginalized Tech Workers’ Experiences and Community Solidarity
Black in Moderation
Tech Worker Handbook
No Tech For Apartheid
Tech Workers Coalition
Credits
Today’s episode was produced, edited, and hosted by Alice Hunsberger.
You can reach me and Talha Baig, the other half of the Trust in Tech team, at [email protected].
Our music is by Zhao Shen. Special thanks to all the staff at the Integrity Institute.
-
You asked, we answered! It’s a rough time out there in the tech industry as so many people in Trust & Safety are job searching or thinking about their career and what it all means.
In this episode, Alice Hunsberger shares her recent job search experience and gives advice on job searching and career development in the Trust and Safety industry. Listener questions that are answered include:
How do I figure out what to do next in my career?
What helps a resume or cover letter stand out?
What are good interviewing tips?
What advice do leaders wish they had when they were first starting out?
Do T&S leaders really believe we will have an internet free of (or at least with drastically reduced) harm?
Resources and links mentioned in this episode:
Personal Safety for Integrity workers
Hiring and growing trust & safety teams at small companies
Katie Harbath’s career advice posts
Alice Links
Disclaimer: The views stated in this episode are not affiliated with any organization and only represent the views of the individuals.
Today’s episode was produced, edited, and hosted by Alice Hunsberger.
You can reach me and Talha Baig, the other half of the Trust in Tech team, at [email protected].
Our music is by Zhao Shen. Special thanks to all the staff at the Integrity Institute.
-
Integrity workers are missing a shared resource where they can easily point to a taxonomy of harms and specific interventions to mitigate those harms. Enter Grady Ward, a visiting fellow of the Integrity Institute, who discusses how he is creating a Wikipedia for and by integrity workers.
In typical Trust in Tech fashion, we also discuss the tensions and synergies between integrity and privacy, and if you stick around to the end, you can hear about some musings on the interplay of art and nature.
Links:
The Wikipedia of Trust and Safety
Grady’s personal website
Disclaimer: The views stated in this episode are not affiliated with any organization and only represent the views of the individuals.
Today’s episode was produced, edited, and hosted by Talha Baig.
You can reach me and Alice Hunsberger, the other half of the Trust in Tech team, at [email protected].
Our music is by Zhao Shen. Special thanks to all the staff at the Integrity Institute.
-
With the Senate Child Safety Hearing on the horizon, we sit down with Vaishnavi, former head of Youth Policy at Meta, and chat about the specific problems and current policy landscape regarding child safety.
Vaishnavi now works as an advisor to tech companies and policymakers on youth policy issues!
Some of the questions answered on today’s show include:
- What are the different buckets of problems for Child Safety?
- How can we think about age-appropriate design?
- What are common misconceptions in the tension between privacy and child safety?
- What is the current regulation for child safety?
- What does she expect to hear at the judicial hearing?
Disclaimer: The views stated in this episode are not affiliated with any organization and only represent the views of the individuals.
-
Listen to this episode to learn how to stay safe as an Integrity worker.
Links:
- Tall Poppy (available through employers only at the moment)
- DeleteMe
- PEN America Online Harassment Field Manual
- Assessing Online Threats
- Want a security starter pack? | Surveillance Self-Defense
- Yoel Roth on being targeted: Trump Attacked Me. Then Musk Did. It Wasn't an Accident.
- Crash override network: What To Do If Your Employee Is Being Targeted By Online Abuse
Practical tips:
If you’re a manager
- Train your team on what credible threats look like
- Make sure you have a plan in place for dealing with threats to your office or employees
- Allow pseudonyms; don’t require public photos
- Invest in services that can help your employees scrub public data.
Individuals
- Keep your personal social media private/ friends-only.
- Use different photos on LinkedIn than your personal social media.
- Consider hiding your location online, not using your full name, etc.
Credits:
Today’s episode was produced, edited, and hosted by Alice Hunsberger.
You can reach me and Talha Baig, the other half of the Trust in Tech team, at [email protected].
Our music is by Zhao Shen. Special thanks to all the staff at the Integrity Institute.
-
Caroline Sinders is an ML design researcher, online harassment expert, and artist. We chat about common dark tech patterns, how to prevent them in your company, a novel way to think about your career, and how photography relates to generative AI.
Sinders has worked with Facebook, Amnesty International, Intel, IBM Watson, and the Wikimedia Foundation, among others.
We answer the following questions on today’s show:
1. What are dark tech patterns, and how can we prevent them?
2. How can we navigate multi-stakeholder groups to avoid baking in these dark patterns?
3. What is a public person?
4. What is a framework for approaching data visualization?
5. How is photography an analogue to generative AI?
This episode goes in lots of directions to cover Caroline’s varied interests - hope you enjoy it!
-
An Introduction to Generative AI
In this episode, Alice Hunsberger talks with Numa Dhamani and Maggie Engler, who recently co-authored a book about the power and limitations of AI tools and their impact on society, the economy, and the law. In this conversation, they dive deep into some of the topics in the book, and discuss what writing a book was like, as well as the process of getting to publication.
You can preorder the book here, and follow Maggie and Numa on LinkedIn.
-
It seems every day we are pulled in different directions on social media, yet what we are feeling seldom resonates. Enter David Jay, a master at building movements, including leading movement-building for the Center for Humane Technology. In this episode, we learn precisely how to build a movement, and why communities are perpetually underfunded.
David Jay is an advisor to the Integrity Institute and played a pivotal role in the early days of the Institute. He is also the founder of Relationality Labs, which aims to make the impact of relational organizing visible so that organizers can be resourced for the strategic value they create. His diverse range of past experiences includes founding asexuality.org and serving as chief mobilization officer for the Center for Humane Technology.
Here are some of the questions we answer on today’s show:
1. How do you create, scale, and align relationships to create a movement?
2. How do you structure stories so they resonate?
3. How do you stay on the lookout for new movements?
4. How do you identify the leaders of the future?
5. Why is David Jay excited by the Integrity Institute and the future of integrity workers?
6. Why don’t community-based initiatives get funded at the same rate as non-community-based initiatives?
Check out David Jay’s Relationality Lab!
Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode. They do not represent any other entity’s views.
-
Elections matter, and history has demonstrated online platforms will find themselves grappling with these challenges whether they want to be or not. The two key questions facing online platforms now, as they stare down the tsunami of global elections heading their way, are: Have they initiated an internal elections integrity program? And if so, how do they ensure the best possible preparation to safeguard democracies globally?
The Integrity Institute launched an elections integrity best practices guide on “Defining and Achieving Success in Elections Integrity.” This latest guide extends the first and provides companies – large or small, established or new on the block – concrete details as they fully implement an elections integrity program.
Today on the podcast, we talk to four contributors about this guide: Glenn Ellingson, Diane Chang, Swapneel Mehta, and Eric Davis.
Also check out our first episode on elections!
-
Alice Hunsberger talks to Heather Grunkemeier again, this time covering Heather’s solution for dealing with creeps at Rover from a policy and operational lens, measuring trust, and what it’s been like for her to strike out on her own as a consultant.
Also check out our first episode with Heather, How to Find Your Place in Trust & Safety: A Story of Career Pivoting.
-
Alice Hunsberger talks to Heather Grunkemeier (former Program Owner of Trust & Safety at Rover, and current owner of consultancy firm Twinkle LLC) and discusses how Heather finally broke into the field of Trust & Safety after years of trying, what it was actually like for her, and what her advice is for other people in the midst of career pivots. We also touch on mental health, identity, self worth, and how working in Trust & Safety has unique challenges (and rewards).
If you liked our Burnout Episode, you may enjoy this one too. (And if you haven’t listened to it yet or read our Burnout resource guide, please check it out).
Credits
This episode of Trust in Tech was hosted, edited, and produced by Alice Hunsberger.
Music by Zhao Shen.
Special thanks to the staff and members of the Integrity Institute for their continued support.
-
On today's episode, our host Talha Baig is joined by guest James Alexander to discuss all things AI liability. The episode begins with a discussion on liability legislation, as well as some of the unique situations that copyright law has created. Later in the episode, the conversation shifts to James's experience as the first member of Wikipedia's Trust and Safety team.
Here are some of the questions we answer in today’s episode.
Who is liable for AI-generated content?
How does Section 230 affect AI?
Why can’t AI-generated works be copyrighted?
How will negotiations play out between platforms and the companies building AI models?
Why do the Spider-Man multiverse movies exist?
What did it look like to be the first trust and safety worker at Wikipedia?
What does fact-checking look like at Wikipedia?
-
On today's episode, our host Talha Baig is joined by guest David Harris, who has been writing about Llama since the initial leak. The two of them begin by discussing all things Llama, from the leak to the open-sourcing of Llama 2. Later in the episode, they dive deeper into policy ideas seeking to improve AI safety and ethics.
Show Links:
David’s Guardian Article
CNN Article Quoting David
Llama 2 release Article
-
Assaf Kipnis has spent the last decade fighting e-crime and scams. Today, he's on the podcast with fellow Integrity Institute member Alice Hunsberger to tell us about Pig Butchering Scams and Coordinated Inauthentic Behavior, and how they are more sophisticated scams than you might think. If you take away one thing from this, it's this: don't follow investing advice from random people you meet online!
Show Links:
Pig Butchering Scam Victim Journey and Analysis
The Anatomy of a Pig Butchering Scam (Fraudology Podcast with Karisse Hendrick)
Pig Butchering Scams Are Evolving Fast | WIRED
Example of educational guide for users: Grindr Scam awareness guide
What's the deal with all those weird wrong-number texts?
I’ve been getting tons of ‘wrong number’ spam texts, and I don’t hate it? - The Verge
Facebook shuts down ‘the BL’
Removing Coordinated Inauthentic Behavior From Georgia, Vietnam and the US
A Former Fox News Executive Divides Americans Using Russian Tactics
Meta October 2020 Inauthentic Behavior Report
-
What can companies do to support the LGBTQ+ community during this pride season, beyond slapping a rainbow logo on everything? Integrity Institute members Alex Leavitt and Alice Hunsberger discuss the state of LGBTQ+ safety online and off, how the queer community is unique and faces disproportionate risks, and what are some concrete actions that platforms should be taking.
Show Links:
Human Rights Campaign declares LGBTQ state of emergency in the US
Social Media Safety Index
Digital Civility Index & Our Challenge | Microsoft Online Safety
Best Practices for Gender-Inclusive Content Moderation — Grindr Blog
Tinder - travel alert
Assessing and Mitigating Risk for the Global Grindr Community
Strengthening our policies to promote safety, security, and well-being on TikTok
Meta's LGBTQ+ Safety center
Data collection for queer minorities
-
The acquisition of Twitter broke, well, Twitter. Around 90% of the workforce left the company, leaving shells of former teams to handle the same responsibilities.
Today, we welcome two guests from Twitter’s civic integrity team. New guest Rebecca Thein was a senior engineering technical program manager for Twitter’s Information Integrity team. She is also a Digital Sherlock for the Atlantic Council’s Digital Forensic Research Lab (DFRLab).
Theodora Skeadas is a returning guest from our previous episode! She managed public policy at Twitter and was recently elected as an Elected Director of the Harvard Alumni Association.
We answer the following questions on today’s episode:
How much was the civic integrity team hurt by the acquisition?
What are candidate labels?
How did Twitter prioritize its elections?
What did the org structure of Twitter look like pre and post acquisition?
And finally, what is this famous Halloween party that all the ex-Twitter folks are talking about?
-
This episode is a bit different – instead of getting deep into the weeds with a guest, we’re starting from the beginning. Our guest today, Pearlé Nwaezeigwe, aka the Yoncé of Tech Policy, chats with me about Tech Policy 101.
I get a lot of questions from people who are fascinated by Trust & Safety and Integrity work in tech, and they want to know – what does it look like? How can I do it too? What kinds of jobs are out there? So, I thought we’d tackle some of those questions here on the podcast.
Today’s episode covers the exciting topics of nipples, Lizzo, weed, and much more. And as any of us who have worked in policy would tell you, “it’s complicated.”
Let me know what you think (if you want to see more of these, or less) – this is an experiment. (You can reach me here on LinkedIn). — Alice Hunsberger
Links:
Pearlé’s newsletter
Lizzo talks about censorship and body shaming
Oversight board on nipples and nudity
Grindr’s Best Practices for Gender-Inclusive Content Moderation
TSPA curriculum: creating and enforcing policy
All Tech is Human - Tech Policy Hub
Credits:
Hosted and edited by Alice Hunsberger
Produced by Talha Baig
Music by Zhao Shen
Special thanks to Rachel, Sean, Cass, and Sahar for their continued support
-
It might be May 2023, but it’s never too early to start worrying about elections! 2024 is slated to be the biggest year of elections in platform history. In this episode Katie Harbath and Glenn Ellingson join the show to prepare you for the storm of elections coming in 2024.
You may recognize Katie as the inaugural guest of Trust in Tech. Katie is an Integrity Institute Fellow and a global leader at the intersection of elections, democracy, and technology. She is Chief Executive of Anchor Change, where she helps clients think through tech policy issues. Before that, she worked at Meta for 10 years, where she built and led a 30-person team managing elections.
Glenn is an Integrity Institute member who was previously an engineering manager for Meta’s civic integrity team and, before that, Head of Product Engineering for Hustle, a company that helped progressive political organizations and other nonprofit and for-profit groups forge personal relationships at scale.
Glenn and Katie led the development of the Elections Best Practices deck the Integrity Institute just shared on their website, which we discuss in the episode.
We also answer some of the following questions:
How do you prioritize different elections across the world?
What principles should you adhere to when working on election integrity?
What are the challenges of dealing with political harassment?
How do you map out the landscape of election integrity work?
What was Cambridge Analytica, and did the scandal actually make platforms less transparent?
And how can your company learn best practices and responsibly deal with elections?
Links:
Election Integrity best practices deck
Anchor Change
A Brief History of Tech and Elections: A 26-Year Journey
Demystifying the Cambridge Analytica Scandal Five Years Later
Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode. They do not represent Meta’s or any other entity’s views.
-
We live in a world where platforms influence the digital and real lives of billions of people across the world, perhaps with more influence than many governments. However, the decision-making processes around these platforms are generally opaque and obscure.
This is why today’s guest, Integrity Institute Fellow Brandon Silverman, transitioned to policy advocacy for platform transparency, data sharing, and an open internet, helping regulators, lawmakers, and advocacy groups think through the best ways to set up online transparency regimes.
Brandon is the former CEO and co-founder of Crowdtangle, a social analytics tool used by tens of thousands of newsrooms, academics, researchers, fact-checkers, civil society organizations, and more to help monitor public content in real time.
Some questions we answer on today’s episode:
What tradeoffs exist between free speech and transparency?
How did Crowdtangle partner with civic actors across the world?
What are Brandon’s thoughts on the leaking of the Twitter algorithm?
What principles did Crowdtangle use when sharing access with governments?
What metric did the Crowdtangle team optimize for?
What does Brandon wish he could have done differently at Meta?
And, of course, how can you, the listener, help in the fight for platform transparency?
Links:
The United States’ Approach to 'Platform' Regulation by Eric Goldman
State Abuse of Transparency Laws and How to Stop It by Daphne Keller
The Impression of Influence: Legislator Communication, Representation, and Democratic Accountability by Solomon Messing
Garbage Day by Ryan Broderick
As a reminder, the views stated in this episode are not affiliated with any organization and only represent the views of the individuals. We hope you enjoy the show.