Episodes
-
On Monday, March 18, the US Supreme Court heard oral argument in Murthy v. Missouri. In this episode, Tech Policy Press reporting fellow Dean Jackson is joined by two experts, St. John's University School of Law associate professor Kate Klonick and UNC Center on Technology Policy director Matt Perault, to digest the oral argument, what it tells us about which way the Court might go, and what more should be done to create good policy on government interactions with social media platforms when it comes to content moderation and speech.
-
On March 18, the US Supreme Court will hear oral argument in Murthy v. Missouri, a case that asks the justices to consider whether the government coerced or “significantly encouraged” social media executives to remove disfavored speech in violation of the First Amendment during the COVID-19 pandemic.
Tech Policy Press reporting fellow Dean Jackson speaks to experts including the Knight First Amendment Institute at Columbia University's Mayze Teitler and Jennifer Jones, and the Tech Justice Law Project's Meetali Jain.
-
At INFORMED 2024, a conference hosted by the Knight Foundation in January, one panel focused on the subject of information integrity, race, and US elections. The conversation was compelling, and the panelists agreed to reprise it for this podcast. So today we're turning over the mic to Spencer Overton, a Professor of Law at the George Washington University, and the director of the GW Law School's Multiracial Democracy Project.
He's joined by three other experts:
Brandi Collins-Dexter, a media and technology fellow at Harvard's Shorenstein Center, a fellow at the National Center on Race and Digital Justice, and the author of the recent book, Black Skinhead: Reflections on Blackness and Our Political Future. Brandi is developing a podcast of her own with MediaJustice that explores 1980s era media, racialized conspiracism, and politics in Chicago;
Dr. Danielle Brown, a social movement and media researcher who holds the 1855 Community and Urban Journalism professorship at Michigan State and is the founding director of the LIFT project, which is focused on mapping, networking, and resourcing trusted messengers to dismantle mis- and disinformation narratives that circulate in and about Black communities; and
Kathryn Peters, who was the inaugural executive director of the University of North Carolina's Center for Information, Technology, and Public Life and was the co-founder of Democracy Works, where she built programs to help more Americans navigate how to vote. These days, she's working on a variety of projects to empower voters and address election mis- and disinformation.
-
On Monday, Feb. 26, 2024, the US Supreme Court heard oral arguments for Moody v. NetChoice, LLC and NetChoice, LLC v. Paxton. The cases concern similar but distinct state laws in Florida and Texas that would restrict social media companies’ ability to moderate content on their platforms. Justin Hendrix speaks with Tech Policy Press staff writer Gabby Miller and contributing editor Ben Lennett about key highlights from the discussion.
-
This week, a public consultation period ended for a new Hong Kong national security law, known as Article 23. Article 23 ostensibly targets a wide array of crimes, including treason, theft of state secrets, espionage, sabotage, sedition, and "external interference" from foreign governments. The Hong Kong legislature, dominated by pro-Beijing lawmakers, is expected to approve it, even as its critics argue that the law criminalizes basic human rights, such as the freedom of expression, signaling a further erosion of the liberties once enjoyed by the residents of Hong Kong.
To learn more about what is happening in Hong Kong and what role tech firms and other outside voices could be doing to preserve freedoms for the people of Hong Kong, Justin Hendrix spoke to three experts who are following developments there closely:
Chung Ching Kwong, senior analyst at the Inter-Parliamentary Alliance on China;
Lokman Tsui, a fellow at the Citizen Lab at the University of Toronto; and
Michael Caster, the Asia Digital Program Manager with Article 19.
-
If you’ve been listening to this podcast for a while, you know we’ve spent countless hours together talking about the problems of mis- and disinformation, and what to do about them. And, we’ve tried to focus on the science, on empirical research that can inform efforts to design a better media and technology environment that helps rather than hurts democracy and social cohesion.
Today’s guests are Jon Bateman and Dean Jackson. The two have just produced a report for the Carnegie Endowment for International Peace that looks at what is known about a variety of interventions against disinformation, and provides evidence that should guide policy in governments and at technology platforms.
-
A new book out this week from Oxford University Press, titled simply Media and January 6th, assembles a varied collection of experts who aim to shed light on the interplay between the media and the bloody coup attempt that then-President Donald Trump led in an effort to hang on to power after he lost the 2020 election to Joe Biden. It examines why January 6th happened and the pivotal role the media played in it.
The book is structured to explore three essential inquiries: What is our interpretation of January 6, 2021? How should research evolve post-January 6, 2021? And what measures can be taken to avert a similar incident in the future? Justin Hendrix spoke to three of the book's four editors: Khadijah Costley White, Daniel Kreiss, and Shannon C. McGregor.
-
It's become trite to say there are a lot of elections taking place this year. But of course, technology is playing a role in them all.
At Tech Policy Press, we're lucky to have a group of seven fellows this year who are based on four continents. They are paying close attention to elections in the nations they know best. To learn more about the recent election in Pakistan, its chaotic aftermath, and the unique role of technology in events there, I spoke to one of our fellows last week: Ramsha Jahangir, a Pakistani journalist currently based in the Netherlands.
-
Today's guests are Jonathan Stray, a senior scientist at the Center for Human Compatible AI at the University of California Berkeley, and Ravi Iyer, managing director of the Neely Center at the University of Southern California's Marshall School. Both are keenly interested in what happens when platforms optimize for variables other than engagement, and whether they can in fact optimize for prosocial outcomes. With several coauthors, they recently published a paper based in large part on discussion at an 8-hour working group session featuring representatives from seven major content-ranking platforms and former employees of another major platform, as well as university and independent researchers. The authors say "there is much unrealized potential in using non-engagement signals. These signals can improve outcomes both for platforms and for society as a whole."
-
In May 2022, Alvaro Bedoya was sworn in as a Commissioner of the US Federal Trade Commission following his nomination by President Joe Biden and confirmation in the Senate. In this conversation, Commissioner Bedoya discusses a recent settlement over the commercial use of facial recognition technologies and what it should signal to other businesses, voice cloning and the growing problem of impersonations utilizing AI, and how he thinks about the future.
-
Multiple past episodes of this podcast have focused on the topic of AI governance. But today’s guest, Blair Attard-Frost, has put forward a set of ideas they term "AI countergovernance." These are alternative mechanisms for community-led and worker-led governance that serve as means for resisting or contesting power, particularly as it manifests in AI systems and the companies and governments that advance them.
-
On Wednesday, January 31st, the US Senate Judiciary Committee hosted a hearing titled "Big Tech and the Online Child Sexual Exploitation Crisis." The CEOs of Meta, TikTok, X, Discord, and Snap were called to the Capitol to answer questions from lawmakers on their efforts to protect children from sexual exploitation, drug trafficking, dangerous content, and other online harms. Gabby Miller reported on the hearing from New York, and Haajrah Gilani reported from Washington, D.C.
-
Last year, the World Privacy Forum, a nonprofit research organization, conducted an international review of AI governance tools. The organization analyzed various documents, frameworks, and technical material related to AI governance from around the world. Importantly, the review found that a significant percentage of the AI governance tools include faulty AI fixes that could ultimately undermine the fairness and explainability of AI systems.
Justin Hendrix talked to Kate Kaye, one of the report’s authors, about a range of issues it covers, from the involvement of large tech companies in shaping AI governance tools, to the role of organizations like the OECD in developing them, to the need to consult people and communities that are often overlooked when decisions are made about how to think about AI.
-
In October 2022, a group of researchers published a manifesto establishing a Coalition for Independent Technology Research.
“Society needs trustworthy, independent research to relieve the harms of digital technologies and advance the common good,” they wrote. “Research can help us understand ourselves more clearly, identify problems, hold power accountable, imagine the world we want, and test ideas for change. In a democracy, this knowledge comes from academics, journalists, civil society, and community scientists, among others. Because independent research on digital technologies is a powerful force for the common good, it also faces powerful opposition.”
In the months since that document was published, that opposition has grown. From investigations in Congress to lawsuits aimed at specific researchers, there is a backlash against those who study communications and media, particularly when the subjects of that research are those most interested in advancing false and misleading claims about issues including elections and public health.
Justin Hendrix, who is a member of the coalition, caught up with Brandi Geurkink, who was hired as the coalition's first Executive Director in December 2023, to discuss its priorities.
-
Today’s guest is Robert Weissman, president of the nonprofit consumer advocacy organization Public Citizen. He is the author of a letter addressed to the California Attorney General that raises significant concerns about OpenAI’s 501(c)(3) nonprofit status. The letter questions whether OpenAI has deviated from its nonprofit purposes, alleging that it may be acting under the control of its for-profit subsidiary, potentially violating its nonprofit mission. The letter raises broader issues about the future of AI and how it will be governed.
-
Today marks three months since the vicious Hamas attack and abduction of hostages that ignited the current war in Gaza. Just before the New Year, the Atlantic Council’s Digital Forensic Research Lab (DFRLab) published a report titled “Distortion by Design: How Social Media Platforms Shaped Our Initial Understanding of the Israel-Hamas Conflict.” This week, Justin Hendrix spoke to the report’s authors, Emerson T. Brooking, Layla Mashkoor, and Jacqueline Malaret, about their observations of the role that platforms operated by X, Meta, Telegram, and TikTok have played in shaping perceptions of the initial attack and the brutal ongoing Israeli siege of Gaza, which now continues into its fourth month.
“Evident across all platforms,” they write, “is the intertwined nature of content moderation and political expression—and the critical role that social media will play in preserving the historical record.”
-
In a report released December 20, 2023, the Stanford Internet Observatory said it had detected more than 1,000 instances of verified child sexual abuse imagery in a significant dataset utilized for training generative AI systems such as Stable Diffusion 1.5.
This troubling discovery builds on prior research into the “dubious curation” of large-scale datasets used to train AI systems, and raises concerns that such content may have contributed to the ability of AI image generators to produce realistic counterfeit images of child sexual exploitation, in addition to other harmful and biased material.
Justin Hendrix spoke with the report’s author, Stanford Internet Observatory Chief Technologist David Thiel.
-
If you’ve listened to some of the dialogue in hearings on Capitol Hill about how to regulate AI, you’ve heard various folks suggest the need for a regulatory agency to govern, in particular, general purpose AI systems that can be deployed across a wide range of applications. One existing agency is often mentioned as a potential model: the Food and Drug Administration (FDA). But how would applying the FDA model work in practice? Where does the model break down when it comes to AI and related technologies, which differ in many ways from the products the FDA evaluates day to day? To answer these questions, Justin Hendrix spoke to Merlin Stein and Connor Dunlop, the authors of a new report published by the Ada Lovelace Institute titled Safe before sale: Learnings from the FDA’s model of life sciences oversight for foundation models.
-
At the end of this year in which the hype around artificial intelligence seemed to increase in volume with each passing week, it’s worth stepping back and asking whether we need to slow down and put just as much effort into questions about what it is we are building and why.
In today’s episode, we’re going to hear from two researchers at two different points in their careers who spend their days grappling with questions about how we can develop systems and modes of thinking about systems that lead to more just and equitable outcomes, and that preserve our humanity and the planet:
Dr. Batya Friedman is a Professor in the Information School and holds adjunct appointments in the Paul G. Allen School of Computer Science & Engineering, the School of Law, and the Department of Human Centered Design and Engineering at the University of Washington, where she co-directs the Value Sensitive Design Lab and the UW Tech Policy Lab.
Dr. Aylin Caliskan is an Assistant Professor in the Information School at the University of Washington with an adjunct appointment in the Paul G. Allen School of Computer Science & Engineering; she is an affiliate of the UW Tech Policy Lab and part of the Responsible AI Systems and Experiences Center, the NLP Group, and the Value Sensitive Design Lab. She is also co-director elect for the Tech Policy Lab, a role she will assume when Dr. Friedman retires from the university.
-
In April 2021, the European Commission introduced the first regulatory framework for AI within the EU. This Friday, after a marathon set of negotiations, EU policymakers reached a political consensus on the details of the legislation. The AI Act represents the most significant effort among the world’s democracies to comprehensively regulate a technology that promises major social and economic impact. While the AI Act will still have to go through a few final procedural steps before its enactment, its contours are now set. To find out more about what was decided, Justin Hendrix spoke to one journalist who reported directly on the negotiations in Brussels: Luca Bertuzzi, technology editor at EURACTIV.