Episodes
-
Join us in this insightful episode of the RegulatingAI Podcast as we sit down with Francesca Rossi, IBM Fellow and Global Leader for AI Ethics. Based at IBM's T.J. Watson Research Lab in New York, Francesca shares her expertise on cutting-edge AI topics, including constraint reasoning, multi-agent systems, neuro-symbolic AI, and value alignment. With over 220 published works and leadership roles in renowned AI organizations like AAAI and EurAI, Francesca provides a thought-provoking perspective on ethical AI, governance, and the future of artificial intelligence. Don't miss this fascinating conversation!
Resources:
https://www.linkedin.com/in/francesca-rossi-34b8b95/
About Regulating AI:
RegulatingAI is a dedicated non-profit organization designed for experts, mentors, and users of artificial intelligence (AI) with a keen interest in exploring the intersection of AI and regulation. We aim to unite individuals with diverse expertise and backgrounds, fostering collaboration to collectively advance the understanding and implementation of AI regulations.
About your host
Sanjay Puri is a recognized authority on US-India relations. He serves as the Chairman of the US-India Political Action Committee (USINPAC), a national, bipartisan political action committee representing Indian-Americans. He is also the founder of the Alliance for US India Business (AUSIB), an organization dedicated to strengthening economic ties between the US and India. He is also a successful technology entrepreneur, mentor, and investor.
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #regulatingaipodcast #innovationinAIRegulation
Streaming On:
Apple Podcast: https://podcasts.apple.com/us/podcast/regulating-ai-innovate-responsibly/id1714410167
Spotify: https://open.spotify.com/show/3ZkXYPINugnegkORcBCrYo?si=a7ad672e8e194bea
YouTube: https://www.youtube.com/@The_Regulating_AI_Podcast
Join our fastest-growing AI Community:
Instagram: https://www.instagram.com/regulating_ai/
Twitter: https://twitter.com/RegulatingAI
LinkedIn: https://www.linkedin.com/company/regulating-ai
Facebook: https://www.facebook.com/RegulatingAI
Read Our Blogs, News & Updates: https://regulatingai.org/
Join the Conversation:
Leave your thoughts and questions in the comments below. We'd love to hear from you!
-
In this episode of RegulatingAI, former FCC Chairman Tom Wheeler unpacks the complexities of AI regulation and governance. Drawing from his vast experience in telecommunications, Wheeler emphasizes the critical need for balanced oversight that fosters innovation without compromising fairness or safety.
He shares his thoughts on:
The evolving landscape of AI governance and its societal impacts
Why establishing both technical and behavioural standards is essential for effective AI oversight
The importance of a multi-stakeholder approach to navigate AI's challenges and opportunities
Resources:
https://www.brookings.edu/people/tom-wheeler/
https://www.amazon.com/dp/B0C4FZ1QT4?ref_=cm_sw_r_cp_ud_dp_4VWY6H0X6YKWMRDBPSSD
-
In this episode of RegulatingAI, Patrik Gayer, Head of Global Affairs at Silo AI, discusses the challenges and opportunities in regulating artificial intelligence. With his expertise in AI policy, Patrik provides a deep dive into creating fair, practical legislation that fosters innovation while addressing global concerns.
Resources:
https://hir.harvard.edu/the-eus-chance-to-lead-forging-a-global-regulatory-framework-for-artificial-intelligence-amidst-exponential-progress/
https://www.linkedin.com/posts/harvard-ksr_volume-xxiii-activity-7147390411131482113-x16t?utm_source=share&utm_medium=member_desktop
https://www.linkedin.com/in/patrikgayer/
-
In this episode of RegulatingAI Podcast, we’re joined by Congressman Scott Franklin from Florida’s 18th Congressional District, a member of the House AI Task Force, and a strong advocate for responsible AI regulation. Drawing on his unique background in the Navy, insurance, and agriculture, Rep. Franklin provides valuable insights into Congress’s role in the ever-evolving world of AI governance.
Resources:
https://franklin.house.gov/about
https://en.wikipedia.org/wiki/Scott_Franklin_(politician)
https://x.com/repfranklin
https://www.linkedin.com/in/cscottfranklin/
https://www.congress.gov/member/c-franklin/F000472
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
-
Join us for an insightful discussion on the intersection of AI and Green Technology as drivers of global progress and sustainable development. This roundtable features highlights from the Imperial Springs International Forum 2024, hosted by Club de Madrid, where over 130 leaders from 40+ countries gathered to explore the future of international cooperation and multilateralism.
-
Artificial Intelligence has immense potential, but it also carries risks — particularly when it comes to civil liberties. In this episode, I speak with Faiza Patel, Senior Director of the Liberty and National Security Program at the Brennan Center for Justice at NYU Law. Together, we explore how AI can be regulated to ensure fairness, accountability and civil rights, especially in the context of national security and law enforcement.
Key Takeaways:
(01:53) AI in national security, law enforcement and immigration contexts.
(05:00) The dangers of AI in government decisions, from immigration to surveillance.
(09:09) Long-standing issues with AI, including biased training data in facial recognition.
(12:55) The complexities of regulating AI-generated media, such as deepfakes, while protecting free speech.
(17:00) The need for transparency in AI systems and the importance of scrutinizing outputs.
(20:25) How marginalized communities are disproportionately affected by AI.
(23:30) Companies developing AI must embed civil rights principles into their products.
(26:45) Creating unbiased AI systems is a challenge, but necessary to avoid harm.
(29:58) The need for a dedicated regulatory body to oversee AI, especially in national security.
(34:00) AI’s potential impact on jobs and why policymakers need to prepare for labor disruption.
Resources Mentioned:
Faiza Patel -
https://www.linkedin.com/in/faiza-patel-5a042816/
Brennan Center for Justice -
https://www.brennancenter.org/
President Biden’s Executive Order on AI -
https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
AI Bill of Rights -
https://www.whitehouse.gov/ostp/ai-bill-of-rights/
Brennan Center - Faiza Patel -
https://www.brennancenter.org/experts/faiza-patel
National Security Carve-Outs Undermine AI Regulations -
https://www.brennancenter.org/our-work/analysis-opinion/national-security-carve-outs-undermine-ai-regulations
Senate AI Hearings Highlight Increased Need for Regulation -
https://www.brennancenter.org/our-work/analysis-opinion/senate-ai-hearings-highlight-increased-need-regulation
The Perils and Promise of AI Regulation -
https://www.brennancenter.org/our-work/analysis-opinion/perils-and-promise-ai-regulation
Advances in AI Increase Risks of Government Social Media Monitoring -
https://www.brennancenter.org/our-work/analysis-opinion/advances-ai-increase-risks-government-social-media-monitoring
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
-
In this episode of the RegulatingAI podcast, Sanjay Puri hosts an insightful discussion with Mr. Boris Tadić, former President of Serbia, to explore the profound implications of artificial intelligence (AI) on governance, society, and global relations at Imperial Springs International Forum 2024, Madrid, Spain. From its potential to revolutionise education and development to concerns about its effects on democracy and societal values, this conversation delves deep into the opportunities and challenges AI presents.
Resources:
https://x.com/boristadic58
https://clubmadrid.org/who/members/tadic-boris/
https://en.wikipedia.org/wiki/Boris_Tadi%C4%87
-
In this Episode, Tunisia’s Former Prime Minister, Mehdi Jomaa, shares his vision for the country’s potential to emerge as a leading technology hub in the Arab world and the Global South. With its strategic location bridging Africa, Europe, and the Middle East, Tunisia is positioned to become a key player in the global technological revolution, particularly in artificial intelligence.
Resources:
https://www.linkedin.com/in/mehdi-jomaa-60a8333b/
https://x.com/Mehdi_Jomaa
https://www.facebook.com/M.mehdi.jomaa
https://clubmadrid.org/who/members/mehdi-jomaa/
-
The rapid rise of AI brings both extraordinary potential and profound risks, demanding urgent global collaboration to ensure its safe development. In this episode, I’m joined by Professor S. Alex Yang, Professor of Management Science and Operations at the London Business School, to explore the complexities of regulating AI, the challenges of international collaboration, and the potential existential risks posed by AI development. With his extensive experience in AI and risk management, Professor Yang provides unique insights into the future of AI governance.
Key Takeaways:
(02:12) Professor Yang’s early AI experiences and his value chain research.
(06:57) The biggest risks from AI, including existential risk and job displacement.
(11:42) The debate on AI nationalism and the preservation of cultural heritage.
(16:28) How China’s chip-making capacity could reshape AI competition.
(21:13) Open-source versus closed-source AI models and the risks involved.
(25:58) Why monitoring monopolies in AI is crucial for innovation.
(30:44) How content creators can benefit from AI and how copyright law is evolving.
(35:29) The importance of fair use standards for AI-generated content.
(40:14) Data aggregation and its future role in AI development.
(45:00) Professor Yang’s final thoughts on the need for agile, principle-based AI regulation.
Resources Mentioned:
Professor S. Alex Yang -
https://www.linkedin.com/in/songayang/
London Business School | LinkedIn -
https://www.linkedin.com/school/london-business-school/
London Business School | Website -
https://www.london.edu/
WorldCoin -
https://worldcoin.org/
The Case for Regulating Generative AI Through Common Law -
https://www.project-syndicate.org/commentary/european-union-ai-act-could-impede-innovation-by-s-alex-yang-and-angela-huyue-zhang-2024-02
Generative AI and Copyright: A Dynamic Perspective -
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
-
AI presents endless opportunities, but its implications for privacy and governance are multifaceted. On this episode, I’m joined by Professor Norman Sadeh, a Computer Science Professor at Carnegie Mellon University, and Co-Founder and Co-Director of the Privacy Engineering Program. With years of experience in AI and privacy, he offers valuable insights into the complexities of AI governance, the evolving landscape of data privacy and why a multidisciplinary approach is vital for creating effective and ethical AI policies.
Key Takeaways:
(02:09) How Professor Sadeh’s work in AI and privacy began.
(05:30) The role privacy engineers play in AI governance.
(08:45) Why AI governance must integrate with existing company structures.
(12:10) The challenges of data ownership and consent in AI applications.
(15:20) Privacy implications of foundational models in AI.
(18:30) The limitations of current regulations like GDPR in addressing AI concerns.
(22:00) How user expectations shape the principles of AI governance.
(26:15) The growing debate around the need for specialized AI regulations.
(30:40) The role of transparency in AI governance for building trust.
(35:50) The potential impact of open-source AI models on security and privacy.
Resources Mentioned:
Professor Norman Sadeh -
https://www.linkedin.com/in/normansadeh/
Carnegie Mellon University | LinkedIn -
https://www.linkedin.com/school/carnegie-mellon-university/
Carnegie Mellon University | Website -
https://www.cmu.edu/
EU AI Act -
https://artificialintelligenceact.eu/
General Data Protection Regulation (GDPR) -
https://gdpr-info.eu/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
-
In this inspiring episode, we explore how AI is not only transforming industries but also reshaping education and the future of work. Learn how diversity, AI skills, and youth empowerment are critical in building an ethical, AI-driven world.
Our guest, Elena Sinel, FRSA and Founder of Teens in AI, shares her mission to champion diversity and equip young people with the skills they need to thrive in the AI era. She discusses the importance of empowering youth to lead the way in creating ethical AI solutions for a better future.
-
In this thought-provoking episode, we explore the crucial role governments play in democratizing AI, ensuring its benefits reach all sectors of society. We discuss the ethical and governance challenges involved in shaping AI policy, as well as the philosophical underpinnings that drive this evolving landscape.
Our distinguished guest, Ted Lechterman, Holder of the UNESCO Chair in AI Ethics & Governance at IE University, provides critical perspectives on how governments can lead the way in creating inclusive, ethical AI policies that align with democratic values.
-
In this episode, we dive into the complexities of AI compliance and the challenges organizations face in navigating the evolving regulatory landscape, especially with the European AI Act. Learn how businesses can stay compliant while driving innovation in AI development.
Our guest, Sean Musch, Founder and CEO of AI & Partners, shares his expertise on the European AI Act and other regulatory frameworks shaping the future of AI. Discover practical strategies for navigating compliance while fostering responsible AI practices.
-
In this episode, we explore how geospatial data is being leveraged to improve crisis response efforts through the integration of AI. Learn about the groundbreaking work of the Humanitarian OpenStreetMap Team in mapping vulnerable areas and using AI to support humanitarian missions in real-time.
Our guest, Paul Uithol, Director of Humanitarian Data at the Humanitarian OpenStreetMap Team, shares his insights into how geospatial data and AI are transforming disaster management and crisis response. Discover the innovative strategies that enable faster, more accurate responses to humanitarian challenges.
-
In this episode, we explore the complexities of global AI regulation and enforcement, focusing on how governments and organizations can balance the need for compliance while fostering innovation. We dive into the challenges of supervising AI across different legislative frameworks and how these regulations shape the future of AI technologies.
Our featured guest, Huub Janssen, Manager on AI at the Ministry of Economic Affairs and the Dutch Authority for Digital Infrastructure, The Netherlands, shares his insights on navigating the regulatory landscape and driving responsible AI development.
-
In this insightful episode, we explore the intersection of AI governance and legal innovation. Join us as we discuss the critical challenges and opportunities that arise as organizations strive to implement responsible AI practices in an ever-evolving regulatory landscape.
Our esteemed guest, Hadassah Drukarch, Director of Policy and Delivery at the Responsible AI Institute, shares her expertise on how to navigate the complexities of AI governance, legal frameworks, and the importance of fostering ethical AI practices.
-
In this compelling episode, we explore how artificial intelligence is transforming disaster response efforts, especially for vulnerable communities impacted by crises. Join us as we discuss innovative strategies that leverage AI to enhance humanitarian action and build more resilient systems.
Our special guest, Katya Klinova, Head of AI and Data Insights for Social and Humanitarian Action at the United Nations Secretary-General's Innovation Lab, shares invaluable insights into the role of AI in disaster management and its potential to bridge critical gaps in support for those most in need.
-
In the latest episode of the RegulatingAI Podcast at the World Summit AI on October 9, 2024, the discussion dives deep into the critical AI competencies driving organizational transformation. The episode explores how AI revolutionizes the workforce through augmentation, reskilling, and enhancing human-computer interaction, all while promoting ethical AI hiring practices.
Special guest Dr. Kevin J. Jones, Director at the IU Columbus Center for Teaching and Learning and Associate Professor of Management, shares insights on how leaders can leverage AI to enhance their organizations and stay ahead of the curve.
-
On this episode, I’m joined by Ruslan Salakhutdinov, UPMC Professor of Computer Science at Carnegie Mellon University. Ruslan discusses the pressing need for AI regulation, its potential for societal transformation and the ethical considerations of its future development, including how to safeguard humanity while embracing innovation.
Key Takeaways:
(02:14) The need to regulate AI to prevent monopolization by large corporations.
(06:03) The dangers of AI-driven misinformation and its impact on public opinion.
(10:32) The risks AI poses in job displacement across multiple industries.
(14:22) How deepfake technology is evolving and its potential consequences.
(18:47) The challenge of balancing AI innovation with data privacy concerns.
(22:10) AI’s growing role in military applications and the need for careful oversight.
(26:05) How AI agents could autonomously interact and the risks involved.
(31:30) The potential for AI to surpass human performance in certain professions.
(37:14) Why international collaboration is critical for effective AI regulation.
(42:56) The ethical dilemmas surrounding AI’s influence in healthcare and decision-making.
Resources Mentioned:
Ruslan Salakhutdinov -
https://www.linkedin.com/in/russ-salakhutdinov-53a0b610/
OpenAI’s Sora Technology -
https://openai.com/index/sora/
Geoffrey Hinton and his contributions to AI -
https://www.linkedin.com/pulse/geoffrey-hinton-alan-francis/
Carnegie Mellon University -
https://www.cmu.edu
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
-
The race for AI leadership is not just about technology; it’s a battle of values and national security that will shape our future. In this episode, I’m joined by Senator Todd Young (R-Ind.) of the United States Senate. He shares insights into AI policy, national security and the steps needed to maintain US leadership in this critical field.
Key Takeaways:
(01:54) The bipartisan effort behind the Senate AI Working Group.
(03:34) How existing laws adapt to an AI-enabled world.
(05:17) Identifying AI risks and regulatory barriers.
(07:41) The role of government expertise in AI-related areas.
(10:12) Understanding the significance of the $32 billion AI public investment.
(13:17) Applying AI innovations across various industries.
(15:27) The impact of China on AI competition and US strategy.
(17:44) Why semiconductors are vital to AI development.
(20:26) Balancing open-source and closed-source AI models.
(22:51) The need for global AI standards and harmonization.
Resources Mentioned:
Senator Todd Young -
https://www.linkedin.com/in/senator-todd-young/
Todd Young -
https://www.young.senate.gov/
United States Senate -
https://www.linkedin.com/company/ussenate/
National AI Research Resource -
https://nairrpilot.org/
CHIPS and Science Act -
https://www.whitehouse.gov/briefing-room/statements-releases/2022/08/09/fact-sheet-chips-and-science-act-will-lower-costs-create-jobs-strengthen-supply-chains-and-counter-china/
Senate AI Policy Roadmap -
https://www.young.senate.gov/wp-content/uploads/One_Pager_Roadmap.pdf
National Security Commission on Artificial Intelligence -
https://reports.nscai.gov/final-report/introduction
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard