Episodes

  • AI presents endless opportunities, but its implications for privacy and governance are multifaceted. On this episode, I’m joined by Professor Norman Sadeh, a Computer Science Professor at Carnegie Mellon University, and Co-Founder and Co-Director of the Privacy Engineering Program. With years of experience in AI and privacy, he offers valuable insights into the complexities of AI governance, the evolving landscape of data privacy and why a multidisciplinary approach is vital for creating effective and ethical AI policies.

    Key Takeaways:

    (02:09) How Professor Sadeh’s work in AI and privacy began.

(05:30) The role privacy engineers play in AI governance.

    (08:45) Why AI governance must integrate with existing company structures.

    (12:10) The challenges of data ownership and consent in AI applications.

(15:20) Privacy implications of foundation models in AI.

    (18:30) The limitations of current regulations like GDPR in addressing AI concerns.

    (22:00) How user expectations shape the principles of AI governance.

    (26:15) The growing debate around the need for specialized AI regulations.

    (30:40) The role of transparency in AI governance for building trust.

    (35:50) The potential impact of open-source AI models on security and privacy.

    Resources Mentioned:

    Professor Norman Sadeh -

    https://www.linkedin.com/in/normansadeh/

    Carnegie Mellon University | LinkedIn -

    https://www.linkedin.com/school/carnegie-mellon-university/

    Carnegie Mellon University | Website -

    https://www.cmu.edu/

    EU AI Act -

    https://artificialintelligenceact.eu/

    General Data Protection Regulation (GDPR) -

    https://gdpr-info.eu/

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • In this inspiring episode, we explore how AI is not only transforming industries but also reshaping education and the future of work. Learn how diversity, AI skills, and youth empowerment are critical in building an ethical, AI-driven world.

    Our guest, Elena Sinel, FRSA and Founder of Teens in AI, shares her mission to champion diversity and equip young people with the skills they need to thrive in the AI era. She discusses the importance of empowering youth to lead the way in creating ethical AI solutions for a better future.

  • In this thought-provoking episode, we explore the crucial role governments play in democratizing AI, ensuring its benefits reach all sectors of society. We discuss the ethical and governance challenges involved in shaping AI policy, as well as the philosophical underpinnings that drive this evolving landscape.

    Our distinguished guest, Ted Lechterman, Holder of the UNESCO Chair in AI Ethics & Governance at IE University, provides critical perspectives on how governments can lead the way in creating inclusive, ethical AI policies that align with democratic values.

  • In this episode, we dive into the complexities of AI compliance and the challenges organizations face in navigating the evolving regulatory landscape, especially with the European AI Act. Learn how businesses can stay compliant while driving innovation in AI development.

    Our guest, Sean Musch, Founder and CEO of AI & Partners, shares his expertise on the European AI Act and other regulatory frameworks shaping the future of AI. Discover practical strategies for navigating compliance while fostering responsible AI practices.

  • In this episode, we explore how geospatial data is being leveraged to improve crisis response efforts through the integration of AI. Learn about the groundbreaking work of the Humanitarian OpenStreetMap Team in mapping vulnerable areas and using AI to support humanitarian missions in real-time.

    Our guest, Paul Uithol, Director of Humanitarian Data at the Humanitarian OpenStreetMap Team, shares his insights into how geospatial data and AI are transforming disaster management and crisis response. Discover the innovative strategies that enable faster, more accurate responses to humanitarian challenges.

  • In this episode, we explore the complexities of global AI regulation and enforcement, focusing on how governments and organizations can balance the need for compliance while fostering innovation. We dive into the challenges of supervising AI across different legislative frameworks and how these regulations shape the future of AI technologies.

    Our featured guest, Huub Janssen, Manager on AI at the Ministry of Economic Affairs and the Dutch Authority for Digital Infrastructure, The Netherlands, shares his insights on navigating the regulatory landscape and driving responsible AI development.

  • In this insightful episode, we explore the intersection of AI governance and legal innovation. Join us as we discuss the critical challenges and opportunities that arise as organizations strive to implement responsible AI practices in an ever-evolving regulatory landscape.

    Our esteemed guest, Hadassah Drukarch, Director of Policy and Delivery at the Responsible AI Institute, shares her expertise on how to navigate the complexities of AI governance, legal frameworks, and the importance of fostering ethical AI practices.

  • In this compelling episode, we explore how artificial intelligence is transforming disaster response efforts, especially for vulnerable communities impacted by crises. Join us as we discuss innovative strategies that leverage AI to enhance humanitarian action and build more resilient systems.

    Our special guest, Katya Klinova, Head of AI and Data Insights for Social and Humanitarian Action at the United Nations Secretary-General's Innovation Lab, shares invaluable insights into the role of AI in disaster management and its potential to bridge critical gaps in support for those most in need.

  • In the latest episode of the RegulatingAI Podcast at the World Summit AI on October 9, 2024, the discussion dives deep into the critical AI competencies driving organizational transformation. The episode explores how AI revolutionizes the workforce through augmentation, reskilling, and enhancing human-computer interaction, all while promoting ethical AI hiring practices.

    Special guest Dr. Kevin J. Jones, Director at the IU Columbus Center for Teaching and Learning and Associate Professor of Management, shares insights on how leaders can leverage AI to enhance their organizations and stay ahead of the curve.

  • On this episode, I’m joined by Ruslan Salakhutdinov, UPMC Professor of Computer Science at Carnegie Mellon University. Ruslan discusses the pressing need for AI regulation, its potential for societal transformation and the ethical considerations of its future development, including how to safeguard humanity while embracing innovation.

    Key Takeaways:

    (02:14) The need to regulate AI to prevent monopolization by large corporations.

    (06:03) The dangers of AI-driven misinformation and its impact on public opinion.

    (10:32) The risks AI poses in job displacement across multiple industries.

    (14:22) How deepfake technology is evolving and its potential consequences.

    (18:47) The challenge of balancing AI innovation with data privacy concerns.

    (22:10) AI’s growing role in military applications and the need for careful oversight.

    (26:05) How AI agents could autonomously interact and the risks involved.

    (31:30) The potential for AI to surpass human performance in certain professions.

    (37:14) Why international collaboration is critical for effective AI regulation.

    (42:56) The ethical dilemmas surrounding AI’s influence in healthcare and decision-making.

    Resources Mentioned:

    Ruslan Salakhutdinov -

    https://www.linkedin.com/in/russ-salakhutdinov-53a0b610/

    OpenAI’s Sora Technology -

    https://openai.com/index/sora/

    Geoffrey Hinton and his contributions to AI -

    https://www.linkedin.com/pulse/geoffrey-hinton-alan-francis/

    Carnegie Mellon University -

    https://www.cmu.edu

  • The race for AI leadership is not just about technology; it’s a battle of values and national security that will shape our future. In this episode, I’m joined by Senator Todd Young, United States Senator (R-Ind.) at the United States Senate. He shares insights into AI policy, national security and the steps needed to maintain US leadership in this critical field.

    Key Takeaways:

    (01:54) The bipartisan effort behind the Senate AI Working Group.

    (03:34) How existing laws adapt to an AI-enabled world.

    (05:17) Identifying AI risks and regulatory barriers.

    (07:41) The role of government expertise in AI-related areas.

    (10:12) Understanding the significance of the $32 billion AI public investment.

    (13:17) Applying AI innovations across various industries.

    (15:27) The impact of China on AI competition and US strategy.

    (17:44) Why semiconductors are vital to AI development.

    (20:26) Balancing open-source and closed-source AI models.

    (22:51) The need for global AI standards and harmonization.

    Resources Mentioned:

    Senator Todd Young -

    https://www.linkedin.com/in/senator-todd-young/

    Todd Young -

    https://www.young.senate.gov/

    United States Senate -

    https://www.linkedin.com/company/ussenate/

    National AI Research Resource -

    https://nairrpilot.org/

    CHIPS and Science Act -

    https://www.whitehouse.gov/briefing-room/statements-releases/2022/08/09/fact-sheet-chips-and-science-act-will-lower-costs-create-jobs-strengthen-supply-chains-and-counter-china/

    Senate AI Policy Roadmap -

    https://www.young.senate.gov/wp-content/uploads/One_Pager_Roadmap.pdf

    National Security Commission on Artificial Intelligence -

    https://reports.nscai.gov/final-report/introduction

  • AI and RNA are revolutionizing drug discovery, promising a future where life-saving medications are developed faster and at lower costs.

    In this episode, Raphael Townshend, PhD, Founder and CEO of Atomic AI, sits down with me to discuss the intersection of AI and RNA in drug development. We explore how AI technologies can reduce the cost and time required for clinical trials and target previously incurable diseases.

    Key Takeaways:

(02:15) Raphael’s background in AI and biology, and the founding of Atomic AI.

    (05:59) Reducing time and failure rate in drug discovery with AI.

    (07:16) AlphaFold's breakthrough in understanding molecular shapes using AI.

    (09:23) Ensuring transparency and accountability in AI-driven drug discovery.

    (12:22) Navigating intellectual property concerns in healthcare AI.

    (15:34) Integrating AI with wet lab testing for accurate drug discovery results.

    (17:31) Balancing intellectual property and open research in biotech.

    (20:02) Addressing data privacy and security in AI algorithms.

    (22:30) Educating users and healthcare professionals about AI in drug discovery.

    (24:48) Collaborating with global regulators for AI-driven drug discovery innovations.

    Resources Mentioned:

    Raphael Townshend -

    https://www.linkedin.com/in/raphael-townshend-9154962a/

    Atomic AI | LinkedIn -

    https://www.linkedin.com/company/atomic-ai-rna/

    AlphaFold -

    https://deepmind.google/technologies/alphafold/

    Atomic AI Homepage -

    https://atomic.ai/

    ATOM-1 Large Language Model -

    https://www.biospace.com/atomic-ai-creates-first-large-language-model-using-chemical-mapping-data-to-optimize-rna-therapeutic-development

  • On this episode, I’m joined by Senator Mike Rounds, US Senator for South Dakota and Co-Chair of the Senate AI Caucus, to discuss how the US can regulate AI responsibly while fostering innovation. With his extensive experience in both state and federal government, Senator Rounds shares his insights into the Bipartisan Senate AI Working Group and its roadmap for AI policy.

    Key Takeaways:

    (01:23) The Bipartisan Senate AI Working Group aims to balance AI regulation and innovation.

    (05:07) Why intellectual property protections are essential in AI development.

    (07:27) National security implications of AI in weapons systems and defense.

    (09:19) The potential of AI to revolutionize healthcare through faster drug approvals.

    (10:55) How AI can aid in detecting and combating biological threats.

    (15:00) The importance of workforce training to mitigate AI-driven job displacement.

    (19:05) The role of community colleges in preparing the workforce for an AI-driven future.

    (24:00) Insights from international collaboration on AI regulation.

    Resources Mentioned:

    Senator Mike Rounds Homepage - https://www.rounds.senate.gov/

    GUIDE AI Initiative - https://www.rounds.senate.gov/newsroom/press-releases/rounds-introduces-artificial-intelligence-policy-package

    Medshield - https://www.linkedin.com/company/medshield-llc

  • In this episode, I’m joined by Charity Rae Clark, Vermont Attorney General, and Monique Priestley, Vermont State Representative. They have been instrumental in shaping Vermont’s legislative approach to data privacy and AI. We dive into the challenges of regulating AI to keep citizens safe, the importance of data minimization and the broader implications for society.

    Key Takeaways:

(02:10) “Free” apps and websites take payment in the form of your data.

    (08:15) The Data Privacy Act includes stringent provisions to protect children online.

    (10:05) Protecting consumer privacy and reducing security risks.

    (15:29) Vermont’s legislative journey includes educating lawmakers.

    (18:45) Innovation and regulation must be balanced for future AI development.

    (23:50) Collaboration and education can overcome intense pressure from lobbyists.

    (30:02) AI’s potential to exacerbate discrimination demands regulation.

    (36:15) Deepfakes present a growing threat.

    (42:40) Consumer trust could be lost due to premature releases of AI products.

    (50:10) The necessity of a strong foundation in data privacy. 

    Resources Mentioned:

    Charity Rae Clark -

    https://www.linkedin.com/in/charityrclark/

    Monique Priestley -

    https://www.linkedin.com/in/mepriestley/

    Vermont -

    https://www.linkedin.com/company/state-of-vermont/

    “The Age of Surveillance Capitalism” by Shoshana Zuboff -

    https://www.amazon.com/Age-Surveillance-Capitalism-Future-Frontier/dp/1610395697

“Why Privacy Matters” by Neil Richards -

    https://www.amazon.com/Why-Privacy-Matters-Neil-Richards/dp/0190940553

  • Dive into the tangled web of AI and copyright law with Keith Kupferschmid, CEO of the Copyright Alliance, as he reveals how AI companies navigate legal responsibilities and examines what creators can do to safeguard their intellectual property in an AI-driven world.

    Key Takeaways:

    (02:00) The Copyright Alliance represents over 15,000 organizations and 2 million individual creators.

(05:12) The two settings where copyright infringement can occur: the ingestion process and the output stage.

    (06:00) There have been 17 or 18 AI copyright cases filed recently.

    (08:00) Fair Use in AI is not categorical and is decided on a case-by-case basis.

    (13:32) AI companies often shift liability to prompters, but both can be held liable under existing laws.

    (15:00) Creators should clearly state their licensing preferences on their works to protect themselves.

    (17:50) Current copyright laws are flexible enough to adapt to AI without needing new legislation.

    (20:00) Market-based solutions, such as licensing, are crucial for addressing AI copyright issues.

    (27:34) Education and public awareness are vital for understanding copyright issues related to AI.

    Resources Mentioned:

    Keith Kupferschmid - https://www.linkedin.com/in/keith-kupferschmid-723b19a/

    Copyright Alliance - https://copyrightalliance.org

    U.S. Copyright Office - https://www.copyright.gov

    Getty Images Licensing - https://www.gettyimages.com

    National Association of Realtors - https://www.nar.realtor

• The future of AI lies at the intersection of technology and ethics. How do we navigate this complex landscape? Today, I’m joined by Maria Luciana Axente, Head of Public Policy and Ethics at PwC UK and Intellectual Forum Senior Research Associate at Jesus College Cambridge, who offers key insights into the ethical implications of AI.

    Key Takeaways:

    (03:56) The importance of integrating ethical principles into AI.

    (08:22) Preserving humanity in the age of AI.

    (12:19) Embedding value alignment in AI systems.

    (15:59) Fairness and voluntary commitments in AI.

    (21:01) Participatory AI and including diverse voices.

    (24:05) Cultural value systems shaping AI policies.

    (26:25) The importance of reflecting on AI’s impact before implementation.

    (27:48) Learning from other industries to govern AI better.

    (28:59) AI as a socio-technical system, not just technology.

    Resources Mentioned:

    Maria Luciana Axente - https://www.linkedin.com/in/mariaaxente/

    PwC UK - https://www.linkedin.com/company/pwc-uk/

    Jesus College Cambridge - https://www.linkedin.com/company/jesus-college-cambridge/

    PWC homepage - https://www.pwc.co.uk/

  • Can AI spark new creative revolutions? On this episode, I’m joined by Lianne Baron, Strategic Partner Manager for Creative Partnerships at Meta. Lianne unveils how AI is not just a tool but a transformative force in the creative landscape, emphasizing the irreplaceable value of human imagination. We explore the rapid pace of innovation, the challenges of embracing new tech, and the exciting future of idea generation and delivery.

    Key Takeaways:

    (03:50) Embrace AI's changes; it challenges traditional methods.

    (05:13) AI speeds up the journey from imagination to delivery.

    (07:15) The move to cinematic quality sparks excitement and fear.

    (08:30) Education is key in democratizing AI for all.

    (15:00) Risk of bias without diverse voices in AI development.

    (17:15) Ideas, not skills, are the new currency in AI.

    (26:16) Imagination and human experience are irreplaceable by AI.

    (29:11) AI can democratize storytelling, sharing diverse narratives.

    (33:00) AI breaks down barriers, fostering new creative opportunities.

    (36:20) Understanding authenticity is crucial in an AI-driven world.

    Resources Mentioned:

    Lianne Baron - https://www.linkedin.com/in/liannebaron/

    Meta - https://www.meta.com/

  • The potential of AI is transforming industries, but how do we regulate this rapidly evolving technology without stifling innovation?

    On this episode, I’m joined by Professor Zico Kolter, Professor and Director of the Machine Learning Department at Carnegie Mellon University and Chief Expert at Bosch USA, who shares his insights on AI regulation and its challenges.

    Key Takeaways:

    (02:41) AI innovation outpaces legislation. 

    (04:00) Regulating technology vs. its usage is crucial. 

    (06:36) AI is advancing faster than ever. 

    (11:14) Companies must prevent AI misuse. 

    (15:30) Bias-free algorithms are not feasible. 

    (21:34) Human interaction in AI decisions is essential. 

    (27:49) The competitive environment benefits AI development. 

    (32:26) Perfectly accepted regulations indicate mistakes. 

    (37:52) Regulations should adapt to technological changes. 

    (42:49) AI developers aim to benefit people.

    (45:16) Human-in-the-loop AI is crucial for reliability. 

    (46:30) Addressing gaps in AI systems is critical.

    Resources Mentioned:

    Zico Kolter - https://www.linkedin.com/in/zico-kolter-560382a4/

    Carnegie Mellon University - https://www.linkedin.com/school/carnegie-mellon-university/

    Bosch USA - https://www.linkedin.com/company/boschusa/

    EU AI Act - https://ec.europa.eu/digital-strategy/our-policies/eu-regulatory-framework-artificial-intelligence_en

    OpenAI - https://www.openai.com/

  • On this episode, I’m joined by Professor Paul Rainey to discuss the evolutionary principles applicable to AI development and the potential risks of self-replicating AI systems. Paul is Director of the Department of Microbial Population Biology at the Max Planck Institute for Evolutionary Biology in Plön; Professor at ESPCI in Paris; Fellow of the Royal Society of New Zealand; a Member of EMBO & European Academy of Microbiology; and Honorary Professor at Christian Albrechts University in Kiel. 

    Key Takeaways:

    (00:04) Evolutionary transitions form higher-level structures.

    (00:06) Eukaryotic cells parallel future AI-human interactions.

    (00:08) Major evolutionary transitions inform AI-human interactions.

    (00:11) Algorithms can evolve with variation, replication and heredity.

    (00:13) Natural selection drives complexity.

    (00:18) AI adapts to selective pressures unpredictably.

    (00:21) Humans risk losing autonomy to AI.

    (00:25) Societal engagement is needed before developing self-replicating AIs.

    (00:30) The challenge of controlling self-replicating systems.

    (00:33) Interdisciplinary collaboration is crucial for AI challenges.

    Resources Mentioned:

    Max Planck Institute for Evolutionary Biology

    Professor Paul Rainey - Max Planck Institute

    Max Planck Research Magazine - Issue 3/2023

    Paul Rainey’s article in The Royal Society Publishing

  • In this episode, I’m joined by Jaap van Etten, CEO and Co-Founder of Datenna, the leading provider of techno-economic intelligence in China. Jaap’s unique background as a diplomat turned entrepreneur provides invaluable insights into the intersection of AI, innovation and policy.

    Key Takeaways:

    (01:30) Transitioning from diplomat to tech entrepreneur.

    (05:23) Key differences in AI approaches between China, Europe and the US.

    (07:20) The Chinese entrepreneurial mindset and its impact on innovation.

    (10:03) China’s strategy in AI and the importance of being a technological leader.

    (17:05) Challenges and misconceptions about China’s technological capabilities.

    (23:17) Recommendations for AI regulation and international cooperation.

    (30:19) Jaap’s perspective on the future of AI legislation.

    (35:12) The role of AI in policymaking and decision-making.

    (40:54) Policymakers need scenario planning and foresight exercises to keep up with rapid technological advancements.

Resources Mentioned:

    Jaap van Etten - https://www.linkedin.com/in/jaapvanetten/

    Datenna - https://www.linkedin.com/company/datenna/

    https://www.nytimes.com/2006/05/15/technology/15fraud.htm

    http://www.china.org.cn/english/scitech/168482.htm 

    https://en.wikipedia.org/wiki/Hanxin 

    https://www.linkedin.com/pulse/china-marching-forward-artificial-intelligence-jaap-van-etten/ 

    https://github.com/Kkevsterrr/geneva 

    https://geneva.cs.umd.edu 

    https://www.grc.com/sn/sn-779.pdf
