Episodes

  • What are AI standards – and why should we care? Our guest today, Dr Kobi Leins, has first-hand experience both as a contributor to the development of international AI standards and as a professional supporting safe AI in real-world industry contexts. We talk about what AI standards are for and why the discussion and work feeding into standards – and AI development and deployment more broadly – matter for us all. It’s the kind of tricky discussion that starts in industry and day-to-day applications of AI, and ends in military uses of AI.

    If you care about AI ethics, safety, responsibility, all those words – then you need to listen to this conversation.

    Credits

    Guest: Dr Kobi Leins

    Hosts: Zena Assaad and Liz Williams

    Producers: Robbie Slape, Zena Assaad, Liz Williams, Martin Franklin (East Coast Studio)

    Thanks to the Australian National Centre for the Public Awareness of Science for letting us use their podcast studio.

    For episode links and the full transcript, visit https://algorithmicfutures.org/s03e04

  • The idea that artificial intelligence is taking our jobs can be scary – but there are cases where this is actually a good thing. Dr Sara Webb (Swinburne University of Technology) shares one of these stories in today’s episode, which begins with a TEDx talk in Melbourne and ends with a discussion of some of the many ways techniques developed for astrophysics are transforming seemingly unrelated fields. Sara is an astrophysicist based at Swinburne University and a published author with a talent for communicating complex ideas about our universe (and AI) to broad audiences. Listen in to hear more about the role AI is increasingly playing in astronomy, how she got into astrophysics in the first place, and more in this wide-ranging episode that paints a picture of what a career in STEM can look like.

    Episode credits:

    Guest: Sara Webb

    Co-hosts: Zena Assaad and Liz Williams

    Producers: Zena Assaad, Robbie Slape, Liz Williams, Martin Franklin (East Coast Studio)

    Thanks to the Australian National Centre for the Public Awareness of Science for letting us use their podcast studio to record this episode.

    For the full episode transcript, visit https://algorithmicfutures.org

  • In the age of DALL-E and Stable Diffusion, what counts as art? And what can art tell us about AI? In this episode, we explore these questions and more with the help of Eryk Salvaggio, a US-based artist, designer and researcher whose work explores the fabric of artificial intelligence – and often playfully defies its boundaries.

    Credits

    Guest – Eryk Salvaggio

    Hosts – Zena Assaad and Liz Williams

    Producers – Robbie Slape, Zena Assaad, Liz Williams

    Audio Producer – Martin Franklin (East Coast Studio)

    Thank you to the Australian National Centre for the Public Awareness of Science for allowing us to use their podcast studio for this episode.

    For the full transcript or episode video, visit https://algorithmicfutures.org

  • It is the launch of season 3 of this podcast, and we thought it was high time for a positionality statement – er, episode. Why not align it with the start of a new season and our debut on YouTube? Listen in for an episode featuring our co-hosts, Liz Williams and Zena Assaad, in which we explore everything from relics, reactions, reciprocity, and risk to the complexities involved in creating and regulating AI systems in the real world.

    Credits:

    Co-hosts: Zena Assaad and Liz Williams

    Producers: Robbie Slape, Zena Assaad, Liz Williams, and Martin Franklin (East Coast Studio)

    Thanks to the Australian National Centre for the Public Awareness of Science for letting us use their podcast studio for recording.

    We would also like to pay our respects to the Traditional Owners of the lands on which we recorded and edited this episode.

    For show notes, the full (edited!) transcript, and maybe even a picture of the big blue ball, visit https://algorithmicfutures.org

  • In our final episode of season 2, we are grateful to be joined by Damith Herath, Associate Professor of Robotics and Art at the University of Canberra. Damith is a multi-talented roboticist with a long history of working in the art world, and an interest in understanding how to shape human-robot collaboration in real-world environments. During our conversation, Damith talks to us about how his innate drive to experiment with electronics and robotics led him from an entrepreneurial childhood in Sri Lanka to the forefront of robotics and automation research in Australia.

    Credits:

    Guest: Damith Herath (University of Canberra)

    Co-hosts: Zena Assaad and Liz Williams

    Producers: Zena Assaad, Liz Williams, Robbie Slape, Martin Franklin (East Coast Studio)

    Acknowledgements:

    A special thanks to the ANU School of Cybernetics for lending us the use of their podcast studio for this recording.

    Transcript:

    A full transcript of this episode is available on our website:

    https://algorithmicfutures.org/s02e09

  • What does responsibility look like in military contexts – and how do you think about encoding it in autonomous military technologies with the capacity to harm? In today’s episode, we explore this topic from a legal perspective with the help of Lauren Sanders. Lauren is a senior research fellow at the University of Queensland with expertise in international criminal law, international humanitarian law, and domestic counter-terrorism law. She is also host and editor of the Law and the Future of War podcast.

    Episode Credits:

    Guest: Lauren Sanders

    Co-Hosts: Zena Assaad and Liz Williams

    Producers: Zena Assaad, Liz Williams, and Martin Franklin (East Coast Studio)

    This episode is rated explicit because the topic of discussion may not be suitable for young listeners.

    For the full episode transcript, visit https://algorithmicfutures.org/s02e08

  • In this episode, we explore the “nuclear mindset” – a term being thrown around in discussions about Australia’s plans to acquire conventionally armed, nuclear-powered submarines as part of the AUKUS trilateral partnership between the US, UK, and Australia. With the help of Veronica Taylor, Will Grant, and Ed Simpson, guest co-host AJ Mitchell and I explore what a nuclear mindset might look like, and discuss how we can help train a new generation of nuclear technology creators, regulators and dreamers to approach their work with the care needed to use nuclear technologies safely, responsibly, and securely in an Australian context.

    Along the way, we talk about Australia’s already lengthy history of working with nuclear technologies, tricky considerations like how to manage nuclear waste (even for widely accepted applications like nuclear medicine), and far more in this wide-ranging and transdisciplinary discussion. There will be lessons in this episode for anyone who designs, manages, or regulates technologies used in safety-critical applications – including those enabled by artificial intelligence.

    Episode Credits:

    Host: Liz Williams

    Guest co-host: AJ Mitchell

    AJ Mitchell is a Senior Lecturer in the ANU Department of Nuclear Physics and Accelerator Applications. He is the convenor of the ANU Graduate Certificate of Nuclear Technology Regulation, does research in fundamental nuclear structure and applied nuclear science, and is a passionate educator and science communicator. He is also actively involved in teacher-training projects in Timor-Leste and leads a program with the University of Yangon in Myanmar to build teaching and research capacity in physics.

    Guests:

    Veronica Taylor is Professor of Law and Regulation in the School of Regulation and Global Governance (or RegNet) at ANU. She is a former Dean of the ANU College of Asia and the Pacific, a member of the ANU Steering Group on Nuclear Technology Stewardship, and one of the chief investigators for the newly awarded Australian Research Council Industrial Transformation Training Centre for Radiation Innovation.

    Will Grant is Associate Professor in Science Communication at the Australian National Centre for the Public Awareness of Science, which is based at ANU, and is a prolific writer and contributor on the interaction between science, politics and technology. He is also a member of the ANU Working Group on Nuclear Technology Stewardship. Will has some fantastic podcasts of his own: The Wholesome Show, G'day Patriots and G'day Sausages.

    Ed Simpson is a Senior Lecturer at the ANU Department of Nuclear Physics and Accelerator Applications, Nuclear Science Lead for the ANU Research School of Physics, and is one of the few nuclear theorists I know who can hold his own in laboratory settings. He is heavily involved in nuclear science education here on campus, has experience in government through service as an Australian Science Policy Fellow, and is also a Chief Investigator on the new Australian Research Council Industrial Transformation Training Centre for Radiation Innovation.

    Producers: Liz Williams, Martin Franklin (East Coast Studio), Zena Assaad

    Acknowledgements: A special thanks to the Australian National Centre for the Public Awareness of Science (CPAS) for allowing us to use their recording studio for this episode.

    For the full episode transcript, visit: https://algorithmicfutures.org/s02e07/

    Regarding the explicit rating: This episode mentions nuclear weapons and weapons testing, and also talks about the use of nuclear propulsion for Defence. If you don't wish to discuss these topics with small children, it may be worth saving this episode for another time.

  • Our episode today features Tracey Spicer, award-winning journalist, author, and social justice advocate, who begins this episode with a story from her own life: her son, after watching an episode of South Park, declared “Mum, I want a robot slave.” This declaration prompted Tracey to begin a seven-year journey exploring how society shapes the technology we surround ourselves with, and how technology in turn shapes us. Her findings are documented in her latest book, Man-Made, which was published by Simon & Schuster earlier this year. Tune in to hear more about Tracey’s latest book, her work as a journalist and social justice advocate, how technology is changing journalism, life as a working parent, and so much more.

    Please note: We discuss some of the realities of work for women. This occasionally touches on topics that are not suitable for young listeners.

    Credits

    Guest: Tracey Spicer

    Hosts: Zena Assaad and Liz Williams

    Producers: Zena Assaad, Liz Williams, and Martin Franklin (East Coast Studio)

    Theme music: Coma-Media

  • You have probably heard of ChatGPT – the generative AI language model that is already transforming work and education. In this episode, we explore the many potential benefits and challenges ChatGPT and models like it pose for education and law with the help of Simon Chesterman, author of We, the Robots? Regulating Artificial Intelligence and the Limits of the Law, David Marshall Professor and Vice Provost of Educational Innovation at the National University of Singapore, Senior Director of AI Governance at AI Singapore, and Editor of the Asian Journal of International Law. This episode has something for everyone who is interested in understanding how we can sensibly make the best use of generative AI models like ChatGPT while mitigating their potential for harm.

    Credits:

    Guest: Simon Chesterman

    Hosts: Zena Assaad and Liz Williams

    Guest co-hosts: Tom Chan, Matthew Phillipps

    Producers: Tom Chan, Matthew Phillipps, Robbie Slape, Zena Assaad, Liz Williams, Martin Franklin (East Coast Studio)

    Theme music: Coma-Media

    Thank you to the ANU School of Cybernetics for allowing us to record Tom and Matthew’s audio in their studio.

    Transcript:

    For the full transcript of this episode, visit: https://algorithmicfutures.org/s02e05

  • What does human flourishing have to do with human-machine teams? And how do we meaningfully engage stakeholders in consultations about some of the most challenging problems of our time? Listen in as we explore some of these questions with Kate Devitt, co-founder and CEO of BetterBeliefs – a platform for evidence-based stakeholder engagement and decision-making – who also happens to be an internationally recognized leader in ethical robotics, autonomous systems and AI.

    Credits:

    Guest: Kate Devitt

    Hosts: Zena Assaad and Liz Williams

    Producers: Zena Assaad, Liz Williams, Martin Franklin (East Coast Studio)

    Theme music: Coma-Media

    We would like to acknowledge the Traditional Owners of the lands on which this episode was recorded, and pay our respects to Elders past and present.

    Content notes:

    We have chosen to list this episode as explicit because of some discussion of warfare.

    For the full transcript, visit https://algorithmicfutures.org/s02e04

  • Most of us have a vested interest in what happens in space – whether we know it or not. Listen in as we talk to Cassandra Steer, Deputy Director of the Australian National University Institute for Space – or ANU InSpace, for short – about space law, diversity and inclusivity in the space sector, and why it matters for us all that diverse perspectives contribute to Australia’s future in space.

    Credits:

    Guest: Cassandra Steer

    Hosts: Zena Assaad and Liz Williams

    Producers: Zena Assaad, Liz Williams and Martin Franklin (East Coast Studio)

    Theme music: Coma-Media

    ***

    If you enjoyed this episode, please remember to give us a 5-star review on Apple Podcasts and share the episode with friends and colleagues. We put a lot of time and effort into producing every episode and really appreciate your support.

    You can also access the full transcript of this episode on our website: https://algorithmicfutures.org

    ***

    Notes on the content: We have chosen to list this episode as explicit because of some discussion of sexism and racism, some mention of warfare, and a brief story about discussing terrorism in a classroom setting.

    Disclaimer: This episode is for your education and entertainment only. None of this is meant to be taken as advice specific to your situation.

  • Our second episode of Season 2 features Sue Keay. Sue is currently the robotics technology lead at OZ Minerals, Chair and Founder of Robotics Australia Group, and a member of the Advisory Committee for the National Robotics Strategy (amongst many other accomplishments). She joined us for a chat shortly before the Department of Industry, Science and Resources released its National Robotics Strategy discussion paper – which Sue had a hand in shaping – and shared with us the many challenges and opportunities she sees for the future of robotics in Australia.

    Episode credits

    Guest: Sue Keay

    Hosts: Zena Assaad and Liz Williams

    Producers: Robbie Slape, Zena Assaad, Liz Williams, and Martin Franklin

  • Our first episode of Season 2 features Julie Carpenter, author of Culture and Human-Robot Interaction in Militarised Spaces: A War Story. Julie is a social scientist based in San Francisco, and her work explores how humans experience emerging technologies. Listen in as we delve into the relationship between humans and robots, exploring everything from love and intimacy to the bonds humans form with robots deployed in military settings.

    Content warning and disclaimer: We talk about adult themes in this episode, so it may not be one to share with minors. We also produce this podcast for your education and enjoyment only. Please don't take anything in this episode as advice specific to your situation.

    Love the episode? We are so glad! Please help others discover our podcast by sharing this episode with friends, family, or colleagues. Listening on Apple Podcasts? You can also help us game the algorithms by giving us a great review.

    Episode Credits

    Guest: Julie Carpenter

    Hosts: Zena Assaad and Liz Williams

    Producers: Zena Assaad, Liz Williams, and Martin Franklin

  • Today, we’re honoured to be joined by Jenny Zhang – a software engineer and writer based in Canada. Her purpose-driven approach to technology development comes through clearly throughout our time with her, and (we think) offers up valuable lessons to anyone seeking to generate beneficial impact in the tech industry.

    Listen in as we talk to Jenny about her circuitous path to software development, what it means to be a full stack engineer, her considerations of privacy and safety in voice datasets, values and career trajectories, and more.

    This is the last episode for this season of the Algorithmic Futures podcast. Don't worry – we'll be back next year with more episodes, so stay tuned (and subscribe!).

    ***

    Credits

    Guest: Jenny Zhang

    Hosts: Zena Assaad and Liz Williams

    Producers: Zena Assaad and Liz Williams

    Sound editors: Cyril Buchard (with final edits by Liz Williams)

    ***

    To learn more about the podcast and our guests you can visit our website algorithmicfutures.org. And if you’ve enjoyed this, please like the podcast on Apple Podcasts and share your favourite episodes with others. It really helps us get the word out.

    And now for a short disclaimer: This podcast is for your education and enjoyment only. It is not intended to provide advice specific to your situation.

  • In this episode we talk with Caitlin Bentley, a Lecturer in AI Education at King’s College London. Caitlin’s research has predominantly engaged with questions around how technology systems can be designed and implemented in ways that promote social inclusion, empowerment and democratic participation. Tune in to hear about a theme of fierce women in history, the ups and downs of experimenting with educational pedagogies, intersectionality and its applications in technology research, and critical Black feminists across history.

    Please note: Caitlin briefly mentions encountering evidence of violence against women as part of her experiences in Morocco. This portion of the episode may not be appropriate for young listeners.

    Credits:

    Hosts: Zena Assaad and Liz Williams

    Guest: Caitlin Bentley

    Producers: Zena Assaad and Liz Williams

    Sound editor: Cyril Buchart

    Transcript

  • In this episode, co-hosts Zena and Liz share some of their experiences on creating podcast episodes in support of the Social Responsibility of Algorithms workshop series and discuss the potential futures of the Algorithmic Futures podcast. Along the way, they have a wide-ranging discussion covering everything from how assumptions get embedded in technologies deployed at scale to what it’s like being a woman working in a male-dominated STEM field.

    This episode was developed in support of the Algorithmic Futures Policy Lab, a collaboration between the Australian National University (ANU) Centre for European Studies, ANU School of Cybernetics, ANU Fenner School of Environment and Society, DIMACS at Rutgers University, and CNRS LAMSADE. The Algorithmic Futures Policy Lab is supported by an Erasmus+ Jean Monnet grant from the European Commission.

  • In today’s episode, we get a sense of what it’s like to manage a water system here in Australia with the help of our guest, Sam Yenamandra, the manager of Asset Performance at Murrumbidgee Irrigation. Murrumbidgee Irrigation has been going through something like a technological revolution for the past 20 years – driven by the need to deliver water more efficiently and reliably, and to provide greater flexibility to the customers they serve. As you’ll hear from Sam, data is at the heart of this revolution. And what they’re doing now is only the start of a massive – and global – change in the way we feed our planet.

    This episode was put together by Hannah Feldman and Joseph Guillaume, both of whom are part of the ANU Institute for Water Futures. The ANU Institute for Water Futures collaboration includes the ANU School of Cybernetics and ANU Fenner School of Environment and Society.

    This episode was prepared in support of the Algorithmic Futures Policy Lab, a collaboration between the Australian National University (ANU) Centre for European Studies, ANU School of Cybernetics, ANU Fenner School of Environment and Society, DIMACS at Rutgers University, and CNRS LAMSADE. The Algorithmic Futures Policy Lab is supported by an Erasmus+ Jean Monnet grant from the European Commission.

    Disclaimers

    The European Commission support for the Algorithmic Futures Policy Lab does not constitute an endorsement of the contents of the podcast, which reflect the views only of the speakers or writers, and the Commission cannot be held responsible for any use which may be made of the information contained therein.

    All information we present here is purely for your education and enjoyment and should not be taken as advice specific to your situation.

    Episode credits

    Hosts: Hannah Feldman, Joseph Guillaume, Zena Assaad, Liz Williams

    Producers: Hannah Feldman, Joseph Guillaume, Liz Williams

    Sound editors: Hannah Feldman, Cyril Burchard

    A special thanks to Nicolas Paget from CIRAD for feedback on the narrative.

    For the episode transcript, visit https://algorithmicfutures.org/episode-7/

  • In this episode, we explore the way technology scales in times of rapid change – such as the time in which we currently live. We’ll use a single piece of technology to shape our exploration: the South Australian Home Quarantine app. This was rolled out as a trial to enable home quarantine during the COVID-19 pandemic. The app was the focus of intense debate in Australia, and drew attention from commentators in the US, because it used facial recognition combined with GPS data to monitor participants in the program.

    We have a fantastic line-up of guests to help us explore this issue, including:

    Professor Peter Wells, Business and Sustainability, Cardiff University, UK

    Professor Angela Webster, Clinical Epidemiologist, Nephrologist and Transplant Physician, Sydney School of Public Health, University of Sydney

    Dr Diego Silva, Senior Lecturer in Bioethics, Sydney School of Public Health, University of Sydney

    Professor Mark Andrejevic, School of Media, Film, and Journalism at Monash University

    Associate Professor Gavin Smith, School of Sociology at the Australian National University

    Lizzie O’Shea – lawyer, writer, broadcaster and founder of Digital Rights Watch

    This episode was created by Amir Asadi, Ned Cooper, Memunat Ibrahim and Lorenn Ruster, who are part of the ANU School of Cybernetics 2021 PhD Cohort. Memunat and Lorenn narrate the story.

    This episode was inspired by work co-host Liz Williams has been doing on the Algorithmic Futures Policy Lab, a collaboration between the Australian National University (ANU) Centre for European Studies, ANU School of Cybernetics, ANU Fenner School of Environment and Society, DIMACS at Rutgers University, and CNRS LAMSADE. The Algorithmic Futures Policy Lab is supported by an Erasmus+ Jean Monnet grant from the European Commission.

    Disclaimers:

    The European Commission support for the Algorithmic Futures Policy Lab does not constitute an endorsement of the contents of the podcast or this webpage, which reflect the views only of the speakers or writers, and the Commission cannot be held responsible for any use which may be made of the information contained therein.

    All information we present here is purely for your education and enjoyment and should not be taken as advice specific to your situation.

    Episode Credits:

    Hosts:

    Memunat Ibrahim

    Lorenn Ruster

    Liz Williams

    Zena Assaad

    Guests:

    Peter Wells

    Angela Webster

    Diego Silva

    Gavin Smith

    Mark Andrejevic

    Lizzie O’Shea

    Producers:

    Amir Asadi

    Ned Cooper

    Memunat Ibrahim

    Lorenn Ruster

    Liz Williams

  • In this episode, Katherine Daniell and Flynn Shaw from the Australian National University (ANU) join us to talk about how governments in Australia and the EU approach artificial intelligence policy. This episode is designed to give attendees of the Social Responsibility of Algorithms 2022 workshop a brief overview of the approaches both sets of governments use to shape the present and future of artificial intelligence.

    Katherine is a professor in the ANU School of Cybernetics and Fenner School of Environment and Society, and Flynn is a researcher in the ANU School of Cybernetics. You can read more about Katherine here and Flynn here.

    This episode was inspired by work co-host Liz Williams has been doing on the Algorithmic Futures Policy Lab, a collaboration between the Australian National University (ANU) Centre for European Studies, ANU School of Cybernetics, ANU Fenner School of Environment and Society, DIMACS at Rutgers University, and CNRS LAMSADE. The Algorithmic Futures Policy Lab is supported by an Erasmus+ Jean Monnet grant from the European Commission.

    Disclaimers

    The European Commission support for the Algorithmic Futures Policy Lab does not constitute an endorsement of the contents of the podcast, which reflect the views only of the speakers or writers, and the Commission cannot be held responsible for any use which may be made of the information contained therein.

    All information we present here is purely for your education and enjoyment and should not be taken as advice specific to your situation.

    Episode Credits

    Hosts: Liz Williams and Zena Assaad

    Guests: Katherine Daniell and Flynn Shaw

    Producers / Writers: Katherine Daniell, Flynn Shaw, Liz Williams

    Art selection: Zena Assaad

  • In this episode, we chat with Dan Jermyn, Chief Decision Scientist for Commonwealth Bank of Australia, about an artificial intelligence-enabled digital system the bank uses to communicate with its 15 million+ customers. As you’ll hear in the episode, Dan has a track record of leading teams involved in creating groundbreaking data-driven tools for the financial sector in both the UK and Australia. We invited him to join us today to talk about the Customer Engagement Engine or CEE – a system that uses customer data and artificial intelligence to help the bank communicate with its customers across all of its platforms. CEE is fast becoming a fundamental part of how CBA thinks about engaging with its customers, and is one example of how digital infrastructure with the capacity to connect data to action has the potential to shape the future.

    This episode was inspired by work co-host Liz Williams has been doing on the Algorithmic Futures Policy Lab, a collaboration between the Australian National University (ANU) Centre for European Studies, ANU School of Cybernetics, ANU Fenner School of Environment and Society, DIMACS at Rutgers University, and CNRS LAMSADE. The Algorithmic Futures Policy Lab is supported by an Erasmus+ Jean Monnet grant from the European Commission.

    Disclaimers

    The European Commission support for the Algorithmic Futures Policy Lab does not constitute an endorsement of the contents of the podcast or this webpage, which reflect the views only of the speakers or writers, and the Commission cannot be held responsible for any use which may be made of the information contained therein.

    All information we present here is purely for your education and enjoyment and should not be taken as advice specific to your situation.

    Episode Credits

    Podcast Creator – Liz Williams

    Hosts – Zena Assaad, Liz Williams

    Guest – Dan Jermyn

    Producers – Zena Assaad, Liz Williams

    Assistant producer – Brenda Martin

    Episode artwork – Zena Assaad

    Audio editing – Liz Williams