Episodes

  • At LanguageTool, Bartmoss St Clair (Head of AI) is pioneering the use of Large Language Models (LLMs) for grammatical error correction (GEC), moving away from the tool's initial non-AI approach to create a system capable of catching and correcting errors across multiple languages.

    LanguageTool supports over 30 languages, has several million users, and over 4 million installations of its browser add-on, benefiting from a diverse team of employees from around the world.

    Episode Summary -

    - LanguageTool decided against using existing LLMs like GPT-3 or GPT-4, developing their own models instead for better cost, speed, and accuracy, and focusing on the balance between performance, speed, and cost.
    - The tool is designed to work with low latency for real-time applications, catering to a wide range of users including academics and businesses, with the aim of correcting grammar accurately without being intrusive.
    - Bartmoss discussed the nuanced approach to grammar correction: language evolves and user preferences vary, so strict grammatical rules must be balanced against user acceptability.
    - The company employs a mix of decoder and encoder-decoder models depending on the task, focusing on contextual understanding and on preserving the original meaning of text while correcting grammar.
    - A hybrid system combining rule-based algorithms with machine learning provides nuanced grammar corrections and explanations for the corrections, enhancing user understanding and trust.
    - LanguageTool is developing a generalized GEC system, incorporating legacy rules and machine learning for comprehensive error correction across various types of text.
    - Training models involves a mix of user data, expert-annotated data, and synthetic data, aiming to reflect real user error patterns for effective correction.
    - The company has built tools to benchmark GEC tasks, focusing on precision, recall, and user feedback to guide quality improvements.
    - The introduction of LLMs has expanded LanguageTool's capabilities, including rewriting and rephrasing, and improved error detection beyond simple grammatical rules.
    - Despite the higher costs associated with LLMs and hosting infrastructure, the investment is seen as worthwhile for improving user experience and conversion rates for premium products.
    - Bartmoss speculates on the future impact of LLMs on language evolution, noting their current influence and the importance of adapting to changes in language use over time.
    - LanguageTool prioritizes privacy and data security, avoiding external APIs for grammatical error correction and developing their systems in-house with open-source models.
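
    The precision/recall benchmarking mentioned above can be sketched in a few lines. This is a hypothetical, minimal illustration of span-level GEC scoring (the `gec_scores` function and the edit tuples are invented for this example, not LanguageTool's actual tooling):

```python
# Hypothetical sketch of span-level scoring for grammatical error
# correction (GEC). Edits are (start, end, replacement) tuples; a
# system edit counts as correct only if it exactly matches a
# gold-standard edit. Not LanguageTool's actual benchmark code.

def gec_scores(system_edits, gold_edits):
    system, gold = set(system_edits), set(gold_edits)
    true_pos = len(system & gold)
    precision = true_pos / len(system) if system else 1.0
    recall = true_pos / len(gold) if gold else 1.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# Two proposed edits, one of which matches the single gold edit.
p, r, f1 = gec_scores(
    system_edits=[(0, 2, "He"), (10, 14, "goes")],
    gold_edits=[(10, 14, "goes")],
)
# p = 0.5 (one of two proposals correct), r = 1.0 (the gold edit found)
```

    A high-precision system makes few spurious corrections (less intrusive), while high recall catches more real errors; the trade-off between the two is exactly the balance the episode discusses.
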
  • In this enlightening episode, Dr. Julia Stoyanovich delves into the world of responsible AI, exploring the ethical, societal, and technological implications of AI systems. She underscores the importance of global regulations, human-centric decision-making, and the proactive management of biases and risks associated with AI deployment. Through her expert lens, Dr. Stoyanovich advocates for a future where AI is not only innovative but also equitable, transparent, and aligned with human values.

    Julia is an Institute Associate Professor at NYU, in both the Tandon School of Engineering and the Center for Data Science. In addition, she is Director of NYU's Center for Responsible AI. Her research focuses on responsible data management, fairness, diversity, transparency, and data protection in all stages of the data science lifecycle.

    Episode Summary -

    - The definition of Responsible AI
    - An example of ethical AI in the medical world - Fast MRI technology
    - Fairness and diversity in AI
    - The role of regulation - what it can and can't do
    - Transparency, bias in AI models, and data protection
    - The dangers of Gen AI hype and problematic AI narratives from the tech industry
    - The importance of humans in ensuring ethical development
    - Why "Responsible AI" is actually a bit of a misleading term
    - What Data & AI leaders can do to practise Responsible AI

  • Luis Moreira-Matias is Senior Director of Artificial Intelligence at sennder, Europe’s leading digital freight forwarder. At sennder, Luis founded sennAI: sennder’s organization that oversees the creation (from R&D to real-world productization) of proprietary AI technology for the road logistics industry.

    During his 15-year career, Luis has led 50+ FTEs across 4+ organisations to develop award-winning ML solutions addressing real-world problems in fields such as e-commerce, travel, logistics, and finance.

    Luis holds a Ph.D. in Machine Learning from the U. Porto, Portugal. He has a world-class academic track record, with high-impact publications at top-tier venues in ML/AI fundamentals, 5 patents, and multiple keynotes worldwide, ranging from Brisbane (Australia) to Las Palmas (Spain).


    In this episode, Tarush Aggarwal, formerly of Salesforce and WeWork, is back on the podcast to discuss the evolution of the semantic layer and how it can help practitioners get results from LLMs. We also discuss how smaller ELMs (expert language models) might be the future for consistent, reliable outputs from generative AI, and the impact of all of this on traditional BI tools.

    In this episode Patrick McQuillan shares his innovative Biological Model - a concept you can use to enhance data outcomes in large enterprises. The concept holds that the best way to design a data strategy is to align it closely with a biological system.

    He discusses the power of centralized information, the importance of data governance, and the necessity of a common performance narrative across an organization.

    Episode Summary -

    - Biological Model Concept

    - Centralized vs. Decentralized Data

    - Data Collection and Maturity

    - Horizontal translation layer 

    - Partnership with vertical leaders

    - Curated data layers

    - Data dictionary for consistency

    - Focusing on vital metrics

    - Data Flow in Organizations

    - Biological Model Governance

    - Overcoming Inconsistency and Inaccuracy

    In this episode Heidi Hurst returns to talk to us about how, in her current role at Pachama, she is using the power of machine learning to fight climate change. She discusses her work in measuring the capacity of existing forests and reforestation projects using satellite imagery.

    Episode Summary

    1. The importance of carbon credits verification in mitigating climate change

    2. How Pachama is using machine learning and satellite imagery to verify carbon projects

    3. Three types of carbon projects: avoided deforestation, reforestation, and improved forest management

    4. Challenges in using satellite imagery to measure the capacity of existing forests

    5. The role of multispectral imaging in measuring density of forests

    6. Challenges in collecting data from dense rainforests and weather obstructions

    7. The impact of machine learning on scaling up carbon verification

    8. Advancements in the field of satellite imaging, particularly in small satellite constellations
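
    One standard way multispectral bands are used to measure vegetation is the Normalized Difference Vegetation Index (NDVI). The sketch below is generic remote-sensing practice, not Pachama's actual pipeline, and the reflectance values are invented for illustration:

```python
# Illustrative computation of NDVI (Normalized Difference Vegetation
# Index) from multispectral reflectance values. Generic remote-sensing
# practice, not Pachama's pipeline; the sample values are invented.

def ndvi(nir, red):
    """NDVI for one pixel: (NIR - Red) / (NIR + Red).

    Values near +1 indicate dense, healthy vegetation; values near
    zero or below indicate bare soil, water, or clouds.
    """
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)

# Dense forest reflects strongly in near-infrared and weakly in red.
forest_pixel = ndvi(nir=0.50, red=0.05)  # close to +1
water_pixel = ndvi(nir=0.02, red=0.04)   # negative
```

    Cloud cover over dense rainforest, mentioned in point 6, is a problem precisely because optical bands like these cannot see through weather obstructions.
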

  • Ágnes Horvát is an Assistant Professor in Communication and Computer Science at Northwestern University. Her work focuses on understanding how online networks induce biased information production, sharing and processing across digital platforms. 

    - The new post-normal era for science - having an awareness of the context and values that impact scientific research
    - Where is science communication in relation to digital platforms? Scholars work hard on discovering scientific findings, but information doesn't always reach the public appropriately.
    - How to communicate scientific research - it's not just about communicating with scientists and general audiences; news needs to reach policymakers and governments too for real change.
    - The production of scientific research has exploded recently thanks to decision-making demands, and the pandemic had a lot to do with this: scientists were under pressure to carry out research quickly, at the expense of quality.
    - Misinformation can have detrimental consequences - even leading to vaccine hesitancy in some communities.
    - The surprising effect of retracting papers - papers that are later retracted tend to receive more engagement before being withdrawn.
    - Why are paper retractions on the rise? Again, the recent pandemic has caused an increase in retractions.
    - Is social media helping or hindering science research? While the platforms help spread real news, they also help spread false information.
    - As long as you have quality data and robust trends, you will identify that trend regardless of the method.
    - Reducing the problem of miscommunication - with whom does the responsibility lie?

    Modern data infrastructures and platforms store huge amounts of multidimensional data. But data pipelines frequently break, and a machine learning algorithm's performance is only as good as the quality and reliability of the data itself.

    In this episode we are joined by Lior Gavish and Ryan Kearns of Monte Carlo, to talk about how the new concept of Data Observability is advancing Data Reliability and Data Quality at Scale.

    Episode Summary

    - An overview of Data Reliability/Quality and why it is so critical for organisations
    - The limitations of traditional approaches in the area of Data Reliability
    - Data observability and why it is different to traditional approaches to Data Quality
    - The 5 Pillars of Data Observability
    - How to improve data reliability/quality at scale and generate trust in data with stakeholders
    - How observability can lead to better outcomes for Data Science and engineering teams
    - Examples of data observability use cases in industry
    - An overview of O'Reilly's upcoming book, The Fundamentals of Data Quality
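
    Two of the checks data observability implies, freshness and volume, can be sketched as simple assertions over table metadata. This is only a hypothetical illustration (the function names and thresholds are invented, and this is not Monte Carlo's product code):

```python
# Hypothetical sketch of two data-observability checks (freshness and
# volume) on a table's metadata. Real platforms automate and scale
# checks like these; the names and thresholds here are invented.
from datetime import datetime, timedelta, timezone

def check_freshness(last_loaded_at, max_age_hours=24):
    """Pass if the table has been updated within the allowed window."""
    age = datetime.now(timezone.utc) - last_loaded_at
    return age <= timedelta(hours=max_age_hours)

def check_volume(row_count, history, tolerance=0.5):
    """Pass if today's row count is close to its historical baseline."""
    baseline = sum(history) / len(history)
    return abs(row_count - baseline) <= tolerance * baseline

fresh = check_freshness(datetime.now(timezone.utc) - timedelta(hours=2))
volume_ok = check_volume(row_count=950, history=[1000, 1020, 980])
```

    In practice, observability platforms learn such thresholds from historical metadata rather than hard-coding them, which is what makes the approach workable at scale.
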
  • In this episode we are joined by Julia Stoyanovich from NYU, to talk about her work into how AI is being used in the hiring process.

    Whether you are responsible for hiring on behalf of a business or are a job seeker, you will find this podcast very interesting, but for very different reasons.

    Episode Summary

    - Algorithmic decision making in the hiring process - what does that mean for businesses and job seekers?
    - The hiring process - the funnel effect
    - The lack of public disclosure about the use of algorithmic tools as part of the talent acquisition pipeline
    - Are job seekers being unfairly screened out of the hiring process?
    - How AI-based implementations of psychometric instruments are used today
    - Is it possible to measure a person's personality based on data alone?
    - Do these systems remove bias and discrimination from the hiring process?
    - Testing the stability and consistency of these algorithmic systems
    - Vendors of systems and their lack of testing / recognising the issues
    - Are new laws needed so the hiring process is fairer and more transparent?
    - What does the future of hiring look like - fewer AI systems and more human intervention?
  • In this episode we are joined by Professor Maurizio Porfiri from NYU, to talk about his latest academic research which is using data science to uncover why sales of guns in the USA increase after a mass shooting event.  

    His interest and research were born of a very personal experience 14 years ago, when he lived through the mass shooting event at Virginia Tech, where he was studying.

    - Researching Complex Systems
    - The Virginia Tech mass shooting event and its impact on Maurizio
    - What is the relationship between mass shooting events and the purchase of guns?
    - Analysis of time series data - 70 mass shootings in around 20 years
    - Can media coverage of mass shootings shape public opinion, thereby influencing firearm acquisition?
    - Examining the correlation between three distinct datasets
    - What are the causes of increased gun sales in the aftermath of mass shooting events?
    - Differences in the data at State level V National level
    - Researching the complex firearm ecosystem with all its pieces - prevalence, violence and regulation
    In this episode we are joined by Perry Marshall to talk about his latest scientific paper, entitled "Biology Transcends the Limits of Computation". We also discuss his $10 million Evolution 2.0 Science Prize, currently the largest science prize in the world.

    His paper pushes the boundaries of evolutionary biology, and his science prize is driving some truly fascinating and thought-provoking implications for the development of strong AI.

  • In this episode we are joined by Arnon Houri Yafin, an Israeli entrepreneur who is the founder of a company called Zzapp Malaria, which recently won the AI XPRIZE sponsored by IBM Watson.

    Their work in using AI to eliminate malaria in Africa is both interesting and inspirational.

    Episode Summary

    - Moving from malaria control to malaria elimination
    - How Zzapp Malaria started and how investors were attracted
    - The use of drones, satellite imagery, topography, rain/humidity data and a new mobile app
    - The development of small neural networks to identify the potential for small water bodies
    - How IBM Watson assisted with funding but also with the machine learning models
    - The use of biological agents rather than chemicals to treat stagnant water bodies
    - The NGO project in Ghana - reduced mosquitos by 60% in 100 days
    - The latest and biggest operation to date in São Tomé and Príncipe
    - The direct impact on a country's GDP as a result of eliminating malaria
    - How malaria and poverty are interconnected
    - Winning the $3 million XPRIZE and how the money will be used
    - How you can help by supporting Malaria No More and Only Nets
  • In this episode we are joined by the Director of AI and Data Operations at XPRIZE whose career path into the world of AI is fascinating.  Neama Dadkhahnikoo shares his journey from his early days at Boeing back in 2005, through start up ventures, Techspert and Caregivers Direct, and re-training right through to the present day at XPRIZE.

    He reveals how anyone has the potential to make a real difference in using AI to help solve real world problems.

    - The history of challenge prize competitions and how the British Monarchy were involved
    - Challenge prizes as philanthropy with capitalism thrown in
    - How a clockmaker determined longitude to win the first ever prize
    - How industries are born out of successful challenge prize competitions
    - The impact of XPRIZE on the commercial space industry
    - The ethos of XPRIZE - a global positive future movement
    - How the challenges are chosen
    - The IBM Watson AI XPRIZE, a $5 million challenge for teams to use AI for good
    - How to monitor the after-prize impact
    - Three AI XPRIZE finalists - Aifred Health, Marinus Analytics and ZzappMalaria
    - How was AI defined for the challenge?
    - How to use and get involved with AI for good
  • In this episode we are joined by an industry veteran who has worked for some of the biggest names in the enterprise Data world.  Tarush Aggarwal shares his journey from his early days at Salesforce and then WeWork, right through to the present day.

    He reveals how to set Data Science & Engineering up for success in both small and large organisations.

    Episode Summary

    - How Salesforce leveraged data to grow their company fast
    - How Marc Benioff ensured his vision was executed effectively at Salesforce.com
    - What it was like to join WeWork at the start of their data function
    - The differences between how WeWork and Salesforce.com leveraged data
    - How to structure a data function - centralised V decentralised V hybrid model
    - How Spotify structured their data team to scale the business
    - Can a Fortune 500 business make the hybrid model work?
    - The fundamentals for a new start-up - how to get building a data function right
    - Product company versus service delivery company - how does that affect the data function structure?
    - What's next for data privacy?
    - The 5x Company - entry-level training program, what it is and who it's for
    - Data Mastermind groups - are they the way forward?
  • In this episode we discuss the rapidly developing field of Satellite Imaging.

    Our guests on this show are Heidi Hurst & Jerry He.  

    They are two remarkable industry Data Scientists with a strong academic pedigree and experience in the field of Satellite Image Processing.  Heidi is based in Washington DC and Jerry is based in New York.

    Join us as they discuss their journey into Satellite Imaging and share with us the latest developments in this fascinating and evolving area of Data Science.

    Episode Summary

    - Why is satellite image processing such an exciting field?
    - What data sources is satellite image data based on?
    - What are the challenges in using satellite image data?
    - Sensors used in satellite imaging
    - Methods used in satellite imaging - Image Processing, Deep Learning, CNNs
    - The socio-economic applications
    - Industry applications for Satellite Imaging - Agriculture, Supply Chain monitoring, Sales Prediction, Insurance
    - The Future of Satellite Imaging

    RESOURCES:

    Cool Visual - One Hour of active Satellites orbiting Earth:

    https://www.reddit.com/r/dataisbeautiful/comments/j7pj62/oc_one_hour_of_active_satellites_orbiting_earth/?utm_source=share&utm_medium=ios_app&utm_name=iossmf

     

    DOTA - https://captain-whu.github.io/DOTA/ - Open dataset for object detection in overhead imagery

     

    COWC - https://gdo152.llnl.gov/cowc/ - Cars Overhead With Context - a detection dataset specifically for car-counting algorithms

     

    xView - http://xviewdataset.org/ - dataset put together by the National Geospatial Intelligence Agency for an object detection challenge, including some particularly rare classes

  • Every so often on the podcast we will bring you something a little bit different.

    This episode is part two of our conversation with Esports Legends TLO & MaNa. They are professional Starcraft II players, and they tell us the story of what it was like to compete against Google DeepMind's AlphaStar AI agent.

    This is a fascinating discussion about the technical capability of AI agents and about the psychology involved when Humans take on the machines.

    Episode Summary

    - The live event rematch against AlphaStar
    - The game plan for the rematch
    - Trying to match AlphaStar
    - The importance of the human aspect to the future of Starcraft II
    - What TLO & MaNa learned from AlphaStar
    - The importance of human intervention to prevent mistakes from the AI
    - What impact could AlphaStar have on improving Esports players?
    - Did AlphaStar show any signs of being able to improvise?
    - Mistakes AlphaStar made
    - Why limiting the abilities of an AI might make it smarter
    - Can AI develop intuition in the future?

    Resources:

    Deepmind Alphastar Videos:

    https://deepmind.com/research/open-source/alphastar-resources

    TLO Profile: https://liquipedia.net/starcraft2/TLO

    MaNa Profile: https://liquipedia.net/starcraft2/MaNa

  • Every so often on the podcast we will bring you something that is a little bit different.

    This episode is part one of a conversation with Esports Legends TLO & MaNa. They are professional Starcraft II players, and they tell us the story of what it was like to compete against Google DeepMind's AlphaStar AI agent.

    This is a fascinating discussion about the technical capability of AI agents and about the psychology involved when Humans take on the machines.

    Episode Summary

    - A typical day for an esports athlete
    - The similarities and differences between Esports and normal sports
    - The importance of actions and screens per minute for high performance
    - The role of gaming in driving the development of AI
    - The evolution of gaming AI agents
    - The challenges in competing against a black box
    - Exploiting the game-playing tendencies of AlphaStar
    - The psychological pressure of playing against AlphaStar
    - Underestimating the ability of AlphaStar

    Resources:

    Deepmind Alphastar Videos:

    https://deepmind.com/research/open-source/alphastar-resources

    TLO Profile: https://liquipedia.net/starcraft2/TLO

    MaNa Profile: https://liquipedia.net/starcraft2/MaNa

  • This is Part Two of our conversation about Deep Fakes with two experts in their respective fields.

    We talk to Dr Eileen Culloty of the Institute for Future Media and Journalism at Dublin City University and Dr Stephane Lathuiliere of Telecom Paris.

    EPISODE SUMMARY:

    - The implications of disinformation on the media
    - The role of fact checking
    - How to deal with Deep Fakes
    - What Adobe and Microsoft are doing about Deep Fakes
    - The future role of GANs in detecting Deep Fakes

    Resources:

    Video of First Order Motion Model For Video Animation:

     https://www.youtube.com/watch?v=u-0cQ-grXBQ&ab_channel=AliaksandrSiarohin

     PROVENANCE program: https://fujomedia.eu/provenance/

  • This is Part one of our conversation about Deep Fakes with two experts in their respective fields.

    We talk to Dr Eileen Culloty of the Institute for Future Media and Journalism at Dublin City University and Dr Stephane Lathuiliere of Telecom Paris.

    Stephane reveals what is possible and what is not possible technically with current Deep Fakes Technology.

    Eileen helps us cut through the hype about Deep Fakes and tells us about their real world social and political impact.

    EPISODE SUMMARY:

    - A Short History of Media Manipulation
    - The breakthroughs in Deep Learning enabling current Deep Fake technology
    - The role of increased data availability in generating Deep Fakes
    - Why Cheap Fakes are still a bigger problem than Deep Fakes
    - How the First Order Motion Model has advanced the field of Image Animation
    - Positive use cases of Deep Fake technology
    - The Future of Image Animation/Deep Fake technology
    - Challenges for media and journalism in the age of Deep Fake technology
    - The societal impact of disinformation and fake news content
    - Deep Fakes V Cheap Fakes during the COVID pandemic

    RESOURCES:

    Video of First Order Motion Model For Video Animation:

     https://www.youtube.com/watch?v=u-0cQ-grXBQ&ab_channel=AliaksandrSiarohin

     

    PROVENANCE program:  https://fujomedia.eu/provenance/

  • This is Part 2 of our conversation with Professor Philipp Koehn of Johns Hopkins University.  Professor Koehn is one of the world’s leading experts in the field of Machine Translation & NLP.  

    In this episode we delve into commercial applications of machine translation and the open-source tools available, and look at what to expect from the field in the future.

    Episode Summary:

     

    Typical datasets used for training models
    The role of infrastructure and technology in Machine Translation
    How academic research in Machine Translation has manifested in industry applications
    Overview of what's available in open-source tools for Machine Translation
    The Future of Machine Translation - can it pass a Turing test?

     

    Resources:

     

    Philipp Koehn's latest book, Neural Machine Translation - Amazon link:

     

    https://www.amazon.com/Neural-Machine-Translation-Philipp-Koehn/dp/1108497322

     

    Omniscien Technologies - Leading Enterprise Provider of machine translation services:

     

    https://omniscien.com/

     

    Open Source tools:

     

    - Fairseq https://fairseq.readthedocs.io/en/latest/

    - Marian https://marian-nmt.github.io/

    - OpenNMT https://opennmt.net/

    - Sockeye https://awslabs.github.io/sockeye/

     

    Translated texts (parallel data) for training:

     

    - OPUS http://opus.nlpl.eu/

    - Paracrawl https://paracrawl.eu/

     

    Two papers mentioned about excessive use of computing power to train NLP models:

     

    - GPT-3 https://arxiv.org/abs/2005.14165

    - RoBERTa https://arxiv.org/abs/1907.11692