Episodes

  • In 2022, Lin Qiao decided to leave Meta, where she was managing several hundred engineers, to start Fireworks AI. In this episode, we sit down with Lin for a deep dive into her work, starting with her leadership on PyTorch, now one of the most influential machine learning frameworks, powering research and production at scale across the AI industry.

    Now at the helm of Fireworks AI, Lin is leading a new wave in generative AI infrastructure, simplifying model deployment and optimizing performance to empower all developers building with Gen AI technologies.

    We dive into the technical core of Fireworks AI, uncovering their innovative strategies for model optimization, Function Calling in agentic development, and low-level breakthroughs at the GPU and CUDA layers.

    Fireworks AI

    Website - https://fireworks.ai

    X/Twitter - https://twitter.com/FireworksAI_HQ

    Lin Qiao

    LinkedIn - https://www.linkedin.com/in/lin-qiao-22248b4

    X/Twitter - https://twitter.com/lqiao

    FIRSTMARK

    Website - https://firstmark.com

    X/Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    X/Twitter - https://twitter.com/mattturck

    (00:00) Intro

    (01:20) What is Fireworks AI?

    (02:47) What is PyTorch?

    (12:50) Traditional ML vs GenAI

    (14:54) AI’s enterprise transformation

    (16:16) From Meta to Fireworks

    (19:39) Simplifying AI infrastructure

    (20:41) How Fireworks clients use GenAI

    (22:02) How many models are powered by Fireworks

    (30:09) LLM partitioning

    (34:43) Real-time vs pre-set search

    (36:56) Reinforcement learning

    (38:56) Function calling

    (44:23) Low-level architecture overview

    (45:47) Cloud GPUs & hardware support

    (47:16) VPC vs on-prem vs local deployment

    (49:50) Decreasing inference costs and its business implications

    (52:46) Fireworks roadmap

    (55:03) AI future predictions

  • Retrieval-Augmented Generation (RAG) has become a dominant architecture in modern AI deployments, and in this episode, we sit down with Douwe Kiela, who co-authored the original RAG paper in 2020. Douwe is now the founder and CEO of Contextual AI, a startup focusing on helping enterprises deploy RAG as an agentic system.

    We start the conversation with Douwe's thoughts on the very latest advancements in Generative AI, including GPT-4.5, DeepSeek and the exciting paradigm shift towards test time compute, as well as the US-China rivalry in AI.

    We then dive into RAG: definition, origin story and core architecture. Douwe explains the evolution of RAG into RAG 2.0 and Agentic RAG, emphasizing the importance of self-learning systems over individual models and the role of synthetic data. We close with the challenges and opportunities of deploying AI in real-world enterprise, discussing the balance between accuracy and the inherent inaccuracies of AI systems.

    Contextual AI

    Website - https://contextual.ai

    X/Twitter - https://x.com/ContextualAI

    Douwe Kiela

    LinkedIn - https://www.linkedin.com/in/douwekiela

    X/Twitter - https://x.com/douwekiela

    FIRSTMARK

    Website - https://firstmark.com

    X/Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    X/Twitter - https://twitter.com/mattturck

    (00:00) Intro

    (01:57) Thoughts on the latest AI models: GPT-4.5, Sonnet 3.7, Grok 3

    (04:50) The test time compute paradigm shift

    (06:47) Unsupervised learning vs reasoning: a false dichotomy

    (07:30) The significance of DeepSeek

    (10:29) USA vs. China: is the AI war overblown?

    (12:19) Controlling AI hallucinations at the model level

    (13:51) RAG: definition and origin story

    (18:46) Why the Transformers paper initially felt underwhelming

    (20:41) The core architecture of RAG

    (26:06) RAG vs. fine-tuning vs. long context windows

    (30:53) RAG 2.0: Thinking in systems and not models

    (31:28) Data extraction and data curation for RAG

    (35:59) Contextual Language Models (CLMs)

    (38:04) Finetuning and alignment techniques: GRIT, KTO, LENS

    (40:40) Agentic RAG

    (41:36) General vs. specialized RAG agents

    (44:35) Synthetic data in AI

    (45:51) Deploying AI in the enterprise

    (48:07) How tolerant are enterprises to AI hallucinations?

    (49:35) The future of Contextual AI

  • In this episode, we dive into how AI is transforming video editing with Gaurav Misra, the CEO of Captions. Launched in New York in 2021, Captions already empowers over 10 million creators worldwide, leveraging AI to make video production as simple as clicking a button.

    Discover the strategic framework that led to the inception of Captions, and learn how the founders identified societal changes and technological advancements to build a groundbreaking company. We explore the challenges and opportunities of building an AI product for video editing, including how Captions is outpacing traditional content production workflows.

    Gaurav shares insights into the future of video editing, the role of AI in democratizing video production, and the unique approach Captions takes to differentiate itself from industry giants like Adobe and CapCut.

    Captions

    Website - https://www.captions.ai

    X/Twitter - https://x.com/getcaptionsapp

    Gaurav Misra

    LinkedIn - https://www.linkedin.com/in/gamisra1

    X/Twitter - https://x.com/gmharhar

    FIRSTMARK

    Website - https://firstmark.com

    X/Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    X/Twitter - https://twitter.com/mattturck

    (00:00) Intro

    (01:30) What is Captions?

    (03:43) How did Captions start?

    (08:25) The strategy behind launching Captions

    (12:32) How is Captions different from other editing tools?

    (14:13) How does it compare to CapCut?

    (18:22) Who is the typical Captions user?

    (20:13) Why ‘Captions’?

    (23:47) Captions’ product suite for production and editing

    (26:37) AI models powering Captions

    (36:22) AI lipsync

    (38:49) Personalized fine-tuned models for creators?

    (39:38) Building models vs. building wrappers

    (43:09) Cloud AI vs. Local AI

    (45:19) Optimizing for low latency

    (48:07) AI/ML stack at Captions

    (51:10) “Hallucinations are a feature, not a bug”

    (53:19) Prompt engineering

    (54:12) Have we passed the uncanny valley for AI avatars?

    (01:01:47) The impact of deepfakes

    (01:04:33) CapCut ban and its effects

    (01:05:05) Evolving from paid to freemium

    (01:07:42) Building a company on foundation models

    (01:09:01) Running an AI company in New York

  • AI customer service agents are quickly replacing the often clunky AI chatbots of years past, and revolutionizing how we all interact with customer service. In this episode, we dive into this rapid transformation with Mike Murchison, CEO of Ada, a fast-growing leader in the space.

    Mike shares how harnessing the power of several Generative AI models enables Ada to automate up to 83% of customer interactions, providing a seamless and empathetic service that rivals, and will soon surpass, human agents. We explore the challenges and triumphs of deploying AI in customer service in this new era, from the intricacies of model orchestration to the importance of resolution and empathy. Mike also teases the future of agentic AI in the enterprise, where AI agents collaborate across departments to innovate and improve products.

    Ada

    Website - https://www.ada.cx

    X/Twitter - https://x.com/ada_cx

    Mike Murchison

    LinkedIn - https://www.linkedin.com/in/mikemurchison

    X/Twitter - https://x.com/mimurchison

    FIRSTMARK

    Website - https://firstmark.com

    X/Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    X/Twitter - https://twitter.com/mattturck

    (00:00) Intro

    (02:27) Why is customer service a perfect use case for AI?

    (03:36) Why didn’t foundation models replace AI “thin wrappers” out of the box?

    (05:27) What is Ada?

    (10:41) Reasoning engine, model orchestration, instruction following, routing

    (15:45) Hybrid systems, finetuning, customization

    (18:28) Prompt engineering, observability, self-improvement

    (22:07) RAG (Retrieval-Augmented Generation) and AI as a judge

    (23:06) Guardrails and security

    (24:33) Should we expect perfection from AI?

    (26:14) Measuring “resolution”

    (29:29) What actions can Ada AI Agents take?

    (32:12) Authentication and personalization

    (35:09) Handoff vs human delegation

    (38:12) ACX (AI Customer Experience) and the future of customer service professionals

    (42:13) Leveraging analytics and customer support data

    (45:54) AI agents for cross-selling and upselling

    (48:25) Traditional AI chatbots vs the new generation of AI Agents

    (51:24) Emotion, empathy, personality

    (54:56) Transparency and AI improvement

    (57:58) Managing AI: the measure-coach-improve loop

    (1:00:15) Ada Voice and Email

    (1:06:25) Future predictions for AI

    (1:07:56) Multi-agent collaboration

  • Replit is one of the most visible and exciting companies reshaping how we approach software and application development in the Generative AI era. In this episode, we sit down with its CEO, Amjad Masad, for an in-depth discussion on all things AI, agents, and software.

    Amjad shares the journey of building Replit, from its humble beginnings as a student side project to becoming a major player in Generative AI today. We also discuss the challenges of launching a startup, the multiple attempts to get into Y Combinator, the pivotal moment when Paul Graham recognized Replit’s potential, and the early bet on integrating AI and machine learning into the core of Replit.

    Amjad dives into the evolving landscape of AI and machine learning, sharing how these technologies are reshaping software development. We explore the concept of coding agents and the impact of Replit’s latest innovation, Replit Agent, on the software creation process. Additionally, Amjad reflects on his time at Codecademy and Facebook, where he worked on groundbreaking projects like React Native, and how those experiences shaped his entrepreneurial journey. We end with Amjad's view on techno-optimism and his belief in an energized Silicon Valley.

    Replit

    Website - https://replit.com

    X/Twitter - https://x.com/Replit

    Amjad Masad

    LinkedIn - https://www.linkedin.com/in/amjadmasad

    X/Twitter - https://x.com/amasad

    FIRSTMARK

    Website - https://firstmark.com

    X/Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    X/Twitter - https://twitter.com/mattturck

    (00:00) Intro

    (01:36) The origins of Replit

    (15:54) Amjad’s decision to restart Replit

    (19:00) Joining Y Combinator

    (30:06) AI and ML at Replit

    (32:31) Explain Code

    (39:09) Replit Agent

    (52:10) Balancing usability for both developers and non-technical users

    (53:22) Sonnet 3.5 stack

    (58:43) The challenge of AI evaluation

    (01:00:02) ACI vs. HCI

    (01:05:02) Will AI replace software development?

    (01:10:15) If anyone can build an app with Replit, what’s the next bottleneck?

    (01:14:31) The future of SaaS in an AI-driven world

    (01:18:37) Why Amjad embraces techno-optimism

    (01:20:36) Defining civilizationism

    (01:23:11) Amjad’s perspective on government’s role

  • In this episode, we explore the cutting-edge world of data infrastructure with Justin Borgman, CEO of Starburst — a company transforming data analytics through its open-source project, Trino, and empowering industry giants like Netflix, Airbnb, and LinkedIn.

    Justin takes us through Starburst’s journey from a Yale University spin-out to a leading force in data innovation, discussing the shift from data lakes to lakehouses, the rise of open formats like Iceberg as the future of data storage, and the role of AI in modern data applications. We also dive into how Starburst is staying ahead by balancing on-prem and cloud offerings while emphasizing the value of optionality in a rapidly evolving, data-driven landscape.

    Starburst Data

    Website - https://www.starburst.io

    X/Twitter - https://x.com/starburstdata

    Justin Borgman

    LinkedIn - https://www.linkedin.com/in/justinborgman

    X/Twitter - https://x.com/justinborgman

    FIRSTMARK

    Website - https://firstmark.com

    X/Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    X/Twitter - https://twitter.com/mattturck

    (00:00) Intro

    (01:32) What is Starburst?

    (02:32) Understanding the data layer

    (05:06) Justin Borgman’s story before Starburst

    (10:41) The evolution of Presto into Trino

    (13:20) Lakehouse vs. data lake vs. data warehouse

    (22:06) Why Starburst backed the lakehouse from the start

    (23:20) Starburst Enterprise

    (27:31) Cloud vs. on-prem

    (29:10) Starburst Galaxy

    (31:23) Dell Data Lakehouse

    (32:13) Starburst’s data architecture explained

    (38:30) The rise of data apps

    (38:54) Starburst AML

    (40:41) “We actually built the Galaxy twice”

    (43:13) Managing multiple products at scale

    (45:14) “We founded the company on the idea of optionality”

    (47:20) Iceberg

    (48:01) How open-source acquisitions work

    (51:39) Why Snowflake embraced Iceberg

    (53:15) Data mesh

    (55:31) AI at Starburst

    (57:16) Key takeaways from go-to-market strategies

    (01:01:18) Lessons from the Dell partnership

    (01:04:40) Predictions for 2025

  • As AI takes over the world, data is more than ever "the new oil", and data engineering is the discipline that makes data usable behind the scenes. In this episode, we dive deep into the present and future of data engineering with Ben Rogojan, also known as the Seattle Data Guy. A seasoned data engineering consultant, Ben has built a strong brand and reputation in the field, with over 100k followers on platforms like YouTube and Substack.

    We started the conversation with a deep dive into data engineering as a profession: what do data engineers actually do? What is the career path, and what should aspiring data engineers learn?

    We then explored some of the biggest stories of 2024 (including the rise of Iceberg) and made some predictions for 2025, touching on key topics everyone in data engineering should be familiar with: the integration of AI in data workflows, the potential for automation, and why SQL isn't going anywhere. Discover how companies are navigating the complexities of data infrastructure, the rise of open table formats like Iceberg, and the ongoing battle between data giants like Snowflake and Databricks.

    Ben Rogojan

    Website - https://www.theseattledataguy.com

    Newsletter - https://seattledataguy.substack.com

    LinkedIn - https://www.linkedin.com/company/seattle-data-guy

    X/Twitter - https://x.com/seattledataguy

    FIRSTMARK

    Website - https://firstmark.com

    X/Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    X/Twitter - https://twitter.com/mattturck

    (00:00) Intro

    (01:20) Why 2025 will be huge for data engineering

    (02:55) The story of the Seattle Data Guy

    (06:51) What exactly is data engineering?

    (07:41) Data, AI, and ML: where do they overlap?

    (09:23) Data analyst vs. data engineer vs. data scientist: what’s the difference?

    (11:20) A day in the life of a data engineer

    (12:58) Data engineering: Silicon Valley vs. everywhere else

    (15:27) How to become an AI engineer

    (28:46) Will AI replace AI engineers?

    (33:42) Why is the data world so complex?

    (36:53) The functional consolidation of the data world

    (38:34) Big data stories from 2024

    (39:28) Why Iceberg is a game-changer

    (46:02) How startups manage data in their early days

    (48:44) Seattle Data Guy’s favorite tools

    (50:09) Bold predictions for 2025

  • In this episode, we dive deep into the world of AI engineering with Chip Huyen, author of the excellent, newly released book "AI Engineering: Building Applications with Foundation Models".

    We explore the nuances of AI engineering, distinguishing it from traditional machine learning, discuss how foundation models make it possible for anyone to build AI applications, and cover many other topics, including the challenges of AI evaluation, the intricacies of the generative AI stack, why prompt engineering is underrated, why the rumors of the death of RAG are greatly exaggerated, and the latest progress in AI agents.

    Book: https://www.oreilly.com/library/view/ai-engineering/9781098166298/

    Chip Huyen

    Website - https://huyenchip.com

    LinkedIn - https://www.linkedin.com/in/chiphuyen

    Twitter/X - https://x.com/chipro

    FIRSTMARK

    Website - https://firstmark.com

    Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    Twitter - https://twitter.com/mattturck

    (00:00) Intro

    (02:45) What is new about AI engineering?

    (06:11) The product-first approach to building AI applications

    (07:38) Are AI engineering and ML engineering two separate professions?

    (11:00) The Generative AI stack

    (13:00) Why are language models able to scale?

    (14:45) Auto-regressive vs. masked models

    (16:46) Supervised vs. unsupervised vs. self-supervised

    (18:56) Why does model scale matter?

    (20:40) Mixture of Experts

    (24:20) Pre-training vs. post-training

    (28:43) Sampling

    (32:14) Evaluation as a key to AI adoption

    (36:03) Entropy

    (40:05) Evaluating AI systems

    (43:21) AI as a judge

    (46:49) Why prompt engineering is underrated

    (49:38) In-context learning

    (51:46) Few-shot learning and zero-shot learning

    (52:57) Defensive prompt engineering

    (55:29) User prompt vs. system prompt

    (57:07) Why RAG is here to stay

    (01:00:31) Defining AI agents

    (01:04:04) AI agent planning

    (01:08:32) Training data as a bottleneck to agent planning

  • In this episode, we sit down with Florian Douetteau, co-founder and CEO of Dataiku, a global category leader in enterprise AI and a fixture on the Forbes Cloud 100 list and in the Gartner Leader Quadrant.

    Florian shares his journey from a Parisian student fascinated by functional programming to leading a global enterprise software company. We discuss how Dataiku bridges the gap between technical and business teams to democratize AI in the enterprise, the challenges of selling to enterprise clients, and how Dataiku acts as an orchestration layer for Generative AI, helping businesses manage complex data processes and keep AI under control so they can build more with it.

    Dataiku

    Website - https://www.dataiku.com/

    X/Twitter - https://twitter.com/dataiku

    Florian Douetteau

    LinkedIn - https://www.linkedin.com/in/fdouetteau

    X/Twitter - https://twitter.com/fdouetteau

    FIRSTMARK

    Website - https://firstmark.com

    X/Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    X/Twitter - https://twitter.com/mattturck

    (00:00) Intro

    (02:08) Florian's life before Dataiku

    (06:58) Creation of Dataiku

    (12:08) The secret behind Dataiku's name

    (12:47) How does Dataiku stay insightful about the future?

    (14:46) Building a platform, not just a tool

    (17:26) How to sell to the enterprise from the beginning

    (20:09) Dataiku platform today

    (26:55) Data is always the problem

    (28:50) LLM Mesh

    (36:02) Will Gen AI replace ML?

    (39:41) Managing Gen AI and traditional AI on one platform

    (40:37) Gen AI deployment in the enterprise

    (48:33) Dataiku's roadmap

    (50:28) What has changed with the company's growth?

  • In this episode, we dive into the world of generative AI with May Habib, co-founder of Writer, a platform transforming enterprise AI use. May shares her journey from Qordoba to Writer, emphasizing the impact of transformers in AI. We explore Writer's graph-based RAG approach, and their AI Studio for building custom applications.

    We also discuss Writer's Autonomous Action functionality, set to revolutionize AI workflows by enabling systems to act autonomously, highlighting AI's potential to accelerate product development and market entry with significant increases in capacity and capability.

    Writer

    Website - https://writer.com

    X/Twitter - https://x.com/get_writer

    May Habib

    LinkedIn - https://www.linkedin.com/in/may-habib

    X/Twitter - https://x.com/may_habib

    FIRSTMARK

    Website - https://firstmark.com

    X/Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    X/Twitter - https://twitter.com/mattturck

    This session was recorded live at a recent Data Driven NYC, our in-person, monthly event series, hosted at Ramp's beautiful HQ. If you are ever in New York, you can join the upcoming events here: https://www.eventbrite.com/o/firstmark-capital-2215570183

    (00:00) Intro

    (01:47) What is Writer?

    (02:52) Writer's founding story

    (06:54) Writer is a full-stack company. Why?

    (07:57) Writer's enterprise use cases

    (10:51) Knowledge Graph

    (17:59) Guardrails

    (20:17) AI Studio

    (23:16) Palmyra X 004

    (27:18) Current state of AI adoption in enterprises

    (28:57) Writer's sales approach

    (31:25) What May Habib is excited about in AI

    (33:14) Autonomous Action use cases

  • Nathan Benaich, founder and GP at VC firm Air Street Capital, publishes "State of AI" every year, one of the most widely-read and comprehensive reports on all things AI across research, industry, and policy. In this episode, we sit down with Nathan to discuss some of the highlights of the 2024 edition of the report, including the "vibes" shift in the industry from existential risk concerns last year to the current monetization race, the financial success of the foundation model labs, how a generative AI app could top the Apple App Store charts in 2025, and the challenges facing humanoid robotics.

    State of AI 2024 report: https://www.stateof.ai/2024-report-launch

    State of AI 2024 video: https://youtu.be/EVMbnPOuUl0

    Air Street Capital

    Website - https://www.airstreet.com

    X/Twitter - https://x.com/airstreet

    Nathan Benaich

    LinkedIn - https://www.linkedin.com/in/nathanbenaich

    X/Twitter - https://x.com/nathanbenaich

    FIRSTMARK

    Website - https://firstmark.com

    X/Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    X/Twitter - https://twitter.com/mattturck

    (01:08) Who is Nathan Benaich?

    (04:57) "Vibe" shift in AI

    (09:13) Current state of foundation models

    (22:01) AI companies vs. SaaS

    (23:31) AI consumer apps

    (25:49) AI applications from a VC's perspective

    (29:25) "You don't need to be an AI engineer to build an AI company"

    (30:46) AI in robotics

    (34:36) AI regulations in Europe

    (40:55) Predictions on the future of AI

    (49:30) Nathan Benaich's favorite sources of information

  • In this special episode of the MAD Podcast, Matt Turck and Aman Kabeer from FirstMark delve into the AI market from a venture investor perspective, in the final weeks of an incredibly packed and exciting 2024. They comment on their favorite news stories, such as OpenAI's record-breaking $6.6 billion funding round and the massive $200B investments in AI infrastructure by Meta, Google, and Amazon. They tackle the latest trends in funding and valuations in both public and private markets, debate the critical question of whether we're in an AI bubble, examine the current state of AI demand, the potential of scaling laws, and the future of AI-driven innovation. They then discuss where they see opportunities for startups and investors across AI hardware, compute, foundation models, AI tooling, and both consumer and enterprise AI applications.

    FIRSTMARK

    Website - https://firstmark.com

    X/Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    X/Twitter - https://twitter.com/mattturck

    Aman Kabeer (Investor)

    LinkedIn - https://www.linkedin.com/in/aman-kabeer/

    X/Twitter - https://x.com/AmanKabeer11

    (00:00) Intro

    (02:20) The Year of Record-Breaking Valuations and Investments

    (05:23) AI's Environmental Impact and Nuclear Revival

    (06:48) AI Valuations and Market Dynamics

    (17:01) Are We in an AI Bubble?

    (25:01) AI Progress and Demand

    (35:06) AI's Role in Consumer Applications

    (41:02) AI's Influence on SaaS and Business Models

    (50:55) AI's Role in Enterprise Transformation

    (01:04:00) The Future of AI: Apps and Agents

  • Before he founded Modal, Erik Bernhardsson created Spotify's music recommendation system. Today he's bringing a consumer app approach to radically simplifying developer experience for data and AI projects on the Modal platform.

    In this episode, we dive into the broader AI compute landscape, discussing the roles of hyperscalers, GPU clouds, inference platforms, and the emergence of alternative AI cloud providers. Erik gives us a product tour of the Modal platform, provides insights into the AI industry's shift from training to inference as the primary use case, and speculates on the future of AI-native consumer applications. Learn about Modal's commitment to fast feedback loops, their cloud maximalist approach, their dedication to building a product that developers truly love, as well as founder lessons Erik learned along the way.

    Erik's blog: https://erikbern.com

    "It's hard to write code for humans": https://erikbern.com/2024/09/27/its-hard-to-write-code-for-humans

    Modal

    Website - https://modal.com

    Twitter - https://x.com/modal_labs

    Erik Bernhardsson

    LinkedIn - https://www.linkedin.com/in/erikbern

    Twitter - https://x.com/bernhardsson

    FIRSTMARK

    Website - https://firstmark.com

    Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    Twitter - https://twitter.com/mattturck

    (00:00) Intro

    (01:35) What is Modal?

    (02:18) Current state of AI compute space

    (09:54) Erik's path to starting Modal

    (13:57) Core elements of the Modal platform

    (28:52) Is serverless the right level of abstraction for AI compute?

    (33:35) Balancing costs: GPU vendor fees vs. customer pricing

    (37:56) Designing products for humans

    (42:43) Modal's early go-to-market motion

    (45:32) Managing early engineering team

    (48:26) The only correct way to add a new function to the company

    (50:07) Building a company in NYC

    (52:05) Modal's roadmap

    (54:04) Erik's predictions on AI

  • A founding engineer on Google BigQuery and now at the helm of MotherDuck, Jordan Tigani challenges the decade-long dominance of Big Data and introduces a compelling alternative that could change how companies handle data.

    Jordan discusses why Big Data technologies are an overkill for most companies, how MotherDuck and DuckDB offer fast analytical queries, and lessons learned as a technical founder building his first startup.

    Watch the episode with Tomasz Tunguz: https://youtu.be/gU6dGmZzmvI

    Website - https://motherduck.com

    Twitter - https://x.com/motherduck

    Jordan Tigani

    LinkedIn - https://www.linkedin.com/in/jordantigani

    Twitter - https://x.com/jrdntgn

    FIRSTMARK

    Website - https://firstmark.com

    Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    Twitter - https://twitter.com/mattturck

    (00:00) Intro

    (00:56) What is Small Data?

    (06:56) Marketing strategy of MotherDuck

    (08:39) Processing Small Data with Big Data stack

    (15:30) DuckDB

    (17:21) Creation of DuckDB

    (18:48) Founding story of MotherDuck

    (24:08) MotherDuck's community

    (25:25) MotherDuck of today ($100M raised)

    (33:15) Why are MotherDuck and DuckDB so fast?

    (39:08) The limitations and the future of MotherDuck's platform

    (39:49) Small Models

    (42:37) Small Data and the Modern Data Stack

    (46:47) Making things simpler with a shift from Big Data to Small Data

    (50:04) Jordan Tigani's entrepreneurial journey

    (58:31) Outro

  • With a $4.5B valuation, 5M AI builders and 1M public AI models, Hugging Face has emerged as the key collaboration platform for AI, and the heart of the global open source AI community.

    In this episode of The MAD Podcast, we sit down with Clément Delangue, its co-founder and CEO, to delve into Hugging Face's journey from a fun chatbot to a central hub for AI innovation, the impact of open-source AI and the importance of community-driven development, and the shift from text to other AI modalities like audio, video, chemistry, and biology. We also cover the evolution of Hugging Face's business model and the distinctive approach to company culture that the founders have implemented over the years.

    Hugging Face

    Website - https://huggingface.co

    Twitter - https://x.com/huggingface

    Clem Delangue

    LinkedIn - https://www.linkedin.com/in/clementdelangue

    Twitter - https://x.com/clemdelangue

    FIRSTMARK

    Website - https://firstmark.com

    Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    Twitter - https://twitter.com/mattturck

    (00:00) Intro

    (01:46) Miami vs. New York vs. San Francisco

    (03:25) Current state of open source AI

    (11:12) Government regulation of AI

    (13:18) What is open source AI?

    (15:21) Open source AI: China vs U.S.

    (18:32) LLMs vs. SLMs

    (22:01) Are commercial LLMs just 'Training Wheels' for enterprises?

    (24:26) Software 2.0: built with AI

    (28:03) Hugging Face founding story

    (37:03) Are there any competitors?

    (44:06) Most interesting models on Hugging Face

    (50:35) Shifting focus in enterprise solutions

    (55:06) Bloom & Idefix

    (58:44) The culture of Hugging Face

    (01:04:44) The future of Hugging Face

  • This episode is a captivating conversation with Richard Socher, serial entrepreneur, investor, and AI researcher.

    Richard elaborates on why he likens the impact of AI to the Industrial Revolution, the Enlightenment, and the Renaissance, discusses important current issues in AI, such as scaling laws and agents, provides a behind-the-scenes tour of YOU.com and its evolving business model, and finally describes his current investment strategy in AI startups.

    You.com

    Website - https://you.com/business

    Twitter - https://x.com/youdotcom

    Richard Socher

    LinkedIn - https://www.linkedin.com/in/richardsocher

    Twitter - https://x.com/richardsocher

    FIRSTMARK

    Website - https://firstmark.com

    Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    Twitter - https://twitter.com/mattturck

    (00:00) Intro

    (02:00) "AI era is the Industrial Revolution, Renaissance, and the Enlightenment combined"

    (07:49) Top-performers in the Age of AI

    (11:15) Comeback of the Renaissance Person

    (13:05) People tried to stop Richard from doing deep learning research. Why?

    (14:34) Jevons paradox of intelligence

    (17:08) Scaling Laws in Deep Learning

    (23:23) Can Deep Learning and Rule-Based AI coexist?

    (25:42) Post-transformers AI Architecture

    (28:20) Achieving AGI and ASI

    (36:43) AI for everyday tasks: how far is it?

    (44:50) AI Agents

    (55:45) Evolution of You.com

    (01:02:11) Technical side of You.com

    (01:06:46) Is AI getting cheaper?

    (01:13:05) What is AIX Ventures?

    (01:16:36) VC landscape of 2024

    (01:24:31) Research vs Entrepreneurship

    (01:26:12) OpenAI’s transformation and its impact on the industry

  • In this episode, we sit down with Tobie Morgan Hitchcock, the founder of SurrealDB, to dive deep into the evolving world of databases and the future of data storage, querying, and real-time analytics.

    SurrealDB isn’t just another database — it’s a multi-model database that merges document, graph, and time-series data, making it easier for developers to consolidate their backend without sacrificing performance. You'll learn how SurrealDB separates storage from compute for scalability, its innovative take on graph databases, and the radical decision to rewrite the entire platform in Rust. Tobie also shares how SurrealDB is designed to handle real-time analytics and integrate AI/ML models directly inside the database.

    If you're curious about the future of databases, this episode is packed with insights you won’t want to miss.

    SurrealDB

    Website - https://surrealdb.com

    Twitter - https://x.com/SurrealDB

    Tobie Morgan Hitchcock

    LinkedIn - https://www.linkedin.com/in/tobiemorganhitchcock

    Twitter - https://x.com/tobiemh

    FIRSTMARK

    Website - https://firstmark.com

    Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    Twitter - https://twitter.com/mattturck

    (00:00) Intro
    (02:03) What is SurrealDB?
    (02:53) How did SurrealDB get started?
    (09:10) The Challenges of Building a Database from Scratch
    (10:36) Why SurrealDB Chose Rust
    (12:54) A Deep Dive into SurrealDB’s Unique Features
    (19:30) Why Now?
    (26:32) What Sets SurrealDB Apart from Other Databases
    (30:01) SurrealDB’s Role in the Future of AI and Machine Learning
    (32:45) Why Developers Are Choosing SurrealDB
    (36:14) What’s New in SurrealDB 2.0?
    (40:10) SurrealDB Cloud: Scalability Meets Simplicity
    (42:21) How SurrealDB Fits into the Competitive Database Landscape
    (45:37) Early Lessons from Building SurrealDB
    (48:34) Co-Founding SurrealDB with His Brother

  • In this episode, we dive deep into the story of how Datadog evolved from a single product to a multi-billion dollar observability platform with its co-founder, Olivier Pomel. Olivier shares exclusive insights on Datadog's unique approach to product development—why they avoid the "Apple approach" of building in secret and instead work closely with customers from day one.

    You’ll hear about the early days when Paul Graham of Y Combinator turned down Datadog, questioning their lack of a first product. Olivier also reveals the strategies behind their iterative product launches and why they insist on charging early to ensure they’re delivering real value.

    The second half of the conversation is focused on all things AI and data at Datadog - the company's initial reluctance to use AI in its products, how Generative AI changed everything, and Datadog's current AI efforts including Watchdog, Bits AI and Toto, their new time series foundation model.

    We close the episode by asking Olivier about his thoughts on the topic du jour: founder mode!

    ▶️ Listen to the 2020 Data Driven NYC episode with Olivier Pomel: https://www.youtube.com/watch?v=oXKEFHeEvMs

    Datadog

    Website - https://www.datadoghq.com

    Twitter - https://x.com/datadoghq

    Olivier Pomel

    LinkedIn - https://www.linkedin.com/in/olivierpomel

    Twitter - https://x.com/oliveur

    FIRSTMARK

    Website - https://firstmark.com

    Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    Twitter - https://twitter.com/mattturck

  • In this episode, we sit down with Ali Dasdan, CTO of ZoomInfo, a titan in the B2B sector that harnesses vast datasets and advanced AI to redefine sales and marketing for over 35,000 global customers with $21.2 billion in annualized revenue.

    We delve deep into ZoomInfo's AI initiatives, including their transformative 'Copilot,' explore sophisticated data management, and discuss their dual platforms catering to internal and customer-facing needs.

    ZoomInfo

    Website - https://www.zoominfo.com

    Twitter - https://x.com/zoominfo

    Ali Dasdan

    LinkedIn - https://www.linkedin.com/in/dasdan

    Twitter - https://x.com/alidasdan

    FIRSTMARK

    Website - https://firstmark.com

    Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    Twitter - https://twitter.com/mattturck

    (00:00) Intro

    (02:03) What is ZoomInfo?

    (04:47) Data as a service

    (06:15) Ali Dasdan's story

    (07:31) Organization of ZoomInfo

    (10:48) ZoomInfo Data Platform

    (21:02) Lessons from building a data platform

    (23:19) AI application at ZoomInfo

    (27:58) ZoomInfo's Copilot

    (37:43) ZoomInfo AI toolstack

    (39:30) Working with small vs. big companies in the AI business

    (43:39) Using data and AI for internal productivity

  • In this episode, we sit down with Eric Glyman, co-founder of Ramp, the company that revolutionized finance management to become a powerhouse valued at $7.6 billion.

    Eric shares the tradition of counting the days since Ramp's founding and how it fosters a sense of urgency and productivity, explains the use of AI to automate expense management and fraud detection, and gives an inside look at Ramp's cutting-edge AI products, including the Ramp Intelligence Suite and experimental agentic AI use cases.

    Ramp

    Website - https://www.ramp.com

    Twitter - https://x.com/tryramp

    Eric Glyman

    LinkedIn - https://www.linkedin.com/in/eglyman

    Twitter - https://x.com/eglyman

    FIRSTMARK

    Website - https://firstmark.com

    Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    Twitter - https://twitter.com/mattturck

    (00:00) Intro

    (01:49) What is Ramp?

    (04:25) How did the company start?

    (09:18) Technical aspects of Ramp infrastructure

    (12:17) "We can tell you if you're paying too much"

    (14:20) Data privacy at Ramp

    (16:13) Data infrastructure tools used at Ramp

    (17:58) Traditional AI use cases

    (24:51) GenAI use cases

    (27:47) AI/human interaction

    (33:32) Ramp Intelligence Suite

    (39:38) How Ramp maintains high product release velocity

    (42:37) How did Ramp get to product-market fit?

    (45:54) Eric's perspective on building a company in NYC