
The Daily AI Show

The Daily AI Show Crew - Brian, Beth, Jyunmi, Andy, Karl, and Eran

Available Episodes (5 of 455)
  • Building Your AI First Business: Who's the ONE Additional Human You Need? (Ep. 446)
    If you were starting your first AI-first business today, and you could only pick one human to join you, who would it be? That’s the question the Daily AI Show hosts tackle in this episode. With unlimited AI tools at your disposal, the conversation focuses on who complements your skills, fills in the human gaps, and helps build the business you actually want to run.

    Key Points Discussed
    • Each host approached the thought experiment differently: some picked a trusted technical co-founder, others leaned toward business development, partnership experts, or fractional executives.
    • Brian emphasized understanding your own gaps and aspirations. He selected a “partnership and ecosystem builder” type as his ideal co-founder to help him stay grounded and turn ideas into action.
    • Beth prioritized irreplaceable human traits like emotional trust and rapport. She wanted someone who could walk into any room and become “mayor of the town in five days.”
    • Andy initially thought business development, but later pivoted to a CTO type who could architect and maintain a system of agents handling finance, operations, legal, and customer support.
    • Jyunmi outlined a structure for a one-human AI-first company supported by agent clusters and fractional experts, emphasizing a business designed to reduce personal workload from day one (see the sketch after these show notes).
    • Karl shared insights from his own startup, where human-to-human connections have proven irreplaceable in business development and closing deals. AI helps, but doesn’t replace in-person rapport.
    • The team discussed “span of control” and the importance of not overburdening yourself with too many direct reports, even if they’re AI agents.
    • Brian identified Leslie Vitrano Hugh Bright as a real-world example of someone who fits the co-founder profile he described. She’s currently VP of Global IT Channel Ecosystem at Schneider Electric.
    • Andy detailed the kinds of agents needed to run a modern AI-first company: strategy, financial, legal, support, research, and more. Managing them is its own challenge.
    • The crew referenced a 2023 article on “Three-Person Unicorns” and how fewer people can now achieve greater scale due to AI. The piece stressed that fewer humans means fewer meetings, less politics, and less overhead.
    • Embodied AI also came up as a wildcard: if physical robots become viable co-workers, how does that affect who your human plus-one needs to be?
    • The show closed with an invitation to the community: bring your own AI-first business idea to the Slack group and get support and feedback from the hosts and other members.

    Timestamps & Topics
    00:00:00 🚀 Intro: Who’s your +1 human in an AI-first startup?
    00:01:12 🎯 Defining success: lifestyle business vs. billion-dollar goal
    00:03:27 💬 Beth: looking for irreplaceable human touch and trust
    00:06:33 🧠 Andy: pivoted from sales to CTO for span-of-control reasons
    00:11:40 🌐 Jyunmi: agent clusters and fractional human roles
    00:18:12 🧩 Karl: real-world experience shows in-person still wins
    00:24:50 🤝 Brian: chose a partnership and ecosystem builder
    00:26:59 🧠 AI can’t replace high-trust, long-cycle negotiations
    00:29:28 🧍 Brian names a real-world candidate: Leslie Vitrano Hugh Bright
    00:34:01 🧠 Andy details the 10+ agents you’d need in a real AI-first business
    00:43:44 🎯 Challenge accepted: can one human manage it all?
    00:45:11 🔄 Highlight: fewer people means less friction, faster decisions
    00:47:19 📬 Join the community: DailyAIShowCommunity.com
    00:48:08 📆 Coming this week: forecasting, rollout mistakes, “Be About It” demos
    00:50:22 🤖 Wildcard: how does embodied AI change the conversation?
    00:51:00 🧠 Pitch your AI-first business to the Slack group
    00:52:07 🔥 Callback to firefighter reference closes out the show

    The Daily AI Show Co-Hosts: Jyunmi Hatcher, Andy Halliday, Beth Lyons, Brian Maucere, and Karl Yeh
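    As a toy illustration of Jyunmi’s agent-cluster structure and the span-of-control point, here is a minimal Python sketch. It is our illustration, not something specified on the show, and every cluster name, role, and limit in it is an assumption.

      # Illustrative sketch only (not from the episode): one way to model the
      # "agent clusters" idea, with a simple span-of-control check. Cluster
      # and role names are assumptions drawn from the discussion.

      SPAN_OF_CONTROL = 5  # assumed max direct reports per manager, human or AI

      # Each cluster reports to the founder as a single unit; the agents inside
      # report to the cluster, keeping the founder's direct reports small.
      clusters = {
          "operations": ["finance", "legal", "customer support"],
          "growth": ["strategy", "research", "partnerships"],
      }

      def overloaded(clusters, limit=SPAN_OF_CONTROL):
          """Return the names of clusters that exceed the span-of-control limit."""
          return [name for name, agents in clusters.items() if len(agents) > limit]

      # The founder manages 2 clusters, not 6 agents; neither cluster is overloaded.
      print(len(clusters), overloaded(clusters))  # -> 2 []

    Grouping agents into clusters mirrors the discussion: the single human’s direct reports stay at a handful no matter how many agents run underneath.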
    --------  
    53:06
  • The Real World Filter Conundrum
    AI already shapes the content you see on your phone. The headlines. The comments you notice. The voices that feel loudest. But what happens when that same filtering starts applying to your surroundings? This isn’t hypothetical; it is already beginning. Early tools let people mute distractions, rewrite signage, adjust lighting, or even soften someone’s voice in real time. It’s clunky now, but the trajectory is clear.

    Soon, you might walk through the same room as someone else and experience a different version of it. One of you might see more smiles, hear less noise, feel more calm. The other might notice none of it. You’re physically together, but the world is no longer a shared experience.

    These filters can help you focus, reduce anxiety, or cope with overwhelm. But they also create distance. How do you build real relationships when the people around you are living in versions of reality you can’t see?

    The conundrum: If AI could filter your real-world experience to protect your focus, ease your anxiety, and make daily life more manageable, would you use it, knowing it might make it harder to truly understand or connect with the people around you who are seeing something completely different? Or would you choose to experience the world as it is, with all its chaos and discomfort, so that when you show up for someone else, you’re actually in the same reality they are?

    This podcast episode is created by AI. We used ChatGPT, Perplexity, and Google NotebookLM’s Audio Overview to create the conversation you are hearing. We make no claims to the validity of the information provided and see this as an experiment in deep discussions fully generated by AI.
    --------  
    17:20
  • Did that just happen in AI? (Ep. 445)
    The team takes a breather from the firehose of daily drops to look back at the past two weeks. From new model releases by OpenAI and Google to AI’s evolving role in medicine, shipping, and everyday productivity, the episode connects dots, surfaces under-the-radar stories, and opens a few lingering questions about where AI is heading.

    Key Points Discussed
    • OpenAI’s o3 model impressed the team with its deep reasoning, agentic tool use, and capacity for long-context problem solving. Brian’s custom go-to-market training demo highlighted its flexibility.
    • Jyunmi recapped a new explainable AI model out of Osaka designed for ship navigation, part of a larger trend of building trust in the decisions of autonomous systems.
    • The University of Florida released VisionMD, an open-source model for analyzing patient movement in Parkinson’s research. It marks a clear AI-for-good moment in medicine.
    • The team debated the future of AI in healthcare, from gait analysis and personalized diagnostics to AI interpreting CT and MRI scans more effectively than radiologists.
    • Everyone agreed: AI will help doctors do more, but it should enhance, not replace, the doctor-patient relationship.
    • OpenAI’s rumored acquisition of Windsurf (formerly Codeium) signals a push to lock in the developer crowd and integrate vibe coding into its ecosystem.
    • The team clarified OpenAI’s model naming and positioning: 4.1, 4.1 Mini, and 4.1 Nano are API-only models, while o3 is the new flagship model inside ChatGPT.
    • Gemini 2.5 Flash launched, and Veo 2 video tools are slowly rolling out to Advanced users. The team predicts more agentic features will follow.
    • There’s growing speculation that ChatGPT’s frequent glitches may precede a new feature release. Canvas upgrades or new automation tools might be next.
    • The episode closed with a discussion of AI’s need for better interfaces. Users want to shift between typing and talking while maintaining context, and voice AI shouldn’t force you to listen to long responses line by line.

    Timestamps & Topics
    00:00:00 🗓️ Two-week recap kickoff and model overload check-in
    00:02:34 📊 Andy on model confusion and the need for better comparison tools
    00:04:59 🧮 Which models can handle Excel, Python, and visualizations?
    00:08:23 🔧 o3 shines in Brian’s go-to-market self-teaching demo
    00:11:00 🧠 Rob Lennon surprised by o3’s writing skills
    00:12:15 🚢 Explainable AI for ship navigation from Osaka
    00:17:34 🧍 VisionMD: open-source AI for Parkinson’s movement tracking
    00:19:33 👣 AI watching your gait to help prevent falls
    00:20:42 🧠 MRI interpretation and human vs. AI tradeoffs
    00:23:25 🕰️ AI can track diagnostic changes across years
    00:25:27 🤖 AI assistants talking to doctors’ AI for smoother care
    00:26:08 🧪 Pushback: AI must augment, not replace, doctors
    00:31:18 💊 AI can support more personalized experimentation in treatment
    00:34:04 🌐 OpenAI’s rumored Windsurf acquisition and dev strategy
    00:37:13 🤷‍♂️ Still unclear: the difference between 4.1 and o3
    00:39:05 🔧 4.1 is API-only, built for backend automation
    00:40:23 📉 Most API usage is still focused on content, not dev workflows
    00:40:57 ⚡ Gemini 2.5 Flash release and Veo 2 rollout lag
    00:43:50 🎤 Predictions: next drop might be canvas or automation tools
    00:45:46 🧩 OpenAI could combine flows, workspace, and social in one suite
    00:46:49 🧠 User request: let voice chat toggle into text or structured commands
    00:48:35 📋 Users want copy-paste and better UI, not more tokenization
    00:49:04 📉 Nvidia hit with a $5.5B loss after chip export restrictions to China
    00:52:13 🚢 Tariffs and chip limits shrink supply chain volumes
    00:53:40 📡 Weekend question: AI nodes and local LLM mesh networks?
    00:54:11 👾 Sci-Fi Show preview and final thoughts

    The Daily AI Show Co-Hosts: Jyunmi Hatcher, Andy Halliday, Beth Lyons, Brian Maucere, and Karl Yeh
    --------  
    55:20
  • When to use OpenAI's latest models: 4.1, o3, and o4-mini (Ep. 444)
    Want to keep the conversation going? Join our Slack community at dailyaishowcommunity.com

    Intro
    With OpenAI dropping 4.1, 4.1 Mini, 4.1 Nano, o3, and o4-mini, it’s been a week of nonstop releases. The Daily AI Show team unpacks what each of these new models can do, how they compare, where they fit into your workflow, and why pricing, context windows, and access methods matter. This episode offers a full breakdown to help you test the right model for the right job.

    Key Points Discussed
    • The new OpenAI models include 4.1, 4.1 Mini, 4.1 Nano, o3, and o4-mini. All have different capabilities, pricing, and access methods.
    • 4.1 is currently only available via API, not inside ChatGPT. It offers the largest context window (1 million tokens) and better instruction following.
    • o3 is OpenAI’s new flagship reasoning model, priced higher than 4.1, but it offers deep, agentic planning and sophisticated outputs.
    • The model naming remains confusing. OpenAI admits its naming system is messy, especially with overlapping versions like 4.0, 4.1, and 4.5.
    • The 4.1 models are broken into tiers: 4.1 (flagship), Mini (mid-tier), and Nano (lightweight and cheapest).
    • Mini and Nano are optimized for specific cost-performance tradeoffs and are ideal for automation or retrieval tasks where speed matters.
    • Claude 3.7 Sonnet and Gemini 2.5 Pro were referenced as benchmarks for comparison, especially for long-context tasks and coding accuracy.
    • Beth emphasized prompt hygiene and using the model-specific guides that OpenAI publishes to get better results.
    • Jyunmi walked through how each model is designed to replace or improve upon prior versions like 3.5, 4.0, and 4.5.
    • Karl highlighted client projects using o3 and 4.1 via API for proposal generation, data extraction, and advanced analysis.
    • The team debated whether Pro access at $200 per month is necessary now that o3 is available in the $20 plan. Many prefer API pay-as-you-go access for cost control (a minimal API sketch follows these notes).
    • Brian showcased a personal agent built with o3 that created a complete go-to-market course, complete with a dynamic dashboard and interactive progress tracking.
    • The group agreed that in the future, personal agents built on reasoning models like o3 will dynamically generate learning experiences tailored to individual needs.

    Timestamps & Topics
    00:01:00 🧠 Intro to the wave of OpenAI model releases
    00:02:16 📊 OpenAI’s model comparison page and context windows
    00:04:07 💰 Price comparison between 4.1, o3, and o4-mini
    00:05:32 🤖 Testing models through the Playground and API
    00:07:24 🧩 Jyunmi breaks down model replacements and tiers
    00:11:15 💸 o3 costs 5x more than 4.1, but delivers deeper planning
    00:12:41 🔧 4.1 Mini and Nano as cost-efficient workflow tools
    00:16:56 🧠 Testing strategies for model evaluation
    00:19:50 🧪 TypingMind and other tools for testing models side by side
    00:22:14 🧾 OpenAI’s prompt guide makes a big difference in results
    00:26:03 🧠 Karl applies o3 and 4.1 in live client projects
    00:29:13 🛠️ API use is often more efficient than the Pro plan
    00:33:17 🧑‍🏫 Brian demos a custom go-to-market course built with o3
    00:39:48 📊 Progress dashboard and course personalization
    00:42:08 🔁 Persistent memory, JSON state tracking, and session testing
    00:46:12 💡 Using GPTs for dashboards, code, and workflow planning
    00:50:13 📈 Custom GPT idea: using LinkedIn posts to reverse-engineer insights
    00:52:38 🏗️ Real-world use cases: construction site inspections via multimodal models
    00:56:03 🧠 Tip: use models to first learn about other models before choosing
    00:57:59 🎯 Final thoughts: ask harder questions, break your own habits
    01:00:04 🔧 Call for more demo-focused “Be About It” shows coming soon
    01:01:29 📅 Wrap-up: biweekly recap tomorrow, conundrum on Saturday, newsletter Sunday

    The Daily AI Show Co-Hosts: Jyunmi Hatcher, Andy Halliday, Beth Lyons, Brian Maucere, and Karl Yeh
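    Since several hosts prefer pay-as-you-go API access over the Pro plan, here is a minimal sketch of that workflow using the official OpenAI Python SDK. This is our illustration, not code from the show, and the model identifier strings are assumptions based on OpenAI’s published API names; adjust them if they change.

      # Minimal sketch: send one prompt to each newly released model via the
      # OpenAI Python SDK and compare the answers. Assumes OPENAI_API_KEY is set.
      from openai import OpenAI

      client = OpenAI()

      # Assumed API identifiers for the models discussed in the episode.
      MODELS = ["gpt-4.1", "gpt-4.1-mini", "gpt-4.1-nano", "o3", "o4-mini"]

      prompt = "In two sentences, when would a nano-tier model beat a flagship?"

      for model in MODELS:
          response = client.chat.completions.create(
              model=model,
              messages=[{"role": "user", "content": prompt}],
          )
          print(f"--- {model} ---")
          print(response.choices[0].message.content)

    A side-by-side loop like this is the cheapest way to test “the right model for the right job” before committing to a subscription tier.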
    --------  
    59:35
  • Big AI News Drops! (Ep. 443)
    It’s Wednesday, and that means it’s Newsday. The Daily AI Show covers AI headlines from around the world, including Google’s dolphin communication project, a game-changing Canva keynote, OpenAI’s new social network plans, and Anthropic’s Claude now connecting with Google Workspace. They also dig into the rapid rise of 4.1, open-source robots, and the growing tension between the US and China over chip development.

    Key Points Discussed
    • Google is training models to interpret dolphin communication using audio, video, and behavioral data, powered by a fine-tuned Gemma model called DolphinGemma.
    • Beth compares dolphin clicks and buzzes to early signs of AI-enabled animal translation, sparking debate over whether we really want to know what animals think.
    • Canva’s new “Create Uncharted” keynote received praise for its fun, creator-first style and for launching 45+ feature updates in just three minutes.
    • Canva now includes built-in code tools, generative image support via Leonardo, and expanded AI-powered design workspaces.
    • ChatGPT added a new image library feature, making it easier to store and reuse generated images. Brian showed off graffiti art and paint-by-number tools created from a real photo.
    • OpenAI’s GPT-4.1 shows major improvements in instruction following, multitasking, and prompt handling, especially in long-context analysis of LinkedIn content.
    • The team compares 4.0 vs. 4.1 performance and finds the new model dramatically better for summarization, tone detection, and theme evolution.
    • Claude now integrates with Google Workspace, allowing paid users to search and analyze their Gmail, Docs, Sheets, and calendar data.
    • The group predicts we’ll soon have agents that work across email, sales tools, meeting notes, and documents for powerful insights and automation.
    • Hugging Face acquired the humanoid robotics startup Pollen Robotics and plans to release its Reachy 2 robot, potentially as open source.
    • Japan’s Hokkaido University launched an open-source, 3D-printable robot for material synthesis, allowing more people to run scientific experiments at low cost.
    • Nvidia faces a $5.5 billion loss due to U.S. export restrictions on H20 chips. Meanwhile, Huawei has announced a competing chip, highlighting China’s growing independence.
    • Andy warns that these restrictions may accelerate China’s innovation while undermining U.S. research institutions.
    • OpenAI admitted it may release more powerful models if competitors push the envelope first, sparking a debate about safety vs. market pressure.
    • The show closes with a preview of Thursday’s episode focused on the new models, GPT-4.1, Mini, Nano, o3, and o4-mini, and what they might unlock.

    Timestamps & Topics
    00:00:18 🐬 Google trains AI to decode dolphin communication
    00:04:14 🧠 Emotional nuance in dolphin vocalizations
    00:07:24 ⚙️ Gemma-based models and model merging
    00:08:49 🎨 Canva keynote praised for creativity and product velocity
    00:13:51 💻 New Canva tools for coders and creators
    00:16:14 📈 ChatGPT tops app downloads, beats Instagram and TikTok
    00:17:42 🌐 OpenAI rumored to be building a social platform
    00:20:06 🧪 Open-source 3D-printed robot for material science
    00:25:57 🖼️ ChatGPT image library and color-by-number demo
    00:26:55 🧠 Prompt adherence in 4.1 vs. 4.0
    00:30:11 📊 Deep analysis and theme tracking with GPT-4.1
    00:33:30 🔄 Testing OpenAI Mini, Nano, and Gemini 2.5
    00:39:11 🧠 Claude connects to Google Workspace
    00:46:40 🗓️ Examples for personal and business use cases
    00:50:00 ⚔️ Claude vs. Gemini in business productivity
    00:53:56 📹 Google’s new Veo 2 model in Gemini Advanced
    00:55:20 🤖 Hugging Face buys humanoid robotics startup Pollen Robotics
    00:56:41 🔮 Wrap-up and Thursday preview: new model capabilities

    The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh
    --------  
    58:04


About The Daily AI Show

The Daily AI Show is a panel discussion hosted LIVE each weekday at 10am Eastern. We cover all the AI topics and use cases that are important to today's busy professional. No fluff. Just 45+ minutes to cover the AI news, stories, and knowledge you need to know as a business professional.

About the crew: We are a group of professionals who work in various industries and have either deployed AI in our own environments or are actively coaching, consulting, and teaching AI best practices. Your hosts are: Brian Maucere, Beth Lyons, Andy Halliday, Eran Malloch, Jyunmi Hatcher, and Karl Yeh.