
Tech on the Rocks

Hosted by Kostas and Nitay

Available Episodes

Showing 5 of 18 episodes
  • Business Physics: How Brand, Pricing, and Product Design Define Success with Erik Swan
    Summary: In this episode, Erik reflects on his long and storied tech career, from the days of punch cards to founding multiple startups, including Splunk. At 61, he offers a unique perspective on how the industry has evolved and shares candid insights into what it takes to build a successful company. He discusses the evolution from building simple tools to creating comprehensive solutions and eventually platforms, emphasizing the importance of starting with a “hammer” (a focused, simple tool) before scaling to a broader offering. Erik introduces his concept of the “physics of business,” a framework for understanding go-to-market dynamics, pricing, and the critical role of brand in differentiating a product in a crowded market. He also touches on the challenges of product-led growth, the importance of achieving a strong “K value” (viral or network effects), and the pitfalls of allowing short-term quarterly pressures to derail long-term vision. Toward the end, he hints at his current project, Bestimer, which aims to apply lessons from his past ventures and leverage modern AI to tackle a massive, data-intensive problem.
    Chapters:
    00:00 Erik's Journey Through Tech History
    04:06 The Philosophy of Designing for Success
    09:49 Understanding the Physics of Business
    14:29 Timing and Luck in Startups
    18:09 Lessons Learned from Splunk
    23:30 The Power of Brand in Business
    28:02 Leveraging AI for Brand Development
    32:04 The Resilience of Splunk
    36:45 Building a Competitive Edge
    37:28 From Tool to Solution
    40:59 The Importance of Onboarding
    44:32 Navigating Growth and Market Fit
    51:11 Innovating with AI: The Next Chapter
    --------  
    1:01:31
  • Incremental Materialization: Reinventing Database Views with Gilad Kleinman of Epsio
    Summary: In this episode, Gilad Kleinman, co-founder of Epsio, shares his unique journey from PHP development to low-level kernel programming and how that evolution led him to build an innovative incremental views engine. Gilad explains that Epsio tackles a common challenge in databases: making heavy, complex queries faster and more efficient through incremental materialization. He describes how traditional materialized views fall short, often requiring full refreshes, and how Epsio integrates seamlessly with existing databases by consuming replication streams (CDC) and writing back to result tables without disrupting the core transactional system. The conversation dives into the technical trade-offs and optimizations involved, such as handling stateful versus stateless operators (like group-by and window functions), using Rust for performance, and the challenges of ensuring consistency. Gilad also contrasts Epsio's approach with streaming systems like Flink, emphasizing that by maintaining tight integration with the native database, Epsio can offer immediate, up-to-date query results while minimizing disruption. Finally, he outlines his vision for the future of incremental stream processing and materialized views as a means to reduce compute costs and enhance overall system performance. (A toy Python sketch of incremental view maintenance appears after the episode list.)
    Chapters:
    00:00 From PHP to Kernel Development: A Journey
    07:30 Introducing Epsio: The Incremental Views Engine
    10:56 The Importance of Materialized Views
    15:07 Understanding Incremental Materialization
    19:21 Optimizing Query Performance with Epsio
    24:53 Integrating Epsio with Existing Databases
    27:02 The Shift from Theory to Practice in Data Processing
    29:42 Seamless Integration with Existing Databases
    32:02 Understanding Epsio's Incremental Processing Mechanism
    34:46 Challenges and Limitations of Incremental Views
    36:49 The Complexity of Implementing Operators
    39:56 Trade-offs in Incremental Computation
    41:21 User Interaction with Epsio
    43:01 Comparing Epsio with Streaming Systems
    45:09 Architectural Guarantees of Epsio
    50:33 The Future of Incremental Data Processing
    --------  
    52:19
  • From Data Mesh to Lake House: Revolutionizing Metadata with Lakekeeper
    Summary: In this episode, Viktor Kessler shares his journey and insights from his extensive experience in data management, from building risk management systems and data warehouses to working as a solutions architect at MongoDB and Dremio, and now co-founding a startup. Initially exploring data mesh concepts, Viktor explains how real-world challenges (the disconnect between technical data models and business needs, inconsistent definitions across departments, and the difficulty of managing actionable metadata) led him and his co-founder to pivot toward building a lake house solution. His startup is developing Lakekeeper, an open source REST catalog for Apache Iceberg, which aims to bridge the gap between decentralized data production and centralized metadata management. The conversation also delves into the evolution of data catalogs, the necessity for self-service analytics, and how creating consumption-ready data products can transform data functions from cost centers into profit centers. Finally, Viktor outlines ways for interested listeners to get involved with the Lakekeeper community through GitHub, upcoming meetups, and a dedicated Discord channel. (A short pyiceberg sketch of reading from an Iceberg REST catalog appears after the episode list.)
    Chapters:
    00:00 Introduction to Viktor Kessler and His Journey
    04:57 Transitioning from Data Mesh to Lake House
    09:15 Understanding Data Mesh: Pain Points and Solutions
    13:47 The Role of Metadata in Data Management
    18:16 The Evolution of Catalogs and Metadata Management
    28:14 Stabilizing the Consumption Pipeline
    31:18 Centralizing Metadata for Decentralized Organizations
    37:09 Bridging the Gap: Technical and Business Perspectives
    43:17 Rethinking Data Products and Consumption
    50:45 Finding Balance: Control and Flexibility in Data Management
    --------  
    57:25
  • Reinventing Stream Processing: From LinkedIn to Responsive with Apurva Mehta
    Summary: In this episode, Apurva Mehta, co-founder and CEO of Responsive, recounts his extensive journey in stream processing, from his early work at LinkedIn and Confluent to his current venture at Responsive. He explains how stream processing evolved from simple event ingestion and graph indexing to powering complex, stateful applications such as search indexing, inventory management, and trade settlement. Apurva clarifies the often-misunderstood concept of “real time,” arguing that low latency (often in the one- to two-second range) is more accurate for many applications than the instantaneous response many assume. He delves into the challenges of state management, discussing the limitations of embedded state stores like RocksDB and traditional databases (e.g., Postgres) when faced with high update rates and complex transactional requirements. The conversation also covers the trade-offs between SQL-based streaming interfaces and more flexible APIs, and how Responsive is innovating by decoupling state from compute, leveraging remote state solutions built on object stores (like S3) with specialized systems such as SlateDB, to improve elasticity, cost efficiency, and operational simplicity in mission-critical applications. (A toy Python sketch of the state/compute split appears after the episode list.)
    Chapters:
    00:00 Introduction to Apurva Mehta and Streaming Background
    08:50 Defining Real-Time in Streaming Contexts
    14:18 Challenges of Stateful Stream Processing
    19:50 Comparing Stream Processing with Traditional Databases
    26:38 Product Perspectives on Streaming vs Analytical Systems
    31:10 Operational Rigor and Business Opportunities
    38:31 Developers' Needs: Beyond SQL
    45:53 Simplifying Infrastructure: The Cost of Complexity
    51:03 The Future of Streaming Applications
    --------  
    58:13
  • Semantic Layers: The Missing Link Between AI and Data with David Jayatillake from Cube
    Summary: In this episode, we chat with David Jayatillake, VP of AI at Cube, about semantic layers and their crucial role in making AI work reliably with data. We explore how semantic layers act as a bridge between raw data and business meaning, and why they're more practical than pure knowledge graphs. David shares insights from his experience at Delphi Labs, where they achieved 100% accuracy in natural language data queries by combining semantic layers with AI, compared to just 16% accuracy with direct text-to-SQL approaches. We discuss the challenges of building and maintaining semantic layers, the importance of proper naming and documentation, and how AI can help automate their creation. Finally, we explore the future of semantic layers in the context of AI agents and enterprise data systems, and learn about Cube's upcoming AI-powered features for 2025. (A toy Python sketch of a semantic layer appears after the episode list.)
    Chapters:
    00:00 Introduction to AI and Semantic Layers
    05:09 The Evolution of Semantic Layers Before and After AI
    09:48 Challenges in Implementing Semantic Layers
    14:11 The Role of Semantic Layers in Data Access
    18:59 The Future of Semantic Layers with AI
    23:25 Comparing Text to SQL and Semantic Layer Approaches
    27:40 Limitations and Constraints of Semantic Layers
    30:08 Understanding LLMs and Semantic Errors
    35:03 The Importance of Naming in Semantic Layers
    37:07 Debugging Semantic Issues in LLMs
    38:07 The Future of LLMs as Agents
    41:53 Discovering Services for LLM Agents
    50:34 What's Next for Cube and AI Integration
    --------  
    59:03
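
A toy illustration of the incremental materialization idea from the Epsio episode: instead of re-running a heavy aggregate, a maintained result is updated by applying CDC-style change events. This is a minimal Python sketch, not Epsio's engine or API; the event shape and class name are invented for illustration, and only a single group-by sum operator is modeled.

```python
# Toy illustration of incremental view maintenance (not Epsio's actual engine).
# A "view" of per-account order totals is kept up to date by applying CDC-style
# change events (insert/update/delete) instead of re-running the full aggregate.

from collections import defaultdict

class IncrementalSumView:
    """Maintains SELECT account, SUM(amount) ... GROUP BY account incrementally."""

    def __init__(self):
        self.totals = defaultdict(float)   # materialized result table
        self.rows = {}                     # row_id -> (account, amount): operator state

    def apply(self, change):
        """Apply one CDC event of the form {"op": "insert"|"update"|"delete", "id": ...}."""
        op, row_id = change["op"], change["id"]
        if op in ("update", "delete") and row_id in self.rows:
            account, amount = self.rows.pop(row_id)
            self.totals[account] -= amount          # retract the old contribution
        if op in ("insert", "update"):
            account, amount = change["account"], change["amount"]
            self.rows[row_id] = (account, amount)
            self.totals[account] += amount          # add the new contribution

view = IncrementalSumView()
view.apply({"op": "insert", "id": 1, "account": "acme", "amount": 40.0})
view.apply({"op": "insert", "id": 2, "account": "acme", "amount": 10.0})
view.apply({"op": "update", "id": 1, "account": "acme", "amount": 25.0})
print(dict(view.totals))   # {'acme': 35.0} -- no full refresh needed
```

The retract-then-add pattern is the essence of handling updates and deletes in stateful operators like group-by; a stateless operator such as a filter or projection would need no retained rows at all, which is the stateful-versus-stateless trade-off the episode discusses.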
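
For the Lakekeeper episode, a hedged sketch of what a centralized Iceberg REST catalog provides: any client can discover namespaces and load tables through one metadata endpoint. It uses the pyiceberg client against a generic REST catalog; the URI, warehouse, namespace, and table names below are placeholders rather than values from the episode or from Lakekeeper's documentation, and a running catalog server is assumed.

```python
# Hedged sketch: browsing table metadata through an Iceberg REST catalog
# (such as Lakekeeper) with the pyiceberg client. All endpoint and object
# names are placeholders for illustration only.

from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "lakehouse",
    **{
        "type": "rest",
        "uri": "http://localhost:8181/catalog",   # placeholder catalog endpoint
        "warehouse": "analytics",                 # placeholder warehouse name
    },
)

# The catalog centralizes metadata, so any engine or team can discover tables:
for namespace in catalog.list_namespaces():
    for table_id in catalog.list_tables(namespace):
        print(table_id)

table = catalog.load_table(("sales", "orders"))   # placeholder namespace/table
print(table.schema())
```

Because the metadata lives in the catalog rather than inside any single engine, the same discovery works from Spark, Trino, or a plain Python script, which is the decentralized-production, centralized-metadata point of the episode.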
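
The Responsive episode argues for decoupling state from compute. A minimal Python sketch of that shape: the processor only talks to a small state-store interface, so an embedded store can be swapped for a remote one without touching the processing logic. The classes and method names are invented; this is not Responsive's, Kafka Streams', or SlateDB's actual API.

```python
# Toy sketch of the "decouple state from compute" idea from the Responsive episode.
# The processor depends only on a StateStore interface; swapping the embedded
# in-memory store for a remote one does not change the processing logic.

from abc import ABC, abstractmethod

class StateStore(ABC):
    @abstractmethod
    def get(self, key: str) -> int: ...

    @abstractmethod
    def put(self, key: str, value: int) -> None: ...

class EmbeddedStore(StateStore):
    """Stands in for an embedded store (e.g. RocksDB) co-located with compute."""
    def __init__(self):
        self._data = {}
    def get(self, key): return self._data.get(key, 0)
    def put(self, key, value): self._data[key] = value

class RemoteStore(StateStore):
    """Placeholder for remote state backed by an object store; `client` is hypothetical."""
    def __init__(self, client):
        self._client = client
    def get(self, key): return self._client.read(key) or 0
    def put(self, key, value): self._client.write(key, value)

def process(events, store: StateStore):
    """Stateful count-per-key processor; the compute loop itself holds no state."""
    for event in events:
        key = event["key"]
        store.put(key, store.get(key) + 1)

store = EmbeddedStore()
process([{"key": "trade-42"}, {"key": "trade-42"}, {"key": "trade-7"}], store)
print(store.get("trade-42"))   # 2
```

Putting the remote store behind the same interface is what lets state scale, recover, and be billed independently of compute, which is the elasticity and operational-simplicity benefit discussed in the episode.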
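
The semantic-layer discussion with David can be made concrete with a toy sketch: measures and dimensions are defined once with their SQL meaning, and a request is compiled against those definitions instead of asking a model to write raw SQL against the physical schema. This is not Cube's data model or API; the schema, names, and compile_query helper are invented for illustration.

```python
# Toy sketch of a semantic layer (not Cube's API). Business terms are defined
# once; a request is resolved against those definitions rather than generated
# as free-form SQL.

SEMANTIC_LAYER = {
    "measures": {
        "revenue": "SUM(orders.amount)",
        "order_count": "COUNT(*)",
    },
    "dimensions": {
        "signup_month": "DATE_TRUNC('month', users.created_at)",
        "country": "users.country",
    },
    "joins": "orders JOIN users ON orders.user_id = users.id",
}

def compile_query(measure: str, dimension: str) -> str:
    """Compile a (measure, dimension) request into SQL using the semantic layer."""
    m = SEMANTIC_LAYER["measures"][measure]       # unknown names fail here,
    d = SEMANTIC_LAYER["dimensions"][dimension]   # before any SQL is run
    return (
        f"SELECT {d} AS {dimension}, {m} AS {measure} "
        f"FROM {SEMANTIC_LAYER['joins']} GROUP BY 1"
    )

# An LLM (or a user) only has to pick names the layer already defines:
print(compile_query("revenue", "country"))
```

The point is that an out-of-vocabulary request fails loudly at lookup time instead of producing plausible but wrong SQL, which is the reliability gap between semantic-layer queries and direct text-to-SQL that the episode describes.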


About Tech on the Rocks

Join Kostas and Nitay as they speak with amazingly smart people who are building the next generation of technology, from hardware to cloud compute. Tech on the Rocks is for people who are curious about the foundations of the tech industry. Recorded primarily from our offices and homes, but one day we hope to record in a bar somewhere. Cheers!
