
Artificial Ignorance

Charlie Guo
Interviews with founders, investors, and deep thinkers on artificial intelligence and its impact on our world. For more deep dives and news stories, visit www.ignorance.ai.

Available Episodes

5 of 7
  • How a $3000/hour escort uses AI to automate sex work
    As someone who grew up with the Internet, I’ve occasionally found myself in strange corners of it. But more often than not, I’ve walked away from those experiences with more knowledge than before, and a constant reminder to stay open-minded.

    So when I stumbled upon Adelyn Moore, an escort who charges $3,000 per hour and tweets about using AI for sex work, I wanted to learn more. In our candid and thought-provoking interview, Adelyn - a self-described "autistic courtesan" - offers a fascinating glimpse into the world of modern sex work and its intersection with technology.

    As the adult industry grapples with the rise of AI-generated content and "digital girlfriends," Adelyn sheds light on the complex interplay between authenticity, technology, and human connection. Her experiences challenge common assumptions about sex work and reveal how the world’s oldest profession can leverage its newest technology.

    “When choosing a profession, I took the Lindy Effect literally.” – Adelyn Moore

    Three key takeaways:

    AI-generated adult content still struggles with realism. While there are attempts to create ever more realistic AI porn, the current technology still struggles to produce convincingly realistic images and videos.

    “The skin doesn't look real. The video quality of AI right now is mediocre. Image generation always looks weird - it often looks like hentai. It's not at a point right now where it's good enough in ways that can actually be utilized.”

    Likewise, imperfection and "realness" are increasingly valued in the adult industry. As “perfect” photos and videos become more prevalent, there's a growing appreciation for content that shows authenticity and flaws.

    “A lot of times I'll just disregard [online] photos because I'm like, ‘Okay, this photo just looks too manicured.’ It's just boring. There's this heightened level of sensitivity because people see so little of just being messy. I have a video on my OnlyFans where I'm awkwardly… like, I wish I could do a sexy thing where I rip open a condom with my teeth, and I'm just trying and I'm like, I can't. I don't know if that's particularly sexy. But I like that stuff.”

    The escort-client relationship offers a unique form of intimacy that may be difficult to automate away. The controlled environment and strict boundaries of these interactions allow for a level of openness and vulnerability rarely found in other social contexts.

    “When I first started [escorting], I always thought it was interesting how people would be honest with me - they would tell me stuff that they didn't tell anyone else in their lives. … People often have a certain amount of vulnerability with me immediately. It's also this weird thing where you have this parasocial relationship, because often people will be like, ‘I've been following you on Twitter for months,’ or ‘I created a special anon account just to follow you.’ You immediately have this degree of intimacy that I have not seen replicated in any other part of my life.”

    And three things you might not know:
    * Escorts hang out on social media just like everyone else; that thirst trap you’re scrolling past might actually be content marketing.
    * While most text-to-image generators like Midjourney have strict filters on nudity and adult content, other projects are working diligently to enable fully uncensored AI images.
    * Sex workers can perform pretty thorough background checks and screening on potential clients - including requiring IDs, phone numbers, and LinkedIn profiles.

    Artificial Ignorance is reader-supported. If you found this interesting or insightful, consider becoming a free or paid subscriber. This is a public episode. If you’d like to discuss this with other subscribers or get access to bonus episodes, visit www.ignorance.ai/subscribe
    --------  
    31:32
  • Saving the world with AI and government grants
    At OpenAI's DevDay, I had the pleasure of meeting Helena Merk, the CEO and founder behind Streamline Climate. Streamline is rethinking the way climate tech startups secure vital government funding by leveraging AI to navigate, automate, and optimize the labyrinthine grant application process.

    In our interview from March (before the release of GPT-4o or Claude 3.5 Sonnet), Helena shares insights on how Streamline Climate is not just saving time, but potentially accelerating our global response to climate change. From tackling bureaucratic hurdles to leveling the playing field for innovators, discover how AI is reshaping the landscape of climate tech funding - and possibly our planet's future.

    Four key takeaways:

    Bureaucracy is a silent killer. In Helena's experience, many climate grant applications are rejected right out of the gate because of clerical errors or misunderstandings of the grant's eligibility criteria. Ultimately, this wasted time and effort compounds across an entire industry.

    “Half of the reason you would get rejected is because of a clerical error in submitting your files correctly, or filing for something for which you're not eligible - the grant was never meant for you. So people are spending, on average, over 100 hours on a single grant application. And half of those have zero chance of ever winning.”

    Prompt engineering wins out over fine-tuning. OpenAI's advice to Streamline (which mirrors what I've seen elsewhere) is that prompt engineering can get you much farther than you may think. Most people believe they need fine-tuned models, but they really want things to "just work."

    “We've thought a lot about the tradeoffs and performance benefits of training models, fine-tuning, etc. And what we've come up with - and this was definitely also part of the advice from OpenAI - is that prompt engineering would get us there fastest in the cheapest way, and probably perform just as well, if not better, than training our own models.”

    Besides automating grant writing, there are many potential optimizations in the climate tech space. I learned a ton about the ecosystem around government grants, and we discussed some of Streamline's roadmap beyond just "AI for grant writing."

    “You don't get the money when you win a grant: you have to report on your progress and file all your receipts. After you spend money, you have to get reimbursed. What this means for startups is that they have to go get a loan. And that's silly. There's no way the government is going to change this process because of their own risk, but it means that every single company who's winning these grants needs to go get a working capital loan. There are people who provide specific financing vehicles for this, and we can easily play matchmaker [between grant recipients and financing companies].”

    Mission-driven founders have to strike a balance between mission and monetization. Helena and I talked about balancing the mission against the incentives inherent in taking VC funding and pursuing growth as a for-profit startup.

    “I've heard people refer to this as missionary versus mercenary type founders. Missionary founders are obsessed with whatever mission they're on, and they don't really care about anything else. Mercenary founders are really just driven by ‘I'm going to build a business that goes to the moon.’ And that is how most of the Bay Area operates, I think. And it leads to pretty decent business outcomes in one way. But I realized that that is never going to be enough for me. I would so much rather run out of money and keep grinding on something that I care deeply about than pivot into something that's going to be profit-generating and maximize returns.

    I think I learned that when I was working on my first company, Glimpse, where we were working on a video chat company during the pandemic - a pretty safe bet. And I cared very deeply about what we started on, which was helping to connect communities of people having deep conversations. It then turned into helping connect remote teams. And the further we got, the more transactional it felt. And I found it really hard.”
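    Helena's "prompt engineering over fine-tuning" point can be made concrete with a small sketch: instead of training a custom model, you pack the task instructions and the grant's eligibility criteria directly into the prompt for a general-purpose LLM. Everything below - the function name, the criteria, the output format - is a hypothetical illustration, not Streamline's actual system.

```python
def build_eligibility_prompt(grant_criteria: list[str], applicant_summary: str) -> str:
    """Assemble a single prompt asking a general-purpose LLM to check an
    application against a grant's eligibility criteria (no fine-tuning needed)."""
    criteria_text = "\n".join(f"- {c}" for c in grant_criteria)
    return (
        "You are reviewing a government grant application.\n\n"
        "Eligibility criteria:\n"
        f"{criteria_text}\n\n"
        "Applicant summary:\n"
        f"{applicant_summary}\n\n"
        "For each criterion, state whether the applicant meets it and why. "
        "Finish with a single line: ELIGIBLE or NOT ELIGIBLE."
    )

# Example usage with made-up criteria:
prompt = build_eligibility_prompt(
    ["Must be a US-registered company", "Must be pre-revenue"],
    "Acme Climate Inc., a Delaware C-corp with no revenue to date.",
)
```

    The practical advantage OpenAI's advice points at: updating the criteria list is a prompt edit rather than a retraining run.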
    --------  
    37:09
  • Getting to the top of the GPT Store (and building an AI-native search engine, too)
    Back in January, I wrote about what business models might make sense with OpenAI’s GPT Store. As part of that, I included a screenshot of the featured GPTs, which included Consensus, the number one “Research & Analysis” GPT since day one.

    That said, the company existed well before the GPT Store - they’ve been working for years on an AI-native search engine for research papers. They’re also a part of the latest AI Grant cohort. So I was pleasantly surprised to learn that one of Consensus's founders, Christian Salem, was a reader! I was equally impressed with our conversation - while I’ve included some key takeaways below, it’s a pretty wide-ranging interview covering RAG and search engine architecture, the value of the GPT Store, how the NFL thinks about product management (and might use AI in the future), and more.

    Three key takeaways:

    Vector search is not a silver bullet. Rather, it’s another tool that engineers can use as they build search infrastructure. It enables some new capabilities, but it also comes with tradeoffs.

    “Search engines are so much more than just semantic similarity. The vector databases are amazing - some of these encoding models are getting better and better. But there's so much that goes into finding relevant documents that isn't just about the distance between two points in vector space. There are things like phrase matching, applying filters, finding synonyms. Sometimes we suggest queries to users. There's fuzzy matching - if you screw up and make a typo, it's kind of weird, but a lot of the vector and encoding models are not as good at typos, and those can play tricks on them.”

    There’s real value in the GPT Store. I was surprised to discover that the GPT Store has driven many new users for Consensus, and they currently have ~50% higher retention than other channels. What's more, Consensus can capture some of that value by converting users to paid subscriptions - something entirely free services can't do.

    “So actually we get some of our best users from ChatGPT, which is not something that I would have predicted when we set out doing this. But I think the last time I looked, our day-30 retention is, I want to say, like 50 percent higher for users who we acquired through ChatGPT than every other channel. … It's actually turned into awesome users for us who not only use us in ChatGPT, but then come back to our website and use the web application day after day, and many of those users have converted to our premium subscription. So, so far it's resulted in a ton of value.”

    LLMs are not one-size-fits-all. Christian’s comments on using multiple models were similar to the ones from my interview with Andrew Lee - to build a fast, efficient system, you’ll likely need different LLMs for different use cases. There are so many tradeoffs around speed, cost, and quality that it’s hard for one model to win at all three.

    “We were just counting this out the other day: 15 features in the product are powered by LLMs. Only three of them use OpenAI models. The rest all use open source models that we hand fine-tune. … When we're assessing which models to use for a new feature or a new task within the product, I think there are a few really important criteria to go through. One, how similar is the task to OpenAI training data? If the task is, ‘Hey, take a bunch of text and summarize it,’ GPT-4 is so good at that task. It's seen that over and over and over again. So many users have asked it to do that in ChatGPT, and so they have RLHF on that. A super basic summarization task is very similar to some core GPT behavior. … Another thing that you obviously have to look at is cost. So, when you ask a question in Consensus, for the top 20 papers that we return, we always do a RAG answer from the papers relevant to your question. That is very similar to something that GPT-4 could probably do pretty well. However, it would not be economical to make 20 OpenAI calls on every single search for both premium and free users.”

    And three things you might not know:
    * There still isn’t a great way for LLMs to parse structured PDF data, especially in table format. In the case of Consensus, it’s still a human-powered task.
    * The GPT Store has started testing monetization with a handful of US-based creators, but it is not yet broadly available.
    * While the NFL may look like any other company, its "shareholders" are the 32 team owners, meaning new product launches often have to get the approval of owners and their friends and family.
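    Christian's point that relevance is more than vector distance can be sketched as a toy hybrid scorer that blends exact keyword matching with embedding similarity. This is an illustrative assumption about how such blending can work, not Consensus's actual architecture; the vectors passed in would come from whatever encoding model you use.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def keyword_score(query: str, doc: str) -> float:
    """Lexical signal: fraction of query terms that appear verbatim in the document."""
    terms = query.lower().split()
    doc_terms = set(doc.lower().split())
    return sum(t in doc_terms for t in terms) / len(terms) if terms else 0.0

def hybrid_score(query: str, doc: str, q_vec: list[float], d_vec: list[float],
                 alpha: float = 0.5) -> float:
    """Blend lexical and semantic relevance; alpha weights the keyword signal."""
    return alpha * keyword_score(query, doc) + (1 - alpha) * cosine(q_vec, d_vec)
```

    A query that matches a document both lexically and semantically outscores one that matches on only one signal - which is why pure vector distance, as Christian notes, isn't the whole story.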
    --------  
    41:11
  • How Intercom is transforming customer support with AI
    In our post-ChatGPT world, few companies have moved to integrate generative AI as quickly as Intercom. Mere weeks after ChatGPT first went viral, Intercom had its AI-powered chatbots in the hands of beta testers, and it's continued to iterate on its products since.

    That's why I was eager to talk with Fergal Reid, the VP of AI at Intercom and one of the company's key champions of generative AI. Before leading AI at Intercom, Fergal co-founded an ML startup, Synference, which was acquired by Optimizely. We had a chance to talk about Intercom's history building with AI, and how this new technology shift is going to impact customer support reps (for better or worse).

    In addition to discussing the future of customer support, we discussed Fin AI Copilot, a new AI product from Intercom that's launching today. Like Intercom's existing AI, Fin AI Copilot can pull from internal and external documents, knowledge bases, and other content. But it can also learn from and reference past conversations, and it has been built to improve agent interactions, not replace them.

    Three key takeaways:

    Intercom is expanding from chatbots that talk to customers to copilots that augment agents. The company's first AI product, Fin (now Fin AI Agent), was designed to chat with users about your knowledge base before escalating to human support staff. Now, Intercom is bringing AI capabilities to the support staff themselves, with new features that can write answers, reference past conversations, and supercharge support staff.

    “This framing of a copilot for support really stuck with us. It's a tool. It's got to empower you. It's got to make you faster, more efficient, more effective. And we have for years been really deeply looking at what's the job a support agent does. So often a support agent, especially if they're new or maybe they're getting a question they haven't gotten before, they look at a question that comes in and they're like, ‘I don't know what to do here.’ And then they have to go and read the company documentation, or they have to go into Intercom, search, and find the time their more experienced colleague answered this question a week or two before. And we were like, can we use an LLM to do an end run around that? Can we use an LLM to just make that experience really seamless?”

    The earliest adopters of generative AI are companies that were already experimenting with machine learning and had executive buy-in to move quickly. At Intercom, Fergal's team was able to deploy the first ChatGPT-enabled features within 7 weeks of ChatGPT's launch. That wasn't an accident - they had been playing with this technology for years and had laid the groundwork with the rest of the organization.

    “When ChatGPT came out, that was at the end of November, and we had features live with our own internal CS team by the holidays that year. ... I'm lucky that we have a very experienced team of some really great folks here who had been in the space for a while. And we had the executive support we needed - because we had done the groundwork, because we had talked a lot about how, ‘Hey, we think there's something disruptive here’ - so we could get the alignment we needed internally to force the velocity of a project like that.”

    Bringing AI to customer support could create more jobs, not fewer. We've discussed this point in the context of software engineers, but my conversation with Fergal echoed a lot of the same points when it comes to customer support. When each CS rep is much more valuable as a result of AI, does that result in more reps or fewer? We don't yet have a clear answer.

    “While we don't know how this plays out, one thing we're really confident of at Intercom is that customer support is not close to servicing all the demand. There is huge latent demand. We see this in every industry study we do. Every time we talk to end users, they want customer support to be dramatically better, faster, friendlier than it currently is. So many customer support experiences are really terrible. And so there's this absolutely latent demand for drastically more customer support. I have to believe that for many businesses, if it becomes way cheaper per customer support rep to deliver great customer support, there's tons more demand there to service.”

    And three things you might not know:
    * Intercom's AI chatbot, Fin, has a 42% resolution rate on average (and up to an 80% resolution rate in the best cases).
    * When ChatGPT first launched (and even when GPT-4 launched), RAG wasn't yet an established technique for the very first companies trying to build with it - they were inventing the techniques from scratch less than a year ago.
    * Klarna, a “buy now, pay later” company, has said their AI customer support agent does the work of 700 full-time employees.
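    The retrieval step Fergal describes - surfacing the time a colleague already answered a similar question - can be sketched with simple token-set overlap as a stand-in for whatever retrieval a real copilot uses. The data, function names, and similarity choice here are hypothetical illustrations, not Intercom's implementation.

```python
def jaccard(a: str, b: str) -> float:
    """Similarity between two texts as overlap of their word sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def find_similar_conversation(question: str, past_conversations: list[dict]) -> dict:
    """Return the past conversation whose question best matches the new one."""
    return max(past_conversations, key=lambda c: jaccard(question, c["question"]))

# Hypothetical conversation history an agent's copilot could search:
history = [
    {"question": "how do I reset my password", "answer": "Use the reset link."},
    {"question": "how do I export my billing invoices", "answer": "Settings > Billing."},
]
best = find_similar_conversation("I forgot my password, can I reset it?", history)
```

    A production system would embed conversations and search a vector index rather than comparing word sets, but the shape of the problem - rank past answers by similarity to the incoming question - is the same.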
    --------  
    36:25
  • Bridging AI and human creativity
    Something that I’m often thinking about is AI’s ongoing impact on the arts. Clearly, Midjourney and Stable Diffusion have unlocked a new engine for creativity, but it’s just that: an engine. Most of us wouldn’t get much value out of a V8 if it was just dropped in our garage, and most professionals probably can’t go from diffusion model to productive workflow without some extra steps. So designers, especially UX and Figma designers, are still safe from AI for the time being.

    But there is a lot of change on the horizon - and one of the best people to discuss that change is Harrison Telyan, the co-founder of NUMI, which offers startups access to a guild of vetted, professional designers for a flat monthly subscription. Before founding NUMI, Harrison was the founding designer of Imgur, and he graduated from RISD - the Rhode Island School of Design, a world-class design program.

    Harrison and I talked about his experience rapidly scaling a prior business in Africa, how AI is eating the design world (and the jobs at risk of being eaten), NUMI’s unique, engineering-esque approach to providing a design service, and much more.

    Three key takeaways:

    Real feedback comes from paying customers. In Harrison’s experience, founders can be reluctant to reach out and talk to their customers directly - and are sometimes even reluctant to charge customers at all.

    “[Something] that I see a lot in founders is how unwilling - maybe not even unwilling, but they have forgotten - to actually start the business at some point. I always recommend you chop your customers in half and start charging them - you will see very quickly the type of feedback that you'll get when you try to separate someone from their money. That's when the real feedback comes.”

    AI has a ways to go before replacing talented designers. Harrison is bullish about AI’s impact on the design community - but he also admits that areas like entry-level graphic design work (as opposed to higher-level brand identity or UX work) are going to be at risk from AI pretty soon.

    “The real problem that I see, though, is none of these [AI] companies have design leaders behind their prompting or their code, and so naturally they're capped. … I'm looking at the landscape and I'm quite bullish on how AI is going to serve the design community. We hear all the time from Guild members at NUMI, ‘Is AI gonna replace me?’ No. It's just gonna allow you to do work faster and more efficiently, and it's gonna take away the kind of rote administrative stuff of design.”

    Not all design agencies are the same. At first, it’s easy to think of NUMI as just another “agency.” But Harrison pushes back on that label - first, because they think of their design community as a guild, not as independent contractors, and second, because they’re building tools and education for the guild to get better, rather than subcontracting work.

    “We always cringe at the word ‘agency’ when someone's describing us, because on the surface, call us whatever you want, but we know what we are. And what we are is a company that was started by designers, for designers. And that may not mean much, but when you look at our competition, all of them were started by people in marketing, and then they just create these commodified versions of us that ask for the lowest price at the highest quality with the most communication. We just take a different approach, and that approach is: how can you lift up the designer through technology? How can you remove all the BS from the admin side of what they have to do so that they can get back to designing? It comes down to leveraging tech to remove the BS, to make the designer move faster, and to put them up on a pedestal. It's actually very similar to how Airbnb thinks about its hosts. Put them up on a pedestal and the rest will work itself out. And that's what we do.”

    And three things I learned for the first time:
    * Boda bodas are bicycle and motorcycle taxis commonly found in East Africa.
    * Figma plugins suffer from bit rot - they need to be regularly maintained to keep up with underlying platform changes.
    * Many founders seek design services too early, when they really need to be experimenting and talking to customers as much as possible.
    --------  
    39:40

About Artificial Ignorance

Interviews with founders, investors, and deep thinkers on artificial intelligence and its impact on our world. For more deep dives and news stories, visit www.ignorance.ai.