Emmett Shear — co-founder and longtime CEO of Twitch, which sold to Amazon for roughly $1 billion in 2014 — joins Shaan (whose company was acquired by Twitch, and who reported to Emmett there) for a wide-ranging conversation: creativity as a learnable faucet, the user interviews that revealed what streamers really wanted, why consumer is back because of AI, a framework for thinking about LLM limitations, and his honest probability estimate for AI catastrophe (3-30%). Also: why he’s not starting another company, what Paul Graham actually does to founders, and what he observed in Bezos and Jassy.
Speakers: Emmett Shear (guest, Twitch co-founder and CEO, sold to Amazon for ~$1B), Shaan Puri (host, former Twitch employee)
Creativity Is a Faucet [00:00:00]
Shaan: Somebody said creativity is not like a faucet — you can’t just turn it on. You apparently disagree with that pretty strongly.
Emmett: For me, it’s very much like a faucet. I can just write and keep generating ideas. Most people believe it’s a sacred special thing that only happens when conditions are right. For me, it’s not.
Shaan: Is that innate or practiced?
Emmett: If there’s a nature-nurture break on this, it happened very early. By the time I was ten, you’d have seen the same thing. But I don’t think I’m that unusual — I think most children have this. Most five-year-olds can generate ideas almost indefinitely. As you get older, you learn to stomp down ideas that seem bad. You learn not to say dumb things. The more pressure you put on yourself not to say dumb things, the more your inner idea generator gets disrupted.
When I’m brainstorming, most of my ideas are bad. The standard advice in brainstorming sessions — technically wrong but practically right — is “no bad ideas.” What it actually means is: don’t stop at the bad idea. Keep going. You’re trying to disable the censor most people have that says: don’t be stupid, don’t be stupid.
I think I was just mal-socialized — the censor never got installed. And I think I’m the one who’s unchanged. Everyone else is the weird one — their wellspring of creativity got crushed somewhere along the way.
The mechanism is simple: you start from some capability, you try it, you receive negative feedback — internal or external — and you learn not to do it. Less practice leads to less skill, which leads to less practice. It’s the same cycle that creates “I’m bad at math.” Everyone can do basic math. They got stuck in a spiral and now they identify with the limitation.
Have You Tried Solving the Problem by Solving the Problem [00:10:00]
Emmett: I had an idea on the way here. My philosophy: “Have you tried solving the problem by solving the problem?”
There’s a meme on the internet — I think it started with Weird Sun Twitter — where the format is “have you tried solving the problem by [ignoring the problem / spending more money on it / etc.].” My favorite version is: have you tried solving the problem by actually solving the problem.
That sounds like a Zen koan. But what you notice when you try to help people is that they’ll have a problem where the solution is obvious and they’ll come to you asking how to deal with the consequences, or how to avoid having to solve it, or who has solved this before. The point of the saying is: sometimes the way to solve the problem is to actually try solving it. Don’t deal with the symptoms. Don’t find a workaround. If the website’s not fast enough, instead of finding a loading spinner that distracts people — what if you just made it fast?
That said, it’s only good advice when the problem is actually solvable and people are flinching away from it. Some problems you should be looking for a hack around, because the problem itself isn’t worth solving. But in my experience, especially with people in tech who love hacks — they’re always looking for the fast clever solution — the reminder to just solve the problem is more often the useful advice.
Shaan: You spot it, you got it — that advice is probably advice you need.
Emmett: Yes. It’s the smart person version of “whoever smelt it, dealt it.” If you notice this pattern in other people, it’s because you’ve seen it in yourself.
The User Interviews That Built Twitch [00:18:00]
Shaan: I once asked you for stuff from the early days of Twitch when we were working on a problem, and you sent me your user interview notes. You had called maybe 200 people who were already doing video game streaming and asked them three focused questions. What triggered that?
Emmett: Two things. First, I made the decision that streamers were the product. At Justin.tv we’d always said streamers and viewers were equally important. I finally decided: no. This product is about streamers. If it doesn’t work for streamers, it doesn’t work for anybody.
The second thing was an epiphany: I genuinely had no idea why anyone would stream video games. I’d been building products for these people for four years at Justin.tv without understanding why they did the thing they did. I was making it up. There are answers out there — these people know the answers — I just hadn’t asked.
So I did about forty interviews. I didn’t ask what we should build. I knew from experience they had no good product ideas. I wanted to understand: why are you streaming? What have you tried using for streaming? What did you like about it? How did you start? What’s your biggest dream?
The key question was always the follow-up. They’d say something like “I wish you’d build me a big red button.” I’d say: great, I built you the button. What does it do for you? Why is your life better after I built it? And then they’d tell me the real thing: I’d make money that month, or I’d get new fans who love me, or my fans would watch more of my live streams.
Shaan: What did you actually learn that surprised you?
Emmett: Money surprised me completely. I’m a programmer. I’d been a summer intern at Microsoft. You make good money as a programmer. The idea that someone would be genuinely excited to make three dollars a month streaming had not occurred to me as a real thing. I nearly undersold it — I was like, sir, you realize this would only produce a tiny amount of money, right? And they were absolutely excited about it.
I knew they wanted a bigger audience. But the degree to which they valued even one more viewer — the degree to which they didn’t care about anything else — was the revelation. Polls were a perfect example: everyone always requested live polls. Polls are a feature that sounds cool. But does it give them a bigger audience? No. Does it make them money? No. Does it make them feel more loved than just asking chat to post something? Not really. The feature was essentially worthless, possibly negative.
The hard thing to teach: you have to care fanatically about these people as people. Accept their reality as base reality. But you need to have zero regard for their specific product ideas. The product is your job. Nobody’s going to tell you the answer. You have to take responsibility for finding it and defending it when people say you’re wrong.
Why Consumer Is Back [00:32:00]
Emmett: For the first time in maybe five to seven years, it genuinely feels like consumer internet is interesting to start. And that’s because of AI.
The thing that’s special about AI for consumer — unlike B2B SaaS — is that in B2B the experience isn’t the product. What it does is the product. People will jump through hoops if it works. In consumer, you’re selling the experience. The experience is the thing. So when you can reimagine the experience from scratch, you reopen every segment. Same thing mobile did.
Mobile came and suddenly every segment was open again: what would you build for photos if you assumed mobile? The answer wasn’t obvious — Instagram and Snapchat are blindingly obvious in retrospect, but nobody correctly predicted them beforehand. AI does the same thing. What would you build if you assumed AI exists? Every segment opens up again.
Shaan: Give me a specific idea.
Emmett: The database inversion. A huge number of consumer apps can be thought of as a database where users convert messy real-world information into structured rows. Yelp is a great example: restaurants are rows, facts about them — location, hours, menu — are columns, and users go out into the world and create those rows for you. The photo attached to a restaurant is a fact attached to the row.
Now: what if you skip that? What if you just save the raw video of someone’s dining experience — them talking about the meal, what they liked, what they didn’t — and the AI watches it and extracts the metadata on demand?
Here’s why that’s powerful: you decide later that you need noise level data. In the traditional model, you have to go restart data collection, ask users to answer a new question. In the video model, you go back and tell the AI: also grab noise levels from all these videos. It re-watches them and extracts what you need. Or a user asks me: what’s the noise level at this restaurant? The AI watches the relevant videos in real time and answers.
The thing you could not build before — which was already in the raw data — is now accessible. The startup has the same raw material as Yelp. The entire value of Yelp’s meticulously curated database becomes less defensible. That’s the disruption opportunity. Take any product where users fill out forms and replace the forms with video.
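The inverted model can be sketched in a few lines: store the raw record as-is, then materialize columns on demand. Everything named here is hypothetical — in a real system, `extract` would send the video to a multimodal model with a prompt like “what was the noise level?”; the keyword matcher below is only a stand-in so the sketch runs.

```python
from dataclasses import dataclass

@dataclass
class Review:
    restaurant: str
    transcript: str  # stand-in for the raw video of a dining experience

# Raw data is stored as-is: no forms, no fixed schema decided up front.
reviews = [
    Review("Luigi's", "Great pasta, but it was so loud we could barely talk."),
    Review("Luigi's", "Quiet Tuesday night, lovely carbonara."),
]

def extract(review, attribute):
    """Stand-in for the AI pass that re-watches a video and pulls out one fact.
    Here we keyword-match the transcript; a real system would call a model."""
    if attribute == "noise_level":
        text = review.transcript.lower()
        if "loud" in text:
            return "loud"
        if "quiet" in text:
            return "quiet"
        return "unknown"
    raise ValueError(f"no extractor for {attribute}")

# A column nobody collected up front can be materialized later, on demand,
# from data that was already sitting there.
noise = [extract(r, "noise_level") for r in reviews]
print(noise)  # → ['loud', 'quiet']
```

The design point is that the schema is no longer a commitment made at collection time: adding a “column” means adding an extractor, not restarting data collection.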
LLMs: Crystallized vs. Fluid Intelligence [00:46:00]
Emmett: My theory of LLMs: they have very high crystallized intelligence and relatively low fluid intelligence.
Crystallized intelligence is knowledge and skill built from experience — things you can apply because you’ve seen them before. Fluid intelligence is the ability to reason through novel problems you haven’t encountered. The current generation of LLMs is exceptional at crystallized tasks — any task with explicit examples in the training set or a linear interpolation between examples. It struggles significantly with novel combinations.
The gear question is a good test: seven gears on a wall, each meshed with the next, a flag attached to the seventh, currently pointing up. You turn the first gear right — where does the flag end up? A human can work through it: gear one turns right, gear two left, and so on; with seven gears the last one turns the same direction as the first, so the flag rotates clockwise and ends up pointing down. Current LLMs know the alternating-gear principle but struggle to chain it correctly through seven steps because this specific configuration isn’t in the training set.
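The chain-of-direction reasoning is easy to check mechanically. A minimal sketch (the function name is mine) that propagates rotation through a chain of meshed gears, where adjacent gears counter-rotate:

```python
def gear_directions(n_gears, first="CW"):
    """Direction of each gear in a meshed chain: neighbors counter-rotate."""
    other = {"CW": "CCW", "CCW": "CW"}
    dirs = [first]
    for _ in range(n_gears - 1):
        dirs.append(other[dirs[-1]])
    return dirs

# Seven gears, first one turned to the right (clockwise):
dirs = gear_directions(7, "CW")
print(dirs[-1])  # → CW: the seventh gear, and the flag on it, turns clockwise
```

Odd-numbered gears share the first gear’s direction, which is the fact the puzzle hinges on.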
That’s telling. They know the pieces but can’t assemble novel configurations reliably, which supports the “overfit to training data” interpretation: brilliant at combining things they’ve seen, weak at truly novel problem construction.
Shaan: So what’s the practical implication?
Emmett: The fact that they don’t generalize is less important than most people think. The domain of all explicit human knowledge — everything anyone has ever written down — is immensely valuable even without fluid intelligence. It’s crystallized intelligence over the broadest possible domain. You hit these weird boundaries where it completely fails at simple novel problems, but within the domain of written human knowledge, it’s spectacularly useful. That domain is bigger than it sounds.
The one to watch: fluid intelligence in LLMs. That’s where the real phase transition would happen. There’s a project called ARC that’s explicitly trying to build evaluations for fluid intelligence in models. That’s the thing to monitor — not task benchmarks, which keep going up safely, but novel problem-solving capability, which has qualitatively different implications.
AI Safety: 3 to 30 Percent [00:58:00]
Shaan: Is AI going to kill us all?
Emmett: Maybe. I think the probability of a very bad outcome is somewhere between three and thirty percent. That’s a wide range — I don’t believe in point estimates for this because the uncertainty is real. But a three-to-thirty-percent chance of something worse than nuclear war is enough that you should urgently address it, even if you’re not convinced it’s likely.
Here’s why I take it seriously as a techno-optimist: it’s because I’m optimistic about AI that I’m worried. If I thought it was overhyped — just a clever parlor trick — I wouldn’t be worried. It’s the belief that it will keep improving rapidly that makes it frightening.
The analogy is synthetic biology. I’m optimistic about synbio. It shows a lot of promise for hard health problems. It’s also genuinely dangerous because it will let people engineer more dangerous diseases. Both of those are true. We regulated nuclear weapons even though nuclear power is good. We regulate who can buy precursor materials for dangerous biology. That’s wise and I’m glad we do it.
The AI case is harder to communicate because the threat is more abstract. It’s not posed by a particular thing the AI will do — it’s posed by its capability being used well by either bad actors or good actors asking for the wrong things.
The Garry Kasparov analogy: I can tell you with confidence that Garry Kasparov is going to checkmate you at chess, even if I can’t tell you exactly which piece will deliver the checkmate. You don’t have to know the specific mechanism to know the outcome, given a sufficient capability differential.
The mistake people make is imagining AI smarts as being like Data from Star Trek — fast at math but basically dumb about a lot of things. That’s not what “smarter than humans” means. Imagine the smartest person you know. Now make them think faster, make them better at everything — great writer, picks up synbio in an afternoon — and then more capable at self-improvement, so they can spin up a better version of themselves. That’s what superintelligence actually means, and that person obviously has enormous leverage over the world if they’re not aligned with human values.
The additional step that people miss: it doesn’t need bad motivation to cause harm. A good person asking for good things — maximize free cash flow of this corporation while extending its lifetime as long as possible — ends up with the Earth’s core converted to cars. The problem isn’t malice. It’s misaligned optimization with enormous capability. Even if you restrict the AI to being just an oracle, a good oracle answers your questions with plans — and plans tend to be self-fulfilling prophecies.
Shaan: What are you personally doing about it?
Emmett: Educating myself, mostly. There are people like Eliezer Yudkowsky banging the drum loudly. I don’t need to add to that volume. I’ve been trying to figure out how to thread the needle — what specific interventions actually help. One thing I’ve gotten to: we need better evals for fluid intelligence, not just task performance. Performance on known tasks will keep improving and isn’t intrinsically dangerous. General problem-solving capability is the thing to monitor.
Not Starting Another Company [01:10:00]
Shaan: Are you lucky or good? And are you going to try again?
Emmett: I had multiple failures before succeeding, so I must be at least partially lucky. And I’m not planning to start another company. I kind of did that. It was fun. I don’t feel the pull to do it again.
What I’m drawn to now is writing. I’ve been thinking about what has actually changed my life the most, and the honest answer is essays and ideas people had shared. I feel like I’ve reached a stage where I have something to say. I want to do what Paul Graham or Taleb did — put a worldview into the world in a form that’s digestible and shareable.
There are two components to doing that. One is long-form: you need enough time in someone’s head to install a voice, a way of thinking about things. The other is pithy summary forms — the sayings that become memetic, that enable people who’ve read you to explain your ideas to people who haven’t. Both matter. The long-form installs the voice. The short form spreads it.
Paul Graham’s Method [01:18:00]
Shaan: What does Paul Graham actually do to people?
Emmett: He’s not Tony Robbins in delivery — not loud, not pushing. What he does is: “You know what you should do.” It’s always followed by something that takes what you’re doing and recontextualizes it as something much bigger.
He told me once about Justin.tv: “You should go hire reality TV stars and build this for unscripted entertainment.” That’s a bad idea for a bunch of reasons. But it recontextualized what we were doing. We’re not making an internet live streaming show — we might be building the general infrastructure for unscripted entertainment. And that’s a much bigger idea.
He said to someone about their calendar startup: “You know what you should do, make it programmable so people can integrate their to-do list and email, so it’s the central hub of their entire online information management.” Also a bad idea. But it raised the ceiling. What if what you’ve built is already almost that?
The gift of that is: by the time you’ve rejected ten of those ideas, you can’t help but start hearing the Paul Graham voice in your own head. The feeling that your ceiling is higher — that you should be asking “what if this works, what could it become?” instead of “what are all the hard problems here?” — that installs permanently. He’s not the Rick Rubin of startups; he’s closer to a soft-spoken Tony Robbins: the singularly valuable thing you get is not a genius idea but the belief that you can go find it, that what you’re doing matters.
Bezos, Jassy, and Huffman [01:28:00]
Shaan: You presented to Bezos twice a year for the first few years Twitch was at Amazon. What did you observe?
Emmett: Two things. First: he remembered everything from prior meetings. I don’t think he was reviewing extensive notes. He just remembered the important things. Second, and more remarkable: consistently, he would read our plan and then ask about something we hadn’t thought of, or give us an idea we hadn’t considered. Not things I’d thought of and dismissed — things I hadn’t thought of at all.
That’s hard. I generate a lot of ideas. Getting a genuinely new idea about something I’ve been thinking about for a decade is rare. He did it reliably, once a meeting, every time we presented. That’s a real thing.
Shaan: Steve Huffman?
Emmett: I actually got to shadow him for a day once. What I observed: when bad news was delivered, he wasn’t moved. He was grounded. He didn’t jump to solutions. He asked questions. And he ended the meeting with a clear “here’s what we do.” Crisis composure in a leader isn’t no emotion — it’s engaged but not activated. I’ve tried to imitate that in my own leadership.
Shaan: Andy Jassy?
Emmett: He has a specific criticism style that I found remarkable. He can tell you that you’ve failed to produce the results he expected, while simultaneously conveying: I know you’re capable, I know you’re smart, I know you worked hard, I believe in you — and somehow I’m still confused about why we’re here. Not accusatory. Like he came to your side of the table: “How did we wind up here? I should have said something earlier. I don’t understand what happened but I know we can fix it.” You walk away feeling supported and also crushed that you haven’t delivered. And you immediately want to go fix it.
I’ve tried to learn that one. It’s harder than it looks. When it’s done without genuine belief in the person, it comes across as condescending. It only works because it’s sincere.
Shaan: Emmett, thanks for doing this. I’ve been trying to get you on since before I even worked at Twitch. You’re one of the reasons I moved to San Francisco.
Emmett: Thank you. I really appreciate it.