
The Innovators Studio with Phil McKinney

Phil McKinney
Latest episode

964 episodes

  • The Innovators Studio with Phil McKinney

    How to Overcome Confirmation Bias

    06/05/2026 | 14 mins.
    Confirmation bias is shaping your decisions right now. Not occasionally. Every day. And the unsettling part is that the smarter you are, the harder it is to see it happening.
    By the end of this episode you'll know exactly what confirmation bias is. How to recognize when it has taken over a room. And three specific practices that actually work. Not borrowed frameworks, but what forty years of high-stakes decisions has taught me.
    Let's get into it.
    What Is Confirmation Bias?
    Confirmation bias is your brain's tendency to seek out, favor, and remember information that confirms what you already believe, filtering out everything that contradicts it.
    Most people think that just means seeking out information that agrees with them. That's part of it. But here's what makes it truly dangerous.
    Once you form a strong belief, three things happen automatically.
    Unequal Evaluation. Picture two studies landing on your desk. One says your strategy is working. One says it isn't. You read the first and nod. You read the second and start looking for the flaw: the methodology, the sample size, the funding source.
    Selective Memory. Your brain doesn't store evidence equally. What supports your belief stays accessible. What contradicts it becomes harder to recall the longer you hold the belief.
    The Backfire Effect. When someone directly challenges a belief you hold, your brain treats it as a threat. The response isn't reconsideration. It's defense. Studies show you actually leave the argument more convinced than when you entered it.
    Add these up and the effect compounds: the longer you hold a belief and the more it matters to you, the harder it becomes to change, no matter how much evidence says you should.
    Confirmation Bias in Today's World
    Confirmation bias has always been part of human thinking. What's changed is the environment around it.
    Algorithms feed you content that matches what you already believe. Social media shows you opinions from people who think like you. Search engines rank results based on what you've clicked before. Every system you interact with daily is built to confirm your existing views. Not by accident, but because confirmation keeps you engaged.
    The result compounds. The more confirming information you consume, the stronger your existing beliefs become. The stronger your beliefs become, the more your brain filters out opposing information. The more that information gets filtered, the harder it becomes to update your thinking, even when updating is exactly what the situation demands.
    This is mindjacking in action. The systematic replacement of your thinking by systems built to do it for you. And confirmation bias is one of its most powerful tools.
    It's visible everywhere. In public discourse where people can no longer agree on basic facts. In organizations that keep funding failing strategies long after the evidence says stop. In leaders who build teams designed to tell them what they want to hear.
    You might assume that smarter, more experienced people are less susceptible to this. The research says otherwise.
    The Smartest Person in the Room Gets It Wrong
    Here's what surprises most people.
    Confirmation bias doesn't get weaker as you get smarter. It gets stronger.
    Dan Kahan at Yale ran a study. He gave people a math problem where the correct answer contradicted their political beliefs. The smarter the person, the more likely they were to get the answer wrong, in the direction that protected their belief.
    More intelligence, applied more effectively, in service of the conclusion they'd already reached.
    A smart person who has formed a wrong belief is better at defending it. They find flaws in the opposing data faster. They construct more sophisticated arguments. They're more convincing to others and to themselves.
    I watched this play out in a board meeting. A CEO had championed a major strategy. Three separate analyses came back contradicting it. Each time, he found a different flaw in the methodology. By the end of the meeting he'd convinced the room the data was unreliable. The strategy continued. The outcome was exactly what the data predicted.
    He wasn't dishonest. He was skilled. His intelligence was working against him. And everyone in that room let it happen.
    If you're intelligent, experienced, and confident in your judgment, you are not immune to confirmation bias. You are more vulnerable to it.
    If you know someone who is always the smartest person in the room, send them this episode. They need it more than most.
    How to Overcome Confirmation Bias: What Actually Works
    Knowing about confirmation bias doesn't stop it. I know this from experience, not from research. I've been in rooms where everyone understood exactly what was happening and it happened anyway.
    What works is different from what you've probably been taught.
    Catch It in Yourself: The Flip Debate
    The moment I've most reliably caught confirmation bias operating in myself hasn't come from a checklist or a framework. It's come from a specific kind of conversation.
    I keep a small group of trusted advisors, people I call my kitchen cabinet. These aren't peers. They're almost never inside the organization. They have no stake in the outcome and no incentive to tell me what I want to hear. When I'm about to make a significant decision and I feel the pull of certainty, I take it to one of them.
    The conversation has a specific structure. I argue my position, fully and genuinely, the strongest version I can make. Then I stop. And I argue the opposite. Not a token acknowledgment of the other side. A real debate. I take the side I'm most resistant to and make the best case I can for it.
    What happens in that second argument is where confirmation bias shows up. The gaps. The assumptions I'd been protecting. The evidence I'd felt the urge to dismiss. When you're forced to argue a case you don't believe, you find the things you didn't want to see when you were arguing the one you do.
    An outside advisor is essential. Someone who will push back, ask hard questions, and notice when the flip argument is being faked. You can't do this with someone who needs something from you. The absence of stakes is what makes the honesty possible.
    Catch It in a Room: Two Signals to Watch For
    I've learned to watch for two signals that tell me confirmation bias has taken over a room. Both are visible before the decision is made. Almost everyone misses them.
    The first signal is the unwillingness to debate the other side.
    When a room has really decided, before the discussion is officially over, nobody wants to argue the opposing position. Not even hypothetically. Raise the other side and watch what happens. Eyes go flat. The conversation moves on. Someone changes the subject. If a room can't genuinely engage with the strongest case against the preferred direction, confirmation bias is driving.
    The second signal is circular justification.
    Listen for reasoning that keeps returning to its own starting point. The evidence for the decision is the decision itself. When you can't find an external reason, just a restatement of the conclusion, confirmation bias is driving.
    When I hear circular justification in a room, I stop the conversation. Not to embarrass anyone. To name what's happening. "We're not evaluating anymore. We're confirming. Let's go back to the evidence."
    That single intervention has changed the outcome of more decisions than any framework I've ever been taught.
    Change How You Decide: Full Options, Real Challenge
    Here's the most consistent change I've made in my own decision-making, and it comes directly from watching what confirmation bias costs people: I force a full pros and cons analysis on every serious option. Not just the one I'm leaning toward.
    This sounds obvious. Almost nobody does it.
    The natural pull is to build the case for the option that already feels right and compare it against the weaknesses of the alternatives. That's confirmation bias disguised as analysis. What I do instead is give every option on the table the same treatment. The best case for it. The best case against it. Without knowing in advance which one I'm going to choose.
    For decisions that carry real weight, I take it further. I bring in my brain trust: direct reports who will tell me what I don't want to hear, kitchen cabinet advisors, trusted board members. I ask specifically for the challenges. Not validation. Not enthusiasm. The places where the thinking is weak, the assumptions that might not hold, the evidence I might have filtered out.
    One question has changed how I approach every major decision: what am I not seeing?
    The answers, from people who have no incentive to protect my view, are exactly where the confirmation bias lives.
    Confirmation Bias Exercise: Try This Today
    This week, before you finalize any decision you've already started leaning toward, do one thing.
    Find one person outside your organization, someone with no stake in the outcome, and run the flip debate. Argue your position fully. Then stop and argue the opposite, with the same effort and commitment.
    Don't summarize the other side. Argue it. Make the best case you can for the view you're most resistant to.
    Notice what comes up in that second argument. The gaps. The assumptions. The evidence you'd been setting aside.
    That's where your confirmation bias is living.
    Run that exercise this week. Not once. Every time you feel the pull of certainty on a decision that matters.
    The Benefits of Overcoming Confirmation Bias
    The payoff from these practices compounds over time.
    Examined beliefs are more reliable than accumulated ones. Decisions that accounted for opposing evidence hold up better than decisions that filtered it out. Judgment that evaluates rather than confirms earns a different kind of trust from the people around you.
    Beyond your own decisions, catching confirmation bias makes you harder to capture. Every algorithm, every platform, and every persuader around you is built to exploit it. Seeing it operate in yourself reduces their leverage over your thinking.
    That's what these practices build. Not certainty. Something better.
    Examined confidence.
  • The Innovators Studio with Phil McKinney

    Why Most Organizations Aren't Funding Innovation

    29/04/2026 | 21 mins.
    Twelve official definitions for R&D. Zero agreement.
    The US government publishes at least a dozen distinct official definitions across agencies, accounting standards, tax authorities, and international bodies. Not one agrees with the others on where research ends and development begins.
    Trillions of dollars flow through R&D budgets every year. Boards approve them. Investors evaluate them. Governments subsidize them. Analysts benchmark them. And the term at the center of all of it has no settled definition.
    A company can gut its research investment without triggering a single alarm on its income statement. Researchers who gained rare access to confidential federal R&D data found exactly this: when companies face financial pressure, they cut research while leaving development essentially untouched, and the combined number barely moves. Every benchmark, every board conversation, every investment thesis built around the R&D line may be built on sand.
    Innovation, ideas made real, requires both. Research is how you find the idea. Development is how you make it real. Strip out the research and you're not innovating, you're iterating on what already exists. Strip out the development and you're just experimenting. The problem is that nobody in the room knows which one they're actually funding, because the definition that would tell them doesn't exist.
    Someone needs to draw the line. This episode is about why nobody has, and the definition I think should replace the chaos.
    By the end, I'm going to put that definition in front of you and ask you to push back on it. Not to agree. To tell me where it breaks.
    How We Got Here
    Four institutions took a run at defining R&D. Each one got it right for their own purposes. None of them got it right for yours.
    Frascati: Built for Governments
    In June 1963, OECD economists met at a villa in Frascati, Italy, south of Rome, and produced what became the international standard for measuring R&D across nations. Now in its seventh edition.
    The Frascati Manual divides R&D into three tiers: basic research (theoretical work with no application in view), applied research (original investigation toward a specific practical objective), and experimental development (using existing knowledge to produce new products or processes). To qualify, an activity must be novel, creative, uncertain in outcome, systematic, and transferable.
    Used by governments across roughly 75 countries. Solid for what it was designed to do: let nations compare R&D investment on consistent terms.
    What Frascati cannot tell you: whether a specific company's spending is creating competitive advantage. It counts the type of activity. It doesn't assess what the activity produces for the organization doing the spending. A company can satisfy every Frascati criterion investigating something every competitor already knows. The knowledge is new to them. That is enough.
    The accountants drew a different line, for a different reason, with a different consequence.
    FASB: Built for Accountants
    In October 1974, the Financial Accounting Standards Board issued Statement No. 2, Accounting for Research and Development Costs, now codified as Topic 730. Every public company filing under US GAAP operates under it.
    The rule: all R&D costs expensed as incurred. Research, development, basic, applied: one line on the income statement. Their definition: research is a planned search aimed at discovery of new knowledge. Development is the translation of research findings into a plan or design for a new product.
    The rationale is explicit in the original standard. Future benefits from R&D are, in FASB's language, "at best uncertain." Expense everything immediately. The standard solved the problem it was asked to solve, which was accounting treatment: when to recognize the cost, not whether the cost was strategically sound.
    The consequence: sustaining engineering, feature maintenance, and incremental product updates all land on the same line as genuine exploratory research. Nobody looking at the income statement from outside can see the difference. The number is technically accurate and analytically opaque.
    Abraham Briloff, the late accounting professor at Baruch College, put it plainly: "Accounting statements are like bikinis. What they show is interesting, but what they conceal is significant." He was talking about financial reporting broadly. He could have been writing specifically about the R&D line.
    Researchers at Duke and London Business School spent years tracking corporate scientific output and found that it declined steadily across industries even as headline R&D spending kept rising. The combined number was hiding a substitution. Nobody on the outside could see it.
    Outside the United States, a different standard governs, and it creates a comparison problem most analysts never account for.
    IFRS: Built for International Investors
    IAS 38 governs R&D under IFRS, and its treatment differs from FASB in one significant way.
    Research costs are always expensed, same as FASB. But development costs can be capitalized as an asset on the balance sheet once a company can demonstrate technical feasibility, intent to complete, ability to use or sell the result, likely future economic benefit, adequate resources, and reliable cost measurement.
    A European company that capitalizes its development phase carries those costs as an asset: lower expenses in the period, higher total assets. An identical US company expensing everything under FASB takes the full hit immediately: higher expenses, lower assets. Same underlying investment. Incomparable financial pictures.
    Run the standard industry benchmark, R&D as a percentage of revenue, and you may conclude the US company is investing more aggressively. You may be comparing the same dollar invested under two different accounting regimes. Roughly 169 jurisdictions use IFRS. The United States does not. India uses an adapted version. Japan maintains its own standards board. The benchmark the industry trusts most is meaningless for cross-border comparison, and almost nobody says so.
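    The divergence described above can be made concrete with a toy calculation. This is a simplified sketch, not accounting advice: the $100M figure, the 70% capitalizable-development share, and the straight-line five-year amortization are all illustrative assumptions, not values from either standard.

```python
# Sketch: the same R&D outlay reported under two regimes.
# dev_share and amort_years are illustrative assumptions.

def report_rd(total_rd: float, regime: str, dev_share: float = 0.7,
              amort_years: int = 5) -> dict:
    """First-year expense and balance-sheet asset for an R&D outlay.

    Under US GAAP (FASB Topic 730) everything is expensed as incurred.
    Under IFRS (IAS 38) qualifying development costs may be capitalized
    and amortized; research is still expensed. dev_share is the fraction
    assumed to meet the IAS 38 capitalization criteria.
    """
    if regime == "us_gaap":
        return {"expense": total_rd, "asset": 0.0}
    if regime == "ifrs":
        capitalized = total_rd * dev_share
        year1_amort = capitalized / amort_years
        return {"expense": total_rd - capitalized + year1_amort,
                "asset": capitalized - year1_amort}
    raise ValueError(f"unknown regime: {regime}")

us = report_rd(100.0, "us_gaap")   # $100M: all expense, no asset
eu = report_rd(100.0, "ifrs")      # $44M expense, $56M new asset
```

    Under these assumptions the identical $100M investment shows up as $100M of expense for the US filer but only $44M of expense plus a $56M asset for the IFRS filer, which is exactly why the R&D-to-revenue benchmark breaks across borders.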
    Section 174: Built for Tax Authorities
    The Internal Revenue Code adds another layer. Section 174 governs the deductibility of what the US tax authority calls "research or experimental expenditures," and the definition is not the same as FASB Topic 730.
    A company's R&D for tax purposes and its R&D for financial reporting can cover different activities and produce different numbers. The Tax Cuts and Jobs Act of 2017 tightened this further: domestic R&D expenses that were previously deductible immediately now must be amortized over five years, international over fifteen. The definition of what qualifies shifted when the timing rules changed.
    Within one country, one company, three definitional regimes apply simultaneously: Frascati for any government reporting, FASB for the income statement, and Section 174 for taxes. A single dollar of R&D spending can be classified three different ways depending on who's asking.
    The Gap None of Them Fill
    Four frameworks, built by four institutions, for four different purposes. Not one was built for the question that actually matters.
    Is this investment creating new knowledge that gives us a capability nobody else can easily replicate?
    The gap between them is where innovation decisions actually live. The National Science Foundation recognized the problem clearly enough that it publishes a separate annotated document just to catalog the competing definitions, because they're too inconsistent to assume any two readers are using the same one. That gap isn't an oversight. It's a structural consequence of four institutions doing their own jobs well. The question practitioners need answered was nobody's institutional job.
    You've been in the room. The R&D number is on the slide. Nobody asks what's inside it, because the accounting standard doesn't require an answer, and the room has learned not to expect one.
    So it went unanswered. Until now.
    A Better Definition for R&D
    Research is work directed at creating new knowledge where the outcome is genuinely uncertain and the knowledge cannot be readily obtained from existing sources. Development is the translation of that knowledge into products, services, or processes that meaningfully advance an organization's capability in ways competitors cannot easily replicate.
    Four elements define it:
    Genuinely uncertain outcome. If you know what you're going to get before the work starts, it's engineering execution, not research. The uncertainty doesn't have to be total. Most applied research has a likely direction. But there has to be real doubt about whether the approach works, whether the knowledge emerges.
    Cannot be obtained from existing sources. This is the one nobody puts in writing. If the knowledge is already in the literature, available from a consulting engagement, or present in a competitor's published work, finding it again isn't research. Generating new knowledge and capturing existing knowledge are different activities. Only one belongs here. This criterion alone would reclassify a significant portion of what companies currently call R&D.
    Advances capability competitors cannot easily replicate. Development only qualifies when it translates research into something that genuinely moves the organization forward competitively. Sustaining engineering doesn't pass it. Feature parity doesn't. Competitive catch-up doesn't. All real work, none of it development under this definition.
    Agnostic to accounting jurisdiction. This definition doesn't tell you how to expense or capitalize anything. That's already governed by whichever standard applies. What it does is establish what genuinely belongs in each category, regardless of where the company files. That makes it usable across FASB and IFRS companies without translation.
    There is a simpler way to put it. For any project in your R&D budget, ask two questions. First: are we creating new knowledge, or executing against something we already know? If you're executing, it's not research. Second: does this translate into a capability competitors cannot easily replicate? If not, it's not development either. It's product engineering, valuable and necessary, but a different budget category entirely. Three buckets: Research, Development, and Product Engineering. That taxonomy, applied honestly across a typical portfolio, would reclassify a significant share of what most companies are currently reporting as R&D.
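    The two-question triage reduces to a few lines of logic. A minimal sketch; the project names and their yes/no answers are hypothetical illustrations, not a real portfolio.

```python
# Sketch of the two-question triage: Research, then Development,
# else Product Engineering. Example projects are hypothetical.

def classify(creates_new_knowledge: bool, hard_to_replicate: bool) -> str:
    """Apply the two questions in order."""
    if creates_new_knowledge:
        return "Research"
    if hard_to_replicate:
        return "Development"
    return "Product Engineering"

portfolio = [
    ("new print-head chemistry investigation", True,  True),
    ("translate lab result into shippable module", False, True),
    ("annual feature refresh of existing product", False, False),
]
for name, knowledge, moat in portfolio:
    print(f"{classify(knowledge, moat):20} <- {name}")
```

    Run honestly across a real budget, the third bucket tends to absorb much of what the combined line currently labels R&D.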
    The Call
    I'm not asking FASB to rewrite Topic 730.
    What I am asking: that the people who actually make innovation decisions start applying a definition built for the question they're trying to answer.
    If you run an R&D function: apply this definition to your current portfolio. Not to change the accounting. To see what's actually in the category and what isn't. The gap between what your budget calls R&D and what this definition calls R&D will tell you something worth knowing.
    If you sit on a board: ask what portion of the R&D line is directed at new knowledge creation versus sustaining existing products. If no one in the room can answer, you're governing a number you don't understand.
    And if you think the definition is wrong, tell me. Where should the line be drawn differently? What element doesn't hold? What did I miss? That's not a polite invitation. That's the actual point of this episode.
    Definitions become standards when enough serious people apply them consistently and make the case until the institutions catch up. The four frameworks we inherited were each built by an institution serving its own purpose. This one is built for the people making the decisions.
    The most consequential line in any company's budget is the one separating what builds the future from what protects the present. Nobody drew it clearly. It's past time someone did.
    The idea was never the hard part. It never is. The call is.
    If this episode shifted something for you, subscribe wherever you listen to podcasts. On YouTube, hit subscribe and the bell so you don't miss the next one. And if you want to go deeper every Monday, Studio Notes is free at philmckinney.com.
    Until next time. See the pattern. Make the call. The Innovators Studio | philmckinney.com
  • The Innovators Studio with Phil McKinney

    R&D Spending Is the Most Misleading Number in Business

    15/04/2026 | 16 mins.
    Every public company's R&D number is a lie hiding in plain sight.
    Not because anyone falsified it. Because the number was never built to tell the truth. It was built to satisfy an accounting standard written in 1974. And for fifty years, boards, analysts, and CEOs have been making billion-dollar innovation decisions based on a number designed by accountants to solve a different problem entirely.
    Here's what makes this genuinely strange. The real number exists. The government has been collecting it from every major US company for decades. It would answer the question every innovation leader and investor actually needs answered. And it is locked away by federal law. Confidential. Never published. Never seen by the people who need it most.
    It's sitting in a federal database right now. And there's a way to estimate it for any public company, without asking anyone's permission.
    I know it exists because I spent years building it from the inside.
    Why the R&D Signal Was Blurry
    When I was running innovation at HP, we discovered this problem firsthand. We had a connection between R&D investment and gross margin that held up across decades of HP history. Better than anything Wall Street was using. But the signal was blurry. None of us could figure out why.
    The answer came from a question someone on the team asked almost as an aside.
    What if R&D isn't one thing?
    Research and Development Are Not the Same Thing
    Think about what actually lives inside a typical R&D budget.
    There's a team somewhere investigating whether a new approach could enable a capability that doesn't exist yet. No product defined. No spec written. Asking whether something is even possible.
    And there's a team building the next version of a product that ships in eighteen months. Spec locked. Timeline set. Engineering executing against a defined target.
    Both show up on the same line in the budget. Both get called R&D. Both count equally toward the number that gets reviewed every quarter.
    They are not the same thing.
    One is Research. The other is Development.
    Research is the work you do when you don't yet know what you're building. The output is understanding. New knowledge that might enable future products nobody has designed yet. You can't know exactly what you'll find. If you already knew, it wouldn't be research.
    Development is the work you do when you know exactly what you're building. The spec exists. The product is defined. The question isn't what to make. It's whether it can be made, on time, at cost, at quality.
    One creates the future. The other delivers the present. And for fifty years, every public company in America has been required to report them as one indistinguishable number.
    When we split the HP data along that line, Research on one side and Development on the other, the signal sharpened immediately. Research spend, measured against gross margin three to five years later, was a meaningfully stronger predictor than the combined number had ever been.
    The blur hadn't been in the gross margin data. It had been in the R&D number itself. Two fundamentally different things, averaged together, producing a number that looked precise and predicted almost nothing.
    But splitting R from D at the company level was only the beginning. The model was still lying to us. Just more quietly.
    Why Company-Level R&D Splits Still Mislead
    Even with the split, something was still soft. HP wasn't one business. It was dozens. Printers, PCs, servers, software, each running on different timelines, different technology cycles, different competitive dynamics.
    What if the R/D split meant something different depending on where it was applied?
    We pushed it to the product line level. Then further, to the platform level within product lines.
    Printers were the clearest example.
    HP's printer business wasn't one story. There were platforms built on established technology. Mature ink systems, proven print head chemistry, products that had been shipping for years. And there were platforms built on genuinely new core technology. New chemistry. New mechanisms. New approaches to fundamental problems that nobody had solved yet.
    Research investment by platform told a completely different story than Research investment by product category. The Research going into new technology platforms had a completely different relationship to future margin than Research going into mature platforms. Different time horizons. Different risk profiles. Different margin implications years down the road.
    Laptops told the same story. A traditional consumer laptop line and a high-performance portable workstation weren't the same investment. One was Development-heavy. Defined product, known market, engineering executing against spec. The other had genuine Research behind it. Unsolved thermal problems, new form factor constraints, and materials questions that hadn't been answered yet.
    When a single R&D assumption is applied across all of that, treating every dollar the same regardless of what it actually does, the signal disappears into the average. Peanut butter across the portfolio.
    The model only got honest when it got specific. Research by platform and Development by platform, matched against the margin performance of those specific platforms years later. Which platforms were building future margin? Which ones were running on margin that past Research had already bought?
    We could see it because we were inside the company. The question is whether anyone on the outside could ever see the same thing.
    The R&D Data the Government Collects and Won't Release
    Outside the internal budget process, everyone sees the same thing: a single line on the income statement.
    The US government recognized decades ago that the combined R&D number was analytically useless. So they built a system to collect the real one.
    The National Science Foundation runs a survey called the Business Enterprise Research and Development survey. The BERD survey. Every year, roughly 47,500 US companies are required to report their R&D spending broken into three categories: basic research, applied research, and experimental development. The split that every board and every investor needs to see. Mandatory. Collected. Verified.
    And then locked away.
    The firm-level data is confidential under federal law. The NSF publishes only industry-level aggregates. So every company fills out this survey and reports its real R/D split to the government. That data sits in a federal database. And the boards, investors, and analysts who need it most cannot access it.
    Researchers at Northwestern and Boston University were given rare access to that confidential data. What they found is striking. When companies face financial pressure and cut R&D, they don't cut Development. They cut Research. Almost entirely. Development barely moves.
    Every earnings squeeze. Every activist campaign. Every cost optimization program. Systematically targeting the one part of R&D that builds future margin. And because the combined number barely moves, nobody on the outside sees it happening.
    That's not a coincidence. That's the accounting standard doing exactly what it was designed to do: produce one clean number for the income statement. It was never asked to protect the future.
    How to Estimate the Research-to-Development Split Without Inside Access
    So what can actually be done without access to the locked data?
    More than most people realize.
    Step 1. Find the industry baseline. The aggregate BERD data is public at the sector level. Ask an AI tool for the Research-to-Development ratio for the relevant industry. That's the benchmark. Everything else gets measured against it. A company spending 8% of its R&D on Research in an industry where the average is 25% is telling you something the combined number never would.
    Step 2. Look at the gross margin trend compared to peers. Gross margin over time is the most honest external signal of Research health. A company with a declining margin relative to peers, while reporting flat or growing R&D spend, is almost certainly shifting the mix toward Development. The math works in the other direction, too. An AI tool can pull this comparison for any public company in minutes. This is exactly the signal that was invisible at HP until it was too late.
    Step 3. Look at patent trends compared to peers over time. Patents are an imperfect but useful directional indicator. Not because more patents always means more Research. It doesn't. But a sustained decline in patent output relative to peers, alongside flat R&D spend, suggests the investment is maintaining existing products rather than creating new knowledge. Combined with the gross margin trend, it starts to triangulate where the split actually sits.
    None of these three steps requires access to an internal budget. All of them can be done in an afternoon with public data and an AI tool. Together, they produce a working picture of the R/D split that the income statement was never designed to reveal.
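If you want to run the three steps yourself, they reduce to a few lines of arithmetic. A rough sketch in Python — every figure below is an invented placeholder, not real company data, and the threshold in the final check is an assumption for illustration:

```python
# Sketch of the three-step R/D-split estimate described above.
# All inputs are illustrative placeholders, not real company data.

def research_share_vs_baseline(company_research_pct: float,
                               industry_baseline_pct: float) -> float:
    """Step 1: compare a company's Research share of R&D to the
    sector baseline from aggregate BERD-style data. Returns the gap
    in percentage points (negative = under-investing in Research)."""
    return company_research_pct - industry_baseline_pct

def margin_divergence(company_margins: list[float],
                      peer_margins: list[float]) -> float:
    """Step 2: change in the gross-margin gap versus peers over the
    period. A negative value (falling behind peers) alongside flat
    R&D spend suggests the mix is shifting toward Development."""
    start_gap = company_margins[0] - peer_margins[0]
    end_gap = company_margins[-1] - peer_margins[-1]
    return end_gap - start_gap

def patent_trend(counts: list[int]) -> float:
    """Step 3: simple directional slope of annual patent output."""
    return (counts[-1] - counts[0]) / (len(counts) - 1)

# Invented inputs: 8% Research share vs a 25% industry average,
# margins slipping relative to peers, patent output declining.
gap = research_share_vs_baseline(8.0, 25.0)            # -17.0 points
drift = margin_divergence([38, 36, 34], [37, 37, 37])  # -4.0 points
slope = patent_trend([120, 100, 85])                   # -17.5 per year

flags = [gap < -10, drift < 0, slope < 0]
print(f"Warning signals: {sum(flags)} of 3")
```

Each function on its own proves nothing; the point, as in the episode, is the triangulation when all three point the same direction.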
    What the R&D Split Revealed at HP That No One Outside Could See
    When Hurd took over in 2005, HP was spending $3.5 billion on R&D. Roughly 4% of revenue. By 2009, his last full year as CEO, that had dropped to $2.8 billion. Revenue had grown significantly over that period, so the percentage had fallen further still, to under 2.5%. Both the dollar amount and the ratio were declining simultaneously while the company got larger.
    Wall Street tracked the combined number. The board reviewed it. Nobody raised a structural alarm.
    The Research component within that total was well below the industry average for comparable technology companies. Not slightly. Significantly.
    The margin consequences arrived years later. They always do.
    What Happens When the Definition of Research Doesn't Exist
    The R/D split gave us a real predictive signal. We ran with it. The conversations were sharper. But the team kept pulling on a thread that nobody expected.
When we looked closely at what was actually being called Research, project by project and budget line by budget line, two kinds of work that didn't belong together kept appearing. Work aimed at fundamental discovery. Work aimed at solving a specific, defined problem using entirely new methods. Both labeled Research. Up close, they behaved differently, predicted different things, and were treated very differently when budgets got tight.
    So we went looking for the agreed definition. The official standard that would tell exactly where to draw the lines inside Research.
    It didn't exist. Not the way we needed it to. And without it, everything we'd built was sitting on sand.
    How do you build a predictive model on a definition that doesn't exist?
    That's the next episode.
    If this helped you see something you might have missed, subscribe wherever you listen to podcasts. On YouTube, hit subscribe and the bell so you don't miss the next episode. And if you want to go deeper every Monday, join us at Studio Notes — free, at philmckinney.com.
    Until next time. See the pattern. Make the call.
    The Innovation Metric Bill Hewlett and Dave Packard Used

    01/04/2026 | 19 mins.
    Every public company in the technology industry measures innovation spending the same way. R&D as a percentage of revenue.
    Why? Because Wall Street tracks it. Boards benchmark it. CEOs get fired over it.
    And it tells you almost nothing about whether the spending is working.
    Bill Hewlett and Dave Packard knew that. From the very beginning, they measured something different. Something the rest of the industry has been ignoring for seventy years. And the proof was sitting in a paper that Chuck House pulled out and sent to me after a conversation at a Computer History Museum board meeting.
    By the end of this episode, you'll know what that metric is, why it works, and why the one everyone else uses makes it nearly impossible to tell whether your innovation investment is building the future or just burning cash.
    Here's how I found it.
    The Question That Wouldn't Let Go
    In the last episode, I talked about the argument with Mark Hurd. The question was over whether HP should cut R&D as a percentage of revenue to match Acer. I knew Mark was fundamentally wrong. But I couldn't prove it. The only metric on the table was R&D as a percentage of revenue. That was what Wall Street expected. It's what shareholders expected. It's what the board expected.
    But I couldn't argue against it, because I didn't have the data.
    I needed a better metric. So I decided to go back to the beginning. HP's complete financial records dating back to the 1940s. Division by division. R&D project by R&D project. The actual operating data. I got access to all of it. The HP archive team gave me direct access to Bill and Dave's original notebooks.
    Now, data alone wasn't enough. It was mountains and mountains of data, and you're trying to extract the signal. What is the trigger in that data?
    The conversation that cracked it open happened outside HP.
     
     
    The Man with the Medal of Defiance
    I was at a Computer History Museum board meeting, standing next to Chuck House, and I shared with him the struggle I was having.
    A little context on Chuck. He spent twenty-nine years at HP. He was the Corporate Engineering Director and he helped launch dozens of products. He's also the recipient, from David Packard himself, of the Medal of Defiance.
    The Medal of Defiance was given to him because David had told him at one point to kill a product line. Chuck went around that decision, put the product into the catalog, shipped it, and it turned into a phenomenal success. When David gave Chuck the medal, the citation was something along the lines of: "for going above and beyond the stupidity of management and doing what was right."
Chuck and Raymond Price co-authored a book called The HP Phenomenon, published by Stanford Press. It's the deep dive into the history of HP's innovation culture, and into the metrics from the Bill and Dave days that put in place the structure behind HP's success.
    By the time I'm at HP, Chuck had long since moved on. He was running Media X at Stanford, the university's research program on innovation, media, and technology. But we both served on the Computer History Museum board.
    At that board meeting, I shared the argument I'd had with Mark and the search for a better metric. I had a strong feeling there was something around gross margin. That R&D investment impacted gross margin. But a feeling isn't an argument. I needed data. I needed to correlate R&D spend to margin, and that's extraordinarily hard to do when you've got all these different product lines and divisions.
    Chuck got this little smile on his face and said, "I need to send you something."
    The Paper and the Whiteboard
    What he sent me was a paper. A journal paper he and a few of his colleagues had written decades before. And it laid out the connection between research investment and margin performance. The correlation I suspected but couldn't prove was right there on the page.
    I read it that night. The next morning I emailed Chuck, and I was just really excited. What they'd written decades ago matched what I was finding in the data.
    That email exchange turned into an invitation. I asked Chuck to come to HP Labs. We met in a conference room in Building 3, the main building for HP Labs at the time. And I'll tell you, I look back on this and it makes me smile a little, because this conference room was just down the hall from Bill and Dave's offices. HP preserved those offices exactly as Bill and Dave left them. You can walk in there today, see their desks, see their offices, just as they were on their last day. There's something about being that close to where it all started that makes the history feel less like history and more like unfinished business.
    Chuck walked up to the whiteboard and drew two things.
    On the left side: R&D as a percentage of revenue. The metric every company reports. The metric Mark used to argue HP was overspending. Chuck's point was simple. That metric tells you how much you're spending. That's it. Nothing about whether your products are any good. Nothing about whether customers value what you built. It's an input metric pretending to be an output metric.
    Two ways to improve the ratio: spend less on research, or sell more of what you've already got. Neither of those is innovation. You can manipulate R&D as a percentage of revenue by cutting your R&D spend, or you can cut prices to drive top-line revenue. But neither has any connection to measuring whether your innovation is actually working.
    On the right side, he drew gross margin. The distance between the cost to make something and what the customer pays for it. Chuck said: that gap is a direct measure of differentiation. Solve a problem nobody else can solve, and customers will pay for that difference. Margin expands. Build a product that looks like everyone else's, and customers have no reason to pay more. They'll shop you. Margin compresses.
    Then he drew the line connecting both sides. Research investment flows in. If the research produces differentiated products, gross margin expands. That expanded margin funds the next round of research. A virtuous cycle.
    But only if you're watching margin. The moment you manage to the spending ratio instead, the cycle breaks. The boardroom conversation stops being about whether research is producing differentiation. It becomes about whether the spending number looks right compared to some peer.
    That's what happened with Mark. HP's PC group margins were compressing toward commodity levels. The response, driven by that revenue-ratio metric, was to cut research spending to match the compression. Exactly backwards. Compressing margins are the alarm bell. Fix the research pipeline. Fix your innovation. Not just more innovation, but good innovation. Don't defund it.
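Chuck's whiteboard argument reduces to a few lines of arithmetic. A hypothetical sketch — every figure here is invented purely to show the mechanics of how the spending ratio can "improve" while the honest signal deteriorates:

```python
# Hypothetical figures only: how R&D-as-a-percentage-of-revenue can
# "improve" while gross margin, the honest signal, deteriorates.

def rd_ratio(rd_spend: float, revenue: float) -> float:
    """The input metric: what you spent, as a share of revenue."""
    return 100 * rd_spend / revenue

def gross_margin(revenue: float, cogs: float) -> float:
    """The output metric: the gap between cost and what customers pay."""
    return 100 * (revenue - cogs) / revenue

# Year 1: healthy differentiation.
print(rd_ratio(4.0, 100.0))        # 4.0% of revenue on R&D
print(gross_margin(100.0, 60.0))   # 40.0% gross margin

# Year 5: R&D cut and prices cut to drive top-line revenue.
# The reported ratio looks "better"; margin tells the truth.
print(rd_ratio(3.0, 120.0))        # 2.5% -- looks more "efficient"
print(gross_margin(120.0, 90.0))   # 25.0% -- differentiation fading
```

Both moves that flatter the ratio — cutting the numerator or inflating the denominator — leave the margin signal worse, which is exactly the breakage Chuck drew on the board.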
    Bill and Dave's First Product, and What It Actually Proved
    Standing at that whiteboard, I could see it running through HP's entire history.
The HP 200A audio oscillator. 1939. HP's first commercial product. Competitors were selling oscillators for over $200. Bill and Dave were selling theirs for $54.40.
    Now that's not because they undercut the market. What Bill figured out as part of his master's degree project at Stanford was that by using a light bulb inside the circuit as a self-regulating component, you could smooth the output in a way competitors couldn't match. Technically superior instrument. Radically cheaper to build. Walt Disney bought eight of them for Fantasia.
    The founders tracked the gap. Cost versus what customers pay. Not total revenue. That gap is gross margin. And that gap funded everything that came after. A lower-priced product, a higher-quality product, and the margin it generated is what drove HP's ability to continue to reinvest.
    David Packard codified it. He described what he called the six-to-one ratio. Products at HP were considered genuinely successful only when the profit from a product over time was six times the cost of developing it. If it was lower than that, it wasn't generating enough. And this is also how Bill and Dave decided which product lines to kill off. The ratio determined where research dollars were earning their return and where they weren't.
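The six-to-one test is simple enough to express as a one-line check. A minimal sketch, using invented figures (only the 6x threshold comes from the episode):

```python
# Packard's six-to-one test as described in the episode: a product was
# genuinely successful only if its lifetime profit reached six times
# its development cost. The profit/cost figures below are invented.

def packard_ratio(lifetime_profit: float, dev_cost: float) -> float:
    """Return on the research dollar: lifetime profit per dev dollar."""
    return lifetime_profit / dev_cost

def keep_product(lifetime_profit: float, dev_cost: float,
                 threshold: float = 6.0) -> bool:
    """The kill/keep decision rule: does the product clear the bar?"""
    return packard_ratio(lifetime_profit, dev_cost) >= threshold

print(keep_product(90.0, 10.0))   # 9x  -> True, earning its keep
print(keep_product(30.0, 10.0))   # 3x  -> False, candidate to kill
```

Note the denominator: it's development cost, not headcount or budget size, which is why the biggest-budget products weren't automatically the winners.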
    The products that crushed that ratio weren't the ones with the biggest R&D budgets or the most engineers. They were the ones earning the highest return on the research dollar, because customers paid a premium for what the research produced.
    And here's what this enabled: self-financing. No debt. No banks. No Wall Street ninety-day pressure. That was back before HP was even public. It was the freedom to invest in research on a ten-year horizon, and that's only possible with healthy margins.
    At HP's margins, spending landed at about eight to ten percent of revenue.
    Why Eight to Ten Percent Is Not a Contradiction
    Now you might hear "eight to ten percent of revenue" and think I'm contradicting myself. I just spent ten minutes telling you that R&D as a percentage of revenue is a useless metric.
    Here's the difference.
    Bill and Dave didn't start with the percentage and work backwards. They started with margin. They funded the research that kept margins healthy, and the spending that produced happened to land at eight to ten percent. The percentage was a byproduct, not a target. The moment you flip that and make the percentage the goal, you've lost the plot.
    That's the distinction the entire industry missed.
    Chuck drew all of this in about twenty minutes on a whiteboard. Decades of institutional knowledge, distilled into one diagram. And the thing that hit me hardest wasn't the analysis. It was the realization that HP had already figured this out. The knowledge was in a paper that had been sitting around for decades. The company had just forgotten.
    What was old had become what was new. HP didn't need a breakthrough. It just needed to remember.
    Confirming the Pattern: Art Fong and John Young
    After the session with Chuck, I reached out to two other people who'd been there in the early days.
    Art Fong. I've talked about Art many times on this show, and there's an interview with him in the archive. He was the sixth R&D engineer Bill Hewlett ever hired. At one point in the 1960s, twenty-seven percent of HP's total revenue came from Art Fong's innovations and projects.
    And John Young. John was the first CEO after the founders stepped back, after Bill and Dave retired. He took HP from $1.3 billion in revenue to $16 billion.
    I had the same discussion with both of them about R&D as a percentage of revenue, about margin. And they both confirmed it. They shared their own stories about margin priority, the six-to-one ratio, and their direct conversations with Bill and Dave. That series of conversations with Chuck, Art, and John, capturing all of that history, really drove me to refine the thinking on the R&D-to-margin connection.
    So what did I do next? I back-cast against the entire HP history. Division by division. Is it predictive? Can you use a metric to actually predict? That's what turned an insight into something defensible in a boardroom.
    But here's the thing. This isn't just an HP problem. Most companies never had the margin insight. They started with R&D as a percentage of revenue because that's what Wall Street asks for, and they've never questioned it.
    Margin would have caught it. Margin starts telling you the truth years before the revenue line does. By the time you see revenue take a dip, the damage is done. That is the result of decisions made three, five, ten years prior. Margin compression is the early warning. Differentiation is fading. Research is not producing what it needs to produce.
    Half the Answer, and a New Problem
    Walking out of HP Labs that day, I thought I'd found the answer.
    Track margin, not spending. Watch the output, not the input.
    It took me another year to realize I'd only found half of it.
    When I started tracing where HP's R&D dollars were actually going, division by division, I found a problem hiding inside two letters.
    R and D.
    We say it like it's one thing. It's how we report it in financial filings. It's how Wall Street looks at it. It's how the press views it. But it's not one thing. Research and development are two completely different activities, with completely different time horizons, different risk profiles, and different impacts on the business. The moment you combine them into a single line item, you can move money from one to the other, and nobody outside the building can tell.
    That's what we're going to get into in the next episode. The split nobody sees.
     
     
    Here's a question for you. If you've found a way to connect R&D spending to actual business outcomes in your company, how do you do it? What metric are you using with your leadership to make the difference? Drop it in the comments. I read every one of them, and the best answers end up shaping future episodes.
    If this episode changed how you think about innovation investment, hit subscribe so you don't miss the next one. And share this with someone in your company who's fighting this fight right now. They'll thank you for it.
    Two ways to keep going between episodes. Studio Notes comes out every Monday. That's where I take apart a real company's innovation decisions using public data. This week I dig into PayPal's innovation health. You want to check that out. Studio Sessions, what you're watching right now, drops every Wednesday. This is where the decisions happened. The real rooms, the real calls, what went right and what went wrong.
    Show notes and the full analysis are at philmckinney.com.
    The idea was never the hard part. It never is. The call is.
    The R&D Metric Mark Hurd and HP Got Wrong

    25/03/2026 | 13 mins.
    Twenty years. Nearly one thousand episodes on this show. And starting today, we're going to try something a little different this season.
    Season 21 is about the decisions that actually determine whether innovation lives or dies inside any organization. The real calls. Not the fluff stuff we read in academic textbooks. I want to actually put you in the rooms where these decisions are happening. What went right. What went wrong.
    My objective is to expose you to the patterns in innovation decisions so that you can recognize them. Recognize them in yourself, in the people you need to influence, long before you step into any landmines.
    So let's get into it.
    The Encounter on the Top Floor of Building 25
    Making generational decisions on innovation investment can be a make-or-break moment. What I refer to as a CLM, a Career Limiting Move. In my case, it started with a chance conversation with Mark Hurd, HP's CEO.
    Let me take you back to 2005. HP headquarters is on Page Mill Road in Palo Alto, referred to internally as Building 25. The top floor is where all of the executive offices are. That's where Mark's office was.
I was up there doing some meetings and got snagged by Mark. Now, Mark had a reputation. He was a big numbers guy. He believed in what he called extreme benchmarking. You tore into your competitors' numbers. You knew your own numbers in and out.
    Others had warned me about this. He had a famous quote that everybody shared: 
    "Stare at the numbers long enough, and they will eventually confess."
Mark believed you could not lead a critical role at HP if you did not know your numbers cold, inside and out. Didn't matter whether it was sales, CTO, a function, or a division. And Mark tested everyone, not just the leadership team. He would randomly stop employees and ask them for their numbers based on what group they worked in. It was non-stop. It got to the point where support staff were constantly preparing briefing books for managers, VPs, and leaders, just in case they got nabbed by Mark.
    In my case, I happened to be walking past his office. Mark waved me in. I sat down, and he immediately started drilling me on the CTO numbers.
    The number he focused on was R&D as a percentage of revenue.
    The Broken Benchmark: R&D as a Percentage of Revenue
Now, if you've been a regular listener of this show, you know my opinion of that metric. R&D as a percentage of revenue is a meaningless number. It is absolutely meaningless. But every public company CEO at an innovation-dependent company, all the tech companies, AI companies, even automotive, they live by this number. It's a number that Wall Street looks at. You have to report it as part of your quarterlies, and from there it's simple math.
When Mark grilled me, he was focused specifically on the PC group at HP. HP's number at the time for the PC group was about one and a half percent. R&D as a percentage of the PC group's revenue. Acer, which was a key competitor, was at 0.8%. Less than one percent. Roughly half of HP's number.
Apple was at four percent.
    Mark's question, and he was really pounding on this, was: How do we get our ratios in line with Acer? Basically, he was saying: how do we cut costs so that our R&D expense as a percentage of revenue equals Acer at 0.8%?
    This is exactly the problem with choosing the wrong metric.
Now I'm going to quote somebody who I think was probably one of the most insightful leaders in the business world. Charlie Munger. If you've ever watched any of his talks, he had a really strong opinion on certain metrics. Specifically EBITDA, earnings before interest, taxes, depreciation and amortization. Charlie referred to EBITDA as BS earnings. It was a metric Wall Street swore by, and Munger said it hid more than it revealed. His exact words: "Every time you see the word EBITDA, just substitute the word 'bullshit' earnings."
    R&D as a percentage of revenue is the same problem in a different disguise. It's the metric that makes every company look like it's investing when all it's doing is spending.
Mark was using a broken instrument to make a generational decision. If you make decisions based on R&D as a percentage of revenue, and then you do comparisons like "let's make our numbers look like Acer," what you are actually deciding to do is cut your R&D. That is generational. You will destroy a company's innovation capability over the next ten to twenty years before you can even have a hope of rebuilding it.
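The arithmetic behind the metric is trivial, which is part of the problem. A sketch with invented revenue bases; only the resulting percentages match the figures quoted in the episode:

```python
# R&D as a percentage of revenue is trivial arithmetic. The revenue
# figures here are invented; only the percentages come from the episode.

def rd_intensity(rd_spend: float, revenue: float) -> float:
    """R&D spend as a percentage of revenue."""
    return 100 * rd_spend / revenue

print(rd_intensity(1.5, 100.0))   # 1.5%  (HP's PC group, roughly)
print(rd_intensity(0.8, 100.0))   # 0.8%  (Acer)
print(rd_intensity(4.0, 100.0))   # 4.0%  (Apple)
# Nothing in this number says whether any of that spend produced
# differentiation customers will actually pay for.
```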
    "We Are Not Apple and We Never Will Be"
    I looked at him and said: Why aren't we raising our R&D spend to match Apple?
    Mark didn't hesitate. He said: "We are not Apple and we never will be."
    I took offense at that. I was offended that he wouldn't even contemplate it. And I pushed back. I pushed back hard. I argued we could be Apple in areas where we had genuine advantage.
Here's one example. Go back to September 2004, about a year before my meeting with Mark. Carly Fiorina was still CEO. Carly had just handed Steve Jobs access to the retail shelf space HP spent thirty years building.
    At that time, HP controlled about nine, nine and a half percent of all retail shelf space for consumer electronics, the largest single entity holding in that category. Where did all that come from? It traces back to the calculator days in the 1970s. Those relationships, those stocking slots, that footprint: HP had spent three decades building that access.
Apple was launching the iPod. It had no retail distribution in consumer electronics. None. And rather than HP taking advantage of that for itself, it actually opened the door and allowed Apple to come in. That is how the iPod got its traction. It bought Apple the time to build out its own retail strategy, which is ultimately what allowed Apple to be where it is today.
    That wasn't an accident of history. That was HP giving away a structural competitive asset.
    When I tried to push back on Mark, saying we could be better with the right investment, it didn't land. Mark viewed the PC business as a commodity. And if it's a commodity, you manage expenses. You don't invest in capabilities.
    Monthly Arguments and the Search for Better Metrics
    There was no decision made that day. But something shifted in me.
    That was the first of many monthly arguments I had with Mark. And they were non-stop.
What it drove me to do was start looking for better metrics. We had something most companies don't have: HP's complete financial history going all the way back to the 1940s. I had access to the numbers, division by division, for one of the founding companies of Silicon Valley.
    We were getting traction. I was actually getting Mark to align. I was getting the HP board to align. And then what happens? Mark gets removed as CEO and Leo comes in. Then Meg kicked Leo out and she took over. Then the split of HP into two companies.
Acer today? Still roughly 0.9% of revenue in R&D. Twenty years later, almost exactly where Mark wanted HP to get to.
    What I Would Do Differently: Right Argument, Wrong Language
    If I'm being honest about what I would do differently, I had the right argument. I had the wrong language.
    The job wasn't to prove Mark wrong. Nobody changes their mind when they're being told they're wrong. I needed to stop speaking CTO and start speaking CEO. Meet him where he was. Make the case in the language of margin, risk, competitive position, the language he already trusted.
    But that language didn't exist when it came to R&D and innovation. That's the reason I spent the rest of my career building something better.
    And that is what this season is about.
    What Comes Next: The Metrics That Tell the Truth
    That conversation with Mark sent me looking. If R&D as a percentage of revenue was the wrong metric, and I believe to my core that it was, and is, then what's the right one?
    We went back through HP's own numbers. We back-cast all the way to the 1940s, looking at the numbers by division, by the overall organization. And then something unexpected happened.
    The archive team at HP gave me access to something nobody had looked at in decades: Bill Hewlett and Dave Packard's original notebooks.
    What I found in there pointed me somewhere nobody had thought to look.
    In the next episode, we're going to talk about the metrics that actually tell the truth when it comes to R&D and innovation.
     
     
    If this episode gave you some insights, shifted something, share it with somebody who you think needs to hear it. Particularly if you're trying to fight senior leaders around R&D investment. And in the comments below, tell me: what's that one benchmark that you are required to hit, and yet you've never questioned? Is it the right benchmark? Have you really looked at it? I genuinely would like to know.
Show notes and this week's Studio Notes are over at philmckinney.com. Subscribe there; that's where the deeper analysis lives, posted every Monday. You don't want to miss the next one.
    I'll see you in the next episode.

About The Innovators Studio with Phil McKinney

Forty years of billion-dollar innovation decisions. The real stories, the hard calls, and the patterns that repeat across every organization that's ever tried to build something new. Phil McKinney shares what those decisions actually look like. Phil was HP's CTO when Fast Company named it one of the most innovative companies in the world three years running. He co-founded a company and took it public. Now he runs CableLabs, the R&D engine behind the global broadband industry. This isn't theory. It's what happened. And what you can see coming if you know what to look for. Running since 2005, originally as The Killer Innovations Show, now The Innovators Studio. Tens of millions of downloads. Full archive at killerinnovations.com. New episodes at philmckinney.com.