The IFIC Podcast

International Foundation for Integrated Care (IFIC)

22 episodes

  • From Evaluation to Improvement with Rob Reid

    13/03/2026 | 28 mins.
    In this episode, IFIC Chief Executive Dr Niamh Lennox-Chhugani is joined by Dr Robert Reid, Chief Scientist Emeritus at the Institute for Better Health in Toronto and Professor at the University of Toronto and McMaster University.

    Robert is a global expert in population health, primary care, and learning health systems. In this conversation, he reflects on his journey from primary care physician into research and evaluation, and on what it takes to build learning health systems that genuinely improve health for patients and communities.

    Drawing on his experience embedding research within health systems in Canada and beyond, Robert explores how evaluation can better support real-world decision-making. He discusses the importance of balancing the priorities of researchers, system leaders, and communities; how rapid mixed-method evaluations can generate useful evidence for policymakers; and why evaluation should be built into implementation from the beginning.

    The conversation also looks at how learning health systems can expand beyond healthcare to address the wider determinants of health, working with partners across sectors such as education, urban design, and transportation. Throughout, Rob emphasises that evaluation is most powerful when it is used not just to judge success or failure, but to continuously improve care.

    The discussion draws on Rob’s work with Sarah Greene in the article Gathering Speed and Countering Tensions in the Rapid Learning Health System, which explores why health systems still struggle to generate and use evidence quickly enough to improve care. The paper highlights the tensions that arise when researchers, health system leaders, and funders pursue different priorities, and argues that these tensions must be actively managed if learning health systems are to succeed.


    Key insights from Robert Reid
    On the purpose of evaluation

    “It's not research and evaluation per se that's important — it's actually the practical applications of it to reach patients, all patients, and their communities.”

    On whose priorities matter

    “The whole purpose for the health system is to deliver health for patients and communities… those priorities should be our north star.”

    On bringing stakeholders together

    “When we bring people together, we can drive consensus in fairly efficient ways.”

    On generating evidence quickly

    “We have to generate evidence much quicker and be creative in the methods that we use.”

    On mixed methods

    “Mixing quantitative and qualitative evidence is absolutely essential.”

    On evaluation that actually changes practice

    “I view evaluation as essential for the improvement part of it… threading it through the implementation of any project.”

    On learning health systems

    “Learning health systems are really quality improvement on steroids.”

    On looking beyond healthcare

    “Health is a product of many things — the environment, work, transportation, family dynamics and social supports.”
  • Evaluating Digital Transformation with Kathrin Cresswell

    05/03/2026 | 25 mins.
    In this episode, IFIC Chief Executive Dr Niamh Lennox-Chhugani is joined by Professor Kathrin Cresswell, Professor of Digital Innovations in Health and Care at the Usher Institute, University of Edinburgh.

    Kathrin is a social scientist with extensive experience evaluating large-scale digital transformation programmes, including the National Programme for IT, the Global Digital Exemplar Programme, and most recently the NHS AI Lab. Drawing on this work, she reflects on what formative evaluation can offer complex, digitally enabled change in health and care.

    The conversation explores why impact evaluation alone is rarely enough in complex systems. Kathrin makes the case for formative and process evaluation that is embedded early, identifies emerging risks, and supports programmes to adapt in real time. Together, they discuss why some evaluations “sit on the shelf,” the tensions between independence and partnership, and the challenge of demonstrating impact when digital interventions can take years to stabilise.

    Looking ahead, Kathrin argues for evaluation that is closer to practice — co-constructed with frontline teams, focused on learning, and continually asking whether an intervention is truly addressing the need it set out to solve.

    Key insights from Kathrin Cresswell

    On formative evaluation
    “You become part of the intervention… you make it work.”

    On the limits of traditional impact studies
    “By then, you have no idea how it works… and you miss your chance to look at how you could make it work.”

    On expectations of rapid impact
    “Asking after two years whether a programme was successful… is absolutely crazy.”

    On evaluations that lack real learning
    “They’re the ones where the people who commission you want you to find something that they know in advance.”

    On being involved early enough
    “We always come in too late.”

    On staying focused on purpose
    “We need to keep coming back to what need this thing is meant to address.”
  • Rigour and Relevance in Evaluation with Ingo Meyer

    26/02/2026 | 26 mins.
    In this episode, IFIC Chief Executive Dr Niamh Lennox-Chhugani is joined by Dr Ingo Meyer, Head of PMV Research at the University Hospital of Cologne.

    With more than 15 years’ experience evaluating complex interventions across Europe, Ingo reflects on what it really means to evaluate integrated care in practice. From early European projects to his current work in Germany across oncology, palliative and primary care, he explores the persistent tensions between scientific rigour, practical relevance, and stakeholder expectations.

    The conversation examines the blurred boundaries between research, evaluation and performance monitoring, and the challenge of delivering answers that are both methodologically sound and useful to decision-makers. Together, they discuss mixed methods, stakeholder communication, co-design approaches, and the growing — but still uncertain — role of artificial intelligence in evaluation.

    Key insights from Ingo Meyer

    On the complexity of evaluating integrated care
    “The complexity of the evaluation is just as high as the complexity of what I want to evaluate.”

    On rigour versus usefulness
    “How can I look at things in an evaluation that is meaningful… and at the same time has enough scientific rigour so it’s done properly?”

    On the difference between research and evaluation
    “In research… maybe I find an answer, maybe not. In evaluation… it will be less open in terms of ‘sorry, we didn’t find anything.’”

    On shifting the core evaluation question
    “It is not often the right question, ‘Did it work, yes or no?’ — but rather, ‘How can I make it work?’”

    On tailoring findings to different audiences
    “My results need to be short, two pages, executive summary… but I always try to deliver the other things with it.”

    On the importance of context alongside numbers
    “I need to make sure that someone cannot just take the figure and run away with it… The context needs to be really glued to the numbers.”

    On combining quantitative and qualitative insight
    “I can see a lot of things, but I will never fully understand the why.”

    On co-design and citizen science approaches
    “At the beginning of my project, I’m pulling my stakeholders together… and we define at least a part of our research questions together.”

    On artificial intelligence in evaluation
    “I’m still a bit on the fence… I think it’s more a question of time rather than whether AI will ever be useful.”
  • Social Prescribing Evaluated with Anna Wilding

    19/02/2026 | 27 mins.
    In this episode, IFIC Chief Executive Dr Niamh Lennox-Chhugani is joined by Anna Wilding, Research Fellow at the University of Manchester, speaking from Melbourne.

    Anna is a co-author of the recent paper Impact of the rollout of the national social prescribing link worker programme on population outcomes: evidence from a repeated cross-sectional survey, published in the British Journal of General Practice (available here: https://bjgp.org/content/75/761/e880). Drawing on this work, she reflects on how social prescribing has been implemented through primary care networks in England and what evaluation can tell us about its impact on population outcomes and patient experience.

    The conversation highlights the practical challenges of evaluating complex, system-wide interventions — including data access and governance barriers, working with imperfect real-world data, and balancing methodological rigour with pragmatic decision-making. Together, they explore what evaluation can (and can’t yet) tell us about social prescribing at scale, why early involvement of evaluators matters, and how multidisciplinary teams can produce more meaningful and useful insights for policymakers and practitioners.

    Key insights from Anna Wilding

    On making complex evaluation accessible
    “We knew we weren’t going for quite a general journal… so we wanted to make it as accessible as possible for people to understand.”

    On linking data to study social prescribing
    “We applied for data from the GP patient survey… and then we linked it with data sets from NHS Digital… that’s where the social prescribing link workers are funded from.”

    On why evaluation design matters from the start
    “The data wasn’t designed for research… so our ethics committee… wasn’t going to allow us to access that data for research.”

    On the need to involve evaluators early
    “It would be good to have the people who would be evaluating it embedded in the process from the beginning.”

    On pragmatism versus perfection
    “Sometimes done is better than perfect.”

    On limits of causality in complex systems
    “We have associations… but we might not know that this definitely caused this.”

    On managing expectations about impact
    “It’s not going to have these massive effects that we’re sort of expecting.”

    On the value of multidisciplinary teams
    “Them together actually makes a more powerful message… mixing together is better.”

    On big data and its limits
    “Sometimes the big data isn’t well collected either, even if you think it is.”
  • Evaluation from Pilot to Practice with Deborah Cohen

    12/02/2026 | 26 mins.
    In this episode, IFIC Chief Executive Dr Niamh Lennox-Chhugani is joined by Deborah Cohen, Professor of Family Medicine at Oregon Health & Science University and a member of the US National Academy of Medicine.

    Deb reflects on evaluating the scale and spread of an integrated, team-based care model in Oregon, originally piloted to support pregnant women living with substance use disorder. While early pilots showed promising outcomes, the expansion into rural settings revealed significant implementation challenges — offering a powerful real-world example of why evaluation needs to go beyond whether something “works” and focus on how and why interventions succeed or struggle in different contexts.

    The conversation explores what evaluation can reveal about implementation, scale-up, and system readiness, and how evaluators can support learning in complex health and care systems — particularly when programmes move from successful pilots to wider adoption.

    Key insights from Deborah Cohen 

    On what was known — and what wasn’t — about the pilot
    “We know that this program is effective. We know that it costs more to deliver this care. What we don’t really know is how to implement it.”

    On why scale-up struggled in new contexts
    “None of those behavioral health organizations really have been able to navigate a relationship with the medical care part of the team such that they can truly integrate medical care and behavioral health care together.”

    On what earlier evaluations missed
    “That was work that had not been acknowledged in the other evaluation.”

    On the role of evaluation in learning
    “Evaluation, in my opinion, is meant to be designed to accelerate that learning process as rapidly as possible, because it’s all about making mistakes and learning from them in a transparent way.”

    On how hard it is for implementers to hear difficult findings
    “It’s very hard as an implementer to take in lessons when things aren’t working.”

    On investing properly in evaluation
    “If you shortchange your evaluation, you tend to sometimes shortchange what you can learn.”

    On the value of mixed methods
    “Evaluation gets stronger when those elements are being done together… and done together iteratively.”

    On psychological safety and commissioning
    “The tone and culture gets sort of set from the top and having a commissioner that’s open to really understanding the complexity of what’s going on on the ground can make a huge difference.”

    On bringing evaluators in early
    “That latter case is way better because you really get to build relationships early on.”


About The IFIC Podcast

The International Foundation for Integrated Care has defined nine pillars of integrated care based on the evidence accumulated over the last two decades. One of those pillars is Aligned Payments that Promote Integration. This is a difficult subject to understand, particularly for policymakers, service managers, and health and care professionals working to implement integrated care who are not financing and payment experts. This short podcast series features our Chief Executive Dr Niamh Lennox-Chhugani in conversation with four leading practitioners who have been researching and designing new payment models around the world. They demystify the language of payment models and the different models emerging in different countries.
