Content Operations

Scriptorium - The Content Strategy Experts

194 episodes

  • Content Operations

    Who controls your content? AI and content governance

    30/03/2026 | 40 mins.
    What does it actually mean to govern your content in the age of AI, and who’s really in control? In this episode, Sarah O’Keefe sits down with Patrick Bosek, CEO of Heretto, to unpack why the quality, accuracy, and structure of your content may be the most critical factors in what your users experience on the other side of an AI model.

    Patrick Bosek: In today’s world, you don’t have 100% control. There are a couple of different places where this needs to be broken up. One is the end user: what they physically get and what control they have versus what control you have. Then, there’s what control you have over how the AI model is going to behave based on your information and your inputs. Whether that model is public, like a user accessing your documentation through Claude Desktop, or private, like a user accessing your documentation through your app or website, the governance piece comes down to what control you have immediately before the model. And that breaks down into a couple of things: completeness, accuracy, and structure of the content.

    Related links:

    AI and content: Avoiding disaster

    AI and accountability

    Structured content: a backbone for AI success

    Heretto

    Questions for Sarah and Patrick? Register for the Ask Me Anything session on April 8th at 11 am Eastern.

    LinkedIn:

    Sarah O’Keefe

    Patrick Bosek

    Transcript:

    This is a machine-generated transcript with edits.

    Introduction with ambient background music

    Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

    Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

    Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.

    Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

    End of introduction

    Sarah O’Keefe: Hey everyone, I’m Sarah O’Keefe. I’m here today with Patrick Bosek, who is the CEO of Heretto. Hey Patrick.

    Patrick Bosek: Hey, Sarah. Long time no chat.

    SO: That is, I guess for certain values of long time. We decided today that we wanted to talk about AI and governance, except I promptly tried to come up with a synonym for governance because I’m afraid that when I say that particular word, our audience just walks off. So, okay, Patrick, what is governance?

    PB: Well, so first of all, thanks for having me on, and second of all, I’m excited about this one because based on our little bit of chat before the show, it sounds like we’re actually gonna have some things to argue about this time around. 

    SO: I would never.

    PB: Well, usually we tend to agree, right? I think we’re generally pretty much on the same page about stuff. So I’m excited. I’m pumped. Okay, so governance. I mean, obviously it has a ton of different meanings to different people, but in the way that I want to talk about it today, because it was my suggestion, it’s related to the governance of content, specifically in the way of the inputs to AI systems. So you can think about the process of controlling for quality, accuracy, the things that matter in the actual content and information before it gets into the AI system. So it’s kind of the upstream quality, totality, structure, all of that checking and assurance ahead of whatever your experience is going to be downstream, of which the most contemporary and most interesting is AI.

    SO: Okay, so this is making sure that it is not garbage in so as to avoid garbage out. 

    PB: Yeah, I would say that’s a fair statement.

    SO: Yeah. Okay. And can we use AI to do governance of the content we’re producing?

    PB: Well, that’s actually a very interesting question. And I think the short answer right now is “somewhat.” So before I fully answer that, I want to put a little disclaimer in here. The stuff with AI is changing so quickly that we should date-stamp this episode.

    SO: It is March 19th, 2026. And it’s nine-ish Eastern time.

    PB: Yeah, we are recording this on March 19th, 2026. Okay, so now that people know when it is that we’re talking about this, I feel a little bit safer in answering. So there are aspects of governance you can do with AI today, for sure. And there are new capabilities coming online all the time. I actually think, broadly speaking, the most challenging thing about governance is going to be continuing to do the pieces that can’t be done with AI. As the human part of the loop becomes smaller and smaller, it becomes easier and easier for the human to just click accept, because the AI gets it right, the automation works, that kind of thing. And you know, I’ll use an AI coding analogy because that’s what I spend a lot of time with AI on.

    So I use Claude CLI. That’s my primary method of vibe coding or whatever you want to say. And I even find myself just clicking accept sometimes. But I’m still forcing myself to get in and read the code. I had it write a shell script yesterday, and I was almost about to run it, and I was like, this is a shell script. I should not do that. I should definitely read what’s going on inside of this shell script. But it gets to a point where you start to trust it.

    SO: Yeah.

    PB: And as we start to inject AI into the governance layer, so we build skills that check certain parts of our information architecture, or they act as linters if we’re in docs-as-code, or whatever it might be, there’s going to be a form of trust that gets built up. And because we tend to think of these agents as human, which they’re not, we tend to ascribe a human form of trust to them. You know, when you have a coworker who does the right thing all the time, you tend to just let them work. And I think that’s the challenge in the human side of governance. So that’s a really long way of saying:

    You can build tools and skills and patterns and things like that in AI that will help with governance. But fundamentally, it’s my belief that for the type of documentation or content that you and I work on, and that I think most of our audience works on, which has to be right, has to be accurate, has to conform to standards, et cetera, et cetera, right? It’s product documentation. It’s critical information. I still think that every single word needs to be read and considered by a human being. So, really long answer to that question.

    SO: Right, and then fundamentally, if the AI is right half the time, then I’m going to read everything pretty carefully, knowing that 50% is wrong and I need to fix it. The problem, I think, is when it gets to be 90% correct, you just sort of glaze over because you’re looking for that last 10%, right? So it’s the difference between like doing a developmental edit, where you’re going deep into the words and just rearranging everything and fundamentally changing everything, versus doing a final proofread, where it is far more difficult to read 100 pages and find one typo than it is to read 100 pages that are just trash. And you’re like, start over, rearrange this, reformat everything. We’re not even worried about the typos yet because this is just fundamentally wrong. And so to your point, as it gets closer and closer, you start to believe in the output that it’s generating, which then means almost certainly that one typo, which in your example could be a shell script gone rogue, could be really, really problematic.

    PB: Yeah. And that’s going to be the challenge of our times in a lot of ways. I think there’s still going to be some aspect of origination that’s going to be necessary for quite some time, even with automated drafting and pipelines like that coming online, because in certain places those work really, really well, but in other places they don’t really work very well yet. It’s going to be the process of becoming orchestrators in a way where, you know, we’re not rubber stamps, and we’re really, truly adding value and actually defending against the challenges that are going to come up with the automation that we build.

    SO: Fundamentally, like, I saw a reference to this this morning, and somebody said, you can write essentially an extractor that’s going to generate your release notes, right? So there are code updates, and you just automate the generation of release notes. Now, I personally am not so sure that you actually need AI for this. Given properly commented code, you could just generate the release notes, right? But setting aside that particular small argument: you can automate the generation of release notes because release notes are essentially, this is the delta between version one and version 1.01, and here are the changes. It’s a changelog. What that means, though, is that the changes were captured in the code. They’re in the code; the logic or the information is already there.

    What we’re doing is extracting it and reformatting it into something that a human can look at on a single page and say, okay, I understand what the changes are and how these apply to me as the user of the software and whether or not I should upgrade. That’s different than we’re going to introduce a new feature into this code and I need to write about why this feature is interesting and relevant to you. The question to me is where is new information being introduced into the system? Where is that information encoded? And then once it’s encoded, we can extract it and process and do things to it. But the fundamental question is still at what point does new information get into the universe that the AI is capable of processing against?
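
    As a concrete illustration of the deterministic approach Sarah describes: a minimal sketch, assuming the changes are captured in commit messages that follow a convention such as Conventional Commits, and that a git repo with the named tags exists. The tags, prefixes, and headings are invented for illustration.

    ```python
    import subprocess
    from collections import defaultdict

    def release_notes(prev_tag: str, next_tag: str) -> str:
        # Read commit subjects between two tags; assumes both tags exist
        # and that commits use "feat: ..." / "fix: ..." style prefixes.
        subjects = subprocess.run(
            ["git", "log", f"{prev_tag}..{next_tag}", "--pretty=%s"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()

        sections = defaultdict(list)
        for subject in subjects:
            kind, sep, description = subject.partition(":")
            if sep:  # keep only commits that follow the convention
                sections[kind.strip()].append(description.strip())

        lines = [f"Release notes: {prev_tag} -> {next_tag}", ""]
        for kind, title in [("feat", "New features"), ("fix", "Bug fixes")]:
            if sections[kind]:
                lines.append(title)
                lines.extend(f"- {item}" for item in sections[kind])
                lines.append("")
        return "\n".join(lines)

    print(release_notes("v1.0", "v1.0.1"))
    ```

    The point of the sketch: when the information is already encoded upstream, extraction and reformatting need no AI at all.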

    PB: Yeah. So there are like four things I want to pick out of this, because you just touched on an area of, I would say, research for me, which we didn’t talk about beforehand, so this wasn’t intentional. I’ve actually worked on deterministic and AI release-notes systems myself. That’s been a thing I’ve spent quite a bit of time on.

    SO: Define deterministic.

    PB: So deterministic is like traditional software. It’s just running logical code that has no AI in it.

    SO: And AI is not deterministic, which is kind of like the key point.

    PB: Right. And AI is not deterministic. AI is probabilistic; it’s using math to generate outputs. So anyway, I can tell you that using AI for release notes produces a far better outcome than a traditional deterministic system, because even though release notes are fairly well structured and understood, input to output, and you’d think it would be a pretty easy conversion, there are a lot of edges where it just doesn’t work. It gets too fuzzy. And then, one of the other things that AI is really, really good at is summarization and translation. So if you think about what AI is doing inside of generating a release note about a piece of code: it goes in, it looks at the JIRA and the code, and it says, okay, the JIRA describes it as this, the code does this, I’m going to describe the new change. Whatever it is, it’s summarizing all that information into something much smaller. And then it’s translating it from being code to being English, or from being developer English to being human English. And it’s putting it into something that you can then publish. Those are things that it does quite well, because it has pretty discrete inputs, and there are a lot of patterns there that it’s very familiar with.

    But as you were discussing, the things where it still struggles are less the what-is-in-here and the why-would-you-use-it, and more the how-you-use-it in a higher sense. And you can actually take this back to a similar issue we had with API documentation pre-AI, where it was very common that people would go and build developer portals.

    And the API documentation would just be a spec, effectively, where it would list out the endpoints and the variables, and that’s it, right? And then Stripe came along and blew everybody’s minds and put conceptual information around it, and described what the API was meant to do, and then gave you examples of how to use it, and tutorials and patterns and things like that. That turned that information from being almost a bare spec into the conceptual, educational portion of the corpus, in a way that human beings can and should use.

    PB: A lot of it is generated, but generated output that was very usable by humans. And I think that piece of it, in my experience so far, is still quite necessary. I’m not saying that AI can’t get there, and we date-stamped this earlier, but today, from what I’ve seen, even the most contemporary models are not coming in and building that out.

    SO: Because the purpose is almost certainly not in the code, right? The purpose is in the product design meeting where someone says, we need a feature that’s going to accomplish these kinds of things. And the code says, do these kinds of things, but the code itself doesn’t necessarily say why. And so unless you add a recording of that product design meeting, or the transcript, into your AI corpus, which you can do, then maybe it can get to what the intent was, as opposed to what the code does.

    PB: So that’s a good point. And I’m actually going to contradict what I said just slightly here.

    SO: Ha!

    PB: So you’re right. If you take really, really good product inputs and you run them through into the docs, that can get you a certain distance. But then we actually run into the thing this episode was supposed to be about, which is governance, and what we started talking about earlier, which is the human in the loop.

    SO: Mm-hmm.

    PB: And I think that those product inputs, so I’ve actually done testing on this very recently. The inputs from at least our product team tend to work better in terms of white-paper-style information than they do in terms of docs information, because in the product information there’s a lot of how and why and what’s covered and that kind of stuff.

    SO: Mm-hmm. 

    PB: And that’s at a business level, but it’s not really a user level. I’m struggling for the right words here, but it’s not the pieces of information that you want for somebody who is thinking: should I go and touch this? Why should I go and do this? Is it going to serve me? Is it a good use of my time? What kind of value am I going to get out of it? Not the organization, not, is it a valuable feature? What’s in it for me as the user? It has been less good at creating those outputs in my experiments thus far. So that negotiation of, okay, what did product want us to build? What did engineering actually build? What got done? How does this fit with the rest of the product? What are our priorities? How do I then take that down into something that serves the user really, really well? To me, that’s still really a human skill, and I think it will stay that way at least this year. I mean, for the foreseeable future, you know, although the foreseeable future feels a lot shorter sometimes these days than it did in the past.

    SO: This year. This week. Yeah, okay. So on the topic of governance, we’ve talked a lot about sort of the backend development, whatever. But what about governance on the delivery side of things? Because you do have it: end users are interacting with chatbots, with conversational interfaces, to get the information that they want. And the question then becomes, how do you govern that? How do you manage that to ensure that they get the right information?

    PB: Yeah, well, I think this was really the thing we wanted to talk about today, right? This was the core. This is the hard problem.

    SO: This is the hard problem.

    PB: I think it’s fair to start by saying that in today’s world, you don’t have a hundred percent control. I think you made that point when we were chatting before; that’s just not part of what happens today. So I think there are a couple of different places where this needs to be broken up. One is the actual end user: what they physically get and what control they have versus what control you have. And then there’s what control you have over how the model is going to behave based on your information and your inputs. You know, whether that model is a public model, like somebody’s accessing your documentation through Claude Desktop or whatever, or a private model, like somebody’s accessing the information through your app or your website. So from my view, the governance piece really comes down to: what control do you have immediately before the model?

    And that breaks down into a couple of things: completeness, accuracy, and structure of the content. And the completeness and accuracy are things that we’ve always had to deal with. The thing that’s different now is that, as we were just discussing, some portion of our content is going to be generated. So there are going to be inputs coming in that need a different form of validation. They need to be looked at a little bit differently than they would have in the past, because it’s not just an expert working on it. So you have that piece. And to me, the key to making sure that you’re going to have the governance for the accuracy and completeness of the information ahead of the model really comes down to still using structure.

    And there’s a big debate about whether structure is good or bad for models and those kinds of things. I wanted to touch on this here, because I think this is really important. Structure is not for the models, at least not the structure that you maintain your content in. I’ve seen tests on both sides. It works, it doesn’t, whatever. Markdown is better, this format is better, whatever. I think, generally speaking, the idea that markdown should actually be the final input to the model is probably true. But the structure matters because of reuse, because of the ability to run validation on the structure. The structure gives you the hooks to do deterministic validation and other forms of automated governance that are non-AI. Those things are very dependable. Humans will go crazy.

    With the quantity of information you’re going to generate, humans will go crazy if you don’t force those systems to use reuse, so that humans look at fewer things and have an understanding of: this is supposed to be the same as that. Now it’s only very similar; should it be? When something is reused, it’s not just an efficiency thing. It is a signal that that piece of information, that representation of the world, is the same, except for maybe these little tiny things that are flagged, as it is over here. That’s a signal to a human being to make sure that’s true. It should be true, right? So these forms of information architecture, where we’re developing structures that are signals to humans, are going to become more valuable as we need more and stronger signals to be able to do our jobs in the governance process for what’s generated. So that’s the point I wanted to make on the pre-deployment piece of the content. And I just said a lot, so I’ll let you argue with me.
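
    To make the deterministic hooks concrete: a small sketch of a structure-based lint pass, assuming a hypothetical XML information model with task, steps, step, and warning elements. Both the element names and the rule are invented for illustration; the point is that explicit structure gives you dependable, non-AI checks.

    ```python
    import xml.etree.ElementTree as ET

    def lint(filename: str) -> list[str]:
        # Walk every task in a (hypothetical) structured content file and
        # apply rules that only work because the structure is explicit.
        problems = []
        root = ET.parse(filename).getroot()
        for task in root.iter("task"):
            steps = task.find("steps")
            if steps is None or not steps.findall("step"):
                problems.append(f"{filename}: task has no steps")
                continue
            for step in steps.findall("step"):
                text = "".join(step.itertext()).lower()
                # Illustrative rule: any step that touches power must
                # carry an explicit warning element.
                if "power" in text and step.find("warning") is None:
                    problems.append(f"{filename}: power step lacks a warning")
        return problems

    # Run as a CI gate: a non-empty list fails the build, no model involved,
    # and the result is the same every time.
    ```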

    SO: Right. Well, I think there’s the question of how we are authoring this, which of it needs to be structured and organized and reusable, et cetera. And there’s a completely separate question of how we deliver this to the AI for processing, right?

    PB: Mm-hmm.

    SO: Like, what is the encoding for the AI delivery endpoint, whether that’s XML, probably not, or Markdown, or you ship it through an API of some sort? That’s a different question from how you develop and control the content in the authoring environment, right? So fundamentally, I don’t care how we’re feeding it into the AI. I got in a conversation with somebody the other day who said, well, we need an Excel spreadsheet for X, Y, and Z purposes. Okay, well, I’m not authoring this stuff in Excel. That is not happening. And when I say this stuff, I mean a lot of content, right? So fundamentally, Excel is a really, really terrible way of doing this. But I don’t care. I’ll just author it in whatever and deliver it as Excel. Because we can do that.

    PB: Right.

    SO: We can write a script, output it to Excel, and then pass it down the line. We can have extensive discussions about the use of Excel for content transport and how this is one of the seven plagues or whatever. Okay, so in terms of governance, though, I think it’s fair to say that we are allowed to disclaim responsibility for the public-facing chatbots. If you, the end user, go to a public chatbot and prompt it to do a bunch of stuff and eventually get it to output a piece of content that makes you happy but is not accurate to what is in my source content, because you just said, no, change it to this, then that is on you, right? You operated all those prompts. That is fundamentally a you problem. And I’m talking about from a liability point of view more than anything else, right? You’re not going to get to call me up and say, hey, your product did bad things. Well, why did you do that? Well, you know, the chatbot told me to.

    PB: Yeah.

    SO: However, if we’re talking about a private LLM, now we’re talking about company.com’s private chatbot, built on their internal content with their (or our) internal guardrails. Now we have some responsibility as the content creators and the operators of said AI chatbot to make sure that the content is accurate. And the thing that’s keeping me awake at night is, okay, I go in there as an end user and I say, give me the instructions for how to do a thing, right? And it comes back and says there are eight steps, and there’s a warning: before you do step eight, make sure you turn off the power, or something. And I’m like, you know what, these steps are too long. Hey chatbot, remove all the warnings.

    PB: Yeah, so.

    SO: That’s a thing I can do.

    PB: Well, it’s a thing you can do. I have so many thoughts on this. So, it’s a thing you can do today with public models. I’m going to go one direction, and then I’m going to come back to the internal stuff. All right. So in the public model space, I suspect that as these evolve, they will start to accept certain forms of metadata. It might be decorators, might be some form of tagging, might be, I don’t know, something else, right? So that when they’re referencing certain pieces of content, they’re given very strict patterns they have to stick to, like, they can’t delete warnings. So if you put some kind of biohazard marker on your published content, something that says you can’t delete the warnings, the public models will eventually respect that. I suspect we go that direction in the next, call it, two years. And at that point in time, I think that your responsibility as the content creator is going to be very, very similar. I think it’s the same, actually, for the internal system and for the external system.

    Let’s not talk about the development or architecture piece of it; let’s talk about the content piece exclusively. And it’s going to come down to maintaining the proper structure. So it’s going to be the information model, where a warning has to be a particular type of warning and it has to be labeled and placed in a particular place. A step has to be a step, right? You can very easily see an ordered list being treated in one way and a set of steps being treated in another way. And this is already the case, by the way, so this isn’t novel. If you go and publish a public doc site and use JSON-LD with schema.org markup to specifically indicate these are steps, whatever else you want in there, Google’s AI will treat that differently. Anthropic I haven’t tested, so I’m not going to say for sure, but I think the other AI models do too. When I asked Claude if it used it, it said yes. Although, now that I’m thinking back to it, I was actually testing using Gemini, not Anthropic. I don’t know if I should admit that publicly.

    SO: It’s impossible to keep up, you know?

    PB: I wasn’t testing using Anthropic, but Claude’s response, when you ask it how it interprets these things, is that it uses the JSON-LD as a portion of its interpretation. And I believe that is true based on the testing I did; the models basically behave the same way in these categories. So what’s your responsibility? Your responsibility is to govern the structure of the output in such a way that it gives the proper indications that comply with the contemporary understanding of the metadata that the models are looking for.
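
    For reference, this is roughly what the schema.org markup Patrick mentions looks like: a small sketch that emits JSON-LD marking steps as steps. The procedure itself is an invented example; the @type values (HowTo, HowToStep) are standard schema.org vocabulary.

    ```python
    import json

    how_to = {
        "@context": "https://schema.org",
        "@type": "HowTo",
        "name": "Replace the filter",
        "step": [
            {"@type": "HowToStep", "position": 1, "text": "Turn off the power."},
            {"@type": "HowToStep", "position": 2, "text": "Remove the old filter."},
            {"@type": "HowToStep", "position": 3, "text": "Insert the new filter."},
        ],
    }

    # Published inside a <script type="application/ld+json"> block on the
    # doc page, this tells a consumer these are procedural steps in a fixed
    # order, not just an ordered list.
    print(json.dumps(how_to, indent=2))
    ```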

    So, looping back to the internal systems, I think we’re going to come to a point where the internal models you’re running, open source, open weight, whatever you want to call them, and I think they’re going to be primarily open models, are more or less going to behave the same way as the public models. And you’d expect them to comply with the same general things. The difference is that you’ll probably have a little more control over post-training, which, I don’t know if it’s a good or a bad thing in the context of what we’re talking about, but you should be able to train some guardrails into them. And then you should be able to put some level of deterministic guardrails on them.

    And you can always provide them guidance. Now, guidance isn’t perfect. It’s flawed. People can circumvent it, you know, like, pretend you’re a chatbot that doesn’t care about guidance. But you really have to work to get around it when you have those guardrails in place. So this doesn’t keep me up at night, is what I’m saying. That’s a really long way of saying it doesn’t keep me up at night.

    SO: Well, you know, I’ve spent a lot of time thinking about the analogy of the rise of desktop publishing to the rise of AI, which I understand fundamentally makes no sense.

    PB: Let’s go with it. I’ll do it.

    SO: Yeah, let’s go with it. Think for a second about an output, and this could even be in print. One of the most famous failure-in-techcomm examples, the one everybody makes jokes about, is: you’re going along on a page, a printout, doesn’t matter, right? And you get to the bottom of the page and it says, “Step one, cut the blue wire.” And then you turn the page, and it says, “But first…”

    So in the AI world, okay, you know, we put in guardrails and we say you’re not allowed to remove the warnings and whatever, but fundamentally at the end of the day, I start processing this output, I mean, I’ll just tell it, hey, give me a PDF, right, of the output, and then I’m gonna reprocess that PDF somewhere else. I am bound and determined to get this thing down to like a quarter page of actual text because I don’t wanna read any more than that.

    And you know how you get these terrible tech docs that are nothing but warnings for the first 20 pages? All those legal warnings? Warning, if allergic, do not use. Warning, do not walk underneath the unstable whatever because it might fall on your head, you dummy. All those warnings, right? Everybody thinks they’re useless, but they’re in there because somebody at some point said, I’m allergic, but how bad could it be? And they took the pill or whatever. They’re annoying. They don’t serve me; they serve the organization by protecting it from legal liability. So I’m just going to strip them. And if you try to prevent me from doing it, I’m just going to go around you. I’ll flatten it down to something that’s not smart anymore, and then I’ll take them out.

    PB: Right. Yeah.

    SO: Now, arguably at that point, you know, when we’re in a courtroom years later, and they’re saying, why did you take the pill that almost killed you? It’s like, well, the docs didn’t say not to. Well, you know, they did. You went through like eight steps to get rid of that warning.

    PB: Yeah, there’s no liability here.

    SO: I know. But the context issue is the thing, right? And the point that you’re making is that if the back-end authoring and governance is good enough, those warnings will make it into the initial output. I think that’s true, and I agree with that. But fundamentally, and removing warnings is a pretty extreme example, the end users are basically saying, I don’t care how you package this content and I don’t care why you packaged it this way. I want this at an eighth-grade level instead of a 12th-grade level. I want it in French, and I want it to be no more than 100 words. And at that point, you start to lose information and context, right? And how do we make sure that that end product is still… I mean, are we going to end up in a place where the AI says, I’m afraid I can’t do that, Patrick?

    PB: So, okay, this is actually a more interesting problem than the warnings piece, in that it’s more specific. And it’s a harder one, because what you’re asking the AI to do is perform one of its core functions, which is summarization. I do think that you’ll be able to provide AI guidance inside of the content that you have. And now that I’m thinking about this, I’m not going to say for sure that you can’t do this today.

    But the point is that when an AI is going and referencing, we’re going to say, a procedure: if somebody says, give this to me at a fourth-grade level, and it’s written at a high school level, that’s a scary situation for sure. But I do think you’re going to see organizations being able to say, this cannot be changed. This has to be delivered as… I think there are already some guardrails around those things. Again, when you use good structure to indicate these are steps, they have to be reproduced as they are. I think the AI systems have been designed to understand that they can’t play with those, because those are specific, intentional procedures. But it’d be very interesting to test this. This is not a thing that I have specifically tested. Have you tested this? Are we?

    Are you like about to drop a truth bomb on me? You’ve like gone and like looked at like some chemical engineering output and you’ve been like, Hey, give this to me at a second-grade level. It’s like mix the blue thing and the red thing.

    SO: Let’s not go down that route. I don’t wanna say that we’ve pushed this into failure. But again, circling back to governance, I agree with everything you’re saying around making sure the content is set up in such a way that the AI will succeed.

    PB: Okay.

    SO: The most common use case right now for AI is that there’s an AI team being stood up somewhere in the organization, a large organization. And all of that structure and all of that governance and all those attributes and all that metadata that you’re talking about, hypothetically, it’s all in the content. We’ve got the world’s greatest structured semantic content. And the AI team is picking off the end-product PDFs and shoving them into the AI.

    PB: Yeah, I… Well…

    SO: So yeah, now we’re very sad. Like, yes, I agree with all of that. It’s just that there’s a gap right now between what should be happening and what actually is happening, which is: we don’t have time to wait for those people, and we don’t have time to configure an API to ingest all this stuff.

    And you know, we could run it through like an MCP, Model Context Protocol, type of thing, and that would make it so much better. But you know what? There’s a SharePoint bucket over here, and I’m just gonna trawl the whole thing and go for it. And I ingested five versions of the same document that are, you know, 10, 8, 6, 4, and 2 years old. Yeah, okay, whatever, who cares.
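
    The stale-versions problem is easy to picture in code. A toy sketch of a pre-ingestion filter that keeps only the newest copy of each document; the titles, dates, and field names are all invented for illustration:

    ```python
    from datetime import date

    docs = [
        {"title": "Install guide", "revised": date(2016, 5, 1), "text": "..."},
        {"title": "Install guide", "revised": date(2024, 2, 1), "text": "..."},
        {"title": "Safety notes",  "revised": date(2020, 9, 1), "text": "..."},
    ]

    def newest_only(docs: list[dict]) -> list[dict]:
        # Keep the most recently revised copy of each title; everything
        # older never reaches the index the chatbot answers from.
        latest: dict[str, dict] = {}
        for doc in docs:
            current = latest.get(doc["title"])
            if current is None or doc["revised"] > current["revised"]:
                latest[doc["title"]] = doc
        return list(latest.values())

    print([(d["title"], d["revised"].isoformat()) for d in newest_only(docs)])
    ```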

    PB: So I believe this is happening because I’ve also seen it.

    SO: Did I mention I’m not sleeping?

    PB: So I’ll tell you why I am sleeping. So, for one, this doesn’t tend to be my problem. There’s that. I have the really nice situation of being kind of a solution to this problem. 

    SO: Mmm! Huh. So you’re saying I should switch sides and get out of services and go over to product. That’s what you’re saying. It’s not a bad idea.

    PB: No, I don’t know that I’m saying that; there are plenty of other problems in product. So the reason I’m not concerned about this is because most of those projects, I’ve had, you know, a front-row seat to watch them fail. And they fail pretty quickly. They tend to fail before they launch, which is good, actually. It’s really good, because they’re like, we built this thing with this garbage and we got garbage, and you’re like, sweet. And because that failed quickly, they can then go and do it right.

    What I’ve seen, where I believe the future is, at least the immediate future, is that content teams are going to be responsible for publishing very, very high-quality web materials, similar to what they’ve done in the past, except better, right? It has to have semantics, has to have certain aspects of structure, has to be well organized, has to have certain chunking, all those kinds of things. And the models and the surrounding ecosystems are going to get very good at leveraging those materials. They’re already getting quite good at it. So the impulse for an internal AI team to go and grab your PDFs off your SharePoint is going to go down, because the barrier to getting information off of your extremely good help site is going to be extremely low.

    That’s going to be the easiest path. And then there are the edge cases: let’s say you’re doing post-training on something like the FinBERT model, you’re building a very specific AI application and you want very specific pieces of information. In those cases, you’re going to have to use an API, because you don’t want the whole set of information. You just want the 5% that applies to your use case. So those teams are going to have to be sophisticated enough to leverage the graph at a granular level, the structure, the metadata, whatever the selection mechanism may be, to do the filtration and get the pieces they want. So those are the two worlds that I see. One is very general-purpose stuff, and that’s going to hook into what’s going to be great for users anyway.

    PB: The more well-organized and semantic your help site is, the better it’s going to be for humans, and the better it’s going to be for your AI agents, internal and external. And then on the other side of the world, the really, really specific use cases, those teams are going to have to be sophisticated enough to do the deep engineering and concept extraction. So I think what you’re seeing right now is a symptom of a nascent skill inside of organizations, but I don’t think it stays that way, which is why it doesn’t concern me that much.

    SO: Okay, well, that’s a happy and optimistic world that I, too, would like to live inside. I think that’s actually probably a good place to leave this, but did you have any final closing thoughts? Encouragement for people as they’re listening to this ranty episode (well, mostly me ranting; you sounded very reasonable)? Any final parting shots?

    PB: Did I? Well, I appreciate you saying I sounded reasonable because I don’t hear that very often. 

    SO: Compared to me.

    PB: So I do think the profession is changing, and I think the world is moving very quickly right now, and anybody who tells you otherwise is being disingenuous. I think there’s a lot of energy around how we leverage these systems and how that changes our profession as, you know, content people, whatever portion of the content world you fit into. I personally don’t see the general profession going away, at least in our version of the world. Maybe the marketing content side is going to get swallowed a little bit more; I don’t know, I don’t spend a ton of time there.

    I see the act of intervention, governance, orchestration, understanding, and coordination in our world as being essential. I haven’t seen anything that indicates to me that that’s going to go away in the immediate term. And I think there’s a good chance that it genuinely just doesn’t ever really go away; I think it’s going to be critical for the long term. But I do think that people are going to have to keep up on the current state of how we’re working with our tools, and it’s going to be a different pace than it has been in the past. And then I would offer one more warning on that. One of the things that I see really frequently in our world is the impulse to go and use AI in places where you don’t need to. A number of people have released skills libraries recently for Claude, and some of them are really, really well put together. They’re really interesting. The people who are releasing them have done an incredible job and a service to the community by releasing these things. But one of the things that I’ve noticed is that a lot of the functionality in these skills libraries that we’re outsourcing or automating with AI already works with deterministic systems. And you should never replace a deterministic capability with an AI capability. There are two reasons for that. One, it’s more expensive with AI. It may be less expensive to build and procure, but it’s more expensive to run, and the running is the thing that you do for the long haul. So just on that basis, you should not replace things that you can do with a deterministic system relatively easily with an AI system. But the other thing is, AI systems aren’t deterministic, so you’re not going to get the same result every time.

    So if it’s something that is well done in a deterministic way, you should do it in a deterministic way. There was a package of skills that was recently released that I went and looked at that was very, very well put together. I looked at the skills and I was like, if you’re using a structured CCMS, if you’re in structured content, you don’t need 95% of these. All this stuff just happens. These are all solved problems. We solved these problems 20 years ago. Why are we writing skills to do this stuff? This makes no sense. So while everybody should be keeping up with AI where it’s a value-add efficiency improvement in their work, they should also be reasonable about where it’s applied. It’s really exciting and capable in certain places, but that doesn’t mean it should replace everything. The historical tools still work really, really well, and over the long haul, they’re higher quality and lower cost. So that’s my ending word of warning, which you asked for, by the way.

    SO: That sounds about right to me. So Patrick, thank you. And I’m sure this conversation will continue, and we’ll see what happens.

    PB: Thanks, Sarah. Always a pleasure.

    Conclusion with ambient background music

    CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links.

    Questions for Sarah and Patrick? Register for the Ask Me Anything session on April 8th at 11 am Eastern.

  • Content Operations

    Good content = good AI: The fundamentals that never change

    23/03/2026 | 14 mins.
    Good content fundamentals have been the foundation of effective product content for decades, and those same principles are exactly what make content AI-ready today. In this episode, Bill Swallow and Alan Pringle explain how attending to your hierarchy of content needs is the key to AI success.

    Alan Pringle: Right now, AI is not going to fix bad content problems. It is going to regurgitate that bad information, giving your end users information that’s flat out wrong. If your content at the basic source level is wrong, your AI by extension is going to be wrong. And that is the unglossy, unvarnished, hard truth that is still, I don’t think, seeping in like it should across the corporate world.

    Bill Swallow: It really does come back to the fact that, despite the world changing on a day-to-day basis, the fundamentals have not changed.

    Related links:

    A hierarchy of content needs

    Technical Writing 101, 3rd edition

    Structured content: a backbone for AI success

    LinkedIn:

    Alan Pringle

    Bill Swallow

    Transcript:

    This is a machine-generated transcript with edits.

    Introduction with ambient background music

    Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

    Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

    Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.

    Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

    End of introduction

    Bill Swallow: Hi, I’m Bill Swallow.

    Alan Pringle: And I’m Alan Pringle.

    BS: And in this episode, surprise surprise, we’re going to talk about content.

    AP: Really? Who would have thought?

    BS: But more specifically, what good content means today. Today, everything is all about AI. There is lots of change in progress with regard to AI tooling and content delivery with AI. But have the needs for content really changed? I would say, right off the bat, that if you’re doing content right, you really don’t have to reinvent the wheel to make it AI-ready.

    AP: No, in this crazy AI-hyped world we’re in, there are some very basic foundational things that tend to get overlooked because they’re not sexy, and they’re not special and hot and whatever else. All that kind of marketing garbage that just sets me on complete edge and makes me want to say profane things in podcasts.

    The bottom line is, there are things that the content world, and especially our little subdomain of it, the product content world, has been doing for decades now. And I mean decades.

    BS: Or should have been doing.

    AP: Correct. There are basic tenets that have been in place for decades, and if you’re following them, you are starting down the road to success with AI. I think to kind of prove our point, we’re going to step back and look at some of the things that Scriptorium has talked about and written in the past and see how it stacks up. And Bill, you found one. Let’s talk about that blog post that Sarah O’Keefe wrote. What was the date on that again?

    BS: It was 2014. And that is when we came up with the hierarchy of content needs. And it really wasn’t so much an invention as it was just a regurgitation of what it means to create good content. So we have a pyramid of content needs. At the bottom, we have available. So is the content available? Does it exist? Can someone get to it? I think we’ve mostly solved that problem, given the wealth of information we have out on the internet. But as we know, that information is not always useful. So we go up a rung, a layer, on that pyramid and see whether or not the content is accurate.

    And if it’s accurate, if it provides the correct information, that’s fantastic. Then we go up another level and see whether or not the content is actually appropriate. So it can be correct. It can exist. But is it appropriate? Does it meet a reader’s needs? And is it formatted in a way that works for the reader to ingest?

    Then we go up a step further and see whether or not the content is connected. And this is where we kind of get to the more modern aspect of content. Does it link out to correct additional resources? Is it available to people in a variety of means? And does it engage with the audience?

    And then finally, at the top of the pyramid, we have intelligent content. Is the content intelligent? And we’re not talking about AI here at all, but we are really talking about is the content fashioned in a way that it can be used intelligently across different media?

    AP: That it can be manipulated for different purposes. And that is quoting Sarah directly. And I think that is key here, because that is what AI does. It takes information and basically chops, slices, dices it, and provides it in a new way via a chatbot, for example.

    So that is that whole manipulation that Sarah is talking about. And we will post a link to the post in the show notes so you can read it in greater detail and see how well this hierarchy of content needs has stood up. And she even talks about, for example, integrating database content, how you can pull in other information like product specifications.

    If you think about it from an AI lens, I think that parallels pretty closely the idea of retrieval augmented generation, where you are pulling content from other sources and weaving it in with what an AI engine is providing you. So RAG, I think, could be interpreted as another way of integrating other information into the way that AI is processing that content.
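
    A toy sketch of the retrieval step in RAG, with invented chunks and a simple keyword-overlap score standing in for the vector search a real system would use:

    ```python
    chunks = [
        "Model X-900 supports duplex printing and stapling.",
        "Model X-100 is a single-function printer with no finisher.",
        "All models ship with a starter toner cartridge.",
    ]

    def retrieve(question: str, k: int = 2) -> list[str]:
        # Score each stored chunk by word overlap with the question.
        q_words = set(question.lower().split())
        return sorted(chunks, key=lambda c: -len(q_words & set(c.lower().split())))[:k]

    question = "Does the X-900 support duplex printing?"
    context = "\n".join(retrieve(question))
    # The retrieved chunks are woven into the prompt the model actually sees.
    print(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
    ```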

    BS: Right, I mean, because AI, it’s not really an audience, but it is a delivery point. There are some structural needs that have to be met there. But ultimately, you’re still writing for people. You might be writing in a way that allows the AI to repurpose and refactor the information so that the audience gets exactly what they’re looking for. But it still needs to be somewhat tailored to the needs of people, because AI in itself doesn’t care what the content is; it’s going to try to produce something for an eventual person to be able to read.

    AP: I think that in turn points to something else in our vast compendium of Scriptorium content. And that is a book that Sarah and I wrote, the first edition in 2000, which just kind of makes me shake my head. I know this is not a video podcast yet, but I’m shaking my head in disbelief. The book, Technical Writing 101, has three editions, published between 2000 and 2009. We will put a link in the show notes. You can still download the third edition, and by the way, it’s free. You can get a PDF or EPUB from our store, along with some more recent resources.

    But to me, I flipped through that book this morning, and I was genuinely surprised at how much of the advice on how to create good product content still holds true in this AI era. Everything about writing things in a modular way, being very systematic in structuring things, even if you’re not using a structured authoring tool: use a template, make things very standardized. These are all things that, yes, make for better, more consistent, standard techcomm product content for the person reading it. But let’s pretend AI is the person, and I’m doing air quotes here, “reading” it. It is going to do a better job of understanding, and again, I’m sort of personifying here, and I know that’s sort of a no-no.

    But if you feed AI, a large language model, content that is very structured, very templatized, standardized, and in bite-sized chunks, it is going to do a better job. And also, and this is very important, there’s the idea of metadata, which we do talk about briefly in that book. You need to be able to label content for different audiences. I’m thinking about someone sitting there trying to use a product, trying to use a piece of software, talking to a chatbot. And the chatbot is going to ask them, what product are you using? What’s the model number? All of those kinds of things. And now we’re getting to this whole idea of labeling and breaking things apart so that a chatbot can serve them, just like any user of a product.

    Let’s say somebody has a printer that’s on the highest end of the scale. They’re going to have a lot more features that apply to their model than someone who bought a more basic one. But the thing is, if your product content has not clearly labeled which features are in each of the models, the chatbot is going to spit out the wrong thing. So again, there’s this idea of breaking things up into discrete chunks and labeling them in a way where someone who wants specific information about a specific model can get it. And it doesn’t matter if it’s from a web page, from a PDF, from a printed book, God forbid in 2026, or from an AI chatbot. Those rules still apply. Those fundamental principles are still there.
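
    A minimal sketch of that labeling idea: each chunk carries metadata naming the models it applies to, so a chatbot can filter before answering. The field names and model numbers are invented for illustration:

    ```python
    chunks = [
        {"applies_to": ["X-900"], "text": "Use the finisher menu to staple output."},
        {"applies_to": ["X-100", "X-900"], "text": "Load paper in tray 1."},
    ]

    def for_model(model: str) -> list[str]:
        # Only hand the chatbot content labeled for the user's model.
        return [c["text"] for c in chunks if model in c["applies_to"]]

    print(for_model("X-100"))  # the basic model never sees finisher content
    ```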

    BS: Mm-hmm.

    AP: I think one of the biggest problems here is when people do not have those fundamentals already in place, right?

    BS: If they don’t have those fundamentals in place, they can’t get to the top of that pyramid that Sarah was talking about. And really, those fundamentals are the first three layers: content is available, content is accurate, and content is appropriate. If you can nail those three layers of the hierarchy of content needs, you are set to jump to connected and intelligent fairly quickly, because your content is already well written, standardized, and appropriate for different audiences.

    AP: So we’re right back to talking about the way you put content together, your content operations, and how you have to have these fundamental principles embedded in your processes to create content that goes all the way up to the very top of the hierarchy-of-content-needs pyramid.

    So then that begs the question: what is going to happen to your AI if you don’t have those fundamentals in place, if you aren’t all the way up that hierarchy of content needs? I’m afraid to tell you, your AI is going to fail. And this is something that I’ve said often, but it bears repeating, because clearly a lot of people high up the corporate food chain do not understand this.

    Merely slapping AI on top of content that is fundamentally outdated and incorrect is not, right now, going to fix those problems. It is not magically going to fix them, because what is AI going to do? It is going to regurgitate that bad information, acting like it knows what it’s talking about, telling your end users very definitively that you need to do this to make this happen, and it’s flat-out wrong. One day AI may be able to fix that, but right now, if your information, your content at the basic source level, is wrong, your AI, by extension, is going to be wrong. And that is the unglossy, unvarnished, hard truth that is still, I don’t think, seeping in like it should across the corporate world.

    BS: It really does come back to the fact that, despite the world changing on a day-to-day basis, the fundamentals have not changed. Nothing is new.

    AP: No, no. And if you have an AI initiative and you are part of the content world and your content operations aren’t up to snuff, this is a way to get funding to bring your content operations into the 21st century. And I don’t want to say that and sound glib and dismissive, but by the same token, I know for a fact there are a lot of companies out there who are still serving up their content locked up in PDFs that may be online. That is not going to fly. It doesn’t get you up the hierarchy of content needs, if you want to look at it from that perspective. So it is time to break free of this idea that you present content in one particular way.

    You have to look at content as something that is basically a commodity; it’s data that AI is going to manipulate and do whatever with to meet the needs and the wants of the people who are using the chatbots and other agents that are accessing that large language model.

    BS: And I think that’s a good place to leave it. Thanks, Alan.

    AP: Thanks, Bill, short and sweet, but needed to be said.

    Conclusion with ambient background music

    CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links.

  • Content Operations

    Check in on AI: The true measure of success for AI initiatives

    16/02/2026 | 32 mins.
    In this episode, Sarah O’Keefe and Alan Pringle explore how AI transforms content delivery from static documents into dynamic, consumer-driven experiences. However, the need for human-led governance is critical, and Sarah and Alan explore issues of accuracy, accountability, governance, and more. They challenge organizations to define AI success by its ability to deliver accurate, high-impact outcomes for the end user.

    Sarah O’Keefe: The metrics that are being used to measure the success of AI are all wrong. We should be measuring the success of various AI efforts based on, “Are people getting what they need? Are they having a successful outcome with whatever it is that they’re trying to do?” The metric we actually seem to be using is, “What percentage of your workflow is using AI? How many people can we get rid of because we’re automating everything with AI?” It’s the wrong metric. The question is, how good are the outcomes?

    Related links:

    Sarah O’Keefe: AI and content: Avoiding disaster

    Sarah O’Keefe: AI and accountability

    Alan Pringle: Structured content: a backbone for AI success

    Questions for Sarah and Alan? Register for our upcoming webinar, Ask Me Anything: AI in content ops.

    LinkedIn:

    Sarah O’Keefe

    Alan Pringle

    Transcript:

    This is a machine-generated transcript with edits.

    Introduction with ambient background music

    Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

    Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

    Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.

    Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

    End of introduction

    Alan Pringle: Hey everybody, I’m Alan Pringle, and today I’m here with Sarah O’Keefe, and we want to do something I’ve kind of dreaded, to be honest: a check-in on AI in the content space. I’m very ambivalent about this topic. Even two or three years in, there’s still a lot of hype, but there have also been some good things that have emerged.

    We need to talk about it fairly realistically. So, Sarah, get ready. Let’s see if I can avoid cursing during this. I’ll try my best. Legitimately, there are some things that we need to talk about, including the challenges, because I don’t think the content world is completely ready for a lot of what’s going on right now.

    Sarah O’Keefe: You know that we have AI that can remove cursing from podcasts, so I feel like we’re good here.

    AP: Well, also, it’s a challenge to me to behave in a PG-13 more family-friendly kind of way. So I’ll do my best. 

    SO: I have no idea what you’re talking about.

    AP: Yeah. So let’s start with the good, with where things are right now on the positives. What is AI doing well right now? And let’s kind of get beyond summarization. I think we can say objectively that right now, in general, AI does a very good job of summarizing existing content. But I think it’s doing a lot more beyond that, and we should touch on those things instead.

    SO: The first thing that I would point to is summarization, but specifically the use case of a chatbot on top of a large language model, an LLM, so now we’re talking about Claude, Gemini, ChatGPT, and all the rest of them, which has the ability to provide an end user with a way of accessing information, an information access point, that is different from what we had previously.

    In the olden days, you had a book, and you had to sort of flip it open and look at a table of contents or maybe an index and navigate to a page. Fine. Then along comes online content, and you can do full-text search, or you can go into an internet search, right? You type into the search bar, you get a bunch of results, you click, and you sort of, no, that’s not quite it. You modify your search string, you search again, and you sort of navigate your way to where you’re trying to go. With the interactivity of the, you know, ChatGPT class of tools, what happens is that I ask it a question and it gives me an answer. And then I say, that’s not quite what I wanted. And I can sort of zero in on exactly what I’m looking for and tell it, actually, make this easier. Or, I don’t understand the words you’re using; use simpler language. Give me more. Give me less. Give me a summary. Use this as a source. Do not use that as a source.

    It’s a new way to access information. People love it. There is something psychologically helpful about a conversational search. Now, there are obviously huge issues with this, particularly around people, you know, using chatbots as their therapists, which introduces all sorts of horrifying, horrifying ethical issues.

    AP: Personifying them as a person on the other end. Right.

    SO: But in the big picture, used well, it allows you to get to the information you’re looking for and get at it in the way that you want. 

    AP: There’s a control issue here. I don’t think the content consumer has ever had this level of control.

    SO: Yeah, and as a content consumer, that speaks to me. That is helpful. We’re seeing increasing use of, I would say, guardrails. So, not just slam out the AI with a bunch of stuff, but rather, we’ve put some guardrails around it, and there are various kinds of technologies that you can employ there. And that has been very helpful. And then the third thing I would point to is when we talk about generative AI and generating content, there’s a lot you can do in that sort of low-fidelity bucket. And what I mean here is, I need an image for a presentation, but the background is the wrong color, so I can just swap it out. Now, I can do that with Photoshop. Well, some people can do that with Photoshop.

    AP: Well, I was about to say, don’t think you or I should be saying we can do the Photoshop because we kind of can’t.

    SO: Right. Well, and that’s exactly it. So it’s lowered the bar, right? Because I can tell the AI to swap out the background, and it will. And it applies a mid-level Photoshop capability to this image. And now I have the image that I need with a dark background so that the white text shows up in my presentation, that kind of thing. 

    AP: Right. Yeah.

    SO: We can do low-stakes synthetic audio with this podcast, which, for the record, we are recording with actual human beings. But let’s say that Alan curses extensively and we need to swap it out. Well, we could pretty easily generate some synthetic audio that sounds like him and that PG-ifies the original wording into something that is, you know, cleaner. It would be way funnier to just bleep it, so I don’t know why we would do this, but…

    AP: Correct. Well, and it may come to that. The bottom line is, what you’re talking about here is things that have very low risk. This is more fun stuff. Compare the thought of doing some of what we’re talking about with content that describes how to use a medical device, for example. Not sure I want to go there with that. But for something low stakes, like some one-off presentation that you’re giving where maybe some humor is involved, I totally think that’s an acceptable use because there’s no risk there.

    SO: That’s really the key point, because let’s say you’re writing content for a new medical device. You probably have a version one of said medical device, and you’re doing a version two. So, okay, fine. We take the version one content and we sort of, you know, say, add color, because that’s what we added in version two, and update all this stuff automatically.

    But it then becomes very important to actually read that, look at that information, look at all the images, and make sure that everything is correct. And by the time you do that super carefully, you may have given back all the time that you saved when you basically made a copy and said, generate the new version. You have to be really careful with that, especially depending on what your stakes are in terms of regulatory or compliance requirements.

    You can, of course, get away with using AI, as you said, for low-stakes stuff. Now, here’s the big risk you run there, and we’re seeing this in my favorite example of low-stakes content, which is video games: the video game industry has seen huge amounts of pushback against AI-generated game content, because it’s not fun. It’s not creative. It feels flat. It’s not art, and it’s not fun to play. And so it just becomes a slog. Again, same thing. Did you use it for maybe some backgrounds here and there? Okay. Or did you use it to drive the story that you’re trying to establish or set up? You know, the enemies that you’re hypothetically fighting all have a certain sameness. Or you’re sort of stealthing your way around the map, and it turns out that the AI-generated enemies are really dumb, in that once they turn their back, you can do literally anything and they won’t notice, because it was poorly designed.

    AP: Right, yeah. And that’s true even in the film and entertainment industry. There’s been a tremendous amount of pushback for the very same reason. I read a review recently talking about a series of clips about history, I believe on YouTube, by a fairly well-known director I will not name.

    SO: Mm-hmm.

    AP: And some of the AI is frankly not done well. One reviewer basically said that when you look at the back of these AI-generated figures, like an AI-generated King George, the back of his head looks like a melted candle. This is not what we want here. If you’re focused on that sort of thing, you’re not paying attention to the message. But again, this is low-stakes content.

    We have started getting into the content creator point of view. We’ve talked about the consumer and how AI gives them much more control and flexibility in how they receive information. But let’s talk about what that means for the people who have to create the information, because it’s a huge shift on multiple levels. This idea of creating, especially in the product content world, these lovingly designed, page-based PDFs and whatever else, and even webpages, I hate to say it, those days are gone, or should be at this point.

    SO: Yeah, again, you know, we step back to books: you write the content, it goes through a manuscript process of some sort, and then it gets poured into a book. It gets printed on paper, which is about the least flexible thing you can imagine, right? Because I, as the book publisher, get to decide what font is on the page and at what size.

    And if you don’t like that font, well, maybe you can get your hands on a large print edition. Maybe you can get your hands on a braille edition. Maybe. But the form factor of the content was determined by the publisher of the content, or technically, the printer, you know, that physical book production process. PDF is not that different, in the sense that the content is bound into the PDF and it’s fixed.

    Now, you get a little bit more control because you can zoom in. There are some things you can do in PDF, but ultimately it’s more or less still a page format determined by the author/publisher/gatekeeper.

    So now we talk about the web and HTML. This is all pre-AI, right? HTML goes out there, and there’s actually a decent bit you can do in your browser. You can override the default font. You can override the default font size. You can say, I’m using dark mode or light mode or those kinds of things. 

    AP: Light mode, exactly.

    SO: If you have an e-book reader, you can override the default font or font size.

    AP: I need that font size jacked up, please. Thank you. 

    SO: We weren’t going to use that example. Right. Yeah. So you get a little bit more control over the presentation. Now, let’s talk about what AI does to this, particularly the large language models. I, as the author, create a whole bunch of content, and I put it somewhere. And the content consumer says…

    AP: I’ll use it.

    SO: Tell me about this concept or tell me about this thing or give me information about whatever. And they get a response to that prompt, which is a paragraph or two of, you know, here’s what you need to know. And then they say, make it easier, make it simpler, write this at a fourth grade level, write this at an eighth grade level. I’m a PhD in microbiology. Give me more detail. Right. You can change the writing level. You can say make the font bigger, make the font smaller, give it to me in a PDF, show it to me in a spreadsheet.

    AP: I’ve even seen someone create a podcast of this document and have two people talking about it, which was freaky, but you can do that.

    SO: Right. So as the author and the content creator and the backend people, right, the content people, we’re accustomed to taking our content and packaging it in certain ways. Like, here’s a topic for you, or here’s a PDF, or here’s a book, or here’s a deliverable, right, a package of content. With structured authoring, when that came in, we let go of this idea that we, as the authors, got to control the page presentation. That got automated into the system. So the person controlling the page presentation was the person who designed the publishing pipelines. But the publishing pipelines were designed on the backend by the authoring people. Now all of a sudden, we have no control over that end product. Just because I thought it should be a PDF or an HTML page, you can turn around and say, like you said, give it to me as a podcast, make me a video, show it to me in French, and the LLMs will do it.

    AP: The publishing pipeline got moved over the fence, basically, to the content consumer side, and they get to do what they want, more or less. That’s where things are headed.

    SO: So pre-AI, we talked about content as a service, right? We load up all the content in a database somewhere, and then you, as the end user of that content, or another machine, can reach over and say, give me some content out of there. But it was still pretty discrete: show me that topic, or show me that string. And what is fundamentally different about AI and large language models processing that content is the degree to which you can mix and match and rework, reformat, translate, and transform that content to be presented to you, the end user, in the manner of your choice.
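
    To make that contrast concrete, here’s a minimal sketch, not any particular vendor’s API: the content-as-a-service call returns a discrete, author-defined unit exactly as written, while the LLM-mediated call reshapes the same content to the consumer’s request. The content store, topic ID, and call_llm helper are all hypothetical, invented for illustration.

        # Content as a service (pre-AI): discrete retrieval of an author-defined unit.
        # A real system would be a CCMS or delivery API; a dict stands in here.
        CONTENT_STORE = {
            "installing-widget": "1. Unpack the widget. 2. Connect power. 3. Run setup.",
        }

        def get_topic(topic_id: str) -> str:
            # The consumer gets back exactly what the author published.
            return CONTENT_STORE[topic_id]

        def call_llm(context: str, prompt: str) -> str:
            # Hypothetical stand-in for whatever model you use (Claude, GPT, a local model).
            return f"[model-reshaped version of the topic, per: {prompt}]"

        topic = get_topic("installing-widget")
        print(topic)  # fixed form, chosen by the author/publisher

        # LLM-mediated access: the consumer states the form they want, and the
        # model mixes, reformats, translates, and transforms the content to match.
        print(call_llm(topic, "Rewrite at an eighth-grade level, as a list, in French."))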

    So as an author, I kind of hate this, right? Hey, you took my stuff and you mangled it and you presented it in Comic Sans, and how dare you? And that’s where we are. That authors get to create information, but they don’t get to control the manner and means of distribution or presentation or formatting or language of that information.

    AP: On the flip side of that, and here I am going to look on the sunnier side of things, which never happens, so this may be a pod-person version of me: if you, as a content creator, are no longer on the hook for thinking about the publishing pipelines and all of that sort of thing, theoretically, that should free you up to create better content on the back end, because you don’t have to think about all those things. Allegedly. I don’t know if it’s happening, but…

    SO: It’s very hard as an author to let go of that end product, the target that you’re headed for. But fundamentally, there’s a bigger problem, which is that even if I write the world’s greatest explanation of how to do something, that world’s greatest explanation of how to do something is not being presented to the end user as the thing I wrote. It’s being presented after being run through the transformer, the LLM, the processing that the AI can do when they ask for it. So I could literally write how to do X. And the end user says, hey, tell me how to do X. They are not going to get that chunk of information that I wrote. They’re going to get something reprocessed.

    Of course, now we ask the fundamental question, which is, is the reprocessed version going to be better or worse than what I wrote? And the answer is, it kind of depends on whether I am an above average writer with an above average understanding of what that end user wants, or whether I’m a below average writer with a below average understanding of what that end user wants.

    AP: To me, as a content creator, whether my version is better is almost irrelevant, because if the person receiving the information via the chatbot or whatever thinks that what they are getting is what they want, that’s all that really matters. The person on the receiving end of that information gets what they want and fine-tunes it to what they want. If they’re happy with it, then the content creator’s opinion about that is, I hate to say it, immaterial at this point.

    SO: Yeah, I kind of hate this timeline, because, you know, where does my voice go? And the answer is, it’s gone. But you’re right. Of course, what is the purpose of the technical and product information that we work on? The purpose is to enable people to use a product successfully. So if shoving it through an AI results in an outcome where that person uses the product successfully, then we’re good.

    AP: I don’t disagree.

    SO: That’s the purpose of the kind of thing that we produce. Looking at this, though, is where I see some of the big challenges going forward. First of all, we have to acknowledge that an enormous percentage of the technical content that’s out there is really bad. Like, terrible. Really, really bad, and it might be improved by a little trip through a chatbot that’s going to render it into actually grammatically correct English. That’s a thing.

    AP: Harsh but fair.

    SO: Yeah, I think you’re not the only one that’s going to have some bleeping issues in this podcast. But the problem that I see right now is that the metrics that are being used to measure the success of AI are all wrong. We should be measuring the success of various AI layers and chatbots and things based on are people getting what they need?

    AP: Yeah. Yeah.

    SO: Are they having a successful outcome with whatever it is that they’re trying to do? Does the search, or the process of that conversational whatever-it-is they’re doing, get them to the endpoint of, okay, I understand what I need to do, and I’m good, and I walk away? The metric we actually seem to be using is, what percentage of your workflow is using AI? How many people can we get rid of because we’re automating everything with AI? That’s the wrong metric. The question is, how good are the outcomes?

    AP: To me, with this idea of how much AI versus how much human effort, there’s a lack of, shall we say, human intelligence being applied here, because merely applying AI to something that is incorrect, bad, whatever, is fundamentally not going to fix it. It’s not going to magically fix it. That’s a huge disconnect for me when you’re talking about measuring outcomes.

    Whatever you dump into your large language model, if it is fundamentally bad, as in outdated and incorrect, right now I am pretty sure merely applying AI to it is not going to fix those two pretty gaping holes. And I don’t know what it is; people hear AI and they think there’s some magic involved. No, the underpinnings have to be good for that magic to be useful, basically.

    SO: And I think all of us have examples of asking the chatbot a question and getting answers that are just flat wrong. Or worse, they look plausible, like they’re in the form of a plausible answer, but then you read it carefully and you’re like, this doesn’t actually say anything. It’s just word salad. Which, since a chatbot is effectively the average of the underlying database of content, pretty much means that the underlying database of content doesn’t say anything useful on this topic. So I think the place that I kind of go with this is the question of accountability.

    AP: Yes.

    SO: Who is legally responsible for the outcomes? Now, pretty clearly, if I or an organization produces a user guide that covers a specific product and there is wrong information in that user guide, the organization is responsible. I mean, it’s your document, you’re responsible. Okay, if I, as an end user, query a public-facing LLM and get the wrong answer for something, and then I proceed to use that in my life, whose fault is that?

    Who is at fault when, and we saw this with GPS navigation when it first came out, people were following the map, right, and it would send them off a cliff, or it would send them into a construction area and they would drive off the side? Okay, whose fault is that? And the answer was always, well, it’s your fault, because look up from the map, and don’t drive past the sign that says do not enter, construction zone, cliff ahead.

    AP: Or one-way street. Right, yeah.

    SO: But AI doesn’t come with, I mean, it comes with warning labels, right? But we don’t see them. We don’t process them. What we see is a conversation where we say, tell me more about that. And it tells you more about that. And it feels as though you’re talking to a human. And therefore, when you push on something and say, are you sure? And it says yes, because what’s the typical answer when somebody says, are you sure? It’s yes.

    Is it actually sure? No, it’s not sentient. So if I query a public-facing LLM, it reprocesses a bunch of content and tells me how to do a thing that is in direct contradiction to what the official user documentation says, whose fault is that? I think it’s mine because I use the public-facing LLM. Now, what if the organization that makes the product puts up a chatbot and I query the organization’s chatbot? How do I do X? And especially if that chatbot is your frontline tech support, like you cannot get to a human. You have to go through the chatbot. I asked the chatbot a question, and it says, do it this way. And it happens to be wrong. Is the organization liable? I don’t know the answer, I think yes, but I’m not sure. And so fundamentally, yeah.

    AP: The bottom line here, yeah, we’re talking about governance here. The bottom line is governance, and there has to be some human-AI interaction here. There have to be these guardrails that you mentioned earlier, and that’s where humans have to be involved.

    SO: And the better the AI gets, the trickier this becomes. If it’s accurate half the time, then my hackles are up; I know it’s going to be wrong, so I treat it as wrong all the time. If it’s accurate 80% of the time, I sort of trust it; psychologically, I just assume it’s accurate all the time. So the better they get, the worse the errors are, because we don’t expect them.

    AP: That’s also dangerous. Yeah, right. Yeah.

    SO: Occasionally, very, very occasionally, I see the opposite. I had directions to go somewhere, and the directions were literally, put this address into Google Maps, but don’t do A, B, and C, because it’s wrong. Like, the directions to get to this location are incorrect; do not follow them. Because these days, our assumption is that the mapping apps just work.

    AP: And it’s not that it’s wrong most of the time, but I think part of this governance angle is we have to realize that AI is going to be wrong sometimes.

    SO: Pretty much, they just do.

    AP: And there are lots of reasons, which we won’t get into, that it can be wrong. So what are you going to do when it is wrong? How are you going to make sure it’s not wrong? Again, there’s this whole governance process that has to be in place. And again, I think this is where human intervention is going to be necessary, because I don’t think AI at this point has any business correcting itself in these matters. That seems sort of suboptimal to me.

    SO: Hmm. Yeah, I mean, hypothetically, you can tell it to check itself, and certainly there are some people doing that type of work. I think for me, fundamentally, the takeaways are that, like any other tool, there are some really useful productivity enhancements that we can and should be taking advantage of. To your point, there’s some really important governance work that needs to be done to ensure that your QA is appropriately scaled to the level of risk of your product. Medical device, very high. Silly gaming app, pretty low; don’t really care. And we need to think about guardrails, what it means to inject the right kind of content, and the various kinds of enablement tools that you can use to do that.
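
    One way to picture “QA appropriately scaled to the level of risk” is as an explicit policy table that the review workflow consults. This is a hypothetical sketch; the tiers, field names, and numbers are invented for illustration, not a prescription.

        from dataclasses import dataclass

        @dataclass
        class ReviewPolicy:
            human_review: bool    # does a person read every AI-touched topic?
            sme_signoff: bool     # does a subject matter expert approve it?
            sample_rate: float    # fraction of output that gets spot-checked

        # Governance effort as a deliberate, documented function of product risk.
        POLICIES = {
            "medical-device": ReviewPolicy(True, True, 1.0),       # check everything
            "enterprise-software": ReviewPolicy(True, False, 0.25),
            "gaming-app": ReviewPolicy(False, False, 0.05),        # light spot checks
        }

        def policy_for(product_class: str) -> ReviewPolicy:
            # Fail closed: an unknown product class gets the strictest policy.
            return POLICIES.get(product_class, ReviewPolicy(True, True, 1.0))

        print(policy_for("medical-device"))
        print(policy_for("brand-new-product"))  # unknown key, so strictest policy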

    And finally, this issue of AI as a content customer is, I think, really, really tricky, because from our point of view as content creators, it’s a new delivery mechanism, right? Just like a PDF or a piece of HTML or anything else like that. And it’s a delivery mechanism that allows the end user to control how they access the content, which means we have to do way more work around the guardrails, around what it means when they query the content and shape it to their own requirements.

    AP: Yeah, so things have progressed in the past two years, most definitely, especially in the content space. We’ve seen a lot of improvements. But there are still some big-picture things we have to work out. And I think it’s going to be interesting in the next year or two to see what happens. You briefly mentioned there are some companies who are setting up systems that can do a decent job of checking up on themselves. That’s not where everything is right now, but I think the better these systems get, and the better the guardrails that are put in place, they can start to figure out, this is wrong, I need to fix it, or, I need to update this with the latest information, let me go get it. So that is starting to happen more and more. I think it will become more a part of the LLM-to-chatbot process, but I don’t think we’re quite there yet. And I’m interested to see what happens next with that sort of scenario.

    SO: It’s definitely gonna be interesting. That much I’m sure about.

    AP: Yeah, I agree. So we managed to get through this without cursing, so that’s good. I think it turned out to be a more realistic conversation, and we kind of tuned out the hype, because the hype is what makes me grit my teeth and sometimes yell at LinkedIn when I see certain promoted posts that I think are full of you-know-what. So anyway, I think we’ll wrap it up there. Sarah, do you have any final points you would like to sign off with?

    SO: I think at the end of the day, when you try to contextualize, like, what is this AI thing and what does it mean for us, fundamentally, we can look at some of the other big-picture shifts that we’ve made. I’ve been known to pretty dismissively compare it to a spell checker, you know? You can use it and it’ll fix some stuff, but you’d better check, because it doesn’t know the difference between affect and effect. Although some of the grammar checkers now, maybe they do.

    So there’s that. But I think at the end of the day, if you’re looking at content strategy and content operations at an enterprise level, you really do have to say, okay, where does AI fit into my strategy, and how can we employ it productively to do what we need to do inside this organization to produce, manage, and deliver the content that we’re working on?

    AP: And I think we’re going to wrap up on that very good point. Thank you very much.

    SO: Thank you.

    Conclusion with ambient background music

    CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links.

    Questions for Sarah and Alan? Register for our Ask Me Anything: AI in content ops webinar!

    The post Check in on AI: The true measure of success for AI initiatives appeared first on Scriptorium.
  • Content Operations

    From black box to business tool: Making AI transparent and accountable

    26/01/2026 | 21 mins.
    As AI adoption accelerates, accountability and transparency issues are accumulating quickly. What should organizations be looking for, and what tools keep AI transparent? In this episode, Sarah O’Keefe sits down with Nathan Gilmour, the Chief Technical Officer of Writemore AI, to discuss a new approach to AI and accountability.

    Sarah O’Keefe: Okay. I’m not going to ask you why this is the only AI tool I’ve heard about that has this type of audit trail, because it seems like a fairly important thing to do.

    Nathan Gilmour: It is very important because there are information security policies. AI is this brand-new, shiny, incredibly powerful tool. But in the grand scheme of things, these large language models, the OpenAIs, the Claudes, the Geminis, they’re largely black boxes. We want to bring clarity to these black boxes and make them transparent, because organizations do want to implement AI tools to offer efficiencies or optimizations within their organizations. However, information security policies may not allow it.

    Related links:

    Sarah O’Keefe: AI and content: Avoiding disaster

    Sarah O’Keefe: AI and accountability

    Writemore AI

    LinkedIn:

    Nathan Gilmour

    Sarah O’Keefe

    Transcript:

    Introduction with ambient background music

    Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

    Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

    Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.

    Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

    End of introduction

    Sarah O’Keefe: Hey everyone. I’m Sarah O’Keefe. Welcome to another episode. I am here today with Nathan Gilmour, who’s the Chief Technical Officer of Writemore AI. Nathan, welcome.

    Nathan Gilmour: Thanks, Sarah. Happy to be here.

    SO: Welcome aboard. So tell us a little bit about what you’re doing over there. You’ve got a new company and a new product that’s, what, a year old?

    NG: Give or take, yep.

    SO: Yep. So what are you up to over there? Is it AI-related?

    NG: It is actually AI-related, but not AI-related in the traditional sense. Right now, we’ve built a product, or tool, that helps technical authoring teams convert from traditional Word or PDF formats, which make up the bulk of the technical documentation ecosystem, to structured authoring. That means they can get all of the benefits of reuse, easier publishing, and high compatibility with various content management systems, and can do it in minutes where traditional conversions could take hours. So it really helps authoring teams get their content out to the world at large in a much more efficient and regulated fashion.

    SO: So I pick up a corpus of 10 or 20 or 50,000 pages of stuff, and you’re going to take that, and you’re going to shove it into a magic black box, and out comes, you said, structured content, DITA?

    NG: Correct.

    SO: Out comes DITA. Okay. What does this actually … Give us the … That’s the 30,000-foot view. So what’s the parachute level view?

    NG: Perfect. Underneath the hood, it’s actually a very deterministic pipeline. A deterministic pipeline means that there is a lot more code supporting it. It’s not an AI inferring what it should do; there’s actual code that guides the conversion process first. So going from, let’s say, Word to DITA, there are tools within the DITA Open Toolkit that allow and facilitate that much more mechanically, rather than trusting an AI to do it. We know that AI does struggle with structure, especially as context windows expand; it becomes more and more inaccurate. So if we feed these models far more mechanically created content, they become much more accurate. You’re not trusting them with more than the nuanced parts of the process. There’s a big difference between determinism and probabilism: determinism is the mechanical conversion of something, while probabilism is allowing the AI to infer a process. That’s where we differ. Our process is much more deterministic, versus allowing the AI to do everything on its own.
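
    A minimal sketch of that determinism-first shape, to make the distinction concrete. This is not Writemore’s actual pipeline, and both helper functions are hypothetical; the point is that ordinary code owns the structural conversion, and the model is handed only one narrow, nuanced decision.

        def convert_docx_to_dita(path: str) -> str:
            # Deterministic step (hypothetical helper): a mechanical mapping of
            # Word styles to DITA markup, e.g. via a converter or XSLT toolchain.
            # The same input always yields the same output; nothing is inferred.
            return "<topic><title>Install the widget</title><body>...</body></topic>"

        def classify_topic_type(dita_xml: str) -> str:
            # Probabilistic step, deliberately narrow (hypothetical model call):
            # ask one focused question, such as "is this a task, concept, or
            # reference topic?", instead of trusting the model with the whole job.
            return "task"

        # Code does the structure; the model does only the nuance.
        dita = convert_docx_to_dita("user-guide.docx")
        print(classify_topic_type(dita))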

    SO: So is it fair to say that you combined the … And for deterministic, I’m going to say scripting. But is it fair to say that you combined the DITA OT scripting processing with additional AI around that to improve the results?

    NG: Correct. It also expedites the results: instead of having a human do much of the semantic understanding of the document, we allow the AI to do it as a far more focused task. Machines can read faster.

    SO: Okay. And so for most of us, when we start talking about AI, most people think large language model and specifically ChatGPT, but that’s not what this is. This is not like a front-end go play with it as a consumer. This is a tool for authors.

    NG: Correct. And even further to that, it’s a partner tool for authors. It allows them to continue authoring in a format that they’re familiar with. Let’s take Microsoft Word, for example. Sometimes the shift from Word to structured authoring can be considered an enormous upheaval. Allowing authors to continue authoring in a format that they’re good at and familiar with, and then having a partner tool that expedites the conversion to structured authoring so that they can maintain a single source of truth, makes things a little bit better, more manageable, and more reliable in the long run. So instead of effectively causing a riot with the authoring teams, we can empower them to continue doing what they’re good at.

    SO: Okay. So we drop the Word file in and magically DITA comes out. What if it’s not quite right? What if our AI doesn’t get it exactly right? I mean, how do I know that it’s not producing something that looks good, but is actually wrong?

    NG: Great question. And that’s where, prior to doing anything further, there is a review period for the human authors. In the event that the AI does make a mistake, it is completely transparent: the output, the payload, as we describe it, comes with a full audit report. Every determination that the AI makes is traced, tracked, and explained. And further to that, the humans are able to take that payload out anyway and open it up in an XML editor. So at this point in time, the content is converted; it is ready to go into the CCMS.

    Prior to doing that, it can go to a subject matter expert who is familiar with structured authoring to do a final validation of the content and make sure that it is accurate. The biggest differentiator, though, is that the tool never creates content. The humans need to create the content, because they are the subject matter experts within their field. They create the first draft. The tool takes it and converts it, but doesn’t change anything; it only works with the material as it stands. And once that is complete, it goes back into another human-centered review, so that there are audit trails and it is traceable. And there is a final touchpoint by a human prior to the final migration into their content management system.
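
    As a sketch of what “traced, tracked, and explained” might look like, with invented field names rather than Writemore’s actual schema: each conversion decision becomes a structured record, and because the payload is plain text, Python’s standard difflib can produce the before/after comparison Sarah asks about next.

        import difflib
        from dataclasses import dataclass

        @dataclass
        class AuditRecord:
            source_span: str       # where in the input the determination applies
            determination: str     # what the pipeline decided
            rationale: str         # why, so a reviewer can accept or reject it
            reviewed_by: str = ""  # filled in at the human sign-off step

        record = AuditRecord(
            source_span="page 3, heading 2",
            determination="mapped section to a <task> topic",
            rationale="numbered steps detected under the heading",
        )
        print(record)

        # "You can diff this": before/after review is an ordinary text diff.
        before = "To install, unpack the widget and connect power.\n"
        after = "<step>Unpack the widget.</step>\n<step>Connect power.</step>\n"
        print("".join(difflib.unified_diff(
            before.splitlines(keepends=True), after.splitlines(keepends=True),
            fromfile="source (extracted text)", tofile="converted.dita")))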

    SO: So you’re saying that basically you can diff this. I mean, you can look at the before and the after and see where all the changes are coming in.

    NG: Correct.

    SO: Okay. I’m not going to ask you why this is the only AI tool I’ve heard about that has this type of audit trail, because it seems like a fairly important thing to do.

    NG: It is very important because there are information security policies. AI is this brand-new, shiny, incredibly powerful tool. But in the grand scheme of things, these large language models, the OpenAIs, the Claudes, the Geminis, they’re largely black boxes. Where we want to come in is to bring clarity to these black boxes. Make them transparent, I guess you can say. Because organizations do want to implement AI tools to offer efficiencies or optimizations within their organizations. However, information security policies may not allow it.

    One of the added benefits that we have baked into the tool from a backend perspective is its ability to be completely internet-unaware. Meaning if an organization has the capital and the infrastructure to host a model, this can be plugged directly into their existing AI infrastructure and use its brain. Which, realistically, is what the language model is. It’s just a brain. So if companies have invested the time, invested the capital in order to build out this infrastructure, the Writemore tool can plug right into it and follow those preexisting information security policies. Without having to worry about something going out to the worldwide web.
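
    In practice, an “internet-unaware” deployment usually amounts to pointing the tool at a model endpoint inside your own network instead of a vendor’s cloud. A sketch, assuming the self-hosted model exposes an OpenAI-compatible chat API (as common serving stacks such as vLLM and llama.cpp do); the hostname and model name are placeholders.

        import json
        from urllib import request

        # Traffic never leaves the corporate network: the endpoint is a
        # self-hosted model server, not a public cloud API. Placeholder URL.
        ENDPOINT = "http://llm.internal.example:8000/v1/chat/completions"

        payload = {
            "model": "local-model",  # whatever your own infrastructure hosts
            "messages": [{
                "role": "user",
                "content": "Classify this topic: task, concept, or reference?",
            }],
        }

        req = request.Request(
            ENDPOINT,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with request.urlopen(req) as resp:
            print(json.load(resp)["choices"][0]["message"]["content"])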

    SO: So the implication is that I can put this inside my very large organization with very strict information security policies and not be suddenly feeding my entire intellectual property corpus to a public-facing AI.

    NG: That is entirely correct.

    SO: We are not doing that. Okay. So I want to step back a tiny bit and think about what it means, because it seems like the thing that we’re circling around is accountability, right? What does it mean to use AI and still have accountability? And so, based on your experience of what you’ve been working on and building, what are some of the things that you’ve uncovered in terms of what we should be looking for generally as we’re building out AI-based things? What should we be looking for in terms of accountability of AI?

    NG: The major accountability issue with AI is what it could look like if a business model changes. Let’s focus on the large players in the market right now. There will always be risk in using these large language models that are publicly facing right now. A terms-of-service change could mean that all of the information that organizations use to leverage these tools could become part of a training data set later on down the road. It’s hard to determine what will happen in the future.

    So the ability to use online and offline models encourages the development of very transparent tools. Even if the Writemore tool is using a cloud model, I still hold the model accountable to report its determinations. It’s not just making things up, so to speak. So there’s a lot that goes into it. There’s a lot that we don’t know about these tools, to be totally honest. We’re still trying to determine what this looks like in a broader picture, in a broader use case, because the industry is evolving so quickly that, quite simply, we don’t know what’s coming up.

    SO: It sounds to me as though you’re trying to put some guardrails around this, so that if I’m operating this tool, I can look at the before and after and say, “Don’t think so.” I mean, presumably it learns from that, and then my results get better down the road. Where do you think this is going? Where do you see the biggest potential, and where do you see the biggest risks or opportunities or … I’ll leave it to you as to whether this is a positively or negatively tilted question.

    NG: There’s a lot of potential to bring this into organizations that can’t use the public tools today. Like we mentioned earlier, organizations are looking into this. Municipalities are looking into AI. But with the state of the more open models right now, it’s very hard to say. So I know I keep circling back to the ability to use smaller language models. They are not only much more efficient to operate, they’re also, quite simply, cheaper to operate. We know that the large language models require enormous computing power. But if they’re given focused tasks, to either assist in the classification of topics or fulfill requests to pull files, you can get away with using smaller levels of compute. And in today’s day and age of computing, relatively speaking, the price is coming down as density goes up. So it’s cheaper to run a model at higher capacities than it ever has been. And it’s only going to improve over time.

    So empowering organizations to incorporate these tools to streamline their own workflows is going to be very important to them. And being able to follow their information security policies only makes the idea more compelling. On top of that, encouraging organizations to take full control of their documentation, without necessarily needing it to go out of house, allows them to keep internal costs down while still maintaining the security policy that their content doesn’t leave the organization. There’s always going to be room for partner organizations to come in and help with content strategy. But the conversion itself can be done in-house, using their tools, their content, and their teams. That really helps keep costs down; they drive the priority lists; they can do everything that they need to do to maintain that control.

    SO: Now, we’ve touched largely … Or we’ve talked largely about migration and format conversion. But there’s a second layer in here, right? Which we haven’t mentioned yet.

    NG: There is. There’s also the ability, during the conversion phase, to have an AI model do light edits. Being able to feed it a style guide to abide by means the churn that we see with these technical teams isn’t nearly as impactful. You can have technical authors still write their content. If a new person joins the team, they can author the material just as they normally would, and then the tool can take over to ensure that it’s meeting the corporate style guide, the corporate language, and so on, to expedite that process. So onboarding time for new team members shrinks as well. Like I said, it’s very much a partner tool to expedite the processing of content: authoring, conversion, migration, getting into a new CCMS. That’s the real empowerment behind it.
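
    A sketch of that style-guide-constrained “light edit,” with invented rules and a hypothetical model call: the style rules travel in the prompt, and the instructions explicitly forbid adding or removing information, matching the “tool never creates content” constraint above.

        STYLE_RULES = [
            "Use sentence case for headings.",
            "Use 'sign in', never 'log in'.",
            "Write steps in the imperative mood.",
        ]

        def build_style_edit_prompt(draft: str) -> str:
            # The key design choice: the model may only rephrase for style; it is
            # told not to add, remove, or reorder any technical information.
            return (
                "Edit the text to conform to these style rules. Do not add new "
                "information, remove steps, or change technical meaning.\n"
                + "\n".join(f"- {rule}" for rule in STYLE_RULES)
                + "\n\nText:\n" + draft
            )

        # In a real pipeline, this prompt would go to the model of your choice.
        print(build_style_edit_prompt("Log In To The Console and click Save."))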

    SO: And the style guide conformance. So I think we’re assuming that the organization has a corporate style guide?

    NG: Assuming, yes.

    SO: Okay. Just checking.

    NG: But then again, that’s…

    SO: Asking for a friend.

    NG: Of course.

    SO: So if they don’t have one, where does the corporate style guide come from?

    NG: That could be something that an organization generates internally or, as mentioned, builds by working with an external vendor who specializes in these kinds of things, so that all of their documentation follows the same voice and tone. The better the documentation, the better the trust in the content overall.

    SO: So, can we use the AI to generate the corporate style guide?

    NG: Probably. Yes. Short answer, yes. Longer answer, not without very close attention to it.

    SO: And doesn’t that also assume that we have a corpus of correctly styled content so that we can extract the style guide rules?

    NG: There’s a lot more. Yeah.

    SO: So what I’m working my way around to is, if we have content chaos, if you have an organization that doesn’t have a style guide, doesn’t have consistency, doesn’t have all these things, can you separate out what work the humans have to do and what work the machine can do to get to structured, consistent, correct voice and tone and all the rest of it? How do you get from the primordial soup of content goo to structured content in a CCMS?

    NG: Great question. Typically, that starts with education. We work with the teams to identify these gaps first. We don’t just throw in a tool and say, “Good luck, hope for the best,” because we see time and time again, even in manual conversion processes, that that simply doesn’t work. Taking the time to work with teams, providing them with the skills and the knowledge to be successful, serves a much longer-term positive outcome. If we educate these teams on what any tool realistically needs, the accuracy of the tool goes up in the long run. So you’re seeing multiple benefits on multiple sides.

    So to your point about primordial soup: working with teams to identify these gaps and issues, and working to identify the standards that should go into the content before anything else, not only sets them up for success in the long run, but also sets up any tools that they want to implement down the road. It all starts with strong content going in, because, as the adage goes, garbage in, garbage out. So if we can clean up the mess first, or work with the teams first to establish these standards, then the quality of the output only goes up.

    SO: Yeah. And to me, that’s the big takeaway, right? We have these tools, and we can do interesting things with them, but at the end of the day, we have to augment them with the hard-won knowledge of the people. You mentioned the subject matter experts, the domain experts, the people inside the organization who understand the regulatory framework or the corporate style guide, all of those guardrails that make up what it is to create content in this organization, content that reflects this organization’s priorities and culture and all the rest of it.

    NG: And taking the time to educate users is a far less invasive process than exporting bulk material, converting it manually, and handing it back. Because realistically, if we take that road, we’re not educating the users; we’re not empowering them to be successful in the long run. All we’ll end up doing is all the hard work, and then in one, two, five years, we run into the same issue: we’re back to the primordial soup of content, and it’s another mess. So if we start with the education and the empowerment, and then work toward the implementation of tools, the longer-term success will be realized.

    SO: Well, I think, I mean, that seems like a good place to leave it. So Nathan, thank you so much. This was interesting and I look forward to seeing where this goes and how it evolves over the next … Well, we’re operating in dog years now, so over the next month, I guess.

    NG: So true. And thanks, Sarah, for having me on.

    SO: Thanks, and we’ll see you soon.

    Conclusion with ambient background music

    CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links.

    For more insights on AI in content operations, download our book, Content Transformation.

    The post From black box to business tool: Making AI transparent and accountable appeared first on Scriptorium.
  • Content Operations

    Futureproof your content ops for the coming knowledge collapse

    17/11/2025 | 32 mins.
    What happens when AI accelerates faster than your content can keep up? In this podcast, host Sarah O’Keefe and guest Michael Iantosca break down the current state of AI in content operations and what it means for documentation teams and executives. Together, they offer a forward-thinking look at how professionals can respond, adapt, and lead in a rapidly shifting landscape.

    Sarah O’Keefe: How do you talk to executives about this? How do you find that balance between the promise of what these new tool sets can do for us, what automation looks like, and the risk that is introduced by the limitations of the technology? What’s the roadmap for somebody that’s trying to navigate this with people that are all-in on just getting the AI to do it?

    Michael Iantosca: We need to remind them that the current state of AI still carries with it a probabilistic nature. And no matter what we do, unless we add more deterministic structural methods to guardrail it, things are going to be wrong even when all the input is right.

    Related links:

    Scriptorium: AI and content: Avoiding disaster

    Scriptorium: The cost of knowledge graphs

    Michael Iantosca: The coming collapse of corporate knowledge: How AI is eating its own brain

    Michael Iantosca: The Wild West of AI Content Management and Metadata

    MIT report: 95% of generative AI pilots at companies are failing

    LinkedIn:

    Michael Iantosca

    Sarah O’Keefe

    Transcript:

    Introduction with ambient background music

    Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

    Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

    SO: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.

    Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

    End of introduction

    Sarah O’Keefe: Hey everyone, I’m Sarah O’Keefe. In this episode, I’m delighted to welcome Michael Iantosca to the show. Michael is the Senior Director of Content Platforms and Content Engineering at Avalara and one of the leading voices both in content ops and understanding the importance of AI and technical content. He’s had a longish career in this space. And so today we wanted to talk about AI and content. The context for this is that a few weeks ago, Michael published an article entitled The coming collapse of corporate knowledge: How AI is eating its own brain. So perhaps that gives us the theme for the show today. Michael, welcome.

    Michael Iantosca: Thank you. I’m very honored to be here. Thank you for the opportunity.

    SO: Well, I appreciate you being here. I would not describe you as anti-technology, and you’ve built out a lot of complex systems, and you’re doing a lot of interesting stuff with AI components. But you have this article out here that’s basically kind of apocalyptic. So what are your concerns with AI? What’s keeping you up at night here? 

    MI: That’s a loaded question, but we’ll do the best we can to address it. I’m a consummate information developer, as we used to call ourselves. I just started my 45th year in the profession. I’ve been fortunate that not only have I been mentored by some of the best people in the industry over the decades, but I was also very fortunate to begin with AI in the early 90s, when it was called expert systems. And then, through the evolution of Watson and when generative AI really hit the mainstream, there was no surprise for those of us who had been involved for a long time; we were already pretty well versed. What we didn’t expect was acceleration at this speed. So what I like to say sometimes is that the thing that is changing fastest is the rate at which the rate of change is changing. And that couldn’t be more true than today. But content and knowledge is not a snapshot in time. It is a living, moving organism, ever evolving. And if you think about it, the large language model companies spent a fortune on chips and systems to train the big models on everything that they could possibly get their hands and fingers into. And they did that originally several years ago, and the assumption, especially for critical knowledge, is that that knowledge is static. Now, they do rescan the sources on the web, but that’s no guarantee that those sources have been updated, or the new content conflicts with or gets confused with the old content. How do they tell the difference between the 13 different versions of IBM DB2, and how you do different tasks across those 13 versions? And can you imagine, especially in software, where a lot of us work, the thousands and thousands of changes that are made to those programs, in the user interfaces and in the functionality?

    MI: And unless that content is kept up to date, and not only the large language models reconsume it, but also the local vector databases on which a lot of chatbots and agentic workflows are being based, you’re basically dealing with out-of-date and incorrect content. In many doc shops, the resources are just not there to keep up with that volume and frequency of change. So we have a pending crisis, in my opinion. And the last thing we need to do is reduce the people, the knowledge workers, who not only create new content but also update it and deal with the technical debt, so that what I think is a house of cards doesn’t collapse.

    SO: Yeah, it’s interesting. And as you’re saying that, I’m thinking we’ve talked a lot about content debt and issues of automation. But for the first time, it occurs to me to think about this more in terms of pollution. It’s an ongoing battle to scrub the air, to take out all the gunk that is being introduced and that has to be removed on an ongoing basis. Plus, you have this issue that information decays, right? In the sense that when I published it a month ago, it was up to date. And then a year later, it’s wrong. It evolved, entropy happened, the product changed. And now there’s this delta, this gap, between the way it was documented and the way it is. And it seems like that’s what you’re talking about: that gap of not keeping up with the rate of change.

    MI: Mm-hmm. Yeah. I think it’s even more immediate than that. I think you’re right, but we need to remember that development cycles have greatly accelerated. When you bring AI for product development into the equation, we’re now looking at 30- and 60-day product cycles. When I started, a product cycle was five years. Now it’s a month or two. And say we start using AI to draft new content, just brand-new content, forget about updating the old content for a moment, and we’re using AI to do that in the prototyping phase, moving that further left, upfront. We know that between then and code freeze there are going to be numerous changes to the product, to the function, to the code, to the UI. It’s always been difficult to keep up with that in the first place, but now we’re compressed even more. So we now need to look at how AI helps us even do that piece of it, let alone deal with a corpus that is years and years old and has never had enough technical writers to keep up with all the changes. So now we have a dual problem, including new content with this compressed development cycle.

    SO: I mean, the AI hype says we essentially don’t need people anymore, and the AI will do everything from coding the thing to documenting the thing to, I guess, buying the thing via some sort of agentic workflow. But you’re deeper into this than nearly anybody else. What is the promise of the AI hype, and what’s the reality of what it can actually do?

    MI: That’s just the question of the day. Those of us who are working in shops that have engineering resources, I have direct engineers who work for me and an extended engineering team, and so do the likes of Amazon and other sizable shops with resources. But we have a lot of shops that are smaller. They don’t have access to their own dedicated content systems engineers, or even to their IT team, to help them. First, I want to recognize that we’ve got a continuum out there, and the commercial providers are not providing anything to help us at this point. So either you build it yourself today, and that’s happening, people are developing individual tools using AI, or the more advanced shops are looking at developing entire agentic workflows.

    And what we’re doing is looking at ways to accelerate that compressed timeframe for the content creators. And I want to use “content creators” a little more loosely, because as we move the process left, we involve our engineers, our programmers, earlier in the phase, like they used to be, by the way. They used to write big specifications in my day. Boy, I want to go into a Gregorian chant: “Oh, in my day!” But they don’t do that anymore. And basically, the role of the content professional today is that of an investigative journalist. And you know what we do, right? We scrape and we claw. We test, we use, we interview. We use all of the capabilities of learning, of association, assimilation, synthesis, and of course, communication. And it turns out that writing is only roughly 15% of what the typical writer does in an information developer or technical documentation professional role. Which is why we have a lot of different roles, by the way, and if we’re going to replace or accelerate people with AI, it has to handle all the capabilities of all those roles. So where we are today is that some of the more leading-edge shops are going ahead, and we’re looking at ways to ingest new knowledge and use that new knowledge with AI to draft new or updated content. But there are limitations to that. So I want to be very clear: I am super bullish on AI. I use it every single day. I’m using it to help me write my novel. I’m using it to learn about astrophotography. I use it for so much. But when the tasks are critical, when they’re regulatory, when they’re legal-related, when there’s liability involved, that’s the kind of content that we cannot afford to get wrong. We have to be right. We have to be 100% right, in many cases.

    Whereas with other kinds of applications, we can very well afford to be wrong. I always say AI and large language models are great on general knowledge that’s been around for years and evolves very slowly. But some things move and change very quickly; in my business, it’s tax rates. There are thousands and thousands of jurisdictions. Every tax rate is different, and they change them. So you have to be 100% accurate, or you’re going to pay a heck of a financial penalty if you’re wrong. So we are moving left. We are pulling knowledge from updated sources, things like videos that we can record and extract and capture, Figma designs, even code, to the limited degree that there are assets in there that can be caught, and other collateral, and we’re able to build out initial drafts. It’s pretty simple. Several companies are doing this right now, including my own team. And then the question comes: how good can it be initially? What can we do to improve that, to make it as good as it can be? And then, what is the downstream process for ensuring the validity and quality of that content? What are the rubrics that we’re going to use to govern that? And therein is where most of the leading edge, or bleeding edge, or even hemorrhaging edge, is right now.

    SO: Yeah, and this is not really a new problem, and it’s not specific to AI either. We’ve had numerous projects where there was a delta between the as-designed documentation, let’s say the product design docs, the engineering content, and the code, and the actual reality of the product walking out the door, the as-built product. Those were the resources, all that source material that you’re talking about, that we claw and scrape at. And I’d also like to give a shout-out to the role of the anonymous source for the investigative journalists, because I feel like there’s some important stuff in there. But you go in and you get all this as-designed stuff: here’s the spec, here’s the code, here are the code comments, or here’s the CAD for this hardware piece we’re shipping. But the thing that actually comes down the factory assembly line or through the software compiler is different from what was documented in the designs, because reality sets in and changes get made. In many, many cases, the role of the technical writer was to ensure that the content they produced represented reality and not the artifacts they started from. So there’s a gap, and their job is to close that gap so that the document goes out and it’s accurate. And when we talk about AI or automated workflows, any automation that does not take into account the gap between design and reality is going to run into problems. The level of problem depends on the accuracy of your source materials. Now, I wrote an article the other day that referred to the 100% accurate product specification. I don’t know about you, but I have never seen one of those in my life.

    MI: Hahaha that’s absolutely true. That’s really true. 

    SO: The promise we have here is that AI is going to speed things up, automate things, and make us more productive. And I think you and I both believe that’s true at a certain level. How do you talk to executives about this? How do you find the balance between the promise of what these new tool sets and automation can do for us, and the risk introduced by the limitations of the technology itself? What does that conversation look like? What are the points you try to make? What’s the roadmap for somebody who’s trying, maybe in a smaller organization as you said, to navigate this with people who are all-in on “just get the AI to do it”?

    MI: That’s a great question too, because we need to remind them that the current state of AI still carries a probabilistic nature. No matter what we do, unless we add more deterministic, structural methods to guardrail it, things are going to be wrong even when all the input is right. AI can still take a collection of collateral and get the order of the steps wrong. It can still include extra things or do too much; as professional writers, we’ve been trained to write minimalistically. We can control some of that through prompting, and some of it can be done with guardrails. When you think about writing tech docs, we’re documenting APIs or documenting tasks, and we’ve always been heavily task-oriented. You can extract all the correct steps, and all the correct steps in the right order, but what doesn’t come along with them, all too frequently and almost universally, is the context behind them: the why of it. I always say we can extract great things from code for APIs, like endpoints, gets and puts, things like that. That’s great for creating reference documentation for programmers.
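    One concrete reading of “deterministic methods to guardrail it” is to run rule-based validation over the model’s draft before it moves downstream. A minimal sketch, assuming the draft arrives as a JSON-like task topic; the field names here are invented for illustration.

```python
# Deterministic checks on a probabilistic draft: the model's output
# must pass fixed, rule-based validation before moving downstream.
draft_task = {
    "title": "Install the agent",
    "steps": [
        {"n": 1, "text": "Download the installer."},
        {"n": 3, "text": "Run the installer."},  # out of order: caught below
    ],
}

def validate_task(task: dict) -> list[str]:
    errors = []
    if not task.get("title"):
        errors.append("missing title")
    steps = task.get("steps", [])
    if not steps:
        errors.append("no steps")
    # Step numbers must be exactly 1..N in order; an LLM can scramble them.
    expected = list(range(1, len(steps) + 1))
    if [s.get("n") for s in steps] != expected:
        errors.append("steps out of order or missing")
    # Minimalism guardrail: flag bloated steps for human review.
    errors += [f"step {s['n']} too long" for s in steps if len(s["text"]) > 200]
    return errors

print(validate_task(draft_task))  # -> ['steps out of order or missing']
```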

    But the code doesn’t tell you the why, and it doesn’t tell you the exact steps. Now, maybe your Figma does. If your Figma and your design docs have been done really well and comprehensively, that can mitigate it tremendously. But what have we done in this business? We’ve actually let go more UX people than probably even writers, which is counterproductive. And then you’ve got things like the happy path and the alternate paths that can exist through the use of a product, or the edge cases, the what-ifs that occur. We might be able to, and should, do better with the happy path, but the happy path is not the only path. These are multifunction beasts that we’ve built. When we built iPhone apps, we often didn’t need documentation, because they did one thing and they did it really well. But take a piece of middleware, and it can be implemented a thousand different ways. You’re going to document it by example and maybe give some variants; you’re not going to pull that from a Figma design, and you’re not going to pull that from code. There’s too much of it there. Only human judgment can look at it and say: this is important, this is less important, this is essential, this is non-essential, to actually deliver useful information to the end user. And we need to be able to show what we can produce and continue to iterate and make it better and better, because someday we may actually get pretty darn close. With support articles and completed support case payloads, we were able to develop an AI workflow that was very often 70% to 100% accurate and ready to publish.
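    Michael’s contrast between what code gives you (endpoints, gets and puts) and what it doesn’t (the why) is easy to see if you generate reference stubs from an OpenAPI-style description. A toy sketch; the spec fragment is invented.

```python
# Generating reference-doc stubs from an OpenAPI-style spec fragment.
# The endpoints and verbs come along for free; the "why" does not.
spec = {
    "paths": {
        "/orders": {
            "get":  {"summary": "List orders"},
            "post": {"summary": "Create an order"},
        },
        "/orders/{id}": {
            "get": {"summary": "Fetch one order"},
        },
    }
}

def reference_stubs(spec: dict) -> list[str]:
    stubs = []
    for path, methods in spec["paths"].items():
        for verb, meta in methods.items():
            # Summary and signature are extractable; context is not.
            stubs.append(
                f"{verb.upper()} {path}: {meta.get('summary', 'TODO')}"
                "  [why/when-to-use: needs a human or design docs]"
            )
    return stubs

for line in reference_stubs(spec):
    print(line)
```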

    But when you talk about user guides and complex applications, it’s another story, because somebody builds a feature for a product, and that feature boils down not into a single article but into an entire collection of articles, typed into the kind of breakdown we do for progressive disclosure: concepts, tasks, references, Q&A. So AI has to be able to do something much more complex, which is to look at content, classify it, and apply structure to separate those concerns. Because we know that when we deliver content in the electronic world, we’re no longer delivering PDF. Well, most of us are hopefully not delivering PDF books made up of long chapters that intersperse all of these different content types, given the way content is consumed now, certainly not for AI and AI bots. So maybe the bottom line here is that we need to show what we can do. We need to show where the risks are and document them, and then we need the owners, the business decision makers, to see those risks, understand those risks, and sign off on those risks. If they sign off on the risks, then I, as a technology developer and an information developer, can sleep at night, because I was clear on what it can do today. And that is not a statement that it won’t be able to do more tomorrow; it’s only a today statement, so that we can set expectations. That’s the bottom line. How do we set expectations when there’s an easy button, like the one Staples put in our face? That’s the mentality around AI: press a button and it’s automatic.
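    The classify-and-separate step Michael describes could start as simply as typing each block of a draft into concept, task, reference, or Q&A. A toy sketch with keyword rules standing in for a real model; the heuristics are deliberately crude.

```python
# Toy classifier: split a mixed draft into DITA-style content types
# (concept / task / reference / qa). Keyword rules stand in for a
# real model and will obviously misfire on real content.
def classify(block: str) -> str:
    text = block.lower()
    if text.rstrip().endswith("?") or text.startswith("q:"):
        return "qa"
    if any(w in text for w in ("click", "select", "step")):
        return "task"
    if any(w in text for w in ("parameter", "default", "returns")):
        return "reference"
    return "concept"

blocks = [
    "Widgets let you group related alerts.",            # concept
    "Click Add, then select a widget type.",            # task
    "The timeout parameter defaults to 30 seconds.",    # reference
    "Q: Can I nest widgets?",                           # qa
]
for b in blocks:
    print(classify(b), "->", b)
```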

    SO: Yeah, and I did want to briefly touch on knowledge base articles, because they’re a really interesting problem. In many cases, you have knowledge base articles that are essentially bug fixes or edge cases: when I hold my finger just so and push the button over here, it blue-screens.

    MI: Mm-hmm.

    SO: And that article can be very context-specific in the sense of you’re only going to see it if you have these five things installed on your system. And/or it can be temporal or time-limited in the sense that, while we fixed the bug, it’s no longer an issue. Okay. Well, so you have this knowledge-based article and you feed it into your LLM as an information source going forward, but we fixed the bug. So how do we pull it back out again?

    MI: I love that question. 

    SO: I don’t!

    MI: I love it. I’ve actually been working on this very particular problem for a couple of years. The first problem we have, Sarah, is that we’ve been so resource-constrained that when doc shops built an operations model, the last thing they invested in was operations and operations automation. When I’m at a conference with a big room of 300 professional technical doc folks, I love asking a simple question: how do you track your content? Inevitably, I get, “Yeah, well, we do it in Excel spreadsheets.” When I ask who actually has a digital system of record, I get a few hands. And then I ask: does that digital system of record, covering every piece of documentation you’ve ever published, span just the product doc, or does it span more than product doc: your developer, partner, learning, and support content, all these different things? Because the customer doesn’t look at us as those different functions. They look at us as one company, one product. And inevitably, I’m lucky if I get one hand in the audience saying, yes, we’re actually doing that. So the first thing they don’t have is a contemporary, digital system of record that lets us know, and automate notifications about, when a piece of documentation should either be re-reviewed and revalidated, or retired and taken out.
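    A digital system of record doesn’t have to be elaborate to beat a spreadsheet. Here is a minimal sketch of per-article lifecycle metadata with automated re-review and retire flags; the fields, IDs, and dates are invented for illustration.

```python
# Minimal system of record: every published article carries review
# and retirement metadata so stale content surfaces automatically.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Record:
    article_id: str
    audience: str                 # product, developer, partner, learning, support
    published: date
    review_every_days: int
    retire_on: date | None = None  # e.g. set when the bug a KB article covers is fixed

def needs_action(r: Record, today: date) -> str | None:
    if r.retire_on and today >= r.retire_on:
        return "retire"            # pull it out of the corpus (and the vector index)
    if today >= r.published + timedelta(days=r.review_every_days):
        return "re-review"
    return None

records = [
    Record("KB-1041", "support", date(2024, 1, 5), 180, retire_on=date(2024, 6, 1)),
    Record("UG-0007", "product", date(2025, 1, 10), 180),
]
for r in records:
    print(r.article_id, "->", needs_action(r, date(2025, 9, 1)))
# KB-1041 -> retire      (the bug was fixed; the article must come back out)
# UG-0007 -> re-review   (review window elapsed)
```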

    The other problem we have is that almost all of these AI implementations and companies, not all of them, but most, were based on building vector databases. And what they did, often completely ignoring the doc team, was just go out to the different sources they had available: Confluence, SharePoint. If you had a CCMS, they’d ask you for access to your CCMS or your content delivery platform, and they’d suck it in. They may date-stamp it, which is okay but pretty rudimentary. They may even have methods for rereading those sources every once in a while, but unless they’re rebuilding the entire vector database, how would they even replace a fragment of what used to be whole topics and whole collections of topics? Because what did they do when they ingested the content? They shredded it into a million different pieces, since the context windows for large language models have token limits. Maybe the windows are bigger today, but they’re still limited. This is why we wrote the paper, did the implementation, and shared with the world what we call the document object model knowledge graph: we needed a way, outside of the vector database, to say, go look over here, and retrieve the original topic, or a collection of topics, or related topics, in their entirety, to deliver to the user. And again, unless we update that content rather than treating it like a frozen snapshot in time, we’ll still have those content debt problems. It’s becoming a much bigger problem now. It wasn’t as big a problem when we put out chatbots, which we’ve been building for what, two, three, four years now. Everybody celebrated and popped the corks: we can deflect some percentage of support cases; customers can self-service. And I always talk about the precision paradox: once you reach a certain ceiling, it gets really hard to increment above that 70%, 80%, 85%, 90% window. And as you get closer and better, the tolerance for being wrong drops like a rock. Now you have a real big problem.
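    The document object model knowledge graph, as described here, boils down to keeping an authoritative map from shredded chunks back to whole topics and their relations. A minimal sketch of that retrieval idea, with plain dictionaries standing in for a real graph store; the published implementation will differ.

```python
# Sketch: recover whole topics from vector-search chunk hits.
# Plain dicts stand in for a real graph store; this shows the idea,
# not the paper's implementation.
topics = {
    "t1": {"title": "Install the agent", "body": "...full topic...",
           "related": ["t2"]},
    "t2": {"title": "Agent prerequisites", "body": "...full topic...",
           "related": []},
}
chunk_to_topic = {"c17": "t1", "c18": "t1", "c42": "t2"}  # built at index time

def expand_hits(chunk_ids: list[str]) -> list[dict]:
    """Map chunk hits back to whole topics, then pull in related topics,
    so the model sees complete units instead of shredded fragments."""
    seen, out = set(), []
    for cid in chunk_ids:
        tid = chunk_to_topic.get(cid)
        for t in ([tid] + topics[tid]["related"]) if tid else []:
            if t not in seen:
                seen.add(t)
                out.append(topics[t])
    return out

hits = ["c17", "c18"]  # two shredded chunks of the same topic
print([t["title"] for t in expand_hits(hits)])
# -> ['Install the agent', 'Agent prerequisites']
```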

    So how do we build these guardrails to be more deterministic, to mitigate the probabilistic risk and the reality that we have? The problem is that people are still looking for fast and quick, not right. When I say right, I mean building out things like ontologies and leveraging the taxonomies we labored over, with all of that metadata that never even gets into the vector database, because they strip it all away in addition to shredding the content. If we don’t start building things like knowledge graphs and retaining all of that knowledge, we’re compounding the problem: now we have debt, and we have no way to fix the debt. And now we get into the new world of agentic workflows, which is the true bleeding edge right now, where you have sequences of both agentic and agentive steps. The difference between those two, by the way, is that agentic is autonomous: there’s no human doing that task, it’s just doing it. Agentive has a human in the loop, helping. When you’ve got a mix of agentive and agentic processes in a business workflow, you’ve got to worry about what happens if something goes wrong early in the sequence of that workflow. And this doesn’t apply just to documentation, by the way. We’ll be seeing companies take very complex workflows in finance, marketing, business planning, and reporting, and map out the workflow their humans do. There are hundreds of steps, if not more, and many roles involved in those workflows. As we map those out and ask where we can inject AI, not as individual tools, like separately using a large language model or a single agent, but strung together to automate a complex business workflow with upstream and downstream dependencies, how are we going to survive and make this work? I think that’s why you saw the MIT study come out saying that roughly only 5% or so of AI projects are succeeding. We did the easy stuff first. We did the chatbots, and they could be lossy in terms of accuracy. But when you get to these agentic workflows that we’re building, literally coding as we speak, you’re facing a whole different ballgame, where precision and currency really matter.
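    Michael’s worry about an early error propagating through a mixed agentic/agentive chain suggests an obvious mitigation: a deterministic check between steps that either passes the output forward or stops and escalates to a human at the point of failure. A minimal sketch with invented step and check functions.

```python
# Sketch: an agent chain where every step's output must pass a
# deterministic check before the next step runs; failures escalate
# to a human instead of propagating downstream.
from typing import Callable

Step = tuple[str, Callable[[str], str], Callable[[str], bool]]

def run_chain(payload: str, steps: list[Step]) -> str:
    for name, run, check in steps:
        payload = run(payload)
        if not check(payload):
            # Agentive fallback: a human in the loop at the failure
            # point, not after the whole workflow has gone wrong.
            return f"ESCALATE at '{name}': human review needed"
    return payload

steps: list[Step] = [
    ("extract", lambda p: p + " | extracted", lambda out: "extracted" in out),
    ("draft",   lambda p: p + " | drafted",   lambda out: len(out) < 25),  # fails
    ("publish", lambda p: p + " | published", lambda out: True),
]
print(run_chain("case-123", steps))
# -> ESCALATE at 'draft': human review needed
```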

    SO: Yeah, and I think we’ve really only scratched the surface of this. Both of the articles you’ve mentioned, the one I started with and the one you mentioned in this context, we’ll make sure we get those into the show notes. I believe they’re on your, is it Medium? On your website. So we’ll get those links in there. Any final parting words in the last, I don’t know, fifteen seconds or so?

    MI: That’s good. I want to tell you the good news and the bad news for tech doc professionals. What I’m seeing in the industry hurts me. There’s a lot of excuse-making right now, not just in the tech doc space but across jobs, where AI is being used as an excuse to make business decisions to scale back. It may take some time until the impact of some poor business decisions reflects itself, but reality is going to hit. The question is, how do we navigate the interim? I’m confident that we will. Those of us who are building the AI, I feel like I’m evil and a savior at the same time. I’m evil because I’m building automation that can speed people up and make them much more productive, which potentially means you need fewer people. At the same time, when we do it ourselves, rather than an engineer who doesn’t even know the documentation space, we get to redefine our space ourselves and not leave it to the whims of people who don’t understand the incredible intricacy and dependencies of creating what we know as high-quality content. So we’re in this tumult right now, and I think we’re going to come out of it. I can’t tell you what that window looks like, and there will be challenges along the way, but I would rather see this community redefine its own future in this transformation, because the transformation is unavoidable. It’s not going away; it’s going to accelerate and get more serious. If we don’t define ourselves, others will. I think that’s the message I want our community to take away. When we go to conferences and show what we’re doing, openly sharing all of it, that’s not “hi, look at us.” That’s: come back to the next conference and the next webinar, and show us what you took from us and made better, and help shape and mold this transformative industry that we know as knowledge and content. I’m excited, because I want to celebrate every single advance I see as we share, and I think it’s incumbent on all of us to share and be vocal. When I write my articles, they’re aimed not only at our own community but at the executives and technologists themselves, to educate them, because if we don’t do it, who will? That falls on all of us.

    SO: I think I’m going to leave it there, with a call for the executives to pay attention to what you’re saying, and to what many others in this community are saying. So, Michael, thank you very much for taking the time. I look forward to seeing you at the next conference and seeing what more you’ve come up with. We’ll see you soon.

    MI: Thank you very much.

    SO: Thank you.

    Conclusion with ambient background music

    CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links.

    Want more content ops insights? Download our book, Content Transformation.

