Scriptorium - The Content Strategy Experts
Content Operations
Available Episodes

5 of 190
  • Futureproof your content ops for the coming knowledge collapse
What happens when AI accelerates faster than your content can keep up? In this podcast, host Sarah O’Keefe and guest Michael Iantosca break down the current state of AI in content operations and what it means for documentation teams and executives. Together, they offer a forward-thinking look at how professionals can respond, adapt, and lead in a rapidly shifting landscape.

Sarah O’Keefe: How do you talk to executives about this? How do you find that balance between the promise of what these new tool sets can do for us, what automation looks like, and the risk that is introduced by the limitations of the technology? What’s the roadmap for somebody that’s trying to navigate this with people that are all-in on just getting the AI to do it?

Michael Iantosca: We need to remind them that the current state of AI still carries with it a probabilistic nature. And no matter what we do, unless we add more deterministic structural methods to guardrail it, things are going to be wrong even when all the input is right.

Related links:
Scriptorium: AI and content: Avoiding disaster
Scriptorium: The cost of knowledge graphs
Michael Iantosca: The coming collapse of corporate knowledge: How AI is eating its own brain
Michael Iantosca: The Wild West of AI Content Management and Metadata
MIT report: 95% of generative AI pilots at companies are failing
LinkedIn: Michael Iantosca
LinkedIn: Sarah O’Keefe

Transcript:

Introduction with ambient background music
Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.
Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.
SO: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.
Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.
End of introduction

Sarah O’Keefe: Hey everyone, I’m Sarah O’Keefe. In this episode, I’m delighted to welcome Michael Iantosca to the show. Michael is the Senior Director of Content Platforms and Content Engineering at Avalara, and one of the leading voices both in content ops and in understanding the importance of AI in technical content. He’s had a longish career in this space. And so today we wanted to talk about AI and content. The context for this is that a few weeks ago, Michael published an article entitled The coming collapse of corporate knowledge: How AI is eating its own brain. So perhaps that gives us the theme for the show today. Michael, welcome.
Michael Iantosca: Thank you. I’m very honored to be here. Thank you for the opportunity.
SO: Well, I appreciate you being here. I would not describe you as anti-technology: you’ve built out a lot of complex systems, and you’re doing a lot of interesting stuff with AI components. But you have this article out here that’s basically kind of apocalyptic. So what are your concerns with AI? What’s keeping you up at night here?
MI: That’s a loaded question, but we’ll do the best we can to address it. I’m a consummate information developer, as we used to call ourselves. I just started my 45th year in the profession.
I’ve been fortunate that not only have I been mentored by some of the best people in the industry over the decades, but I was very fortunate to begin with AI in the early 90s, when it was called expert systems. And then through the evolution of Watson and when generative AI really hit the mainstream, for those of us that had been involved for a long time there was no surprise; we were already pretty well-versed. What we didn’t expect was the acceleration of it at this speed. So what I like to say sometimes is that the thing that is changing fastest is the rate at which the rate of change is changing. And that couldn’t be more true than today.
But content and knowledge is not a snapshot in time. It is a living, moving organism, ever evolving. And if you think about it, the large language model companies spent a fortune on chips and systems to train the big large language models on everything that they could possibly get their hands and fingers into. And they did that originally several years ago. And the assumption, especially for critical knowledge, is that that knowledge is static. Now they do rescan the sources on the web, but that’s no guarantee that those sources have been updated. Or, you know, the new content conflicts with or confuses the old content. How do they tell the difference between the 13 different versions of IBM Db2, and how you do different tasks across those 13 versions? And can you imagine, especially when it comes to software, where a lot of us work, the thousands and thousands of changes that are made to those programs, in the user interfaces and in the functionality?
MI: And unless that content is kept up to date, and not only the large language models reconsume it, but also the local vector databases on which a lot of chatbots and agentic workflows are being based, you’re basically dealing with out-of-date and incorrect content. In many doc shops, the resources are just not there to keep up with that volume and frequency of change. So we have a pending crisis, in my opinion. And the last thing we need to do is reduce the people, the knowledge workers, who not only create new content but update it and deal with the technical debt. If we do, I think this house of cards collapses.
SO: Yeah, it’s interesting. And as you’re saying that, I’m thinking we’ve talked a lot about content debt and issues of automation. But for the first time, it occurs to me to think about this more in terms of pollution. It’s an ongoing battle to scrub the air, to take out all the gunk that is being introduced that has to, on an ongoing basis, be taken out. Plus, you have this issue that information decays, right? In the sense that when I published it a month ago, it was up to date. And then a year later, it’s wrong. It evolved, entropy happened, the product changed. And now there’s this delta or this gap between the way it was documented versus the way it is. And it seems like that’s what you’re talking about: that gap of not keeping up with the rate of change.
MI: Mm-hmm. Yeah. I think it’s even more immediate than that. I think you’re right, but we need to remember that development cycles have greatly accelerated. When you bring AI for product development into the equation, we’re now looking at 30- and 60-day product cycles. When I started, a product cycle was five years. Now it’s a month or two.
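The gap Michael describes between release cadence and documentation review can be made visible with a simple staleness check: flag any topic whose last review predates a product release. A minimal sketch in Python, assuming each topic records a last-reviewed date; the topic IDs, dates, and structures are illustrative, not from any real system.

```python
from datetime import date

# Illustrative topic records: when each was last reviewed.
topics = [
    {"id": "db2-backup-task", "product": "db2", "last_reviewed": date(2023, 11, 2)},
    {"id": "db2-restore-task", "product": "db2", "last_reviewed": date(2025, 6, 20)},
]

# Illustrative release history; a 30-to-60-day cycle means several
# releases can land between documentation reviews.
releases = {"db2": [date(2024, 3, 1), date(2025, 1, 15), date(2025, 7, 1)]}

def stale_topics(topics, releases):
    """Flag topics whose last review predates one or more product releases."""
    flagged = []
    for topic in topics:
        missed = [r for r in releases[topic["product"]] if r > topic["last_reviewed"]]
        if missed:
            flagged.append((topic["id"], len(missed)))
    return flagged

for topic_id, missed_count in stale_topics(topics, releases):
    print(f"{topic_id}: {missed_count} release(s) since last review")
```

A report like this doesn’t fix the debt, but it makes the backlog countable, which is the first step toward the automated re-review notifications discussed later in the episode.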
MI: And if we start using AI to draft new content, for example, just brand new content, forget about the old content or updating the old content, and we’re using AI to do that in the prototyping phase, we’re moving that further left, upfront. We know that between then and code freeze there are going to be numerous changes to the product, to the function, to the code, to the UI. It’s always been difficult to keep up with it in the first place, but now we’re compressed even more. So we now need to start looking at how AI helps us even do that piece of it, let alone deal with a corpus that is years and years old and has never had enough technical writers to keep up with all the changes. So now we have a dual problem, including new content with this compressed development cycle.
SO: So the AI hype says we essentially don’t need people anymore, and the AI will do everything from coding the thing to documenting the thing to, I guess, buying the thing via some sort of an agentic workflow. But you’re deeper into this than nearly anybody else. What is the promise of the AI hype, and what’s the reality of what it can actually do?
MI: That’s just the question of the day. Because some of us are working in shops that have engineering resources; I have direct engineers that work for me and an extended engineering team. So does the likes of Amazon, and other sizable shops with resources. But we have a lot of shops that are smaller. They don’t have access to either their own dedicated content systems engineers or even their IT team to help them. First, I want to recognize that we’ve got a continuum out there, and the commercial providers are not providing anything to help us at this point. So it’s either you build it yourself today, and that’s happening. People are developing individual tools using AI, while the more advanced shops are looking at developing entire agentic workflows.
And what we’re doing is looking at ways to accelerate that compressed timeframe for the content creators. And I want to use “content creators” a little more loosely, because as we move the process left, we involve our engineers, our programmers, earlier in the phase, like they used to be, by the way. They used to write big specifications in my day. Boy, I want to go into a Gregorian chant, “Oh, in my day!” you know, but they don’t do that anymore. And basically, the role of the content professional today is that of an investigative journalist. And you know what we do, right? We scrape and we claw. We test, we use, we interview. We use all of the capabilities of learning, of association, assimilation, synthesis, and of course, communication. And it turns out that writing is only roughly 15% of what the typical writer does in an information developer or technical documentation professional role. Which is why we have a lot of different roles, by the way, and if we’re going to replace or accelerate people with AI, it has to handle all the capabilities of all those roles. So where we are today is that some of the more leading-edge shops are going ahead, and we’re looking at ways to ingest new knowledge and use that new knowledge with AI to draft new or updated content. But there are limitations to that. So I want to be very clear: I am super bullish on AI. I use it every single day. I’m using it to help me write my novel. I’m using it to learn about astrophotography. I use it for so much.
But when the tasks are critical, when they’re regulatory, when they’re legal-related, when there’s liability involved, that’s the kind of content that we cannot afford to be wrong. We have to be right, 100% in many cases. Whereas with other kinds of applications, we can very well be wrong. I always say AI and large language models are great on general knowledge that’s been around for years and evolves very slowly. But some things move and change very quickly. In my business, it’s tax rates. There are thousands and thousands of jurisdictions. Every tax rate is different, and they change them. So you have to be 100% accurate, or you’re going to pay a heck of a penalty financially if you’re wrong.
So we are moving left. We are pulling knowledge from updated sources, things like videos that we can record, extract, and capture, Figma designs, even code, to the limited degree that there are assets in there that can be caught, and other collateral, and we’re able to build out initial drafts. It’s pretty simple. Several companies are doing this right now, including my own team. And then the question comes: how good can it be initially? What can we do to improve it, to make it as good as it can be? And then what is the downstream process for ensuring validity and quality of that content? What are the rubrics that we’re going to use to govern that? And therein is where most of the leading edge, or bleeding edge, or even hemorrhaging edge is right now.
SO: Yeah, and this is not really a new problem, and it’s not a problem specific to AI either. We’ve had numerous projects where there was a delta between, let’s say, the product design docs, the engineering content, and the code, the as-designed documentation, and the actual reality of the product walking out the door, the as-built product. There were all those resources, all that source material that you’re talking about, right, that we claw and scrape at. And I would like to also give a shout-out to the role of the anonymous source for the investigative journalists, because I feel like there’s some important stuff in there. But you go in there and you get all this as-designed stuff, right? Here’s the spec, here’s the code, here are the code comments, whatever. Or here’s the CAD for this hardware piece that we’re walking out the door. But the thing that actually comes down the factory assembly line or through the software compiler is different from what was documented in the designs, because reality sets in and changes get made. And in many, many, many cases, the role of the technical writer was to ensure that the content that they were producing represented reality and not the artifacts that they started from. So there’s a gap, and their job was to close that gap so that the document goes out and it’s accurate, right? And when we talk about these AI or automated workflows, any sort of automation, any automation that does not take into account the gap between design and reality is going to run into problems. The level of problem depends on the accuracy of your source materials. Now, I wrote an article the other day and referred to the 100% accurate product specifications. I don’t know about you, I have seen one of those never in my life.
MI: Hahaha, that’s absolutely true. That’s really true.
SO: The promise we have here is, AI is going to speed things up and it’s going to automate things and it’s going to make us more productive. And I think you and I both believe that that is true at a certain level.
How do you talk to executives about this? How do you find that balance between the promise of what these new tool sets can do for us and what automation looks like, and the risk that is introduced by the limitations of the technology itself? What does that conversation look like? What are the points that you try to make? What’s the roadmap for somebody that’s trying to, as you said, maybe in a smaller organization, navigate this with people that are all-in on “just get the AI to do it”?
MI: That’s a great question too, because we need to remind them that the current state of AI still carries with it a probabilistic nature. And no matter what we do, unless we add more deterministic structural methods to guardrail it, things are going to be wrong even when all the input is right. AI can still take a collection of collateral and get the order of the steps wrong. It can still include things or do too much. As professional writers, we’ve been trained to write minimalistically, and we can control some of that through prompting. Some of it can be done with guardrails. But when you think about writing tech docs, some people might think we’re just documenting APIs or documenting tasks, and we’ve always been heavily task-oriented. You can extract all the correct steps, and all the correct steps in the right order, but what doesn’t come along with it, all too frequently and almost universally, is the context behind it, the why part of it. I always say we can extract great things from code for APIs, like endpoints, gets and puts and things like that. That’s great for creating reference documentation for programmers.
But it doesn’t tell you the why, and it doesn’t tell you the exact steps; the code doesn’t tell you that. Now maybe your Figma does. If your Figma and your design docs have been done really well and comprehensively, that can mitigate it tremendously. But what have we done in this business? We’ve actually let go more UX people than probably even writers, which is counterproductive. And then you’ve got things like the happy path and the alternate paths that could exist through the use of a product, and the edge cases, right? The what-ifs that occur. We are able to do better, and we should, with the happy path, but the happy path is not the only path. These are multifunction beasts that we’ve built. When we built iPhone apps, we often didn’t need documentation, because they did one thing and they did that one thing really well. You take a piece of middleware, and it can be implemented a thousand different ways. You’re going to document it by example and maybe give some variants; you’re not going to pull that from a Figma design. You’re not going to pull that from code. There’s too much in there, and it takes a human to look at it and say: this is important, this is less important, this is essential, this is non-essential, in order to actually deliver useful information to the end user. And we need to be able to show what we can produce, and continue to iterate and try to make it better and better, because someday we may actually get pretty darn close. With support articles and completed support case payloads, we were able to develop an AI workflow that very often was 70% to 100% accurate and ready to publish.
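Michael’s point about deterministic guardrails suggests one practical pattern: even though the drafting model is probabilistic, the structure of its output can be checked deterministically before anything is published. A minimal sketch, with an invented topic shape; a real shop would validate against its own content model.

```python
def validate_task_topic(topic: dict) -> list[str]:
    """Deterministic structural checks on an AI-drafted task topic.

    The model may still get facts wrong, but missing sections,
    blank steps, and an absent "why" are catchable without
    another model in the loop.
    """
    problems = []
    for section in ("title", "context", "steps"):
        if not topic.get(section):
            problems.append(f"missing or empty section: {section}")
    for i, step in enumerate(topic.get("steps") or [], start=1):
        if not str(step).strip():
            problems.append(f"step {i} is blank")
    return problems

# An AI draft with plausible steps but the "why" missing entirely.
draft = {
    "title": "Configure the API gateway",
    "context": "",
    "steps": ["Open the admin console.", "", "Save your changes."],
}
for problem in validate_task_topic(draft):
    print("blocked:", problem)
```

Gating publication on an empty problem list is one way to surface, rather than silently ship, the structural mistakes a probabilistic drafting step can introduce.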
MI: But when you talk about user guides and complex applications, it’s another story, because somebody builds a feature for a product, and that feature boils down not into a single article but into an entire collection of articles that are typed into the kind of breakdown that we do for disclosure, such as concepts, tasks, references, Q&A. So AI has got to be able to do something much more complex, which is to look at content, classify it, and apply structure to separate those concerns. Because we know that when we deliver content in the electronic world, we’re no longer delivering PDF. Well, most of us are hopefully not delivering PDF books made up of long chapters that intersperse all of these different content types, because of the type of consumption, certainly not for AI and AI bots. So maybe the bottom line here is that we need to show what we can do. We need to show where the risks are. We need to document the risks, and then we need the owners, the business decision makers, to see those risks, understand those risks, and sign off on those risks. And if they sign off on the risks, then I, as a technology developer and an information developer, can sleep at night, because I was clear on what it can do today. And that is not a statement that says it’s not going to be able to do that tomorrow. It’s only a today statement, so that we can set expectations. And that’s the bottom line. How do we set expectations when there’s an easy button that Staples put in our face, and that’s the mentality of what AI is? It’s press a button and it’s automatic.
SO: Yeah, and I did want to briefly touch on knowledge base articles, which are a really, really interesting problem, because in many cases you have knowledge base articles that are essentially bug fixes or edge cases. When I, you know, hold my finger just so and push the button over here, it blue screens.
MI: Mm-hmm.
SO: And that article can be very context-specific, in the sense that you’re only going to see it if you have these five things installed on your system. And/or it can be temporal or time-limited, in the sense that once we fixed the bug, it’s no longer an issue. Okay. Well, so you have this knowledge base article and you feed it into your LLM as an information source going forward, but we fixed the bug. So how do we pull it back out again?
MI: I love that question.
SO: I don’t!
MI: I love it. No, I’ve actually been working for a couple of years on this very particular problem. The first problem we have, Sarah, is that we’ve been so resource constrained that when doc shops built an operations model, the last thing they invested in is the operations and the operations automation. So when I’m at a conference and I have a big room of 300 professional technical doc folks, I love asking the simple question: how do you track your content? And inevitably, I get, yeah, well, we do it on Excel spreadsheets. When I ask who actually has a digital system of record, I get a few hands. And then I ask, well, does that digital system of record, for every piece of documentation you’ve ever published, span just the product doc, or does it actually span more than product doc, like your developer, your partner, your learning, your support, all these different things? Because the customer doesn’t look at us as those different functions. They look at us as one company, one product. And inevitably, I’m lucky if I get one hand in the audience that says, yeah, we actually are doing that.
So the first thing they don’t have is a contemporary, digital system of record from which we can know, and automate notifications about, when a piece of documentation should either be re-reviewed and revalidated, or retired and taken out. The other problem we have is that almost all of these AI implementations at companies, not all, but most of them, were based on building vector databases. And what they did, often completely ignoring the doc team, was just go out to the different sources they had available: Confluence, SharePoint. If you had a CCMS, they’d ask you for access to your CCMS or your content delivery platform, and they’d suck it in. They may date-stamp it, which is okay but pretty rudimentary. And they may even have methods for rereading those sources every once in a while, but unless they’re rebuilding the entire vector database, they’re not keeping up. And what did they do when they ingested the content? They shredded it up into a million different pieces, right? Because the context windows for large language models have limitations on token counts and things like that. Maybe they’re bigger today, but they’re still limited. So how would they even replace a fragment of what used to be whole topics and whole collections of topics? And this is why we wrote the paper, did the implementation, and shared with the world what we call the document object model knowledge graph: because we needed a way, outside of the vector database, to say go look over here, and you can retrieve the original entire topic, or a collection of topics, or related topics in their entirety, to deliver to the user. And again, unless we update that content and stop treating it like a frozen snapshot in time, we’ll still have those content debt problems. But it’s becoming a much bigger problem now. It wasn’t as big a problem when we put out chatbots. And the chatbots we’ve been building, what, for two, three, four years now? Everybody celebrated, they popped the corks: we can deflect some percentage of support cases, customers can self-service. And I always talk about the precision paradox: once you reach a certain ceiling, it gets really hard to increment and get above that 70%, 80%, 85%, 90% window. And as you get closer and better, the tolerance for being wrong goes down like a rock. And you now have a real big problem. So how do we build these guardrails to be more deterministic, to mitigate the probabilistic risk and the reality that we have? The problem is that people are still looking for fast and quick, not right. When I say right, I mean building out things like ontologies and leveraging the taxonomies that we labored over, with all of that metadata that never even gets into the vector database, because they strip it all away in addition to shredding everything up. So if we don’t start building things like knowledge graphs and retaining all of that knowledge, we’re compounding the problem. Now we have debt, and we have no way to fix the debt.
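A minimal sketch of the retrieval pattern Michael describes: chunks in the vector database keep a pointer back to the whole topic they were shredded from, while a separate graph retains the topics, their metadata, and their relationships. The structures and names here are illustrative stand-ins, not the actual document object model knowledge graph from his paper.

```python
# Whole topics, with the metadata and relationships that vector
# ingestion typically strips away.
topic_graph = {
    "install-overview": {
        "body": "<full original topic text>",
        "metadata": {"product": "example", "version": "2025.1", "type": "concept"},
        "related": ["install-steps"],
    },
    "install-steps": {
        "body": "<full original topic text>",
        "metadata": {"product": "example", "version": "2025.1", "type": "task"},
        "related": ["install-overview"],
    },
}

# Each shredded chunk keeps a pointer back to the topic it came from.
chunk_index = {"chunk-0417": "install-steps"}

def expand_hit(chunk_id: str) -> dict:
    """Map a vector-search hit back to its whole topic plus related topics."""
    topic_id = chunk_index[chunk_id]
    topic = topic_graph[topic_id]
    return {
        "topic": topic["body"],
        "metadata": topic["metadata"],
        "related": [topic_graph[t]["body"] for t in topic["related"]],
    }

result = expand_hit("chunk-0417")
print(result["metadata"])  # version and type survive instead of being stripped
```

Because the graph, not the shredded chunks, remains the system of record, a retired or revised topic can be updated or removed in one place rather than hunted down fragment by fragment.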
MI: And now we get into the new world of agentic workflows, which is the true bleeding edge right now, where you have sequences of both agentic and agentive processes. The difference between those two, by the way: agentic is autonomous, there’s no human doing that task, it’s just doing it; agentive has a human in the loop who’s helping. When you’ve got a mix of agentive and agentic processes in a business workflow, now you’ve got to worry about what happens if something goes wrong early in the chain of sequence in that workflow. And this doesn’t apply just to documentation, by the way. We’ll be seeing companies taking very complex workflows in finance and in marketing and in business planning and reporting, and mapping out: this is the workflow our humans do. And there are hundreds, if not more, steps and many roles involved in those workflows. As we map those out and ask where we can inject AI, not just as individual tools, like separately using a large language model or a single agent, but stringing them together to automate a complex business workflow with dependencies upstream and downstream, how are we going to survive and make this work? And I think that’s why you saw the MIT study come out saying that roughly only 5% or so of AI projects are succeeding. And I think that’s because we did the easy stuff first. We did the chatbots, and they could be lossy in terms of accuracy. But when you get to these agentic workflows that we’re building, literally coding as we speak, now you’re facing a whole different ballgame, where precision and currency really matter.
SO: Yeah, and I think we’ve really only scratched the surface of this. Both of the articles that you’ve mentioned, the one that I started with and the one that you mentioned in this context, we’ll make sure we get those into the show notes. I believe they are on your, is it Medium? On your website. So we’ll get those links in there. Any final parting words in the last, I don’t know, fifteen seconds or so?
MI: I want to tell you the good news and the bad news for tech doc professionals. What I’m seeing in the industry hurts me. I think there’s a lot of excuse-making right now, not just in the tech doc space but in all jobs, where we’re seeing AI being used as an excuse to make business decisions, to scale back. It may take some time until the impact of some poor business decisions that are being made reflects itself, but reality is going to hit. And the question is, how do we navigate the interim? I’m confident that we will. Those of us that are building the AI, I feel like I’m evil and a savior at the same time. I’m evil because I’m building automation that can speed things up and make people much more productive, meaning you potentially need fewer people. At the same time, I feel like when we do it, rather than an engineer that doesn’t even know the documentation space, we’re getting to redefine our space ourselves and not leave it to the whims of people that don’t understand the incredible intricacy and dependencies of creating what we know as high-quality content. So we’re in this tumult right now. I think we’re going to come out of it. I can’t tell you what that window looks like, and there will be challenges getting through it, but I would rather see this community redefine its own future in this transformation that is unavoidable. It’s not going away. It’s going to accelerate and get more serious. But if we don’t define ourselves, others will. And I think that’s the message I want our community to take away. So when we go to conferences and we show what we’re doing, and we’re open and we’re sharing all the stuff that we’re doing, that’s not “hi, look at us.”
It’s: you come back to the next conference and the next webinar and show us what you took from us and made better, and help shape and mold this transformative industry that we know as knowledge and content. And I’m excited, because I want to celebrate every single advance that I see as we share. I think it’s incumbent upon us to share and be vocal. And when I write my articles, they’re aimed not only at our own community; they’re aimed at the executives and technologists themselves, to educate them. Because if we don’t do it, who will? It falls on all of us to do that.
SO: I think I’m going to leave it there, with a call for the executives to pay attention to what you, and many of the rest of this community, are saying. So, Michael, thank you very much for taking the time. I look forward to seeing you at the next conference and seeing what more you’ve come up with. And we will see you soon.
MI: Thank you very much.
SO: Thank you.
Conclusion with ambient background music
CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links. Want more content ops insights? Download our book, Content Transformation.
The post Futureproof your content ops for the coming knowledge collapse appeared first on Scriptorium.
    --------  
    32:49
  • The five stages of content debt
Your organization’s content debt costs more than you think. In this podcast, host Sarah O’Keefe and guest Dipo Ajose-Coker unpack the five stages of content debt, from denial to action. Sarah and Dipo share how to navigate each stage to position your content, and your AI, for accuracy, scalability, and global growth.

The blame stage: “It’s the tools. It’s the process. It’s the people.” Technical writers hear, “We’re going to put you into this department, and we’ll get this person to manage you with this new agile process,” or, “We’ll make you do things this way.” The finger-pointing begins. Tech teams blame the authors. Authors blame the CMS. Leadership questions the ROI of the entire content operations team. This is often where organizations say, “We’ve got to start making a change.” They’re either going to double down and continue building content debt, or they start looking for a scalable solution. — Dipo Ajose-Coker

Related links:
Scriptorium: Technical debt in content operations
Scriptorium: AI and content: Avoiding disaster
RWS: Secrets of Successful Enterprise AI Projects: What Market Leaders Know About Structured Content
RWS: Maximizing Your CCMS ROI: Why Data Beats Opinion
RWS: Accelerating Speed to Market: How Structured Content Drives Competitive Advantage (Medical Devices)
RWS: The all-in-one guide to structured content: benefits, technology, and AI readiness
LinkedIn: Dipo Ajose-Coker
LinkedIn: Sarah O’Keefe

Transcript:

Introduction with ambient background music
Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.
Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.
SO: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.
Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.
End of introduction

Sarah O’Keefe: Hey, everyone. I’m Sarah O’Keefe, and I’m here today with Dipo Ajose-Coker. He is a Solutions Architect and Strategist at RWS, based in France. His strategy work is focused on content technology. Hey, Dipo.
Dipo Ajose-Coker: Hey there, Sarah. Thanks for having me on.
SO: Yeah, how are you doing?
DA-C: Hanging in there. It’s a sunny, cold day, but the wind’s blowing.
SO: So in this episode, we wanted to talk about moving forward with your content and how you can make improvements to it and address some of the gaps that you have in terms of development and delivery and all the rest of it. And Dipo’s come up with a way of looking at this, a framework that I think is actually extremely helpful. So Dipo, tell us about how you look at content debt.
DA-C: Okay, thanks. First of all, before I go into my little thing that I put up, what is content debt? I think it’d be great to talk about that. It’s kind of like technical debt. It refers to that future work that you keep storing up because you’ve been taking shortcuts to try and deliver on time. You’ve let quality slip. You’ve had consultants come in and out every three months, and they’ve just been putting… I mean writing consultants.
SO: These consultants.
DA-C: And they’ve been basically doing stuff in a rush to try and get your product out on time.
And over time, those little errors, those shortcuts, build up, and you end up with missing metadata or inconsistent styles. The content is okay for now, but as you go forward, you find you’re building up a big debt of all these little fixes. And these little fixes will eventually add up and end up as a big debt to pay.
SO: And I saw an interesting post just a couple of days ago where somebody said that with tech debt or content debt, you could think of it as having principal and interest, and the interest accumulates over time. So the less work you do to pay down your content debt, the bigger and bigger it gets, right? It just keeps snowballing, and eventually you find yourself with an enormous problem. So as you were looking at this idea of content debt, you came up with a framework for looking at this that is at once shiny and new and also very familiar. So what was it?
DA-C: Yeah, really familiar. I think everyone’s heard of the five stages of grief, and I thought, “Well, how about applying that to content debt?” And so I came up with the five stages of content debt. So let’s go into it. I’m not going to keep referring to the grief part of it, you can all look it up, but the first stage is denial. “Our content is fine. We just need a better search engine. We can actually put it into this shiny new content delivery platform, and it’s got this type of search,” and so on and so forth. Basically what you’re doing is ignoring the growing mess. You’re duplicating content. You’ve got outdated docs. You’re building silos, and then you’re ignoring that these silos are actually drifting further and further apart. No one wants to admit that the CMS, or whatever bespoke system you’ve put into place, is just a patchwork of workarounds. This quietly builds your content debt, and the longer denial lasts, the more expensive the cleanup is. As we said in that first bit, you want to pay off the capital of your debt as quickly as possible. Anyone with a mortgage knows that. You come into a little bit of money, pay off as much capital as you can so that you stop accruing the interest on the debt.
SO: And that is where, when we talk about AI-based workflows, I feel like that is firmly situated in denial. Basically, “Yeah, we’ve got some issues, but the AI will fix it. The AI will make it all better.” Now, we painfully know that that’s probably not true, so we move ourselves out of denial. And then what?
DA-C: Then we go into anger.
SO: Of course.
DA-C: “Why can’t we find anything? Why does every update take two weeks?” And that was a question we used to get regularly where I used to work, at a global medical device manufacturer. We had to change one short sentence because of a spec change, and it took weeks to do that. Authors are wasting time looking for reusable content if they don’t have an efficient CCMS. Your review cycles drag because all you’re doing is giving the entire 600-page PDF to the reviewer without highlighting what’s changed. Your translation costs balloon, and your project managers or leadership get angry because, “Well, we only changed one word. Can’t you just use Google Translate? It should only cost like five cents.” Compliance teams then start raising flags. And if you’re in a regulated industry, you don’t want the compliance teams on your back, and you especially don’t want defects out in the field. So eventually, productivity drops, and your teams feel like they’re stuck.
And the cracks are now starting to show across other departments, and it puts a bad name on your doc team.
SO: Yeah. And a lot of what you’ve got here is anger that’s focused inward, to a certain extent. It’s the authors that are angry at everybody. I’ve also seen this play out as management saying, “Where are our docs? We have this team, we’re spending all this money, and updates take six months.” Or people submit update requests, tickets, something, the content doesn’t get into the docs, the docs don’t get updated. There’s a six-month lag. Now the SOP, the standard operating procedure, is out of sync with what people are actually doing on the factory floor, which it turns out, again, if you’re in medical devices, is extremely bad and will lead to your factory getting shut down, which is not what you want, generally.
DA-C: Yeah, it’s not a good position to be in.
SO: And then there’s anger.
DA-C: Yeah.
SO: “Why aren’t they doing their job?” And yet you’ve got this group that’s doing the best that they can within their constraints, which are, in a lot of cases, very inefficient workflows, the wrong tool sets, not a lot of support, etc. Okay, so everybody’s mad. And then what?
DA-C: Everyone’s mad, and eventually, actually, this becomes a closed little loop, because all you then do is say, “Okay, well, we’re going to take a shortcut,” and you’ve just added to your content debt. This stage is actually one of the most dangerous, because all you end up doing, without actually solving the problem, is adding to the debt. “Let’s take a shortcut here, let’s do this.” The next stage is the blame stage. “It’s the tools. It’s the process. It’s the people.” And then technical writers get calls of, “Well, we’re going to put you into this department, and we’ll get this person to manage you with this new agile process,” or, “We’ll get you to do it this way.” The finger-pointing begins. Tech teams will blame the authors. Authors will blame the CMS. Leadership questions the ROI of the entire content operations team. This is often where organizations see that they’ve got to start making a change. They’re either going to double down and continue building that content debt, or they start looking for a scalable solution.
SO: Right. And this is the point at which people look at it and say, “Why can’t we just use AI to fix all of this?”
DA-C: Yep, and we all know what happens when you point AI at garbage. We’ve got the saying, and this saying has been true from the beginning of computing, garbage in, garbage out, GIGO.
SO: Time.
DA-C: Yeah. I changed that to computing.
SO: Yeah. It’s really interesting, though, because of the blame that goes around. I’ve talked to a lot of executives who, and we’re right back to anger too, it is sort of like, “We’ve never had to invest in this before. Why are you telling us that this organization, this group, these tech writers, content ops,” whatever you want to call it, “that they are going to need enterprise tools just like everybody else?” And they are halfway astounded and halfway offended that these worker bees that were running around doing their thing…
DA-C: Glorified secretaries.
SO: Yeah, that whole thing, like, “How dare they?” And it can be helpful, sometimes it is and sometimes it isn’t, to say, “Well, you’ve invested in tools for your developers. You wouldn’t dream of writing software without source control, I assume,” although let’s not go down the rabbit hole of vibe coding.
DA-C: Let’s not go down that one.
SO: And the fact that there are already people with the job title of vibe coding remediation specialist.
DA-C: Nice.
SO: Yeah. So that’s going to be a growth industry.
DA-C: Nice work, if you can get it.
SO: But with this blame thing, we are saying, “This is an asset. You need to invest in it. You need to manage it. You need to depreciate it just like anything else. And if you don’t invest properly, you’re going to have some big problems.” And to your-
DA-C: A lot of that-
SO: Yeah, they don’t want to do it. They’re horrified.
DA-C: Yeah. A lot of that comes down to looking at docs departments as cost centers. They’re costing us money. We’re paying all these people to produce this stuff that people don’t read, that the users don’t want. But if you look at it properly, deeply, the documentation department can be classed as a revenue generator. What are your sales teams pointing prospects at? They’re pointing at docs. Where are prospects getting the information about how things work? The docs. What are you using when you’re trying to find a solution? I know I do this. I go and look at the user manuals, and the first thing I want to see is that they’re properly written. If I see something that does not properly describe the gadget or whatever I’m trying to buy, then I’m like, “Well, if you’ve taken shortcuts there, you’ve probably done the same with the actual thing that I’m going to buy.” So I’m going to walk away. Then there’s reducing costs for your support centers. If your customers can very quickly find the information that describes the exact problem they’re trying to solve, then you’ve got fewer calls to your help center. And then there’s escalation: the levels, I don’t know how this goes, level three, two, one, but let’s say level three is the lowest level. If that person cannot find information that is true, clear, one source of truth, then it escalates to the next person, who you’re paying a lot more. If that person can’t find it, it moves on again. So it’s basically costing you a lot of money not to have good documentation. It’s a revenue generator.
SO: So my experience has been that the blame phase is perhaps the longest of all the phases.
DA-C: Yeah.
SO: And some organizations just get stuck there forever, and they blame different people every year. I’ve also, and I’m sure you’ve seen this as well, we were talking about reorganizing. “Well, okay, the tech writers are all in one group. Let’s burst them out and put them all on the product teams.”
DA-C: Yes.
SO: “So you go on product team A and you go on product team B and you go on product team C.” And I talk to people about this and they say, “This is terrible and I don’t want to do it.” I’m like, “It’s fine, just wait two years.”
DA-C: Yeah.
SO: Because it won’t work, and then they’ll put them all back together. Ultimately, I’m not sure it matters whether they’re together or apart, because we fall into this sort of weird intermediate thing. What matters is that somebody somewhere understands the value, to your point, and is making the investment. I don’t care if you do that in a single group or in a cross-functional matrix, blah, blah, but here we are. All right. So eventually, hopefully, we exit blame.
DA-C: And then we move into acceptance.
SO: Do we?
DA-C: “Okay, we need a better way to manage this.” And this is when people start contacting you, Sarah. It’s like, “I’ve heard there’s a better way to manage this. Somebody’s talked to me about something called a component content management system, or structured content,” and all of this. So teams start to acknowledge, one, that they’ve got debt and that the debt is growing. Then they start auditing that content and really seeing that, “Oh, well, yes, things are really going bad. We’ve got 15 versions of this same document living in different spaces in different countries. The translations always cost us a bomb.” So leadership then starts budgeting for a transformation. This is where they start doing their research, and structured content and component reuse enter the conversation. If they look at their software departments, software departments reuse stuff. You’ve got libraries of objects. Variables are the simplest form of that reuse. And they’ve been using this for years. And so, “Well, why aren’t we doing this? Oh, there’s DITA, there’s metadata. We can govern our content better. We can collaborate using this tool.” So there is a better way to do this, and then we know what to do.
SO: I feel like a lot of times the people that reach out to us are in stage four, they’ve reached acceptance, but their management is still back in anger and bargaining and denial and all the rest of that.
DA-C: They’re still blaming and trying to find a reason.
SO: Yeah, blaming and all of it, just, “How dare you?” All right, so we acknowledge that we have a problem, which I think is actually the first step in a different step process, but okay.
DA-C: Yeah.
SO: And then what?
DA-C: And then there’s action. Let’s start fixing this before it gets totally out of control, before it gets worse. They start investing in structured content authoring platforms like Tridion Docs. I work for RWS, I’ve got to mention it. They start speaking with experts, doing that research, listening to their documentation team leaders, speaking with content strategists to define, first of all, what the content model is, and then where they can optimize efficiency with a reuse strategy. Reuse without a strategy is just asking for trouble; you’re basically going to end up duplicating content. And then you’ve got to govern how that content is used. What rules have you got in place, and what ways have you got to implement those rules? The old job of having an editor used to work in the good old days, where you’d print something off and somebody would sign it off, and so on and so forth. Now we’re having to deliver content really quickly, and we’re using a lot of technology to do that. And so, well, you need to use technology to govern how that content is being created. Then your content becomes an asset. It’s no longer a liability. This is where that transformation happens, and then you start paying down your content debt. You’re able to scale the content that you’re creating a lot faster without raising the headcount, without having to hire more people. And if you then want to really expand, let’s say, because you’ve got this really great operation now and you’re able to create content in hours and not weeks, then you’re able to expand your market. You’re able to say, “Okay, well, now we’re going to tackle the Brazilian market. Now we can move into China, because they’ve got different regulations.” Again, I speak a lot on the regulatory side of things.
That’s where I spent most of my time as a technical writer. Having different content for different regulatory regimes and so on is just such a headache when you don’t have something that is helping you apply structure to that content, apply rules to that content, and make sure that your workflows are carried out in the way you set out six months ago, even as people change and start doing their own thing again. If your organization is stuck at stages one to three, as I just described them, it’s basically time to move.
SO: Yeah, I think it’s interesting thinking about this in the larger context of when we talk about writers, the act of writing, right?
DA-C: Yes.
SO: Culturally, that word or that process is really loaded with this idea of a single human in an attic somewhere writing the great American or French or British novel, writing a great piece of literature or creating a piece of art on their own, by themselves, in solitude. And of course, we know that technical writing-
DA-C: Starting at A and going all the way to Z.
SO: And we know that technical writing is not that at all, but it does really feel as though, when we describe what it means to be a writer or a content creator in a structured content environment, it is just the 180-degree opposite of what it means to be a writer. It’s not the same thing. You are a creator of these little components. They all get put together. We need consistent voice and tone. You have to subordinate your own voice and your own style to the corporate style and to the regulatory requirements and all the rest of it. So I think we maybe sometimes underestimate the level of cultural push and pull that there is between what it is to be a writer and what it is to be a technical writer.
DA-C: Yes.
SO: Or a technical communicator or content creator, whatever you want to call that role. Okay, so we’ve talked about a lot of this, and we’ve not talked a lot about AI. But a big chunk of this is that when you move into an environment where you are using AI for end users to access your content, so they go through a chatbot to get to the content, or they’re consulting ChatGPT or something like that and asking, “Tell me about X,” all of the things that you’re describing in terms of content debt play into the AI not performing, the content not getting in there, not being delivered. So what does it look like? What are some of the specifics of good source content, of paying down the debt and moving into this environment where the content is getting better? What does that mean? What do I actually have to do? We’ve talked about tools.
DA-C: Yeah. So first, you’ve got to understand how AI accesses content and how large language models get trained. AI interprets patterns as meaning. If your content deviates from pattern predictability, then you’re going to get what we call hallucinations. And so, asking ChatGPT without having it plugged in as an enterprise AI, where you’ve really trained it on your own content, you get all sorts of hallucinations. Basically, it’s taken two PDFs that have similar information but two different conclusions. And so you’re looking for the conclusion in document A, but ChatGPT has given you the one in B. And it’s mixed and matched those, because it does not know how one bit of information relates to the other. So good source content needs to be accurate. Your facts are correct. They reflect the current state of the product or subject.
It needs to be kept up to date. You need to have a single copy of it; that’s what we mean by a single source of truth. You cannot have two sources of truth. It’s either black or it’s white; there are no gray zones with AI, or it will hallucinate. You’ve got to have consistency in style and tone. How do you get that? Well, you’ve got the brand and the way we speak. In French, you would ask, “Do you vouvoie or do you tutoie?” Do you use the formal voice, the formal tone, or do you speak like you’re speaking with your friends? How do you enforce some of that? Well, you can use controlled terminology. These are special terms that you’ve defined, a special voice. But the gold part of it is having that structured formatting and presentation. There’s always a logical structure and sequence to the way that you present the information. Your headings, subheadings, steps, and lists are always displayed in the same way. You’ve defined an information architecture to provide that pattern. And the way AI then understands or creates relationships with those patterns is from the metadata that you’re adding onto it. And so good source content is accurate, up to date, consistent in style and tone, uses controlled terminology, and has structure in its formatting. Forget the presentation, because that you put on at the end of things, what it looks like, how pretty it is. But the structure in terms of: I always start with a short description, then I follow up with the required tools, and then I describe any prerequisites. That is the way every one of my writers contributes to this central repository of knowledge, this single repository of knowledge. And you can do that as well if you’ve got a great CCMS, by building templates into the CCMS so that it guides the author. The author no longer has to think about, “Oh, how is this going to look? Should I be coloring my tables green, red, blue? Should they be this wide?” They’re basically filling in a template form. And some of the standards that we’ve developed, like DITA, allow you to do this. They allow you to have a particular pattern for creating that information, and the ability to put it into a template which is managed by your CCMS.
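A minimal sketch of the governance Dipo describes, enforcing a template pattern and controlled terminology in code. In practice a CCMS and term base would do this; the section names and terms below are illustrative assumptions, not from any real content model.

```python
import re

# Controlled terminology, illustrative: forbidden variant -> approved term.
TERMINOLOGY = {
    r"\blog ?in\b": "sign in",
    r"\be-mail\b": "email",
}

# The template pattern every task topic follows, in order.
REQUIRED_SECTIONS = ["short_description", "required_tools", "prerequisites", "steps"]

def lint_topic(topic: dict) -> list[str]:
    """Flag template and terminology deviations before content is published."""
    findings = []
    present = [s for s in topic if s in REQUIRED_SECTIONS]
    if present != REQUIRED_SECTIONS:
        findings.append(f"template mismatch: expected {REQUIRED_SECTIONS}, found {present}")
    for section, text in topic.items():
        for pattern, approved in TERMINOLOGY.items():
            if re.search(pattern, text, flags=re.IGNORECASE):
                findings.append(f"{section}: use approved term '{approved}'")
    return findings

topic = {
    "short_description": "Log in to the admin console.",
    "steps": "Send an e-mail to the administrator.",
}
for finding in lint_topic(topic):
    print(finding)
```

Checks like these give the AI one predictable pattern to learn instead of many, which is exactly the pattern-predictability point Dipo makes next.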
SO: Yeah, and that’s the roadmap, right? We talk about how, as a human, if I’m looking at content and I notice that it’s formatted differently, like, “Oh, they bolded this word here but not there,” I start thinking, “Well, was that meaningful?”
DA-C: Yeah.
SO: And at some point, I decide, “No, it was just sloppy, and somebody screwed up and didn’t bold the thing.” But AI will infer meaning from pattern deviations.
DA-C: Yeah.
SO: And so the more consistent the information is at all the levels that you’ve described, the more likely it is that it will process it correctly and give you the right outcome. Okay, so that seems like maybe the place that we need to wrap this up and say: folks, you have content debt. Dipo is giving you a handy roadmap for how to understand your content debt, understand the process of coming to terms with your content debt, and then figure out how and where to move forward. So any closing thoughts on that before we say good luck to everybody?
DA-C: Most enterprises today have already jumped on the AI bandwagon. They’re already trying to put it in. But at the same time, start taking a look at your content to ensure that it is structured and has semantic meaning to it. Because the day that you start training your large language model on that content, if you’ve not built those relationships into it, it’s like teaching a kid bad habits. They’re going to just continue doing it. Basically: train your AI right the first time by having content that is structured and semantic, and you’ll find your AI outcomes are a lot more successful.
SO: So I’m hearing that AI is basically a toddler? Okay. Well, I think we’ll leave it there. Dipo, thanks, it’s great to see you as always.
DA-C: Thanks for having me.
SO: Everybody, thank you for joining us, and we’ll see you on the next one.
Conclusion with ambient background music
CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links. Want more content ops insights? Download our book, Content Transformation.
The post The five stages of content debt appeared first on Scriptorium.
    --------  
    27:00
  • Balancing automation, accuracy, and authenticity: AI in localization
How can global brands use AI in localization without losing accuracy, cultural nuance, and brand integrity? In this podcast, host Bill Swallow and guest Steve Maule explore the opportunities, risks, and evolving roles that AI brings to the localization process.

The most common workflow shift in translation is to start with AI output, then have a human being review some or all of that output. It’s rare that enterprise-level companies want a fully human translation. However, one of the concerns that a lot of enterprises have about using AI is security and confidentiality. We have some customers where it’s written in our contract that we must not use AI as part of the translation process. Now, that could be for specific content types only, but they don’t want to risk personal data being leaked. In general, though, the default service now for what I’d call regular, common translation is post-editing, or human review of AI content. The biggest change is that’s really become the norm. —Steve Maule, VP of Global Sales at Acclaro

Related links:
Scriptorium: AI in localization: What could possibly go wrong?
Scriptorium: Localization strategy: Your key to global markets
Acclaro: Checklist | Get Your Global Content Ready for Fast AI Scaling
Acclaro: How a modular approach to AI can help you scale faster and control localization costs
Acclaro: How, when, and why to use AI for global content
Acclaro: AI in localization for 2025
LinkedIn: Steve Maule
LinkedIn: Bill Swallow

Transcript:

Introduction with ambient background music
Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.
Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.
SO: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.
Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.
End of introduction

Bill Swallow: Hi, I’m Bill Swallow, and today I have with me Steve Maule from Acclaro. In this episode, we’ll talk about the benefits and pitfalls of AI in localization. Welcome, Steve.
Steve Maule: Thanks, Bill. Pleasure to be here. Thanks for inviting me.
BS: Absolutely. Can you tell us a little bit about yourself and your work with Acclaro?
SM: Yeah, sure. So I’m Steve Maule, currently the VP of Global Sales at Acclaro, and Acclaro is a fast-growing language services provider. I’m based in Manchester in the UK, in the northwest of England, and I’ve been in this industry, and I say this industry, the language industry, the localization industry, for about 16 years, always in various sales, business development, or leadership roles. So like I say, we’re a language services provider. And I suppose the way we talk about ourselves is that we try to be that trusted partner to some of the world’s biggest brands and the world’s fastest-growing global companies. And we see it, Bill, as our mission to harness that powerful combination of human expertise with cutting-edge technology, whether it be AI or other technology. And the mission is to put brands in the heads, hearts, and hands of people everywhere.
BS: Actually, that’s a good lead-in, because my first question to you is going to be where do you see AI in localization, especially with a focus on being kind of the trusted partner for human-to-human communication? SM: My first answer to that would be it’s no longer the future. AI is the now. And I think whatever role people play in our industry, whether you’re like Acclaro, a language services provider offering services to those global brands, whether you are a technology provider, whether you run localization, localized content in an enterprise, or even if you’re what I’d call an individual contributor, maybe a linguist or a language professional, I think AI has already changed what you do and how you go about your business. And I think that’s only going to continue and to develop. So I actually think we’re going to stop talking at some stage relatively soon about AI. It’s just going to be all-pervasive and all-invasive. BS: It’ll be the norm. Yeah. SM: Absolutely. We don’t talk anymore about the internet in many, many industries, and we won’t talk about AI. It’ll just become the norm. And localization, I don’t think, is unique in that respect. But I do think that if you think about the genesis of large language models and where they came from, localization is probably one of the primary and one of the first use cases for generative AI and for LLMs. BS: Right. The industry started out decades ago with machine translation, which was really born out of pattern matching, and it’s just grown over time. SM: Absolutely. And I remember when I joined the industry, what did I say? So 2009, it would’ve been, when I joined the industry. And I had friends asking me, what do you mean people pay you for translation and pay for language services? I’ve just got this new thing on my phone, it’s called Google Translate. Why are we paying any companies for translation? So you’re absolutely right, and I think obviously machine translation had been around for decades before I joined the industry. So yeah, I think that question has come into focus a lot more with every sort of, I was going to say, every year that passes, but quite honestly, it’s every three months. BS: If that. SM: Exactly, yeah. Why do companies like Acclaro still exist? And I think, if you consider the boom in GenAI over the last two, two and a half years, there are a lot of people in the industry who see it as a very real existential threat. But more and more, what I’m seeing amongst our client base and our competitors and other actors in the industry, the tech companies, is that there are a lot more people who are seeing it as an opportunity, actually, for the language industry and for the localization industry. BS: So about those opportunities, what are you seeing there? SM: I think one of the biggest things, and it doesn’t matter what role you play, whether you’re an individual linguist or whether you’re a company like ours, is that there’s a shift in roles. Most of what I dealt with 16 years ago was a human being doing translation, another human being doing some editing. There were obviously computers and tools involved, but it was a very human-led process. I think we’re seeing now a lot of those roles changing. Translators are becoming language strategists; they’re becoming quality guardians. Project managers are becoming almost like solutions architects or data owners. So I think that there’s a real change. 
And personally, and I guess this is what this podcast is all about, I don’t see those roles going away, but I do see them changing and developing. And in some cases, I think it’s going to be for the better. And because there’s all this kind of doubt and uncertainty and sort of threat, people are wanting to be shown the way, and people are wanting companies like ours and other companies like it to lead the way in terms of how people who manage localized content can implement AI. BS: Yeah. We’re seeing something similar in the content space as well. I know there was a big fear, certainly a couple of years ago, or even last year, that, oh, AI is going to take all the writing jobs, because everyone saw what ChatGPT could do until they really started peeling back the layers and going, well, this is great. It spit out a bunch of words, it sounds great, but it really doesn’t say anything. It just kind of glosses over a lot of information and presents you with a summary. But what we’re seeing now is that a lot of people, at least on the writing side, are using AI as a tool to automate away a lot of the mechanical bits of the work so that the writers can focus on quality. SM: We’re seeing exactly the same thing. I had a customer say to me she wants AI to do the dishes while she concentrates on writing the poetry. So it is the mundane stuff, the stuff that has to be done, but it’s not that exciting. It’s mundane, it’s repetitive. Those have always been the tasks that have been first in line to be automated, first in line to be removed, first in line to be improved. And I think that’s what we’re seeing with AI. BS: So on the plus side, you have AI potentially doing the dishes for you while you’re writing poetry or learning to play the piano. What are some of the pitfalls that you’re seeing with regard to AI and translation? SM: I think there are a few, and I think it depends on whereabouts AI is used, Bill, in the workflow. The very act of translation itself is now a very, very common use of AI, but there are also what I’m going to call translation-adjacent tasks, like we’ve mentioned with the entire workflow. So the answer would depend on that. But I think one of the biggest pitfalls of AI, and it was the same again in 2009 when I joined the industry and friends of mine had this new thing in their pocket called Google Translate: it’s not always right. It’s not always accurate. And even though the technology has come on leaps and bounds since then, and you had neural MT before large language models, it still isn’t always accurate. And I think you mentioned it before, it almost always sounds smooth and fluid, very polished, and it sounds like it should be right. I’m in sales myself, so it could be a metaphor for a salesperson, couldn’t it? Not always right, but always sounds confident. But I think there’s a danger there. In some types of translation, accuracy doesn’t actually matter that much. I mean, if the type of content we’re talking about is, I don’t know, some frequently asked questions on how I can get my speaker to work as a customer, you’re going to be very patient if the language isn’t perfect, as long as it gets your speaker to work. You’re not really going to mind. 
But there’s other content where accuracy is absolutely crucial. In some industries it could even be life or death. I go back to my first year or two in the industry: we had a customer that made really good digital cameras, and they had a huge problem because their camera was water resistant, and one of their previous translators had translated it as waterproof. And of course, the customer takes it scuba diving or whatever they were doing with the digital camera, and the camera stops working because it wasn’t waterproof, it was just water resistant. So sometimes a seemingly innocuous choice of term, it wasn’t life or death, but obviously it was the difference between a thousand-dollar camera working or not. So I think accuracy is really critical. And even though it sounds confident, it’s not always accurate. And I think that’s one of the biggest pitfalls. Language is subjective, and some things are black and white, right or wrong, but other things are a lot more nuanced. And what we see, especially because a lot of the large language models are trained in English and with English data, is that they don’t necessarily always get the cultural or the linguistic nuances of different markets. We’ve seen some examples, it could be any market, but specifically Arabic requires careful handling because of the way certain language comes across. Japanese, with its levels of politeness, and, what do they say, 50 words for snow. Some things aren’t black or white in terms of whether they’re right or wrong. There are very, very gray areas in language. And again, however confident the output sounds, it’s not always culturally balanced or culturally sensitive. BS: You don’t want it to imply anything or have anyone take away the wrong message because it was unclear or whatnot. SM: Absolutely, absolutely. And especially when you’re thinking of branded content. I mean, some of the companies we work with, and some of the companies, I’m sure, that people listening to the podcast work for, they’d spend millions on building their brand, first of all, and also protecting it in different markets, and the wrong choice of language, the wrong translation, can put that at risk. BS: Yeah. With branding, I assume that there’s a tone shift that you need to watch for. There’s certainly what you can and can’t say in certain contexts regarding the brand. SM: Well, I think with AI, when you are using GenAI to translate, the other thing is, as you mentioned before, the technology is a pattern-based technology. The content can be quite inherently repetitive. And again, whilst it’ll be confident, whilst it’ll be polished, it doesn’t always take into account the creativity or the emotion. And it’s less and less now that we’re seeing AI properly trained on a specific brand’s content. The models are too big, really, to be trained just on a brand’s specific content. So sometimes the messaging can appear quite generic or not really in step with the identity that a brand wants to portray. I think most of our clients would be in agreement: when it comes to brand, it can’t be left to the machines alone. BS: And I would think that any use of AI or even machine translation in something with regard to branding, where you want to own that messaging and really tailor that messaging, you really don’t want to have other influences coming in from the wild. 
So I would imagine that with an AI model that’s trained to work in that environment, you really don’t want it to know that there’s an outside internet, an outside world that it can harvest information from, because you might be getting language from your competitors or what have you. SM: Yeah, absolutely. Absolutely. You’re sort of getting it from too many sources, where it kind of needs to be on brand, really. I think there are other things as well that we see. I mean, there are still quite common cases of bias and stereotyping, because, like you say, it’s taking content, if you like, or data from all sorts of sources. And if there’s bias in there, there’s misgendered language, especially with some target languages. In English, it’s kind of fine, really, but in Spanish and French and German, you’ve got to choose a gender for every noun, every adjective, in order to be accurate. BS: Otherwise, it’s wrong. SM: Yeah, absolutely. Yeah, absolutely. And it compounds, because the models are built on such scale, it compounds over time. So again, without that active monitoring and without that human oversight, what might be a problem today will compound, and it’ll be even worse tomorrow and in the months ahead. BS: How about the way in which the translation process works? Have you seen AI really shifting a lot of those workflows? SM: So the short answer is yes. By far the most common workflow with our customers now, if you’re looking at translation, is to start with AI output and to have a human being review some or all of that output. It’s very, very rare now, when we are working with enterprise-level companies, that they’d want, well, actually I might hold that thought, but it’s very rare that, for most content, they would want a fully human translation. But one of the pitfalls that we have seen, or one of the concerns, if you like, that a lot of enterprises have about using AI is security and confidentiality. And in fact, we have some customers where it’s written in our contract that we must not use AI as part of the translation process. Now, that could be for some specific content types only, and a lot of the time it’s a factor of, if you like, the attitude to risk or the attitude to confidentiality that that particular customer might have. But a lot of people are still very, very paranoid about that. They don’t want to risk personal data being effectively leaked, or being used to train and being cross-pollinated, like your previous example. But in general, the default service now for what I’d call regular common translation is post-editing or human review of AI content. So that’s probably the biggest change: that’s now really become the norm. BS: Okay. We talked a lot about the pitfalls here, so let’s talk about some benefits that you get out of using AI in localization. SM: Well, I think the first thing is scale. I think it just allows you to do so much more, because it almost, well, it doesn’t remove, but it significantly reduces those budget and time constraints that the traditional translation process used to have. You can translate content really, really fast, very, very affordably, and in huge volumes that you just couldn’t consider if that technology wasn’t there. So you could argue you’ve always been able to do that since machine translation was available. But I think large language models do bring more fluency. 
They do bring more contextual understanding than those pattern-based machine translation models. Even though we’ve talked about some of the challenges around nuance and tone, they can improve style and tone. So we’ve seen a lot of benefits, and a really good opportunity, in pairing the two technologies, neural machine translation and large language models. And when they’re guided by human expertise, they can offer a really good balance of scale, but also quality, that you weren’t able to achieve before. And this is what I would say to people who are worried about the existential threat of, oh my gosh, I’m a translator, so AI is taking my job. Absolutely, it’s probably changing your job. But we see AI translation not replacing human translation, but replacing no translation. So that mountain of content, the majority of content, actually, that was never translated before because of time and budget constraints can now be translated to a certain level of quality. And so we see the overall volume of content localized exploding, and ideally a similar level of human involvement, or even more human involvement in some cases, than before, but as a proportion of the overall, it’s a lot less, if that makes sense. BS: Yeah. So what about multimedia? So audio and video, I know those have been traditionally a more difficult format to handle in localization, particularly when you may need to change the visuals along the way. SM: If you ask any project manager in our company, those were traditionally the most expensive, most time-consuming types of projects to deliver. And you’re absolutely right, you make a mistake with terminology and you’re doing a professional voiceover and the studio’s booked and the actor’s booked and you want to change three or four words or three or four terms. Okay, that’s fine. Rebook the studio, rebook the actor. I mean, it was traditionally, and I say traditionally, we’re talking only three or four years ago, one of the most expensive forms of content to translate. So I think what we see is that it’s been revolutionized by AI, video localization, audio localization, and this is a great example of where it’s replacing no translation. I mean, we had customers who would just say, we don’t want to dub that video, we don’t want to localize that audio, we just can’t afford it. We haven’t got the time. And now, with synthesized voice and synthesized video, the quality is very natural, very expressive, and you can produce training videos and product demos and all those kinds of marketing assets in various markets, assets that used to cost you lots and lots of money, at a tenth of the cost, and probably more than ten times the speed. BS: Nice. Yeah. I know that one of the things that we saw, particularly with using machine translation, is that there was a pretty good check for accuracy built into a lot of those systems, but they weren’t quite a hundred percent. How does AI compare with that, because it does understand language a bit more? So with regard to QA, how is that being leveraged? SM: Well, they can understand. It’s not just about accuracy and grammatical correctness and spelling errors; that sort of thing has always been around, like you say, with machine translation. But the LLMs now, they can evaluate fluency, terminology use, adherence to brand guidelines, style guidelines, and they can do that. 
So what we see is that whereas before LLMs came around, and you had neural machine translation, for pretty much all of the machine output, unless it was very low-value or less visible content, let’s say if it was something that the clients cared about, they would want a human review of every single segment, or every single sentence, effectively. Whereas now, LLMs can help you hone in and identify the percentage of the content that might need looking at by a human. And actually, I mean, there’s no real pattern, but if an LLM as a first pass can look at a large volume of content and say, actually, 70% of that is absolutely fine, it matches the instructions that we’ve given it, not only is it accurate, but it also adheres to fluency and terminology and so on, so why don’t you human beings focus on this 30%? I mean, that’s a huge benefit to a lot of companies. It saves a lot of time, saves a lot of cost, and again, allows them to localize a lot more of that content than they were ever able to do before. So it’s great as a first pass, an extra layer if you like, a technology-led layer before any human involvement, focusing the humans on the work that matters and the work that’s going to have the most impact. BS: Nice. So if someone is looking to adopt AI within their localization efforts, what are the first steps for building AI into a strategy that you would recommend? SM: Just call me. No, I’m kidding. I think it’s like any new process, Bill, or any new technology. It sounds like common sense, but when deciding on any new strategy, be clear about why you’re doing it. You asked earlier on how AI is changing the localization industry. I think one huge thing I see, and I speak to enterprise buyers of localization services every day, that’s my job, that’s what me and my team do, one of the things that they tell me is that all of a sudden the C-suite know who they are. All of a sudden, the guys with the money, the people with the money, they know they exist. Oh, we’ve got a localization department, because, as we said, localization and translation were among the earliest adopters and earliest use cases for GenAI. So now there’s a lot of pressure from people who previously didn’t even know you existed, or maybe just saw you as a cost of doing business. Now they’re putting pressure on you to use AI. How are you using GenAI in your workflow? What can we as a business learn from it? Where can we save costs? Where can we increase volume? How can we use it as a revenue driver? Those sorts of things. So that being said, that’s a big opportunity, but where we see it go wrong more often than not is where people are doing it just because of that pressure, and they think, oh, I have to do it because I’m getting asked to do it. I’m getting asked to experiment. Again, it sounds really obvious, but they don’t really know what they’re looking for. Are they looking for time to be saved? Are they looking for costs to be removed? Are they looking to increase efficiencies within their overall workflow? So I think it’s like anything, isn’t it? Unless you know how you’re going to measure success, you probably won’t be successful. So that’s the first tip I’d give people. Be clear about what it is you’re looking for AI in localization to achieve. And again, one of the pitfalls is that we see lots of people wanting to experiment, and that’s good, and you want to encourage that. 
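To make that first pass concrete, here is a minimal sketch in Python of the routing pattern Steve describes: score each machine-translated segment, auto-approve the confident ones, and queue the rest for human post-editing. The review_score function is a deliberately trivial placeholder standing in for a hypothetical LLM review call; none of this is Acclaro’s actual tooling.

from dataclasses import dataclass

@dataclass
class Segment:
    source: str       # source-language text
    translation: str  # MT/LLM output to be checked

def review_score(seg: Segment) -> float:
    # Placeholder for a hypothetical LLM review call. A real first pass
    # would prompt the model to score accuracy, fluency, terminology use,
    # and adherence to brand and style guidelines. This stand-in only
    # checks that output exists and has a plausible length ratio.
    if not seg.translation.strip():
        return 0.0
    ratio = len(seg.translation) / max(len(seg.source), 1)
    return 1.0 if 0.5 <= ratio <= 2.0 else 0.4

def route(segments: list[Segment], threshold: float = 0.9):
    # First pass: auto-approve segments that clear the threshold and
    # queue everything else for human post-editing.
    approved: list[Segment] = []
    needs_human: list[Segment] = []
    for seg in segments:
        (approved if review_score(seg) >= threshold else needs_human).append(seg)
    return approved, needs_human

The threshold is the lever: in Steve’s example the first pass cleared roughly 70% of the volume, while content with real-world consequences, finance or medical devices, would sit behind a much higher bar or skip auto-approval entirely.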
I suppose as a chief exec, or even with our clients, we’d love to see experimentation, but when you see lots of people doing lots of different things just because it looks cool and they just want to experiment, unless it’s joined up and unless it’s with a purpose, it doesn’t always work well. So what we see when people do it well is that they have that purpose. They have it documented, actually. They have it agreed, if you like; they have that executive buy-in: this is why we’re doing it, and this is what we’re hoping to see, not just because it’s cool, but because it might save us X dollars or it might save us X amount of time. And what we see work well is when people do that and then they embrace small iterative tests. One of our solutions architects was on a call with me with a customer and just advised them not to boil the ocean. And again, I know this isn’t specific to AI, but let’s not do everything all at once. Lots of localization workflows have legacy technology, legacy connectors to other content repositories, and you can’t just rip it all out without a lot of pain and start again. So you’ve got to decide where you’re going to have that impact. Start small, very small tests, iterate frequently, get the feedback. That’s one of the key things. And then it just becomes like any other implementation of technology or of a workflow. One of the things we did at Acclaro is actually publish a checklist to help companies answer that exact question, but when you read it, there’s not going to be much there about specific AI technologies and this type of LLM is better for this, and that type of LLM is better for that. It’s not prescriptive. It’s just designed as a guide to say, okay, well, don’t get ahead of yourselves. Just follow a really sensible process, prove that it works, and then choose the next experiment. BS: Yeah, get people thinking about it. SM: Absolutely. BS: We hear a lot from people that, oh, it came down from the C-suite that we have to incorporate AI into our workflows in 2025, in 2026. And yeah, that’s usually all the directive is. There’s no foresight coming down from above saying, this is what we’re envisioning you doing with AI. So it really does come down to the people who are managing these processes to take a step back and say, okay, here’s where things are working, here’s where we could make improvements. Here are some potential footholds where we can start building with AI and see where it goes. But I think for a lot of people, the answer to how do I use AI is going to be different for every company out there. I mean, it might be similar, but I think it might be very different and very unique from company to company as to what they’re actually doing. SM: That’s what we see. Yeah, that’s what we see. And again, some of those pitfalls we’ve talked about: some companies have a different approach to information security and confidentiality. Some companies are just risk averse. Some companies should be more sensitive about their content than others. For some companies’ content, think finance, life sciences, medical devices, there are real-world problems if it’s not accurate, whereas for other companies’ content, yeah, okay, it might take you an extra 30 seconds to get that speaker to work or it might not. But I think, yeah, that’s no surprise. One of our customers said to me, AI is like tea. You need to infuse it. You can’t just dump it. 
You need to let it breathe. You need to let it circulate. You’ve got to decide the strength. You’ve got to decide where you get it from. You’ve got to decide what the human being making it has to do to make a great cup. And it’s just going to be different for every single person. BS: True. SM: We have five people in our house and we have five different types of tea; whoever’s making that tea has to know what everyone’s preferences are. And I think it’s the same with AI. And it’s the same with a lot of technologies, isn’t it? BS: It is. So let’s say someone is running a localization department, and their CEO says, “We need to incorporate AI. Here’s your mandate. Go run, figure it out, implement it.” Do you have any advice around how to report, I guess, the results, the findings, the progress back up? SM: Yeah. My first advice would be, if I was in that situation, to say to that person, listen, we’ve been doing this for 10 years. We just never used to call it AI. We used to call it neural machine translation or machine translation. But my second bit of advice is you’ve actually got to do it, because whilst the opportunity is there for localization managers to really drive and shape how AI is implemented, if they don’t do that, if they pretend it’s something other than what it obviously is, if they pretend it’s going away or that it’s a fad people are going to forget about, what’ll happen is that somebody else will be asked to implement AI and they won’t be. And it’s quite interesting: the persona, if you like, of the people that we’re working with in those enterprise localization teams is getting wider, more multidisciplinary. It’s very, very rare now that you’d have, in any decent-sized company, a localization manager making decisions about partners, vendors, and technology by themselves. It would always be with a keen eye from the technology team, the IT team, because everyone’s laser-focused on getting this right. So that’d be my second piece of advice. But I think if you define the results that you’re looking for, document those, and capture those, again, it’s not rocket science, it’s really just basic project management, then try and report on those regularly and quickly, in a way that lets you iterate. An AI pilot shouldn’t be a six-month project with results at the end of six months. If you’ve chosen the right size of pilot, you should be able to know within days or weeks whether it’s likely to bring the benefits you thought it would. BS: Very true. So you see the return on using it, or the lack of return on using it, much quicker? SM: Yeah, absolutely. Again, from my own personal experience, we’ve done a lot of helping and guiding clients with pilots, with experiments. It’s not all great results. And we haven’t manufactured anything to make the results look bad so that we stay in a job and people keep using the human service. But we have seen really good results. I’m thinking of one, it’s quite a specific use case to do with translation memories: the client was using GenAI to improve the fuzzy match, if you’re familiar with that term, a translation memory match, a fuzzy match enhancer, and they found that it improved about 80% of the segments in, I think, five languages. So again, if I look at that one, they didn’t pick every single language that they had. 
They only picked five, probably five where they could get some quick feedback, five more commonly spoken languages. And they were able to measure, in their tool, the post-editing time and the accuracy. And yeah, they found it improved 80%. I mean, 20% didn’t improve, so not 100% success, but they were able to provide real data to the powers that be to decide whether to extend it to their other language sets or their other content types. BS: Nice. Well, I think we’re at a point where we can wrap up here. Any closing thoughts on AI and localization? Good, bad, ugly, just do it? SM: I think the biggest thing for me is that AI is today. It’s not the future. It’s here. I’m in the UK, like I say, and there have been multi-billion-dollar announcements of investments, all specifically to do with AI, from companies like NVIDIA and Microsoft. AI is the now. So I think you don’t have a choice about whether to adopt it, whether to adapt to it being here. It’s just about how you choose to do it, really. That’s become our role as a language service provider. As a trusted partner of brands, our role has become to help guide and give our opinions. It’ll continue to change and we’ll have new use cases. And if you ask me those same questions, Bill, in six months or 12 months, I might give you some different answers, because we’ll have found new experiments and new use cases. BS: And that’s fair. Well, Steve, thank you very much. SM: Thank you, Bill. I enjoyed the conversation. Conclusion with ambient background music CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links. Want to learn more about AI, localization, and the future of content? Download our book, Content Transformation. The post Balancing automation, accuracy, and authenticity: AI in localization appeared first on Scriptorium.
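A brief aside on the fuzzy match example in the episode above: a fuzzy match is a translation memory entry whose source text is similar, but not identical, to the new sentence, and a fuzzy match enhancer hands that near-miss to an LLM to adapt rather than translating from scratch. Here is a minimal sketch of that pattern in Python; adapt_with_llm is a hypothetical stand-in for a real model call, not any particular vendor’s implementation.

from difflib import SequenceMatcher

# A tiny in-memory translation memory: (source, translation) pairs.
TM = [
    ("The camera is water resistant.", "La cámara es resistente al agua."),
    ("The battery lasts ten hours.", "La batería dura diez horas."),
]

def best_fuzzy_match(source: str):
    # Score every TM entry against the new source and return the closest,
    # along with its similarity ratio (1.0 means an exact match).
    scored = [(SequenceMatcher(None, source, s).ratio(), s, t) for s, t in TM]
    return max(scored)

def adapt_with_llm(new_source: str, tm_source: str, tm_translation: str) -> str:
    # Hypothetical stand-in for a real LLM call. The prompt would say,
    # roughly: "Translation A renders Source A. Edit Translation A so it
    # renders Source B instead, changing as little as possible and keeping
    # the existing terminology." Returned unchanged here so the sketch runs.
    return tm_translation

def translate(source: str, fuzzy_threshold: float = 0.75) -> str:
    score, tm_src, tm_tgt = best_fuzzy_match(source)
    if score > 0.999:             # exact match: reuse the stored translation
        return tm_tgt
    if score >= fuzzy_threshold:  # fuzzy match: enhance it with the LLM
        return adapt_with_llm(source, tm_src, tm_tgt)
    # Below the threshold the TM is no help; fall back to full MT/LLM
    # translation, which is outside the scope of this sketch.
    raise LookupError("no usable fuzzy match; translate from scratch")

In the client example Steve cites, this kind of enhancement improved about 80% of fuzzy-matched segments across five languages, measured by post-editing time in their tool.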
    --------  
    33:51
  • From classrooms to clicks: the future of training content
AI, self-paced courses, and shifting demand for instructor-led classes—what’s next for the future of training content? In this podcast, Sarah O’Keefe and Kevin Siegel unpack the challenges, opportunities, and what it takes to adapt. There’s probably a training company out there that’d be happy to teach me how to use WordPress. I didn’t have the time, I didn’t have the resources, nothing. So I just did it on my own. That’s one example of how you can use AI to replace some training. And when I don’t know how to do something these days, I go right to YouTube and look for a video to teach me how to do it. But given that, there are some industries where you can’t get away with that. Healthcare is an example—you’re not going to learn how to do brain surgery that someone could rely on from AI or a YouTube video. — Kevin Siegel Related links: Is live, instructor-led training dying? (Kevin’s LinkedIn post) AI in the content lifecycle (white paper) Overview of structured learning content IconLogic LinkedIn: Kevin Siegel Sarah O’Keefe Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. SO: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction SO: Hi, everyone, I’m Sarah O’Keefe. I’m here today with Kevin Siegel. Hey, Kevin. KS: Hey, Sarah. Great to be here. Thanks for having me. SO: Yeah, it’s great to see you. Kevin and I, for those of you that don’t know, go way back and have some epic stories about a conference in India that we went to together, where we had some adventures in shopping and haggling and bartering in the middle of downtown Bangalore, as I recall. KS: I can only tell you that if you want to go shopping in Bangalore, take Sarah. She’s far better at negotiating than I am. I’m absolutely horrible at it. SO: And my advice is to take Alyssa Fox, who was the one that was really doing all the bartering. KS: Really good. Yes, yes. SO: So anyway, we are here today to talk about challenges in instructor-led training, and this came out of a LinkedIn post that Kevin put up a little while ago, which we’ll include in the show notes. So Kevin, tell us a little bit about yourself and IconLogic, your company, and what you do over there. KS: So IconLogic, we’ve always considered ourselves to be a three-headed dragon, a three-headed beast, where we do computer training, software training, so vendor-specific. We do e-learning development, and I write books for a living as well. So if you go to Amazon, you’ll find me well-represented there. Actually, I was one of the original micro-publishers on this new platform called Amazon, with my very first book posted there called “All This PageMaker, the Essentials.” Yeah, did I date myself with that reference? Which led to a book on QuarkXPress, which led to Microsoft Office books. But my bread-and-butter books on Amazon even today are books on Adobe Captivate, Articulate Storyline, and TechSmith Camtasia. I still keep those books updated. So publishing, training, and development. 
And the post you’re talking about, which got a lot of feedback, I really loved it, was about training, and specifically what I see as the demise of the training portion of our business. And it’s pretty terrifying. I thought it was just us, but I spoke with other organizations similar to mine in training, and we’re not talking about a small fall-off in training. 15, 20% would be manageable. You’re talking a 90% training fall-off, which led me to think originally, “Is it me?” Because I hadn’t talked to the other training companies. “Is it us? I mean, we’re dinosaurs at this point. Is it the consumer? Is it the industry?” But then I talked to a bunch of companies that are similar to mine, and they’re all showing the same thing, 90% down. And just as an example of how horrifying that is: in some of our classes, we’d expect a decent-sized class to be 10, a large class, 15 to 18. Those were the glory days. Now we’re getting twos and threes, if anyone signs up at all. And that’s what I saw as the demise of training for both training companies and trainers. If you’re a training company and you’re hiring a trainer, one or two people in the room isn’t going to pay the bills. Got to keep the lights on, and with your overhead running 50%, 60%, you know this as a business person, you’ve got to have five or six minimum to pay those bills and pay your trainer any kind of a rate. SO: So we’re talking specifically about live instructor-led, in-person or online? KS: Both, but we went more virtual long before the pandemic. So we’ve been teaching more virtual than on-site for 30 years. Well, not virtual for 30 years; virtual wasn’t really viable until about 20 years ago. So we’ve been teaching virtual for 20 years. The pandemic made it all the more important. But you would think that training would improve with the pandemic; it actually got even worse, and it never recovered. So the pandemic was the genesis of that spiral down. AI has hastened the demise. But this is instructor-led training in both forms, virtual and on-site. I think it’s even worse for on-site. SO: So let’s start with the pandemic. You’re already doing virtual classes, along comes COVID and lockdowns, and everything goes virtual. And you would think you’d be well-positioned for that, in that you’re good to go. What happened with training during the pandemic era when that first hit? KS: When that pandemic first hit, people panicked and went home and just hugged their families. They weren’t getting trained on anything. So it wasn’t a question of, were we well-positioned to offer training? Nobody wanted training, period. And this was, I think if you poll all training companies, well, there are certain markets where you need training no matter what. Healthcare, as an example, they need training. Security needed training. But for the day-to-day operations of a business, people went home and they didn’t work for a long time. They were just like, “The world is ending.” And then, oh, the world didn’t end. So now they’ve got to go back to work, but they didn’t go back to work for a long time. Eventually people got back to work. Now, are you on-site back at work or are you at home? That’s a whole other thing to think about. But just from a training perspective, when panic sets in, when the economy goes bad, training is one of the first things you get rid of. Go teach yourself. And the teaching-yourself part is what has led to the further demise of training, because you realize, I can teach myself on YouTube. At least I think I can. 
And I think when you start teaching yourself on your own and you think you can, it becomes: the training was good enough. So if you said, “Let’s focus on the pandemic,” that’s what started it, the downward spiral. But we even saw the downward spiral before the pandemic, and it was the vendors that started to offer, themselves, the training that we were offering. SO: So instead of a third-party, certainly a third-party, mostly independent organization offering training on a specific software application, the vendors said, “We’re going to offer official training.” KS: Correct. And it started with some of these vendors rolling out their training at conferences. And I attended these conferences as a speaker. I won’t name the software, I won’t name the vendor, but I would just tell you, I would go there and I would say, “Well, what’s this certificate thing you’re running there?” It’s a certificate of participation. But as I saw people walking around, they would say, “I’m now certified.” And I go, “You’re not certified after a three-hour program. You now have some knowledge.” They thought they were certified and experts, but they wouldn’t know they weren’t qualified until they were told to do a job. And then they would find out, “I’m not qualified to do this job.” But that certificate course, which was just a couple of hours by this particular vendor, morphed into a full-day certificate. They were now charging a lot of money for it, which morphed into a multi-day thing, which has now destroyed any opportunity for training that we have. And that’s when I started noticing a downward spiral. If you were tracking it like your finances, it would be your investments going down, down, down. It’s like a plane, nose down. SO: And we’ve seen something similar. I mean, back in the day, and I do actually… So for those of you listening at home that are not in this generation, PageMaker was the sort of grandparent of InDesign. I am also familiar with PageMaker, and I think my first work in computer stuff was in that space. So now we’ve all dated ourselves. But back in the day, we did a decent amount of in-person training. We had a training classroom in one of our offices at one point. Now, we were never as focused on it as you are and were, but we did a decent business of public-facing, scheduled two-day, three-day, “Come to our office and we’ll train you on the things.” And then over time, that kind of dropped off and we got away from doing training because it was so difficult. And this is longer ago than you’re talking about. So the pattern that you’re describing, where instructor-led, in-person training, classroom training with everybody in the same room, kind of got disrupted a while back. We made a decent living doing that for a long time and there was- KS: Made a great living doing that. Oh, my God. That was the thing. SO: But we got away from it, because it got harder and harder to put the right people in the right classes and get people to travel and come to us. So then there’s online training; we kind of got rid of training, and you sort of pivoted to online/virtual. And then ultimately, the pandemic has made it such, from my point of view, that the vast majority of what we do in this space is custom. We’re doing a big implementation project, we do some custom training that might be in-person, on-site, but much more often it is online, live online instructor-led, but custom. Because all of the companies that we’re dealing with, even if people did return to office, very much they’re fragmented, right? 
It’s two people here and five people there, and four people there, and one in every state. And so, bringing them all together into a classroom is not just bringing the instructor in, but bringing everybody in, and it costs a fortune. And that’s before we get into the question of, can they get across the borders and can they travel? There are visa issues, there are admin issues, people have caregiving responsibilities, they can’t travel. There’s a whole bunch of stuff that goes into actually relocating from point A to point B to do a class at point B. So fine. Okay. So along comes the pandemic that really pushes on the virtualization, right? The virtual stuff. And then you’re saying the vendors get into it and they are clawing back some of this revenue for themselves. They’re basically saying, “We’re going to do official vendor-approved stuff,” which then makes it very difficult as a third party, because you have to walk that line, and I’ve been there, you have to walk that line between: we are delivering training on this product which belongs to somebody else, and we can maybe be a little more forthright about the issues in the product because it’s not our product. So we’re just going to say, “Hey, there’s an issue over here. It doesn’t really work. Do it this other way.” Not toeing the official party line. Okay, so we have all of that going on and all of those challenges already. And now along comes AI. So what does AI do to this environment that you’re describing? KS: It further destroys it. I’ll give you an example. My blog was on Typepad, and we received an email on September 1st, 2025, and we’re recording this September 4th, 2025, okay? So three days ago I got an email saying, “Hello, we’re shutting down. Sorry. You’ve got 30 days to get your stuff out of here.” And I’m like, “What?” It’s basically being kicked out of your apartment or your house. So I’m like, “All right, well, go to AI.” And I asked AI, “What is the top blog software?” It said, “WordPress.” Love it or hate it, okay. So I went to WordPress. I had no idea how to use WordPress. I had no staff available to help me. So I had to get my stuff out of Typepad, and on and on it went. I went to AI, ChatGPT specifically, and I said, “Teach me how to use WordPress,” and specifically how to get my crap out of Typepad. I say crap, my stuff out of Typepad. In a matter of, what, two days, I had everything transferred over. So, didn’t need training; otherwise I would’ve had to go to training to learn how to do that, and I didn’t have to. So that’s an example of: there’s probably a training company out there that’d be happy to teach me how to use WordPress. I didn’t have the time, I didn’t have the resources, nothing. So I just did it on my own. That’s one example of how you can use AI to replace the training. There are other examples where that training is not just good enough, it’s good. It’s not lacking. When I don’t know how to do something these days, I go right to YouTube and look for a video to teach me how to do it. That said, there are some industries where you can’t get away with that. Healthcare, as an example: you’re not going to learn how to do brain surgery that anyone could rely on from AI or a YouTube video. SO: We hope. KS: We hope. “Hey, relax. I know this is your first time, Sarah. I’m your surgeon. I watched a video yesterday, I feel pretty good about it as I grab that saw.” I don’t believe you’re going to be comfortable with that. So listen, it’s bad enough. And you mentioned the vendor that is now offering training. 
So vendors pull back; they want that as a revenue source. This particular vendor is using it as a revenue tool, but there are also vendors out there that are actively stopping you from offering training classes, and on it goes. SO: Yeah, I do want to talk about that one a little bit. I know nothing about the specifics of your situation, but this is a losing battle. Because you were just talking about YouTube: I was doing some research for a very, very, very large company that makes farm equipment, and I went looking for their content. And they had content on their website; it was like, type in your product name or product number and it would give you the official user manual, which was of course ugly and terrible. But I discovered that if you typed in something like, “How do I fix the brakes on my X, Y, Z product?” it would take you to YouTube. And it would take you to this YouTube channel that had a lot of subscribers and was in fact not at all the official company YouTube channel. KS: It was a dude who was working on it? SO: It was a dude in Warsaw, North Carolina, which is not the same as Warsaw, Poland. It is a tiny, tiny, tiny little place, mostly known to me as being halfway between where I am and the beach. It’s where we stop to get gas and summer peaches and corn from the farm stand and fried chicken on our way to the beach, because that’s the thing we do. That’s where Warsaw is. It has a population of, I don’t know, 3,000 maybe. KS: Okay, yeah. SO: I have no idea. But there’s some guy who works for the dealership there who’s making these videos explaining how to do maintenance on these, in this case, tractors, and he has got the audience. Not the official website, which by the way does not have a YouTube channel that I’m aware of, or at least that I could find then. This was five, 10 years ago. It has been a while. But so, there’s all this third-party content out there, and there’s this ecosystem of content, because it’s digital. You can’t really control that unless, we were talking about this earlier, unless you’re doing something like nuclear weapons, intelligence work, or maybe brain surgery. You can probably control those things. That’s about it. Clearly things are changing, and not for the better if your revenue is built on instructor-led training, whether in-person or online, unless we’re training on brain surgery, which most of us are not. So what’s the path forward? KS: I’m thinking about it, actually. SO: I am not signing up for you to do my brain surgery. KS: I need someone to practice on. Sarah, let me know if you’re available. SO: Oh, I’m so sorry, you’re breaking up. I can’t hear you. Okay, so what does the path forward look like? I mean, what does it mean to be inside this disruption, and where do you go from here? KS: Okay, so every training company that I have contacts in, they’re all down significantly. The ones that are surviving have government contracts. SO: Mm-hmm. KS: And that is to develop training in all of its guises, but primarily they’re seeing a call for virtual reality training. That’s really, really hot right now. But not the virtual reality training that you can create with the Captivates and the Storylines of the world. That’s too lowbrow. They’re talking about immersive, almost gamification, where you build a world. So if that’s your expertise, you can create training in that. That’s what people want. It looks like augmented reality and virtual reality. I can’t see it. 
Maybe I’m of a certain age that I’m like, “I’m not putting goggles on to take my training.” But that is pretty popular with other generations. So you can’t ignore it; I think you have to embrace it. So government contracts: if you can get those, you’ll be okay in the training business. Several of my colleagues have actually done that. So that’s a leg up. The other is to embrace asynchronous training and put your materials out there, where they now live forever. So I ignored for years these providers of asynchronous training where you put your content there and they sell it for you. I’ve got five classes on Udemy now, and each of them sells pretty well. Matter of fact, my Captivate course on Udemy is one of their bestsellers. That does not translate into offsetting the revenue lost from your training gigs, when you were bringing in six, seven, eight hundred dollars a person for a training class. Our prices were between $695 and $895 per person to take a public class. But it certainly does bring in some revenue. So if you have the ability to create the asynchronous training, the video training, and make it really, really good training, really impactful, then that’s going to help you stay in the game as long as you can. I also think you have to embrace AI; getting under the covers and saying, “I don’t want to see it,” is not the way to go. I now use AI as a tool. I don’t think it replaces me; I think that I have more to offer in guiding the course than AI, but it gives me a nice “get me started here.” Maybe you’ve got a little writer’s block, maybe you’re just getting started. It’s a beautiful day out, I can’t get started. Have AI start, and you’re started. But if you’re going to go that route and you have AI make suggestions, you’d better fact-check it. Just as an example, I was curious, so I asked ChatGPT to create an exam for Articulate Storyline. That is a tool I know really well; I’ve written exams for Storyline and Captivate and Camtasia. I said, “Write an exam. I want to see what you come up with.” And some of the questions were actually worded better than what I had done. They were very similar questions. And I go, “I kind of like the way you, AI, did that.” Which was kind of a bummer. But I would say a good 30% of what I read, while it was well-written, was completely wrong. SO: Yes, confidently wrong. KS: Yes, it was confidently wrong. Asking questions like, “When you do this in Storyline, what is the correct thing? What do you do?” And Storyline doesn’t do that thing. They were talking about Rise, as an example. I’m like, “You’ve gone and combined Rise with Storyline.” So if you’re going to use AI, it’s the way you ask the question, your prompts. So get some training on engineering your prompts and fact-checking what you get from those prompts. But I use AI every day in my writing to make sure I don’t have grammar issues. So I’ll tell AI, “Check this for clarity and grammar.” So it’s my words, but it now is saying, “Well, there are a couple of typos, I fixed those, and a couple of dangling modifiers, I fixed those.” So it makes me feel like I’m writing better. But do keep in mind, if you put your stuff into ChatGPT, it’s now part of this mass of stuff that other people are going to get access to. So you can’t copyright anything that you put in AI. I wrote a book about copyright and training materials and things to think about, because we have a lot of people finding an image of a nice puppy on Google and using it in their training, and that puppy was copyrighted. 
So anything you do on AI, any photos that get created, any artwork, anything, any writing, can’t be copyrighted, because only a human can get a copyright. So that’s something to think about. If you have something really, really good, you really didn’t create that, so you can’t copyright it. You’re going to have to adapt, or you’re going to fail in the training industry, again, unless you’re in very specific niche markets or, as you mentioned, custom training. If you don’t adapt, you’re going to fail. And that adaptation is going to be to embrace AI and asynchronous training, to put your training out there, available 24 hours a day, seven days a week, when you can’t be. And that’ll offset getting these onesies and twosies in your class. SO: And it removes the time-bound, I-have-to-set-aside-these-two-hours-or-these-four-hours-of-this-day-to-be-in-the-classroom constraint, whether virtual or not, if it’s live. I do think we’re going to see a split between things that go higher and higher end, that people are willing to pay nearly anything for, versus the low end, where the price is going… There’s going to be downward pressure on the price for all the low-end stuff, because the barrier to entry for producing asynchronous training is pretty minimal, and it gets lower every single day, because there are so many people out there that can potentially do that. KS: Anybody can hang out a shingle and say that they’re an expert. So I mean, it’s the credentials of the trainer too, I think. Who is the person that’s teaching this? Is it, what do we call it, Chuck with a truck? Is it Chuck with a truck, or is it someone who has actually done this? I wouldn’t want to get trained on handling my content by someone who hadn’t done it. I’d want you to handle that, right? So a content strategy: “I mean, who came up with that strategy? Oh, Bob. Has Bob ever done it? No, but he feels good about it. No, I want to get a Sarah who’s done it for years and years and years.” SO: Yeah, I mean, that’s an interesting point though, because at the end of the day, if you commoditize/productize training, you’re going to have a product, the asynchronous training, that’s a package, and you get what you get. When it’s live with an instructor, you’re going to get that instructor on that day in that context. They’re feeling good, they’re feeling bad. The classroom dynamics are good or bad or weird. Every experience is going to be different. Whereas with async, it’s always going to be the same. I mean, barring internet connectivity or something, as the learner, you’re going to get a consistent experience. Now, it’s not going to be the best possible experience, right? Because the best possible experience is you’re in a group with some other people in a room with an amazing instructor. KS: That is the best. SO: That is the best. KS: There’s good too- SO: It costs the earth. KS: Yeah, there’s good to the asynchronous training too: because it’s always the same, it’s going to be consistent. How many times have you run a live class and one of the attendees just spoiled the sauce? And you’re reminding me now of a colleague of mine. They were doing their certification as a certified technical trainer, CTT, and back in those days, you actually had to record yourself teaching. SO: Oh, yes, there was a VHS tape of me, and kids, that is video, pre-digital video. KS: That is correct. VHS tape. 
And I had to do the same thing, but I remember, for this one colleague of mine, the students in this classroom, this fake classroom, were other trainers who were also getting their recordings done. And I remember she was being recorded, and the camera was over her shoulder looking at the students, because she had to show the students. And she made a comment that she knew was correct, and one of these students shook her head, “Nope, nope. That’s not right. Nope.” And the trainer is now thinking, “What are you doing? Why are you shaking your head no and contradicting one of us? How about just nodding?” And at some point it got turned around, where the students started shaking their heads but then realized, “Oh my God, you’re defeating all of us in this room.” So yes, that was to your point: the training can vary wildly in a live class, whether it’s virtual or on-site, based on the attendees. Because listen, I’ve been teaching Captivate since it was called RoboDemo, so years and years and years, and no two classes have ever been the same. It’s all based on the dynamics of the students in my live class. And you get one person in there who is stuck, can’t move forward, File > Open is a mystery. “Go to the File menu, choose Open.” “How do you do that?” Okay, mouse skills. All of that can either derail or help your class. Funny moments, whatever they may be. But asynchronous training, if you do it right, is always consistently good. The problem is there’s no live interaction. So you can’t ask that instructor, “Well, what do you think about this? What do you think about that?” So yeah, you made me laugh when you mentioned that, the dynamics of your live class. You’d better be fast on your feet to be a live trainer. And if you’re going to teach virtually, you should know how to do it. Because listen, I think you’ll agree, there is a vast difference between teaching a class live on-site versus live online, or, God forbid, live online and live on-site, where you’re doing both at the same time. Or if you’re going to do blended learning, you’ve got to mix all three. You’d better know what you’re doing as a facilitator and a trainer to do that, or you’ll fall flat on your face. You’ll hear all kinds of complaints about people who teach these live classes on-site that now incorporate virtual, and they ignore the virtual audience completely. So the virtual audience is not included in the training; they feel like they’re watching a recording. So you’ve got to know how to engage this audience. I’m actually really stunned, Sarah, that conferences still survive on-site. We mentioned a couple of times, before we turned on this recording, why are those conferences live on-site? People are going there to network face-to-face. I guess that’s the big one, but not the content that you’re learning. That content could have been taught virtually. SO: Yeah, I’ve held the position for a long time that the most important part of a conference is the hallway track, right? The conversations at lunch, in the hallway, and in the exhibit hall and everywhere else. There are a couple that are doing online in addition to in-person, and typically the- KS: ATD does that. Yeah, does a good job at that. Yeah. SO: Yeah, LavaCon is doing that, they’re coming up. But yeah, they have an online track with a chat, a pretty lively chat, and then they also have the in-person version if you can get there in-person. 
KS: Which is successful only if the facilitator addresses the online chat, if the facilitator addresses someone who’s virtual. Yeah. SO: And fun fact, Phylise Banner has been running that for years and years and has done a fantastic job of exactly that, of making sure that the online people get into the conversation, even when there’s 200 people in the room and another couple hundred on the chat, and she’s making sure that they get their questions into the discussion. Okay, so that was cheerful, and that made me feel better, because the first half hour of this was super not encouraging. So I think I’m going to close us out there, because I’m pretty sure we could go on forever, but let’s leave it there. Kevin, thank you for coming and for giving us the inside information on what’s happening in training land. And hopefully I’ll see you again somewhere in-person at a conference. KS: Or virtual, with the camera on, is fine. So yeah, great working with you, Sarah. Thanks for having me. SO: Great to see you. Bye.  Conclusion with ambient background music CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links. Questions about this episode? Let’s talk! 
The post From classrooms to clicks: the future of training content appeared first on Scriptorium.
    --------  
    31:30
  • From PowerPoint to possibilities: Scaling with structured learning content
What if you could escape copy-and-paste and build dynamic learning experiences at scale? In this podcast, host Sarah O’Keefe and guest Mike Buoy explore the benefits of structured learning content. They share how organizations can break down silos between techcomm and learning content, deliver content across channels, and support personalized learning experiences at scale. The good thing about structured authoring is that you have a structure. If this is the concept that we need to talk about and discuss, here’s all the background information that goes with it. With that structure comes consistency, and with that consistency, you have more of your information and knowledge documented so that it can then be distributed and repackaged in different ways. If all you have is a PowerPoint, you can’t give somebody a PowerPoint in the middle of an oil change and say, “Here’s the bare minimum you need,” when I need to know, “Okay, what do I do if I’ve cross-threaded my oil drain bolt?” That’s probably not in the PowerPoint. That could be an instructor story that’s going to be told if you have a good instructor who’s been down that really rocky road, but again, a consistent structure is going to set you up so that you have robust base content. — Mike Buoy Related links: AEM Guides Overview of structured learning content CompTIA accelerates global content delivery with structured learning content (case study) Structured learning content that’s built to scale (webinar) LinkedIn: Mike Buoy Sarah O’Keefe Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Sarah O’Keefe: Hi everyone, I’m Sarah O’Keefe. I’m here today with Mike Buoy. Hey, Mike. Mike Buoy: Good morning, Sarah. How are you? SO: I’m doing well, welcome. For those of you who don’t know, Mike Buoy has been the Senior Solutions Consultant for AEM Guides at Adobe since the beginning of 2025, and before that had a, we’ll say, long career in learning. MB: Long is accurate, long is accurate. There may have been some gray hair grown along the way, in those 20-plus years. SO: There might have been. No video for us, no reason in particular. Mike, what else do we need to know about you before we get into today’s topic, which is the intersection of techcomm and learning? MB: Oh gosh, so if I think just quickly about my career, my background’s in instructional design, consulting, instructing, all the things related to what you would consider corporate L&D, moving into the software side of things into the learning content management space. 
And so what we now call component content management, we, and when I say we, those are all the different organizations I’ve worked for throughout my career, have been focused on: how do you take content that is usually file-based and sitting in a SharePoint drive somewhere, and how do you bring it in and get it organized so it’s actually an asset as opposed to a bunch of files? And how do you take care of that? How do you maintain it? How do you get it out to the right people at the right time in the right combination, all the rights, all the right nows? That’s really the background of where I come from. And that’s not just in learning content; at the end of the day, learning content is often technical communication-type content with an experience wrapped around it. So it’s really a very fun retrospective when you look back on where both industries have been running in parallel and where they’re really starting to intersect now. SO: Yeah, and I think that’s really the key here. When we start talking about learning content, structured authoring, techcomm, why is it that these things are running in parallel and sitting in different silos? What’s your take on that? Why haven’t they intersected more until maybe now, when we’re seeing some rumblings of maybe we should consider this? Until now it’s been straight up, we’re learning and you’re techcomm, or vice versa, and never the twain shall meet, so why? MB: Yeah, and it’s interesting, when you look at most organizations, of the two major silos that you’re seeing, one is going to be product. So whether it’s a software product, a hardware product, an insurance or financial product, whatever that product is, technical communication, what is it? How do you do it? What are all the standard operating procedures surrounding it? That all tends to fall under that product umbrella. And then you get to the other silo, and that’s the, hey, we have customers, whether those customers are our customers, or the internal customers, our own employees, that we need to train and bring up to speed on products and how to use them, or perhaps even partners that sit there. And so, typically, techcomm is living under the product umbrella, and L&D is either living under HR or customer success or customer service of some sort, depending on where they’re coming from. Now in the learning space, over the last decade or so, you’re seeing a consolidation between internal and external L&D teams, and having them get smarter about, what are we building, how are we building it, who are we delivering it to, and what are all those delivery channels? And then when I think about why they’re running in parallel, well, they have different goals in mind, right? techcomm has to ship with the product and service, and training ideally is doing that too, but there’s often a little bit of a lag: “Okay, we shipped the thing, how long is it before we have all the educational frameworks around it to support the thing that was shipped?” And so I think leadership-wise, very different philosophies, very different principles on that. techcomm is very much focused on the knowledge side of things. What is it? How do you do it? What are all the SOPs? 
And L&D leans more towards creating a learning experience around, “Okay, well here’s the knowledge, here’s the information, how do we create that arc going from I’m a complete novice to whatever the next level is?” Or even, I may be an expert and I need to learn how to apply this to whatever new changes there are in my world and get knowledgeable and then skilled in that regard. So I think those are kind of the competing mindsets and philosophies, as well as, I won’t say competing, but parallel business organizations, and that’s why we don’t usually see those two together. And if we think about it from a workflow perspective, you have engineering or whoever’s building the product handing over documentation of what they’re building to techcomm, and techcomm is taking all of that and then building out their documentation, and then that documentation gets handed to L&D for them to say, “Well, how do we contextualize this and build all the best practices around it and recommendations and learning experiences?” So there is a little bit of a waterfall effect for how a product moves through the organization. I think those are the things that really contribute to it being siloed and running in parallel. SO: Yeah. And I mean, in many, many organizations, the presence of engineering documentation or product design documentation is also a big question mark, but we’ll set that aside. And I think the key point here is that learning content, and you’ve said this twice already, learning content in general and delivery of learning content is about experience. What is the learning experience? How does the learner interact with this information, and how do we bring them from “they don’t understand anything” to “they can capably do their job”? The techcomm side of things is more about the point of need. You’re capable enough, but you need some reference documentation, or you need to know how to log into the system, or various other things. But techcomm, to your point, tends to be focused much less on experience and much more on efficiency. How do we get this out the door as fast as possible to ship it with the product? Because the product’s shipping, and if you hold up the product because your documentation isn’t ready, very, very bad things will happen to you. MB: Bad, bad, very bad. SO: Not a good choice. MB: It’s not a good look. It’s not a good look. SO: Now, what’s interesting to me is, and this sort of ties into some of the conversations we have around pre-sales versus post-sales, marketing versus techcomm kinds of things, as technical content has moved into a web experience, online environment, and all the rest of it, it has shifted more into pre-sales. People read technical documentation, they read that content, to decide whether or not to buy, which means the experience matters more. And conversely, the learning content has fractured into classroom learning and online instructor-led and e-learning and a bunch of things I’m not even going to get into, so it has fractured into multi-channel. Learning evolved from the classroom into lots of different channels, while techcomm evolved from print into lots of different channels online, and so the two are kind of converging, where techcomm needs to be more interested in experience and learning content needs to be more interested in efficiency. Which brings us then to: can we meet in the middle, and what does it look like to apply some of the structured authoring principles to learning content? 
We’ve talked a lot about making techcomm better and improving the experience. So now let’s flip it around and talk about how we bring learning content into structured authoring. Is that a sensible thing to do? I guess that’s the first question: is that a sensible thing to do? MB: Yeah, and here’s the thing that I like to keep in mind when talking about structured authoring, the context for why in the world we would even consider it. When I think of traditional L&D training courses, whether it’s butts in seats at an instructor-led training event, whether I’m actually in a physical classroom or I’m sitting virtually in a Zoom class for example, or it’s self-paced e-learning, so much great content is built and encapsulated in that experience and is not able to be extracted out. My favorite example for talking about this is: I’ve got a big truck sitting in my driveway, I need to change the oil on it, it’s time. If it’s the first time I’ve ever changed oil, absolutely, I want all the learning. I want the scaffolding. I want the best practices, how I’m going to set up my work environment, the types of tools, how I’m going to need to deal with all the fluids, what I need to purchase. I’m going to dive into all that. In the real world, university of YouTube, I’m going to go watch videos on this, and there’s going to be some bad content, there’s going to be some gems, and I’m going to pay attention to the ones that are good. Now as I go from a novice, I’m going to build that knowledge of how to do it, I’m going to apply that knowledge, I’m actually going to go do it. Now, I’m probably going to make a mess and make mistakes my first time through, but that’s also building experience. So I’m moving from novice to knowledgeable to building skills, and as I do it more and more, I move into that realm of being experienced. Now as you move further up that chain, you need less and less support, to the point where I’m like, “Crap, which oil do I need to buy? What are the torque specs on my drain plug?” I really only need three or four data points to do the job now. So that’s where, as I move from a novice to an expert, I need to be able to skim and find exactly what I need in the moment of need, the just-enough information. And so take the oil-changing experience and apply it to any product or service: the customers you’re training, the people who are consuming your content, are going through the same thing. So learning-wise, why structured? Once I get to the expert level of things, I am not going to log into the LMS, and I’m not going to launch that e-learning course, and I’m not going to click next 5 to 10 to 20 times to get to the answer that has the specification tables of, here’s what I need and what I need to do in order to accomplish the task at hand. Everybody’s nodding their head every time I ask, “When was the last time you logged into the LMS to get an answer to a question?” The only time I’ve ever had somebody go, “Oh, me,” it was actually an LMS administrator. So learning is great at creating that initial experience, but their content’s trapped. It is stuck inside that initial learning experience. So getting back to the question, why structured authoring? 
Well, if you move to structured authoring, where you’re taking your content and building it in chunks, yes, you can create that initial learning experience where you’ve assembled that very crafted arc: we’re taking you from novice, getting you the knowledge, giving you the opportunities to practice the skill in a safe environment and fail well and learn from that, and getting you to a place where you move from novice to skilled. And then over time, this is where a lot of L&D gets stuck: because their content’s trapped in that initial learning experience, they can’t easily extract that information out and provide the things people need to move from skilled to experienced and from experienced to mastery. So that’s where I think about, well, what does techcomm do really well? techcomm supports the “I’ve got enough skills to do the job and I need to reference very specific information” moment, or the SOP: I’m on step four, I forget what I need to enter to get through step four, and I can hop over to the documentation and find that. So techcomm has figured out the structured authoring part. You mentioned creating new, varied experiences for getting to the technical communication. Multi-channel delivery: I want to hop on and hit my search or hit my AI chatbot, pull up the information, and get just enough to get through the task that I’m doing. Learning is still often stuck. If we equate it to the technical communication side, they’re still stuck in the “I’m hand-building a Microsoft Word-based 500-page user guide” stage: it’s a lot of work to build it, it’s a lot of work to maintain it, and it’s not easy to extract that information out to use it for other things. So why structured authoring? Future-proof your content, make it more flexible. You’ve invested so much time and energy creating great content, great experiences, so why not make it modular, so you can pull things out and create new and different ways of consuming that content and deliver it in different bite-size bits and pieces along the way? SO: And I guess we have to tackle the elephant in the room, which is PowerPoint. So much learning and training, in particular, especially classroom training, is identified with an instructor standing at the front, running through a bunch of slides. And we like to say that PowerPoint is the black hole of content, that’s where content goes to die, and once it goes in, you never get it back out. So what do we say to the people that come in and they’re like, “You will pry PowerPoint from my cold, dead hands”? MB: Such a great question. I’ll jokingly refer to PowerPoint as “my precious.” Here’s the reality: PowerPoint is not the knowledge chunk. That knowledge is actually sitting in the head of the instructor; the PowerPoint is providing the framework for them to deliver and impart that knowledge and those best practices. It’s there to provide guardrails so that it’s done in a consistent fashion, and there’s a bare minimum amount of structure there… There’s a bullet point there, they’re going to talk about it. The quality of how they’re going to talk about it and present it is going to vary based on the person delivering the content. So if you’ve got a bunch of PowerPoint slides, you don’t necessarily have all of your training material well documented. 
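A quick illustration of the modular-chunk idea Mike describes above, as a minimal DITA sketch; the file name, IDs, and torque value are illustrative, not from the episode. A chunk is written once in a shared topic and pulled by reference into every course or reference deliverable that needs it.

    <!-- shared-chunks.dita: a library topic of reusable content -->
    <topic id="shared-chunks">
      <title>Shared course content</title>
      <body>
        <!-- The single canonical statement of the drain-plug spec -->
        <p id="torque-spec">Torque the oil drain plug to 25 ft-lb (34 N·m).</p>
      </body>
    </topic>

    <!-- In any course or reference topic: reuse by conref, not copy/paste -->
    <p conref="shared-chunks.dita#shared-chunks/torque-spec"/>

Change the spec once in the shared topic, and every deliverable that conrefs it picks up the new value on the next publish.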
Now, if you’ve got parallel instructor guides and student guides that talk about the details of what should be said behind those bullet points, you’re a lot closer to having that information. So why structured authoring? Well, again, the good thing about structured authoring is you have a structure: if this is the concept that we need to talk about and discuss, here’s all the background information that goes with it. So with that structure comes consistency, and with that consistency, you have more of your information and knowledge documented so that it can then be distributed and repackaged in different ways. Because if all you have is a PowerPoint, you can’t give somebody a PowerPoint when they’re in the middle of an oil change and say, “Here’s the bare minimum you need,” when I need to know, “Okay, what do I do if I’ve cross-threaded my oil drain bolt?” That’s probably not in there. That may be an instructor story that’s going to be told if you have a good instructor who’s been down that really rocky road. But again, structure and being consistent about it is going to set you up so that you have robust base content. We’ve got Legos in the house, I’ve got two boys. Gosh, I’ve stepped on so many Legos in my life, it’s ridiculous. But the Lego metaphor works because you have a more robust batch of Legos that you can create new creations from, rather than a limited set if you’re only doing PowerPoint. SO: And because you’re nice, and I’m not, I’ll say this: we can produce PowerPoint out of structured content, that is a thing we can do. I’m not saying it’s going to be award-winning, every-page-is-a-special-snowflake PowerPoint, but we can generate PowerPoint out of structured content. And if you’re using it as a little bit of instructor support in the context of a classroom or live training, that’s fine. A lot of the PowerPoint that we see, where people say, “This is what I want, and if you don’t allow me to do this…,” and there’s this rainbow unicorn traipsing across the side of the page kind of thing, no, we can’t do one-off slides, we can’t do crazy every-slide-is-different stuff, but the vast majority of the content that I see that is PowerPoint-based and kind of all over the place is not actually effective. So it’s like, this is not good. We have the same issue with InDesign. We see these InDesign pages that are highly, highly laid out, and it’s like, “We need this.” Well, why? It’s terrible. I mean, it’s awful. What are you doing here? No, we can give you a framework. MB: Now, you’re telling somebody that their baby’s ugly when you say that. That’s somebody’s baby. SO: I would never tell somebody that their baby is ugly, but I have seen a lot of really bad PowerPoint. Babies are wonderful. MB: Yes. SO: It’s so bad. So why does the PowerPoint exist, and how do we work around that? And also, are you delivering in multiple languages? Because if so, we need a way to localize this efficiently, and we’re right back to the structured content piece. MB: And as soon as you’re talking about PowerPoint, it is the poster child of pixel-perfect placement. As soon as I take a pixel-perfect product and have to translate it from English to, let’s just say, French, from the growth of the text alone, what was a perfectly placed pixel layout, my beautiful slide, is now a jumbled mess. So just because you can doesn’t mean you should. And the thing is, PowerPoint and Microsoft Excel are the duct tape that runs business. Everybody has it. 
Everybody uses it. That’s the reality. Now, the thing is, does everything have to be structured? I don’t believe it has to be. There are absolutely the one-off snowflake instances where, you know what? PowerPoint is the exact right tool for the job. Maybe it’s the one-off presentation that really is not going to see any reuse; it’s expendable, it’s disposable. We need to get the information communicated quickly. I’m going to fire up PowerPoint. I’m going to use it as my, I’m going to do air quotes, “throwaway content,” because it’s something that is short, sweet, and needs to be communicated, absolutely. I’m not, and I don’t think you are either, saying that PowerPoint has to go away; it’s the when is it appropriate and when is it not? SO: I mean, I am the queen of the one-off, can-never-be-reused content being developed in, well, now I refuse to use PowerPoint, but in slideware, for a short presentation. So the next one of you that’s listening to this and walks up to me at a conference and says, “Oh, is your presentation structured content?” No, it is not. Thank you for asking. Why isn’t it structured? Because I don’t reuse it at scale. Because in fact, every presentation at every conference is a special snowflake and has been lovingly handcrafted by me to deliver the message that I need, the context that I need, potentially the language, but to your point, even if I’m not localizing the presentation itself, the cultural context matters. So if my audience is largely English-speaking or primarily English, or… I mean, we’re going to Atlanta for LavaCon, that is going to be mostly a US-based audience, and maybe we get some Canadians, eh. And other than that… But mostly US and a US context. Will I be using excessive amounts of images from the Georgia Aquarium? Yes, I will. Now, when I go to conferences elsewhere, so let’s take tcworld in Germany in November, we’re delivering content in English, and the audience ranges from perfect English speakers to sort of barely hanging on. And so my practice at a conference like that is to include more text on my slides, because if I include some additional text, it gives the people that are not quite as comfortable in English a little bit more scaffolding to hang onto as they’re trying to follow my ridiculous analogies and insane references to cultural things. I also do try to pay attention to the kinds of words that I’m using and the kinds of idioms that I’m using so that they’re just not completely lost in space or things are not coming from left field or whatever. So the context matters, and no, my presentations are not structured. But pulling this back, let’s talk about the potential. So when we look at learning content and you think about saying, okay, we’re going to structure our learning content, or we’re going to structure some of our learning content, what does that mean in terms of what gets enabled? What are the possibilities? What are the things that you can do with structured learning content that you cannot do in unstructured, by which I mean PowerPoint, but unstructured, locked-in content? If we break this stuff into components and we deliver structured learning content, what are the ideas there? What are the possibilities? MB: Well, as you’re explaining the PowerPoint point of view, a word that came up a few times was scale. I’m not having to do it at scale. Effectively, it is a one-off. 
Yes, I’m going to personalize it for the audience, and the degree of personalization and customization that you’re doing per conference, per audience, per default language that they’re speaking, you’re able to scale that to the degree that you need to. There’s no need for you to put your content in DITA and localize it and do all the things that you would otherwise need to do. So it’s really that phrase, at scale, that I think is the key. It’s when you hit that tipping point where the desktop tools that you’re using today, and we can say this with technical communications as well, when I was using Word and Excel and copying and pasting and keeping things in sync, it works until you get to a tipping point where the scale is no longer sustainable. That same exact problem exists in training. So when you’re looking at things like, I have training content that, when I deliver it in California, I have to put my Prop 65 note in everything, because Lord forbid, as soon as I step across the state line into California, everything that’s around me is going to give me cancer. Prop 65 is the default thing that you see plastered everywhere. So do I need to customize my content for delivery in California? Perhaps. Maybe different states have different regional laws or policies that apply to only that audience. That’s where mass customization and mass personalization are really hard to scale, because now you don’t have just one course; you have potentially 50 courses, if I’m just talking about the US, 50 states, 50 courses, and I have to have 50 different variations. Which means that, not if something changes, but when something changes, now I have to open up and change 50 different courses, and it’s not “Did I miss anything?” It’s “What did I miss?” That’s the thing that makes you wake up in the morning in a cold sweat of, “Oh my God, what did I miss?” So why structured for learning? Largely when you get to that tipping point where you’re copy/pasting, and I call it the copy/paste/publish treadmill, when you are on that hamster wheel of copy/paste/publish, copy/paste/publish, and that is the majority of what you’re doing, and you’re looking at a pie chart of how much time is spent maintaining your courses or taking a base course and creating all the variations, that precious PowerPoint that is the handcrafted bespoke one-off, you can’t do that anymore. That’s the equivalent of, you look at a Lamborghini, how many do they make a year? They can afford to make a very small number per year because they’re really expensive to make. When you look at a Ford Mustang, which probably gives you 80% of the performance at a fraction of the cost and scales exponentially beyond that, it’s because they’ve taken that structured approach of, every frame’s the same, every hood’s the same, very few handcrafted things, and the things that are going to be handcrafted, that’s when I go order the special edition Shelby Cobra that has some handcrafted components put onto the basic structure. That’s the same metaphor applied. So why structured content? Because I want to have modular content that can be reassembled really quickly, and I may have chunks that are reused, so that when I need to slip in my Prop 65 disclaimers, I can do that at scale and have 50 variations of a course, but when it comes time to update it, I’m literally updating one or two things and it’s automatically updating all 50 courses, plus of course all the efficiencies of publishing things out in a structured format. 
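As a sketch of how those 50 variations can come from one source, here is the conditional-processing pattern in DITA; the attribute values and file names are hypothetical, not from the episode. The state-specific notice is profiled with an audience attribute, and a small per-state filter file decides what each published build includes.

    <!-- In the shared course source: the notice is profiled by audience -->
    <note audience="california" type="warning">This product can expose you to
      chemicals known to the State of California to cause cancer.</note>

    <!-- california.ditaval: the filter applied to the California build -->
    <val>
      <prop att="audience" val="california" action="include"/>
    </val>

    <!-- georgia.ditaval: the same source, filtered for Georgia -->
    <val>
      <prop att="audience" val="california" action="exclude"/>
    </val>

One edit to the notice in the shared source flows into every state build; the per-state DITAVAL file, not a copied course, controls what each audience sees.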
So that pixel-perfect placement, I’m going to give that up to stay sane, so I can get home and have dinner with my family, because the amount of time that I’ve spent in my life doing pixel-perfect placement and updating things, God, I wish I could hit the wayback machine and reclaim all that time in my life. Guilty as charged. Show of hands, anybody who’s listening: how many times have you sat there and fiddled with a slide or a text box in InDesign to get it just right, and then two days later something changes and you’re back there spending 10, 15 minutes fiddling it in just right again? So, as I affectionately like to say, I’m a recovered FrameMaker, InDesign, PowerPoint, and Word user, because I want to author in a structured format so that I am giving up the responsibility of layout and look and feel. SO: I like to tell people, “I’m not lazy, I’m efficient.” The fact that I don’t want to do it is just a bonus; I can get out of doing all this work. MB: That’s right, that’s right. SO: Because we are not allowed to leave any podcast without covering this topic, what does it look like to have AI in this context? MB: There are two sides of the AI coin from a content perspective, I think. One is the “How can AI help me do my job better to create content?” side. When we’re looking at duplication of content, there are things that AI can do really well, working smart, not hard: help me find things that already exist in my repository of structured content that look like this, that are really close. With the human in the loop, that’s helping me deduplicate, or helping me not create unnecessary new variations of content. I think that’s one area of AI-based assistance for content creation that people may not necessarily be thinking about. Because right now, the easy one is like, “Hey, ChatGPT, help me write an introduction or an overview for the following,” and it spits that out. That’s great, but that overview and that content may have already been written by somebody else, and so what ends up happening is you start generating content drift, where it’s almost exactly the same but just slightly different. And in reality, yes, I could have used the one that was already there. So I think that’s one of the areas of AI from a content authoring perspective that I’m really excited about. Because at the end of the day, and this leads us into the second part of AI, AI is only as good as what you feed it, and if you feed it junk food, you’re going to get junk results. So it’s that whole thing of, do you eat healthy food or are you going to eat Cheetos? If you’re pointing your AI at a SharePoint repository and saying, “Hey, read all of this,” with all the content shifts and variations and content drift and out-of-date and perhaps out-of-context content that exists inside of that repository, your results are not going to be as accurate as they need to be. So, how do you ensure that AI is providing good results? Well, you feed it good content. And within an organization, I think the two silos that we started our conversation with, technical communications and L&D, tend to have some of the most highly vetted, highly accurate, up-to-date content in an organization. And so this is my encouragement to everybody who’s in this space: you are the owners of the good, highly nutritious food that you can feed your AI. 
So taking it back to the structured content perspective: if I’m authoring in structured content and publishing it out in a format that is AI-ready, all of your tags, all of your enrichments, all of your “here’s the California version of the content versus the Georgia or Florida version,” all of that context and enrichment and tagging that’s gone on, you’re now feeding AI all of that context so that AI can provide the proper answer. So that’s my short and sweet for the AI side. We could talk for probably days on all sorts of other variations, but right now, that’s where I’m seeing the biggest impact it’s going to have on techcomm and L&D. SO: I think that’s a great place to wrap it up. And I want to say thank you for being here and for a great conversation around all of these issues, and we will reconvene at a future conference somewhere to cause some more trouble and talk some more about all of these things. So Mike, thank you. MB: You are welcome. And yeah, I think the next conference where we’re going to see each other is going to be LavaCon, so I’ll be talking in and around the convergence of L&D and techcomm and what life can look like with that. So certainly a deeper dive and continuation of what we started here, and I’m super excited to sit in on your session as well. SO: Yep, super. I will see you there. I’m pretty sure I’m doing one on the same topic, but it will be more complaining and less positive, so that seems to be my role. Okay, with that, thank you everybody, and we’ll see you on the next one.  Conclusion with ambient background music CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links. Questions about this episode? Ask Sarah! 
The post From PowerPoint to possibilities: Scaling with structured learning content appeared first on Scriptorium.
    --------  
    32:17


About Content Operations

Scriptorium delivers industry-leading insights for global content operations.
