
Tech Law Talks

Reed Smith

Available Episodes (5 of 94)
  • Tariff-related considerations when planning a data center project
High tariffs would significantly impact data center projects through increased costs, supply chain disruptions and other problems. Reed Smith’s Matthew Houghton, John Simonis and James Doerfler explain how owners and developers can attenuate tariff risks throughout the planning, contract drafting, negotiation, procurement and construction phases. In this podcast, learn about risk allocation and other proactive measures to manage cost and schedule challenges in today’s uncertain regulatory environment. Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Matt: Hey, everyone. Welcome to this episode of our Data Center series. My name is Matt Houghton, joined today by Jim Doerfler and John Simonis. And in today's episode, we will discuss our perspectives regarding key tariff-related issues impacting data center projects that owners and developers should consider during initial planning, contract drafting and negotiation, and procurement and construction. So a bit about who we have here today. I'm Matt Houghton, counsel at Reed Smith based out of our San Francisco office. I focus on projects and construction-related matters on both the litigation and transaction sides. I was very excited to receive the invitation to moderate this podcast from two of my colleagues and thought leaders at Reed Smith in the area of capital projects, Mr. John Simonis and Mr. Jim Doerfler. And with that, I'm pleased to introduce them. John, why don't you go ahead and give the audience a brief rundown of your background?  John: Hi, I'm John Simonis. I'm a partner in the Real Estate Group, practicing out of the Orange County, California office. I've been very active in the data center space for many years, going back to the early years of Digital Realty. Over the years, I've handled a variety of transactions in the data center space, including acquisitions and dispositions, joint ventures and private equity transactions, leasing, and of course, construction and development. While our podcast today is primarily focused on the impacts of tariffs and trade restrictions on data center construction projects, I should note that we are seeing a great deal of focus on tariffs and trade restrictions by private equity and M&A investors. Given the potential impacts on ROIs, it should not be surprising that investors, like owners and developers, are laser-focused on tariffs and tariff uncertainty, both through diligence and risk allocation provisions. This means that sponsors can expect sophisticated investors to carefully diligence and review data center construction contracts and often require changes if they believe the tariff-related provisions are suboptimal. Jim?  Jim: Yes, my name is Jim Doerfler. I'm a partner in our Pittsburgh office. I've been with Reed Smith now for over 25 years and have been focused on the projects and construction space. I would refer to myself as a bricks and sticks construction lawyer in that I focus on how projects are actually planned and built. 
I come to that by way of background in the sense that I grew up in a contractor family and I worked for a period of time as a project manager and a corporate officer for a commercial electrical contractor. And data center projects are the types of projects that we would have loved. They are projects that are complex. They have high energy demands. They have expensive equipment and lots of copper and fiber optics. In my practice at Reed Smith, I advise clients on commercial and industrial projects and do both claims and transactional work. And data center projects are sort of the biggest thing that we've seen come down the pipeline in some time. And so we're excited to talk to you about them here today.  Matt: Excellent. Thank you both. Really glad to be here with both of you. I always enjoy our conversations. I'm pretty sure this is the kind of thing we would be talking about, even if a mic wasn't on. So happy to be here. I want to start briefly with the choice of topic for today's podcast. Obviously, tariffs are at the forefront of construction-based considerations currently here in the U.S., but why are tariffs so important to data center project considerations?  Jim: So, this is Jim, and what I would say is that Reed Smith is a global law firm, and one of the things that we do in our projects and construction group is we try and survey the marketplace. And data center projects are such a significant part of the growth in the construction industry. In the U.S., for example, when we surveyed the available construction data from the available sources and subject matter experts, what we found is that at least for the past year or two, construction industry growth has been relatively flat aside from data center growth. And when you look at the growth of data centers and the drive for their being built by the growth in AI and other areas, it's really a growth industry for the construction and project space. And so something like tariffs that has the potential to impact those projects is particularly of concern to us. And so we want to make sure for our owner and developer clients and industry friends that we provided our perspectives on how to do these projects right.  Matt: That makes a lot of sense. So we've sort of set the stage for the discussion today. I think we could go on for hours if we didn't give ourselves some guidelines, but there are really three critical phases of a project where an owner or developer should be thinking about how they're going to address tariffs. And those are the initial planning, the contract drafting and negotiation, and then the procurement and construction phase. Since planning comes first, and of course, the title of this podcast is tariff-related considerations when planning a data center project, let's start with the planning phase and some of the considerations an owner or developer may have at that time. John, what do you see as some of the key portions of the planning process where an owner or developer needs to start addressing tariff-related issues?  John: Tariffs and trade restrictions are getting a great deal of focus in all construction contracts. Tariffs impact steel and aluminum, rare earth materials. Data centers are big, expensive projects and can be impacted greatly. We're obviously in a period of great uncertainty as it relates to these types of restrictions. So I think in the planning stage, it may be somewhat obvious to say that that may be the most important time to mitigate to the extent possible some of the impacts. 
I think it starts in the RFP process, with the requirements you're going to put on your design team and on your contractor to cooperate, to collaborate, and to mitigate to the extent possible the impacts of tariffs, and particularly increased tariffs. You identify the materials and equipment subject to material tariffs and tariff risk increases, particularly those that might increase in the future, and address those as best as possible. You expect your team to be proposing potential mitigation measures, such as early release, substitutes, and other value engineering exercises. So that should be a very proactive dialogue. And you should be getting the commitment from the parties early in the RFP process and throughout the planning and pricing stage to cooperate with the owner to mitigate negative impacts, both in terms of cost, timing, and other supply chain issues. Jim, there's also some things we're seeing in the procurement space, and maybe you can address that.  Jim: Sure. So, you know, as you're going through the RFP phase and sort of anticipating what you would ultimately want to build into your contract and how you're going to procure it, you want to be thinking ahead about procurement-related items. As John indicated, these projects are big and complicated and involve significant and expensive equipment. So you want to be thinking about essentially your pre-construction phase and your early release packages for your equipment or your major material items. And you want to be talking with your trade partners in terms of allowing that equipment to get there in a timely fashion and also trying to lock down pricing to mitigate against the risk of tariff-related or generally trade-related disruptions that could affect either price or delivery issues. So you want to be thinking about facilitating deposits for long lead or big ticket material or equipment items. And you want to identify what are those big equipment or material items that could make or break your project and identify the risk associated with those items early on and build that into your planning process.  John: And there's some difference between different contracting models. If you were looking at a fixed price contract versus a cost plus with a GMP or a cost plus contract, obviously the risk allocation as it relates to tariff and trade restrictions might be handled differently. But generally speaking, we're seeing tariff and trade restriction risk being addressed very specifically in contracts now. So sophisticated owners and contractors are very specifically focusing on provisions that specifically address these risks and how they might be mitigated and allocated.  Jim: Just to follow up on John's point, in theory you could have a fixed price contract versus, at least in the U.S., what we would describe as cost plus or cost reimbursable projects using a guaranteed maximum price or a not-to-exceed cap style agreement. In our experience, at least in the U.S., they tend to be more of the latter type of project delivery system. And even if you had a fixed price contract, one of the issues in the marketplace that we've been seeing really since COVID has been these force majeure or supply chain impacts. And the tariffs are really sort of almost an outgrowth of that or a different manifestation of the same types of effects that we would see. 
And so even if you had what was in theory a fixed price contract, one of the issues that you're going to quickly run into is that the contractors are going to want some carve-outs from that because of what we're seeing in the marketplace.  Matt: I think you guys are basically making the natural transition into our next topic, sort of talking about the contract drafting and negotiation phase. When you're putting those thoughts into practice, what sort of provisions do you guys see as the key contract provisions that owners and developers should either be including and/or carefully negotiating or crafting into their contracts that involve these tariff-related issues?  John: I think traditionally, with the issue of tariff increases, tariff uncertainty, and supply chain uncertainty, given the current political climate, you'd typically be looking to the force majeure provisions and the change in law provisions that have traditionally been in construction contracts. As Jim mentioned, we are seeing a more direct focus on tariff-related issues in current contracts, where the parties very specifically discuss it, allocate risk, and allocate commitments to collaborate. So we're seeing that as its own provision usually now, setting a baseline that any kind of a change order would be based on and giving commitments to work with each other. And if we're going to put those into separate provisions, then we need to make sure, as you look at your change in law and force majeure provisions, that you don't also cover them in those provisions. You'd want to refer to the specific provision as opposed to picking them up in the more generic provisions.  Jim: And just to follow up on what John was saying, I think one of the things you need to do is you need to look at your existing contracts. If you're in the U.S., for example, and you're using some of the standard commercial templates like the American Institute of Architects or the ConsensusDocs type documents, as opposed to a bespoke agreement, some of those actually don't contain a force majeure or a change in law provision. So in all likelihood, the contractor is going to suggest that one needs to be included, but I think you need to proactively plan for that. And as it relates to the specific issue of tariffs, one of the things that you want to focus on is your definition of change in law and whether or not, for example, an executive order issued by the president qualifies as a change in law. The bottom line is you want to be specific and you want to allocate your risk specifically to eliminate uncertainty and the risk of claims that follows from uncertainty. You know, I guess the other thing along with sort of the big issues of force majeure and change in law is, if you're doing something like a cost plus type arrangement, one of the additional provisions that is going to be heavily negotiated is something called contractual or construction contingency. And contingency is typically a term that is used for essentially an allocation of money that has been set aside to cover the risk of unforeseen contingencies or unforeseen events that would cause the cost of the project to increase. 
Now, tariffs would often be viewed as a contingency, but one of the things that we're seeing in the marketplace is that contractors have a certain amount of money that they're setting aside for contingency related to other risks associated with, for example, unforeseen price increases because of supply chain issues and that sort of thing. And so they're reluctant to give up what has been set aside for general construction contingency to cover tariff-related risks. So that's one of the areas that is going to be heavily negotiated in that type of project, including to what extent the owner has approval over the use of that contingency. So that's something for that particular type of contract. John, what other types of contractual provisions in your experience are going to be critical?  John: Following on your last point, Jim, first, I think contractors, at least in my experience, have been viewing the tariff risk as a very pure force majeure type risk; they can collaborate on mitigation, but they cannot generally control it. So that if, you know, if they've done their pricing and bought out stuff and that changes, both they and their subcontractors are going to be looking for change orders if an additional cost is imposed on them. Now, we're seeing that contractors and some of the subcontractors sometimes absorb some of that risk, but it's almost always capped at, you know, some relatively low percentage of the cost increase. Jim, have you been seeing the same thing?  Jim: I have been seeing the same thing, that the contractors are pushing back, that basically they view tariffs and tariff-related risks as being a different animal, and that you should have a different mechanism for dealing with that risk. And one of the ways, as you pointed out, they view that as being essentially sort of like a force majeure or a change in law type issue where they're entitled to a change order. And then the issue becomes, from a negotiating standpoint, what sort of provisions or planning do you anticipate in your change order provisions? And we've seen some development in that area as well. John, do you want to talk about what you've seen in terms of how these change order provisions are being drafted to account for tariff-related risks?  John: Sure. One of the first items that's always addressed is a duty to mitigate. And that duty to mitigate, I think, is properly drafted as both a covenant and a condition to recovery. It has more importance than a simple covenant, I believe, in that it precludes claims from a contractor where they just push forward and absorb what could be an exorbitant tariff. The owner can say to them, we should be considering substitutes and value engineering and alternatives, and I want to pause to do so. So I think, without crafting every possibility, the best way to do that is to have a collaboration clause. And we are not seeing much resistance from contractors.  Jim: And I'll just follow up on that. To the extent that there's agreement that the tariffs would represent sort of a change and that they're entitled to a change order, one of the things that we're seeing is essentially a bit of horse trading between owners and contractors, in the sense that owners may be willing to grant a change order for the costs associated with tariffs that were not included in their original pricing, but they're asking essentially for the contractors not to seek fee or markup on the additional costs that they would be incurring. 
At least that's what I've been seeing. John, has that been your experience?  John: Yes. I think with tariffs, particularly if it's a substantial increase, I think there's more alignment, and I don't think it's necessarily appropriate for a contractor to be profiting from that added cost to the project. And we see parties in general agreement on that.  Jim: I would say the other contract drafting and negotiation issue is that what I've seen is, when they're trying to develop it, essentially you draw sort of a bright line at the time of contracting or the time that the pricing is agreed upon. And then going forward, if there is some new tariff that would come in, then it's very important to provide timely notice of claims so that it can be addressed very quickly. So that's one of the provisions that I think is important. And then finally, sort of the flip side of the contingency coin is that oftentimes these cost plus agreements have a shared savings provision. And so I would say that if tariffs are considered to be a contingency item, then the flip side of that is that the contractor would be entitled to some level of shared savings if they are able to bring it in under the guaranteed maximum price or the not-to-exceed cap. By contrast, if tariffs are treated as being something set apart from contingency, then oftentimes what I'm seeing is that it gets treated as an owner allowance item: if it's used, it's used, again, without markup. But if it's not used, then it reverts back to the owner in its entirety with no shared savings being paid out. At least that's what I'm seeing in my area.  John: Yeah, I think the one little twist on that, Jim, that I would throw out is, if it's the contractors absorbing some of the risk of the tariff increases, in those instances I have sometimes seen some of the sharing. But if they're not absorbing that risk, I think it's pretty appropriate for the owner to say you shouldn't be benefiting from the fact that the tariffs didn't come up and we had provided contingency for that. One last thing to think about, and I haven't seen a lot of changes in the provisions yet, but I've given a lot of thought to the fact that if you've got some extreme combative retaliatory tariffs that made a project uneconomic, you should be focusing on your right to pause and your right to terminate and making sure that those aren't penal to the owner. So I think that deserves some focus in this instance as well.  Matt: Just my takeaway from all of this is that there's really a lot for owners and developers to think about here. And to give credit where credit's due, I think John mentioned this during our preparation for this podcast. Owners and developers really need to make sure that they're considering these issues during the initial planning and address them carefully and deliberately during the initial drafting and negotiation. Your guys' thoughts and comments remind me of a remark I heard from a pilot in passing in response to passengers' frustration at certain delays when their plane wouldn't take off. He said it was better to address the problems with the plane on the ground than in the air. And I think that aptly applies here to these sorts of projects. Let's make sure we address them before the project has taken off. But to round out our discussion, I want to transition to the third and overall final area, procurement and construction. 
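The contingency and shared-savings mechanics Jim and John describe above can be made concrete with a short numeric sketch. Everything in it is a hypothetical illustration: the dollar figures, the 50/50 savings split, and the function names are assumptions for the example, not terms from the episode or from any actual contract.

```python
# Hypothetical sketch of the two treatments discussed above.
# All figures and the 50/50 split are illustrative assumptions.

def shared_savings_treatment(gmp, actual_cost, contractor_share=0.5):
    """Tariff risk priced inside contingency: any underrun of the
    guaranteed maximum price is split per the savings clause."""
    savings = max(gmp - actual_cost, 0)
    return savings * contractor_share  # contractor's share of the savings

def owner_allowance_treatment(allowance, tariff_costs_incurred):
    """Tariff risk carried as an owner allowance: used dollars are
    reimbursed without markup; unused dollars revert to the owner."""
    reimbursed = min(tariff_costs_incurred, allowance)
    return reimbursed, allowance - reimbursed

# A $100M GMP job that finishes at $95M pays the contractor $2.5M in savings.
print(shared_savings_treatment(100_000_000, 95_000_000))
# A $5M tariff allowance with only $1M of tariffs incurred reimburses $1M
# and returns the remaining $4M to the owner, with no markup and no sharing.
print(owner_allowance_treatment(5_000_000, 1_000_000))
```

Under the allowance treatment, the contractor is made whole for actual tariff costs but earns nothing on the unused balance, which mirrors the no-markup trade-off described in the discussion.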
So once a project is ready to take off and get underway, what sort of issues should owners and developers anticipate and be prepared to address as the project heads into the procurement and construction phase? Jim, if we want to start with you.  Jim: Yeah, so one of the things that I would say is planning is the best preparation for a successful project, but you need to stay on top of the project as it moves forward. Some of our sophisticated clients, as they're building these projects, have developed things like tariff tracker spreadsheets, where they're breaking the project down into its different component parts and identifying specific equipment or material items that might be subject to tariffs, what the amount of those tariffs are, and who's bearing the risk of that. They are assiduously monitoring the windows for buyout packages to be completed so that once they get the go-ahead from the owner to lock in the guaranteed price or the capped price, it's very important that the prime contractor buy out those critical equipment and material and subcontractor packages to lock in that pricing and to lock in the best delivery schedules that they can get. And they monitor that as they're going forward to avoid any sort of slippage, either in terms of the project schedule or in terms of pricing.  John: Jim, I think a lot of it begins with the identification of the high-risk equipment being incorporated into a data center project. One thing I would comment on is that I've had a fair share of sensitivity from my clients to the issue of who's making the decision on early procurement versus waiting to see if the tariff situation changes versus pausing a project. I think the owners should be focused on trying to keep as much control as possible of those decisions, since they will be bearing the brunt of the cost and the brunt of the delay.  Jim: One additional thought, and that is that during the course of the project, you may have situations where, for various tariff or trade disruption type reasons, a particular item of equipment or a particular material may not be available in the way that you want it. And so what we'll see is that there will be requests from the contractor to use alternates or substitutions that are designed either to control costs or to control schedule. And so that's going to be something also where you're going to have to work with your designer to evaluate those, because sometimes the substitutions are not equal. But in other instances, if you can make a substitution and keep the project on track and on budget, it can be very helpful. So that's one of the things where, just from a procurement standpoint, handling of alternates and substitutions can be very important.  John: Following up on Jim's last comment, we've talked mostly about contractor-related issues in this podcast. But I do think the front-end focus from the design team may be even more critical. It's going to happen naturally, I believe. But I think it is incumbent upon owners to get their design team to be focusing heavily on helping them mitigate the risks to the extent possible. If they're speccing equipment or speccing supplies that are, you know, subject to substantial risk, that's going to exacerbate the problem. And you can only solve it to some degree with the contractor. So I think it should be focused on early again in the process, with the design team and the contracting team, from a pricing standpoint and a value engineering standpoint.  
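Jim's mention of tariff tracker spreadsheets suggests a simple tabular structure. Here is a minimal sketch of what such a tracker might look like in code; the field names, sample packages, rates, and dates are hypothetical assumptions for illustration, not details from the episode or from any client tool.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrackedItem:
    # One row per major equipment or material package on the project.
    package: str           # e.g., "Generators", "Switchgear"
    country_of_origin: str
    tariff_rate: float     # currently applicable rate, as a fraction
    base_price: float      # pre-tariff contract price in dollars
    risk_bearer: str       # "owner", "contractor", or "shared"
    buyout_deadline: date  # date the package must be bought out to lock pricing

def owner_tariff_exposure(items: list[TrackedItem]) -> float:
    """Sum the tariff dollars currently allocated to the owner."""
    return sum(i.base_price * i.tariff_rate
               for i in items if i.risk_bearer == "owner")

tracker = [
    TrackedItem("Generators", "DE", 0.10, 12_000_000, "owner", date(2025, 9, 1)),
    TrackedItem("Switchgear", "MX", 0.25, 4_000_000, "contractor", date(2025, 7, 15)),
]
print(f"Owner-borne tariff exposure: ${owner_tariff_exposure(tracker):,.0f}")
```

A real tracker would presumably also log the tariff authority for each line and the status of any related change order, but even this skeleton shows how the risk allocation and buyout windows discussed above can be monitored in one place.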
Matt: Well, gentlemen, it's been a pleasure hosting this discussion with you today. Thank you very much for inviting me to be a part of it. Having worked with you both for some period of time, including, you know, basically living out of a hotel with Jim for a month at a time during arbitration, I know we could go for hours. But before we say goodbye, any final thoughts or key takeaways that you want to share with the audience in light of our conversation today?  Jim: What I would say is that many of the lessons that the industry has learned and the tools that have been developed during the COVID crisis and in the post-COVID supply chain disruptions apply equally, or are applicable at least by analogy, to the current tariff issues that you'd be facing on data center projects. We've learned a lot from those, and they can be applied here. So in some ways, it's a new dispute, but it's sort of a variation on an existing theme. The second is that many of the contractors that we're seeing are facing a somewhat uncertain environment, especially in the United States, in terms of the pipeline of available work that they have coming in compared to years past. And so these contractors tend to be willing to trade off certain things like markup on change orders for security related to not having to assume what they view as being undue risk that would potentially pose, you know, sort of an existential threat to their company. And as long as they can get some cooperation from the owner, they likely are willing to negotiate on things like fee and other areas in order to get that security.  John: I would add to Jim's comment just by saying that data center projects by their nature are fairly complex, fairly expensive projects. The contractors and subcontractors that you're dealing with are very sophisticated and understand the space very well. So I think, assuming you can get alignment in your construction documents, there are opportunities for the parties to work together to mitigate a risk that, I think, frankly, everyone would recognize as a force majeure type risk, in that nobody can anticipate what's going to happen politically in the near future. So the parties working together is important. Making sure the document creates alignment is important.  Matt: Fantastic. Well, thank you both. I look forward to hopefully doing this again with you guys. Thank you again very much for inviting me in to host this with you today. This has been our data center series podcast on tariff-related considerations when planning a data center project. For more information on Jim, John, or myself, please visit our profiles on reedsmith.com. Thank you for listening, and goodbye.  Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcast on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.  Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.  All rights reserved.  
Transcript is auto-generated.
    --------  
    28:12
  • AI explained: Introduction to Reed Smith's AI Glossary
Have you ever found yourself in a perplexing situation because of a lack of common understanding of key AI concepts? You're not alone. In this episode of "AI explained," we delve into Reed Smith's new Glossary of AI Terms with Reed Smith guests Richard Robbins, director of applied artificial intelligence, and Marcin Krieger, records and e-discovery lawyer. This glossary aims to demystify AI jargon, helping professionals build their intuition and ask informed questions. Whether you're a seasoned attorney or new to the field, this episode explains how a well-crafted glossary can serve as a quick reference to understand complex AI terms. The E-Discovery App is a free download available through the Apple App Store and Google Play. Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Marcin: Welcome to Tech Law Talks and our series on AI. Today, we are introducing the Reed Smith AI Glossary. My name is Marcin Krieger, and I'm an attorney in the Reed Smith Pittsburgh office.  Richard: And I am Richard Robbins. I am Reed Smith's Director of Applied AI based in the Chicago office. My role is to help us as a firm make effective and responsible use of AI at scale internally.  Marcin: So what is the AI Glossary? The Glossary is really meant to break down big ideas and terms behind AI into really easy-to-understand definitions so that legal professionals and attorneys can have informed conversations and really conduct their work efficiently without getting buried in tech jargon. Now, Rich, why do you think an AI glossary is important?  Richard: So, I mean, there are lots of glossaries about, you know, sort of AI and things floating around. I think what's important about this one is it's written by and for lawyers. And I think that too many people are afraid to ask questions for fear that they may be exposed as not understanding things they think everyone else in the room understands. Too often, many are just afraid to ask. So we hope that the glossary can provide comfort to the lawyers who use it. And, you know, I think to give them a firm footing. I also think that it's, you know, really important that people do have a fundamental understanding of some key concepts, because if you don't, that will lead to flawed decisions or flawed policy, or you can just miscommunicate with people in connection with your work. So if we can have a firm grounding, establish some intuition, I think that we'll be in a better spot. Marcin, how would you see that?  Marcin: First of all, absolutely, I totally agree with you. I think that it goes even beyond that and really gets to the core of the model rules. When you look at the various ethics opinions that have come out in the last year about the use of AI, and you look at our ethical obligations and basic competence under Rule 1.1, we see that ethics opinions that were published by the ABA and by various state ethics boards say that there's a duty on lawyers to exercise the legal knowledge, skill, thoroughness, and preparation necessary for the representation. And when it comes to AI, you have to achieve that competence through some level of self-study. 
This isn't about becoming experts about AI, but to be able to competently represent a client in the use of generative AI, you have to have an understanding of the capabilities and the limitations, and a reasonable understanding about the tools and how the tech works. To put it another way, you don't have to become an expert, but you have to at least be able to be in the room and have that conversation. So, for example, in my practice, in litigation and specifically in electronic discovery, we've been using artificial intelligence and advanced machine learning and various AI products previous to generative AI for well over a decade. And as we move towards generative AI, this technology works differently and it acts differently. And how the technology works is going to dictate how we do things like negotiate ESI protocols, how we issue protective orders, and also how we might craft protective orders and confidentiality agreements. So being able to identify how these types of orders restrict or permit the use of generative AI technology is really important. And you don't want to get yourself into a situation where you may inadvertently agree to allow the other side, the receiving party of your client's data, to do something that may not comply with the client's own expectations of confidentiality. Similarly, when you are receiving data from a producing party, you want to make sure that the way that you apply technology to that data complies with whatever restrictions may have been put into any kind of protective order or confidentiality agreement.  Richard: Let me jump in and ask you something about that. So you've been down this path before, right? This is not the first time professionally you've seen new technology coming into play that people have to wrestle with. And as you were going through the prior use of machine learning and things that inform your work, how have you landed? You know, how often did you get into a confusing situation because people just didn't have a common understanding of key concepts, where maybe a glossary like this would have helped, or did you use things like that before?  Marcin: Absolutely. And it comes, it's cyclic. It comes in waves. Anytime there's been a major advancement in technology, there is that learning curve where attorneys have to not just learn the terminology, but also trust and understand how the technology works. Even now, technology that was new 10 years ago still continues to need to be described and defined, even outside of the context of AI. Things like just removing email threads: almost every ESI order that we work with requires us to explain and define what that process looks like. And when we talk about traditional technology-assisted review, to this day our agreements have to explain and describe to a certain level how technology-assisted review works. But 10 years ago, it required significant investment of time negotiating, explaining, educating, not just opposing counsel, but our clients.  Richard: I was going to ask about that, right? Because it would seem to me that, you know, especially at the front end, as this technology evolves, it's really easy for us to talk past each other or to use words and not have a common understanding, right?  Marcin: Exactly, exactly. And now with generative AI, we have exponentially more terminology. There's so many layers to the way that this technology works that even a fairly skilled attorney like myself, when I first started learning about generative AI technology, I was completely overwhelmed. 
And most attorneys don't have the time or the technical understanding to go out onto the internet and find that information. A glossary like this is probably one of the best ways that an attorney can introduce themselves to the terminology, or have a reference where, if they see a term that they are unfamiliar with, they can quickly go take a look at what that term means. What's the implication here? Get that two-sentence description so that they can say, okay, I get what's going on here, or put the brakes on and say, hey, I need to bring in one of my tech experts at this point.  Richard: Yeah, I think that's really important. And this kind of goes back to this notion that this glossary was prepared, you know, at least initially, right, from the litigator's lens, the litigator's perspective. But it's really useful well beyond that. And, you know, I mean, I think the biggest need is to take the mystery out of the jargon, to help people, you know, build their intuition, to ask good questions. And you touched on something where you said, well, I don't need to be a technical expert on a given topic, but I need a tight, accessible description that lets me get the essence of it. So, I mean, a couple of my, you know, favorite examples from the glossary are, you know, in the last year or so, we've heard a lot of people talking about RAG systems and they fling that phrase around, you know, retrieval augmented generation. And, you know, you could sit there and say to someone, yeah, use that label, but what is it? Well, we describe that in three tight sentences. Agentic AI, two sentences.  Marcin: And that's a real hot topic for 2025 is agentic AI.  Richard: Yep.  Marcin: And nobody knows what it is. So I focus a lot on litigation and in particular electronic discovery. So I have a very tight lens on how we use technology and where we use it. But in your role, you deal with attorneys in every practice group and also professionally outside of the law firm. You deal with professionals and technologists. In your experience, how do you see something like this AI glossary helping the people that you work with, and what kind of experience levels do you get exposed to?  Richard: Yeah, absolutely. So I keep coming back to this phrase, this notion of saying it's about helping people develop an intuition for when and how to use things appropriately, what to be concerned about. So a glossary can help to demystify these concepts so that you can then carry on whatever it is that you're doing. And so I know that's rather vague and abstract, but I mean, at the end of the day, if you can get something down to a couple of quick sentences and the key essence of it, and that light bulb comes on and people go, ah, now I kind of understand what we're talking about, that will help them guide their conversations about what they should be concerned about or not concerned about. And so, you know, that glossary gives you a starting point. It can help you to ask good questions. It can set alarm bells off when people are saying things that are, you know, perhaps very far off those key notions. And you have, you know, you have the ability to, you know, I think know when you're out of your depth a little bit, but to know enough to at least start to chart that course. Because right now people are just waving their hands. And that, I think, results in a tendency to say, oh, I can't rely on my own intuition, my own thinking. I have to run away and hide. 
And I think the glossary makes all this information more accessible so that you can start to interact with the technology and the issues and things around it.  Marcin: Yeah, I agree. And I also think that having those two to three sentence hits on what these terms are, I think also will help attorneys know how to ask the right questions. Like you said, know when to get that help, but also know how to ask for it. Because I think that most attorneys know when they need to get help, but they struggle with how to articulate that request for it.  Richard: Yeah, I think that's right. And I think that, you know, often we can bring things back to concepts that people are already comfortable with. So I'll spend a lot of time talking to people about sort of generative AI, and their concerns really have nothing to do with the fact that it's generative AI. It just happens to be something that's hosted in the cloud. And we've had conversations about how to deal with information that's hosted in the cloud or not, and we're comfortable having those. But yet, when we get to generative AI, they go, oh, wait, it's a whole new range of issues. I'm like, no, actually, it's not. You've thought about these things before. We can attack these things again. Now, again, the point of the glossary is not to teach all this stuff, but to help you get your bearings straight, to get you oriented. And from there, you can have the journey.  Marcin: So in order to get onto that journey, we have to let everybody know where they can actually get a copy of the glossary. So the Reed Smith AI Glossary can be found at the website e-discoveryapp.com, or any attorney can go to the Play Store or the Apple Store and download the E-Discovery App, which is a free app that contains a variety of resources. Right on the landing page of the app there's a link for glossaries, and within there you'll see a downloadable link that'll give you a PDF version of the AI Glossary, which, again, any attorney can get for free and have available. And of course it is a live document, which means that we will make updates to it and revisions to it as the technology evolves and as how we present information changes in the coming years.  Richard: At that price, I'll take six.  Marcin: Thank you, Rich. Thanks for your time.  Richard: Thank you.  Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.  Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.  All rights reserved. Transcript is auto-generated.
    --------  
    14:56
  • AI explained: Navigating AI in Arbitration - The SVAMC Guideline Effect
Arbitrators and counsel can use artificial intelligence to improve service quality and lessen work burden, but they also must deal with the ethical and professional implications. In this episode, Rebeca Mosquera, a Reed Smith associate and president of ArbitralWomen, interviews Benjamin Malek, a partner at T.H.E. Chambers and former chair of the Silicon Valley Arbitration and Mediation Center AI Task Force. They reveal insights and experiences on the current and future applications of AI in arbitration, the potential risks of bias and transparency, and the best practices and guidelines for the responsible integration of AI into dispute resolution. The duo discusses how AI is reshaping arbitration and what it means for arbitrators, counsel and parties. Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Rebeca: Welcome to Tech Law Talks and our series on AI. My name is Rebeca Mosquera. I am an attorney with Reed Smith in New York focusing on international arbitration. Today we focus on AI in arbitration. How artificial intelligence is reshaping dispute resolution and the legal profession. Joining me is Benjamin Malek, a partner at THE Chambers and chair of the Silicon Valley Arbitration and Mediation Center AI Task Force. Ben has extensive experience in commercial and investor state arbitration and is at the forefront of AI governance in arbitration. He has worked at leading institutions and law firms, advising on the responsible integration of AI into dispute resolution. He's also founder and CEO of LexArb, an AI-driven case management software. Ben, welcome to Tech Law Talks.  Benjamin: Thank you, Rebeca, for having me.  Rebeca: Well, let's dive into our questions today. So artificial intelligence is often misunderstood, or to put it in other words, there are a lot of misconceptions surrounding AI. How would you define AI in arbitration? And why is it important to look beyond just generative AI?  Benjamin: Yes, thank you so much for having me. AI in arbitration has existed for many years now, but it hasn't been until the rise of generative AI that big question marks have started to arise. And that is mainly because generative AI creates or generates AI output, whereas up until now, it was a relatively mild output. I'll give you one example. Looking for an email in your inbox, that requires a certain amount of AI. Your spellcheck in Word has AI, and it has been used for many years without raising any eyebrows. It hasn't been until ChatGPT really gave an AI tool to the masses that questions started arising. What can it do? Will attorneys still be held accountable? Will AI start drafting for them? What will happen? And it's that fear that started generating all this talk about AI. Now, to your question on looking beyond generative AI, I think that is a very important point. In my function as the chair of the SVAMC AI Task Force, while we were drafting the guidelines on the use of AI, one of the proposals was to call it use of generative AI in arbitration. And I'm very happy that we stood firm and said no, because there are many forms of AI that will arise over the years. 
Now we're talking about predictive AI, but there are many AI forms such as predictive AI, NLP, automations, and more. And we use it not only in generating text per se, but we're using it in legal research, in case prediction to a certain extent. Whoever has used LexisNexis, they're using a new tool now where AI is leveraged to predict certain outcomes, document automation, procedural management, and more. So understanding AI as a whole is crucial for responsible adoption.  Rebeca: That's interesting. So you're saying, obviously, that AI in arbitration is more than just ChatGPT, right? I think that the reason why people think that, and rely on ChatGPT, as we'll see in some of the questions I have for you, is that it sounds normal. It sounds like another person texting you, providing you with a lot of information. And sometimes, you know, I can understand or I can see why people might believe that that's the correct outcome. And you've given examples of how AI is already being used and that people might not realize it. So all of that is very interesting. Now, tell me, as chair of the SVAMC AI Task Force, you've led significant initiatives in AI governance, right? What motivated the creation of the SVAMC AI guidelines? And what are their key objectives? And before you dive into that, though, I want to take a moment to congratulate you and the rest of the task force on being nominated once again for the GAR Awards, which will be unveiled during Paris Arbitration Week in April of this year. That's an incredible achievement. And I really hope you'll take pride in the impact of your work and the well-deserved recognition it continues to receive. So good luck to you and the rest of the team.  Benjamin: Thank you, Rebeca. Thank you so much. It really means a lot, and it also reinforces the importance of our work, seeing that we're nominated not only once last year for the GAR Award, but a second year in a row. I will be blunt, I haven't kept track of many nominations, but I think it may be one of the first years where one initiative gets nominated twice, one year after the other. So that in itself, for us, is worth priding ourselves on. And it may potentially even be more than an award itself. It really is a testament to the work we have provided. So what led to the creation of the SVAMC AI guidelines? It's a very straightforward and, to a certain extent, a little boring answer as of now, because we've heard it so many times. But the crux was Mata versus Avianca. I'm not going to dive into the case. I think most of us have heard it. Who hasn't? There are many sources to find out about it. The idea being that in a court case, an attorney used ChatGPT, used the outcome without verifying it, and it caused a lot of backlash, not only from the opposing party, but also being chastised by the judge. Now when I saw that case, and I saw the outcome, and I saw that there were several tangential cases throughout the U.S. and worldwide, I realized that it was only a question of time until something like this could potentially happen in arbitration. So I got on a call with my dear friend Gary Benton at the SVAMC, and I told him that I really think that this is the moment for the Silicon Valley Arbitration and Mediation Center, an institution that is heavily invested in tech, to shine. So I took it upon myself to say, give me 12 months and I'll come up with guidelines. 
So up until now at the SVAMC, there are a lot of think tank-like groups discussing many interesting subjects. But the SVAMC scope, especially AI related, was to have something that produces something tangible. So the guidelines to me were intuitive. I will be honest, I don't think I was the only one. I might have just been the first mover, but there we were. We created the idea. It was vetted by the board. And we came up first with the task force, then with the guidelines. And there's a lot more to come. And I'll leave it there.  Rebeca: Well, that's very interesting. And I just wanted to mention, or just kind of draw from, you mentioned the Mata case. And you explained a bit about what happened in that case. And I think that was, what, 2023? Is that right? 2022, 2023, right? And so, but just recently we had another one, right? In the federal courts of Wyoming. And I think about two days ago, the order came out from the judge, and the attorneys involved were fined about $15,000 because of hallucinations in the case law that they cited to the court. So, you know, I see that happening anyway. And this is a major law firm that we're talking about here in the U.S. So it's interesting how we still don't learn, I guess. That would be my take on that.  Benjamin: I mean, I will say this. Learning is a relative term because learning, you need to also fail. You need to make mistakes to learn. I guess the crux and the difference is that up until now, any law firm or anyone working in law would never entrust a first-year associate, a summer associate, or a paralegal to draft arguments or to draft certain parts of a pleading by themselves without supervision. However, now, given that AI sounds sophisticated, because it has unlimited access to words and dictionaries, people assume that it is right. And that is where the problem starts. So I am obviously, personally, I am no one to judge a case, no one to say what to do. And in my capacity as the chair of the SVAMC AI task force, we also take a backseat, saying these are soft law guidelines. However, submitting documents with information that has not been verified has, in my opinion, very little to do with AI. It has something to do with ethical duty and candor. And that is something that, in my opinion, if a court wants to fine attorneys, they're more than welcome to do so. But that is something that should definitely be referred to the Bar Association to take measures. But again, these are my two cents as a citizen.  Rebeca: No, very good. Very good. So, you know, drawing from that point as well, and because of the cautionary tales we hear about surrounding these cases and many others that we've heard, many see AI as a double-edged sword, right? On the one hand, offering efficiency gains while raising concerns about bias and procedural fairness. What do you see as the biggest risks and benefits of AI in arbitration?  Benjamin: So it's an interesting question. To a certain extent, we tried to address many of the risks in the AI guidelines. Whoever hasn't looked at the guidelines yet, I highly suggest you take a look at them; they're available on svamc.org. I'm sure that they're widely available on other databases; Jus Mundi has it as well. I invite everyone to take a look at it. There are several challenges. We don't believe that those challenges would justify not using it. To name a few, we have bias. We have lack of transparency. 
We also have the issue of over-reliance, which is the one we were talking about just a minute ago, where it seems so sophisticated that we as human beings, having worked in the field, cannot conceive how such an eloquent answer is anything but true. So there's a black box problem and so many others, but quite frankly, there are so many benefits that come with it. AI is an unlimited knowledge tool that we can use. As of now, AI is what we know it is. It has hallucinations. It does have some bias. There is this black box problem. Where does it come from? Why? What's the source? But quite frankly, if we are able to triage the issues and to really look at what are the advantages and what is it we want to get out of it, and I'll give you a brief example. Let's say you're drafting an RFA. If you know the case, you know the parties, and you know every aspect of the case, AI can draft everything head to toe. You will always be able to tell what is from the case and what's not from the case. If we over-rely on AI and we allow it to draft without verifying all the facts, without making sure we know the transcript inside and out, without knowing the facts of the case, then we will always run into certain issues. Another issue we run into a lot with predictive AI is relying on data that exists. So compared to generative AI, predictive AI is taking data that already exists and predicting another outcome. So there's a lesser likelihood of hallucinations. The issue with that is, of course, bias. Just a brief example: you're the president of ArbitralWomen, so you will definitely understand. It has only been in the last 30 years that women had more of a presence in arbitration, specifically sitting as arbitrators. So if we rely on data that goes beyond those 30, 40, 50 years, there's going to be a lot of male decisions having been taken. Potentially even laws that applied back then that were not very gender neutral. So we need, we as people, need to triage and understand where is the good information, where is information that may have bias, and counterbalance it. As of now, we will need to counterbalance it manually. However, as I always say, we've only seen a grain of salt of what AI can do. So as time progresses, the challenges, as you mentioned, will become lesser and lesser and lesser. And the knowledge that AI has will become wider and wider. As of now, especially in arbitration, we are really taking advantage of the fact that there is still scarcity of knowledge. But it is really just a question of time until AI picks up. So we need to get a better understanding of what is it we can do to leverage AI to make ourselves indispensable.  Rebeca: No, that's very interesting, Ben. And as you mentioned, yes, as president of ArbitralWomen, the word bias is something I pay close attention to. You know, we're talking about bias. You mentioned bias. And we all have conscious or unconscious biases, right? And so you mentioned that about laws that were passed in the past where potentially there was not a lot of input from women or other members of our society. Do you think AI can be trained then to be truly neutral, or will bias always be a challenge?  Benjamin: I wish I had the right answer. I actually truly believe that bias is a very relative term. And in certain societies, bias has a very firm and black and white standing, whereas in other societies, it does not. 
Especially in international arbitration, where we not only deal with cross-border disputes, but different cultures, different laws, laws of the seat, laws of the contract, I think it's very hard to point out one set of biases that we will combat or that we will set as a principle for everything. I think ultimately what ensures that there is always human oversight in the use of AI, especially in arbitration, are exactly these types of issues. So we can, of course, try to combat bias and gender bias and others. But I don't think it is as easy as we say, because even nowadays, in normal proceedings, we are still dealing with bias on a human level. So I think we cannot ask machines to be less biased than we as humans are.  Rebeca: Let me pivot here a bit. And, you know, earlier, we mentioned the GAR Awards. And now I'd like to shift our focus to the recent GAR Live on technology that took place here in New York last week on February 20th. And to give our audience, you know, some context, GAR stands for Global Arbitration Review, a widely read journal that not only ranks international arbitration practices at law firms worldwide, but also, among other things, organizes live conferences on cutting-edge topics in arbitration across the globe. So I know you were a speaker at GAR Live, and there was an important discussion about distinguishing generative AI, predictive AI, and other AI applications. How do these different AI technologies impact arbitration, and how do the SVAMC guidelines address them?  Benjamin: I was truly honored to speak at the GAR Live event in New York, and I think the fact that I was invited to speak on AI is a testament to how important AI is and how widely interested the community is in the use of AI, which is very different to 2023, when we were drafting the guidelines on the use of AI. I think it is important to understand that ultimately, everything in arbitration, specifically in arbitration, needs human oversight. But in using AI in arbitration, I think we need to differentiate on how the use of AI is different in arbitration versus other parts of the law, and specifically how it is different in arbitration compared to how we would use it on a day-to-day basis. In arbitration specifically, arbitrators are given a personal mandate that is very different to how law works in general, where you have a lot of judges that let their assistants draft parts of the decision, parts of the order. Arbitration is a little different, and that's for a reason. Specifically in international arbitration, because there are certain sensitivities when it comes to local law, when it comes to an international standard and local standards, arbitrators are held to a higher standard. Using AI as an arbitrator, for example, which could technically be put at the same level as using a tribunal secretary, has its limits. So I think that AI can be used in many aspects, from drafting for attorneys, for counsel, when it comes to helping prepare graphs, when it comes to preparing documents, accumulating documents, etc. But it does have its limits when it comes to arbitrators using it. As we have tried to reiterate in the guidelines, arbitrators need to be very conscious of where their personal mandate starts and ends. In other words, our recommendation, again, we are soft law guidelines, our recommendation to arbitrators is to not use AI when it comes to any decision-making process. What does that mean? We don't know. And neither does the law. 
And every jurisdiction has its own definition of what that means. It is up to the arbitrator to define what a decision-making process is and to decide whether the use of AI in that process is adequate.  Rebeca: Thank you so much, Ben. I want to pivot now. Since we've been talking a little bit more about the guidelines, I want to ask you a few questions about them. So they were created with a global perspective, right? And so what initiatives is the AI task force pursuing to ensure the guidelines remain relevant worldwide? You've been talking about different legal systems and local laws, and how practitioners or certain regulations within certain jurisdictions might treat certain things differently. So what is the AI task force doing to remain relevant, to maybe create some sort of uniformity? What can you tell me about that?  Benjamin: So we at the SVAMC task force continue to gather feedback, of course, and we're looking for global adoption. We will continue to work closely with practitioners, with institutions, with lawmakers, with governments, to ensure that when it comes to arbitration, AI is given a space, it's used adequately, and, if possible and preferably for us, the SVAMC AI guidelines are used. That's why they were drafted: to be used. When we presented the guidelines to different committees and to different law sections and bar associations, it struck us that in jurisdictions such as the U.S., and more specifically in New York, where both you and I are based, the community was not very open to receiving these guidelines as guidelines, and the suggestion was actually made to create a white paper instead. And as much as it seemed to be a shutdown at an early stage, when we were thinking about it, and I was very blessed to have seven additional members in the Guidelines Drafting Committee, seven very bright individuals that I learned a lot from during this process, it was clear to us that jurisdictions such as New York have a very high ethical standard, and that guidelines such as ours would potentially be seen as duplicating ethical rules there. So although we maintain that they are not ethical guidelines whatsoever, because we don't believe they are, we strongly suggest that local and international ethical standards be upheld. With that in mind, we realized that there is a more global aspect that needs to be addressed, beyond the perspective of law associations in the US or in the UK or now in Europe. Up-and-coming jurisdictions that until now did not have a lot of exposure to artificial intelligence, and maybe even to technology as a whole, are rising, and they may need more guidance than jurisdictions where technology may be an instant away. So what the AI task force has created, and is continuing to recruit for, are regional committees for the AI Task Force, tracking AI usage in different legal systems and different jurisdictions. Our goal is to track AI-related legislation and its potential impact on arbitration. These regional committees will also provide jurisdiction-specific insights to refine the guidelines. And hopefully, or this is what we anticipate, these regional committees will help bridge the gap between AI's global development and local legal frameworks. There will be a dialogue. We will continue, obviously, to be present at conferences, to have open dialogue, and to recruit, of course, for these committees. 
But the next step is definitely to focus on these regional committees and to see how we, as the AI task force of the Silicon Valley Arbitration & Mediation Center, can impact the use of AI in arbitration worldwide.  Rebeca: Well, that's very interesting. So you're utilizing committees in different jurisdictions to keep you apprised of what's happening in each jurisdiction. And with that, you continue, you know, somehow evolving the guidelines and gathering information to see how this field is changing, and changing rapidly.  Benjamin: Absolutely. Initially, we were thinking of just having a small local committee to analyze different jurisdictions and what laws, what court cases, etc., exist there. But we soon came to realize that it's much more than tracking judicial decisions. We need people on the ground who are part of a jurisdiction, part of that local law, to tell us how AI impacts their day-to-day, how it may differ from yesterday to tomorrow, and what potential legislation will be enacted to either allow or disallow the use of certain AI.  Rebeca: That's very interesting. I think it's something that will keep the guidelines up to date and relevant for a long time. So kudos to you, the SVAMC, and the task force. Now, I know that the guidelines are a very short paper, you know, with the commentary on them in the back. So I'm not going to dissect all of the guidelines, but I want to talk about one of them in particular that I think created a lot of discussion around the guidelines themselves. For full disclosure, right, I was part of the reviewing committee of the AI guidelines. And I remember that one of the most debated aspects of the SVAMC AI guidelines is guideline three, on disclosure. Should arbitrators and counsel disclose their AI use in proceedings? I think that has generated a lot of debate, and that's the reason why we have the resulting guideline number three drafted the way it is. So can you give us a little more insight into what happened there?  Benjamin: Absolutely. I'd love to. Guideline three was very controversial from the get-go. We initially had two options. We had a two-pronged test that parties would either satisfy or not, and then disclosure was necessary. And then we had another option that the community could vote on, where it was up to the parties to decide whether their AI-aided submission could impact the outcome of the case, and depending on that, they would disclose or not disclose whether AI was used. Quite frankly, that was a debate we had in 2023, and a lot changed from November 2023 until April 2024, when we finally published the first version of the AI guidelines. A lot of courts have implemented an obligatory disclosure. I think people have also gotten more comfortable with using AI day-to-day. And we ultimately came to the conclusion to opt for a flexible disclosure approach, which can now be found in the guidelines. The reason for that was relatively simple, or relatively simple to us who debated it. A disclosure obligation for the use of AI will very easily become inefficient, for two reasons. A blanket disclosure of the use of AI serves nobody. And it really boils down to one question, which is: if the judge, or in our case in arbitration, the arbitrator or tribunal, knows that AI was used for a certain document, now what? How does that knowledge transform into action? And how does that knowledge lead to a different outcome? 
And in our analysis, it turned out that a blanket disclosure of AI usage, or in general an over-disclosure of the use of AI in arbitration, may actually lead to adverse consequences for the parties who make the disclosure. Why? Because not knowing how AI may have impacted a submission leaves arbitrators not knowing what to do with that disclosure. So ultimately, it's really up to the parties to decide: how was AI used? How can it impact the case? What is it I want to disclose? How do I disclose it? It's also important for the arbitrators to understand what to do with a disclosure before saying that everything needs to be disclosed. During the GAR event in New York, the issue was raised whether documents which were prepared with the use of AI should be disclosed, or whether there should be a blanket disclosure. And quite frankly, the debate went back and forth, but ultimately it comes down to cross-examination. It comes down to the expert or the party submitting the document being able to back up where the information comes from, rather than knowing that AI was used. And to put that in perspective, we received a very interesting question about why we should continue using AI, knowing that approximately 30% of its output is hallucination and needs revamping. This was compared to a summer associate or a first-year associate, and the question was very simple: if I have a first-year associate or a summer associate whose output has a 30% error rate, why would I continue using that associate? And quite frankly, there is merit to the question, and it really has a very simple answer. And the answer is time and money. Using AI makes it much faster to receive output than using a first-year associate or summer associate, and it's far cheaper. For that, it's worth having a 30% error margin. I don't know where they got the 30% from, but we just went along with it.  Rebeca: I was about to ask you where they got the 30% from. And well, for first-year associates or summer associates who are listening, I think the main thing will be to become very savvy in the use of AI so they can stay relevant to the practice. There's always that question about whether AI will replace all of us, the entire world, and we'll head into a machine apocalypse. I don't see it that way. In my view, if we train ourselves, if we're not afraid of using the tool, we'll very much be in a position to pivot and understand how to use it. And what is the saying? Garbage in, garbage out. So if you have a bad input, you will have a bad output. You need to know the case. You need to know your documents to understand whether the machine is hallucinating or giving you information that is not real. I like to play around and ask certain questions to ChatGPT, you know, here and there. And sometimes I ask, obviously, things that I know the answer to. And then I'm like, ChatGPT, this is not accurate. Can you check on this? And it's like, oh, thank you for correcting me. I mean, it's just a way of trying to understand it so you know where to make improvements. But that doesn't mean that the tool, because it's a tool, will come and replace, you know, your better judgment as a professional, as an attorney.  Benjamin: Absolutely. One of the things we say is that it is a tool. It does nothing of its own volition. So what you're saying is 100% right. 
This is what the SVAMC AI guidelines stand for. Practitioners need to accustom themselves to the proper use of AI. AI can be used in paid versions and unpaid versions. We just need to understand what an open AI system is and what a closed AI system is. Again, for whoever's listening, feel free to look up the guidelines. There's a lot of information there. There are tons of articles written at this point. And just be very mindful: if it is an open AI system, such as an unpaid ChatGPT version, that does not mean you cannot use it. First, check with your firm to make sure you're allowed to use it. I don't want to get anyone into trouble.  Rebeca: Well, we don't want to put confidential information on an open AI platform.  Benjamin: Exactly. Once the firm or your colleagues allow you to use ChatGPT, even if it's an open version, just be very smart about what it is you're putting in. No confidential information, no potential conflict checks, no potential cases. Just be smart about what you put in. Another aspect we were actually debating is this hallucination. Just an example: let's say it's an ISDS case, so we're talking about something a little more public, and you ask ChatGPT, hey, show me all the cases against Costa Rica. And it hallucinates, too. It might actually be that somebody input information for a potential case against Costa Rica, or a theoretical case against Costa Rica, and ChatGPT, being on the open end, takes that as one potential case. So just be very smart. Be diligent, but also don't be afraid of using it.  Rebeca: That's a great note to end on. AI is here to stay. And as legal professionals, it's up to us to ensure it serves the interests of justice, fairness, and efficiency. For those interested in learning more about the SVAMC AI guidelines, you can find them online at svamc.org and search for guidelines. I tried it myself, and you will go directly to the guidelines. And if you'd like to stay updated on developments in AI and arbitration, be sure to follow Tech Law Talks and join us for future episodes, where we'll continue exploring the intersection of law and technology. Ben, thank you again for joining me today. It's been a great pleasure. And thank you to our listeners for tuning in.  Benjamin: Thank you so much, Rebeca, for having me, and Tech Law Talks for the opportunity to be here.  Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's Emerging Technologies Practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.  Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.  All rights reserved. Transcript is auto-generated.
    --------  
    37:11
  • AI explained: The EU AI Act, the Colorado AI Act and the EDPB
Partners Catherine Castaldo, Andy Splittgerber, Thomas Fischl and Tyler Thompson discuss various recent AI acts around the world, including the EU AI Act and the Colorado AI Act, as well as guidance from the European Data Protection Board (EDPB) on AI models and data protection. The team presents an in-depth explanation of the different acts and points out the similarities and differences between the two. What should we do today, even though the Colorado AI Act is not in effect yet? What do these two acts mean for the future of AI? ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Catherine: Hello, everyone, and thanks again for joining us on Tech Law Talks. We're here with a really good array of colleagues to talk to you about the EU AI Act, the Colorado AI Act, and the EDPB guidance, and we'll unpack what all of those initials mean shortly. But I'm going to let my colleagues introduce themselves. Before I do that, though, I'd like to say that if you like our content, please consider giving us a five-star review wherever you find us. And let's go ahead and first introduce my colleague, Andy.  Andy: Yeah, hello, everyone. My name is Andy Splittgerber. I'm a partner at Reed Smith in the Emerging Technologies Department, based out of Munich in Germany, and looking forward to discussing interesting data protection topics with you.  Thomas: Hello, everyone. This is Thomas, Thomas Fischl in Munich, Germany. I also focus on digital law and privacy, and I'm really excited to be with you today on this podcast.  Tyler: Hey everyone, thanks for joining. My name is Tyler Thompson. I'm a partner in the emerging technologies practice at Reed Smith, based in the Denver, Colorado office.  Catherine: And I'm Catherine Castaldo, a partner in the New York office. So thanks to all my colleagues. Let's get started. Andy, can you give us a very brief overview of the EU AI Act?  Andy: Sure, yeah. It came into force in August 2024, and it is a law mainly about the responsible use of AI. Generally, it is not really focused on data protection matters; rather, it sits next to the world-famous EU General Data Protection Regulation. It has a couple of passages where it refers to the GDPR, and also some where it states that certain data protection impact assessments have to be conducted. Other than that, it has its own concept, dividing up AI systems into different categories: prohibited AI, high-risk AI, and then normal AI systems. And we're just expecting new guidance on how authorities and the Commission interpret what AI systems are, so watch out for that. There are also special rules on generative AI, and then some transparency requirements when organizations use AI toward end customers. And depending on these risk categories, there are certain requirements attaching to each of them: developers, importers, and also users, meaning organizations deploying AI, have to comply with certain obligations around accountability, IT security, documentation, checking, and of course, human intervention and monitoring. 
This is the basic concept, and the rules start to kick in on February 2nd, 2025, when prohibited AI must no longer be used in Europe. And the next bigger wave will be on August 2nd, 2025, when the rules on generative AI kick in. So organizations should start now and be prepared to comply with these rules, and get familiar with this new type of law. It's kind of like a new area of law.  Catherine: Thanks for that, Andy. Tyler, can you give us a very brief overview of the Colorado AI Act?  Tyler: Sure, happy to. So the Colorado AI Act, this is really the first comprehensive AI law in the United States, passed at the end of the 2024 legislative session. It covers developers or deployers that use a high-risk AI system. Now, what is a high-risk AI system? It's just a system that makes a consequential decision. What is a consequential decision? These can include things like education decisions, employment opportunities, employment-related decisions, financial lending service decisions, essential government services, healthcare services, housing, insurance, or legal services. So that consequential decision piece is fairly broad. The effective date is February 1st of 2026, and the Colorado AG is going to be enforcing it. There's no private right of action here, but violating the Colorado AI Act is considered an unfair and deceptive trade practice under Colorado law. So that's where you get the penalties of the Colorado AI Act: it's tied into Colorado deceptive trade practices law.  Catherine: That's an interesting angle. And Tom, let's turn to you for a moment. I understand that the European Data Protection Board, or EDPB, has also recently released some guidance on data protection in connection with artificial intelligence. Can you give us some high-level takeaways from that guidance?  Thomas: Sure, Catherine, and it's very true that the EDPB has just released a statement. It was actually released in December of last year. And yeah, they have released that highly anticipated statement on AI models and data protection. This statement of the EDPB actually follows a much-discussed paper published by the German Hamburg Data Protection Authority in July of last year, and I wanted to briefly touch upon this paper, because the Hamburg Authority argued that AI models, especially large language models, are anonymous when considered separately; they do not involve the processing of personal data. To reach this conclusion, the paper decoupled the model itself from, firstly, the prior training of the model, which may involve the collection and further processing of personal data as part of the training data set, and secondly, the subsequent use of the model, where a prompt may contain personal data and output may be used in a way that means it represents personal data. And interestingly, this paper considered only the AI model itself and concluded that the tokens and values that make up the inner processes of a typical AI model do not meaningfully relate to or correspond with information about identifiable individuals. Consequently, the model itself was classified as anonymous, even if personal data is processed during the development and the use of the model. So the recent EDPB statement does actually not follow this relatively simple and secure framework proposed by the German authority. The EDPB statement responds to a request from the Irish Data Protection Commission and gives kind of a framework, particularly with respect to certain aspects. 
It actually responds to four specific questions. The first question was: under what conditions can AI models be considered anonymous? And the EDPB says, well, yes, they can be considered anonymous, but only in some cases. It must be impossible, with all likely means, to obtain personal data from the model, either through attacks aimed at extracting the original training data or through other interactions with the AI model. The second and third questions relate to the legal basis for the use and the training of AI models, and the EDPB answered those questions in one answer. The statement indicates that the development and use of AI models can generally be based on the legal basis of legitimate interest. The statement then lists a variety of different factors that need to be considered in the assessment scheme according to Article 6 GDPR. So again, it refers to an individual case-by-case analysis that has to be made. And finally, the EDPB addresses the highly practical question of what consequences it has for the use of an AI model if it was developed in violation of data protection regulations. The EDPB says, well, this partly depends on whether the AI model was first anonymized before it was disclosed to the model operator. Otherwise, the model operator may need to assess the legality of the model's development as part of their accountability obligations. So quite an interesting statement.  Catherine: Thanks, Tom. That's super helpful. But when I read some commentary on this paper, there was a lot of criticism that it's not very concrete and doesn't provide actionable guidance to businesses. Can you expand on that a little bit and give us your thoughts?  Thomas: Yeah, well, as is sometimes the case with these EDPB statements, which necessarily reflect the consensus opinion of authorities from 27 different member states, the statement does not provide many clear answers. Instead, the EDPB offers kind of indicative guidelines and criteria and calls for case-by-case assessments of AI models to understand whether and how they are affected by the GDPR. And interestingly, someone actually counted how often the phrase case-by-case appears in the statement: it appears 16 times, and can or could appears 161 times. Obviously, this is likely to lead to different approaches among data protection authorities, but it's maybe also just an intended strategy of the EDPB. Who knows?  Catherine: Well, as an American, I would read that as giving me a lot of flexibility.  Thomas: Yeah, true.  Catherine: All right, let's turn to Andy for a second. Andy, also in view of the AI Act, what do you now recommend organizations do when they want to use generative AI systems?  Andy: That's a difficult question after 161 cans and coulds. We always try to give practical advice. If you now look at the AI Act plus this EDPB paper, or generally the GDPR, there are a couple of items organizations can and need to prepare. First of all, organizations using generative AI must be aware that a lot of the obligations are on the developers. So the developers of generative AI definitely have more obligations, especially under the AI Act. For example, they have to create and maintain the model's technical documentation, including the training and testing processes, and monitor the AI system. They also, and this can be really painful and will be painful, have to make available a detailed summary of the content that was used for training the model. 
And this goes very much into copyright topics as well. So there are a lot of obligations, and none of these are on the using side. So if organizations use generative AI, they don't have to comply with all of this, but they have to, and that's our recommendation, ensure in their agreements when they license the model or the AI system that they get confirmation from the developer that it complies with all of these obligations. That's kind of like supply chain compliance in AI. So that's one of the aspects from the using side: make sure in your agreement that the provider complies with the AI Act. Another item for the agreement when licensing generative AI systems, attaching to what Thomas said, is getting a statement from the developer on whether or not the model itself contains personal data. The ideal answer is no, the model does not contain personal data, because then we don't have the poisonous tree. If the developer was not in compliance with the GDPR or data protection laws when doing the training, there is a cut: if the model does not contain any personal data, then this noncompliance cannot infect the later use by the using organization. So this is a very important statement. We have not seen this in practice very often so far, and it is quite a strong commitment developers are asked to give, but it is something at least to be discussed in the negotiations. So that's the second point. A third point for the agreement with the provider is whether or not the usage data is used for further training, which can create data protection issues and might require using organizations to solicit consent or other justifications from their employees or users. And then, of course, have in place a data processing agreement with the provider or developer of the generative AI system if it runs on someone else's systems. So these are all items for the contracts, and we think this is something that needs to be tackled now, because it always takes a while until the contract is negotiated and in place. On top of this, as I said, the AI Act obligations on the using side are rather limited. There are only some transparency obligations for using organizations: for example, to inform their employees that they're using AI, or to inform end users that a certain text or photo or article was created by AI. So, like a tag, "this was created by AI," being transparent that AI was used to develop something. And then, on top of this, the general GDPR compliance requirements apply, like transparency about what personal data is processed when the AI is used, justification of the processing, adding the AI system to your records of processing activities, and also checking whether a data protection impact assessment is potentially required. This will mainly be the case if the AI has an intensive impact on the personal rights of data subjects. So these are the general requirements. Takeaways: check the contracts, check the limited transparency requirements under the AI Act, and comply with what you know already under the GDPR.  Tyler: It's interesting because there is a lot of overlap between the EU AI Act and the Colorado AI Act. But Colorado does have robust impact assessment requirements. You know, you've got to provide notification. You have to provide opt-out rights and appeal. You do have some of that public-facing notice requirement as well. And so the one thing that I want to highlight that's a little bit different: we have an AG notification requirement. 
So if you discover that your artificial intelligence system has been creating an effect that could be considered algorithmic discrimination, you have an affirmative duty to notify the attorney general. So that's something that's a little bit different. But I think overall, there's a lot of overlap between the Colorado AI Act and the EU AI Act. And I like Andy's analogy of the supply chain, right? Colorado as well. Yes, it applies to the developers, but it also applies to deployers. And on the deployer side, it is kind of that supply chain type of analogy: these are things where you, as a deployer, need to go back to your developer, make sure you have the right documentation, that you've checked the right boxes there and have done the right things.  Catherine: Thanks for that, Tyler. Do you think we're entering into an era where the U.S. states might produce more AI legislation?  Tyler: I think so. Virginia has proposed a version of basically the Colorado AI Act. And I honestly think we could see the same thing with these AI bills that we have seen with privacy on the US side, which is kind of a state-specific approach: some states adopting the same or highly similar versions of the laws of other states, but then maybe a couple of states going off on their own and doing something unique. So it would not be surprising to me at all if, at least in the short to mid term, we have a patchwork of AI laws throughout the United States, just based on individual state law.  Catherine: Thanks for that. And I'm going to ask a question to Tyler, Tom, and Andy; any one of you can answer, whoever thinks of it first. We've been reading a lot lately about DeepSeek and all the cyber insecurities, essentially, with utilizing a system like that, and some failures on the part of the developers there. Is there any security requirement in either the EU or the Colorado AI Act for deploying or developing a new system?  Tyler: Yeah, for sure. So where your security requirements are going to come in, I think, is in the impact assessment piece, right? When you have to look at your risks and how this could affect an individual, whether through a discrimination issue or another type of risk, you're going to have to address that in the impact assessment. So while it's not a specific security provision, there's no way that you're going to get around some of these security requirements, because you have to do that very robust impact assessment, right? Part of that analysis under the impact assessment is known or reasonably foreseeable risks. So things like that, you're going to have to, I would say, address via some of the security requirements facing the AI platform.  Catherine: Great. And what about from the European side?  Andy: Yes, similar from the European side, or perhaps even a bit more. Robustness, cybersecurity, and IT security are definitely a major portion of the AI Act. So that's a very, very important obligation and duty that must be complied with.  Catherine: And I would think, too, under the GDPR, because you have to ensure adequate technical and organizational measures, that if you had personal information going into the AI system, you'd have to comply with that requirement as well, since they stand side by side.  Andy: Exactly, exactly. And then, under both, there are also notification obligations if something goes wrong.  Catherine: Well, good to know. 
All right, well, maybe we'll do a future podcast on the impact of the NIST AI risk management framework and its application to both of these large bodies of law. But I thank all my colleagues for joining us today. We have time for just a quick final thought. Does anyone have one?  Andy: A thought from me: now that the AI Act has come into force, as a practical European I'm worried that we're killing the AI industry and innovation in Europe. It's good to see that at least some states in the U.S. follow a bit of a similar approach, even if it's, you know, different. I haven't given up hope for a more global solution; perhaps the AI Act will also be adjusted a bit to then come closer to a global solution.  Tyler: On the U.S. side, I'd say, look, my takeaway is start now. Start thinking about some of this stuff now. It can be tempting to say it's just Colorado, we have till February of 2026, but I think a lot of the things that the Colorado AI Act and even the EU AI Act are requiring are arguably things that you should be doing anyway. So I would say start now, especially, as Andy said, on the contract side, if nothing else. When we start thinking about doing a deal with a developer or a deployer: what needs to be in that agreement? How do we need to protect ourselves? And how do we need to look at the regulatory space to future-proof this, so that when we come to 2026, we're not amending 30 or 40 contracts?  Thomas: And maybe a final thought from my side. The EDPB statement actually answers only a few questions. It doesn't touch other very important issues like automated decision-making; there is nothing on that in the document. There is not really anything about sensitive data or the use of sensitive data, and data protection impact assessments are not addressed. So a lot of topics remain unclear; at least there is no guidance yet.  Catherine: Those are great views, and I'm sure really helpful to all of our listeners who have to think of these problems from both sides of the pond. And thank you for joining us again on Tech Law Talks. We look forward to speaking with you again soon.  Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.  Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.  All rights reserved.  Transcript is auto-generated.
    --------  
    22:33
  • Navigating NIS2: What businesses need to know
Catherine Castaldo, Christian Leuthner and Asélle Ibraimova dive into the implications of the new Network and Information Security (NIS2) Directive, exploring its impact on cybersecurity compliance across the EU. They break down key changes, including expanded sector coverage, stricter reporting obligations and tougher penalties for noncompliance. Exploring how businesses can prepare for the evolving regulatory landscape, they share insights on risk management, incident response and best practices. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Catherine: Hi, and welcome to Tech Law Talks. My name is Catherine Castaldo, and I am a partner in the New York office in the Emerging Technologies Group, focusing on cybersecurity and privacy. And we have some big news, with directives coming out of the EU for that very thing. So I'll turn it to Christian, who can introduce himself.  Christian: Thanks, Catherine. So my name is Christian Leuthner. I'm a partner at the Reed Smith Frankfurt office, also in the Emerging Technologies Group, focusing on IT and data. And we have a third attorney on this podcast, our colleague, Asélle.  Asélle: Thank you, Christian. Very pleased to join this podcast. I am counsel based in Reed Smith's London office, and I am also part of the emerging technologies group and work on data protection, cybersecurity, and technology issues.  Catherine: Great. As we previewed a moment ago, October 17th, 2024 was the deadline for the transposition of a new directive, commonly referred to as NIS2. And for those of our listeners who might be less familiar, would you tell us what NIS2 stands for and who is subject to it?  Christian: Yeah, sure. So NIS2 stands for the Directive on Security of Network and Information Systems, and it is the second iteration of the EU's legal framework for enhancing the cybersecurity of critical infrastructures and digital services. It replaces the previous directive, which obviously is called NIS1, which was adopted in 2016 but had some limitations and gaps. NIS2 applies to a wider range of entities that provide essential or important services to society and the economy, such as energy, transport, health, banking, digital infrastructure, cloud computing, online marketplaces, and many, many more. It also covers public administrations and operators of electoral systems. Basically, anyone who relies on network and information systems to deliver their services, and whose disruption or compromise could have significant impacts on the public interest, security, or rights of EU citizens and businesses, will be in scope of NIS2. As you already said, Catherine, NIS2 had to be transposed into national member state law; it's a directive, not a regulation, contrary to DORA, which we discussed the last time on our podcast. It had to be implemented into national law by October 17th, 2024. But most of the member states did not do so, and the EU Commission has now started infringement investigations, for violations of the Treaty on the Functioning of the European Union, against, I think, 23 member states, as they have not yet implemented NIS2 into national law.  
Catherine: That's really comprehensive. Do you have any idea what the timeline is for the implementation?  Christian: It depends on the state. There are some states that already have comprehensive drafts, and those just need to go through the legislative process. In Germany, for example, we had a draft, but we have elections in a few weeks, and the current government just stated that they will not implement the law before then. So after the election, the implementation law will probably be discussed again and redrafted, so it'll take some time. It might be in the third quarter of this year.  Catherine: Very interesting. We have a similar process. Sometimes it happens in the States that things get delayed. Well, what are some of the key components?  Asélle: So, NIS2 focuses on cybersecurity measures, and we need to differentiate them from the usual cybersecurity measures that any organization thinks about, where they protect their own data and systems against cyberattacks or incidents. The purpose of this legislation is to make sure there is no disruption to the economy or to others, and in that sense, similar notions apply. Organizations need to focus on ensuring availability, authenticity, integrity, and confidentiality of data, and protect their data and systems against all hazards. These notions are familiar to us also from the GDPR framework. There are 10 cybersecurity risk management measures that NIS2 talks about: policies on risk analysis and information system security; incident handling; business continuity and crisis management; supply chain security; security in systems acquisition, development, and maintenance; policies to assess the effectiveness of measures; basic cyber hygiene practices and training; cryptography and encryption; human resources security; and use of multi-factor authentication. These are familiar notions also, and it seems the general requirements are something that organizations will be familiar with. However, the European Commission, in its NIS Investments Report of November 2023, did research, a survey, and actually found that organizations that are subject to NIS2 didn't really even take these basic measures. Only 22% of those surveyed had third-party risk management in place, and only 48% of organizations had top management involved in approving cybersecurity risk policies and in any type of training, which reduces the general commitment of organizations to cybersecurity. So there are clearly gaps, and NIS2 is trying to focus on improving that. There are a couple of other things I wanted to mention that are different from NIS1 and are important. As Christian said, essential entities have a different compliance regime applied to them compared with important entities. Essential entities need to systematically document their compliance and be prepared for regular monitoring by regulators, including regular inspections by competent authorities, whereas important entities are only obliged to be in touch and communicate with competent authorities in case of security incidents. And there is an important clarification in terms of the supply chain; this is a question we receive from our clients. The question is: does the supply chain mean anyone that provides services or products? From our reading of the legislation, the supply chain only relates to ICT products and ICT services. 
Of course, there is a proportionality principle employed in this legislation, as with most European legislation, and there is a size threshold: the legislation only applies to organizations that meet or exceed the medium-size threshold. And two more topics, and I'm sorry that I'm kind of taking over the conversation here, but I thought the self-identification point was important. In the view of the European Commission, the original NIS1 didn't cover the organizations it intended to cover, and so, in the European Commission's view, the requirements are now so clear in terms of which entities NIS2 applies to that organizations should be able to assess this themselves and register, that is, identify themselves, with the relevant authorities by April this year. And the last point: the nature of digital infrastructure organizations, specifically their cross-border nature, is taken into consideration. If they provide services in several member states, there is a mechanism for them to register with the competent authority where their main establishment is based, similar to the notion under the GDPR.  Catherine: It sounds like, though, there's enough information in the directive itself, without waiting for the member state implementation, that companies who are subject to this rule could be well on their way to being compliant by just following those principles.  Christian: That's correct. So even though the implementation into national law has not yet happened in all of the member states, companies can already work to comply with NIS2, so that once the law is implemented, they don't have to start from zero. NIS2 sets out the requirements that important and essential entities have to comply with. For example, they can have a proper information security management system, have supply chain management, and train their employees, so they can already work to implement NIS2. The directive itself also has annexes that set out the sectors and potential entities that might be in scope of NIS2, and the member states cannot really vary from those annexes. So if you are already in scope of NIS2 under the information that is in the directive itself, you can be sure that you will probably also have to comply with your national rules. There might be some gray areas where it's not fully clear if someone is in scope of NIS2, and those entities might want to wait for the national implementation. It can also happen that the national implementation goes beyond the directive and covers sectors or entities that might not be in scope under the directive itself, and then, of course, they will have to work to implement the requirements at that point. I think a good starting point anyway is the existing security program that companies hopefully already have in place. If they, for example, have an ISO 27001 framework implemented, it might be good to start with a mapping exercise of what NIS2 might require in addition to ISO 27001, and then look at whether this should be implemented now or whether the company can wait for the national implementation. But it's recommended not to wait for the national implementation and do nothing until then.  Asélle: I agree with that, Christian. And I would like to point out that, in fact, digital infrastructure entities have very detailed compliance requirements, because there was an implementing regulation that specifies the cybersecurity requirements under NIS2. 
And just to clarify, the digital infrastructure entities that I'm referring to are DNS service providers, TLD name registries, cloud service providers, data centers, content delivery network providers, managed service providers, managed security service providers, online marketplaces, online search engines, social networking services, and trust service providers. The implementing regulation is in fact binding and directly applicable in all member states. And the regulation is quite detailed and has specific requirements in relation to each cybersecurity measure. Importantly, it has detailed thresholds on when incidents should be reported, and we need to take into consideration that not every incident is reportable, only those incidents that are capable of causing significant disruption to the service or significant impact on the provision of the services. So please take that into consideration. And ENISA also published implementing guidance, and it's 150 pages, just explaining what the implementing regulation means. It's still a draft; the consultation ended on the 9th of January 2025, so there'll be further guidance on that.  Catherine: Well, we can look forward to that. But I guess the next question would be, what are some of the risks for noncompliance?  Christian: Noncompliance with NIS2 can have serious consequences for the entities concerned, both legal and non-legal. On the legal side, NIS2 empowers the national authorities to impose sanctions and penalties for breaches. They can range from warnings and orders to fines and injunctions, depending on the severity and duration of the infringement. The sanctions can be up to 2% of the annual turnover or 10 million euros, whichever is higher, for essential entities, and up to 1.4% of the annual turnover or 7 million euros, whichever is higher, for important entities. NIS2 also allows the national authorities to take corrective or preventive measures: they can suspend or restrict the provision of the services, or order the entities to take remedial actions or improve their security posture. So even if entities have implemented security measures, if the authorities determine that those are not sufficient in light of the risks applicable to the entity, they can require them to implement other measures to increase security. On the non-legal side, it's very similar to what we discussed in our DORA podcast. There can be civil liability if there is an incident and damage occurs. And of course, the reputational damage and loss of trust and confidence can be really, really severe for the entities if they have an incident, all the more so if they did not comply with the NIS2 requirements.  Asélle: I wanted to add that, unfortunately, with this piece of legislation, member states can add to the list of entities to which it will apply, and they can apply higher cybersecurity requirements. Because of the new criteria and new entities being added, it now applies to twice as many sectors as before. So quite a few organizations will need to review their policies and take cybersecurity measures. And it's helpful, as Christian mentioned, that ENISA has already mapped the cybersecurity measures against existing standards; it's on its website. I think it's super helpful. And it's likely that the cybersecurity measures and the general risk assessment will be done by cybersecurity teams and risk and compliance teams within organizations. However, legal will also need to be involved. 
And often, policies, once drafted, are reviewed by in-house legal teams. So it's essential that they all work together. It's also important to mention that there will be an impact on due diligence and contracts with ICT product and ICT service providers. The due diligence processes will need to be reviewed and enhanced, and contracts drafted to ensure they allow the organization, as the recipient of the services, to be compliant with NIS2. And maybe a last point, just to cover off the UK and what's happening there, for those who also have operations there. It is clear now that the government will implement a version of NIS2; it's going to follow in the European Union's footsteps. And we recently were informed of a government page on the new Cyber Security and Resilience Bill. It's clear that it's going to cover five sectors: transport, energy, drinking water, health, and digital infrastructure, as well as digital services very similar to those under NIS2, such as online marketplaces, online search engines, and cloud computing services. We are expecting the bill to be introduced to Parliament this year.  Catherine: Wow, fantastic news. So it should be a busy cybersecurity season. If any of our listeners think that they need help and that they may be subject to these rules, I'm sure my colleagues, Asélle and Christian, would be happy to help with the legal governance side of this cybersecurity compliance effort. So thank you very much for sharing all this information, and we'll talk soon.  Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com and our social media accounts.  Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.  All rights reserved. Transcript is auto-generated.
    --------  
    21:17


About Tech Law Talks

Listen to Tech Law Talks for practical observations on technology and data legal trends, from product and technology development to operational and compliance issues that practitioners encounter every day. On this channel, we host regular discussions about the legal and business issues around data protection, privacy and security; data risk management; intellectual property; social media; and other types of information technology.