
Tech Law Talks

Podcast Tech Law Talks
Reed Smith
Listen to Tech Law Talks for practical observations on technology and data legal trends, from product and technology development to operational and compliance issues.

Available Episodes

5 of 87
  • EU/Germany: Damages after data breach/scraping – Groundbreaking case law
    In its first leading judgment (decision of November 18, 2024, docket no.: VI ZR 10/24), the German Federal Court of Justice (BGH) dealt with claims for non-material damages pursuant to Art. 82 GDPR following a scraping incident. According to the BGH, a proven loss of control or well-founded fear of misuse of the scraped data by third parties is sufficient to establish non-material damage. The BGH therefore bases its interpretation of the concept of damages on the case law of the CJEU, but does not provide a clear definition and leaves many questions unanswered. Our German data litigation lawyers, Andy Splittgerber, Hannah von Wickede and Johannes Berchtold, discuss this judgment and offer insights for organizations and platforms on what to expect in the future. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Andy: Hello, everyone, and welcome to today's episode of our Reed Smith Tech Law Talks podcast. In today's episode, we'll discuss the recent decision of the German Federal Court of Justice, the FCJ, of November 18, 2024, on compensation payments following a data breach or data scraping. My name is Andy Splittgerber. I'm a partner at Reed Smith's Munich office in the Emerging Technologies Department. And I'm here today with Hannah von Wickede from our Frankfurt office. Hannah is also a specialist in data protection and data litigation. And Johannes Berchtold, also from Reed Smith in the Munich office, also from the emerging technologies team and a tech litigator. Thanks for taking the time and diving a bit into this breathtaking case law. Just to catch everyone up and bring everyone up to speed, it was a case decided by Germany's highest civil court, in an action brought by a user of a social platform who wanted damages after his personal data was scraped by a hacker from that social media network. The scraping was done by trying out telephone numbers, probably exploiting a technical fault in the platform's find-a-friend function. In this way, the hackers could download a couple of million data sets from users of that platform, which then could be found on the dark web. The user then started an action before the civil court claiming damages. And this case was then referred to the highest court in Germany because of the legal difficulties. Hannah, do you want to briefly summarize the main legal findings and outcomes of this decision?  Hannah: Yes, Andy. So, the FCJ made three important statements, basically. First of all, the FCJ provided its own definition of what a non-material damage under Article 82 GDPR is. They are saying that mere loss of control can constitute a non-material damage under Article 82 GDPR. And if such a loss of control is not verifiable, a justified fear of personal data being misused can also constitute a non-material damage under the GDPR. Both are pretty much in line with what the ECJ has already said about non-material damages in the past. And besides that, the FCJ also makes a statement regarding the amount of compensation for non-material damages following from a scraping incident. 
And this is quite interesting because according to the FCJ, the amount of the claim for damages in such cases is around 100 euros. That is not much money. However, the FCJ also says both loss of control and reasonable apprehension, including the negative consequences, must first be proven by the plaintiff.  Andy: So we have an immaterial damage; that's important for everyone to know. And the legal basis for the damage claim is Article 82 of the General Data Protection Regulation. So it's not German law, it's European law. And as you mentioned, Hannah, there was some ECJ case law in the past on similar cases. Johannes, can you give us a brief summary of what these rulings were about? And in your view, does the FCJ bring new aspects to these cases? Or is it very much in line with what the European Court of Justice has already said?  Johannes: Yes, the FCJ has quoted the ECJ quite broadly here, so there was a little clarification in this regard. So far, it's been unclear whether the loss of control itself constitutes the damage or whether the loss of control is a mere negative consequence that may constitute non-material damage. Now the Federal Court of Justice ruled that the mere loss of control constitutes the direct damage. So there's no need for any particular fear or anxiety to be present for a claim to exist.  Andy: Okay. So we read a bit in the press after the decision: yes, it's a very new and interesting judgment, but it's not revolutionary. It stays very close to what the European Court of Justice said already. The loss of control, I still struggle with. I mean, even if it's an immaterial damage, it's a bit difficult to grasp. And I would have hoped the FCJ would provide some more clarity or guidance on what they mean, because this is the central aspect, the loss of control. Johannes, do you have some more details? What does the court say, or how can we interpret that?  Johannes: Yeah, Andy, I totally agree. In the future, discussion will most likely tend to focus on what actually constitutes a loss of control. The FCJ does not provide any guidance here. However, it can already be said that the plaintiff must have had control over his data to actually lose it. Whether this is the case is particularly questionable if the actual scraped data was public, like in a lot of the cases we have in Germany right now, or if the data was already included in other leaks, or the plaintiff published the data on another platform, maybe on his website or another social network where the data was freely accessible. So in the end, it will probably depend on the individual case whether there was actually a loss of control or not. And we'll just have to wait for more judgments in Germany or in Europe to define loss of control in more detail.  Andy: Yeah, I think that's also a very important aspect of this case that was decided here, that the major cornerstones of the claim were established, they were proven. So it was undisputed that the claimant was a user of the network. It was undisputed that the scraping took place. It was undisputed that the user's data was affected as part of the scraping. And then also the user's data was found on the dark web. When I say undisputed, it means that the parties did not dispute it and the court could base its legal reasoning on these facts. In a lot of cases that we see in practice, these cornerstones are not established. They're very often disputed. Often you don't even know whether the claimant is a user of that network. 
There is often dispute around whether or not a scraping or a data breach took place. It's also not always the case that data is found on the dark web. Even if finding the data on the dark web, for example, is not a written criterion for the loss of control, I think it definitely is an aspect for the courts to say, yes, there was a loss of control, because we see that the data was uncontrolled on the dark web. And that's a point, I don't know if any of you have views on this, also from the technical side. I mean, how easy is it, and how often do we see, that there is a tag that says, okay, the data on the dark web is from this social platform? Often, users are affected by multiple data breaches or scrapings, and then it's not possible to make this causal link between one specific scraping or data breach and data being found somewhere on the web. Do you think, Hannah or Johannes, that this could be an important aspect in the future when courts determine the loss of control, that they also look into whether there actually was a loss of control?  Hannah: I would say yes, because it was already mentioned that the plaintiffs must first prove that there is a causal damage. A lot of the plaintiffs are using various databases that list such alleged data breaches, and the plaintiffs always claim that this would indicate such a causal link. And of course, this is now a decisive point the courts have to handle, as it is a requirement. Before you get to the damage and before you can decide if there was a damage, if there was a loss of control, you have to prove whether the plaintiff was even affected. And yeah, that's a challenge and not easy in practice, because there is already a lot of case law on these databases holding that they might not be sufficient proof that the plaintiffs were affected by the alleged data breaches or leaks.  Andy: All right. So let's see what's happening also in other countries. I mean, Article 82, as I said in the beginning, is a European piece of law. So other countries in Europe will have to deal with the same topics. We cannot just come up with our German requirements or interpretation of immaterial damages, which are rather narrow, I would say. So Hannah, any other indications you see from the European angle that we need to have in mind?  Hannah: Yes, you're right. First, it is important to note that this concept of immaterial damage is a matter of EU law, as it comes from the GDPR. And as Johannes said, the ECJ has always interpreted this damage very broadly, and it also does not consider a threshold to be necessary. And I agree with you that it is difficult to set such low requirements for the concept of damage and at the same time not demand materiality or a threshold. In my opinion, the Federal Court of Justice should perhaps have made a submission here to the ECJ after all, because it is not clear what loss of control is. And without a materiality threshold, this contributes a lot to legal uncertainty for a lot of companies.  Andy: Yeah. Thank you very much, Hannah. So yes, the first takeaway for us definitely is loss of control. That's a major aspect of the decision. There are other aspects, other interesting sentences or thoughts we see in the FCJ decision. One aspect I saw is right at the beginning, where the FCJ merges together two events: the scraping and then a non-compliance with data access requests. 
In that case the access request was based on contract, but it would be similar under Article 15 GDPR. So those events are kind of merged together as one event, which in my view doesn't make so much sense, because they are separate in terms of the events, the dates, the actions or non-actions, and also the damages that follow from a non-compliance with Article 15. I think it's much more difficult to argue a damage in the form of loss of control there than with a scraping or a data breach. That's not a major aspect of the decision, but I think it was an interesting finding. Any other aspects, Hannah or Johannes, that you saw in the decision worth mentioning here for our audience?  Johannes: Yeah, so the discussion in Germany has been really broad, so I think maybe just two points have been neglected in the discussion so far. First, towards the end of the reasoning, the court stated that data controllers are not obliged to provide information about unknown recipients. For example, in scraping cases, controllers often do not know who the scrapers are. So there's no obligation for them to provide any names of scrapers they don't know. That clarification is really helpful in possible litigation. And on the other hand, it's somewhat lost in the discussion that the damages of 100 euros only come into consideration if the phone number, the user ID, the first name, the last name, the gender, and the workplace are actually affected. Accordingly, if less data, maybe just an email address or a name, or less sensitive data was scraped, the claim for damages can or must even be significantly lower.  Andy: All right. Thanks, Johannes. That's very interesting. So not only the loss of control aspect, but also other aspects in this decision are worth mentioning and reading if you have the time. Now looking a bit into the future, what's happening next, Johannes? What are your thoughts? I mean, you're involved in some similar litigation as well, as is Hannah. What do you expect? What's happening to those litigation cases in the future? Any changes? Will we still have law firms suing social platforms, or suing social platforms on behalf of consumers? Or do we expect any changes in that?  Johannes: Yeah, Andy, it's really interesting. In this mass GDPR litigation, you always have to consider the business side, not just the legal side. So I think the ruling will likely put an end to the mass GDPR litigation as we know it from the past. Because so far, the plaintiffs have mostly appeared just with a legal expenses insurer. Damages of up to 5,000 euros and other claims were asserted, so the value in dispute could be pushed to the edge; it was maybe around 20,000 euros in the end. But now it's clear that the potential damages in such scraping cases are more likely to be in the double-digit numbers, like, for example, 100 euros or even less. So as a result, the legal expenses insurers will no longer fund such claims for 5,000 euros. At the same time, the vast majority of legal expenses insurers have agreed to a deductible of more than 100 euros. So the potential outcome and the risk of litigation are therefore disproportionate. And as a result, the plaintiffs will probably refrain from filing such lawsuits in the future.  Andy: All right. So good news for all insurers in the audience, or better: watch out for requests for coverage of litigation and check whether the values in dispute are set much too high. 
So we will probably see fewer insurance coverage cases, but still, definitely, we expect the same amount or perhaps even more litigation, because the number as such, even if it's only 100 euros, certainly seems attractive for users as a so-called low-hanging fruit. And Hannah, before we close our podcast today, again looking into the future, what are your recommendations or takeaways for platforms, internet sites, basically everyone; any organization handling data can be affected by data scraping or a data breach. So what are your recommendations or first thoughts? How can those organizations get ready or ideally even avoid such litigation?  Hannah: So first, Andy, it is very important to clarify that the FCJ judgment ruled on a specific case in which non-public data was made available to the public as a result of a proven breach of data protection. And that is not the case in general. So you should avoid simply applying this decision to every other case like a template, because if other requirements following from the GDPR are missing, the claims will still be unsuccessful. And second, of course, platforms and companies have to consider what they publish about their security vulnerabilities and take the best possible precautions to ensure that data is not published on the dark web. And if necessary, companies can transfer the risk of publication to the user simply by adjusting their general terms and conditions.  Andy: Thanks, Hannah. These are interesting aspects, and I see a little bit of conflict between the breach notification obligations under Articles 33 and 34 and the direction this case law goes. That will also be very interesting to see. Thank you very much, Hannah and Johannes, for your contribution. That was a really interesting, great discussion. And thank you very much to our audience for listening in. This was today's episode of our Reed Smith Tech Law Talks podcast. We thank you very much for listening. Please leave feedback and comments in the comments fields or send us an email. We hope to welcome you soon to our next episode. Have a nice day. Thank you very much. Bye bye.  Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcast on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.  Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.  All rights reserved.  Transcript is auto-generated.
    --------  
    20:15
  • AI explained: AI in the UK insurance market
    Laura-May Scott and Emily McMahan navigate the intricate relationship between AI and professional liability insurance, offering valuable insights and practical advice for businesses in the AI era. Our hosts, both lawyers in Reed Smith’s Insurance Recovery Group in London, delve into AI’s transformative impact on the UK insurance market, focusing on professional liability insurance. AI is adding efficiency to tasks such as document review, legal research and due diligence, but who pays when AI fails? Laura-May and Emily share recommendations for businesses on integrating AI, including evaluating specific AI risks, maintaining human oversight and ensuring transparency. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Laura-May: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI in the UK insurance market. I'm Laura-May Scott, a partner in our Insurance Recovery and Global Commercial Disputes group based here in our London office. Joining me today is Emily McMahan, a senior associate also in the Insurance Recovery and Global Commercial Disputes team from our London office. So diving right in, AI is transforming how we work and introducing new complexities in the provision of services. AI is undeniably reshaping professional services, and with that, the landscape of risk and liability. Specifically today, we're going to discuss how professional liability insurance is evolving to address AI-related risks, and what companies should be aware of as they incorporate AI into their operations and work product. Emily, can you start by giving our listeners a quick overview of professional liability insurance and how it intersects with this new AI-driven landscape? Emily: Thank you, Laura-May. So, professional liability insurance protects professionals, including solicitors, doctors, accountants, and consultants, for example, against claims brought by their clients in respect of alleged negligence or poor advice. This type of insurance helps professionals cover the legal costs of defending those claims, as well as any related damages or settlements associated with the claim. Before AI, professional liability insurance would protect professionals from traditional risks, like errors in judgment or omissions from advice. For example, if an accountant missed a filing deadline or a solicitor failed to supervise a junior lawyer, such that the firm provided incorrect advice on the law. However, as AI becomes increasingly utilized in professional services and in the delivery of services and advice to clients, the traditional risks faced by these professionals are changing rapidly. This is because AI can significantly alter how services are delivered to clients. Indeed, it is also often the case that it is not readily apparent to the client that AI has been used in the delivery of some of these professional services. Laura-May: Thank you, Emily. I totally agree with that. Can you now please tell us how the landscape is changing? 
So how is AI being used in the various sectors to deliver services to clients? Emily: Well, in the legal sphere, AI is being used for tasks such as document review, legal research, and within the due diligence process. At first glance, this is quite impressive, as these are normally the most time-consuming aspects of a lawyer's work. So the fact that AI can assist with these tasks is really useful. When it works well, it works really well and can save us a lot of time and costs. However, when the use of AI goes wrong, it can cause real damage. For example, if it transpires that something has been missed in the due diligence process, or if the technology hallucinates or makes up results, then this can cause a significant problem. I know, for example, on the latter point in the US, there was a case where two New York lawyers were taken to court after using ChatGPT to write a legal brief that actually contained fake case citations. Furthermore, using AI poses a risk in the context of confidentiality, where personal data of clients is disclosed to the system or there's a data leak. So when it goes wrong, it can go really wrong. Laura-May: Yes, I can totally understand that. So basically, it all boils down to the question of who is responsible if AI gets something wrong? And I guess, will professional liability insurance be able to cover that? Emily: Yes, exactly. Does liability fall to the professionals who have been using the AI or to the developers and providers of the AI? There's no clear-cut answer, but the client will probably look to the professional with whom they've contracted and who owes them a duty of care, whether that be, for example, a law firm or an accountancy firm, to cover any subsequent loss. In light of this, Laura-May, maybe you could tell our listeners what this means from an insurance perspective. Laura-May: Yes, it's an important question. Since many insurance policies were created before AI, they don't explicitly address AI-related issues. For now, claims arising from AI are often managed on a case-by-case basis within the scope of existing policies, and it very much depends on the policy wording. For example, as UK law firms must obtain sufficient professional liability insurance to adequately cover their current and past services, as mandated by their regulator, the Solicitors Regulation Authority, it is likely that such a policy will respond to claims where AI is used to perform and deliver services to clients and where a later claim for breach of duty arises in relation to that use of AI. Thus, a law firm's professional liability insurance could cover instances where AI is used to perform legal duties, giving rise to a claim from the client. And I think that's pretty similar for accountancy firms that are members of the Institute of Chartered Accountants in England and Wales. So the risks associated with AI are likely to fall under the minimum terms and conditions for their required professional liability insurance, such that any claims brought against accountants for breach of duty in relation to the use of AI would be covered under the insurance policy. However, as time goes on, we can expect to see more specific terms addressing the use of AI in professional liability policies. Some policies might have that already, but I think as we go through the market, it will become more industry standard. And we recommend that businesses review their professional liability policy language to ascertain how it addresses AI risk. 
Emily: Thanks, Laura-May. That's really interesting that such a broad approach is being followed. I was wondering whether you would be able to tell our listeners how you think they should be reacting to this approach and preparing for any future developments. Laura-May: I would say the first step is that businesses should evaluate how AI is being integrated into their services. It starts with understanding the specific risks associated with the AI technologies that they are using and thinking through the possible consequences if something goes wrong with the AI product that's being utilised. The second thing concerns communication. So even if businesses are not coming across specific questions regarding the use of AI when they're renewing or placing professional liability cover, companies should always ensure that they're proactively updating their insurers about the tech that they are using to deliver their services. And that's to ensure that businesses discharge their obligation to give a fair presentation of the risk to insurers at the time of placement or on variation or renewal of the policy pursuant to the Insurance Act 2015. It's also practically important to disclose to insurers fully so that they understand how the business utilizes AI, and you can then avoid coverage-related issues down the line if a claim does arise. Better to have that all dealt with up front. The third step is about human involvement and maintaining robust risk management processes for the use of AI. Businesses need to ensure that there is some human supervision of any tasks involving AI and that all of the output from the AI is thoroughly checked. So businesses should be adopting internal policies and frameworks to outline the permitted use of AI in the delivery of services by their business. And finally, I think it's very important to focus on transparency with clients. Clients should be informed if any AI tech has been used in the delivery of services. And indeed, some clients may say that they don't want the professional services provider to utilize AI in the delivery of services. Businesses must be familiar with any restrictions that have been put in place by a client. So in other words, informed consent for the use of AI should be obtained from the client where possible. I think these steps should collectively help all parties begin to understand where the liability lies, Emily. Do you have anything to add? Emily: I see. So it's basically all about taking a proactive rather than a reactive attitude to this. Though times may be uncertain, companies should certainly be preparing for what is to come. In terms of anything to add, I would also just like to quickly mention that if a firm uses a third-party AI tool instead of its own tool, risk management can become a little more complex. This is because if a firm develops its own AI tool, it knows how it works and therefore the risks that could manifest from it. This makes it easier to perform internal checks and also obtain proper informed consent from clients, as they'll have more information about the technology that is being utilized. Whereas if a business uses a third-party technology, although in some cases this might be cheaper, it is harder to know the associated risk. And I would also say that jurisdiction comes into this. It's important that any global professional services business looks at the legal and regulatory landscape in all the countries in which it operates. 
There is not a globally uniform approach to AI, and how to utilize it and regulate it is changing. So, companies need to be aware of where their outputs are being sent and ensure that their risks are managed appropriately. Laura-May: Yes, I agree. All great points, Emily. So in the future, what do you think we can be expecting from insurers? Emily: So I know you mentioned earlier that as time progresses, we can expect to see more precise policies. At the moment, I think it is fair to say that insurers are interested in understanding how AI is being used in businesses. It's likely that as time goes on and insurers begin to understand the risks involved, they will start to modify their policies and ask additional questions of their clients to better tailor their covered risks. For example, certain insurers may require that insureds declare their AI usage and provide an overview of the possible consequences if that technology fails. Another development we can expect from the market is new insurance products created solely for the use of AI. Laura-May: Yes, I entirely agree. I think we will see more specific insurance products that are tailored to the use of AI. So in summary, businesses should focus on their risk management practices regarding the use of AI and ensure that they're having discussions with their insurers about the use of the new technologies. These conversations around responsibility, transparency and collaboration will undoubtedly continue to shape professional liability insurance in the AI era that we're now in. And indeed, by understanding their own AI systems, engaging with insurers and setting clear expectations with clients, companies can stay ahead. Anything more there, Emily? Emily: Agreed. It's all about maintaining that balance between innovation and responsibility. While AI holds tremendous potential, it also demands accountability from the professionals who use it. So all that's left to say is thank you for listening to this episode and look out for the next one. And if you enjoyed this episode, please subscribe, rate, and review us on your favorite podcast platform and share your thoughts and feedback with us on our social media channels. Laura-May: Yes, thanks so much. Until next time. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.  Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.  All rights reserved. Transcript is auto-generated.
    --------  
    13:12
  • AI explained: AI and cybersecurity threat
    Our latest podcast covers the legal and practical implications of AI-enhanced cyberattacks; the EU AI Act and other relevant regulations; and the best practices for designing, managing and responding to AI-related cyber risks. Partner Christian Leuthner in Frankfurt, partner Cynthia O'Donoghue in London with counsel Asélle Ibraimova share their insights and experience from advising clients across various sectors and jurisdictions. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Christian: Welcome to Tech Law Talks and our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI and the cybersecurity threat. My name is Christian Leuthner. I'm a partner at the Reed Smith Frankfurt office, and I'm with my colleagues Cynthia O'Donoghue and Asélle Ibraimova from the London office. Cynthia: Morning, Christian. Thanks. Asélle: Hi, Christian. Hi, Cynthia. Happy to be on this podcast with you. Christian: Great. In late April 2024, the German Federal Office for Information Security identified that AI, and in particular generative AI and large language models (LLMs), is significantly lowering the barriers to entry for cyber attacks. The technology, that is, AI, enhances the scope, speed, and impact of cyber attacks and malicious activities, because it simplifies social engineering and it makes the creation or generation of malicious code faster, simpler, and accessible to almost everybody. The EU legislator had some attacks in mind when creating the AI Act. Cynthia, can you tell us a bit about what the EU regulator particularly saw as a threat? Cynthia: Sure, Christian. I'm going to start by saying there's a certain irony in the EU AI Act, which is that there's very little about the threat of AI, even though sprinkled throughout the EU AI Act is lots of discussion around security and keeping AI systems safe, particularly high-risk systems. But the EU AI Act contains a particular article that's focused on the design of high-risk systems and cybersecurity. And the main concern is really around the potential for data poisoning and for model poisoning. So part of the principle behind the EU AI Act is security by design. The idea is that the EU AI Act regulates high-risk AI systems such that they need to be designed and developed in a way that ensures an appropriate level of accuracy, robustness, and cybersecurity, and to prevent such things as data poisoning and model poisoning. And it also talks about the horizontal laws across the EU. Because the EU AI Act treats AI as a product, it brings into play other EU directives, like the Directive on the Resilience of Critical Entities and the newest cybersecurity regulation in relation to digital products. And I think when we think about AI, most of our clients are concerned about the use of AI systems and, let's say, ensuring that they're secure. But really, based on that German study you mentioned at the beginning of the podcast, I think there's less attention paid to the use of AI as a threat vector for cybersecurity attacks. 
So, Christian, what do you think is the relationship between the AI Act and the Cyber Resilience Act, for instance? Christian: Yeah, I think you mentioned it already. The legislator saw a link, and high-risk AI models need to implement a lot of security measures. And the latest Cyber Resilience Act requires some stakeholders in software and hardware products to also implement security measures and imposes a lot of different obligations on them. To not over-engineer these requirements, the AI Act already takes into account that if a high-risk AI model is in scope of the Cyber Resilience Act, the providers of those AI models can refer to the implementation of the cybersecurity requirements they made under the Cyber Resilience Act. So they don't need to double their efforts. They can just rely on what they have implemented. But it would be great if we were not only applying the law, but if there were also some guidance from public bodies or authorities on that. Asélle, do you have something in mind that might help us with implementing those requirements? Asélle: Yeah, so ENISA has been working on AI and cybersecurity in general, and last year it produced a paper called Multi-Layer Framework for Good Cybersecurity Practices for AI. It still needs to be updated; however, it does provide a very good summary of various AI initiatives throughout the world. It generally mentions that when thinking of AI, organizations need to take into consideration the general system vulnerabilities and the vulnerabilities in the underlying ICT infrastructure. And also, when it comes to the use of AI models or systems, various threats that you already talked about, such as data poisoning, model poisoning and other kinds of adversarial attacks on those systems, should also be taken into account. In terms of specific guidelines or standards, ENISA mentions ISO/IEC 42001, which is an AI management system standard. Another noteworthy guideline mentioned is the NIST AI Risk Management Framework, obviously the US guidelines. And obviously both of these are to be used on a voluntary basis. But basically, their aim is to ensure developers create trustworthy AI that is valid, reliable, safe, secure, and resilient. Christian: Okay, that's very helpful. I think it's fair to say that AI will increase the already high likelihood of being subject to a cyber attack at some point, and that this is a real threat to our clients. And we all know from practice that you cannot defend against everything. You can be cautious, but there might be occasions when you are subject to an attack, when there has been a successful attack or there is a cyber incident. If it is caused by AI, what do we need to do as a first responder, so to say? Cynthia: Well, there are numerous notification obligations in relation to attacks, again depending on the type of data or the entity involved. For instance, if a breach resulting from an AI attack involves personal data, then there are notification requirements under the GDPR. If you're in a certain sector that's using AI, one of the newest pieces of legislation to go into effect in the EU, the Network and Information Security Directive, tiers organizations into essential entities and important entities. 
And, you know, depending on whether the sector the particular victim is in is subject to either the essential entity requirements or the important entity requirements, there's a notification obligation under NIS 2, for short, in relation to vulnerabilities and attacks. And ENISA, which Asélle was just talking about, has most recently issued a report for, let's say, network and other providers, which are essential entities under NIS 2, in relation to what is considered a significant vulnerability or a material event that would need to be notified to the regulatory entity and the relevant member state for that particular sector. And I'm sure there are other notification requirements. I mean, for instance, financial services are subject to a different regulation, aren't they, Asélle? So why don't you tell us a bit more about the notification requirements of financial services organizations? Asélle: The EU Digital Operational Resilience Act also applies similar requirements to the supply chain of financial entities, specifically the ICT third-party providers, which AI providers may fall into. Article 30 under DORA requires that there are specific contractual clauses requiring cybersecurity around data, for example; it requires provisions on availability, authenticity, integrity, and confidentiality. There are additional requirements for those ICT providers whose product, say an AI product, perhaps as an ICT product, plays a critical or important function in the provision of the financial services. In that case, there will be additional requirements, including on ICT security measures. So in practical terms, it would mean that organizations regulated in this way are likely to ask AI providers to have additional tools, policies and measures, and to provide evidence that such measures have been taken. It's also worth mentioning the developments on AI regulation in the UK. The previous UK government wanted to adopt a flexible, non-binding regulation of AI. The Labour government, however, appears to want to adopt a binding instrument, although it is likely to be of limited scope, focusing only on the most powerful AI models. There isn't any clarity yet on whether the use of AI in cyber threats will be regulated in any specific way. Christian, I wanted to direct a question to you. How about the use of AI in supply chains? 
So it's very crucial to have those scenarios in mind when you're starting a procurement process and negotiating contracts: to have those topics in the contract with a vendor or supplier, to have notification obligations in case there is a cyber attack at that vendor, and to have some audit rights or inspection rights, depending on your negotiation position, but at least to make sure that you are aware if something happens, so that a risk that does not directly materialize at your company cannot sneak in through the back door via a vendor. So it's really important that you always have an eye on your supply chain and on your third-party vendors or providers. Cynthia: That's such a good point, Christian. And ultimately, I think it's best for organizations to think about it early. So it really needs to be embedded as part of any kind of supply chain due diligence, where maybe a new question needs to be added to a due diligence questionnaire on suppliers about whether they use AI, and then the cybersecurity around the AI that they use or contribute to. Because we've all read and heard in the papers, and been exposed through client counseling, to cybersecurity breaches that have come through the supply chain and may not be direct attacks on the client itself. And yeah, I mean, the contractual provisions then are really important. Like you said, making sure that the supplier notifies the customer very early on, and then that there are cooperation and audit mechanisms. Asélle, anything else to add? Asélle: Yeah, I totally agree with what was said. I think beyond just the legal requirements, it is ultimately the question of defending your business and your data, and whether or not it's required by your customers or by specific legislation to which your organization may be subject. It's ultimately whether or not your business can withstand more sophisticated cyber attacks, and I therefore agree with both of you that organizations should take supply chain resilience, cybersecurity, and the generally higher risk of cyber attacks more seriously and put measures in place; it is better to invest now than later, after the attack. I also think that it is important for in-house teams to work together as cybersecurity threats are enhanced by AI. And these are the legal, IT security, risk management, and compliance teams. Sometimes, for example, legal teams might think that the IT security or incident response policies are owned by IT, so there isn't much contribution needed. Or the IT security teams might think the legal requirements are in the legal team's domain, so they will wait to hear from legal on how to reflect those. Working in silos is not beneficial. IT policies, incident response policies and training material on cybersecurity should be regularly updated by IT teams and reviewed by legal to reflect the legal requirements. The teams should collaborate on running tabletop incident response and crisis response exercises, because in a real-case scenario, they will need to work hand in hand to efficiently respond. Cynthia: Yeah, I think you're right, Asélle. I mean, obviously, any kind of breach is going to be multidisciplinary in the sense that you're going to have people who understand AI and understand the attack vector that used the AI. Other people in the organization will have a better understanding of notification requirements, whether that be notification under the cybersecurity directives and regulations or under the GDPR. 
And obviously, if it's an attack that's come from the supply chain, there needs to be that coordination as well with the supplier management team. So it's definitely multidisciplinary and requires cooperation and information sharing, obviously in a way that's done in accordance with the regulatory requirements that we've talked about. So in sum, you have to think about AI and cybersecurity both from a design perspective as well as the supply chain perspective, and how AI might be used for attacks, whether it's vulnerabilities in a network or data poisoning or model poisoning. Think about the horizontal requirements across the EU in relation to cybersecurity requirements for keeping systems safe, and, if you're an unfortunate victim of a cybersecurity attack where AI has been used, think about the notification requirements and ultimately the multidisciplinary team that needs to be put in place. So thank you, Asélle, and thank you, Christian. We really appreciate the time to talk together this morning. And thank you to our listeners. And please tune in for our next Tech Law Talks on AI. Asélle: Thank you. Christian: Thank you. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.  Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.  All rights reserved. Transcript is auto-generated.
    --------  
    21:05
  • AI explained: AI regulations and PRC court decisions in China
    Reed Smith lawyers Cheryl Yu (Hong Kong) and Barbara Li (Beijing) explore the latest developments in AI regulation and litigation in China. They discuss key compliance requirements and challenges for AI service providers and users, as well as the emerging case law on copyright protection and liability of AI-generated content. They also share tips and insights on how to navigate the complex and evolving AI legal landscape in China. Tune in to learn more about China’s distinct approach to issues involving AI, data and the law.  ----more---- Transcript:  Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Cheryl: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the months, we have been exploring the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI regulations in China and the relevant PRC court decisions. My name is Cheryl Yu, a partner in the Hong Kong office at Reed Smith, and I'm speaking today with Barbara Li, who is a partner based in our Beijing office. Barbara and I are going to focus on the major legal regulations on AI in China and also some court decisions relating to AI tools, to see how China's legal landscape is evolving to keep up with the technological advancements. Barbara, can you first give us an overview of China's AI regulatory developments? Barbara: Sure. Thank you, Cheryl. Very happy to do that. In the past few years, the regulatory landscape governing AI in China has been evolving at a very fast pace. Although China does not have a comprehensive AI law like the EU AI Act, China has been leading the way in rolling out multiple AI regulations governing generative AI, deepfake technologies, and algorithms. In July 2023, China issued the Generative AI Measures, making China one of the first countries in the world to regulate generative AI technologies. These measures apply to generative AI services offered to the public in China, regardless of whether the service provider is based in China or outside China. And international investors are allowed to set up local entities in China to develop and offer AI services in China. In relation to legal obligations, the measures lay down a wide range of legal requirements for providing and using generative AI services, including content screening, protection of personal data and privacy, safeguarding IPR and trade secrets, and also taking effective measures to prevent discrimination when companies design algorithms, choose training data or create large language models. Cheryl: Many thanks, Barbara. These are very important compliance obligations that businesses should not neglect when engaging in the development of AI technologies, products, and services. I understand that one of the biggest concerns in AI is how to avoid hallucination and misinformation. I wonder if China has adopted any regulations to address these issues? Barbara: Oh, yes, definitely, Cheryl. China has adopted multiple regulations and guidelines to address these concerns. 
For example, the Deep Synthesis Rule, which became effective in January 2023, aims to govern the use of deep-fake technologies in generating or changing digital content. And when we talk about digital content, the regulation refers to a wide range of digital media, including video, voices, text, and images. Deep synthesis service providers must refrain from using deep synthesis services to produce or disseminate illegal information. The companies are also required to establish and improve proper compliance and risk management systems, such as having a user registration system, conducting an ethics review of the algorithm, protecting personal information, taking measures to protect IT and prevent misinformation and fraud, and, last but not least, setting up a data breach response. In addition, China's national data and cybersecurity regulator, the CAC, has issued a wide range of rules on algorithm filing, and these algorithm filing requirements have been effective from June 2024. According to this 2024 regulation, if a company uses algorithms in its online services with functions such as blogs, chat rooms, public accounts, short videos, or online streaming, and these functions are regarded as being capable of influencing public opinion or driving social engagement, then the service provider is required to file its algorithm with the CAC, the regulator, within 10 working days after the launch of the service. In order to finish the algorithm filing, the company is required to put together comprehensive information and documentation, including the algorithm assessment report, security monitoring policy, data breach response plan, and also some technical documentation to explain the function of the algorithm. The CAC has periodically published a list of filed algorithms, and up to 30 June 2024, we have seen over 1,400 AI algorithms, developed by more than 450 companies, successfully filed with the CAC. You can see that this large number of AI algorithm filings has indeed highlighted the rapid development of AI technologies in China. We should also remember that large volumes of data are the backbone of AI technologies. So we should not forget about the importance of data protection and privacy obligations when you develop and use AI technologies. Over the years, China has built up a comprehensive data and privacy regime with three pillars of national laws. Those laws are the Personal Information Protection Law, known in short as the PIPL, the Cybersecurity Law and the Data Security Law. So the data protection and cybersecurity compliance requirements have to be properly addressed when companies develop AI technologies, products, and services in China. And indeed, there are some very complicated data requirements and issues under the Chinese data and cybersecurity laws, for example, how to address cross-border data transfers. So it's very important to remember those requirements; the Chinese data requirements and the legal regime are very complex. Given the time constraints, we can probably find another time to specifically talk about the data issues under Chinese law. Cheryl: Thanks, Barbara. Indeed, there are some quite significant AI and data issues which would warrant more time for a deeper dive. 
Barbara, can you also give us an update on the AI enforcement status in China and share with us your views on the best practices that companies can adopt to mitigate those risks? Barbara: Yes, thanks, Cheryl. Indeed, Chinese AI regulations do have teeth. For example, a violation of the algorithm filing requirement can result in fines of up to RMB 100,000. And the failure to comply with those compliance requirements in developing and using AI technologies can also trigger legal liability under the Chinese PIPL, which is the Personal Information Protection Law, as well as the Cybersecurity Law and the Data Security Law. Under those laws, a company can face a monetary fine of up to RMB 50 million or 5% of its last year's turnover. In addition, the senior executives of the company can be personally subject to liability, such as a fine of up to RMB 1 million, and the senior executives can be barred from taking senior roles for a period of time. In the worst scenario, criminal liability can be pursued. In the first and second quarters of this year, 2024, we have seen some companies being caught by the Chinese regulators for failing to comply with the AI requirements, ranging from failure to monitor AI-generated content to neglecting the AI algorithm filing requirements. Noncompliance has resulted in the suspension of their mobile apps pending rectification. As you can see, the noncompliance risk is indeed real, so it's very important for businesses to pay close attention to the relevant compliance requirements. To give our audience a few quick takeaways on how to address the AI regulatory and legal risk in China, we would say companies can consider three of the most important compliance steps. The first is that, with the fast development of AI in China, it's crucial to closely monitor the legislative and enforcement developments in AI, data protection, and cybersecurity. While the Chinese AI and data laws share some similarities with the laws in other countries, for example, the EU AI Act and the European GDPR, Chinese AI and data laws and regulations indeed have their own unique characteristics and requirements. So it's extremely important for businesses to understand the Chinese AI and data laws, conduct a proper analysis of the key business implications, and also take appropriate compliance action. So that is number one. The second one, I would say, is that for the specific AI technologies, products and services you roll out in the China market, it's very important to do the required impact assessments to ensure compliance with accountability, bias, and accessibility requirements, and also to build up a proper system for content monitoring. If your algorithm falls within the scope of the filing requirements, you definitely need to prepare the required documents and finish the algorithm filing as soon as possible to avoid potential penalties and compliance risks. And the third one is that you should definitely prepare China AI policies and AI terms of use, build up your AI governance and compliance mechanism in line with the evolving Chinese AI regulation, and also train your team on the compliant use of AI in their day-to-day work. It's also very important and very interesting to note that in the past months, Chinese courts have handed down some landmark rulings in trials relating to AI technology. 
Those rulings cover various AI issues, ranging from copyright protection of AI-generated content to data scraping and privacy. Cheryl, can you give us an overview of those cases and what takeaways we can draw from those rulings? Cheryl: Yes, thanks, Barbara. As mentioned by Barbara, with the emerging laws in China, there have been a lot of questions about how AI technologies interact with copyright law. The most commonly discussed questions include: if users instruct an AI tool to produce an image, who is the author of the work, the AI tool or the person giving instructions to the AI tool? And if the AI tool generates a work that bears a strong resemblance to another work already published, would that constitute an infringement of copyright? Before 2019, the position in China was that works generated by AI machines generally were not subject to copyright protection. For a work to be copyrightable, the courts would generally consider whether the work was created by natural persons and whether the work was original. Subsequently, there has been a shift in the Chinese courts' position, in which the courts are more inclined to protect the copyright of AI-generated content. For example, the Nanshan District Court of Shenzhen handed down a decision, Shenzhen Tencent v. Shanghai Yinsheng, in 2019. The court held that the plaintiff, Shenzhen Tencent, should be regarded as the author of an article which was generated by an AI system under the supervision of the plaintiff. The court further held that the intellectual contribution of the plaintiff's staff, including inputting data, setting prompts, selecting the template, and the layout of the article, played a direct role in shaping the specific expression of the article. Hence, the article demonstrated sufficient originality and creativity to warrant copyright protection. Similarly, the Beijing Internet Court reached the same conclusion in Li Yunkai v. Liu Yuanchun in 2023, where the court held that AI-generated content can be subject to copyright protection if the human user has contributed substantially to the creation of the work. In its judgment, the court ruled that an AI machine cannot be an author of the work, since it is not human, and that the plaintiff was entitled to the copyright of the photo generated by the AI machine on the grounds that the plaintiff personally chose and arranged the order of prompts, set the parameters, and selected the style of the output, which warranted a sufficient level of originality in the work. As you may note, in both cases, for a work to be copyrightable in China, the courts no longer required it to be created entirely by a human being. Rather, the courts focused on whether there was an element of original intellectual achievement. Interestingly, there is another case, handed down by the Hangzhou Internet Court in 2023, which has been widely criticized in China. That court decided that the AI was not an author, not because it was non-human, but because it was a weak AI and did not possess the relevant capability for intellectual creation. This case has created some uncertainty as to the legal status of an AI that is stronger and does have the intellectual capability to generate original works, raising questions such as whether such an AI would qualify as an author and be entitled to copyright over its works. Those issues remain to be seen as the technology and the law develop. Barbara: Thank you, Cheryl. We now understand the position in relation to authorship under Chinese law. 
What about the platforms which provide generative AI tools? I understand that they also face the question of whether they bear secondary liability for infringement by AI-generated content output. Have the Chinese courts issued any decisions on this topic? Cheryl: Many thanks, Barbara. Yes, there has been some new development on this issue in China in early 2024. The Guangzhou Internet Court published a decision on this issue, which is the first decision in China regarding the secondary liability of AI platform providers. The plaintiff in this case has exclusive rights to a Japanese cartoon image, Ultraman, including various rights such as reproduction, adaptation, etc. The defendant was an undisclosed AI company that operates a website with an AI conversation function and an AI image generation function. These functions were provided using an unnamed third-party provider's AI model, which was connected to the defendant's website. The defendant allowed visitors to its website to use this AI model to generate images, but it hadn't created the AI model itself. The plaintiff eventually discovered that if one input prompts related to Ultraman, the generative AI tool would produce images highly similar to Ultraman, and the plaintiff then brought an action for copyright infringement against the defendant. The court held that, in this case, the defendant platform had breached a duty of care to take appropriate measures to ensure that outputs do not contravene copyright law and the relevant AI regulations in China, and that the output the generative AI tool created had infringed the copyright of the protected works. So this Ultraman case serves as a timely reminder to Chinese AI platform providers that it is of utmost importance to comply with the relevant laws and regulations in China. Another interesting point of law is the potential liability of AI developers in the scenario where copyrighted materials are used to train the AI tool. So far, there haven't been any decisions relating to this issue in China, and it remains to be seen whether AI model developers would be liable for infringement of copyright in the process of training their AI models with copyrightable materials, and if so, whether there are any defenses available to them. We shall continue to follow up and keep everyone posted in this regard. Barbara: Yes, indeed, Cheryl, those are all very interesting developments. To conclude our podcast today: with the advancement of AI technology, it's almost inevitable that more legal challenges will emerge related to the training and application of generative AI systems. To this end, the courts will be expected to develop innovative legal interpretations to strike a balance between safeguarding copyright and promoting technological innovation and growth. Our team at Reed Smith in Greater China will bring you all the updates on these developments, so please do stay tuned. Thank you. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.  Disclaimer: This podcast is provided for educational purposes. 
It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.  All rights reserved. Transcript is auto-generated.
    --------  
    21:08
  • AI explained: AI and financial services
Reed Smith partners Claude Brown and Romin Dabir discuss the challenges and opportunities of artificial intelligence in the financial services sector. They cover the regulatory, liability, competition and operational risks of using AI, as well as the potential benefits for compliance, customer service and financial inclusion. They also explore the strategic decisions firms need to make regarding the development and deployment of AI, and the role regulators play in supervising and embracing AI. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Claude: Welcome to Tech Law Talks and our new series on artificial intelligence, or AI. Over the coming months, we'll explore key challenges and opportunities within the rapidly evolving AI landscape. Today, we're going to focus on AI in financial services. And to do that, I'm here. My name is Claude Brown. I'm a partner at Reed Smith in London in the Financial Industry Group. And I'm joined by my colleague, Romin Dabir, who's a financial services partner, also based in London.  Romin: Thank you, Claude. Good to be with everyone.  Claude: I mean, I suppose, Romin, one of the things that strikes me about AI and financial services is it's already here. It's not something that's coming in. It's been established for a while. We may not have called it AI, but in many respects it is. And perhaps it might be helpful to sort of just review where we're seeing AI already within the financial services sector.  Romin: Yeah, absolutely. No, you're completely right, Claude. Firms have been using AI or machine learning or some form of automation in their processes for quite a while, as you rightly say. And this has been mainly driven by the search for efficiency and cost savings, as I'm sure the audience will appreciate. There have been pressures on margins in financial services for some time. So firms have really sought to make their processes, particularly those that are repetitive and high volume, as efficient as possible. And parts of their business which AI has already impacted include things like KYC, AML checks, and back office operations. All of those things are already having AI applied to them.  Claude: Right. I mean, some of these things sound like a good thing. I mean, improving customer services, being more efficient in the know-your-customer, anti-money laundering, KYC, AML areas. I suppose robo-advice, as it's called sometimes, or sort of asset management, portfolio management advice, might be an area where one might worry. But I mean, the general impression I have is that the regulators are very focused on AI. And generally, when one reads the press, you see it being more the issues relating to AI rather than the benefits. I mean, I'm sure the regulators do recognize the benefits, but they're always saying, be aware, be careful, we want to understand better. Why do you think that is? Why do you think there are areas of concern, given the good that could come out of AI?  Romin: No, that's a good question. 
I think regulators feel a little bit nervous when confronted by AI because obviously it's novel, it's something new, well, relatively new, that they are still trying to understand fully and get their arms around. And there are issues that arise where AI is applied to new areas. So, for example, you give the example of robo-advice or portfolio management. Now, these were activities that traditionally have been undertaken by people. And when advice or investment decisions are made by people, it's much easier for regulators to understand and to hold somebody accountable for that. But when AI is involved, responsibility sometimes becomes a little bit murkier and a little bit more diffuse. So, for example, you might have a regulated firm that is using software or AI that has been developed by a specialist software developer. And that software is able to effectively operate with minimal human intervention, which is really one of the main drivers behind the adoption of AI, because obviously it costs less and is less resource-intensive in terms of skilled people to operate it. But under those circumstances, who has the regulatory responsibility? Is it the software provider, who makes the algorithm, programs the software, etc., after which the software goes off and makes decisions or provides the advice? Or is it the firm that's actually running the software on its systems when it hasn't actually developed that software? So there are some knotty problems, I think, that regulators are still mulling over and working out what they think the right answers should be.  Claude: Yeah, I can see that. Because I suppose historically the classic model, certainly in the UK, has been the regulators say, if you want to outsource something, you, the regulated entity, be you a broker or asset manager or a bank or an investment firm, you are the authorized entity, and you're responsible for your outsourcer or your outsource provider. But I can see with AI, that must become a harder question to determine. Because, say, in your example, if the AI is performing some sort of advisory service, has the perimeter gone beyond the historically regulated entity, and does it then start to impact the software provider? That's sort of one point. And how do you allocate that responsibility? That strict bright line, you want to give it to a third-party provider, it's your responsibility. How do you allocate that responsibility between the two entities? Even outside the regulator's oversight, there's got to be an allocation of liability and responsibility.  Romin: Absolutely. And as you say, with traditional outsourced services, it's relatively easy for the firm to oversee the activities of the outsourced services provider. It can get MI, it can have systems and controls, it can randomly check on how the outsource provider is conducting the services. But with something that's quite black box, like some algorithm, a trading algorithm for portfolio management, for example, it's much harder for the firm to demonstrate that oversight. It may not have the internal resources. How does it really go about doing that? So I think these questions become more difficult. And I suppose the other thing that makes it more difficult with AI, compared to the traditional outsourcing model, even the black box algorithms, is that by and large they're static. You know, whatever it does, it keeps on doing. It doesn't evolve by its own processes, which AI does. 
So it doesn't matter really whether it's outsourced or it's in-house to the regulated entity. That thing is sort of changing all the time, and supervising it is a dynamic process, and the speed at which it learns, which is in part driven by its usage, means that the dynamics of its oversight must be able to respond to the speed of its evolution.  Romin: Absolutely. And you're right to highlight all of the sort of liability issues that arise, not just simply vis-a-vis liabilities to the regulator for performing the services in compliance with the regulatory duties, but also to clients themselves. Because if the algo goes haywire and suddenly, you know, loses customers loads of money or starts making trades that were not within the investment mandate provided by the client, where does the buck stop? Is that with the firm? Is that with the person who provided the software? It's all, you know, a little difficult.  Claude: I suppose the other issue is that at the moment there's a limited number of outsourced providers. One might reasonably expect, competition being what it is, for that to proliferate over time, but until it does, I would imagine there's a sort of competition issue, not only a competition issue in one system gaining a monopoly, but that particular form of large language model then starts to dominate and produce, for want of a better phrase, a groupthink. And I suppose one of the things that puzzles me is, is there a possibility that you get a systemic risk by the alignment of the thinking of various financial institutions using the same or a similar system of AI processes, which then start to produce a common result? And then possibly producing a common misconception, which introduces a sort of black swan event that wasn't anticipated.  Romin: And sort of self-reinforcing feedback loops. I mean, there was the story of the flash crash that occurred with all these algorithmic trading firms all of a sudden reacting to the same event and all placing sell orders at the same time, which created a market disturbance. That was a number of years ago now. You can imagine such effects, as AI becomes more prevalent, potentially being even more severe in the future.  Claude: Yeah, no, I think that's, again, an issue that regulators do worry about from time to time.  Romin: And I think another point, as you say, is competition. Historically, asset managers have differentiated themselves on the basis of the quality of their portfolio managers and the returns that they deliver to clients, etc. But in a world where we have a number of software providers, maybe one or two of which become really dominant, and lots of firms are employing technology provided by those firms, differentiating becomes more difficult in those circumstances.  Claude: Yeah, and I guess to unpack that a little bit. As you say, portfolio managers have distinguished themselves by better returns than the competition, and certainly better returns than the market average, and that then points to the quality of their research and their analytics. So then I suppose the question becomes: to what extent is AI being used to produce that differentiator, and how do you charge your fees based on that? Is it that you've got better technology than anyone else, or that you've got a better way to deploy the technology, or is it that you've just paid more for your technology? Because transforming the input of AI into the analytics and the portfolio management 
is quite a difficult thing to do at the best of times. If it's internal, it's clearly easier because it's just your mousetrap and you built that mousetrap. But when you're outsourcing, particularly in your example where you've got a limited number of technology providers, that split I can see becoming quite contentious.  Romin: Yeah, absolutely. Absolutely. And I think firms themselves will need to sort of decide what approach they are going to take to the application of AI, because if they go down the outsourced approach, that raises the issues that we've discussed so far. Conversely, if they adopt a sort of in-house model, they have more control, the technology's proprietary, and potentially they can distinguish themselves and differentiate themselves better than by relying on an outsourced solution. But then, you know, the cost is far greater. Will they have the resources and expertise really to compete with these large specialist providers that serve many different firms? There are lots of strategic decisions that firms need to make as well.  Claude: Yeah, but going back to the regulators for a moment, Romin, it does seem to me that there are some benefits to regulators in embracing AI within their own world, because we already see the evidence that they're very comfortable using manipulation of large databases, for example, trade repositories or trade reporting. We can see enforcement actions being brought using databases that have produced the information, the anomalies, and as I see it, AI can only improve that form of surveillance and enforcement, whether that is market manipulation or insider dealing, or looking across markets to see whether sort of concurrent or collaborative activity is engaged in. It may not get to the point where the AI is going to bring the whole enforcement action to trial, but it certainly makes that demanding surveillance and oversight role for a regulator a lot easier.  Romin: Absolutely. Couldn't agree more. I mean, historically, firms have often complained, and it's a very common refrain in the financial services markets: we have to make all these ridiculous reports, detailed reports, send all this information to the regulator. And firms were almost certain that it would just disappear into some black hole and never be looked at again. Again, historically, that was perhaps true, but with the new technology that is coming on stream, it gives regulators much more opportunity to meaningfully interrogate that data and use it to either bring enforcement action against firms or just supervise trends, risks, and currents in markets which might otherwise not have been available or apparent to them.  Claude: Yeah, I mean, I think, to my mind, data before you apply technology to it is rather like the end of Raiders of the Lost Ark in the Spielberg film, you know, where they take the Ark of the Covenant and push it into that huge warehouse and the camera pans back and you just see massive, massive data. But I suppose you're right with AI, that you can go and find the crate with the thing in it. Other Spielberg films are available. It seems to me almost inexorable that the use of AI in financial services will increase, you know, given the potential and the efficiencies, particularly with large-scale and repetitive tasks and more inquiry. It's not just a case of automation; it's a case of sort of overseeing it. But I suppose that begs a bit of a question as to who's going to be the dominant force in the market. 
Is it going to be the financial services firms or the tech firms that can produce more sophisticated AI models?  Romin: Absolutely. I mean, I think we've seen, amongst the AI companies themselves, so, you know, the key players like Google, OpenAI, Microsoft, there's a bit of an arms race between themselves as to the best LLM, who can come up with the most accurate, best, fastest answers to queries. I think within AI and financial services, it's almost inevitable that there'll be a similar race. And I guess the jury's still out as to who will win. Will it be the financial services firms themselves, or will it be these specialist technology companies that apply their solutions in the financial services space? I don't know, but it will certainly be interesting to see.  Claude: Well, I suppose the other point with the technology providers, and you're right, I mean, you can already see that when you get into cloud-based services and software as a service and the others, that the technology is becoming a dominant part of financial services, not necessarily the dominant part, but certainly a large part of it. And that, to my mind, raises a really interesting question about the commonality of technology in general and AI in particular. You know, you can now see these services, and I can see this with AI as well, entering into a number of financial sectors which historically have been diffuse. So the use of AI, for example, in insurance, the use in banking, the use in asset management, the use in broking, the use in advisory services, there's now a coming together of the platforms and the technology, such as LLMs, across all of them. And that then begs the question, is there an operational resilience question? It's almost like, does AI ever become so pervasive that it is a bit like electricity, like power? You can see that with CrowdStrike. Is the technology so all-pervasive that actually it produces an operational risk concern that would cause a regulator, to take it to an extreme, to alter the operational risk charge in the regulatory capital environment?  Romin: Yeah, exactly. I think this is certainly a space that regulators are looking at with increased attention, because some of the emerging risks, etc., might not be apparent. So, like you mentioned with CrowdStrike, nobody really knew that this was an issue until it happened. So regulators, I think, are very nervous of the unknown unknowns.  Claude: Yeah. I mean, it seems to me that AI has a huge potential in the financial services sector, in, A, facilitating the mundane, but also in being proactive in identifying anomalies, potentials for errors, potentials for fraud. It's like, you know, there's a huge amount that it can contribute. But as always, that brings structural challenges.  Romin: Absolutely. And just on the point that we were discussing earlier about the increased efficiencies that it can bring to markets, you know, there's been a recognized problem with the so-called advice gap in the UK, where the kind of mass affluent, less high-net-worth investors aren't really willing to pay for the receipt of financial advice. As technology gets better, the cost of accessing more tailored, intelligent advice will hopefully come down, leading to the ability for people to make more sensible financial decisions.  Claude: Which I'm sure is part of the responsibility of financial institutions to improve financial and fiscal education. That's going to be music to a regulator's ears. 
Well, Romin, interesting subject, interesting area. We live, as the Chinese say, in interesting times. But I hope to those of you who've listened, it's been interesting. We've enjoyed talking about it. Of course, if you have any questions, please feel free to contact us, my colleague, Romin Dabir, or myself, Claude Brown. You can find our contact details accompanying this and also on our website. Thank you for listening.  Romin: Thank you. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.  Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.  All rights reserved.  Transcript is auto-generated.
    --------  
    24:43
