
Tech Law Talks

Reed Smith
Listen to Tech Law Talks for practical observations on technology and data legal trends, from product and technology development to operational and compliance issues.

Available Episodes

5 of 86
  • AI explained: AI in the UK insurance market
    Laura-May Scott and Emily McMahan navigate the intricate relationship between AI and professional liability insurance, offering valuable insights and practical advice for businesses in the AI era. Our hosts, both lawyers in Reed Smith’s Insurance Recovery Group in London, delve into AI’s transformative impact on the UK insurance market, focusing on professional liability insurance. AI is adding efficiency to tasks such as document review, legal research and due diligence, but who pays when AI fails? Laura-May and Emily share recommendations for businesses on integrating AI, including evaluating specific AI risks, maintaining human oversight and ensuring transparency.
    --------  
    13:12
  • AI explained: AI and cybersecurity threat
    Our latest podcast covers the legal and practical implications of AI-enhanced cyberattacks; the EU AI Act and other relevant regulations; and the best practices for designing, managing and responding to AI-related cyber risks. Frankfurt partner Christian Leuthner and London partner Cynthia O'Donoghue, together with counsel Asélle Ibraimova, share their insights and experience from advising clients across various sectors and jurisdictions. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Christian: Welcome to Tech Law Talks and our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI and cybersecurity threats. My name is Christian Leuthner. I'm a partner at the Reed Smith Frankfurt office, and I'm with my colleagues Cynthia O'Donoghue and Asélle Ibraimova from the London office. Cynthia: Morning, Christian. Thanks. Asélle: Hi, Christian. Hi, Cynthia. Happy to be on this podcast with you. Christian: Great. In late April 2024, the German Federal Office for Information Security identified that AI, and in particular generative AI and large language models (LLMs), is significantly lowering the barriers to entry for cyber attacks. The technology, so AI, enhances the scope, speed, and impact of cyber attacks and other malicious activities, because it simplifies social engineering, and it really makes the creation or generation of malicious code faster, simpler, and accessible to almost everybody. The EU legislator had some attacks in mind when creating the AI Act. Cynthia, can you tell us a bit about what the EU regulator particularly saw as a threat? Cynthia: Sure, Christian. I'm going to start by saying there's a certain irony in the EU AI Act, which is that there's very little about the threat of AI, even though sprinkled throughout the EU AI Act is lots of discussion around security and keeping AI systems safe, particularly high-risk systems. But the EU AI Act contains a particular article that's focused on the design of high-risk systems and cybersecurity. And the main concern is really around the potential for data poisoning and for model poisoning. And so part of the principle behind the EU AI Act is security by design. And so the idea is that the EU AI Act regulates high-risk AI systems such that they need to be designed and developed in a way that ensures an appropriate level of accuracy, robustness, and cybersecurity, and to prevent such things as data poisoning and model poisoning. And it also talks about the horizontal laws across the EU. So because the EU AI Act treats AI as a product, it brings into play other EU directives, like the Directive on the Resilience of Critical Entities and the newest cybersecurity regulation in relation to digital products. And I think when we think about AI, you know, most of our clients are concerned about the use of AI systems and, let's say, ensuring that they're secure. But really, you know, based on that German study you mentioned at the beginning of the podcast, I think there's less attention paid to the use of AI as a threat vector for cybersecurity attacks.
So, Christian, what do you think is the relationship between the AI Act and the Cyber Resilience Act, for instance? Christian: Yeah, I think, and you mentioned it already. So the legislator thought there is a link, and the high-risk AI models need to implement a lot of security measures. And the latest Cyber Resilience Act requires some stakeholders in software and hardware products to also implement security measures and imposes a number of different obligations on them. To not over-engineer these requirements, the AI Act already takes into account that if a high-risk AI model is in scope of the Cyber Resilience Act, the providers of those AI models can refer to the implementation of the cybersecurity requirements they made under the Cyber Resilience Act. So they don't need to double their efforts. They can just rely on what they have implemented. But it would be great if we were not only applying the law, but if there would also be some guidance from public bodies or authorities on that. Asélle, do you have something in mind that might help us with implementing those requirements? Asélle: Yeah, so ENISA has been working on AI and cybersecurity in general, and it has produced a paper called Multilayer Framework for Good Cybersecurity Practices for AI last year. So it still needs to be updated. However, it does provide a very good summary of various AI initiatives throughout the world. And it generally mentions that when thinking of AI, organizations need to take into consideration the general system vulnerabilities and the vulnerabilities in the underlying ICT infrastructure. And also, when it comes to the use of AI models or systems, various threats that you already talked about, such as data poisoning and model poisoning and other kinds of adversarial attacks on those systems, should also be taken into account. In terms of specific guidelines or standards that ENISA mentions, there is ISO/IEC 42001, the AI management system standard. Another noteworthy guideline mentioned is the NIST AI Risk Management Framework, obviously the US guidelines. And obviously both of these are to be used on a voluntary basis. But basically, their aim is to ensure developers create trustworthy AI: valid, reliable, safe, secure, and resilient. Christian: Okay, that's very helpful. I think it's fair to say that AI will increase the already high likelihood of being subject to a cyber attack at some point, and that this is a real threat to our clients. And we all know from practice that you cannot defend against everything. You can be cautious, but there might be occasions when you are subject to an attack, when there has been a successful attack or there is a cyber incident. If it is caused by AI, what do we need to do as a first responder, so to say? Cynthia: Well, there are numerous notification obligations in relation to attacks, depending on the type of data or the entity involved. For instance, if a breach resulting from an AI attack involves personal data, then there are notification requirements under the GDPR. If you're in a certain sector that's using AI, one of the newest pieces of legislation to go into effect in the EU, the Network and Information Security Directive, tiers organizations into essential entities and important entities.
And, you know, depending on whether the sector the particular victim is in is subject to either, you know, the essential entity requirements or the important entity requirements, there's a notification obligation under NIS 2, for short, in relation to vulnerabilities and attacks. And ENISA, who Asélle was just talking about, has most recently issued a report for, let's say, network and other providers, which are essential entities under NIS 2, in relation to what is considered a significant vulnerability or a material event that would need to be notified to the regulatory entity in the relevant member state for that particular sector. And I'm sure there are other notification requirements. I mean, for instance, financial services are subject to a different regulation, aren't they, Asélle? And so why don't you tell us a bit more about the notification requirements for financial services organizations? Asélle: The EU Digital Operational Resilience Act also applies similar requirements to the supply chain of financial entities, specifically the ICT third-party providers, which AI providers may fall into. And Article 30 under DORA requires that there are specific contractual clauses requiring cybersecurity around data. So it requires provisions on availability, authenticity, integrity, and confidentiality. There are additional requirements for those ICT providers whose product, say an AI product, perhaps as an ICT product, plays a critical or important function in the provision of the financial services. In that case, there will be additional requirements, including on ICT security measures. So in practical terms, it would mean organizations that are regulated in this way are likely to ask AI providers to have additional tools, policies, and measures, and to provide evidence that such measures have been taken. It's also worth mentioning the developments on AI regulation in the UK. The previous UK government wanted to adopt a flexible, non-binding regulation of AI. The Labour government, however, appears to want to adopt a binding instrument, though it is likely to be of limited scope, focusing only on the most powerful AI models. There isn't yet any clarity on whether the use of AI in cyber threats is regulated in any specific way. Christian, I wanted to direct a question to you. How about the use of AI in supply chains? Christian: Yeah, I think it's very important to have a look at the entire supply chain of companies, or the entire set of contractual relationships. Because most of our clients, or companies out there, do not develop or create their own AI. They will use AI from vendors, or their suppliers or vendors will use AI products to be more efficient. And all the requirements, for example, the notification requirements that Cynthia just mentioned, do not stop if you use a third party. So even if you engage a supplier or a vendor, you're still responsible to defend against cyber attacks and to report cyber incidents or attacks if they concern your company. Or at least there's a high likelihood.
So it's very crucial to have those scenarios in mind when you're starting a procurement process and negotiating contracts: to address those topics in the contract with a vendor or supplier, to have notification obligations in case there is a cyber attack at that vendor, and to have some audit or inspection rights, depending on your negotiation position, but at least to make sure that you are aware if something happens, so that a risk that does not directly materialize at your company cannot sneak in through the back door via a vendor. So it's really important that you always have an eye on your supply chain and on your third-party vendors or providers. Cynthia: That's such a good point, Christian. And ultimately, I think it's best for organizations to think about it early. So it really needs to be embedded as part of any kind of supply chain due diligence, where maybe a new question needs to be added to a due diligence questionnaire on suppliers about whether they use AI, and then the cybersecurity around the AI that they use or contribute to. Because we've all read and heard in the papers, and been exposed to through client counseling, cybersecurity breaches that have come through the supply chain and may not be direct attacks on the client itself. And yeah, I mean, the contractual provisions then are really important. Like you said, making sure that the supplier notifies the customer very early on, and that there are cooperation and audit mechanisms. Asélle, anything else to add? Asélle: Yeah, I totally agree with what was said. I think beyond just the legal requirements, it is ultimately a question of defending your business and your data, whether or not it's required by your customers or by specific legislation to which your organization may be subject. It's ultimately whether or not your business can withstand more sophisticated cyber attacks, and I therefore agree with both of you that organizations should take supply chain resilience, cybersecurity, and the generally higher risk of cyber attacks more seriously and put measures in place; it is better to invest now than later, after the attack. I also think that it is important for in-house teams to work together as cybersecurity threats are enhanced by AI, and these are the legal, IT security, risk management, and compliance teams. Sometimes, for example, legal teams might think that the IT security or incident response policies are owned by IT, so there isn't much contribution needed. Or the IT security teams might think the legal requirements are in the legal team's domain, so they'll wait to hear from legal on how to reflect those. So working in silos is not beneficial. IT policies, incident response policies, and training material on cybersecurity should be regularly updated by IT teams and reviewed by legal to reflect the legal requirements. The teams should collaborate on running tabletop incident response and crisis response exercises, because in a real scenario, they will need to work hand in hand to respond efficiently. Cynthia: Yeah, I think you're right, Asélle. I mean, obviously, any kind of breach is going to be multidisciplinary in the sense that you're going to have people who understand AI and understand, you know, the attack vector which used the AI. You know, other people in the organization will have a better understanding of notification requirements, whether that be notification under the cybersecurity directives and regulations or under the GDPR.
And obviously, if it's an attack that's come from the supply chain, there needs to be that coordination as well with the supplier management team. So it's definitely multidisciplinary and requires, obviously, cooperation and information sharing, in a way that's done in accordance with the regulatory requirements that we've talked about. So in sum, you have to think about AI and cybersecurity both from a design perspective as well as from the supply chain perspective, and how AI might be used for attacks, whether through vulnerabilities in a network, data poisoning, or model poisoning. Think about the horizontal requirements across the EU in relation to cybersecurity requirements for keeping systems safe, and, if you're an unfortunate victim of a cybersecurity attack where AI has been used, think about the notification requirements and ultimately that multidisciplinary team that needs to be put in place. So thank you, Asélle, and thank you, Christian. We really appreciate the time to talk together this morning. And thank you to our listeners. And please tune in for our next Tech Law Talks on AI. Asélle: Thank you. Christian: Thank you. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.  Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.  All rights reserved. Transcript is auto-generated.
    --------  
    21:05
  • AI explained: AI regulations and PRC court decisions in China
    Reed Smith lawyers Cheryl Yu (Hong Kong) and Barbara Li (Beijing) explore the latest developments in AI regulation and litigation in China. They discuss key compliance requirements and challenges for AI service providers and users, as well as the emerging case law on copyright protection and liability for AI-generated content. They also share tips and insights on how to navigate the complex and evolving AI legal landscape in China. Tune in to learn more about China’s distinct approach to issues involving AI, data and the law.  ----more---- Transcript:  Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Cheryl: Welcome to our Tech Law Talks and new series on artificial intelligence. Over the past months, we have been exploring the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI regulations in China and the relevant PRC court decisions. My name is Cheryl Yu, a partner in the Hong Kong office at Reed Smith, and I'm speaking today with Barbara Li, who is a partner based in our Beijing office. Barbara and I are going to focus on the major legal regulations on AI in China and also some court decisions relating to AI tools, to see how China's legal landscape is evolving to keep up with the technological advancements. Barbara, can you first give us an overview of China's AI regulatory developments? Barbara: Sure. Thank you, Cheryl. Very happy to do that. In the past few years, the regulatory landscape governing AI in China has been evolving at a very fast pace. Although China does not have a comprehensive AI law like the EU AI Act, China has been leading the way in rolling out multiple AI regulations governing generative AI, deepfake technologies, and algorithms. In July 2023, China issued the Generative AI Measures, becoming one of the first countries in the world to regulate generative AI technologies. These measures apply to generative AI services offered to the public in China, regardless of whether the service provider is based in China or outside China. And international investors are allowed to set up local entities in China to develop and offer AI services in China. In relation to the legal obligations, the measures lay down a wide range of legal requirements for providing and using generative AI services, including content screening, protection of personal data and privacy, safeguarding IPR and trade secrets, and taking effective measures to prevent discrimination when a company designs algorithms, chooses training data, or creates a large language model. Cheryl: Many thanks, Barbara. These are very important compliance obligations that businesses should not neglect when engaging in the development of AI technologies, products, and services. I understand that one of the biggest concerns in AI is how to avoid hallucination and misinformation. I wonder if China has adopted any regulations to address these issues? Barbara: Oh, yes, definitely, Cheryl. China has adopted multiple regulations and guidelines to address these concerns.
For example, the Deep Synthesis Rule, which became effective in January 2023, aims to govern the use of deepfake technologies in generating or changing digital content. And when we talk about digital content, the regulation refers to a wide range of digital media, including video, voices, text, and images. And the deep synthesis service providers must refrain from using deep synthesis services to produce or disseminate illegal information. And also, the companies are required to establish and improve proper compliance and risk management systems, such as having a user registration system, doing an ethics review of the algorithm, protecting personal information, taking measures to protect IT and prevent misinformation and fraud, and, last but not least, setting up a data breach response. In addition, China's national data and cybersecurity regulator, the CAC, has issued a wide range of rules on algorithm filing, and these algorithm filing requirements became effective in June 2024. According to this 2024 regulation, if a company uses algorithms in its online services with functions such as blogs, chat rooms, public accounts, short videos, or online streaming, and these functions are capable of influencing public opinion or driving social engagement, then the service provider is required to file its algorithm with the CAC, the regulator, within 10 working days after the launch of the service. So in order to finish the algorithm filing, the company is required to put together comprehensive information and documentation, including the algorithm assessment report, security monitoring policy, data breach response plan, and also some technical documentation to explain the function of the algorithm. And also, the CAC has periodically published a list of filed algorithms, and up to the 30th of June 2024, we have seen over 1,400 AI algorithms, developed by more than 450 companies, successfully filed with the CAC. So you can see this large number of AI algorithm filings indeed has highlighted the rapid development of AI technologies in China. And also, we should remember that a large volume of data is the backbone of AI technologies. So we should not forget about the importance of data protection and privacy obligations when you develop and use AI technologies. Over the years, China has built up a comprehensive data and privacy regime with three pillars of national laws: the Personal Information Protection Law, normally known by its short name PIPL, the Cybersecurity Law, and the Data Security Law. So the data protection and cybersecurity compliance requirements have got to be properly addressed when companies develop AI technologies, products, and services in China. And indeed, there are some very complicated data requirements and issues under the Chinese data and cybersecurity laws, for example, how to address cross-border data transfers. So it's very important to remember those requirements. The Chinese data requirements and legal regime are very complex, so given the time constraints, probably we can find another time to specifically talk about the data issues under the Chinese regime. Cheryl: Thanks, Barbara. Indeed, there are some quite significant AI and data issues which would warrant more time for a deeper dive.
Barbara, can you also give us an update on the AI enforcement status in China and share with us your views on the best practices that companies can adopt to mitigate those risks? Barbara: Yes, thanks, Cheryl. Indeed, Chinese AI regulations do have teeth. For example, the violation of the algorithm filing requirement can result in fines of up to RMB 100,000. And the failure to comply with those compliance requirements in developing and using AI technologies can also trigger legal liability under the Chinese PIPL, which is the Personal Information Protection Law, and also the Cybersecurity Law and the Data Security Law. And under those laws, a company can face a monetary fine of up to RMB 50 million or 5% of its last year's turnover. In addition, the senior executives of the company can be personally subject to liability, such as a fine of up to RMB 1 million, and the senior executives can be barred from taking senior roles for a period of time. In the worst scenario, criminal liability can be pursued. So, in the first and second quarters of this year, 2024, we have seen some companies caught by the Chinese regulators for failing to comply with the AI requirements, ranging from failure to monitor AI-generated content to neglecting the AI algorithm filing requirements. Noncompliance has resulted in the suspension of their mobile apps pending rectification. As you can see, the noncompliance risk is indeed real, so it's very important for businesses to pay close attention to the relevant compliance requirements. So to give our audience a few quick takeaways in terms of how to address AI regulatory and legal risk in China, we would say companies can consider three important compliance steps. The first is that, with the fast development of AI in China, it's crucial to closely monitor the legislative and enforcement developments in AI, data protection, and cybersecurity. While the Chinese AI and data laws share some similarities with the laws in other countries, for example the EU AI Act and the European GDPR, Chinese AI and data laws and regulations indeed have their unique characteristics and requirements. So it's extremely important for businesses to understand the Chinese AI and data laws, conduct proper analysis of the key business implications, and take appropriate compliance action. So that is number one. And the second one, I would say, is that in terms of your specific AI technologies, products, and services rolling out in the China market, it's very important to do the required impact assessment to ensure compliance with accountability, bias, and accessibility requirements, and to build up a proper system for content monitoring. If your algorithm falls within the scope of the filing requirements, you definitely need to prepare the required documents and finish the algorithm filing as soon as possible to avoid potential penalties and compliance risks. And the third one is that you should definitely prepare your China AI policies and AI terms of use, build up your AI governance and compliance mechanism in line with the evolving Chinese AI regulation, and train your team on the compliant use of AI in their day-to-day work. It's also very interesting to note that in the past months, Chinese courts have handed down some landmark rulings in relation to AI technology.
Those rulings cover various AI issues, ranging from copyright protection of AI-generated content to data scraping and privacy. Cheryl, can you give us an overview of those cases and what takeaways we can get from those rulings? Cheryl: Yes, thanks, Barbara. As mentioned by Barbara, with the emerging laws in China, there have been a lot of questions relating to how AI technologies interact with copyright law. The most commonly discussed questions include: if users instruct an AI tool to produce an image, who is the author of the work, the AI tool or the person giving instructions to the AI tool? And if the AI tool generates a work that bears a strong resemblance to another work already published, would that constitute an infringement of copyright? Before 2019, the position in China was that works generated by AI machines generally were not subject to copyright protection. For a work to be copyrightable, the courts would generally consider whether the work was created by natural persons and whether the work was original. Subsequently, there has been a shift in the Chinese courts' position, in which the courts are more inclined to protect the copyright of AI-generated content. For example, the Nanshan District Court of Shenzhen handed down a decision, Shenzhen Tencent versus Shanghai Yinsheng, in 2019. The court held that the plaintiff, Shenzhen Tencent, should be regarded as the author of an article which was generated by an AI system under the supervision of the plaintiff. The court further held that the intellectual contribution of the plaintiff's staff, including inputting data, setting prompts, selecting the template, and the layout of the article, played a direct role in shaping the specific expression of the article. Hence, the article demonstrated sufficient originality and creativity to warrant copyright protection. Similarly, the Beijing Internet Court reached the same decision in Li Yunkai v. Liu Yuanchun in 2023, and the court held that AI-generated content can be subject to copyright protection if the human user has contributed substantially to the creation of the work. In its judgment, the court ruled that an AI machine cannot be an author of the work, since it is not human, and that the plaintiff is entitled to the copyright of the photo generated by the AI machine on the grounds that the plaintiff personally chose and arranged the order of prompts, set the parameters, and selected the style of the output, which warrants a sufficient level of originality in the work. As you may note, in both cases, for a work to be copyrightable in China, the courts no longer required it to be created entirely by a human being. Rather, the courts focused on whether there was an element of original intellectual achievement. Interestingly, there's another case, handed down by the Hangzhou Internet Court in 2023, which has been widely criticized in China. This court decided that the AI was not an author, not because it was non-human, but because it was a weak AI and did not possess the relevant capability for intellectual creation. And this case has created some uncertainty as to the legal status of an AI that is stronger and has the intellectual capability to generate original works, and questions such as whether such an AI would qualify as an author and be entitled to copyright over its works remain to be seen as the technology and the law develop. Barbara: Thank you, Cheryl. We now understand the position in relation to authorship under Chinese law.
What about the platforms which provide generative AI tools? I understand that they also face the question of whether there will be secondary liability for infringement in AI-generated content output. Have the Chinese courts issued any decisions on this topic? Cheryl: Many thanks, Barbara. Yes, there has been some new development on this issue in China in early 2024. The Guangzhou Internet Court published a decision on this issue, which is the first decision in China regarding the secondary liability of AI platform providers. The plaintiff in this case has exclusive rights to a Japanese cartoon image, Ultraman, including various rights such as reproduction, adaptation, etc. And the defendant was an undisclosed AI company that operates a website with an AI conversation function and an AI image generation function. These functions were provided using an unnamed third-party provider's AI model, which was connected to the defendant's website. The defendant allowed visitors to its website to use this AI model to generate images, but it hadn't created the AI model itself. The plaintiff eventually discovered that if one inputs prompts related to Ultraman, the generative AI tool would produce images highly similar to Ultraman. The plaintiff then brought an action for copyright infringement against the defendant. And the court held that, in this case, the defendant platform had breached a duty of care to take appropriate measures to ensure that outputs do not contravene copyright law and the relevant AI regulations in China, and that the output the generative AI tool created had infringed the copyright of the protected works. So this Ultraman case serves as a timely reminder to Chinese AI platform providers that it is of utmost importance to comply with the relevant laws and regulations in China. And another interesting point of law is the potential liability of AI developers in the scenario where copyrighted materials are used to train the AI tool. So far, there haven't been any decisions relating to this issue in China, and it remains to be seen whether AI model developers would be liable for infringement of copyright in the process of training their AI models with copyrightable materials, and if so, whether there are any defenses available to them. We shall continue to follow up and keep everyone posted in this regard. Barbara: Yes, indeed, Cheryl, those are all very interesting developments. So to conclude our podcast today: with the advancement of AI technology, it's almost inevitable that more legal challenges will emerge related to the training and application of generative AI systems. The courts will be expected to develop innovative legal interpretations to strike a balance between safeguarding copyright and promoting technology innovation and growth. So our team at Reed Smith in Greater China will bring you all the updates on these developments. So please do stay tuned. Thank you. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.  Disclaimer: This podcast is provided for educational purposes.
It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.  All rights reserved. Transcript is auto-generated.
    --------  
    21:08
  • AI explained: AI and financial services
    Reed Smith partners Claude Brown and Romin Dabir discuss the challenges and opportunities of artificial intelligence in the financial services sector. They cover the regulatory, liability, competition and operational risks of using AI, as well as the potential benefits for compliance, customer service and financial inclusion. They also explore the strategic decisions firms need to make regarding the development and deployment of AI, and the role regulators play in supervising and embracing AI. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Claude: Welcome to Tech Law Talks and our new series on artificial intelligence, or AI. Over the coming months, we'll explore key challenges and opportunities within the rapidly evolving AI landscape. Today, we're going to focus on AI in financial services. And to do that, I'm here. My name is Claude Brown. I'm a partner at Reed Smith in London in the Financial Industry Group. And I'm joined by my colleague, Romin Dabir, who's a financial services partner, also based in London.  Romin: Thank you, Claude. Good to be with everyone.  Claude: I mean, I suppose, Romin, one of the things that strikes me about AI and financial services is it's already here. It's not something that's coming in. It's been established for a while. We may not have called it AI, but in many respects it is. And perhaps it might be helpful to sort of just review where we're seeing AI already within the financial services sector.  Romin: Yeah, absolutely. No, you're completely right, Claude. Firms have been using AI or machine learning or some form of automation in their processes for quite a while, as you rightly say. And this has been mainly driven by searches for efficiency and cost savings, as I'm sure the audience would appreciate. There have been pressures on margins in financial services for some time. So firms have really sought to make their processes, particularly those that are repetitive and high volume, as efficient as possible. And parts of their business, which AI has already impacted, include things like KYC, AML checks, back office operations. All of those things are already having AI applied to them.  Claude: Right. I mean, some of these things sound like a good thing. I mean, improving customer services, being more efficient in the know-your-customer, anti-money laundering, KYC, AML areas. I suppose robo-advice, as it's called sometimes, or sort of asset management, portfolio management advice, might be an area where one might worry. But I mean, the general impression I have is that the regulators are very focused on AI. And generally, when one reads the press, you see it being more the issues relating to AI rather than the benefits. I mean, I'm sure the regulators do recognize the benefits, but they're always saying, be aware, be careful, we want to understand better. Why do you think that is? Why do you think there are areas of concern, given the good that could come out of AI?  Romin: No, that's a good question.
I think regulators feel a little bit nervous when confronted by AI because obviously it's novel, it's something new, well, relatively new, that they are still trying to understand fully and get their arms around. And there are issues that arise where AI is applied to new areas. So, for example, you give the example of robo-advice or portfolio management. Now, these were activities that traditionally have been undertaken by people. And when advice or investment decisions are made by people, it's much easier for regulators to understand and to hold somebody accountable for that. But when AI is involved, responsibility sometimes becomes a little bit murkier and a little bit more diffuse. So, for example, you might have a regulated firm that is using software or AI that has been developed by a specialist software developer. And that software is able to effectively operate with minimal human intervention, which is really one of the main drivers behind the adoption of AI, because obviously it costs less and it is less resource intensive in terms of skilled people to operate it. But under those circumstances, who has the regulatory responsibility? Is it the software provider, who makes the algorithm, programs the software, etc., after which the software goes off and makes decisions or provides the advice? Or is it the firm that's actually running the software on its systems when it hasn't actually developed that software? So there are some knotty problems, I think, that regulators are still mulling through and working out what they think the right answers should be.  Claude: Yeah, I can see that. Because I suppose historically the classic model, certainly in the UK, has been that the regulators say: if you want to outsource something, you, the regulated entity, be you a broker or asset manager or a bank or an investment firm, you are the authorized entity, and you're responsible for your outsource provider. But I can see with AI that must become a harder question to determine. You know, to take your example, if the AI is performing some sort of advisory service, has the perimeter gone beyond the historically regulated entity, and does it then start to impact on the software provider? That's sort of one point. And, you know, how do you allocate that responsibility? That strict bright line, you want to give it to a third-party provider, it's your responsibility. How do you allocate that responsibility between the two entities? Even outside the regulator's oversight, there's got to be an allocation of liability and responsibility.  Romin: Absolutely. And as you say, with traditional outsourced services, it's relatively easy for the firm to oversee the activities of the outsourced services provider. It can get MI, it can have systems and controls, it can randomly check on how the outsource provider is conducting the services. But with something that's quite black box, like some algorithm, a trading algorithm for portfolio management, for example, it's much harder for the firm to demonstrate that oversight. It may not have the internal resources. How does it really go about doing that? So I think these questions become more difficult. And I suppose the other thing that makes it more difficult with AI, compared to the traditional outsourcing model, even the black box algorithms, is that by and large they're static. You know, whatever it does, it keeps on doing. It doesn't evolve by its own processes, which AI does.
Claude: So it doesn't matter really whether it's outsourced or it's in-house to the regulated entity. That thing's sort of changing all the time, and supervising it is a dynamic process, and the speed at which it learns, which is in part driven by its usage, means that the dynamics of its oversight must be able to respond to the speed of it evolving.  Romin: Absolutely. And you're right to highlight all of the sort of liability issues that arise, not just simply vis-a-vis sort of liabilities to the regulator for performing the services in compliance with the regulatory duties, but also to clients themselves. Because if the algo goes haywire and suddenly, you know, loses customers loads of money or starts making trades that were not within the investment mandate provided by the client, where does the buck stop? Is that with the firm? Is that with the person who provided the software? It's all, you know, a little difficult.  Claude: I suppose the other issue is that at the moment there's a limited number of outsourced providers. And one might reasonably expect, competition being what it is, for that to proliferate over time. But until it does, I would imagine there's a sort of competition issue: not only a competition issue in one system gaining a monopoly, but that particular form of large language model learning then starts to dominate and produce, for want of a better phrase, a groupthink. And I suppose one of the things that puzzles me is, is there a possibility that you get a systemic risk by the alignment of the thinking of various financial institutions using the same or a similar system of AI processes, which then start to produce a common result? And then possibly producing a common misconception, which introduces a sort of black swan event that wasn't anticipated.  Romin: And sort of self-reinforcing feedback loops. I mean, there was the story of the flash crash that occurred with all these algorithmic trading firms all of a sudden reacting to the same event and all placing sell orders at the same time, which created a market disturbance. That was a number of years ago now. You can imagine such effects, as AI becomes more prevalent, potentially being even more severe in the future.  Claude: Yeah, no, I think that's, again, an issue that regulators do worry about from time to time.  Romin: And I think another point, as you say, is competition. Historically, asset managers have differentiated themselves on the basis of the quality of their portfolio managers and the returns that they deliver to clients, etc. But in a world where we have a number of software providers, maybe one or two of which become really dominant, and lots of firms are employing technology provided by these firms, differentiating becomes more difficult in those circumstances.  Claude: Yeah, and I guess to unpack that a little bit: as you say, portfolio managers have distinguished themselves by better returns than the competition, and certainly better returns than the market average, and that then points to the quality of their research and their analytics. So then I suppose the question becomes, to what extent is AI being used to produce that differentiator, and how do you charge your, you know, your fees based on that? Is it that you've got better technology than anyone else, or that you've got a better way to deploy the technology, or is it that you've just paid more for your technology? Because transforming the input of AI into the analytics and the portfolio management is quite a difficult thing to do at the best of times. If it's internal, it's clearly easier because it's just your mousetrap and you built that mousetrap. But when you're outsourcing, particularly in your example, where you've got a limited number of technology providers, that split I can see becoming quite contentious.  Romin: Yeah, absolutely. Absolutely. And I think firms themselves will need to sort of decide what approach they are going to take to the application of AI, because if they go down the outsourced approach, that raises the issues that we've discussed so far. Conversely, if they adopt a sort of in-house model, they have more control, the technology's proprietary, and potentially they can distinguish and differentiate themselves better than by relying on an outsourced solution. But then, you know, the cost is far greater. Will they have the resources and expertise, really, to compete with these large specialist providers serving many different firms? There are lots of strategic decisions that firms need to make as well.  Claude: Yeah, but going back to the regulators for a moment, Romin, it does seem to me that there are some benefits to regulators in embracing AI within their own world, because certainly we already see the evidence that they're very comfortable using manipulation of large databases, for example, trade repositories or trade reporting. We can see sort of enforcement actions being brought using databases that have produced the information, the anomalies. And as I see it, AI can only improve that form of surveillance enforcement, whether that is market manipulation or insider dealing, or looking across markets to see whether sort of concurrent or collaborative activity is engaged in. And it may not get to the point where the AI is going to bring the whole enforcement action to trial. But it certainly makes that demanding surveillance and oversight role for a regulator a lot easier.  Romin: Absolutely. Couldn't agree more. I mean, historically, firms have often complained, and it's a very common refrain in the financial services markets: we have to make all these ridiculous reports, detailed reports, send all this information to the regulator. And firms were almost certain that it would just disappear into some black hole and never be looked at again. Historically, that was perhaps true, but with the new technology that is coming on stream, it gives regulators much more opportunity to meaningfully interrogate that data and use it to either bring enforcement action against firms or just supervise trends, risks, currents in markets which might otherwise not have been available or apparent to them.  Claude: Yeah, I mean, I think, to my mind, data before you apply technology to it is rather like the end of Raiders of the Lost Ark in the Spielberg film, you know, where they take the Ark of the Covenant and push it into that huge warehouse and the camera pans back and you just see massive, massive data. But I suppose you're right that with AI, you can go and find the crate with the thing in. Other Spielberg films are available. It seems to me almost inexorable that the use of AI in financial services will increase, and, you know, the potential and the efficiencies, particularly with large-scale and repetitive tasks and more inquiry. It's not just a case of automation, it's a case of sort of overseeing it. But I suppose that begs a bit of a question as to who's going to be the dominant force in the market.
Is it going to be the financial services firms or the tech firms that can produce more sophisticated AI models?  Romin: Absolutely. I mean, I think we've seen amongst the AI companies themselves, so, you know, the key players like Google, OpenAI, Microsoft, there's a bit of an arms race between themselves as to the best LLM, who can come up with the most accurate, best, fastest answers to queries. I think within AI and financial services, it's almost inevitable that there'll be a similar race. And I guess the jury's still out as to who will win. Will it be the financial services firms themselves, or will it be these specialist technology companies that apply their solutions in the financial services space? I don't know, but it will certainly be interesting to see.  Claude: Well, I suppose the other point with the technology providers, and you're right, I mean, you can already see that when you get into cloud-based services and software as a service and the others, is that the technology is becoming a dominant part of financial services, not necessarily the dominant part, but certainly a large part of it. And that, to my mind, raises a really interesting question about the commonality of technology in general and AI in particular. You know, you can now see these services, and particularly, you know, I can see this with AI as well, entering into a number of financial sectors which historically have been diffuse. So the use of AI, for example, in insurance, the use in banking, the use in asset management, the use in broking, the use in advisory services: there's now a coming together of the platforms and the technology, such as LLMs, across all of them. And that then begs the question, is there an operational resilience question? It's almost like, does AI ever become so pervasive that it is a bit like electricity, power? You can see that with CrowdStrike. Is the technology so all-pervasive that actually it produces an operational risk concern that would cause a regulator, to take it to an extreme, to alter the operational risk charge in the regulatory capital environment?  Romin: Yeah, exactly. I think this is certainly a space that regulators are looking at with increased attention, because some of the emerging risks, etc., might not be apparent. So, like you mentioned with CrowdStrike, nobody really knew that this was an issue until it happened. So regulators, I think, are very nervous of the unknown unknowns.  Claude: Yeah. I mean, it seems to me that AI has a huge potential in the financial services sector, in, A, facilitating the mundane, but also in being proactive in identifying anomalies, potentials for errors, potentials for fraud. It's like, you know, there's a huge amount that it can contribute. But as always, you know, that brings structural challenges.  Romin: Absolutely. And just on the point that we were discussing earlier about the increased efficiencies that it can bring to markets, you know, there's been a recognized problem with the so-called advice gap in the UK, where the kind of mass-affluent, less high-net-worth investors aren't really willing to pay for the receipt of financial advice. As technology gets better, the cost of accessing more tailored, intelligent advice will hopefully come down, leading to the ability for people to make more sensible financial decisions.  Claude: Which I'm sure is part of the responsibility of financial institutions to improve financial and fiscal education. That's going to be music to a regulator's ears.
Well, Romin, interesting subject, interesting area. We live, as the Chinese say, in interesting times. But I hope to those of you who've listened, it's been interesting. We've enjoyed talking about it. Of course, if you have any questions, please feel free to contact us, my colleague, Romin Dabir, or myself, Claude Brown. You can find our contact details accompanying this and also on our website. Thank you for listening.  Romin: Thank you. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.  Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.  All rights reserved.  Transcript is auto-generated.
    --------  
    24:43
  • AI explained: AI in shipping and aviation
    This episode highlights the new benefits, risks and impacts on operations that artificial intelligence is bringing to the transportation industry. Reed Smith transportation industry lawyers Han Deng and Oliver Beiersdorf explain how AI can improve sustainability in shipping and aviation by optimizing routes and reducing fuel consumption. They emphasize AI’s potential contributions from a safety standpoint as well, but they remain wary of risks from cyberattacks, inaccurate data outputs and other threats. ----more---- Transcript: Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Han: Hello, everyone. Welcome to our new series on AI. Over the coming months, we will explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, my colleague Oliver and I will focus on AI in shipping and aviation. My name is Han Deng, a partner in the transportation industry group in New York, focusing on the shipping industry. So AI and machine learning have the potential to transform the transportation industry. What do you think about that, Oliver?  Oliver: Thanks, Han, and it's great to join you. My name is Oliver Beiersdorf. I'm a partner in our transportation industry group here at Reed Smith, and it's a pleasure to be here. I'm going to focus a little bit on the aviation sector. And in aviation, AI is really contributing to a wide spectrum of value opportunities, including enhancing efficiency, as well as safety-critical applications. But we're still in the early stages. The full potential of AI within the aviation sector is far from being harnessed. For instance, there's huge potential for use in areas which will reduce human workload or increase human capabilities in very complex scenarios in aviation.  Han: Yeah, and there's similar potential within the shipping industry, with platforms designed to enhance collision avoidance, route optimization, and sustainability efforts. In fact, AI is predicted to contribute $2.5 trillion to the global economy by 2030.  Oliver: Yeah, that is a lot of money, and it may even be more than that. But with that economic potential, of course, also come substantial risks. And AI users and operators, and industries now getting into using AI, have to take preventative steps to avoid cybersecurity attacks, inaccurate data outputs, and other threats.  Han: Yeah, and at Reed Smith, we help our clients to understand how AI may affect their operations, as well as how AI may be utilized to maximize potential while avoiding its pitfalls and legal risks. During this episode, we will highlight elements within the transportation industry that stand to benefit significantly from AI.  Oliver: Yeah, so there are a couple of topics that we want to discuss here in the next section, and there are really three of them which overlap between shipping and aviation in terms of the use of AI. Those topics are sustainability, safety, and business efficiency with the use of AI. In terms of sustainability, across both sectors, AI can help with route optimization, which saves on fuel and thus enhances sustainability.
Han: AI can make a significant difference in sustainability across the whole of the transportation industry by decreasing emissions. For example, within the shipping sector, emerging tech companies are developing systems that can directly link the information generated about direction and speed to a ship's propulsion system for autonomous regulation. AI also has the potential to create optimized routes using sensors that track and analyze real-time and variable factors such as wind speed and current. AI can determine both the ideal route and speed for a specific ship at any point in the ocean to maximize efficiency and minimize fuel usage.  Oliver: So you can see the same kind of potential in the aviation sector. For example, AI has the potential to assist with optimizing flight trajectories, including creating so-called green routes and increasing prediction accuracy. AI can also provide key decision makers and experts with new features that could transform air traffic management in terms of new technologies and operating procedures, creating greater efficiencies. Aside from reducing emissions, these advances have the potential to offer big savings in energy costs, which, of course, is a major factor for airlines and other players in the industry, with the cost of fuel being a major factor in their budgets, and in particular, jet fuel for airlines. So advances here really have the potential to offer big savings that will enable both sectors to enhance profitability while decreasing reliance on fossil fuels.  Han: I totally agree. And further, you know, in terms of safety, AI can be used within the transportation industry to assist with safety assessment and management by identifying, managing, and predicting various safety risks.  Oliver: Right. So, in the aviation sector, AI has the potential to increase safety by driving the development of new air traffic management systems to maintain distances between aircraft, plan safer routes, assist in approaches to busy airports, and develop new conflict detection, traffic advisories, and resolution tools, along with cyber resilience. What we're seeing, of course, in aviation, and there's a lot of discussion about it, is the use of drones and eVTOLs, so electric vertical takeoff and landing aircraft, all of which add more complexity to the existing use of airspace. And you're seeing many players in the industry, including retailers who deliver products, using eVTOLs and drones to deliver product. And AI can be a useful assistant to ATM actors, from planning to operations, and really across all airspace users. It can benefit airline operators as well, who depend on predictable, routine routes and services, by using aviation data to predict air traffic management more accurately.  Han: That's fascinating, Oliver. Same within the shipping sector: for example, AI has the capacity to create 3D models for areas and use those models to simulate the impact of disruptions that may arise. AI can also enhance safety features through the use of vision sensors that can respond to ship traffic and prevent accidents. As AI begins to be able to deliver innovative responses that enhance the predictability and resilience of the traffic management system, efficiency gains will increase productivity and enhance the use of scarce resources like airspace, runways, and so on.  Oliver: Yeah. So it'll be really interesting to follow, you know, how this develops. It's all still very new.
Another area where you're going to see the use of AI, and we already are, is business efficiency, again in both the shipping and aviation sectors. There's really a lot of potential for AI, including in generating data and cumulative reports based on real-time information. And by increasing the speed at which that information is processed, companies can identify issues early on and perform predictive maintenance to minimize disruptions. The ability to generate reports is also going to be useful in ensuring compliance with regulations and in coordinating work with contractors, vendors, partners, such as codeshare partners in commercial aviation, and other stakeholders in the industry.  Han: Yeah, and AI can be used to perform comprehensive audits to ensure that all cargo is present and that it complies with contracts and local and national regulations, which can help identify any discrepancies quickly and lead to swift resolution. AI can also be used to generate reports based on this information to provide autonomous communication with contractors about cargo location and estimated time of arrival, increasing communication and visibility in order to inspire trust and confidence. Aside from compliance, these reports will also be useful in ensuring efficiencies in management and in business development and strategy by performing predictive analytics in various areas, such as demand forecasting.  Oliver: And despite all these benefits, of course, as with any new technology, you need to weigh them against the potential risks and the various things that can go wrong when using AI. So let's talk a little bit about cybersecurity, regulation being unable to keep pace with technology development, inaccurate data, and industry fragmentation. Things are just happening so fast that there's a huge risk associated with the use of artificial intelligence in many areas, including the transportation industry, particularly as a result of cybersecurity attacks. Data security breaches can affect airline operators and can also occur on vessels, in port operations, and in undersea infrastructure. Cyber criminals, who are becoming more and more sophisticated, can even manipulate data inputs, causing AI platforms on vessels to misidentify malicious maritime activity as legitimate or safe. Actors using AI are going to need to ensure the cyber safety of AI-enabled systems; that's a focus in both shipping and aviation, and in other industries. Businesses and air traffic providers need to ensure that AI-enabled applications have robust cybersecurity elements built into their operational and maintenance schedules. Shipping companies will need to update their current cybersecurity systems and risk assessment plans to address these threats and comply with relevant data and privacy laws. A recent example is the CrowdStrike software outage on July 19th, which affected almost every industry but was particularly acute in commercial aviation, with literally thousands of flights being canceled and massive disruption to the industry. And interestingly, the CrowdStrike outage involved software that is intended to protect against cyber criminal risk; a single programming issue resulted in systems being down and these types of massive disruptions because, of course, in both aviation and shipping, we are so reliant on technology.
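As a concrete illustration of guarding against the manipulated or corrupted data inputs mentioned above, here is a minimal Python sketch of a plausibility check that screens a sensor reading against recent history before it reaches a navigation or route-optimization model. The z-score threshold, window size, field names and example values are assumptions for illustration only; real maritime and aviation systems combine many signals and keep humans in the loop.

```python
# Minimal sketch: reject sensor readings that deviate wildly from recent
# history before they feed an AI navigation model. Thresholds are
# illustrative assumptions, not from any real maritime or aviation system.
from statistics import mean, stdev

def is_plausible(history: list[float], new_value: float,
                 max_z: float = 4.0) -> bool:
    """Return False for readings far outside the recent window.

    A crude first line of defense against spoofed or corrupted inputs:
    reject a new reading whose z-score against recent history exceeds
    max_z. Flagged readings should go to human review, not be dropped
    silently.
    """
    if len(history) < 5:
        return True  # not enough history to judge; pass through for review
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value == mu
    return abs(new_value - mu) / sigma <= max_z

# Usage: screen a suspicious speed-over-ground reading before it reaches
# the collision-avoidance or route-optimization model.
recent_sog = [12.1, 12.3, 12.0, 12.4, 12.2, 12.3]
print(is_plausible(recent_sog, 12.5))  # True  - consistent with history
print(is_plausible(recent_sog, 48.0))  # False - likely spoofed or faulty
```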
Oliver: The issue of regulation, and really the inability of regulators to keep up with this incredibly fast pace, is another concern. Regulations are always reactive, and in this instance AI continues to develop rapidly while regulations do not necessarily effectively address AI in its most current form. The unchecked use of AI could create and increase the risk of cybersecurity attacks and data privacy law violations, and frankly create other risks that we haven't even been able to predict.  Han: Wow, we really need to buckle up when it comes to cybersecurity. And talking about inaccurate data, the quality of AI depends upon the quality of its data inputs. Therefore, misleading and inaccurate data sets could lead to imprecise predictions for navigation. Alternatively, there is a risk that users may rely too heavily on AI platforms to make important decisions about collision avoidance and route optimization, so shipping companies must be sure to train their employees properly on the appropriate uses of AI. And speaking of industry fragmentation, AI is an expensive tool. Poorer economies may be unable to integrate AI platforms into their maritime or aviation operations, which could fragment global trade. For example, without harmony in AI use and proficiency, the shipping industry may see a decrease in revenue, a lack of global governance, and the rise of black-market dark fleets.  Oliver: There's just so much to talk about in this area; it's really almost mind-blowing. But in conclusion, a couple of points have come out of our discussion: if the industry takes action and fully captures AI-enabled value opportunities in both the short and the long term, the potential for AI is huge. But we have to be very mindful of the associated risks and empower private industry and governments to provide resolutions through technology as well as regulation. So thank you very much for joining us. That's it for today, and we really appreciate you listening in to Tech Law Talks.  Han: Thank you.  Oliver: Thank you.  Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.  Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.  All rights reserved. Transcript is auto-generated.
    --------  
    18:37
