Navigating AI, Regulation and Rapid Change in Tech and Life Sciences
Insight #1 | Insight #2 | Full Webinar Video
As AI and data-driven technologies transform industries at lightning speed, technology and life sciences companies face a critical challenge: how to stay ahead of innovation while managing growing legal and regulatory risks.
Watch featured highlights from the discussion and hear firsthand insights from our expert panel.
Insight #1
Reducing Human Oversight of AI
Frank Pasquale, law professor at Cornell University, discusses the paradox of automation, explaining why the shift from human-led to fully automated systems can create unexpected difficulties – and why the transition period may be even more challenging than either endpoint.
(SPEECH)
[MUSIC PLAYING]
(DESCRIPTION)
Text: Travelers. Reducing Human Oversight of AI. Frank Pasquale. Law Professor, Cornell University.
(SPEECH)
FRANK PASQUALE: I think one of the things that's just so fascinating and challenging here is that there's a paradox in that there's a transition, say, from fully human management of a process to fully automated. And we all have a sense that fully automated should be both better, and cheaper, and faster and just overall an advance. But it turns out in that middle period, where you're doing the transition, that can be even harder or worse than either side.
[MUSIC PLAYING]
(DESCRIPTION)
Text: Travelers. Reducing Human Oversight of AI. travelers.com. © 2025 The Travelers Indemnity Company. All rights reserved. Travelers and the Travelers Umbrella logo are registered trademarks of The Travelers Indemnity Company in the U.S. and other countries.
Insight #2
The Impact of a Changing Legal and Regulatory Landscape on Life Sciences
Kashef Qaadri, Software Technology Leader, Life Science Group at Bio-Rad Laboratories, explores the challenges of keeping pace with rapid regulatory change and shares why continuous diligence and adaptability are essential for compliance in an evolving landscape.
(SPEECH)
[MUSIC PLAYING]
(DESCRIPTION)
Logo: Travelers. Text: Impact of Changing Legal/Regulatory Landscape on Life Sciences.
Kashef Qaadri, Software Technology Leader, Life Science Group, Bio-Rad Laboratories. Kashef speaks to us seated in front of a plain wall.
(SPEECH)
KASHEF QAADRI: We're living in a very changing legal and regulatory landscape. And fundamentally, life science innovation is going to always outpace legislation. Staying compliant in this rapidly shifting environment of data privacy, AI regulations and health standards requires continuous diligence and attention.
Emerging AI transparency and data usage policies are going to require organizations to ensure compliance while maintaining global interoperability. And at this point, it seems like regulators are playing catch-up. And I think companies must anticipate future legal frameworks, while remaining adaptable in today's evolving rules and conditions. And for the instruments generating experimental data, tying it back to the life science piece, compliance must also address the accuracy, reproducibility and auditability required by those regulatory bodies.
[MUSIC PLAYING]
(DESCRIPTION)
Logo: Travelers. Impact of Changing Legal/Regulatory Landscape on Life Sciences. travelers.com. © 2025 The Travelers Indemnity Company. All rights reserved. Travelers and the Travelers Umbrella logo are registered trademarks of The Travelers Indemnity Company in the U.S. and other countries.
Learn more about navigating the complex legal and regulatory challenges of AI and data-driven technologies as James Standish, Vice President and Technology and Life Sciences Practice Leader at Travelers, facilitates the conversation featuring insights and practical strategies to help technology and life sciences companies manage these risks effectively.
Key takeaways:
- Understand the evolving legal and regulatory landscape, including recent legislation like the EU AI Act and the California AI Transparency Act, and what it means for businesses.
- Learn how companies navigate the regulatory environment while continuing to innovate.
- Gain practical insights and legal risk management strategies to help balance innovation with compliance.
(SPEECH)
[MUSIC PLAYING]
(DESCRIPTION)
Text: Travelers. Balancing Innovation with Compliance.
James Standish, Vice President and Technology & Life Sciences Practice Leader, Travelers.
(SPEECH)
JAMES STANDISH: Hello and welcome to our webinar. Building upon our 40 years of experience at Travelers underwriting technology and life sciences companies, today we're going to dive into the evolving legal and regulatory landscape surrounding data and technology innovation. Before we get underway, please take a look at these important notes and review the disclaimer wording.
(DESCRIPTION)
Text: Important Note. This material does not amend, or otherwise affect, the provisions or coverages of any insurance policy or bond issued by Travelers. It is not a representation that coverage does or does not exist for any particular claim or loss under any such policy or bond. Coverage depends on the facts and circumstances involved in the claim or loss, all applicable policy or bond provisions, and any applicable law. Availability of coverage referenced in this material can depend on underwriting qualifications and state regulations. The information provided in this material is intended as informational and is not intended as, nor does it constitute, legal or professional advice or an endorsement or testimonial by Travelers for a particular product, service or company. Travelers does not warrant that adherence to, or compliance with, any recommendations, testimonials, best practices or guidelines will result in a particular outcome. In no event will Travelers, or any of its subsidiaries or affiliates, be liable in tort or in contract to anyone who has access to or uses this information for any purpose. Unless otherwise specified, no sponsorship, affiliation or endorsement relationship exists as between Travelers and any of the entities referenced in this presentation. © 2024 The Travelers Indemnity Company. Travelers and The Travelers Umbrella are registered trademarks of The Travelers Indemnity Company in the U.S. and other countries. All rights reserved.
(SPEECH)
With the rapid advancements in automation, machine learning and data-driven technologies, companies in the technology and life sciences sectors face increasingly complex challenges. Our goal today is to help you balance innovation with legal and regulatory risk management, offering practical takeaways from our panel of industry leaders.
(DESCRIPTION)
Panelists. Pictures of three men. Text: Frank Pasquale, Law Professor, Cornell University. Kashef Qaadri, Software Technology Leader, Life Science Group, Bio-Rad Laboratories. Michael Ciancio, Vice President, Product Marketing, IntelePeer.
(SPEECH)
Joining us today, we have Professor Frank Pasquale from Cornell University Law and Cornell Tech, where he's an expert on the legal environment of technology and data. Also with us are two of our long-term Travelers technology and life sciences customers, who will share how their companies are advancing innovation while navigating the legal and regulatory environment impacting the use of data and technology.
Kashef Qaadri is joining us from Bio-Rad Laboratories, where he leads software strategy. And Michael Ciancio is joining us from IntelePeer, where he's vice president of product solutions and marketing. So let's get to it.
(DESCRIPTION)
The three panelists appear with James on a video call.
(SPEECH)
Our first question is for you, Professor Pasquale. You're a noted expert on the law of data, technology and machine learning. Can you please share with our audience your perspective on the current legal and regulatory challenges that technology and life sciences companies face, particularly when they're innovating their products and services?
FRANK PASQUALE: Sure, I'd be happy to. I think this is a very exciting time to be in these fields. But it's also a particularly interesting time for not just those who are in the fields, but the lawyers that are regulating them. And that's because there are a lot of challenges coming forward with respect to the legal and policy implications of AI.
So I'd start, for example, with respect to AI-specific challenges with some of the intellectual property lawsuits that are now dogging some of the leading providers in this area. There are ongoing copyright lawsuits. And it's really up in the air as to how courts will treat copying and the use of copyrighted works to create AI-generated materials ranging from texts to code to images to video.
There's also a lot of ambitious AI legislation around the world. So, for example, in the EU you have the EU AI Act. And this is a major effort toward AI regulation that I think is going to be not just powerful within the EU, but also is going to be a global model.
You have something called the Brussels effect, whereby a lot of countries look to what Europe does and then either copy it or modify from that baseline. You also have some very ambitious legislation coming out of California. And we have a lot of questions about the resources that are going to be devoted to enforcing these laws. What are going to be the methods of enforcement? Are they going to be paper tigers? Or are they going to be something that's really going to affect the industry?
Now with respect to life sciences, I mean, I think there are some really interesting challenges there, particularly given the change with respect to administrations coming up. And I think that with respect to life science-specific challenges, we've got to really look at reimbursement rates. We have to look at, what are going to be the particular policies that are going to be taking place as we've got new cabinet nominees, new folks in government?
We know that there are some worries about, for example, Medicare drug price negotiations. Which direction are those going to go? And finally I'd note in the life sciences area, just a very interesting information environment, in some ways very challenging information environment, an ongoing mobilization to question medical authority, to question, say, some pharmaceutical firms, sometimes others.
And I think there's an interesting paradox in the sense of this coming from within the government as opposed to being something of an insurgent movement. And so I think that all of this is really going to be a very challenging environment, but also one that'll be very interesting for those in the field and those in the legal and policy fields.
JAMES STANDISH: Interesting. Thank you for that, Professor Pasquale. Kashef, in your role as the software technology leader for Bio-Rad, you work with a lot of innovative products that help accelerate the discovery process in life sciences. Can you share your insights on the unique legal and regulatory risks that life sciences companies encounter when leveraging these advanced technologies in research and diagnostics?
KASHEF QAADRI: Absolutely. Thank you, James. Just leveraging off of what the professor was mentioning, we're living in a very changing legal and regulatory landscape. And fundamentally, life science innovation is going to always outpace legislation.
Staying compliant in this rapidly shifting environment of data privacy, AI regulations and health standards requires continuous diligence and attention. Emerging AI transparency and data usage policies are going to require organizations to ensure compliance while maintaining global interoperability.
And at this point it seems regulators are playing catch-up. And I think companies must anticipate future legal frameworks while remaining adaptable in today's evolving rules and conditions. And for the instruments generating experimental data, tying it back to the life science piece, compliance must also address the accuracy, reproducibility and auditability required by those regulatory bodies.
Just to add a little bit to the complexity, there is a lot of conversation around the ownership of the models, the data and the IP. Fundamentally, the question is who owns the algorithm and who owns the insights that are derived from those models. And I like to think of it as data is the new reagent. Ensuring proper data-sharing agreements and protecting IP is going to be really important, especially while fostering collaboration in R&D.
As the professor was mentioning, navigating the challenges around new AI transparency laws and enforcement is going to be really challenging. A big question around this is, can you explain the algorithm, or can your algorithm explain itself? And I think explainability requirements are going to create new hurdles around validation and compliance. And it's all around accountability in the age of black boxes.
And I think this is particularly noteworthy from a neural network standpoint, where there's just so much complexity that it is going to be difficult to interpret, audit and stay compliant. And finally I'll just mention, although my role doesn't get involved on the compensation side, this is going to be a game-changer in terms of revolutionizing diagnostics.
But at what cost? And I think these new technologies are going to start blurring the lines around disbursement models, between payers and providers and developers. For example, if AI systems improve experimental efficiency, how should the cost savings and the IP benefits be shared across instrument manufacturers and end users, et cetera? So there's a lot of complexity and a lot going on.
JAMES STANDISH: That's interesting. What I heard from each of you is just those four high-level themes of the changing legal and regulatory environment. Who owns the models? How do we enforce? And what are the compensation models?
Michael, in your role as vice president of product and solutions marketing over at IntelePeer, you guys are really at the forefront of delivering some rapidly deployable communications solutions powered by AI with data analytics. From your vantage point in telecommunications, can you share some insights around those four themes as well?
MICHAEL CIANCIO: Absolutely. And I think our approach is different from Kashef's, in that the solutions we provide are on the communications side. So what we're dealing with, on the interaction side, is a lot of data being passed back and forth through the infrastructure.
Who owns the data? What data are we pulling in? What's a good, compliant way to bring that data in to leverage more personalized interactions, especially for payers or providers or others inside the life sciences or healthcare space, where we have a lot of customers that we're helping to create automation for interactions with agents? How much information is needed? How much human interaction is necessary?
So there's a lot of changing regulatory landscape there. And we use our platform to include features that help ensure adherence to laws like GDPR, CCPA and other privacy regulations that we specifically need to adhere to.
And by utilizing AI for a lot of that, whether that's for monitoring or reporting, there are different ways that we can help not only our own technology, but also other organizations, our customers, stay ahead of those regulatory changes, which inherently minimizes that risk.
In terms of the IP aspect of it, a lot of what we're dealing with is the customers' own data. Where does that sit? How is it housed? How is it transferred over for the data and insights that are generated through AI?
And I think as long as the provider is providing clear agreements regarding the ownership models and how they're trained on certain types of data, that helps a lot from the very onset to clarify where the IP ownership sits. And on enforcement, going back to Professor Pasquale's insight, I love the term paper tiger.
We all have a lot of things come through: a lot of ideas, laws, different things that we're trying to accomplish, things that take time and that are continuously improving over time. But the ability to enforce certain aspects of that will come down to aspects of our software where we're providing auditing and reporting tools.
Our ability to show where the data is being used, how it's being housed, can help clients mitigate that risk that's associated with certain enforcement actions. But the ability to have access to that data and provide regulatory auditable capabilities and reporting I think will help with that enforcement, whether it's for a legal requirement or whether it's for something a little bit more advanced.
And then last but certainly not least, the compensation perspective. Based on what we're seeing, I agree with Kashef in terms of that sharing of operational efficiency. Where's that cost going to come from? How is that going to be shared across the organizations? How is AI going to leverage different aspects of not only cost-cutting but also ROI increase, revenue increase in terms of the types of interactions you can require, as well as the data that's being housed?
And to everyone else's point, I agree, data is becoming the new gold mine. And how do we secure it? And how do we make sure that the IP is being protected specifically for that data? But also how is it being leveraged from a value perspective to make sure that there are aspects of interactions and business outcomes that I think data is going to be the next stepping-stone for?
JAMES STANDISH: No, thank you both for the thoughts there. I'm sure our audience finds it particularly helpful to hear your views on those four key areas. So Frank, I wanted to pass the mic back your way.
In your research, you've really warned of the risks of overreliance on AI without adequate human oversight. You've explored the legal and regulatory environment and how that can support technological innovation without compromising societal values and privacy and transparency.
Could you expand on the implications of what happens when we reduce human involvement in these critical decision-making factors, and maybe share what companies might want to keep in mind while innovating their products and services?
FRANK PASQUALE: Absolutely. It's a really important question. And I think one of the things that's just so fascinating and challenging here is that there's a paradox, in that there's a transition, say, from fully human management of a process to fully automated. And we all have a sense that fully automated should be both better and cheaper and faster and just overall an advance.
But it turns out in that middle period where you're doing the transition, that can be even harder or worse than either side. And to give an example, we probably have all heard about a self-driving car running over someone even though it was being monitored by a guardian driver. And the worry there that we have is that essentially being just a guardian driver may make us less attentive to what's going on on the road.
And I think there's a possibility that you're going to see a lot of similar types of challenges across many different areas, where there's a sense that the tech is good, but it's not perfect and it can make mistakes. And then what do we do about that?
So here are some concrete examples. There was a case in Canada where essentially a bot that was deployed by an airline agreed to a favorable bereavement fare for a passenger, and then the airline tried to retract the fare, saying that the bot had not acted properly. But the court ruled that the airline had to honor the bot’s agreement.
And so the lesson of that case is that you're going to find courts that are going to say, look, if you take on automation in your processes and something goes wrong, you assume the risk in that situation. Now of course, it's possible that that airline could sue its AI provider, the provider of the bot, but there may be a liability waiver in the contract between those entities.
So the big question often becomes, who's going to get stuck with the risk? Is it the customer? The firm using it? The provider? And that's really up in the air in many scenarios. There are also examples of lawyers using generative AI, lawyers that really should have known better. It's already been documented in several cases where lawyers have used AI to write case briefs and the AI fabricated case names.
And it similarly appears to have occurred in an expert brief in a case on misinformation recently. So one of the problems here is that people are mistaking these language models, which are merely predicting the next word in a text, for knowledge models which can understand the world. And that's something that has already been documented.
In scientific contexts, for example in medicine, AI is sometimes, quote-unquote, "learning from irrelevant data" -- for example, the fonts on different X-rays when it's trying to, say, figure out if a patient has COVID or something along those lines. And we also have a documented problem of AI and even pre-AI technology lacking diverse examples.
So, for example, it could be that dermatological AI may be effective for certain skin shades but not for others, not for minority skin shades. And there's already been a petition letter to the Food and Drug Administration with respect to some of these problems with pulse oximeters, that they don't perform as well on people with darker skin.
And so I think there are real problems out there when we think about potential delegation of a lot of authority to technology that has already proven itself to have certain challenges in certain scenarios.
So thinking about the regulatory and legal responses here, courts and regulators are recognizing these issues. They're saying that institutions can't use AI as an excuse for errors in many cases.
There's a growing emphasis on principles like enterprise liability, which say that if your enterprise adopts these things, you're going to be responsible. And that responsibility includes things like auditing and understanding these systems.
So the takeaways that I think are really key here for companies are that they've got to ensure adequate human oversight in AI decision-making. It may not be quite as efficient as pure AI decision-making, but until we really have audits demonstrating super high levels of competence there, I think it's going to be necessary.
And being aware of the legal and regulatory environment is important. Fortunately, there are firms and other entities out there that can be hired to understand and audit AI systems. And so I think auditing, consulting and advising with respect to these issues is going to be a really fast-growing line of business.
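As an illustrative sketch of the kind of human oversight described here, one common pattern is to route low-confidence AI outputs to a person rather than acting on them automatically. The decision structure, threshold and labels below are assumptions for illustration, not a design discussed by the panel.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    label: str         # what the AI system proposes to do
    confidence: float   # the system's own confidence estimate, 0..1

# Assumed cutoff for illustration; a real threshold would be set and revisited
# through the kind of auditing described above.
REVIEW_THRESHOLD = 0.90

def route(decision: Decision) -> str:
    """Act automatically only on high-confidence outputs; escalate the rest."""
    if decision.confidence >= REVIEW_THRESHOLD:
        return f"auto-applied: {decision.label}"
    return f"queued for human review: {decision.label}"

print(route(Decision("approve_bereavement_fare", 0.97)))  # acted on automatically
print(route(Decision("deny_claim", 0.62)))                # escalated to a person
```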
KASHEF QAADRI: And if you don't mind, James, just wanted to add another example riffing off of the dermatological example that the professor was mentioning. I think another great example is in the area of genomic diversity and its importance in relation to drug development. This is a critical area of research because genomic diversity can significantly impact the efficacy and safety of new drugs.
And by understanding genetic variations, for instance, across populations, we can develop more effective and personalized treatments. And this not only improves patient outcomes, but also helps in identifying potential side effects and issues early in the drug development process. And I think this is a great example of how integrating advanced technologies like AI with the deep understanding of human diversity can lead to advancements in life sciences.
JAMES STANDISH: Great perspective, Kashef. And on one of those key areas, the changing legal and regulatory environment, I'm just interested, given Bio-Rad's long history in the industry, in how you've successfully navigated that over time. But also just today, obviously the game has changed a bit. How do you folks ensure that you're complying with those laws and regulations in that ever-changing environment?
KASHEF QAADRI: I think a few points come to mind. The first is balancing compliance and innovation. So in terms of the compliance side, taking a proactive approach and staying ahead of the regulatory changes by closely monitoring new laws and regulations. But implementing proactive measures means ensuring compliance without stifling innovation. I think that piece is really important.
Beyond that, it's about strong fundamentals: maintaining a solid foundation in documentation, data management, data hygiene practices and operational processes. All of the laborious part of doing research, and maintaining accurate records around data input, output and usage, is what lets you stay ahead of those regulatory changes.
In terms of innovation, the other two points I'll mention, one is around collaboration, which is fostering early and frequent collaboration with risk and legal teams, and engaging with experts from various domains to navigate the complexities of ensuring compliance. Not everyone in legal is an IT expert and vice versa. And I think creating that collaboration is really important in not just ensuring compliance, but also protecting intellectual property.
And the final note I'll make is just around adaptability and developing processes and systems that are flexible, that can quickly adapt to new regulations. And fundamentally, it's all about embracing a culture of continuous improvement and learning to stay ahead of those regulatory requirements.
JAMES STANDISH: Thanks for that. And Michael, I suppose I'd pose the same question given IntelePeer's long history in the technology space. How have you guys historically approached navigating this challenge? And how are you doing it today to ensure compliance with laws and regulations while you're still continuing to drive the innovation?
MICHAEL CIANCIO: And I think it's important to balance that line between compliance and innovation. And one shouldn't hinder the other from that approach. And very similar to the previous responses, having a proactive compliance framework specifically that's robust but is also integrated into the solutions that you build is paramount.
And I think that framework, encompassing regular assessments of legal and regulatory requirements that enable our company to stay ahead of changes in those AI laws, needs to be proactive. I think part of that starts with the collaboration with legal experts.
So we on a regular basis collaborate with legal experts and regulatory bodies to understand specifically the implications of what new laws and regulations may be coming down. That ongoing dialogue helps us anticipate those changes and then adapt our solutions accordingly.
Then there's the ability to innovate. The key to innovation is speed, being one of the first or second or third to market. So being proactive in terms of not only a framework but also a collaboration process will help keep your company and your solutions innovative while still not being bogged down by certain levels of compliance, which are inherently extremely important as we start to expand in the AI space and its capabilities.
I think in terms of the human-in-the-loop type of mentality, we were talking about that aspect of it, where 100% automation or autonomy when it comes to agents is a goal for a lot of applications, but it's only applicable to certain use cases and the environments that you're applying it to.
And I think everything that we do from a solution perspective utilizes feedback loops from clients and regulatory developments, just to constantly make sure that we are not only creating iterative improvements to our solution, but also ensuring that it's a solution that remains compliant and relevant while we continuously innovate.
One of the things that we leverage, and I think one of the advantages of AI, is the ability to continuously improve the interactions and its development. And it continuously gets smarter in terms of the amount of data you're feeding into those LLMs and those interactions and how that works.
But again, the human in the loop type of process I think helps you balance that regulatory requirement with the need for continuous improvement. And we've been demonstrating that with AI, especially now with the advancements of generative AI and agentic AI and the varying different aspects that are out there without compromising from a compliance perspective.
JAMES STANDISH: Thank you both for that. Professor Pasquale, you mentioned at the beginning of our webinar that there had been some recent legislation, particularly around the EU and California.
So multi-part question for you. How do you foresee these laws and regulations shaping the responsibilities of technology developers? And what challenges do you foresee for compliance? And then, following on to that, do you believe these new transparency laws are sufficient to address the unique challenges of AI while leaving room to innovate?
FRANK PASQUALE: Big questions, thanks. So this is a really tough set of questions to answer because we're still in the early days in terms of enforcement here.
(DESCRIPTION)
Text: Examining AI Transparency and Regulations. The EU AI Act is a comprehensive piece of legislation with global influence. Key provisions of the California AI Transparency Act (SB 942).
(SPEECH)
But I think I can give an overview and give a sense of the general direction here that I think will be very helpful for those who are considering both the EU law and its broader implications and some California law in the U.S.
So with respect to the EU AI Act, its significance is great in that it's comprehensive legislation with global influence. You had a number of policymakers, both in the European Commission and the parliament and the council there, that have been considering over many years a wide array of implications of AI for residents’ everyday lives.
And this is AI that's ranging from the predictive AI that we're all familiar with, with respect to credit scores or employment scores or other things that are trying to predict how well a person will do in a given situation. And also with respect to generative AI, which is producing texts, images, videos, et cetera.
This EU AI Act operates extraterritorially to the extent that it's affecting those marketing in the EU, their products and services. It's also operating in a way extraterritorially in the sense that it serves as a model for other countries, similar to the way in which the General Data Protection Regulation serves as a model for privacy regulation.
And it's part of a much larger suite of tech regulation. So to get specifically to the transparency requirements that you mentioned, for high-risk AI systems, the categories there include safety components, law enforcement, migration, administration of justice.
In some of those areas, they're trying to treat AI used in those contexts the way that we might treat a complicated medical device or a complicated drug, in the sense that we want to be sure that it's working, and not only that risks are addressed, but also that individual rights are respected.
So providers of those services and systems have to give clear and comprehensive instructions for their use. They have to give details on the intended purpose, the performance limits and foreseeable risks. And there has to be data and algorithmic transparency to a certain extent, disclosing the data usage for training, validation and testing.
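As a purely illustrative sketch of what that kind of documentation might capture in practice, the structure below records an intended purpose, performance limits, foreseeable risks and data sources in one place. The field names and example values are assumptions, not the EU AI Act's prescribed format or any legal template.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelDocumentation:
    """Hypothetical record of the disclosures described above; field names are assumed."""
    intended_purpose: str
    performance_limits: List[str]
    foreseeable_risks: List[str]
    training_data_sources: List[str] = field(default_factory=list)
    validation_data_sources: List[str] = field(default_factory=list)
    test_data_sources: List[str] = field(default_factory=list)

doc = ModelDocumentation(
    intended_purpose="Triage support in a diagnostics workflow (assumed example)",
    performance_limits=["Validated only on adult samples from two instrument models"],
    foreseeable_risks=["Reduced accuracy on underrepresented populations"],
    training_data_sources=["internal_dataset_v3 (de-identified)"],
    validation_data_sources=["holdout_2024_q2"],
    test_data_sources=["external_benchmark_a"],
)
print(doc.intended_purpose)
```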
There are also general-purpose AI systems, including foundation models. We've talked about the high-risk ones; with respect to these general-purpose systems, providers have to notify the EU Commission of any potential systemic risk, provide technical documentation similar to that for the high-risk systems, cooperate with authorities and mitigate risks to ensure robust cybersecurity. And there are also requirements of disclosure with respect to generative AI.
Now with respect to the California law, and I really want to shout out California as being very forward-thinking in its regulation, because again not just with respect to AI, but also with respect to data protection and privacy, it has quite an ambitious set of requirements.
But to focus in on the AI Transparency Act, also known as SB 942, basically it's less developed than the EU AI Act and was just signed in September of this year. So it's a little bit behind the EU AI Act. But it has some important requirements.
With respect to generative AI providers, they have to make available an AI detection tool. And they've got to offer their users the option to manifestly disclose that the content they create is AI-generated, and also to include a latent disclosure of AI-generated content. So that could be something in the metadata of that content. And they also have to enter contracts with licensees to maintain these disclosure capabilities.
Now, implementation is not coming until the law takes effect on January 1, 2026, because I think some of this will require some technical expertise to make sure that this is robust.
It's really important for future transparency in California. And it dovetails with laws like the California Consumer Privacy Act, the California Privacy Rights Act, which also emphasize some level of transparency in automated decision-making.
I should note, though, that there may be some First Amendment challenges. So, for example, in California, there was a law also passed in September of this year that required disclosure with respect to parodies that were online and actually banned some of these parodies. And that was rejected relatively quickly by courts. And to the extent that, say, certain forms of manifest disclosure might be seen as forms of forced or compelled speech, you might see challenges on that level.
So there's going to be a lot of, I think, flux in the area in the U.S., at least. So to really wrap this up and put it all together, I think there are a lot of compliance challenges here in terms of ensuring the clear and comprehensive instructions for high-risk AI systems, maintaining data and algorithmic transparency, notifying and mitigating systemic risks for general purpose AI, and implementing these AI detection tools and disclosure requirements.
These are very difficult compliance challenges. And I think that these laws are aiming to balance transparency with some room for innovation, but it's going to be very important to monitor and adapt regulations over time because there might be firms or others that find that they become too burdensome. So you're going to have a real interesting balance there, I think.
JAMES STANDISH: I appreciate the response there. And obviously that was a pretty intricate question and a very thorough answer on your part. One of the things that really stood out to me in your answer was really around the disclosures and disclaimers necessary. Can you expand on that and just give us some examples of what that might look like?
FRANK PASQUALE: Yeah, so in terms of clarifying these disclaimer requirements, I mean, this is not yet part of the existing regulatory infrastructure. You're going to have to have a lot of public comment, I believe, and other input to figure out exactly what they should look like. And they're going to be clarified by regulatory agencies over time, either via rule-making or guidances or via adjudications.
But here are some possible implementations that I think would be useful. And I think a case could be made that they are forms of compliance with the act. One is that AI-generated images might have an AI label in the bottom corner.
So I've already seen some big warning labels on AI-generated images saying "fake" or "not reality" or something along those lines -- I haven't seen "not reality," but I will say I've seen "fake." And "not reality" may be something out there that you could put on these things.
I think that over time there may be debates and interest in how to make that a little more subtle, putting, say, some smaller label in the corner the way that we see, for example, trademark or copyright labels.
But again, remember these are options made available to the users of these systems. With respect to latent disclosures, figuring out how to put that into the code is probably above my technical pay grade, but it's something that I could definitely envision lots of experts in the field figuring out ways to do efficiently.
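As one hedged illustration of how a latent disclosure might travel with a file, the sketch below writes a provenance note into PNG metadata using the Pillow imaging library. The key names and wording are assumptions for illustration, not the statutory format or an established provenance standard.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for an AI-generated image; a real pipeline would receive one.
image = Image.new("RGB", (512, 512), color="gray")

# Embed a machine-readable ("latent") disclosure in the file's metadata.
metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("provenance", "Generated by ExampleGen v1.0 (hypothetical tool)")

image.save("labeled_output.png", pnginfo=metadata)

# Reading the disclosure back out, as a detection tool might:
with Image.open("labeled_output.png") as img:
    print(img.text.get("ai_generated"), "|", img.text.get("provenance"))
```

Real provenance schemes generally favor signed, tamper-evident manifests over plain text fields, but the basic idea of a disclosure that travels with the content is the same.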
Now I think that with respect to specifically thinking about California, one of the precedents that's interesting there is the California Bot Act. And that required the disclosure of bots in politics and commerce. I believe that was out even before generative AI became a big thing. That was back in 2020. Of course, I think GPT-3 was out at that time as well.
But that was a precedent that I thought was quite interesting, the Bot Act, in terms of requiring disclosure if a bot is trying to publicize something for political or commercial purposes. And that aimed to ensure greater transparency and disclosure. I've not seen a successful First Amendment challenge to that. But I have seen successful judicial pushback on some of the other disclosure requirements, or, for example, on banning certain types of material.
So I think that's going to be a really interesting area. And ultimately, I think California is just going to be leading these efforts to increase AI transparency. And given that it's such a large part of the U.S. economy, it's really worth paying attention to for all firms.
I think we've already seen Colorado and Utah pass AI acts as well. And I think that the regulatory and legislative model that's set in California is going to inspire a lot of legislators around the country for various problems that they want to solve.
JAMES STANDISH: Michael, Kashef, I'd love to get your views on how these new laws and regulations are going to affect the operations of your companies. Maybe, Michael, we'll pass you the mic first and see what your thoughts are. And then, Kashef, if you want to build on that with some of yours as well, we'd love to hear your perspective on this.
MICHAEL CIANCIO: Perfect. And I think that was a really good explanation by Professor Pasquale, just in terms of the different types of regulations and government compliance that's coming out of California. And I'll focus on that for the most part, just based on our business, our locality and what we deal with on a regular basis.
But I think the two most important parts of the AI transparency legislation are the obligations for disclosure, similar to what the professor was talking about, and the accountability. So regardless of what the actual transparency legislation continues to build into, I think the disclosure aspects will be mandated, particularly in contexts where AI may influence decisions affecting individuals.
That disclosure is very, very important, especially when you're dealing with our solutions and how our interactions happen with different types of life sciences and healthcare providers and payers. The different types of AI content that are created for decision-making are something that needs to be thought about and, obviously, disclosed.
In terms of accountability, I think it's going to require companies like ours, and we're already working on this, to provide information on how the AI systems function and the data that they utilize: specifically, understanding where it's coming from and the compliance that's around it. That will help foster accountability and transparency, specifically in AI deployment.
I think for the most part, in the actual implementation, when you execute on that, the accountability and the documentation that's there are very, very important.
(DESCRIPTION)
Text: New transparency laws and regulations could impact the operations of technology and life sciences companies, particularly those operating across multiple jurisdictions.
(SPEECH)
Now in terms of its impact on operations, on the data management and governance aspect, these types of regulations will necessitate stringent data management practices. Not that there aren't stringent practices now, but I think there will be more stringency when it comes to those practices.
And we make sure that the data that's used for training those AI models and systems is well-documented, secure and compliant with the various data protection regulations that you now have to monitor across different jurisdictions, but specifically talking about the U.S. in this case.
I still think there'll be challenges in enforcement. I think that'll be a continuing theme: how do we enforce these things? But I think documentation and the increase in compliance requirements will allow us to better support enforcement and make sure that our practices, our implementation and our execution are all being done safely and compliantly.
And last but certainly not least, I think the importance of disclosing a lot of that AI-generated content will mitigate a lot of that risk associated with misinformation and biases that are inherent in a lot of the AI systems, outside of the ways that we go to market and how we execute on guardrails and frameworks.
Making sure that the data is completely contained inside of certain types of LLMs. Making sure that data, whether at rest or in transit, is handled securely. And that the guardrails are there to handle the bias. But specifically, being able to disclose that so that people understand that there will be variations in response.
And I think what we're learning from a generative AI perspective in terms of content, whether that's verbal or in writing or imagery, is that we need to understand that there will be variations in that response. Because generative AI specifically will create different types of responses based on those interactions and continuously improve.
But as long as we're disclosing that that is the case, I think it'll go a very long way in terms of fostering a more informed user base, as well as enhancing the public perception of the different types of AI technologies that are out there.
KASHEF QAADRI: Just to add, I think it's important to note the basis of these new regulations and the nature of these new regulations, which is seen as a reaction to the popularity of Gen AI, generative AI. AI and ML have been around for a very long time, but now there seems to be an attempt to safeguard against malicious intent, which I think is great.
And as Michael was mentioning, fundamentally, it's about following good practices to mitigate your risk. I think that's recording your actions, the data processing, the data hygiene. All of these elements are really important, whether you're in the context of a CFR, or Code of Federal Regulations, or GXP, meaning good clinical or manufacturing processes, all of these basic data hygiene pieces are really important.
That being said, I think similar to the introduction of GDPR, there are no clear guidelines yet on what's acceptable and how to be transparent with a lot of these new transparency laws. And there is this expectation of similar ambiguity and evolving guidelines with these new regulations.
In terms of the impact, I think there are two points that I wanted to make. One is that this is a moving target. The regulations are likely to change over time. There's uncertainty at the moment about these regulations, and about when and how to tag whether something is AI-generated or not, whether it's fully generated by AI or was manipulated. I think these are going to be the nuances and some of the excitement around the regulations.
And then just to put people's minds at ease, fully end-to-end AI-designed drugs don't exist yet. I think this is a work in progress. People are looking at specific elements, whether it's screening or target identification or safety and efficacy. But regardless, drugs in the U.S. still undergo the same FDA validation for safety and efficacy as human-designed drugs. So I think there's safety in that. And there's comfort in that.
And I think there are quite a few existing safety mechanisms and regulations that will help maintain that safety and efficacy, as well as tie in to some of the impact of some of the new transparency laws in life sciences, whether it's at the EU level or California or federal level. So lots of excitement happening, but there's good practices to maintain that safety.
JAMES STANDISH: That's really well said by all three of you. And I appreciate the thoughtful response. Frank, I want to turn back your way. A lot of technology companies and life sciences companies are going to be looking at how they balance the need for algorithmic transparency with protecting their IP, or intellectual property. Can you shed some light on guidelines that you would recommend to help these companies navigate the tension between transparency and innovation?
FRANK PASQUALE: Sure, it's something I've been thinking a lot about in my own research. And I think it's a really vital question for sensible regulation and also to allow for full, proper rewards for innovation.
And just to start with, many firms value their IP as trade secrets, and they're concerned about disclosing proprietary information. They don't want to disclose to the whole world aspects of models or aspects of their own AI and other innovation that it took them a lot of investment to create.
And the new transparency laws could cause some alarm among some companies due to potential exposure of sensitive information. So one of the concepts that I have worked on over time is this idea of qualified transparency. And that's a compromise between transparency and full and complete intellectual property protection for trade secrets.
And in thinking about that, I mean, essentially qualified means just offering something but not everything. And the considerations here include the depth of transparency, the scope of transparency and the timing of disclosures. So to start with depth, the question becomes, how deep will transparency requirements be?
For example, if a firm, under either the GDPR or the AI Act or California data protection laws, is required to tell users or employees about the data that it used to evaluate them, how much data does it need to give? Does it need to give every single thing that's in its files?
Can it talk about broad categories and then perhaps give more disclosure if there's a particular category the person wants to know more about? Does all the information have to be disclosed, or just a modest array? It's going to be a very interesting question.
And in a recent draft book that I've been working on, I proposed four levels of transparency as potential ways in which regulators could organize and classify the depth of transparency required.
With respect to scope of transparency, one of the issues here is whether the transparency requirements will be broad or narrow. And I think on scope, there's going to be some further guidance with respect to the EU high-risk systems definition. Other scenarios are going to be really interesting there as well.
And then the third aspect of qualified transparency is the timing of disclosure. Now whether the information has to be disclosed immediately or can be disclosed over time. And I think that's a really important area here because a lot of firms do not want to be continually giving out information about what they're doing. And this is, of course, a debate that's going on right now about the extent to which AI systems that are claimed to be open are actually open.
Are firms disclosing, say, one or two models a year? Or are they continually disclosing details about what they're doing? And I think that's a very interesting question about that timing. And that's also going to be something where regulators are going to have to set some standards in terms of, say, giving some interval between, say, the development of an approach and the disclosure of the approach to regulators or others.
That also goes back to depth of transparency. So, for example, you could imagine a future where the regulators get to see in but then others don't. And of course, we're all familiar with that with respect to the FDA and some of the trials data and other data that it has, where firms want to keep much of that as trade secrets, but the agency also has some that it does release to the general public.
So in thinking about all of those considerations, I think the guidelines for companies are to assess the depth, scope and timing of transparency requirements; determine what information can be disclosed without compromising proprietary information; and consider phased disclosure to protect commercial advantage, while focusing on disclosing information related to high-risk systems or scenarios to comply with regulations while protecting IP. I think that's where the companies should be going with all of this.
JAMES STANDISH: Very interesting. So Michael and Kashef, Professor Pasquale talked a little bit about that balance of algorithmic transparency and protecting IP. A little bit of a spin, but a question for both of you. Given the complexities of operating across multiple jurisdictions, how do each of your firms achieve the right balance between ensuring transparency and safeguarding the proprietary technology?
KASHEF QAADRI: Happy to start off. I think as the professor was mentioning, there's a lot of conversation around how much information needs to be disclosed. And just at a fundamental level, as long as you're documenting the data input, the output, the usage, that's the right start.
I think one of the biggest challenges from my perspective is going to be around the explainability of those models, especially, as I mentioned earlier, around things like neural networks. The way they're derived, their layered, interconnected structure and the reliance on abstract mathematical computations make it really difficult to translate those decision-making processes into human-understandable terms. Even if you wanted to, it'd be really difficult to document.
But I think it's really important that we still try to document as best as possible and model that behavior despite its difficulties. And then, on the fundamentals, robust IAM, or identity and access management, is going to be critically important: ensuring the right checks and balances are in place by assigning rights and permissions based on roles and then monitoring access activity.
And enforcing a mindset of least-privilege principles, meaning you only get the data that you absolutely need, which I think is going to reduce the risk of data misuse. There's also going to be a continued emphasis on reliability, uptime and availability in cloud software and software in general.
And making sure that companies have a solid foundational base to adapt to those new and changing regulations. Recognizing that those regulations are evolving but having a strong foundational approach is going to minimize that risk even with those changes.
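As a minimal, hedged sketch of the role-based, least-privilege access check and access monitoring described above, the roles, permission names and logging sink below are illustrative assumptions rather than any particular company's setup.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("access-audit")

# Each role gets only the permissions it absolutely needs (assumed examples).
ROLE_PERMISSIONS = {
    "lab_analyst": {"read:experiment_data"},
    "data_steward": {"read:experiment_data", "export:experiment_data"},
}

def check_access(user: str, role: str, action: str) -> bool:
    """Grant only what the role explicitly allows, and record every decision."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.info(
        "%s user=%s role=%s action=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user, role, action, allowed,
    )
    return allowed

print(check_access("analyst01", "lab_analyst", "export:experiment_data"))  # False
print(check_access("steward01", "data_steward", "export:experiment_data"))  # True
```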
MICHAEL CIANCIO: I think that was a really good response in terms of how you handle that proprietary technology. From a software perspective, I'll take more of a holistic approach in my explanation, around partnerships: the aspect of early and frequent collaboration with both legal and risk teams, across customers, other cloud providers and our own software.
The ability to have early and frequent talks around what's available, what protects our own IP and their IP, and obviously the data that's around it. I think the partnership aspect is increasingly critical.
(DESCRIPTION)
Text: The challenging legal and regulatory environment necessitates early and frequent risk management discussions.
(SPEECH)
And I think you also need to engage experts from very different domains to navigate a lot of those complexities. And I think it's critical to have the right people in the conversations early and often.
So when it comes to a partnership perspective, you need to make sure that you're bringing in the right groups that have access to not only the technology, but also the data, the documentation, the implementation capabilities and everything that's around it, early on in those conversations, to make sure that you're staying compliant while also protecting our own IP and everyone else's as well.
Part of what we talked about was collaboration. But the legal component, with legal teams playing a crucial role in ensuring compliance and protecting IP, is paramount, both in how you're dealing with your customers on a regular basis and with the different types of providers that you partner with on the solution you're providing for your customer.
And early legal involvement, back to Professor Pasquale's points earlier, can prevent issues, especially with antitrust cases. Early legal involvement can help with those, along with a documentation process, which is very important in those types of scenarios.
So overall, that balance of transparency and IP protection, with solid fundamentals and strong partnerships, and then proactive collaboration with legal and risk teams and the different types of providers you'll be working with, is essential for navigating the different jurisdictional operations, staying as transparent as possible while also maintaining the IP and everything else that we've worked hard for.
JAMES STANDISH: Thank you for that. Back to Frank. There are many organizations out there right now, both governmental and non-governmental, that are publishing risk management frameworks. As an example out there, we have the National Institute of Standards and Technology's AI Risk Management Framework. Would love your views on whether these are helpful and effective.
FRANK PASQUALE: Well, I think they're very helpful and effective. And I'm really glad that you mentioned them. I mean, I think that the National Institute of Standards and Technology has been doing a lot of work. I served on the National Advisory Committee for a couple of years. So I've gotten to watch some of their work in action from that perspective. And I think that their risk management framework is important and has many benefits to follow and to consult.
So, for example, while it is really educating users about the risks that are involved in the use of AI, it doesn't require full disclosure of sensitive proprietary information. What it does do is focus on key aspects such as explainability, accountability and interpretability. I think these are really important concepts here.
And this subtle distinction, say, between explainability and interpretability is very interesting to me, in the sense that a lot of these systems may well get too complex to be explainable in a direct way, or in a way that would be convincing to those that are trying to hold people accountable.
But perhaps they can become more interpretable through wise application of the framework and through iteratively working on the tech with experts in both technology and law to try to guarantee that type of desirable outcome.
It also advises providing regulators and users with explanations, accountability mechanisms and interpretability without revealing algorithms. Now, in terms of the overall balance between transparency and trade secrecy, I'd like to break that out into the EU and U.S. contexts.
(DESCRIPTION)
Text: AI risk management frameworks can help in balancing the need for transparency with the protection of intellectual property.
(SPEECH)
In the EU context, we have to balance the EU AI Act against the EU trade secrets directive, because both of these are being developed on a Europe-wide level, with the goal of creating a common market.
And similarly, the GDPR's data access and explainability provisions, for example in Articles 13, 14 and 15 of the GDPR, have to be balanced against the trade secrets directive. And we are still in early days here. We have seen some cases like the SCHUFA case coming out of Germany, where that was a victory, I think, for the trade secrecy side. There are other cases pushing back against that a bit. So it's going to be a really interesting development to watch.
In the U.S. context, trade secret protection is really robust, with constitutional dimensions. So, for example, in cases like Monsanto or Philip Morris v. Reilly, we have judges who basically treated trade secrets like property and said that, to the extent the government is going to require disclosure, that's like taking someone's property and thereby has to be compensated.
And that really derailed some of Massachusetts' efforts in the Philip Morris v. Reilly case to get the full disclosure of cigarette ingredients that it wanted; the amounts of those ingredients, at least, were seen as the key trade secrecy issue.
But nevertheless, there is also a lot of scholarship, and some activism, saying that this needs to be limited, and that once the law changes, the trade secret is no longer a property right, because the law that would have protected it has now changed.
So this is quite complicated stuff, but it's an area where I think we're going to see a lot more litigation in the future. To conclude with respect to these frameworks and this balance, I think frameworks like the NIST one can be very useful for thinking through how to balance the need for transparency with the protection of proprietary information. And we're going to see a lot of effort by regulators and courts to balance these simultaneous requirements for disclosure and requirements for secrecy.
JAMES STANDISH: And as a follow-on to that, are there other tools or resources that companies can use to try to stay current on these types of challenges?
FRANK PASQUALE: Yes, I think there are a lot of tools out there. I saw at one point a Nature article that described over 400 AI ethics frameworks. So lots of countries have them, and lots of NGOs within countries have them.
And one that I would lift up, as something I have used both in my scholarship and in advising and in thinking about these issues with students, is the Australian New South Wales AI Assurance Framework, which is available for free online. It's a really nice framework in the sense that it breaks down, in an algorithmic way, all these different dimensions of fairness, accountability and transparency, and how to think about them at different stages of an AI project.
So there's often a real problem of translation between, say, a lawyer and a technologist, because we've been trained in very different ways. And one of the things I appreciated about the Australian New South Wales framework is that it clearly seemed to be something where technologists had been deeply involved from the start, people who could say to the developers of AI systems: here are some things you're going to need to assess and check off before you move to the next stage of your project.
So I thought that was very helpful. And there are other professional associations, databases and public sources. On LinkedIn, there are already lots of people trying to develop ways of applying these frameworks. There's Louisa Zurawski on X and, I think, other social media, who is constantly keeping her followers up to date on the latest AI initiatives from around the world. Lewis Montezuma is another person in that field.
So there are a number of people out there who I think are doing a great job publicizing the development of both new laws and new frameworks to promote more accountable, transparent and fair AI.
JAMES STANDISH: Thank you for that. Professor Pasquale, just in continuing this thought, what other legal and regulatory watch-outs should companies really be prepared for when developing their advanced technology solutions?
FRANK PASQUALE: One of the things I think is going to be very interesting is watching the transition between Lina Khan's FTC, the Federal Trade Commission, and the Trump team that will be taking over the FTC starting in January 2025.
And one of the things that is fascinating about that transition is that the FTC has had an advance notice of proposed rulemaking on commercial surveillance. We're going to have to see whether that, or elements of it, will continue. On one level, there are people within the administration who, I think, are very negative about it. But then there are others who have been very positive about it.
So I think this is going to be a really interesting issue. I think there's going to be a little more continuity with respect to antitrust than with respect to privacy, because I think there is a bit of a commitment there, at least with the recognition of Gail Slater's leadership and importance in thinking through future directions in that field over the 2025 to 2029 period.
With respect to other watch-outs, I think one thing that is maybe a sleeper risk is copyright risk, because a lot of people are now using generative AI to produce materials, and it may well be that some of those materials infringe on others' copyrights. It's going to be really interesting to see at what point copyright holders start scanning the web for uses of their materials and perhaps launch lawsuits over that.
So that might be a hidden legal risk of generative AI. I know that some firms have claimed to indemnify users against that potential copyright liability, but I think it's a very interesting one. And then another issue is data security and cybersecurity.
I mean, we've just seen with all the recent news about Salt Typhoon, and about some of these other really massive breaches that have been quite surprising, both at the governmental level and at the more commercial level, that this is going to be an ongoing challenge.
There's a debate now about the degree to which AI helps attackers as opposed to defenders in cybersecurity. I think that's still very much an open question. But at any point, this could upset the apple cart or the balance in that area. And I think that's going to be something to watch.
JAMES STANDISH: Thank you for that. And on that note, I found this to be a really informative and interesting talk, as I'm sure our viewers have as well. I'd like to thank each of you for your time, your commitment and for sharing your knowledge with all of us.
I know this is really important stuff and of great interest to all of us in the technology and life sciences realm. So with that, we'll conclude our webinar. Thank you again to each of our panelists. And thank you all for your time.
[MUSIC PLAYING]
(DESCRIPTION)
Text: Travelers. Balancing Innovation with Compliance. travelers.com. © 2025 The Travelers Indemnity Company. All rights reserved. Travelers and the Travelers Umbrella logo are registered trademarks of The Travelers Indemnity Company in the U.S. and other countries.