{ "contentLink": { "id": 69546, "workId": 0, "guidValue": "c4c5c349-3a98-48c5-a2cd-47950dfa1c96", "providerName": null, "url": "https://www.freshfields.com/en/blogs/102d2rd/2025/4/legal-implications-of-agentic-ai-in-healthcare-potential-for-inaccurate-or-biase-102k8zs/", "expanded": null }, "name": "Legal implications of agentic AI in healthcare: potential for inaccurate or biased data (part 2 of 3)", "language": { "link": "https://www.freshfields.com/en/blogs/102d2rd/2025/4/legal-implications-of-agentic-ai-in-healthcare-potential-for-inaccurate-or-biase-102k8zs/", "displayName": "English", "name": "en" }, "existingLanguages": [ { "link": "https://www.freshfields.com/en/blogs/102d2rd/2025/4/legal-implications-of-agentic-ai-in-healthcare-potential-for-inaccurate-or-biase-102k8zs/", "displayName": "English", "name": "en" } ], "masterLanguage": null, "contentType": [ "ArticleBase", "CardBasePage", "BaseSearchablePage", "BasePage", "PageData", "ContentData", "IRssPage", "IClassifiableContent", "Page", "BlogArticlePage" ], "parentLink": { "id": 68163, "workId": 0, "guidValue": "c5f6128e-ed4a-4ab9-bdb4-3f0f1a35d685", "providerName": null, "url": "https://www.freshfields.com/en/blogs/102d2rd/2025/4/", "expanded": null }, "routeSegment": "legal-implications-of-agentic-ai-in-healthcare-potential-for-inaccurate-or-biase-102k8zs", "url": "https://www.freshfields.com/en/blogs/102d2rd/2025/4/legal-implications-of-agentic-ai-in-healthcare-potential-for-inaccurate-or-biase-102k8zs/", "changed": null, "created": null, "startPublish": "2025-04-21T16:24:49.83Z", "stopPublish": null, "saved": null, "status": null, "blogUrl": "https://technologyquotient.freshfields.com/post/102k8zs/legal-implications-of-agentic-ai-in-healthcare-potential-for-inaccurate-or-biase", "heading": "Legal implications of agentic AI in healthcare: potential for inaccurate or biased data (part 2 of 3)", "imageUrl": "https://images.passle.net/fit-in/400x400/filters:crop(39,0,1111,627)/Passle/5677e7453d947406989fe60a/MediaLibrary/Images/2025-04-21-14-49-24-452-68065af4ea56e9dab57c84e7.png", "tags": [ { "name": "Blog", "itemType": "ContentType" } ], "authors": [ { "id": "102hg8r", "authorName": "Vinita Kailasanath" }, { "id": "102fno9", "authorName": "Philipp Roos" } ], "articleType": { "id": 238, "workId": 0, "guidValue": "7f0f2c88-1ebf-4392-8b84-1df20424654e", "providerName": null, "url": "https://www.freshfields.com/globalassets/categories/content-type/blog/", "expanded": null }, "metaTitle": "Legal implications of agentic AI in healthcare: potential for inaccurate or biased data (part 2 of 3)", "mainBody": { "html": "<p style=\"text-align: justify\"><i>Read Part 1 of this mini series </i><a href=\"https://technologyquotient.freshfields.com/post/102k8mi/legal-implications-of-agentic-ai-in-healthcare-regulatory-compliance-part-1-of\" target=\"_blank\" rel=\"noopener noreferrer\"><i>here</i></a><i>.</i></p><p style=\"text-align: justify\">***</p><p style=\"text-align: justify\"><strong>Potential for inaccurate or biased data</strong></p><p style=\"text-align: justify\">Inaccuracy and bias in datasets used to build agentic AI applications present both legal and ethical challenges. Agentic AI applications, including those used for clinical decision-making, rely heavily on datasets to train models. If these datasets are incomplete or unrepresentative or contain inherent biases, and reasonable steps aren’t taken to cure these problems, the underlying AI models can perpetuate or even exacerbate existing healthcare disparities. 
For example, biased data may lead to inaccurate predictions for certain demographic groups, particularly those underrepresented in clinical research, resulting in unequal healthcare outcomes. The issue is compounded by the “black box” nature of some AI models, as even developers may struggle to explain how certain decisions are made. Such challenges raise concerns about the reliability of agentic AI in making critical healthcare decisions, as well as the potential harm that could arise from inaccurate or biased inputs, emphasizing the need for robust AI governance to ensure fairness, transparency, and accuracy.

**Practical tips:**

AI developers working on agentic AI applications, particularly in healthcare, may consider prioritizing models that are not only technically robust but also free from bias. Below are some practical ways developers can mitigate the risks of inaccuracy and bias (a short illustrative sketch follows the list):

- Regularly review and update their AI models to reduce the risk of algorithmic bias.
- Employ data governance techniques to ensure that the datasets used for training are representative of the broader population.
- Collaborate with healthcare professionals to ensure that the AI system aligns with clinical standards and real-world practices.
- Conduct thorough risk assessments to evaluate the potential harms that could arise from biased or inaccurate AI predictions, and implement strategies to minimize such harm, such as establishing fallback protocols for cases where the AI system fails to perform as expected.
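To make the first two bullets concrete, here is a minimal sketch of what a recurring subgroup audit might look like for a binary clinical classifier. It is illustrative only: the function name, toy data, choice of metric (sensitivity), and the 0.05 tolerance are assumptions made for this example, not recommendations from this post or requirements of any regulatory framework.

```python
"""Minimal sketch of a per-group performance audit for a binary
clinical classifier. All names, data, and thresholds are illustrative."""
import numpy as np

def subgroup_sensitivity_audit(y_true, y_pred, groups, tolerance=0.05):
    """Compare each demographic group's sensitivity (true-positive rate)
    against the overall rate and flag gaps larger than `tolerance`."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    positives = y_true == 1
    overall = float((y_pred[positives] == 1).mean())  # overall sensitivity
    report = {}
    for g in np.unique(groups):
        mask = positives & (groups == g)
        if not mask.any():
            report[g] = None  # group absent from positive cases is itself
            continue          # a representativeness red flag
        rate = float((y_pred[mask] == 1).mean())
        report[g] = {"sensitivity": rate,
                     "flagged": abs(rate - overall) > tolerance}
    return overall, report

# Illustrative toy labels, predictions, and group membership.
y_true = [1, 1, 1, 1, 0, 0, 1, 1, 0, 1]
y_pred = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "b", "a", "b", "a", "b", "b", "a", "a"]
overall, report = subgroup_sensitivity_audit(y_true, y_pred, groups)
print(f"overall sensitivity: {overall:.2f}")
for g, stats in report.items():
    print(g, stats)
```

A gap flagged by a check like this does not itself establish legal liability, but running the audit on a schedule and documenting remediation supports the governance record the bullets above describe.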
Healthcare organizations and other companies contracting with AI developers can potentially mitigate contractual liability stemming from inaccurate or biased data by:

- Requiring such developers to ensure the accuracy and fairness of their datasets and to implement mechanisms for regular audits and updates.
- Establishing accountability for any adverse outcomes caused by faulty AI recommendations, including stipulations for corrective actions or credits if the AI system fails to meet agreed-upon performance standards (a brief sketch of one such mechanism follows the list).
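As a thought experiment on the second bullet, an “agreed-upon performance standard” can be written down as data and checked mechanically. The sketch below is hypothetical: the metric, the 0.90 floor, and the corrective steps are invented for this example and are not terms suggested by the post or by any statute.

```python
"""Hypothetical sketch of operationalizing a contractual performance
floor with fallback steps; all values here are invented examples."""
from dataclasses import dataclass

@dataclass
class PerformanceStandard:
    """A contractual performance floor, e.g. minimum audited sensitivity."""
    metric_name: str
    contractual_floor: float

def check_against_standard(measured: float,
                           standard: PerformanceStandard) -> list[str]:
    """Return the corrective steps triggered when an audited result falls
    below the agreed floor; an empty list means the standard is met."""
    if measured >= standard.contractual_floor:
        return []
    return [
        f"{standard.metric_name} {measured:.2f} is below the agreed floor "
        f"of {standard.contractual_floor:.2f}: route affected decisions "
        "to human review",
        "notify the counterparty and document a corrective-action plan",
    ]

# Hypothetical agreed floor and a failing monthly audit result.
standard = PerformanceStandard(metric_name="sensitivity",
                               contractual_floor=0.90)
for step in check_against_standard(0.84, standard):
    print("-", step)
```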
For companies covered by the AI Act, the lack of statutory rules on contractual liability requires the development of customized liability frameworks in contracts concerning AI systems. While the AI Act imposes obligations on providers and developers, including with respect to the ethical, transparent, and accountable creation of AI systems, contracting parties can redistribute liability (e.g., by seeking indemnities to account for non-compliance) or further designate liability for specific tasks. Such agreements, however, cannot override the statutory obligations under the AI Act. Additionally, such agreements should delineate the parties’ ownership and usage rights in AI outputs, responsibility for IP infringement by such outputs, and other risk considerations, in order to account for the full universe of key risks stemming from contracting for AI systems.