Netsmart Technologies, Inc. - Risk Management Report for Healthcare AI Governance

Executive Summary

This report outlines the risk mitigation strategies implemented by Netsmart Technologies, Inc. (“Netsmart”) for our use of Decision Support Interventions (DSIs) in workflows that influence the care delivery pathway, in accordance with recommendations from the Assistant Secretary for Technology Policy and Office of the National Coordinator for Health Information Technology (ONC). The report provides the risk analysis, mitigation, and governance practices that meet the intent and objectives of ONC’s Condition of Certification and Maintenance of Certification requirement for §170.315(b)(11) Decision Support Interventions, which calls for supplying DSI tools suited to the care and practice settings for which they are targeted for use.

The updated §170.315 (b)(11) certification criterion includes new technical capabilities and transparency requirements for Health IT Modules, designed to improve trustworthiness and support consistency around the use of rules-based or predictive algorithms in health care. Netsmart worked toward these objectives of transparent governance and risk mitigation in designing our risk intervention report and its subsequent real-world risk analysis and mitigation framework.

This document outlines the final risk mitigation measures and metrics Netsmart will use to evaluate DSI risks in production settings. For each measure, we document the planned monitoring methodology, the justification for the measurement, expected outcomes from testing, the care settings to which the measure applies, and, where applicable, how test, audit, and red-teaming cases were created, the methodology selected, and our general approach and justification for decisions.

In support of trustworthy AI practices at the enterprise level, this report is made publicly available and accessible on the Netsmart website.

Introduction: Governance Practices for DSI Systems at Netsmart

For the purposes of this report, the terms Decision Support Intervention (DSI) and Artificial Intelligence (AI) are used interchangeably.

As artificial intelligence (AI) continues to transform industries, the governance of AI systems has emerged as a critical area of focus for organizations committed to maintaining a competitive edge while ensuring ethical, legal, and operational compliance. At Netsmart, we recognize that the integration of AI technologies into our business operations brings unparalleled opportunities, along with significant responsibilities.

Our approach to AI governance is designed to manage the unique risks associated with AI systems, including bias, transparency, accountability, and security. By implementing a robust governance framework, we strive to align our AI initiatives with our core values, regulatory requirements, and long-term patient care objectives. This report outlines the key practices and strategies we have developed to mitigate the risks associated with AI deployment, designed with the goal that our systems operate safely, fairly, and effectively.

Netsmart’s AI governance framework is consistent across all DSI applications and will be described only once for the purpose of this report. Our AI governance framework was designed based on the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0). Key components of our AI governance framework include:

  1. Stakeholder Engagement: We actively involve cross-functional stakeholders, including representatives from clinical, business, legal, technology, security, compliance and other relevant departments in our AI governance processes. This collaborative approach helps ensure that diverse perspectives are considered, and that our AI systems meet the needs and expectations of all stakeholders. In addition, stakeholders engaged in AI operations undergo comprehensive AI training.
  2. Risk Assessment and Mitigation: We conduct risk assessments at every stage of the DSI development, deployment, and operation. This includes identifying potential risks related to data privacy, model bias, and decision-making processes, and developing mitigation strategies to address these risks proactively. Effective risk intervention and reporting practices are in place with oversight from accountable stakeholders within the enterprise.
  3. Continuous Monitoring and Improvement: AI governance is an ongoing process at Netsmart. We continually review and refine our governance practices and AI inventory to address emerging risks, adapt to technological advancements, and respond to regulatory changes. Continuous monitoring helps maintain the safety, reliability, and alignment of our AI systems with our strategic objectives.

Our AI governance practices are designed to help us comply with relevant laws and regulations, including Health Insurance Portability and Accountability Act (HIPAA), applicable data protection laws, industry-specific cybersecurity standards, as well as federal and state-level obligations for trustworthy AI. We continuously monitor the evolving regulatory landscape to adapt our practices as necessary.

In the sections that follow, we will delve deeper into these AI governance practices, providing an overview of how Netsmart mitigates the risks associated with AI while driving innovation in care delivery workflows and maintaining trust with our stakeholders.

A.  DSI Governance Committee Structure and Process

The AI Governance Committee (AIGC) at Netsmart plays a pivotal role in overseeing the ethical, legal, and operational aspects of AI systems within the organization. This committee ensures that AI initiatives align with the company’s strategic objectives, regulatory requirements, and ethical standards. Below is an overview of the committee's structure, roles, and key functions:

  • Committee Charter. The AIGC operates under a formal charter outlining its purpose, scope, and responsibilities. The charter serves as the foundational document guiding the committee’s activities and decision-making processes through accountable stakeholders.
  • Committee Membership. The AIGC is composed of a diverse group of members, each bringing expertise from various areas critical to AI governance. The membership structure is designed to ensure that the committee has the necessary skills and perspectives to make informed decisions. Key members of the committee include representatives from the clinical, business, legal, technology, security, privacy, and compliance departments.

B.  DSI Policies and Controls

  • Governance Framework. The standards for DSI governance are defined in a framework of policies overseen by the AIGC. The AIGC is responsible for establishing governance policies that define operating standards and controls for DSIs in use at Netsmart Technologies. Additionally, the AIGC oversees the implementation of DSIs, monitors (through delegation) the performance of each DSI, and ensures compliance with both internal policies and applicable regulatory obligations.
  • Governance Policy. The governance policies implemented at Netsmart address the core elements of enterprise risk across the AI lifecycle. The governance policies establish safeguards and standards for DSI safety and privacy, fairness and bias mitigation, DSI monitoring, whistleblower protection, incident response, and the decommissioning of DSI systems.
  • Governance Controls. Usage of AI applications at Netsmart Technologies is governed by a set of administrative and technical controls. The administrative governance controls address areas such as incident response, DSI decommission process, and policy review and update cycles. The technical governance controls address areas such as validating AI models, assessing dataset and model fairness, and managing fallback thresholds to address performance degradation. These governance controls mitigate risks, prevent failures, and support continuous, controlled operations when AI systems face uncertainty, errors, or unexpected outcomes.

C.  DSI Portfolio

  • DSI Portfolio: The DSI portfolio tracks DSI applications and associated datasets throughout their entire lifecycle, from exploratory phase to pilot, production use, and decommission. The AIGC reviews the DSI portfolio to ensure that all tools and datasets comply with the enterprise's DSI governance policies.
  • Data Lineage. For each AI application, the AIGC monitors that data is acquired from reputable and compliant sources. All data acquisition processes adhere to industry standards and regulations, including HIPAA for patient data privacy and cybersecurity best practices. Data is managed in accordance with best practices for data integrity and quality, including data validation procedures and controlled access protocols. Data usage is governed by strict policies that ensure compliance with legal and ethical standards. Access to data is restricted to authorized personnel only, and data usage is monitored to prevent misuse.
  • Change Management: Change management practices are in place to maintain transparency throughout the DSI lifecycle. Updates to the DSI portfolio (including updated evaluation results or newly reported DSI risks) are logged and available for audit purposes. Any changes to the DSI undergo a formal change management process, including impact assessments and approvals, to ensure that updates do not introduce new risks.

DSI Risk Analysis

  • Fairness: Each Decision Support Intervention (DSI) application undergoes fairness assessments to identify and mitigate any biases in its decision-making processes. These assessments are conducted on an ongoing basis, covering both the datasets used for training the models and the models themselves.
    • The fairness audits focus on how the DSI impacts different population groups, with the goal of ensuring that no group is unfairly advantaged or disadvantaged by the AI's decisions. To achieve this, fairness metrics are used to identify any inequities in treatment across various protected attributes.
    • These protected attributes can include up to 17 categories, such as race, color, national origin, age, disability, and sex.
    • When biases are detected, they are tracked and monitored, with corrective actions taken to address them. This ongoing process helps ensure that the DSI remains equitable and that potential biases are addressed promptly.
  • Validity: Each DSI application undergoes validity evaluation to ensure that the model's outputs are accurate and reliable.
    • This involves computing a variety of validity metrics, such as precision (the proportion of true positive predictions out of all positive predictions), recall (the proportion of true positive predictions out of all actual positives), and the F-measure (a harmonic mean of precision and recall).
    • These metrics are calculated at both the macro level (averaged over all classes or groups) and the micro level (considering each individual instance) to provide a view of the model's performance across unevenly represented population groups.
    • By regularly monitoring these metrics, potential issues in model performance can be detected early, ensuring that the DSI maintains high levels of validity in its decision-making processes.
  • Reliability: Reliability assessments of each DSI application involve monitoring its performance across various test scenarios to ensure consistent behavior over time.
    • This includes testing how the DSI performs under normal operating conditions, as well as in edge cases or less common scenarios that may arise in real-world use.
    • The goal is to ensure that the DSI provides consistent and dependable outputs, regardless of the conditions under which it is operating.
  • Robustness: The DSI is subjected to robustness testing to ensure that it remains stable and effective even under adverse conditions. This testing includes:
    • Adversarial Attacks: The model is exposed to adversarial examples—carefully crafted inputs designed to mislead or confuse the model. This helps assess the model's vulnerability to such attacks and its ability to maintain accuracy and reliability.
    • Noise and Outlier Testing: The model is tested by introducing noise or outliers into the input data to determine its sensitivity to these perturbations. The objective is to ensure that the DSI can handle unexpected variations in data without significant degradation in performance.
    • Edge Case Evaluation: The model's performance is evaluated on edge cases or rare scenarios that may not have been well-represented in the training data. This testing ensures that the DSI can handle unusual or extreme cases effectively, contributing to its overall robustness.
  • Intelligibility: Efforts are made to ensure that the AI's decision-making processes are transparent and understandable to users. This is achieved by:
    • Making model performance explainable in terms of the DSI's source attributes, allowing users to understand how and why the AI reached certain conclusions.
    • Using confusion matrices (which show the actual versus predicted outcomes for different categories) to help troubleshoot potential validity biases in model performance.
  • Safety: The DSI is deployed with built-in safety controls designed to prevent harm to patients and users. These controls include:
    • Fallback Performance Thresholds: Predefined limits below which the system should not operate, ensuring that any drop in performance triggers appropriate alerts or corrective actions.
    • Alerts for Potential Issues: Automated alerts that notify users or administrators of potential issues, allowing for timely intervention to prevent harm or mitigate risks.
  • Security: A security risk assessment is conducted to evaluate safeguards in place to protect the integrity, confidentiality, and availability of DSI. In addition to adhering to company-wide cybersecurity policies, the DSI is configured with robust security measures aligned with NIST 800-53 security and privacy controls to protect data and system integrity. These measures include, but are not limited to:
    • Encryption: Ensuring that data is encrypted both at rest and in transit to prevent unauthorized access.
    • Access Controls: Implementing strict access controls to limit who can access the DSI and its data, reducing the risk of breaches and ensuring that only authorized personnel can interact with the system.
    • Security Testing: Conducting regular security testing, including vulnerability assessments, penetration testing and controls assessments, to identify and address potential weaknesses in the DSI.
  • Privacy: Privacy is a critical consideration for the DSI, particularly when dealing with sensitive patient data. In addition to following company cybersecurity and privacy policies, the DSI ensures privacy through:
    • Data Anonymization: Techniques are employed to anonymize data when necessary, removing personally identifiable information (PII) and Protected Health Information (PHI) to protect individual privacy.
    • Data Minimization: Techniques are employed to ensure that DSI collects, retains, and uses only the minimum amount of PII and PHI necessary to achieve the specific purpose. When data anonymization is not possible, PII and PHI is kept only for as long as needed to meet legal, operational, or regulatory requirements.
    • Compliance with Privacy Regulations: The DSI complies with all relevant privacy regulations, ensuring that patient data is handled with the utmost confidentiality and that privacy risks are minimized.
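The group-level fairness audits described under Fairness can be illustrated with a minimal sketch: compute the positive-outcome rate per group and screen the ratio between the least- and most-favored groups. This is a simplified demographic-parity-style check with hypothetical data, not Netsmart's production audit tooling; the 0.8 screening ratio mentioned in the comment is the common "four-fifths rule" heuristic, not a threshold asserted by this report.

```python
def selection_rates(groups, outcomes):
    """Positive-outcome rate per group (e.g. share recommended for an intervention)."""
    totals, positives = {}, {}
    for g, y in zip(groups, outcomes):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if y else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of lowest to highest group rate; values near 1.0 indicate parity.
    A common screening heuristic flags ratios below 0.8 for human review."""
    return min(rates.values()) / max(rates.values())

# Hypothetical example: group B receives the positive outcome half as often as group A.
rates = selection_rates(["A", "A", "B", "B"], [1, 1, 1, 0])
ratio = disparate_impact_ratio(rates)
```

In practice such a screen would be run per protected attribute and combined with the corrective-action tracking described above.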
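The precision, recall, and F-measure calculations described under Validity, including the macro/micro distinction, can be sketched as follows. This is an illustrative implementation with hypothetical labels, not the production metric pipeline.

```python
from collections import Counter

def validity_metrics(y_true, y_pred):
    """Per-class precision/recall/F1 plus macro and micro averages."""
    classes = sorted(set(y_true) | set(y_pred))
    pairs = Counter(zip(y_true, y_pred))          # (actual, predicted) counts
    tp = {c: pairs[(c, c)] for c in classes}
    fp = {c: sum(pairs[(t, c)] for t in classes if t != c) for c in classes}
    fn = {c: sum(pairs[(c, p)] for p in classes if p != c) for c in classes}

    def prf(tpc, fpc, fnc):
        prec = tpc / (tpc + fpc) if tpc + fpc else 0.0
        rec = tpc / (tpc + fnc) if tpc + fnc else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        return prec, rec, f1

    per_class = {c: prf(tp[c], fp[c], fn[c]) for c in classes}
    # Macro: unweighted mean over classes -- sensitive to rare/underrepresented groups.
    macro = tuple(sum(m[i] for m in per_class.values()) / len(classes) for i in range(3))
    # Micro: pool all counts first -- dominated by frequent classes.
    micro = prf(sum(tp.values()), sum(fp.values()), sum(fn.values()))
    return per_class, macro, micro
```

Comparing macro against micro scores is one way the divergence between overall performance and performance on unevenly represented groups can surface.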
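The noise and outlier testing described under Robustness can be illustrated with a simple stability probe: perturb each input with Gaussian noise and measure how often the model's prediction is unchanged. The `predict` function, noise scale, and trial count below are hypothetical placeholders, not parameters from Netsmart's test suite.

```python
import random

def noise_stability(predict, inputs, sigma=0.05, trials=20, seed=0):
    """Fraction of inputs whose prediction is unchanged under Gaussian input noise."""
    rng = random.Random(seed)
    stable = 0
    for x in inputs:
        base = predict(x)
        # The prediction must survive every noisy trial to count as stable.
        if all(predict([v + rng.gauss(0, sigma) for v in x]) == base
               for _ in range(trials)):
            stable += 1
    return stable / len(inputs)
```

Inputs near a decision boundary flip under small perturbations and lower the score, which is exactly the sensitivity this kind of test is meant to expose.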
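The fallback performance thresholds and automated alerts described under Safety can be sketched as a minimal monitor. The metric name and threshold below are illustrative stand-ins, not Netsmart's configured values.

```python
from dataclasses import dataclass

@dataclass
class FallbackMonitor:
    """Flags when a tracked performance metric drops below its fallback threshold."""
    metric_name: str
    threshold: float
    tripped: bool = False

    def check(self, value):
        """Return an alert message if the threshold is breached, else None."""
        if value < self.threshold:
            self.tripped = True
            return (f"ALERT: {self.metric_name}={value:.3f} below "
                    f"threshold {self.threshold:.3f}; fallback engaged")
        return None
```

Once `tripped`, a real system would route the alert to administrators and divert decisions to the predefined fallback path until performance recovers.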

DSI Risk Mitigation

The following practices collectively help Netsmart mitigate AI risks, ensuring that AI systems are safe, effective, and aligned with both organizational goals and regulatory requirements.

People and Talent Management

  • AI Training and Awareness Program: Netsmart associates receive training on AI technologies on a regular basis, including understanding potential risks and incident response procedures. This ensures that all personnel involved in AI initiatives are equipped to identify and address risks effectively.
  • AI Governance Policies: AI specific policies have been established to help ensure that all Netsmart associates adhere to defined ethical AI usage, development, and deployment guidelines and best practices.
  • Role-Based Access and Data Governance: Strict role-based access controls are implemented to safeguard sensitive data and prevent unauthorized use of AI systems. Data governance policies ensure that data handling and usage align with regulatory standards and organizational best practices.
  • Establishment of AI Governance Committees: Netsmart Technologies has an AI governance body comprising cross-functional stakeholders to oversee AI implementation, ensuring alignment with organizational policies and regulatory requirements.
  • Incident Response Plan: An incident response plan is in place, which includes a structured approach to identifying, responding to, and recovering from incidents that could compromise the integrity, security, or functionality of DSIs.

Organizational Governance and Compliance

  • Ongoing Compliance Monitoring: Ongoing monitoring is conducted to ensure AI systems comply with healthcare regulations and industry guidelines. This process includes risk assessments of fairness, validity, and where applicable, safety and appropriateness.
  • AI Ethics and Accountability Frameworks: Ethical guidelines and accountability frameworks are implemented to ensure AI systems uphold the organization's values and ethical standards. This includes addressing potential biases and ensuring transparency in AI decision-making.

Risk Mitigation Across the DSI Portfolio

  • AI Risk Assessment Models: Netsmart developed and implemented a risk assessment framework to evaluate the potential risks of AI systems throughout their lifecycle, from development to deployment. This framework evaluates the impact of DSIs on patient care, data integrity, and operational processes.
  • Monitoring and Validation: AI models are monitored for fairness, validity, and unintended biases. Validation processes ensure that AI outputs remain reliable over time, and any deviations trigger re-evaluation and model adjustments.
  • Scenario Planning and Stress Testing: AI systems undergo scenario planning and stress testing to predict and mitigate potential risks. This includes simulating adverse events and assessing the system's response, ensuring robust risk management practices are in place.
  • Validation and Testing: Ongoing validation and testing procedures are employed to evaluate and improve the DSI's performance and accuracy.
  • Bias Detection and Correction: Techniques for detecting and correcting biases are integrated into the system to promote fairness.