Tuesday, August 19 | Thought Leadership

AI Safety in Healthcare: Applying the ASL Framework to Responsible Innovation

By Tom Herzog, Chief Operating Officer

Artificial intelligence (AI) has become a powerful force in healthcare—especially in human services and post-acute care. Automation has played a role in healthcare for many years, but the recent, widespread use of AI has accelerated both adoption of and discussion about new solutions. From chatbots and virtual assistants to AI scribes and revenue optimization tools, the technology is advancing quickly.

But as adoption accelerates, so does the need for thoughtful oversight. 

Healthcare providers are asking important questions:  

  • Can we trust AI to understand tone and nuance?  
  • How do we safeguard emotional safety? 
  • What happens when an AI tool gets it wrong?  

With concerns around data privacy, cost, ethical responsibility and transparency, organizations need more than just innovation—they need guardrails. 

This brings us to the AI Safety Levels (ASL) Framework. This tiered model provides a clear lens through which we can evaluate and deploy AI systems responsibly, based on their level of autonomy and associated risks. 

What Is the ASL Framework? 

The AI Safety Levels (ASL) Framework is designed to categorize AI systems according to their potential impact and the safeguards required to mitigate risk. It spans from ASL 1 to ASL 5, with each level requiring increasing scrutiny, governance and human oversight. 

Here’s a simplified breakdown: 

ASL Framework Levels

[Table not reproduced: levels run from ASL 1, requiring the least scrutiny, to ASL 5, requiring the most governance and human oversight.]
This framework is not about limiting innovation. On the contrary, it helps healthcare organizations responsibly integrate AI tools by clarifying their intended use and safety expectations. 
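
To make the tiering concrete, here is a minimal sketch in Python of how an organization might encode the ladder. The level descriptions and safeguard names are illustrative assumptions for this example, not an official specification of the ASL Framework.

from enum import IntEnum

# A minimal sketch of the five-tier ladder. The comments and safeguard
# names below are illustrative assumptions, not an official specification.
class SafetyLevel(IntEnum):
    ASL_1 = 1   # minimal autonomy and risk
    ASL_2 = 2   # low risk; a human remains the author of record
    ASL_3 = 3   # moderate risk; outputs inform clinical decisions
    ASL_4 = 4   # high risk; outputs influence care at the point of care
    ASL_5 = 5   # highest risk; direct, autonomous interaction with clients

# One safeguard introduced at each tier; higher tiers inherit all lower ones.
SAFEGUARD_LADDER = {
    SafetyLevel.ASL_1: "basic quality testing",
    SafetyLevel.ASL_2: "transparency to users about AI involvement",
    SafetyLevel.ASL_3: "clinician review of outputs",
    SafetyLevel.ASL_4: "bias audits and explainable outputs",
    SafetyLevel.ASL_5: "escalation protocols to a human",
}

def safeguards_for(level: SafetyLevel) -> list[str]:
    """Cumulative safeguards: each level adds scrutiny, never removes it."""
    return [SAFEGUARD_LADDER[lvl] for lvl in SafetyLevel if lvl <= level]

print(safeguards_for(SafetyLevel.ASL_3))
# ['basic quality testing', 'transparency to users about AI involvement',
#  'clinician review of outputs']

The design choice worth noting is that safeguards are cumulative: moving a tool up a level never relaxes the requirements below it.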

Why ASL Matters for Healthcare

As we mentioned earlier, AI has been supporting healthcare organizations in meaningful ways for years. Documentation assistants are reducing administrative burden, predictive models are helping identify risk earlier, and engagement tools are making care more accessible.

But healthcare isn't just another industry among the thousands of businesses turning to AI in recent years. It's about people—often people who are vulnerable, underserved or experiencing a crisis. In healthcare, emotional safety and human connection are paramount. That’s why transparency, clinical accuracy and provider oversight aren’t optional for healthcare leaders—they’re foundational.

Tools that interpret symptoms, predict risk or interact with clients must operate with integrity, especially when deployed in settings like substance use treatment or crisis intervention. A missed nuance or misleading output could lead to poor outcomes, delays in recovery or broken trust between consumers and clinicians. 

Frameworks like ASL help providers and technologists work together to ensure AI enhances the care experience safely. 

Applying the ASL Framework to Healthcare AI Use Cases

Let’s look at how common healthcare AI tools might be evaluated using the ASL model. 

Clinical Documentation Assistants (ASL 2–3) 

Tools that assist with notetaking or summarizing clinician-patient interactions typically fall under ASL 2 or 3. These systems carry relatively low risk but still require thoughtful oversight. 

Key risks include misrepresenting tone or intent—particularly in healthcare settings where language is nuanced. Ensuring clinician review and transparency about how notes are generated helps keep safety and trust intact. 
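
As a sketch of what clinician review can mean in practice, the following hypothetical draft-note type never becomes part of the record until a clinician signs off. The DraftNote class and its fields are assumptions for illustration, not a description of any specific product.

from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative clinician-review gate for an AI documentation assistant
# (ASL 2-3). All names here are placeholders for the sake of the example.
@dataclass
class DraftNote:
    encounter_id: str
    ai_generated_text: str
    generated_by: str = "ai-scribe"       # provenance is always recorded
    reviewed_by: str | None = None
    finalized_at: datetime | None = None

    def sign_off(self, clinician_id: str, edited_text: str) -> None:
        """A note only enters the record after clinician review and edits."""
        self.ai_generated_text = edited_text
        self.reviewed_by = clinician_id
        self.finalized_at = datetime.now(timezone.utc)

    @property
    def is_final(self) -> bool:
        return self.reviewed_by is not None

note = DraftNote("enc-001", "Client reports improved sleep this week.")
assert not note.is_final                   # AI output is a draft, never final
note.sign_off("dr-lee", "Client reports improved sleep; affect brighter.")
assert note.is_final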

Predictive Analytics and Risk Scoring (ASL 3–4) 

Predictive tools that flag clients at risk of readmission or relapse can be valuable—when used appropriately. These systems generally fall under ASL 3 or 4 due to their influence on clinical decisions at the point of care. 

Risks to avoid include misclassification, biased data or overreliance by staff. Clear documentation, explainable outputs and a requirement for human review should be built into the workflow to avoid harm. 
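
Below is a hedged sketch of what an explainable, human-gated risk flag might look like. The feature names and weights are invented for illustration; a real model would be clinically validated and audited for bias.

from dataclasses import dataclass

# Illustrative explainable risk flag (ASL 3-4). The signals and weights
# are assumptions for this example only.
@dataclass
class RiskFlag:
    client_id: str
    score: float
    contributing_factors: list[str]        # explainability: why was this flagged?
    requires_human_review: bool = True     # the flag never acts on its own

WEIGHTS = {"missed_appointments": 0.4, "recent_discharge": 0.35, "med_change": 0.25}

def score_readmission_risk(client_id: str, signals: dict[str, bool]) -> RiskFlag:
    """Compute a transparent, weighted score and surface the reasons."""
    score = sum(w for name, w in WEIGHTS.items() if signals.get(name))
    factors = [name for name in WEIGHTS if signals.get(name)]
    return RiskFlag(client_id, round(score, 2), factors)

flag = score_readmission_risk("c-42", {"missed_appointments": True, "recent_discharge": True})
print(flag.score, flag.contributing_factors)
# 0.75 ['missed_appointments', 'recent_discharge']

The point of the contributing_factors field is that a clinician can always see why a client was flagged, and the requires_human_review default makes the oversight requirement part of the data structure itself.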

AI-Powered Consumer Tools (ASL 4–5) 

Chatbots or digital companions used directly by clients present higher emotional risk, particularly in healthcare. These tools may provide helpful coping strategies or interact in a therapeutic-like context. The risk is that, in rare cases, consumers may mistake the technology for a real presence in their life or care. Because these tools can project empathy and friendliness in their responses, that misunderstanding can cause confusion or emotional dependency.

Guardrails such as escalation protocols, user education and clearly defined scope of use are essential. These systems must be designed to augment care, not replace it, and should be transparent about their limitations. 
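
One way to picture these guardrails is a simple message-routing check: crisis language escalates to a human, and out-of-scope requests get a transparent refusal. The indicator phrases and responses below are placeholder assumptions; production systems would rely on clinically validated detection and escalation protocols.

# Illustrative guardrail sketch for a consumer-facing tool (ASL 4-5).
# The phrase lists and replies are invented for this example.
CRISIS_INDICATORS = ("hurt myself", "end my life", "can't go on")
OUT_OF_SCOPE = ("diagnose", "prescribe", "change my medication")

def respond(user_message: str) -> str:
    text = user_message.lower()
    # Escalation protocol: route crisis language to a human immediately.
    if any(phrase in text for phrase in CRISIS_INDICATORS):
        return ("It sounds like you may be in crisis. I'm connecting you "
                "with a person now. If you are in the U.S., you can also "
                "call or text 988.")
    # Scope guardrail: be transparent about what the tool cannot do.
    if any(phrase in text for phrase in OUT_OF_SCOPE):
        return ("I can share coping strategies, but I can't give medical "
                "advice. Your care team is the right place for that question.")
    # User education: the tool states plainly that it is not a person.
    return "I'm a support tool, not a person. How can I help you today?"

print(respond("I feel like I can't go on"))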

Responsible Innovation: The Netsmart Approach 

At Netsmart, responsible innovation starts with accountability. Every AI feature we develop is grounded in explainability, transparency and alignment with the ASL Framework and leading standards from organizations like NIST, SAMHSA, the ONC and the Coalition for Health AI. 

We collaborate with our clients to co-design features that serve real-world needs, ensuring safety and usability go hand in hand. Rather than layering disparate AI tools from third parties, we build intelligence across the platform. That’s the only way to ensure systems can be managed responsibly at scale. 

Responsible AI is not just about what a tool can do. It’s about what it should do—and how we make sure it does it safely. That means designing tools that are ethical, person-centered and aligned with the reality of modern healthcare work. 

Meet the Author

Tom Herzog · Chief Operating Officer
