
How to Create an AI Policy That Works: A Guide for Clinicians and Agencies


9 min read

Nov 24, 2025

Whether you're a solo clinician exploring AI scribes to ease your documentation load or an agency director leading your organization through the integration of AI tools, one thing is clear: we need to talk about policy.

AI is entering therapy rooms, community mental health programs, and human service agencies faster than most organizations can adapt. And while the promise of increased efficiency is real, so are the risks. Without a clear, well-communicated AI policy in place, your practice may face ethical dilemmas, legal uncertainty, and confusion among staff and clients. It’s time we lead with intention, not reaction.

This post outlines why an AI policy is essential for ethical and effective practice. I'll walk you through the key components of a strong policy, provide practical steps to get started, and share free resources and templates to help clinicians and organizations build responsible AI frameworks grounded in professional values.


Why Every Practice Needs an AI Policy

Creating an AI policy is about more than compliance. It is a demonstration of ethical leadership. A well-crafted policy protects your clients, guides your staff, and aligns technology use with your agency's values. It helps build trust, internally and externally.

Without a clear AI policy, practitioners may find themselves navigating ethical gray areas on their own. This lack of structure can lead to inconsistent use of AI tools, blurred professional boundaries, and even unintentional harm, such as breaches of confidentiality or the misuse of sensitive client data.

Ethical practice requires more than good intentions. It requires systems and safeguards. An AI policy provides that foundation. So, whether you're managing a private group practice or a large social service agency, having a documented policy is now a best practice.


What Should an AI Policy Include?

Define the Purpose and Scope of AI Use

Start by clearly stating why your agency is using AI and where it will and won’t be applied. For example, AI may be used for:

  • Documentation assistance

  • Scheduling and administrative support

  • Client psychoeducation

However, you may decide AI should not be used for:

  • Clinical decision-making

  • Crisis intervention

  • Direct assessments or therapeutic relationships

Having these boundaries up front ensures alignment across your team and protects against mission creep.
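
If your team also builds internal tooling (for example, an intake checklist or a request form for new AI use cases), the scope section can even be encoded directly. Here is a minimal sketch in Python, using hypothetical category names, of how permitted and prohibited uses might be represented and checked:

```python
# A minimal sketch of encoding an AI-use scope policy as data.
# Category names here are hypothetical; adapt them to your own policy.

PERMITTED_USES = {
    "documentation_assistance",
    "scheduling_support",
    "client_psychoeducation",
}

PROHIBITED_USES = {
    "clinical_decision_making",
    "crisis_intervention",
    "direct_assessment",
}

def is_permitted(use: str) -> bool:
    """Return True only for explicitly approved uses; default to 'no'."""
    if use in PROHIBITED_USES:
        return False
    return use in PERMITTED_USES  # anything unlisted is not approved

# Example: an unlisted use is treated as out of scope until reviewed.
assert is_permitted("documentation_assistance")
assert not is_permitted("crisis_intervention")
assert not is_permitted("sentiment_scoring")  # not yet reviewed
```

The design choice worth copying, even in a paper-only policy, is the default: anything not explicitly approved is out of scope until it has been reviewed.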

Align with Ethics and Core Values

AI use must be rooted in ethical principles, not just efficiency. Your policy should reaffirm the NASW Code of Ethics, highlighting:

  • Respect for client dignity and autonomy

  • Transparency in AI tool usage

  • Cultural humility and anti-oppressive practice

Make it clear that AI tools are meant to enhance, not replace, human connection and clinical insight.

Center Data Privacy and Confidentiality

Client confidentiality is non-negotiable. Your policy should require:

  • The use of HIPAA-compliant and secure tools

  • Explicit client consent before using any AI-supported service

  • Data encryption and anonymization protocols, when appropriate

Tools like Berries AI publicly outline their HIPAA compliance and security standards, making them easier to vet against your policy requirements.
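
If staff ever paste free text into an AI tool, a simple pre-submission redaction step can lower the stakes. The Python sketch below is illustrative only: the regular expressions catch a few obvious identifiers (phone numbers, emails, dates, SSNs), and this is not a substitute for a HIPAA-compliant vendor agreement or a formal de-identification review:

```python
import re

# A minimal sketch of scrubbing obvious identifiers before text leaves
# your systems. These patterns catch only simple cases; real PHI review
# requires a formal de-identification process, not just regexes.
REDACTIONS = [
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tags."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

note = "Client called from 555-201-3344 on 3/14/2025 to reschedule."
print(redact(note))
# -> "Client called from [PHONE] on [DATE] to reschedule."
```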

Establish Clear Boundaries for AI Use

Don’t leave it to chance. Your policy should clearly communicate:

  • Which tasks require human oversight

  • Which activities AI is permitted to assist with

  • Where AI is not allowed to replace human judgment

For example, while AI might help generate first drafts of case notes, final decisions, interventions, and diagnoses must remain fully within clinician control.

Promote Transparency and Accountability

AI use should be transparent to clients and staff alike. Include guidance in your policy on:

  • How to document AI-supported activities in clinical records (for example, a brief note such as "Initial draft generated with an AI scribe; reviewed, edited, and approved by the clinician")

  • How to talk with clients about AI use in plain language

  • Who is responsible for monitoring AI integration in the agency

This builds trust, reduces misinformation, and ensures clients remain informed participants in their care.

Monitor for Bias and Promote Equity

Bias in AI is real, especially when tools are trained on non-representative or incomplete data. Your policy should outline:

  • Protocols for reviewing AI outputs for fairness and inclusivity

  • A preference for tools that reflect diverse populations and inclusive design

  • Strategies to monitor and respond to any disparities created by AI use (one simple audit sketch follows)
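
What monitoring looks like in practice depends on your tools, but even a simple periodic audit is a start. Here is a small Python sketch, with made-up field names and data, that compares how often AI-drafted notes needed substantial clinician rewrites across client demographic groups:

```python
from collections import defaultdict

# A hypothetical audit sketch: records of AI-drafted notes, each tagged
# with a demographic group and whether the clinician had to substantially
# rewrite the draft. Field names and data are illustrative only.
audit_log = [
    {"group": "A", "needed_rewrite": False},
    {"group": "A", "needed_rewrite": True},
    {"group": "B", "needed_rewrite": True},
    {"group": "B", "needed_rewrite": True},
]

def rewrite_rates(log):
    """Rate of substantial rewrites per demographic group."""
    totals, rewrites = defaultdict(int), defaultdict(int)
    for record in log:
        totals[record["group"]] += 1
        rewrites[record["group"]] += record["needed_rewrite"]
    return {g: rewrites[g] / totals[g] for g in totals}

print(rewrite_rates(audit_log))  # e.g., {'A': 0.5, 'B': 1.0}
```

A rate gap on its own doesn’t prove bias, but it tells you where to look; your policy can name who runs the audit and what happens when a disparity appears.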

Build a Strong Training and Learning Culture

AI tools evolve quickly. Ensure your staff is trained on:

  • The specific AI tools your agency uses

  • Ethical considerations and data security

  • How to identify potential misuse or ethical conflicts

Commit to ongoing professional development through annual trainings, team discussions, and external workshops. You can start with the free resources at www.theaisocialworker.com.

Acknowledge Environmental Impact

AI isn’t just a digital issue; it’s an environmental one. Consider including:

  • A statement about minimizing unnecessary AI use

  • Preference for energy-efficient tools or those that disclose their carbon impact

  • Ways your agency offsets its environmental footprint (e.g., green servers, paper reduction)

Ethical leadership today includes awareness of, and action on, impacts from the local to the global scale.

Practical Tips for Getting Started

Begin with a manageable scope, but treat it seriously. Developing an AI policy doesn’t need to be an overwhelming task. Start small and build from there, but whatever you create, ensure it is formal, documented, and accessible.

For Solo Practitioners

If you’re a solo practitioner, draft a concise one-page AI use policy that outlines the tools you use, how you protect client data, and your boundaries around AI-generated content. This document serves both as an ethical compass and a safeguard should questions arise about your practice. You can start with this template here.

For Agencies and Larger Organizations 

Initiate a cross-stakeholder collaborative process. Convene a diverse work group that includes clinicians, supervisors, clients or client advocates, compliance leads, and IT support if available. This collaborative model ensures that multiple perspectives, especially those of clients, are reflected in the policy and that ethical use is centered, not just efficiency.

Develop a Living Document

Your policy should be dynamic, not static. AI tools, regulations, and ethical guidelines are evolving rapidly. Schedule regular reviews at least annually, or more frequently if your agency is actively integrating new AI tools. Use each review as a chance to reflect on how AI is working (or not) in your setting, and adjust the policy accordingly.

Establish Training

Build training into the rollout of your policy. A policy is only useful if everyone understands and follows it. Include guidance for onboarding new staff, and build in training or professional development for existing staff to promote ethical and responsible AI use. Most importantly, align your AI policy with your practice values: it should reflect your commitment to ethics, privacy, transparency, and client-centered care.

Leading AI Use Responsibly and Intentionally

Creating an AI policy is about shaping your practice culture and aligning your organization with the best ethical practices. When we lead with intention and transparency, AI becomes a tool for empowerment, not confusion or harm.

So let’s stop reacting to AI and start shaping its role in our work. Building a thoughtful policy is the first step toward doing that, with clarity, care, and a commitment to the people we serve.

Additional Resources on AI Policy Development for Agencies

BC Association of Clinical Counsellors (BCACC). (2025). AI and Clinical Practice Guidelines. https://bcacc.ca/wp-content/uploads/2025/03/BCACC_AI_Guidelines_March_2025.pdf

Golden, A., & Aboujaoude, E. (2024). The Framework for AI Tool Assessment in Mental Health (FAITA-Mental Health): A scale for evaluating AI-powered mental health tools. World Psychiatry, 23(3), 444–445. https://doi.org/10.1002/wps.21248. PMID: 39279357; PMCID: PMC11403176.

Utah Office of Artificial Intelligence Policy & Utah Division of Professional Licensing. (2025, April). Best practices for the use of artificial intelligence by mental health therapists: Executive summary. https://ai.utah.gov/wp-content/uploads/Executive-Summary-Best-Practices-Mental-Health-Therapists.pdf