Managing Artificial Intelligence (AI) and Healthcare Compliance

Brian Williams, MHA, MBA

Technology advancements, such as remote patient monitoring (RPM), artificial intelligence (AI), machine learning (ML), and chatbots are creating tremendous opportunities for improving virtually all aspects of healthcare. In an already overburdened system, this technology can also save time, reduce costs, and improve overall management.

As the VP of Compliance at MedTrainer, I’m not only intrigued by the potential of this technology, but also by the new questions and concerns related to AI and healthcare compliance.

In this article, I’ll review how this technology is being used, the areas of greatest concern for compliance professionals, and ideas for maintaining compliance during this evolution.

Common Uses of AI in Healthcare

AI has enabled the integration of big data to identify disease risk and changes in physical/cognitive conditions, and to provide decision support for healthcare providers, clinical team members, billing specialists, and leadership.

Patient Care

RPM digitally transmits a patient’s personal health and medical data to providers in a different location. This offers providers insight into how well a patient is responding to treatment regimens and vital sign trends, and enables remote interventions. RPM has tremendous potential for addressing health disparities, providing important services to underserved patient populations, and enabling elderly patients to receive care at home.

Pattern and Trend Identification

AI and ML enable technology to make predictions based on the healthcare organization’s own data. For example, it can predict no-shows, propensity to pay, adverse event likelihood, and more. With this data, healthcare organizations can greatly increase their efficiency and serve more patients.
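As a minimal sketch of the no-show prediction idea above: even before a full ML model, an organization can estimate risk from its own appointment history. Everything here is hypothetical (patient IDs, thresholds, the clinic-wide prior); a production system would use a trained and validated model, and the data use would itself need compliance review.

```python
# Hypothetical sketch: estimate no-show risk from an organization's own
# appointment history, then flag high-risk patients for reminder outreach.

def no_show_risk(history, prior=0.10, prior_weight=5):
    """Estimate a patient's no-show probability from past appointments.

    history: list of booleans, True = the patient missed that appointment.
    A smoothed (Laplace-style) estimate keeps new patients near the
    clinic-wide prior rather than jumping to 0% or 100%.
    """
    missed = sum(history)
    total = len(history)
    return (missed + prior * prior_weight) / (total + prior_weight)

def flag_for_reminder(patients, threshold=0.25):
    """Return patient IDs whose estimated no-show risk meets the threshold."""
    return [pid for pid, hist in patients.items()
            if no_show_risk(hist) >= threshold]

# Hypothetical appointment histories keyed by patient ID.
patients = {
    "P001": [False, False, True, True],   # missed 2 of 4 appointments
    "P002": [False, False, False],        # reliable attender
    "P003": [],                           # new patient: falls back to prior
}
print(flag_for_reminder(patients))  # → ['P001']
```

The same pattern (score from historical data, then act on a threshold) underlies propensity-to-pay and adverse-event predictions, which is why compliance review of the training data and thresholds matters.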

Administrative Functions

From HR to patient scheduling, technology is eliminating manual functions to reduce the time and cost of administrative tasks. A chatbot can answer common employee HR or IT questions, freeing humans for other tasks. Numerous AI applications are also being deployed across the industry to supercharge healthcare analytics, including population health management, drug discovery, reducing healthcare-associated infections (HAIs), and more.

Concerns Around AI and Healthcare Compliance

As AI advances, it is becoming more widely adopted across healthcare organizations, which is raising alarms for compliance professionals. These are some of the most common concerns around AI and healthcare compliance.

Protection and Security of PHI

It’s not just data in electronic health records (EHR) being used in AI that is a concern for patient privacy. AI applications can collect almost any data: audio and video recordings from telehealth appointments; fingerprint, retina, and facial data from scanners; and more. Collection of health data through apps may violate HIPAA or other federal and state laws and regulations, not to mention the increase in cybersecurity threats as more medical devices become interconnected.

Laws Are Behind the Technology

Governing bodies are racing to keep up with the AI evolution, but very few have released policies to help guide compliance professionals. The European Commission has proposed one of the more robust packages, the Food and Drug Administration (FDA) issued an action plan in 2021, the National Institute of Standards and Technology (NIST) shared guidelines on security and trustworthiness, and the World Health Organization (WHO) offers these key ethical principles for the use of AI in healthcare:

  • Protect human autonomy
  • Promote human well-being, safety, and the public interest 
  • Ensure transparency, explainability, and intelligibility
  • Foster responsibility and accountability
  • Ensure inclusiveness and equity
  • Promote AI that is responsive and sustainable

Widespread Systemic Errors, Bias, and Unanticipated Risk

When deployed poorly, AI can create widespread systemic errors, bias, and unanticipated risk. For example, when a human incorrectly codes an item, the error happens once; when ML is trained incorrectly, hundreds or thousands of items could be coded incorrectly before the error is discovered. The FDA has issued draft guidance, and the White House Office of Science and Technology Policy published the Blueprint for an AI Bill of Rights to prevent algorithmic discrimination, which may violate legal protections and ethical values.

The convergence of AI and RPM is also generating major concerns because PHI is shared between devices and databases in extraordinary ways. Propensity-to-pay scores, when combined with social determinants of health (SDOH), may create health disparities and lead to discriminatory practices based on environmental and/or social factors.

Rush to Implement

Department leaders are often so excited about the time or cost savings that AI can provide that they purchase and implement technology without notifying the organization’s compliance office or IT team. As a result, no one knows how the automated solution was developed, trained, and tested, which puts the entire organization at risk.

Maintaining Healthcare Compliance As Technology Evolves

As a healthcare compliance officer, you’re in a tough position.

You are charged with managing risk to your healthcare organization with little regulatory guidance and a constantly evolving technology landscape. Drawing on my experience as a former health system compliance officer, these are a few things I would implement to manage AI and healthcare compliance.

1. Apply the Proven Compliance Principles

There’s no need to reinvent the wheel — the seven elements of an effective compliance program, identified by the Office of Inspector General (OIG), already underpin your current compliance program, so apply them to any AI-specific plans you create. As a reminder, the seven elements are:

  • Implementing written policies and procedures
  • Designating a compliance officer and compliance committee
  • Conducting effective training and education
  • Developing effective lines of communication
  • Conducting internal monitoring and auditing
  • Enforcing standards through well-publicized disciplinary guidelines
  • Responding promptly to detected offenses and undertaking corrective action

Pro Tip: Use a learning management system to ensure training is up-to-date and easily accessible.

2. Develop Consistent Procedures

Create a procedure, or framework, for people in your organization to follow when they are interested in implementing AI in their department. This should help compliance and legal professionals manage risk, while also responding to the business’ need for AI. These procedures should include:

  • Identification of internal and external parties that should be involved, and at what point in the process
  • Guidance for selecting reliable business and technology partners (vendors) who share the organization’s values and risk tolerance
  • Requirements for data governance so all internal and external parties are aware of the security, privacy, and compliance requirements

Pro tip: Use a compliance platform to keep all policies and documents in one place, accessible to all employees.

3. Communicate Regulatory Changes

The fast pace of change in technology guarantees that regulatory changes are coming. Make sure you are communicating them effectively throughout the organization to make any technology purchases smoother. Stay up-to-date on both federal and state laws, and have a plan in place for making adjustments when laws change.

4. Leverage Your Board of Directors

A healthcare organization’s board of directors needs to understand all compliance risks, and with the adoption of AI, your board presents an opportunity. Berkeley Research Group’s Thomas F. O’Neil III suggests that filling at least one board seat with a person experienced in overseeing AI consumer products is an emerging best practice.

Pro Tip: Use these four tips to keep your board informed. 

5. Encourage Incremental Adoption

You can’t control AI adoption throughout your healthcare organization, but you can encourage best practices, which include involving compliance and legal in the process, choosing applications that have demonstrated consistency and validity, and incremental adoption. By adopting incrementally, you’ll be able to address any snags early in the process — for example, before you have thousands of items coded incorrectly.

Looking to the Future

Although the current focus on AI/ML in healthcare centers on clinical data, technology, improving health outcomes, and increasing health equity, the underlying regulations will require compliance professionals to consider how to use AI/ML to their advantage in the near future.
