At the start of 2026, the Utah Department of Commerce’s Office of Artificial Intelligence Policy announced a partnership with Doctronic to allow an AI system to legally renew certain prescriptions for patients with chronic health conditions.
The pilot presents a ground-breaking use case for AI in healthcare that can deliver potential benefits – assuming the right guardrails are in place. Utah’s approach reflects a proactive effort to explore how emerging technologies can help address access gaps, reduce friction in care, and test new models under controlled conditions. The pilot program also raises some questions – which is why we sat down with Sandy Shtab, Healthesystems VP of Industry and State Affairs, and Silvia Sacalis, VP of Clinical Services, to discuss the role of AI in treatment decisions and what needs to be considered in its governance.
How does this prescription renewal program work exactly?
Shtab: At the Doctronic prescription renewal portal, patients must confirm that they are located in Utah, enter the medication they want refilled, and select an in-state pharmacy for fulfillment. They must then upload a photo ID, along with a verification selfie and proof of an existing prescription, and pay a $4 service fee.
The AI system reviews the information to confirm a prescription history exists, then administers a health assessment in which patients must answer a series of questions before the program issues a refill. If the AI is uncertain whether a prescription should be renewed, it refers the patient to a Utah-licensed human physician.
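As a simplified illustration of the intake-and-escalation flow described above, the logic can be sketched in a few lines. The field names, the confidence score, and the escalation threshold below are assumptions for illustration only, not Doctronic's actual implementation:

```python
from dataclasses import dataclass

SERVICE_FEE_USD = 4.00  # flat fee stated for the pilot

@dataclass
class RenewalRequest:
    in_utah: bool                 # patient attests to being located in Utah
    medication: str               # medication the patient wants refilled
    pharmacy_in_state: bool       # fulfillment must be at a Utah pharmacy
    id_uploaded: bool             # government ID
    selfie_verified: bool         # verification selfie matches the ID
    prior_rx_on_file: bool        # proof of an existing prescription
    assessment_confidence: float  # hypothetical 0-1 score from the health assessment

def triage(req: RenewalRequest, confidence_floor: float = 0.9) -> str:
    """Return 'renew', 'refer', or 'reject' for a renewal request.

    The confidence_floor threshold is a hypothetical stand-in for
    whatever uncertainty criterion the real system applies before
    escalating to a Utah-licensed physician.
    """
    intake_ok = all([req.in_utah, req.pharmacy_in_state, req.id_uploaded,
                     req.selfie_verified, req.prior_rx_on_file])
    if not intake_ok:
        return "reject"  # request never reaches clinical review
    if req.assessment_confidence < confidence_floor:
        return "refer"   # uncertain cases go to a human physician
    return "renew"
```

The key design point the sketch captures is that uncertainty routes to a human rather than to a denial: the AI's only autonomous outcomes are approval or escalation.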
Sacalis: It should be noted that Doctronic has not publicly detailed which specific clinical factors are evaluated. According to the contract between Doctronic and the state of Utah, after the patient completes their health assessment, Doctronic applies evidence-based clinical guidelines to determine renewal appropriateness.
There are some parameters in place that limit the scope of where AI is being applied. Doctronic can only renew prescriptions from a list of 192 drugs, none of which are controlled substances. The list does include many drugs for the treatment of chronic conditions including hypertension, diabetes, pain, inflammation, muscle spasms, asthma, COPD, anxiety and depression, erectile dysfunction, migraine/headache, and more.
What are the strengths and limitations of relying on patient‑reported data in an AI‑supported renewal process?
Sacalis: A system like this requires patients to effectively monitor and report their symptoms when interacting with AI. And it’s worth noting that the pilot is intentionally focused on renewal scenarios where patients are already familiar with their therapy and symptoms, which may make structured self-reporting more feasible than in an initial diagnostic or treatment decision.
However, there may be scenarios where self-reported data is unreliable. Barriers such as low patient engagement, limited health literacy, cognitive impairment, or difficulty self-monitoring can all affect the quality and completeness of patient-reported information. As stewards of medication safety and outcomes, we must thoughtfully account for these variables.
This is particularly important in a pilot centered on chronic conditions. Supporting medication adherence is essential to reducing gaps in therapy. However, many of the therapies included – such as those used to treat hypertension and diabetes – address conditions that are inherently complex and require ongoing clinical assessment, lab monitoring, and therapeutic adjustments.
These elements cannot be fully replicated by AI alone. They reinforce the continued importance of a human-in-the-loop model as it relates to complex care decisions, where pharmacists and other clinicians apply clinical judgment to interpret data, identify risk, and individualize care.
That said, we should also recognize the opportunity. In care environments where provider touchpoints may be limited, structured AI-enabled channels that consistently capture patient-reported information can enhance visibility between visits. When thoughtfully integrated into pharmacist-led care models, these tools have the potential to extend our reach, surface issues earlier, and ultimately strengthen – not replace – the clinician-patient relationship.
Shtab: I think there’s also a generational aspect to consider. Programs like these may be better received by younger patients and tech-savvy patients of any age, especially if these individuals are using an app or medical device to help them monitor their symptoms. In a state like Utah with so many patients living in rural areas, provider access can be a challenge, so carefully governed, technology-enabled models may offer meaningful support for continuity of care.
Access to care is cited as a key benefit of this pilot. Doctronic has stated that medication compliance is the largest driver of preventable health outcomes, and that prescription renewals account for 80% of medication activity. They believe that programs like theirs can reduce the delays that lead to medication lapses. What are your thoughts on that?
Sacalis: When thoughtfully designed and implemented with clear guardrails and an appropriate level of human oversight, AI offers meaningful potential to improve access to care, strengthen medication adherence, and prevent avoidable therapy lapses. By automating routine, protocol-driven decisions – such as refill authorizations, adherence outreach, or standardized monitoring prompts – AI can help reduce administrative bottlenecks that often delay care. This allows patients to maintain continuity of treatment while freeing clinicians to focus their time on complex clinical decision-making and high-risk interventions.
In this context, AI can serve as a force multiplier. By proactively identifying patients at risk for nonadherence, flagging gaps in therapy, or prompting timely follow-up, these systems can help ensure that treatment plans remain on track – particularly for individuals managing chronic conditions where interruptions in therapy can have significant downstream consequences.
Although the technology is new, the structure is not. Much like collaborative practice agreements, this model operates within defined clinical parameters. The difference is that certain standardized decisions are executed through AI.
Shtab: This program raises an important policy question, one that has caused tension among stakeholders. For years, pharmacists have sought expanded authority to perform similar functions within collaborative, clinician-led frameworks, without success in advancing these legislative efforts. The Utah pilot reflects how technology often advances faster than professional scope-of-practice reforms – essentially leapfrogging a well-qualified, available health professional. It does raise the question: Are we entrusting AI with decisions that are not currently entrusted to other qualified health professionals?
However, I will note that this Doctronic pilot clearly identifies all AI-generated renewals to the pharmacist, and pharmacists retain full authority to escalate any renewal to a Doctronic-affiliated physician who is licensed to practice medicine in the state of Utah.
How important is clinical oversight in determining whether AI‑supported renewal models like this can scale responsibly?
Sacalis: Clinical oversight is foundational to whether AI‑supported renewal models like this one can scale responsibly, particularly when decisions affect individual patients and carry the potential for serious downstream consequences.
At a high level, the impact will depend on the depth and precision of the clinical logic and how effectively it evaluates individual patient scenarios. With respect to this particular program in Utah, my perspective is balanced, with both opportunities and areas that warrant careful consideration.
The pilot is limited to processing 30-, 60-, or 90-day renewals of medications that have already been prescribed by a licensed provider. It cannot initiate new therapies or modify treatment plans. While these guardrails narrow the scope of clinical decision‑making, they do not eliminate risk – particularly in patients with chronic conditions where stability can change subtly but meaningfully over time.
From a pilot-design perspective, the phased review approach is likely intended to assess performance at scale while maintaining operational feasibility. However, without full visibility into the clinical rules driving the program, it is difficult to fully assess the risk. Oversight is not simply about whether the system works as intended – it is about how the underlying clinical logic is governed, validated, and continuously monitored and refined over time.
Let’s consider the phased review structure. Only the first 250 patients will have prospective physician review before prescriptions are sent to the pharmacy. The design then shifts from a human-in-the-loop to a human-on-the-loop model, where the next 1,000 patients move to retrospective review, and in the final phase, just 5-10% of renewals will be audited monthly after the fact. Under this model, some patients could receive renewals for an extended period – potentially up to a year within the scope of the pilot – without direct human review. From a clinical risk perspective, this raises understandable concern. Even if the system performs well overall, the consequences of missing a small number of patients can be significant.
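The phased oversight schedule described above can be summarized in a small function. The phase boundaries (250 patients, then 1,000, then a 5-10% monthly audit) come from the pilot design as described; the function shape and labels are illustrative:

```python
def review_phase(patient_index: int) -> str:
    """Return the oversight phase for the Nth pilot patient (0-indexed).

    Boundaries follow the pilot design described in the interview:
    first 250 patients  -> prospective physician review;
    next 1,000 patients -> retrospective review;
    thereafter          -> only a 5-10% monthly sample is audited.
    """
    if patient_index < 250:
        return "prospective"    # human-in-the-loop: physician signs off first
    if patient_index < 1250:
        return "retrospective"  # human-on-the-loop: reviewed after dispensing
    return "sampled-audit"      # most renewals are never individually reviewed
```

Laid out this way, the clinical concern is easy to see: past patient 1,250, the default path is no individual human review at all.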
So, while the technology may offer efficiencies, the primary consideration remains clear: how consistently and rigorously clinical oversight is maintained. For many clinicians, confidence in these models will depend less on aggregate performance metrics and more on assurance that safeguards are strong enough to minimize the risk of any individual patient being overlooked.
Shtab: And that brings up a serious question: Who is liable when something goes wrong? While Utah has temporarily relaxed its regulatory requirements, there is now a software product – not a person or physician – making decisions about a specific patient’s care. While this may not be an immediate concern, given the types of medications on the list and the fact that this is specific to renewals, what happens in this pilot will inform what we may see in the future as this technology is implemented further upstream in the treatment cycle.
The Doctronic pilot also sits in an emerging area of risk management and regulation. Can software be regulated as if it were a physician? In this case, it may be. Reports indicate that Doctronic has taken out a new type of medical malpractice insurance policy for this pilot, designed specifically for this use case. This is another first of its kind: an insurance policy that protects a corporation, not a human doctor, in the event of a medical malpractice claim.
While we don’t have answers at this stage, it’s not difficult to see that all of this potentially opens the door to more autonomous healthcare in general. AI is currently used in an assistive capacity – requiring oversight by an actual physician – when reading diagnostic studies and even lab work. But what happens when doctors trust but do not verify an AI’s decision due to time constraints, capacity, or other factors? These questions are not unique to Utah; they reflect a broader regulatory frontier as AI assumes more autonomous roles in healthcare delivery.
Given everything we’ve discussed, what should this pilot ultimately signal about the role of AI in treatment decisions?
Sacalis: The Utah pilot is a helpful example of how organizations are beginning to explore AI’s potential to streamline access, reduce administrative burden, and support patient engagement in new ways. It also reminds us that, even with these promising developments, there are inherent limits to what AI can safely manage on its own. Chronic conditions change. Health status fluctuates. Self‑reported data may be incomplete or inconsistent. And without clinical oversight, even a well‑designed model can miss subtle but meaningful indicators that affect patient outcomes.
Pilots like this are valuable precisely because they surface questions early, allowing systems to be refined before broader adoption. Alongside the valuable insights this initiative will contribute to the industry, it reinforces a core principle: Innovation in healthcare must be paired with accountability and clinical vigilance. AI can absolutely enhance care, but in clinical decision-making of this nature, it is my perspective as a clinical leader and healthcare professional that human clinicians must remain central to reviewing, guiding, and validating that care.
Shtab: From a governance standpoint, the results of this Utah pilot will serve regulators, policymakers, and care organizations well beyond this program. What is learned in this use case will inform future AI programs, not only in identifying where it can add value, but also where stronger safeguards may be needed. This program opens the door to thoughtful exploration rather than signaling a definitive shift toward automation.
As these models continue to evolve, the broader takeaway is less a conclusion than an open question: Can AI meaningfully augment clinical decision‑making at scale without eroding accountability or patient protections? Pilots like Utah’s begin to explore whether human‑on‑the‑loop oversight can provide sufficient governance as autonomy increases. The answers will depend not only on performance outcomes, but on how transparently these systems are monitored, escalated, and corrected over time.
Sandy Shtab is VP of Industry and State Affairs at Healthesystems. She leads the Advocacy & Compliance team, which regularly tracks regulatory activity via the Regulatory Recap e-newsletter. In addition to email, the newsletter is published on the Advocacy & Compliance page of the Healthesystems website.
Silvia Sacalis, PharmD, BS is VP of Clinical Services at Healthesystems, overseeing clinical strategy, services and operations within the organization. You can learn more on the Clinical Services page of the Healthesystems website.