
Kyna Fong in Fast Company: How do we get smart about designing AI for physician trust?

In this piece for Fast Company, Elation's co-founder and CEO, Kyna Fong, explores how AI can and should be designed for trust in the healthcare setting.

There is a persistent stereotype that medical professionals are Luddites. After all, this is the last profession still using both fax machines and pagers.

In truth, however, physicians are often rapid adopters of new technology, once they trust that it will deliver significant patient benefit balanced with accurate data on risk. In the U.S., the medical profession is trained to be highly adept at evaluating new drugs, diagnostic tools, and therapeutics once they are proven, and at incorporating them into standards of care for patients when appropriate.

When new innovations are slow to be adopted in medicine, it is worth being curious about why physicians are making those choices. A great example is electronic health record software, where adoption lagged for decades. That lag was justifiable: the physician experience with those technologies did not build trust. They were poorly designed, detracted from patient care, created administrative burden and notification fatigue, and increased documentation time, decreasing efficiency and therefore revenue. The result has been a disappointing cost/benefit ratio, causing many physicians to delay adoption, or to delegate use entirely, for as long as possible.

What does this mean for the current euphoria around AI in healthcare? How do we predict if the technology will find a path to rapid adoption or will lag decades behind? The key question is, how do we get smart about designing AI for physician trust?

To start to understand why these stereotypes persist, and what technologists can learn from them when developing tools for clinicians, it is critical to understand the sensitive nature of the professional practice of medicine. Clinicians carry significant professional responsibility and liability, perhaps more so than any other profession. Upon graduation from their training programs, clinicians commit to a unique code of ethics, the implications of which have grown increasingly complex in modern society. Once practicing, physicians continue to bear vast liability for ethically caring for their patients’ health, as well as for their patients’ privacy, safety, comfort, and dignity. Physicians must carefully navigate complex decisions with potentially grave consequences not only for their patients but also for their own professional credentials and insurability.

In a survey of 156 physicians conducted earlier this year, my company, Elation Health, asked physicians about their concerns related to AI in their practice. The top response reflected this responsibility: 68% agreed that they worry about “errors that decrease trust in the technology or compromise patient safety.” This was followed by concerns about putting “confidentiality and privacy of my patients’ data at risk” (63% agree), “implicit bias negatively impacting vulnerable populations” (52% agree), and “increased exposure to medical liability” (52% agree). Concerns about being replaced by the technology ranked much lower and drew less agreement.

So, how do we design AI for trust in the healthcare setting? Here are four things our team has learned are key so far from working with clinicians on AI innovation:


1. Plan for a degree of variability.

AI is less deterministic than other technologies that we’re used to building with: asking the same question of an AI system may give different answers each time. And while those answers may be highly accurate, the variation means that testing requires more effort and iteration.

We have to design systems that tolerate a degree of variability and failure beyond what we’re generally accustomed to with machines. One of the ways our team is approaching this is by taking inspiration from existing best practices in healthcare for evaluating human performance, since humans are also not 100% deterministic yet still deliver significant value.


2. Bring transparency to the black box.

AI inherently operates as a “black box,” gathering data and producing outputs without clear traces back to sources. This can be seen as risky for medical applications. How does a physician know that quality sources of information are being used? And, to the point above, what kinds of error rates may have been accepted in the design? As much as possible, bringing transparency into these processes will help physicians evaluate the risks more closely and determine whether use of an AI system is right for their own practice.


3. Design for delegation and review.

The expansion of team-based primary care had already leaned our team into a principle of “design for delegation” on our platform, but AI pushes things even further. AI’s potential to reduce administrative burdens means that a substantial portion of clinician focus may shift from documenting today to reading and reviewing in the future. It will be critical to make content and clinical context easier for physicians to consume, to avoid repeating the administrative-burden mistakes of the past.


4. Choose use cases that amplify empathy.

In architecting AI for trust in healthcare settings, it is critical to choose use cases and designs carefully. AI can affect the perceived empathy and trust in patient-provider relationships in both negative and positive ways. The Elation team’s goal is to amplify empathy in those interactions by finding opportunities to give providers leverage and to scale the quality of care they want to provide. Teams that aren’t intentional about this can fall into the trap of seeing AI as an opportunity to slash time or cost in ways that undermine the patient experience.

Designing AI to be trustworthy for clinical use is a significant challenge for the digital health field, but with the right tools, physicians will be able to spend their valuable time deepening patient care instead of on administration and documentation. When it comes to adopting new technologies for their patients, physicians must be given the confidence to trust that they are making the right decisions and that the risk is worth the reward.