
February 2020 | ISE Magazine 35
where people tend to interpret data from upper left to lower
right. Additionally, text and images should be placed with
the human visual field in mind, which is divided into two
segments, as shown in Figure 2. Placing text and images
according to the human visual field minimizes users'
cognitive load.
Clinicians may experience difficulty whenever multiple
parallel tasks overwhelm their senses. For example, an alarm
sounding on an emergency floor or a patient talking in an
outpatient setting can be distracting if the AI system is also
delivering auditory information. Such multitasking should be
avoided to reduce cognitive load. Clinicians' sense of sight can
likewise be impaired when background images or text are
embedded beneath informative text, so AI-enabled applications
should avoid such embedded content. Processing information
simultaneously through two channels, such as listening to a
patient's distress while operating a complex AI system, also
increases cognitive load.
Trust and meaningful use
As governing bodies decide which downstream features mat-
ter for AI systems, they should also keep in mind these sys-
tems will fail as a technology without acceptance and trust
from clinicians despite having good analytical performance.
Studies have shown that trust is built in a continuous manner,
demanding two-way interactions between the user and the
technology.
Initial trust is essential for ensuring the adoption of new
technology. Trust is influenced by a user’s first impression and
is built based on that person’s personality and institutional cues.
Once trust is developed, it must be nourished to be sustained.
In our context, continuous trust depends on the functioning
of the AI system in reducing users’ cognitive load and yielding
clinically meaningful outcomes.
Trust in technology has long been believed to be determined
by human characteristics (personality and ability), environ-
mental characteristics (culture, task and institutional factors)
and technology characteristics (performance, process and pur-
pose). However, the impact of meaningful use on trust has
been neglected. Meaningful use,
just as it is imposed on the
application of EHR technology,
should be imposed on AI systems
to improve the quality of care.
The meaningful use of AI sys-
tems means that specific AI should
be implemented and interpreted in a
specific manner. Depending on the
functioning of the AI algorithm, not
all systems can be generalized across
a healthcare system.
For instance, the WAVE platform
algorithm is based on five vital signs: heart rate, respiration,
oxygen saturation, temperature and blood pressure. Since such
measures are common across health systems, the system could
be employed by multiple diverse health systems. However,
other AI-based platforms, especially those based on institution-
specific EHR or image datasets, may not translate to
other EHRs. Moreover, AI trained on specific datasets, such
as patients from a single institution, may not generalize to
broader populations.
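The generalizability point above can be made concrete with a simple predeployment compatibility check: verify that every input feature a model requires is actually available in the target health system's data dictionary. The sketch below is hypothetical; the feature names and the `missing_features` helper are illustrative assumptions, not part of the WAVE platform or any real product.

```python
# Hypothetical sketch: check whether a model's required input features
# are available in a target health system's data dictionary.

# Inputs for a vitals-based model (the five vital signs cited for the
# WAVE platform, which are common across most health systems).
VITALS_MODEL_FEATURES = {
    "heart_rate", "respiration_rate", "oxygen_saturation",
    "temperature", "blood_pressure",
}

# Inputs an institution-specific EHR model might depend on (illustrative).
EHR_MODEL_FEATURES = VITALS_MODEL_FEATURES | {
    "local_lab_panel_code", "institution_triage_score",
}

def missing_features(model_features: set, site_dictionary: set) -> set:
    """Return the model inputs the deployment site cannot supply."""
    return model_features - site_dictionary

# A generic hospital data dictionary recording only standard measurements.
generic_site = {
    "heart_rate", "respiration_rate", "oxygen_saturation",
    "temperature", "blood_pressure", "weight",
}

print(missing_features(VITALS_MODEL_FEATURES, generic_site))  # set(): deployable
print(missing_features(EHR_MODEL_FEATURES, generic_site))     # two unavailable inputs
```

A vitals-only model passes the check everywhere the standard measurements are recorded, while the institution-specific model fails at the generic site, which is exactly why it may not translate across health systems.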
Increasing AI interoperability may require developers to
deliver more detailed data demonstrating that predictive algo-
rithms achieve reliable, replicable and valid results. Regulators,
in turn, should focus on balancing the transparency of pre-
dictive models against the proprietary interests and
intellectual property of algorithm developers.
Healthcare AI systems are only now being evaluated and
made available for clinical use, so the influence of the exist-
ing regulatory framework on patient outcomes is yet to be
determined. It is also uncertain what impact the 21st Cen-
tury Cures Act, passed to relax regulatory standards for low-
risk health technology, will have on the value and quality of
predictive algorithms. The FDA's Digital Health Innovation
Action Plan, issued in 2017, launched a precertification pro-
gram to analyze the clinical outcomes of AI-based algorithms.
Such efforts should be applauded but refined in line with our
recommended norms.
Some developers may decry the overregulation and
standardization of a still vaguely understood field. Indeed, any
commitment to regulate healthcare AI systems will emerge over
time and impose financial costs on stakeholders. Policymakers
should therefore be sensitive to the balance between regulation
and innovation in this evolving field.
Avishek Choudhury is a Ph.D. student and research assistant in the
School of Systems and Enterprises at Stevens Institute of Technology
in Hoboken, New Jersey. He received his master’s degree in industrial
and systems engineering from Texas Tech University. His research in-
terests include healthcare systems, artificial intelligence, clinical decision-
making, patient safety and network science. He is an IISE member.
FIGURE 2
Human visual field
Where the eyes and the brain connect to process information.