Rethinking the foundations of ethical AI
Guidelines can ensure human control of advanced automated systems
By Joseph Byrum
Engineers are sometimes their own worst enemy. They will de-
sign a system with the sole intention of creating the best possible
results; the last thing they want is a product skewed with bias. So
how could a system designed by engineers and programmers who
are opposed to bias in all its forms ever produce the opposite of the
intended outcome?
The introduction of unintentional bias in complex systems is nothing new.
A decade ago, Nikon’s digital cameras used a primitive form of face recognition that would identify subjects in a photograph and automatically focus on the human face. Some commentators branded the feature racist because the camera failed to recognize Asian faces as human. It is unlikely the Tokyo-based camera giant secretly harbored such a grudge. Nonetheless, the company was caught up in a minor but embarrassing controversy.
Fast forward to today, when far more advanced facial recognition algorithms, like Amazon’s Rekognition, have generated the same racial bias charges – this time from weightier sources such as the MIT Media Lab and the National Institute of Standards and Technology (NIST).
Fortunately, developers of these algorithms responded swiftly once the problems were brought to their attention, and the revised systems produced better results. But the damage was already done. In the age of Twitter-fueled, short-form media consumption, many will remember the initial incendiary “racist AI” headlines. Few will ever see the more modest “the AI has been marginally improved” follow-up stories, if they are ever written.
Unexpected results like these and the resultant media tempests represent the greatest long-term threat to the success of advanced artificial intelligence systems. Combine this with science fiction’s near-universal depiction of AI (in this case, referring to artificial general intelligence) as an evil force attempting to destroy the world, and the public will demand that politicians take action to protect them from AI in all its forms.
We shouldn’t let the problem get that far. If society ever expects to progress beyond the consumer-grade “AI” found in Alexa, Siri and Google Assistant, we need to make a commitment to shore up the ethical foundations of AI development so that we can end unforced errors and unintentional bias. Doing so will produce better results and open the door to revolutionary advances in business efficiency.
Validation is the key to ethical AI
Last year, the Institute of Electrical and Electronics Engineers released ethically aligned design guidelines that are meant to encourage developers of automated systems to reflect on the principles that, if followed, will help to avoid such problems. The all-encompassing principles deal with many different
potential problems, but those following them will have to think about, then implement, effective validation routines. Validation is more than half the battle in preventing unintended consequences in AI.
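What might such a validation routine look like? A minimal sketch follows, in Python with hypothetical names – the model object and the labeled holdout set are assumptions, not part of any IEEE standard. The essential idea is to report accuracy per subgroup rather than as a single overall number, which is exactly how failures like the ones described above can hide:

from collections import defaultdict

def validate(model, holdout):
    # holdout: list of (features, true_label, subgroup) tuples;
    # model is a hypothetical trained classifier with a predict() method.
    totals, correct = defaultdict(int), defaultdict(int)
    for features, true_label, subgroup in holdout:
        totals[subgroup] += 1
        if model.predict(features) == true_label:
            correct[subgroup] += 1
    # Report accuracy per subgroup so a lagging group is visible,
    # not averaged away in a single headline number.
    for subgroup in sorted(totals):
        accuracy = correct[subgroup] / totals[subgroup]
        print(f"{subgroup}: {accuracy:.1%} across {totals[subgroup]} cases")

An overall accuracy figure can conceal a subgroup the system fails badly; breaking the report out this way surfaces the failure before release rather than in the headlines.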
One issue with deep learning and machine learning variants
of artificial intelligence is that the end users and developers
generally have no idea how an algorithm comes up with a
solution for the problem it was created to solve. The whole
point of this subset of AI is that these systems can create their
own algorithm to evaluate data based upon an analysis of a
training dataset.
Let’s say engineers wish to create a machine learning pro-
gram that diagnoses a certain type of disease from an X-ray.
Their first step would be to approach medical experts to col-
lect images from a set of individuals known to have the disease
and a second set from individuals without it. The developers
would pay special attention to finding the most difficult edge
cases that might trip up a simple program. The trickiest photos
might, in fact, even fool a physician.
A machine learning algorithm would process the training dataset by making a catalog of all the distinctive visual features of the disease. For instance, it might note black spots of a certain shape and size in the lung as a sure sign of disease. It would then compare these images to what the lungs of a healthy subject look like. The algorithm would assign a weight to each feature.
Thus armed with training data, the program would ex-
amine a new X-ray image and evaluate how many healthy
features are contained in the image and how many diseased
features are present. It would then make a statistical evaluation
based on what has been “learned” from the training dataset to
decide whether it is likely that the picture came from someone
with the disease.
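In rough code, the two steps just described – learning a weight for each cataloged feature, then scoring a new image against those weights – might look like the sketch below. The extract_features function is a hypothetical stand-in for real image analysis, and production systems estimate weights statistically rather than by simple counting:

def learn_weights(diseased_images, healthy_images, extract_features):
    # Weight each feature by how much more often it appears in
    # diseased training images than in healthy ones.
    weights = {}
    for image in diseased_images:
        for feature in extract_features(image):
            weights[feature] = weights.get(feature, 0) + 1
    for image in healthy_images:
        for feature in extract_features(image):
            weights[feature] = weights.get(feature, 0) - 1
    return weights

def classify(image, weights, extract_features, threshold=0):
    # Sum the learned weights of the features found in a new X-ray;
    # a score above the threshold suggests disease.
    score = sum(weights.get(f, 0) for f in extract_features(image))
    return "diseased" if score > threshold else "healthy"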
Such algorithms aren’t static; they are designed to make adjustments according to the algorithm’s ongoing successes and failures. For instance, if a black spot 2 pixels wide triggers too many false positives, the AI might adjust its sensitivity to trigger a positive indication only for spots 3 pixels wide.
Human intervention isn’t required for the algorithm adjustment, nor is human understanding or approval. The whole point is to automate this process so that instead of requiring manual intervention each time an equation goes wrong, self-learning code can make the changes needed for optimum results. As much as that contributes to overall efficiency, it’s also a potential liability and source of error.
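That adjustment loop is easy to picture in code. The sketch below is illustrative only – the names and the 10% false-positive tolerance are assumptions – but it shows how a threshold can drift with no human editing the rule:

def adjust_threshold(min_spot_px, recent_results, max_fp_rate=0.1):
    # recent_results: list of (flagged_positive, actually_diseased) pairs
    # gathered from cases where the true diagnosis later became known.
    positives = [truth for flagged, truth in recent_results if flagged]
    if not positives:
        return min_spot_px
    false_positive_rate = positives.count(False) / len(positives)
    if false_positive_rate > max_fp_rate:
        min_spot_px += 1  # e.g., stop flagging 2-pixel spots, require 3
    return min_spot_px

Each adjustment is individually sensible, yet the accumulated behavior is something no human ever reviewed.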
Approaching a solution through principles
Machine learning will always have these limitations because a
training dataset of necessity presents only a slice of reality. The
real world consists of many layers of complexity, with count-
less interacting factors that conspire to confound anyone or
anything attempting to come up with a simple algorithm that
can produce perfect results.
As with dodgy digital camera image recognition, the ethical problems aren’t caused by unethical engineers. They’re caused by engineers taking shortcuts in development as a way to manage the overwhelming complexity of the task at hand.
IEEE’s eight ethical design principles are designed to get AI
creators thinking broadly about human rights, human well-
being, privacy, transparency, accountability, potential misuse
and competence. These principles direct development teams
to think in terms of making AI that advances the interests of
humanity, ensuring the system works in a way that’s highly
documented and under human control.
A developer adhering to these standards will create an automated system with an audit trail allowing inspection of why the AI made any given choice. In this way, transparency allows the system’s human creator and owner to be accountable for the decisions. It also makes it more likely that developers will be vigilant about the possibility that someone would manipulate or otherwise exploit the AI.
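One way such an audit trail might work in practice – the record fields below are illustrative assumptions, not a format the IEEE guidelines prescribe – is to log every decision alongside the evidence behind it:

import json
from datetime import datetime, timezone

def audited_classify(image_id, features, weights, classify_fn,
                     log_path="audit.log"):
    # classify_fn is a hypothetical scoring function like the
    # classify() sketch earlier in this article.
    decision = classify_fn(features, weights)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "image_id": image_id,
        "features": features,  # what the system saw
        "feature_weights": {f: weights.get(f, 0) for f in features},
        "decision": decision,
    }
    # Append one JSON line per decision; a reviewer can later
    # reconstruct exactly why the system made any given call.
    with open(log_path, "a") as log:
        log.write(json.dumps(record) + "\n")
    return decision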
AI is focus at IISE symposium,
Annual workshop
A comprehensive view of artificial intelligence, its applications
and potential for ISEs is scheduled for the Artificial Intelligence
Symposium, a virtual event held in conjunction with the IISE
Engineering Lean & Six Sigma Conference 2020, Oct. 12-14. The conference, originally planned for Atlanta, was rescheduled as a virtual event.
The symposium will feature expert speakers including
organizers Ben Amaba, global chief technology officer for IBM
Watson and Cloud Division, and Michael Testani, director of
Industrial Outreach & Continuing Professional Education at
Binghamton University. Visit iise.org/LeanSixSigma for updates.
Amaba and Testani also will lead a preconference workshop
on artificial intelligence at the IISE Annual Conference & Expo
2020 Oct. 31-Nov. 3 at the Hyatt Regency in New Orleans.
“Creating a Business Case for Artificial Intelligence Using Design Thinking” is set for 8 a.m. to 5 p.m. Oct. 31 and will discuss the many benefits of AI and its application to specific businesses. To register, visit iise.org/Annual.
Learn more about AI applications with Amaba and Testani
in a Season 1 episode of Problem Solved: The IISE Podcast, at
https://link.iise.org/iisepodcast_ai.
Perhaps more importantly, the principles urge the creation of standards to validate that the AI and its human operators alike are effective in their respective tasks, getting the job done properly. In all cases, a human must always be overseeing what’s happening, with the power to step in and correct potential mistakes.
Augmented intelligence bypasses
common ethical dilemmas
Deep learning and machine learning get all the attention these
days, but implementation of the IEEE principles is more dif-
ficult with these forms of AI because they lack transparency.
An alternative would be to concentrate on augmented intel-
ligence systems that don’t need to be squeezed and molded into
something that might fit under ethical guidelines. Augmented
intelligence remains human-centered by design.
With augmented intelligence, the AI system by design is always under a human operator’s full control. The job of augmented intelligence algorithms is to process data and provide refined intelligence to their user. The system can also offer suggested courses of action for the operator, who reviews those and makes a final decision about what to do based upon the information presented.
Such systems are also transparent by design; because the in-
formation is provided to the human operator, there’s a clear
chain of data to follow to determine how each decision was
formed. There’s no mystery involved.
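The pattern is simple enough to sketch. In the hypothetical routine below, the algorithm ranks its suggestions and shows the evidence for each, but nothing happens until the operator chooses:

def augmented_decision(suggestions):
    # suggestions: list of (action, supporting_evidence) pairs,
    # ranked by the algorithm.
    for i, (action, evidence) in enumerate(suggestions, 1):
        print(f"{i}. {action} -- because: {evidence}")
    choice = input("Pick a number, or type a different action: ")
    if choice.isdigit() and 1 <= int(choice) <= len(suggestions):
        return suggestions[int(choice) - 1][0]
    return choice  # the operator can always override the system

Because the evidence is displayed at the moment of decision, the chain from data to action is visible to the very person accountable for it.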
It’s also better in terms of protecting human well-being.
Augmented intelligence is similar to driving a car with a GPS
device taking care of navigation, or with a lane assist feature
that nudges drivers to keep them from veering off-course. It’s
entirely possible for an evil driver to use a car so equipped to
hit another vehicle or pedestrian, but that would only happen
as the result of the operator’s deliberate choice. The system
itself remains within ethical boundaries.
With a fully autonomous car, the systems are far more vulnerable to error, and software glitches or hacking could cause unwanted results, to the detriment of human well-being. Human control acts as an insurance policy for human interests – it’s not a guarantee of good results, but it certainly helps.
Why ethics matters:
Building the intelligent enterprise
Advancing ethical AI is a critical step toward having AI sys-
tems capable of assisting all the functions of a business. I refer
to a company designed from the ground up to use augmented
intelligence to boost the decision-making abilities of its em-
ployees as the intelligent enterprise. Judging from the benefits of one-off AI-optimized systems, unlocking the productivity of every employee in a business would unleash a step change in efficiency, to the benefit of consumers and business owners alike.
But the intelligent enterprise will never become reality if
AI development efforts are directed toward dead ends or sys-
tems the public will never fully trust. Ensuring more robust
and ethical development of AI systems in these early days is
the most critical step in achieving the long-term potential of
automated systems.
Joseph Byrum is chief data scientist at Principal. He was previously a senior R&D and strategic marketing executive in life sciences-global product development, innovation and delivery at Syngenta. In that role, he was chief architect of initiatives that won Syngenta the 2016 ANA Genius Award in Marketing Analytics and the 2015 Franz Edelman prize for contributions in operations research and the management sciences. His bachelor’s degree in crop and soil science and his master’s degree in genetics are from Michigan State University. He earned a master’s degree in business administration from the University of Michigan Stephen M. Ross School of Business. His Ph.D. in quantitative genetics is from Iowa State University. He is an IISE member.