Wed, June 8, 2022
People tend to overtrust sophisticated computing devices, especially those powered by AI. As these systems become more fully interactive with humans in day-to-day activities, the ethical considerations of deploying them must be investigated more carefully. Bias, for example, is often encoded in AI algorithms and can manifest itself through them; when humans then take guidance from those algorithms, the result can be excessive trust. Bias compounds this risk of overtrust because these cyber-physical systems learn by mimicking our own thinking processes, inheriting, for example, our implicit gender and racial biases. Such human-AI feedback loops can directly affect the overall quality of the interaction between humans and machines, whether in healthcare, job placement, or other high-impact life scenarios. In this talk, we will discuss various forms of bias embedded in our machines and possible ways to mitigate their impact on cyber-physical human systems.