Dr. Howard: “We should not gender” robots

by Thorsten Koch MA, PgDip

The TWIML AI Podcast this week welcomed Dr. Ayanna Howard, a roboticist and Dean of the College of Engineering at Ohio State University. Her recent book, ‘Sex, Race, and Robots: How to Be Human in the Age of AI,’ examines the relationship between humans and robots and the issues it raises.

Howard designs and builds both hardware and software, and she works on making human-robot interaction more reliable so that robots can help in areas such as health, education, and climate change.

Building trust through trial applications

Robotics, Howard explained, is AI with a body, whereas AI in the narrower sense is virtual, without a body. In trials, the question arose whether humans see robots as authority figures. Howard described a time-critical application: a building on fire, with robots guiding the occupants out. In this experimental setting the robots made mistakes, yet even then a robot was still treated as a trusted figure, which called prior hypotheses into question.

The AI systems now being put to use are limited by certain biases, compounded by an element of over-trust (perhaps due to a lack of AI literacy at this point in time). Howard pointed out that “we are not sheep that are at the mercy” of the new systems; we owe this to mitigation effects built into the application process. Howard wants to continuously understand the “real-world implications” that AI has to learn about, and that humans have to learn about the systems they develop.

There is already a mass market for AI. Language processing, as well as the recognition of so-called “universal basic emotions” such as happiness, sadness, and anger through measurable features, is being developed and still has to be improved. Making things right despite people’s tendency to be lazy will be a challenge in the coming years. Companies producing AI and physical devices will have to know which factors would render their products difficult to sell, for various reasons: deception, for instance, arouses distrust when the right checks are lacking. Experiments that fail can be as instructive as those that succeed.
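As a rough illustration of the feature-based emotion recognition mentioned above, the following Python sketch trains a toy classifier on synthetic feature vectors. The three features and their prototype values are invented for the example; a real system would extract such features from images or audio.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=42)
LABELS = ["happiness", "sadness", "anger"]

# Hypothetical per-emotion feature prototypes:
# [mouth curvature, brow tension, eye openness] -- invented values.
PROTOTYPES = {
    "happiness": [0.8, 0.2, 0.7],
    "sadness":   [0.2, 0.3, 0.3],
    "anger":     [0.3, 0.9, 0.6],
}

# Generate 200 noisy samples around each prototype.
X = np.vstack([rng.normal(loc=PROTOTYPES[label], scale=0.15, size=(200, 3))
               for label in LABELS])
y = np.repeat(LABELS, 200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")

Even this toy setup shows why such systems still have to be improved: the classifier can only separate emotions as well as its features and training data allow.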

Building trust through choice

One of Howard’s insights is that developers should “not gender” robots as male, female, or non-binary, for instance through the voices robots speak with. People can be expected to judge genderization in very different ways, which leads to a “cognitive mismatch”: individual users or groups would not only handle the situation in a multitude of ways but would feel less comfort rather than trust. Users should nevertheless be given some choices, Howard said.

We live in a time when AI is not completely immune to manipulation. In search, for example, a voter of one party will select different search terms than a voter of another, leading to different results. The outcome also depends on the parameters used and on how the search engine provider measures the results.

“Every element of the learning process” will introduce bias, but this bias can be mitigated to some extent. As a developer, Howard does “not like to collect data,” yet developers in general will have to prepare their AI systems very responsibly. In the area of “predictive policing,” developers and experts work in very solutions-oriented ways, Howard said. Conveniences such as personal assistants that manage appointments are among the advantages of AI that will become more important for markets and customers alike.
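To make “mitigated to some extent” concrete, here is a minimal Python sketch of one common mitigation step: reweighting an imbalanced training set so that an under-represented class is not simply drowned out. The dataset is synthetic and purely illustrative, not a description of Howard’s own methods.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_class_weight

rng = np.random.default_rng(seed=0)

# 950 samples of class 0 and only 50 of class 1: a heavily skewed dataset.
X = np.vstack([rng.normal(0.0, 1.0, size=(950, 2)),
               rng.normal(1.5, 1.0, size=(50, 2))])
y = np.array([0] * 950 + [1] * 50)

# 'balanced' weighting makes each class contribute equally to the loss,
# instead of letting the majority class dominate.
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)
print("per-class weights:", dict(zip([0, 1], weights.round(2))))

clf = LogisticRegression(class_weight="balanced").fit(X, y)
print("accuracy on the minority class:",
      round(clf.score(X[y == 1], y[y == 1]), 2))

Reweighting does not remove bias introduced earlier, such as in data collection; it only keeps the model from amplifying one known imbalance.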

Listen to the podcast at:
https://twimlai.com/how-to-be-human-in-the-age-of-ai-with-ayanna-howard/

Further reading:
https://www.amazon.com/Sex-Race-Robots-How-Human/dp/B08DSMYYNC
