An interview with Linda Onnasch: The Berlin psychologist is researching how humanlike machines ought to be

04. March 2019

Research partners: Linda Onnasch with industrial robot Sawyer. Picture: © WISTA Management GmbH

She was never interested in becoming a therapist. Instead, psychologist Linda Onnasch is investigating how humanlike a machine ought to be and who should have the final say in the cooperation between human and machine. For all of her enthusiasm for technology, she states clearly: it harbours a lot of potential for manipulation. Since October 2017, Onnasch has been a junior professor of engineering psychology at the Humboldt-Universität zu Berlin (HU). Having grown up in the urban Ruhr area, the psychologist’s choice of career was influenced by one very special car in particular.

Adlershof Journal: How humanlike should machines be?

Linda Onnasch: That depends on the application. In care situations, for example, a humanoid appearance is very helpful. Patients tend to feel less fear of contact and know intuitively how to interact with a humanoid care robot – talking to it, for example, instead of having to push buttons. In the industrial sector, we have to look at what the common goal of humans and robots is, because humanising the machines also harbours dangers.

What dangers are those?

That we lose sight of a machine’s function. People are making machines more humanoid irrespective of their purpose. They give them names, apply social norms and values when interacting with machines, and develop strong emotional attachments to them. Humanising descriptions have a similar effect. Inanimate objects are then perceived as animate. An extreme example of this is bomb-disposal robots developed for use in war zones. To rescue them from threatening situations, soldiers have even put their own lives at risk.

You also see imposing stereotypes onto machines as a problem. Why is that?

Surveys have shown that more than 80 percent of robots in the service sector have feminine names. They are smaller than industrial robots, which usually have masculine names, and they are typically coloured in shades of pink. It’s come to a point where we have to start thinking about a women’s quota for industrial robots.

Do we tend to have too much trust in machines?

Yes, that is another danger. Machines and assistance systems are becoming smarter, but they make their decisions based on data. So the more important question is: do I, as a human, still get to make a different decision from that of a super-smart machine? Should I trust my gut feeling over the artificial intelligence? In German nuclear power plants, people bear the responsibility, while in Korean power plants everything is automated. Which is better? Ultimately, when there is any doubt, the person in charge is always made the scapegoat; but would they even have had the choice to make a different decision? Another role just as disconnected from responsibility is the safety driver in an autonomous vehicle. After hours of passively sitting there as a passenger, can I still make the right decision within a split second when there’s a danger? Distributing the tasks this way, making the human the last decision-maker, doesn’t work.

What would be a good distribution of tasks between human and machine?

When factors like incapacitation, situational awareness and performance are taken into account, the kinds of assistants that prepare information but don’t pre-empt the decision are very good. What that entails for us humans is regular training, over and over again, because we only learn and change our behaviour when we make mistakes.

How do you maintain emotional distance from machines, for example the cute NAO-type robots you are using in your research at the university?

With the NAOs, I find it easy because I see behind their facade. But at home, if my vacuum bot hasn’t vacuumed properly, I feel like he’s just had a bad day.

Where did your interest in automation psychology come from?

It was K.I.T.T., the talking AI car from “Knight Rider” on TV, which I watched all the time as a kid. I really wanted a car like that.

When was your first visit to Adlershof?

It was in 2006. I was studying psychology at the Technische Universität Berlin and attended a marketing seminar that was promoting Adlershof as a scientific location. When I started here as a junior professor in 2017, I was positively surprised at how much the location had changed. As a scientist, I am especially impressed by the spatial proximity of the institutes and companies, which promotes collaboration.

What do you do in your spare time?

Walk around the Brandenburg environs with my husband and my dog. I also love racquet sports. When it’s warm I play paddle tennis, otherwise I play squash and ping-pong. I like to cook, mostly Asian cuisine. And I spend a lot of time with friends.

Interview by Sylvia Nitschke for Adlershof Journal
