In conversation with Jens Nachtwei
The psychologist researches and teaches on the changing world of work and human–machine interaction
“A humane future does not lie in more technology, but in greater human judgement in how we use it,” says engineering and organisational psychologist Jens Nachtwei in his book Zukunft der Arbeit an der Zukunft, which brings together more than 150 expert contributions from academia, business, public administration, and civil society. The volume aims to offer young people orientation for their future working lives. This is one of the key themes of the 46-year-old psychologist, who grew up in Alt-Adlershof and has spent the past twenty years researching and teaching on the changing nature of work and human–machine interaction at Berlin’s Humboldt-Universität.
Adlershof Journal: What fascinates you most about human–machine interaction: the novelty, the unpredictability, or something else?
Jens Nachtwei: I’m not the ‘techie type’, but a psychologist with a passion for philosophy. So what fascinates me is that technology confronts us with ourselves. The more powerful technology becomes, the more we are forced to ask what it actually means to be human. What I find particularly interesting is that the spread of AI in recent years has finally pushed this debate beyond purely academic circles. It is now being discussed much more widely, which I personally find very rewarding and which greatly enriches discussions with students in my courses.
Do you see more opportunities or more risks?
I will have to give you the standard response from engineering psychology that I learned as a graduate student: it depends. Technology creates opportunities for those who are in a position to protect themselves and perhaps even shape developments. For others, who are exposed to certain technologies without protection, it becomes risky. Think, for example, of surveillance and discrimination through AI in the workplace or in public spaces. Some people respond by saying that technology is ultimately a design challenge. This is true. However, the question remains: who designs it, for what purpose, how, and for whom?
Knowledge about the use of AI is unevenly distributed. Is this a topic that has “rolled over us”? Is there a lack of education as well as regulation?
I do share that impression. We could have seen it coming; after all, AI is far from a new topic. But perhaps it is similar to other subjects such as the climate or war: many people see it coming, yet take action late, or sometimes too late. That says something about what it means to be human. Education and regulation are extremely important, but they are not sufficient. Take a company, for example. You can explain responsible AI use and establish regulations, but if you simultaneously reward speed in completing tasks while resources remain scarce, it should not surprise anyone when people start cutting corners.
In which areas of work do you think AI will offer the greatest opportunities—and the greatest potential to reduce workloads—in the future?
In everything that is undignified, monotonous or physically and mentally dangerous. For some people, chronic boredom already poses a serious risk; for others, the issue runs even deeper.
Does the Adlershof technology park already serve as a research field for you?
Not yet, actually. But that may well change, perhaps even because of this interview.
When should people not use AI?
Responsibility and trust cannot be automated. Ultimately, responsibility must always be assumed by a human being, and trust can only be built through our social and emotional capacities. AI may simulate these things, but it cannot replace them.
Will we lose our ability to think by using AI every day?
That depends on who you mean by “we” and by “thinking”. In general, thinking tends to shift rather than disappear. Our thinking changes. Is that better or worse?
It depends.
Peggy Mory for Adlershof Journal
Publication: “Zukunft der Arbeit an der Zukunft”: www.zukunftarbeitzukunft.de
