Again and again we hear the same appeal, calling out loud and clear in a number of feature articles: "If we don't train more scientists and engineers to provide us with more technology, the green transition will exhaust itself and the health services will crumble in the face of an ever-growing elderly population."
As a research scientist with a background in psychology, I have to keep asking myself: Has Norway forgotten that successful technology development also requires contributions from other disciplines? How can technology function well if we don’t incorporate user-friendliness and design together with input from ethics and the law?
On technology’s terms
As a research scientist, I talk a great deal with specialists working in technology design. They speak of many challenges that make it difficult for them to ensure that new technology interacts positively with the people who will be using it.
Much of technology design is characterised by development on technology’s own terms.
However, I believe that effective and people-friendly technology design should always be the aim, regardless of the market at which technology is targeted. And if we need examples of fields where poor design can have fatal consequences, we need to look no further than safety-critical sectors such as energy, transport and the health services.
How bad can things get?
Here are three examples of how badly things can go wrong:
- Last autumn, the insurance companies reported that we dent our cars more often than before, even though we are now driving the most high-tech cars ever produced. Designing a car equipped with devices that distract us and prevent us from keeping our eyes on the road is not a good idea. Such devices drain the capacity of the very brain centres we need to drive safely and effectively.
- Investigations following two major accidents with Boeing’s 737 MAX aircraft in Indonesia in 2018 and Ethiopia in 2019 revealed that the technology design was inadequately adapted to the pilots’ abilities to control the planes when critical situations arose on board.
- As part of the investigation following the blow-out at the Deepwater Horizon platform in 2010, with disastrous environmental consequences, it was pointed out, among other things, that inadequate information provided via screen images, sensors and instruments constituted a key contributory cause of the accident.
Developers are forgetting something very basic
The glimpses my research gives me into current technology design indicate that developers commonly forget something very basic: all technologies have to be used by people in a given context, with all the limitations that this entails.
Moreover, at the start of many development processes, resources are heavily concentrated on purely technological aspects. However, delaying key and fundamental design changes to the later phases of a process may prove expensive in the long run, and the chances of succeeding at such a late stage are in many cases limited.
We also find that it can be difficult to assess intuitively the design quality of new technologies such as automation and artificial intelligence. This is because the inner logic of different systems varies depending on the algorithms and machine learning that underpin their function.
For this reason, it is no longer just the visible layout that determines how comprehensible, predictable and transparent a technology is for its users.
Our brains have limited resources
A common argument against incorporating user perspectives at an early stage of technology development is that doing so is too demanding of resources. It is also often argued that, in the early stages of development, it is not known who the ultimate end users will be.
However, the counter-argument is that, cognitively, people are fundamentally similar in a number of ways. For example, we all have limited attention resources and a limited short-term memory.
Thus, all technology design processes should aim at solutions that demand no more of our brains' limited resources than necessary. In the long run, the technology will benefit, regardless of who the ultimate end users turn out to be.
People have to interact with the systems
Innovative and disruptive technologies such as artificial intelligence offer enormous potential. However, in most cases, people will still have to interact even with so-called 'intelligent' systems.
Also, the technology has to function in a given context, whether as part of an industrial process or in a health service department.
Thus, the introduction of such technologies requires that developers not only understand how a given system will be used, but also that they give due consideration to the capacities of our brains to receive and process information.
Input the resources and reap the benefits
Introducing the human perspective, and attention to context, into technology development may well make development processes more complex. It may also demand greater cross-disciplinary skills, more resources and more time.
However, there is good reason to assume that technologies that function effectively within the systems for which they are intended, whether in hospitals, the transport sector or in industry, carry the potential to reduce costs far in excess of any increased investment incurred during their development.
So, while we ensure the future availability of people with strong technology skills, it will also be crucial to mobilise other disciplines that interface closely with technology specialisms. This will enable us to develop technologies that interact positively with their users.
This article was first published in the newspaper Dagens Næringsliv on 14 May 2023 and is reproduced here with the permission of the paper.