AI and Human Failure: Blaming the Machine? | Brief

Blaming the machine is another example of the principle known as diffusion of responsibility. This is not about airplane autopilots, but about machine intelligence and automation, which are being deployed in ever more areas.

Whether it is predetermined algorithms labeled imperfect after things go wrong, or neural networks which nowadays operate mainly in closed settings rather than on the front line: they may work, but they are never perfect.

Accountability and the need for revision

Even when algorithms work, there is always room for improvement; after all, the world is rich and complex. The question is how things could be improved. The decisions involved are corporate, social, or political in nature. It is not only about efficiency, but also about quality.

And, let us not forget, algorithms are written or sanctioned by humans. What matters in the end are priorities, diligence, refinement and, not least, limits on algorithmic power. In sensitive areas at least, we need structures of accountability.

We cannot always know whether, and how, things could be better when it comes to AI. After all, be it in simple mechanics, search-engine operations, social networks, or criminal profiling, algorithms work more or less well most of the time. But are they perfect? In the end, they shape reality. Hence the danger of self-fulfilling prophecies, e.g. where limits are not set, and of faulty equilibria where there is no improvement or insufficient revision.

Thorsten Koch, MA, PgDip
1 August 2020
