One look says more than a thousand bits.

The long-predicted vision of a future in which robots govern our everyday lives has not yet come true. But everyday production is already impossible to imagine without robots. And wherever humans and machines interact, the two need to communicate. Can this work through gaze?

Customer
HU Berlin, Human Factors Consult
Launch
2020
Scope
Design Research, Graphic Design, Illustration, Animation
Machina quo vadis?

Anyone who has a robot vacuum cleaner at home knows the problem: you never know exactly what the little machine is going to do next. From time to time it even runs over your foot - usually a manageable risk of fatal injury. The situation is different with industrial robots. They are much larger and have immense power. This means that there is a high risk of injury for people working with them. Although modern robots already have various safety systems - sensors for distance, contact or noise - painful accidents at work cannot yet be completely ruled out. What if humans could intuitively predict where the robot will move next?

"Intendicate", a joint research project of Berlin's Humboldt University and Human Factors Consult GmbH, is investigating exactly how this could work.

How is movement anticipated?

A person usually looks at the object of their desire before they reach for it. Gaze and posture reveal what they intend to do. As children, we learn to perceive these movements even out of the corner of our eye and to react to them automatically. Compared to other primates, humans have particularly easily recognizable eyes thanks to the visible white of the eye (the sclera). One hypothesis is that this feature evolved to make it easier for conspecifics to follow an individual's line of sight at close range.

The "Intendicate" research project aims to investigate whether this hypothesis can also help human-machine communication. The first step was to equip the industrial robot "Sawyer" with a screen. It shows two large, cartoon-like eyes with eyebrows that accompany various actions - they can look in a certain direction, blink, widen and more.

All eyes on the design.

However, the current design of the eyes is too emotional and playful for the Sawyer robot and its intended use and is therefore more easily misunderstood. Some study participants described it as "cute", which could lead to its power and potential danger being underestimated. In order to find the right balance between character and pragmatism, we were brought in as a partner for the project.

Based on the current state of development, we carried out design studies that were evaluated with test subjects to ensure clear, targeted information transfer between machine and human: free of misinterpretation and irritation. The goal: greater perceived and tangible safety for users.

We wanted to know: How abstract can the representation of the eyes become? Which learned human attributes are obsolete, and which are essential? And what does it take to make visual communication more understandable for users? To this end, we gradually reduced the designs, stripping away human features such as eyelashes, eyelids and irises, down to a purely graphic representation.

It became clear that to recognize a direction of gaze, we actually only need an indicator in a clearly delimited space - like the iris in the context of the eyeball. As soon as you remove the outer frame, the eyeball, orientation is lost.

Pupils fly through space without reference.

Two simple frames facilitate orientation.


For the outer shape we chose a circle. Unlike any other shape, it has a constant radius, which makes it possible to display clear and mathematically exact positions. It is also the shape we expect from an eye, and therefore the one we can read best.
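The principle described above can be sketched in a few lines: the pupil is an offset vector that is clamped inside the circular frame, because without the frame the same offset would carry no directional meaning. This is a minimal illustration with hypothetical coordinates and function names, not the project's actual implementation.

```python
import math

def pupil_position(cx, cy, dx, dy, frame_r, pupil_r):
    """Place a pupil inside a circular eye frame.

    (cx, cy): centre of the frame; (dx, dy): desired gaze offset;
    frame_r / pupil_r: radii of the frame and the pupil.
    The pupil centre is clamped so the pupil never leaves the frame.
    """
    max_offset = frame_r - pupil_r
    dist = math.hypot(dx, dy)
    if dist > max_offset and dist > 0:
        # Scale the offset back onto the largest circle the pupil can reach.
        scale = max_offset / dist
        dx, dy = dx * scale, dy * scale
    return cx + dx, cy + dy

# A gaze far to the right is clamped to the frame's edge:
print(pupil_position(0, 0, 100, 0, frame_r=50, pupil_r=10))  # (40.0, 0.0)
```

Because the circle has a constant radius, this clamp works identically in every direction, which is exactly what makes positions unambiguous to read.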

In the first design and animation phase, we explored various designs, always looking for the right balance between natural and artificial effects. We took very different approaches. We tested how many pairs of eyes are actually needed - is one too few, are sixteen too many? Which style is the right one? Should we stick with classic line graphics or take inspiration from the pixel graphics of the early 90s, for example? Does the graphic suggestion of three-dimensionality help? Or are there other indicators, such as time or movement, that we can hint at?

As a control, we also tested arrows as a design element instead of eyes. At first glance they seemed well suited to directional cues, but we quickly realized that they made no sense for this application. To communicate spatially, the arrows had to be three-dimensional; yet when "looking" straight ahead they were no longer recognizable as arrows, and the target could not be clearly localized.



A selection of designs was then positioned with mathematical precision in a 3D model of the test environment, and their viewing directions were calculated and animated accordingly. By this point, at the latest, it became clear that two eyes are needed to display an exact direction of gaze in space. The decisive factor is the so-called convergence: how the angles of the two eyes behave relative to each other (how much the eyes "squint"), which reveals whether they are focusing on something near or far.
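The convergence idea can be made concrete with a small geometric sketch. The eye positions, units and function names below are assumptions for illustration, not the project's model: each eye gets a unit vector toward the target, and the angle between the two vectors grows as the target comes closer.

```python
import math

def gaze_vectors(left_eye, right_eye, target):
    """Unit vectors from each eye centre toward the target (3D tuples)."""
    def unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)
    toward = lambda eye: unit(tuple(t - e for t, e in zip(target, eye)))
    return toward(left_eye), toward(right_eye)

def convergence_angle(left_eye, right_eye, target):
    """Angle between the two gaze vectors, in degrees.

    A large angle means the eyes 'squint' at a nearby target;
    an angle near zero means the target is far away.
    """
    l, r = gaze_vectors(left_eye, right_eye, target)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(l, r))))
    return math.degrees(math.acos(dot))

# Eyes 10 cm apart: a target 30 cm ahead converges strongly,
# one 5 m away hardly at all.
left, right = (-0.05, 0.0, 0.0), (0.05, 0.0, 0.0)
near = convergence_angle(left, right, (0.0, 0.0, 0.3))
far = convergence_angle(left, right, (0.0, 0.0, 5.0))
print(round(near, 1), round(far, 2))  # 18.9 1.15
```

Animating each pupil along its own gaze vector, rather than shifting both identically, is what lets an observer read not just a direction but a point in space.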


The user tests began after the design phase. We examined how the eyes are perceived in the user's peripheral field of vision. In digital experiments, we used eye tracking to check how our animated designs hold up against human eyes and how well people can tell where the robot is looking. The research continues, and we are excited to see which robot eyes we will be looking into in the future.