The Future of Human-Machine Interaction

Today, humans interact satisfactorily with machines when good input and output devices are available, which is why people generally prefer computers with large screens, reliable keyboards, and precise mice. In the same way, when drawing, people prefer precision graphics tablets or tablets with high-quality styluses.

It is unthinkable to draw with a mouse or to write long texts on a smartphone keyboard. Technically it can be done, but in practice it is an ordeal that anyone would want to avoid.

Looking to the future, the direction is towards devices with smaller screens and with no dedicated input devices at all. Think of smartwatches and other wearables, right up to the smart glasses that will soon hit the market.

We are already experiencing difficulties with input today, and they will only increase: often, when we want something from our device, we struggle to ask for it, because the classic ways in which we are used to relating to machines simply do not adapt to very small devices.

If input devices disappear and screens become too small, as with smartwatches, or can no longer be touched at all, as with smart glasses, we need new ways to interact with future devices.

This is where two interaction modes come to our aid, modes that already exist but that will evolve enormously in the coming years: voice technologies and gestures.

We already use voice technologies (Siri, Alexa, Google Assistant) frequently to interact with devices of various kinds, sometimes because they are the only mode of interaction available and sometimes because they are simply more convenient.

There is still a lot of work to be done on understanding natural language, grasping context, and turning our commands into tasks that can then be executed, but the investment in voice technologies is so great that significant improvements can be expected in the near future.
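To make the challenge concrete, here is a minimal sketch in Python of the kind of pipeline a voice assistant has to get right: take a transcribed utterance, resolve it to an intent with the help of context, and turn it into an executable task. The intent names and context keys are hypothetical, and real assistants rely on statistical natural-language understanding rather than keyword rules.

```python
from dataclasses import dataclass


@dataclass
class Task:
    """An executable action derived from a spoken command."""
    action: str
    target: str


def interpret(utterance: str, context: dict) -> Task:
    """Map a transcribed utterance to a task, using context to fill gaps.

    The intents and context keys here are hypothetical; real assistants use
    statistical language understanding rather than keyword rules like these.
    """
    text = utterance.lower()
    if "light" in text:
        # "Turn off the lights": which room? Borrow it from the context.
        action = "turn_off" if "off" in text else "turn_on"
        return Task(action, target=context.get("room", "unknown_room"))
    if "remind" in text:
        return Task("create_reminder", target=text)
    # Nothing recognised: ask the user to rephrase instead of guessing.
    return Task("ask_to_rephrase", target=utterance)


if __name__ == "__main__":
    print(interpret("Turn off the lights", {"room": "kitchen"}))
    # Task(action='turn_off', target='kitchen')
```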

Gestures, on the other hand, will be the preferred mode for interacting with mixed reality applications.

By moving our hands within our field of view, we will be able to interact simultaneously with physical reality and with the Augmented Reality information layers that smart glasses will place in front of our eyes.

What we need to do, and it is good to start doing it now, is identify the right interaction conventions: which gestures achieve which results, what basic functionality every Augmented Reality application should provide, and what outcome it is reasonable to expect from a given movement, word, or behavior.
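As an illustration of what such shared conventions might look like, here is a minimal sketch, with entirely hypothetical gesture and action names, of a baseline gesture-to-action mapping that every Augmented Reality application could be expected to honor, while still allowing individual applications to override non-core behavior.

```python
from enum import Enum, auto
from typing import Optional


class Gesture(Enum):
    """Hypothetical core gestures a platform might standardize."""
    PINCH = auto()       # select the object under the fingertips
    SWIPE_LEFT = auto()  # dismiss the current information layer
    OPEN_PALM = auto()   # bring up the system menu


# A baseline mapping every application could be expected to honor, so that
# users carry the same expectations from one application to the next.
DEFAULT_ACTIONS = {
    Gesture.PINCH: "select",
    Gesture.SWIPE_LEFT: "dismiss",
    Gesture.OPEN_PALM: "open_menu",
}


def handle(gesture: Gesture, overrides: Optional[dict] = None) -> str:
    """Resolve a gesture to an action, letting an app override non-core behavior."""
    actions = {**DEFAULT_ACTIONS, **(overrides or {})}
    return actions.get(gesture, "ignore")


if __name__ == "__main__":
    print(handle(Gesture.PINCH))                                     # select
    print(handle(Gesture.SWIPE_LEFT, {Gesture.SWIPE_LEFT: "undo"}))  # undo
```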

Accessibility and usability will make the difference between successful applications that provide useful services to users and those that will be discarded because they are inconvenient to use.

We are only at the beginning.


Massimo Canducci

Leads innovation activities at Engineering Group
Global Faculty at Singularity University
Official Member of the Forbes Technology Council
International speaker on innovation and the future.