
Human interface modeling for communication trends – Part Three (Model-driven applications: Motoric, Touch, Literacy, Content View and Sum)


Intro

This is the third and last part of the article, covering applications of the last three human communication channels in the model – Motoric, Touch and Literacy. To understand the model you should first go through part one, and for insights into the Audio and Visual human communication channels there is part two. It raises plenty of important questions for entrepreneurs, interaction designers, system architects and anyone else who cares about people as part of their technology.

That’s all – I started this model as something small for myself and it took me pretty far. I found it so exciting that I could extend it tenfold, but for now it is enough for a blog and for a dilettante’s dive. It needs some energy, grown from opportunity.

There are also a couple of words about an orthogonal view of communication content and a short summary at the end. Many thanks.

Motoric


As I have mentioned, the ease of Will content generation in the Motoric channel is leveraged by control applications, despite the channel's low informativity (for example in touchscreens and Kinect). The Emotion component of Motoric content is a strong development vector under perceptual computing, which many companies have recently been driving heavily.

How can we improve the low informativity of the channel?

Looking at today’s applications, we could enrich Motoric data by adding channels with high informativity – for example audio or visual feedback. If the content is Emotion-based, then Visual channel feedback is a perfect match, while for Knowledge communication the Audio channel would be a great fit. This is already used today in “clicking” keyboards and in the video feedback of Kinect games. Complementary voice information could work well with the Motoric channel and improve the informativity of the output content.
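
To make this pairing idea concrete, here is a minimal Python sketch (all names invented for illustration) of routing a low-informativity Motoric event to a high-informativity feedback channel chosen by content type, roughly following the matches described above.

```python
# Minimal sketch: choose a high-informativity feedback channel for a
# low-informativity Motoric input, based on the type of content carried.
# Emotion -> Visual and Knowledge -> Audio follow the text above;
# Will -> audio "click" follows the "clicking" keyboard example.

from dataclasses import dataclass
from enum import Enum, auto


class Content(Enum):
    WILL = auto()        # commands and control
    EMOTION = auto()     # expressive gestures
    KNOWLEDGE = auto()   # information-bearing input


FEEDBACK = {
    Content.WILL: "audio click",        # e.g. "clicking" keyboards
    Content.EMOTION: "visual render",   # e.g. Kinect-style video feedback
    Content.KNOWLEDGE: "spoken audio",  # complementary voice information
}


@dataclass
class MotoricEvent:
    gesture: str
    content: Content


def feedback_for(event: MotoricEvent) -> str:
    """Return the feedback channel that should acknowledge this gesture."""
    return FEEDBACK[event.content]


if __name__ == "__main__":
    print(feedback_for(MotoricEvent("tap", Content.WILL)))           # audio click
    print(feedback_for(MotoricEvent("open palms", Content.EMOTION)))  # visual render
```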

Informativity can also be improved through other strong characteristics of the channel, for example through “ease of information absorption” via touch stimulus (e.g. electric/vibration feedback on touchscreens) or through the information generation property (e.g. by enabling more complex control motions). Deaf people leverage the ease of Motoric information generation to overcome the lack of an audio channel.

Can we widen the Motoric interface to improve the bandwidth of the channel? Letting communication technologies use additional parts of our body (whole arms, all parts of the face, shoulders, neck, feet…) might make it possible.


Can we improve the technological support for the Motoric channel? Can we communicate its Will and Emotion content better? This is the only Will and Emotion content that is easy for us to generate. Today it is used for control, for example through gesture recognition, typing, mouse or touch screens. But control is just a small part of Will. Our intentions are usually very complex, and we can efficiently use the language of our body to convey complicated messages.

Can we use the Emotion generation of our face and hands to improve informativity? It is so easy for us to express ourselves with our hands and face. Can we extract the Emotion component from our Motoric content (along with Audio) and present it back to us? This is much more important than “reading” an opponent’s emotions. It could be critical emotional feedback that exposes a whole new world. Just as we look into a mirror to get visual feedback on ourselves before others see us, this capability would let us “see our behavioral image”, make us want to be better, to develop it, to control it effectively, and would create a new market for “internal beauty”.
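
As a thought experiment in code, here is a rough sketch of such a “behavioral mirror” loop. It uses OpenCV only for webcam capture and display; classify_emotion() is a hypothetical stub standing in for whatever face/hand emotion model would actually do the work.

```python
# Sketch of a "behavioral mirror": capture the user's face, estimate the
# Emotion component, and present it back as feedback.
# classify_emotion() is a placeholder, not a real model.

import cv2  # pip install opencv-python


def classify_emotion(frame) -> str:
    """Placeholder: a real implementation would run a face-emotion model
    on the frame and return a label such as 'neutral', 'happy', 'tense'."""
    return "neutral"


def behavioral_mirror():
    cap = cv2.VideoCapture(0)  # default webcam
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            label = classify_emotion(frame)
            # Show the user their own "behavioral image" on top of the video.
            cv2.putText(frame, f"you look: {label}", (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
            cv2.imshow("behavioral mirror", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()


if __name__ == "__main__":
    behavioral_mirror()
```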

Can we use this high-bandwidth channel for the missing Knowledge generation? The learning curve of a motoric interface is fast and native. What kind of language could we teach to leverage it? Could we take a simplified sign language of the deaf as a baseline, add more complex structures as needed, teach it to our customers and use it as input to our products? Will we study a language of signs at school in the future?

Can we share our Motoric content? Would it be Will sharing (e.g. a database of control movements), or Emotion sharing, so that when we reach a certain emotional condition we could post it?

Could we search for the content? Would it be content matching my current emotional condition? Would it be people who share my mood? Would it be a knowledge-search correction based on Motoric control or on emotion? Or would it be a search for Motoric Will or Emotion content itself?

Can we localize Motoric content generation and avoid its exposure (simply put, so we don’t see people waving their hands everywhere)? Can it be done by reading neural signals (which might improve content generation bandwidth even further)?

An important constraint of the channel is that it is physiologically limited (people get tired, uncomfortable, or even suffer muscle pain). This factor might be critical for various applications and has to be taken seriously. Reading neural signals could reduce this type of constraint.

The Motoric channel could take emotional content communication to a new level. Will we see, in the future, lovers talking to each other with their hands just because it is a much better way to show their feelings and desires? Will we see a popularization of emotional expression through this channel?

Today we are just starting to understand this channel; using it natively, we are similar to great apes that could express themselves through sounds even before they knew how to speak. Who knows when each of us will be able to “write” his motoric songs? Who knows when we will start dreaming in Sign? *

* Oliver Sacks, “Seeing Voices: A Journey into the World of the Deaf”

Touch


The best low-level Emotion content channel, Touch is so close to us that it is considered intimate and is natively maintained mainly by very close people. Even as other channels evolved, the Touch channel, rejected by religion, remained almost as untouched as it was thousands of years ago. One way or another, today it is an evolutionarily and technologically undeveloped channel. The huge potential of Emotion communication through this channel could make our connections tighter, the information we receive more comprehensive, our understanding deeper and our involvement stronger.

How can we use its great information absorption and generation abilities to improve the informativity of the channel? What technologies could we have?

Will it be a sensor and electrical nerve-stimulation layer or cloth that transfers the information? Will it be central nervous system stimulation?

Since we have not even started the evolution of this channel, it is very hard to say what it will look like technologically, how it will change us, what ethical norms it will establish as an evolutionary substrate, and how it will change our world.

Literacy


One of the most developed communication channels, Literacy still has room for improvement. Two major weaknesses of the channel – the difficulty of information generation and of absorption – are being addressed by many companies today and will probably soon be eliminated.

The hardness of Literacy information generation can be supported by another channel with easier generation and high informativity (e.g. Audio – Siri).

In addition, it can be compensated by leveraging other strong characteristics of Literacy, such as its informativity, sharing and search factors – e.g. today’s text messaging engines offer high-probability content for manual choosing (Motoric control). Can we extend this and build a universal AI engine that analyses our environment and offers the most probable content to choose from?
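
A toy version of the existing “high-probability content for manual choosing” part is easy to sketch; the “universal” engine would add context such as location, recipient or time of day. Everything below is illustrative, not an existing product.

```python
# Minimal sketch: a bigram model over past messages suggests likely next
# words, which a UI can then offer for one-tap (Motoric) selection.
# Here the only "context" is the previous word.

from collections import Counter, defaultdict


class Suggester:
    def __init__(self):
        self.bigrams = defaultdict(Counter)

    def learn(self, message: str) -> None:
        words = message.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.bigrams[prev][nxt] += 1

    def suggest(self, prev_word: str, k: int = 3) -> list[str]:
        """Return up to k most probable continuations for manual choosing."""
        return [w for w, _ in self.bigrams[prev_word.lower()].most_common(k)]


if __name__ == "__main__":
    s = Suggester()
    for msg in ["see you at home", "see you soon", "on my way home"]:
        s.learn(msg)
    print(s.suggest("you"))  # e.g. ['at', 'soon']
```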

The hardness of Literacy information absorption can be compensated by leveraging Literacy’s informativity, the channel’s high information sharing factor and its ease of search – e.g. Google. In addition, it can be compensated by a channel with better information absorption and high informativity – Visual. A parallel visual interface improves absorption as well.

As Literacy-based technologies are highly developed, it would be important to invest in transformation buffers that convert content from all channels into written form for further processing, and back. Fast content-processing technologies from/to Audio (e.g. speech recognition/voice generation, voice emotion recognition/rendering), from/to Motoric (e.g. gesture recognition/motion signal generation, facial emotion recognition/avatar emotion rendering), from/to Visual (pattern/item recognition/virtual reality rendering, situation analysis/augmented reality) – all these fast conversion technologies would bring all information into a single comprehensive Literacy domain, establishing the most efficient human communication engine.
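
Structurally, such transformation buffers could look like the sketch below: each channel gets a converter into the common Literacy (text) domain, and a hub merges the results for further processing. The converters here are empty stubs; only the shape of the pipeline is the point.

```python
# Sketch of the "transformation buffer" idea: every channel's content is
# converted into a single Literacy (text) representation for processing.
# The concrete recognizers are hypothetical stand-ins.

from typing import Protocol


class ToText(Protocol):
    def to_text(self, raw: bytes) -> str: ...


class SpeechToText:
    def to_text(self, raw: bytes) -> str:
        # stand-in for a speech recognition engine
        return "<transcribed speech>"


class GestureToText:
    def to_text(self, raw: bytes) -> str:
        # stand-in for gesture/sign recognition
        return "<recognized gesture>"


def literacy_hub(inputs: dict[str, tuple[ToText, bytes]]) -> str:
    """Convert every incoming channel to text and merge into one buffer."""
    return "\n".join(f"[{name}] {conv.to_text(raw)}"
                     for name, (conv, raw) in inputs.items())


if __name__ == "__main__":
    merged = literacy_hub({
        "audio": (SpeechToText(), b"..."),
        "motoric": (GestureToText(), b"..."),
    })
    print(merged)
```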

  

Content view

After all this, we can try to estimate the properties of a future system based on the Human interface model. It might look like this:

Mixed Motoric and Audio human outputs have to pass a content split so that each type of communicated content is treated separately. Today this is partially done for Knowledge, less so for Will content, and the output is mainly used directly as Literacy content. Content then has to pass a transformation (to Literacy content or to something new) and further processing. Input channels have to match content types and leverage channel characteristics. Most of the connections and interfaces are missing today. Will content processing is starting to evolve, while for emotional content we have yet to begin our way.
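
Read as software, this suggests a pipeline roughly like the following sketch: split the mixed output into Knowledge, Will and Emotion streams, transform each, and route it to an output channel matched to its type. The splitter and the routing table are my illustrative reading, not a specification from the article.

```python
# Sketch of the content-split stage: a mixed human output (e.g. voice) is
# separated into Knowledge, Will and Emotion streams and routed to output
# channels matched to each content type. The splitter is a stub.

from dataclasses import dataclass


@dataclass
class SplitContent:
    knowledge: str  # what is said
    will: str       # commands / intent
    emotion: str    # how it is said


def split(mixed_utterance: str) -> SplitContent:
    """Stand-in for real content-split models (ASR + intent + affect)."""
    return SplitContent(knowledge=mixed_utterance,
                        will="<detected intent>",
                        emotion="<detected affect>")


# Routing is one possible reading of the diagram, not a spec.
ROUTES = {"knowledge": "literacy", "will": "motoric/ui", "emotion": "visual"}


def process(mixed_utterance: str) -> dict[str, str]:
    c = split(mixed_utterance)
    return {ROUTES["knowledge"]: c.knowledge,
            ROUTES["will"]: c.will,
            ROUTES["emotion"]: c.emotion}


if __name__ == "__main__":
    print(process("turn the lights off, please, I'm exhausted"))
```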

Emotion communication

Emotion communication, poorly developed today, might evolve from the Audio, Visual and Touch channels.

The Audio channel’s benefit over all others is that it has a strong emotion component and high informativity. Can we leverage this component? Can we extract it and use it to improve communication? For example, we could understand the intentions or emotional condition of our counterpart, or, conversely, when we speak we could use real-time analysis of this content as feedback to adjust our voice and improve the effectiveness of the message. We could extract the emotional component out of music to visualize it and intensify its impact.
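
For the “adjust our voice in real time” idea, a first crude step could be plain prosody features rather than a full emotion model. The sketch below computes loudness and a pitch proxy per audio frame with NumPy and turns them into feedback; the thresholds are invented.

```python
# Sketch of real-time vocal feedback: compute crude prosody features per
# audio frame and report when the voice sounds strained. Real affect
# recognition needs a trained model; this only shows the loop shape.
# Assumes mono float samples in [-1, 1].

import numpy as np


def prosody_features(frame: np.ndarray) -> dict:
    rms = float(np.sqrt(np.mean(frame ** 2)))                      # loudness
    crossings = int(np.sum(np.abs(np.diff(np.sign(frame)))) // 2)  # pitch proxy
    return {"rms": rms, "zcr": crossings / len(frame)}


def feedback(frame: np.ndarray) -> str:
    f = prosody_features(frame)
    if f["rms"] > 0.3 and f["zcr"] > 0.2:   # thresholds are invented
        return "you sound tense - slow down and lower your voice"
    return "voice sounds calm"


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    calm = 0.05 * np.sin(np.linspace(0, 100, 16000))
    tense = 0.6 * rng.standard_normal(16000)
    print(feedback(calm))   # voice sounds calm
    print(feedback(tense))  # you sound tense ...
```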

Can we connect the high-informativity Audio channel to the Touch channel, so that we could better feel complex emotional audio content?

In a world where all people are very close (thanks to Literacy, Audio and Visual communication technologies), yet connected mainly in the Knowledge domain, Emotion communication driven by evolved Audio, Visual, Motoric and Touch channels could make the last, very native and significant step towards the transfer of Emotion content on a worldwide scale.

Development of a brain interface might be a breakthrough, since it would bypass the physiological limitations of current information channels – especially for Emotion content.

Think what could happen once we make evolutionary steps similar to those we made for Knowledge content. What would our world look like? Could we communicate our love to millions of people? Could we transfer our fear to a person on the opposite side of the world? Could we leave our touch in a message box for our spouse? Could we save our orgasm as a memory? Could we collect the smells of our children from the moment they were newborns? Could we send our pain to a doctor so he could understand it and compare it to the pains of others? Could we feel the touch of a cold wave and the tropical sun while on our work break? Who knows… how soon will it come?

Will communication

Data density is very important for Will communication. We are used to its short content and often use it for commands and control. Improvements in this communication brought us, for example, multi-touch screens that leverage the high bandwidth of the Motoric channel, which is great for Will communication. This trend will continue with the development of technologies for the Motoric and Audio channels. In the past, the slow Literacy channel was used for Will communication, and even though it is a very slow channel (especially for Will communication), it made it possible to create and manage empires. The Voice channel improved this communication significantly and gave a strong push to the control capability of the industrial world.

Knowledge communication

Although Knowledge communication is the most advanced today, there is still much to do to make it more efficient.

Widening bandwidth – how can we improve the bandwidth for Knowledge content? Both the Audio and Literacy channels are serial and slow. In addition, the information absorption of these two is low. Can we improve the knowledge bandwidth by leveraging the high bandwidth and information absorption of the Visual channel? How can we make knowledge fully visual? Can we make a visual representation controlled by a Motoric interface? Can we get information through the Touch channel and leverage its high bandwidth (for example, providing an electric skin-matrix of neural stimulation to carry additional information, e.g. visual information for the blind)? Can we actually overcome the low informativity of the Motoric channel and create knowledge through it, leveraging its agile and native content generation?
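
The “visual for the blind through a skin matrix” idea can be sketched as a simple mapping: downsample a camera frame into a small grid of stimulation intensities, one per actuator. The actuator hardware and its driver are imagined here; only the image-to-matrix mapping is shown.

```python
# Sketch: average-pool a grayscale image into an 8x8 grid of drive levels
# (0..1) for a hypothetical tactile/vibration actuator matrix.

import numpy as np


def to_tactile_matrix(image: np.ndarray, rows: int = 8, cols: int = 8) -> np.ndarray:
    """Average-pool a 2-D grayscale image down to rows x cols intensities."""
    h, w = image.shape
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = image[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            out[r, c] = block.mean()
    return out / max(out.max(), 1e-9)   # normalize to 0..1 drive levels


if __name__ == "__main__":
    # Fake "camera frame": a bright diagonal bar on a dark background.
    img = np.zeros((64, 64))
    np.fill_diagonal(img, 1.0)
    levels = to_tactile_matrix(img)
    print(np.round(levels, 2))   # 8x8 grid of actuator intensities
```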

Improving accessibility – this is in fact the main business of the biggest companies worldwide (e.g. Google). Today people use the Literacy channel (or, more recently, Audio) to get to the knowledge they need. Both channels have great informativity and technological support, and that is why they are mainly used for this purpose. Can we improve information accessibility by improving the sharing and search of Audio content, leveraging the good technological support for this channel?

Sum

The model should help expose a structured approach to understanding the evolution of human communication technologies, describe existing ones and give an additional instrument for developing new products. Next time you think of a new product with a communication technology component, ask yourself what type of benefit it brings. Which communication channel does it use? Is it the optimal one for this type of content? Should you add an additional channel to improve the required characteristics? Should you change the interaction design of the product to leverage the channel’s strengths? If you are developing a product that is intended to improve communication, ask yourself which property you are improving. Is it something that really needs to be improved for this particular channel? If you are looking for new ideas, take a look at the model and ask yourself which channel or property you could improve and what capabilities you have in hand.

Looking further, what would the post-communication era look like, after we get close to ideal communication between all people? Only then would we finally stop thinking about how we communicate and start thinking more about what we communicate. Only then, when we are optimally close, would we realize that the reason we thought we were communicating Knowledge, Emotions and Will is orthogonal to the real one. Only then, when we know and feel in the deepest way all that we have to, would we understand that communication is not a support for other needs, but a fundamental entity in itself. Only then would we understand that the connections between us, like those between cells in the brain, are the essence of our human being. With this new answer our children’s children will start working on the definition of communication content… from the beginning.


Author: Andrey Gabdulin

www.gabdulin.com Product Development
