As I mentioned in the Human Interface channel analysis articles, the parallel information absorption of the video channel is a powerful tool. We get instant exposure to a large amount of information, can briefly switch focus between contents, and enjoy the sheer volume of it. The audio channel, in contrast, is serial by nature, but much more informative. To examine several of these properties and test a concept of video news meshing, I created a clip some time ago out of 5 different pieces of content that were popular at the time. First try watching it without audio: you will find that you can easily follow many types of content, but a critical part is missing. It is even annoying to see so much content without getting its “knowledge” component; you can mainly “feel” what is happening and come away with only a general impression. Now turn the audio on and run the clip once more. It is much more comfortable: you realize there are video details you might have missed, you know what is going on, and you have a better feeling about the video even when there is only music in the background.
Now… what important part is still missing? Synchronization between the parallel video stream and the sequence of audio information pieces. Our brain can coordinate two different information streams and “find” the matching parts, but it takes an effort, so it is better to assist it, for example through a simple interaction design technique: selection. If we highlight the video section corresponding to the current audio part, the clip will feel much more natural.
As for the concept, I think this amount of video information requires some kind of integration. Whether it will be this way or another – we’ll see )))
Typing on tablets and smartphones is inconvenient. Not only because of the missing tactile feedback, but also because the keyboard takes up such precious screen real estate, because of the tiny letters on small form factor devices, because of the required visual feedback, and because it is hard to blind-type. Long story short, I tried to come up with something that would solve those problems.
I took several characteristics to work on:
Form factor adjustments
Error avoidance and correction
Letter appearance frequency (English)
…and came up with a “Distributed Typing” concept
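To make the frequency point concrete, here is a minimal sketch (my own illustration, not part of the original concept) of how English letter frequency could drive the layout: the most frequent letters are grouped into the innermost ring around the touch point, so common letters need the shortest slide. The frequency figures are rounded from commonly published English letter counts and are illustrative only.

```python
# Approximate relative frequencies of English letters (per ~1000 letters);
# rounded from commonly published counts -- treat as illustrative.
FREQ = {
    "e": 127, "t": 91, "a": 82, "o": 75, "i": 70, "n": 67, "s": 63,
    "h": 61, "r": 60, "d": 43, "l": 40, "c": 28, "u": 28, "m": 24,
    "w": 24, "f": 22, "g": 20, "y": 20, "p": 19, "b": 15, "v": 10,
    "k": 8, "j": 2, "x": 2, "q": 1, "z": 1,
}

def rings(freq=FREQ, ring_size=8):
    """Group letters into concentric rings around the touch point:
    the most frequent letters land in the innermost ring."""
    ordered = sorted(freq, key=freq.get, reverse=True)
    return [ordered[i:i + ring_size] for i in range(0, len(ordered), ring_size)]
```

With this grouping, “e”, “t”, “a”, “o”, “i”, “n”, “s”, “h” sit closest to the finger, while rare letters like “q” and “z” end up in the outermost ring.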
Standard on-screen keyboards take half the screen, have distant, static keys, and do not leverage the fact that “there is no spoon” any more (no physical keyboard)…
The Distributed Keyboard is enabled when you start typing, but apart from the keyboard rising, nothing happens until you put a finger (or several fingers) anywhere on the screen. Once you do, a set of letters appears around the spot, and all that is required to type one is to “slide” in its direction (not necessarily far).
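To sketch how the slide gesture could resolve into a letter, here is a hypothetical geometry I am assuming for illustration (eight letters laid out evenly around the finger, the first at 12 o’clock): the direction of the slide is simply quantized into sectors.

```python
import math

# Hypothetical ring of letters shown around the touch point; in the real
# concept the set would presumably depend on context and letter frequency.
RING = ["e", "t", "a", "o", "i", "n", "s", "h"]

def pick_letter(touch, release, ring=RING):
    """Map the slide direction (touch -> release) to one of the letters
    laid out evenly around the touch point, letter 0 at 12 o'clock,
    proceeding clockwise."""
    dx = release[0] - touch[0]
    dy = touch[1] - release[1]                   # screen y grows downward
    angle = math.atan2(dx, dy) % (2 * math.pi)   # 0 = straight up
    sector = 2 * math.pi / len(ring)
    index = int((angle + sector / 2) // sector) % len(ring)
    return ring[index]
```

For example, sliding straight up from the touch point selects “e”, and sliding right selects “a”. Note that only the direction matters, not the distance, which is what makes short slides (“not necessarily far”) sufficient.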
Well… H.N.Y., fellows! What will this year be? I am optimistic, even though nothing has conceptually changed in the economic, ethical, technological, health, geopolitical and many other conditions of the world. At least in the part of the world I am living in… At least in my private world… I have a strong feeling that something is going to change though – we have rested for too long for the exponential world we are living in. Speaking of optimism and changes, here is an interesting question – is there going to be a new crisis? Let’s see…
In 2013 we got back to the same level of crisis exposure as before the 2008 financial crisis, and it is trending down:
What is interesting is that there is a periodic response to crises – you can see it even by eye. For those who love math: the Fourier transform of the “Crisis” time series clearly shows a peak (skip the chart if you do not care):
The Crisis Cycle period has been ~175 days (actually 165–185) throughout the last decade.
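For readers who want to reproduce the idea, here is a minimal sketch of pulling a dominant cycle out of a daily time series with an FFT. The series below is synthetic (a 175-day sine plus noise over a decade of daily samples), not the actual “Crisis” data, which I do not include here.

```python
import numpy as np

def dominant_period(series):
    """Return the period (in samples) of the strongest non-DC frequency."""
    centered = series - series.mean()         # drop the DC component
    spectrum = np.abs(np.fft.rfft(centered))
    freqs = np.fft.rfftfreq(len(centered), d=1.0)
    peak = spectrum[1:].argmax() + 1          # skip the zero frequency
    return 1.0 / freqs[peak]

# Synthetic stand-in for the "Crisis" series: a 175-day cycle plus noise.
rng = np.random.default_rng(0)
days = np.arange(3650)                        # a decade of daily samples
series = np.sin(2 * np.pi * days / 175) + 0.3 * rng.standard_normal(len(days))
```

On this synthetic series, `dominant_period(series)` lands within the 165–185 day band, matching the kind of peak described above; on real data the peak would of course be broader and noisier.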