a device that generates concepts


Touch 5 control

The Microsoft Surface Studio had two main innovations, at least from an external, visual perspective. One is the Hinge, which put the focus on a mechanical part that, from the standpoint of TRIZ theory and its laws of system evolution, was “screaming” to be the next part to evolve. I have pointed to this system component several times as the frontier of the next changes. The law is that every system evolves heterogeneously, so you can look at old legacy components to find the next breakthrough.

Another component that was added is the Surface Dial. It is a nice idea for increasing control ability and improving the interaction design. While what MSFT did is good from a margin perspective (an addition to the BOM/sales), from a system-evolution perspective they have added another component to the super-system. It provides new functionality, but adds complexity. Thinking in TRIZ terms, you can predict that the next phase will be “trimming” by “changing dimension”. For example, you could remove this component entirely by detecting the touch of five fingers together to invoke the same functionality. Something like this:


To make the interaction design of the feature more convenient, I would add a follow-up action once “Touch 5” is triggered. It could be lifting all but a single finger to rotate the wheel, or finger taps to choose the next action.
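As a rough illustration, here is a minimal sketch of how such a “Touch 5” recognizer could work. It assumes the platform reports simultaneous touch points as (x, y) pairs; the function names and the geometry (rotating a remaining finger around the gesture's centroid to emulate the Dial) are my own hypothetical choices, not anything MSFT has implemented.

```python
import math

def touch5_triggered(points):
    """Return True when exactly five simultaneous touch points are reported."""
    return len(points) == 5

def centroid(points):
    """Geometric center of the touch points; used as the virtual dial's axis."""
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def rotation_delta(prev_point, cur_point, center):
    """Angle (radians) one remaining finger swept around the gesture center,
    emulating the Dial's rotation once 'Touch 5' has been triggered."""
    a0 = math.atan2(prev_point[1] - center[1], prev_point[0] - center[0])
    a1 = math.atan2(cur_point[1] - center[1], cur_point[0] - center[0])
    d = a1 - a0
    # Normalize to (-pi, pi] so small back-and-forth rotations stay continuous.
    while d > math.pi:
        d -= 2 * math.pi
    while d <= -math.pi:
        d += 2 * math.pi
    return d

# Five fingers down triggers the virtual dial...
five = [(0, 0), (2, 0), (1, 2), (0, 2), (2, 2)]
assert touch5_triggered(five)
# ...then a single finger left moves around the centroid to rotate it.
c = centroid(five)
print(round(math.degrees(rotation_delta((2, 1), (1, 2), c)), 1))  # → 101.3
```

A real recognizer would also need debouncing and a palm-rejection threshold, but the core is just counting contacts and tracking the angle of the surviving one.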

Makes sense?


Video Content Mesh and audio channel (read first)


As I mentioned in the Human Interface channel analysis articles, the parallel information-absorption property of the video channel is a powerful tool. We get instant exposure to a large amount of information, can briefly switch focus between contents, and enjoy the sheer volume. The audio channel, by contrast, is serial in nature, but much more informative. To examine several of these properties and test the concept of video news meshing, I created a clip some time ago out of 5 different pieces of content that were popular at the time. First watch it without audio: you will feel that you can easily follow many types of content, but that a critical part is missing. It is even annoying to see so much content without getting its “knowledge” component; you can mainly “feel” what is happening and come away with only a general impression. Now add the audio channel and run the clip once more. It is much more comfortable. You realize that there is a video component you might have missed, that you now know what is going on, and that you have a better feeling about the video even where there was only music in the background.
Now… what important part is still missing? Synchronization between the parallel video stream and the sequence of audio information pieces. Our brain can coordinate two different information streams and “find” the matching parts, but it takes effort, so it is better to assist it. For example, through a simple interaction-design technique: selection. We can highlight the video section corresponding to the current audio part, and it will make the clip feel much more natural.
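The highlighting logic itself is trivial once the audio narration is segmented. Here is a minimal sketch, assuming a hypothetical timeline where each audio segment carries timestamps and the name of the video tile it narrates; the segment data and tile names are invented for illustration.

```python
# Hypothetical timeline: each audio segment (start_s, end_s) narrates one tile
# of the 5-tile video mesh; the matching tile gets the "selection" highlight.
AUDIO_SEGMENTS = [
    (0.0, 6.5, "tile-1"),
    (6.5, 14.0, "tile-3"),
    (14.0, 21.0, "tile-2"),
    (21.0, 30.0, "tile-5"),
    (30.0, 38.0, "tile-4"),
]

def highlighted_tile(t):
    """Return the tile to highlight at playback time t (seconds), or None
    when the audio is not narrating any particular tile."""
    for start, end, tile in AUDIO_SEGMENTS:
        if start <= t < end:
            return tile
    return None

print(highlighted_tile(10.0))  # → tile-3
```

The player would call this once per frame and dim or outline the other tiles, so the viewer's eyes land on the tile the audio is currently “about” without any conscious search.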
As for the concept, I think this amount of video information requires some type of integration. Whether it happens this way or another, we’ll see )))