Concepton

a device that generates concepts



Touch 5 control

The Microsoft Surface Studio is a nice implementation with two main innovations, at least from an external, visual perspective. One is the Hinge, which put the focus on a mechanical part that, according to TRIZ theory and its laws of system evolution, was “screaming” to be the next part to evolve. I have pointed several times to this system component as the frontier of the next changes. The law states that every system evolves heterogeneously, so you can look at old legacy components to find the next breakthrough.

Another component that has been added is the Surface Dial. It is a nice idea to increase controllability and improve the interaction design. While what MSFT did is good from a margin aspect (an addition to the BOM/sales), from a system-evolution perspective they have added another component to the super-system. It brings new functionality, but also complexity. Thinking in TRIZ terms, you could say that the next phase would be “trimming” by “changing dimension”. For example, you could remove this component by detecting the touch of five fingers together to invoke the same functionality. Something like this:

touch-5

To make the interaction design of the feature more convenient, I would add a next-action functionality once “Touch 5” is triggered. It could be a single finger left on the screen to rotate the wheel, or finger taps to choose the next action.
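As a toy illustration of this trimming idea, here is a minimal sketch. All names and thresholds are my own assumptions, not a real touch API: a frame of touch contacts switches into a “dial” mode when five fingers are down, and the remaining finger’s position is then read as the wheel angle.

```python
import math

# Hypothetical sketch: five simultaneous contacts replace the physical
# Dial; afterwards one finger's position around the trigger center is
# read as a wheel rotation. Contacts are assumed to be (id, x, y) tuples.
DIAL_TRIGGER_CONTACTS = 5

def classify_touch_frame(contacts):
    """Return an interaction mode for one frame of touch contacts."""
    if len(contacts) >= DIAL_TRIGGER_CONTACTS:
        return "dial"      # five fingers together: invoke the dial menu
    if len(contacts) == 1:
        return "pointer"
    return "gesture"

def dial_angle(center, finger):
    """Angle (degrees) of the remaining finger relative to the trigger point."""
    (cx, cy), (fx, fy) = center, finger
    return math.degrees(math.atan2(fy - cy, fx - cx))
```

A finger tap while in “dial” mode could then be mapped to “choose next action”, exactly as described above.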

Makes sense?



YouTube Identity sharing

youtube_identity

From time to time I realize that I am logged out of my Google account, and then interesting things happen – I get a totally different response from all Google services. This is especially visible when I go to YouTube, where I discover a whole new world of content.

A similar thing obviously happens when my wife is logged in. Then the content is very predictable, as I know her, and I realize how predictable my own content is for me. This over-fitting drives me, from time to time, to go out and spend time trying to dig up some new type of content, but over and over again I understand how hard it is to expand the boundaries of a personality… Eventually I come back to similar content, which is part of my “YouTube identity”. And no, Google is not doing a good enough job of expanding the boundaries of my identity, even though they know exactly what it looks like (maybe because they do not want to).

This “YouTube identity” is the YouTube dimension of myself and a simplistic representation of the world as I see it, while to understand someone better, I would want to look at the world through his or her eyes (at least the virtual ones).

So I thought – can we do something to share this insight into our “YouTube identities”, as if someone could log in to my Google account without affecting it? That would be totally different from sharing a particular video – it would be sharing something I am, not just something I want to share.

Something that is less visible, but much more impactful on a global scale, is Google searches and social pages. The content-fitting algorithms cause clustering of similar people and grow informational walls around us. That feedback loop drives us away from intrinsic diversity and global stability toward a world of narrow-mindedness and global extremism. That is another topic for discussion, and there is a lot of data out there that supports it…

Can we reach a better understanding of each other by sharing our virtual identities and swapping to other points of view? I think it would make our development more efficient and, at the same time, the world more stable.



Artificial Conductor

Why is there a conductor for an orchestra? Actually, there are many reasons, even though musicians can play without a conductor (and sometimes they actually do).

The question is – can we make a better conductor by building an artificial system that answers all the needs, and maybe does things that could improve the performance of the orchestra?

Let’s take several primary reasons and address each one, replacing its functionality with a technical capability.

  1. Synchronization
    In a large orchestra, the time the sound takes to travel is long, and the ear-based feedback between musicians is insufficient (even when heard), causing a lag.
  2. Start/Stop
    Not just for the beginning, but also for the resting of musicians. Especially for brass, woodwind and percussion players, there can be considerable stretches of time when they are not required.
  3. Balancing the volume of the instruments so nothing is drowned out
    It is very hard to balance the volume in a large orchestra, so the conductor helps to stabilize it.
  4. Phrasing, tempo, bowings and general style
    These are the elements that the conductor dictates to the members of the ensemble.
  5. And… the Show.

To do this, we can provide a pre-programmed system with distributed audio sensors and both central and distributed visual interfaces.

orch2

Required features of the system:

  1. A conductor display providing centralized signals such as begin and stop, including a preparation indicator
  2. Local displays aligned with the instrument and position, which adjust to and present the sheet music (it. partitura) and hold each musician’s personal configuration
  3. Local displays providing begin signals and preparation cues
  4. Compensation of delays (TDR) and full alignment between all local stations
  5. Audio sensors and volume/misalignment feedback to the local stations
  6. Gathering of audio data for analysis, post-performance feedback and continuous improvement of the ensemble
  7. The show/experience component might be intensified by leveraging the real-time data and showing some powerful visuals for the people at the concert. In addition, the system could stream video from the local stations, so people could see the people behind the performance
  8. A visual tempo/phrasing/dynamics conductor display. Something like this (sketch): ac
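The delay-compensation feature can be sketched numerically. Sound travels at roughly 343 m/s, so stations far from a chosen reference point should receive their visual beat cue slightly earlier; the geometry and function names below are my own assumptions, a sketch rather than a design.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def cue_offset_ms(station_xy, reference_xy=(0.0, 0.0)):
    """Milliseconds by which a station's beat cue should lead the baseline
    so its sound arrives at the reference point in time with everyone else."""
    distance = math.hypot(station_xy[0] - reference_xy[0],
                          station_xy[1] - reference_xy[1])
    return 1000.0 * distance / SPEED_OF_SOUND

# a percussionist 17 m from the reference needs the cue ~50 ms early
print(round(cue_offset_ms((8.0, 15.0)), 1))  # → 49.6
```

In practice the reference would be the hall’s acoustic focus, and the offsets would be calibrated once per seating arrangement.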

Obviously there is a need for preparation, and this might be just an instrument in the hands of professionals, but this direction can scale up the use of conducting and maybe even improve it 😉



Skydiving and drones

Even though I stopped skydiving when my son was born, I still dream about it, and I keep thinking how good it would be to jump out of the plane (with a parachute) every time I fly somewhere.

I have ordered a Lily drone and thought it would be good to skydive with it. I did my search and found no evidence of such an experience.

I assume some adjustment would be needed in the drone’s algorithm to compensate for the unusual stabilization pattern: there is an updraft, the gyro “feels” the fallout, and the drone needs to keep tracking a falling object.

Theoretically, this type of motion control requires less energy, as it only needs to compensate for the difference in air resistance between the skydiver and the drone (which can vary based on the type of exercise) and for stabilization.

The time of flight can be very short, e.g. in head-down freeflying (~260 km/h / 160 mph) or style jumps, where the speed of the fall is more than 400 km/h / 250 mph. It can also be very long, e.g. in cross-country jumps.

It can be (mainly) vertical or (mainly) horizontal, like in wingsuit flying.

Voltige_ThomasJeannerot2013_Lily

Another critical complication is safety. Not that it is unimportant on the ground, but in the air it can easily become deadly, especially during parachute opening. To ensure safety, the drone has to keep a minimal horizontal distance from the skydiver at all times during the fall and, in case of a horizontal component of the fall (usually there is some), avoid being in the way. That leaves a limited area that the drone has to stay within. E.g., in the case of a vertical free fall, the drone can be within the green area and outside of the red one.

drone skydiving safety area
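The constraint above amounts to a simple annulus check on the drone’s horizontal offset; the radii below are illustrative assumptions, not validated safety figures.

```python
import math

R_MIN = 4.0   # m, assumed inner "red" radius (canopy-opening hazard zone)
R_MAX = 12.0  # m, assumed outer "green" boundary for tracking and framing

def position_ok(skydiver_xy, drone_xy, r_min=R_MIN, r_max=R_MAX):
    """True if the drone's horizontal offset stays inside the safe annulus."""
    return r_min <= math.dist(skydiver_xy, drone_xy) <= r_max
```

A horizontal component of the fall would additionally exclude the sector directly in the skydiver’s path, shrinking the allowed area further.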

So I assume that soon we shall see some amazing views… hard to believe, but even more spectacular than those.

Enjoy!



Accident warning system

Accident warning systems in cars have already become standard, but there is still a huge number of cars on the road that do not carry recent technologies, so we only partially control the situation. The rear feedback system is there to ensure that the distance to the next car behind you is kept within the safe range. Following the previous posts [#1 and #2], here is an update of the concept with a simpler version.

The system includes a LED panel [$10–$50] signaling to the following car when the distance is less than acceptable. It is installed by the rear window behind the driver, with the notification panel facing the following cars.

Bluetooth connectivity to a smartphone gives the ability to read the speed based on GPS (disabled otherwise) and to notify the driver (with a sound signal) in case of a range violation, improving awareness in case of braking (on top of looking into the mirrors, of course). The system includes a middle/long-distance sensor [$100 😦 need to search for low-cost sensors with a range up to 100m] and a basic controller, while most of the system logic is within the phone. The system can be charged from the car or get constant power when installed permanently.

image002
The signal can be color-based (can be configured from the phone app):

No signal – the range is more than two seconds

Yellow – one to two seconds distance

Red – less than one second distance

Blinking red – less than 0.5 second distance

The ranges are adjusted based on the time of day (+1s at night) and might also be based on the weather in the area (fog/rain).
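The whole signaling logic above can be sketched as one time-gap function. The thresholds and the night adjustment are taken from the description; the function itself is my own sketch, not a spec.

```python
def headway_color(distance_m, speed_mps, night=False):
    """Map the following car's time gap to the rear-panel signal color."""
    if speed_mps <= 0:
        return "off"                    # standing still: nothing to signal
    gap_s = distance_m / speed_mps      # time gap in seconds
    shift = 1.0 if night else 0.0       # +1 s at night
    if gap_s < 0.5 + shift:
        return "blinking red"
    if gap_s < 1.0 + shift:
        return "red"
    if gap_s < 2.0 + shift:
        return "yellow"
    return "off"                        # more than two seconds: no signal

print(headway_color(30.0, 20.0))        # 1.5 s gap → yellow
```

A weather adjustment (fog/rain) would be one more additive shift, configured from the phone app.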



NASDAQ Network – Part I

Yes, this is what I am doing at 1am, when market shifts remind me that there is a hidden beauty within the pseudo-chaotic signals that rule our lives. Then I go back to the work, which is trying to systematize the stock market into a single network and, through the static and dynamic properties of that network, to understand it better. What can we find there? Lots of cool stuff: hidden connections, clusters, internal and cross-sector connections. What can we possibly derive out of it? Investment considerations, robustness properties of the market, pathologies and more…

I am going to post findings in small chunks – the way I am actually working on this 🙂

So how shall we start? Certainly from the raw data. We need trading records of financial instruments for some period of time. I’ve got two sets of data, based on ranges –

  • 1 week of 3.5k NASDAQ company stocks and exchange-traded fund (ETF) indexes with 5-minute granularity
  • 10 years of daily-granularity data for NASDAQ stocks

The next step is to choose the set (or a subset, based on the required scope), gather the data and probably clean up/arrange the formats. Once the data is ready, we need to run a cross-correlation. This gives us an NxN matrix of correlation coefficients (R-squared) between each pair of stocks. From this point on we will call each company stock or ETF a “Node” and a connection between two companies an “Edge“. This is because, as I said, we are going to build a network, and those are the terms for its basic components.
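With NumPy, the cross-correlation step is essentially one call on a price matrix. The random data below just stands in for the real 5-minute records, and correlating changes rather than raw price levels is my own choice, not something stated in the data prep above.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy stand-in for the real data: 200 samples x 4 tickers of price levels
prices = rng.normal(0.0, 1.0, (200, 4)).cumsum(axis=0) + 100.0
returns = np.diff(prices, axis=0)          # correlate changes, not levels

corr = np.corrcoef(returns, rowvar=False)  # variables are in the columns
rsq = corr ** 2                            # N x N matrix of R-squared values
print(rsq.shape)                           # → (4, 4)
```

Each off-diagonal entry of `rsq` is a candidate Edge weight between two Nodes.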

Applying an Rsq threshold significantly reduces the number of stock pairs that count as correlated. By how much? Well, exponentially. This is important, since it reduces the load on our system and makes the analysis faster. In addition, it gives us the required focus of investigation:

Number of Edges (stock connections – Y axis) as a function of the applied Rsq threshold (X axis). The drop is exponential, so it is presented on a logarithmic scale.

I prefer to work with highly correlated signals, Rsq > 0.9, on the 5-minute-granularity data, and with a somewhat lower threshold for the annual-scale signals. This is enough data for a single person to dig into during the night. For example, the 0.9 Rsq cleanup on my data set gives 364 nodes (companies and funds) and 5778 edges (connections between them).
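The exponential edge drop is easy to reproduce on synthetic data driven by one common factor; the tickers don’t matter here and the counts are illustrative, not the real NASDAQ numbers.

```python
import numpy as np

rng = np.random.default_rng(1)
factor = rng.normal(size=(500, 1))                  # one common "market" driver
x = 0.9 * factor + 0.4 * rng.normal(size=(500, 50))
rsq = np.corrcoef(x, rowvar=False) ** 2
iu = np.triu_indices_from(rsq, k=1)                 # each pair counted once

edge_counts = {t: int(np.count_nonzero(rsq[iu] > t)) for t in (0.5, 0.7, 0.9)}
print(edge_counts)                                  # counts fall sharply with t
```

Raising the threshold prunes most of the 50·49/2 candidate pairs, which is exactly what keeps the later network analysis tractable.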

To start working easily, we need some visualization software. I like Gephi. We import the Edge table, with the Rsq values defined as “Weights”. So how does it look?
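Exporting the thresholded pairs for Gephi is a plain CSV with Source/Target/Weight columns, which Gephi’s spreadsheet importer recognizes directly. The tickers and Rsq values below are made up for illustration.

```python
import csv
import numpy as np

tickers = ["AAPL", "MSFT", "FOX", "FOXA"]
rsq = np.array([[1.0, 0.5, 0.3, 0.2],
                [0.5, 1.0, 0.4, 0.3],
                [0.3, 0.4, 1.0, 0.97],
                [0.2, 0.3, 0.97, 1.0]])

with open("edges.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Source", "Target", "Weight"])   # Gephi edge-table header
    for i in range(len(tickers)):
        for j in range(i + 1, len(tickers)):
            if rsq[i, j] > 0.9:                       # keep strong pairs only
                writer.writerow([tickers[i], tickers[j], float(rsq[i, j])])
```

Only the upper triangle is walked, so each undirected Edge is written once.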

Cross Correlation Network of 364 NASDAQ company stocks and funds on 16 Apr 2014 with correlation higher than 0.9, based on 5-minute granularity sampling. Colors are based on market sector; node size is based on capital value.

Beautiful, isn’t it?

Rsq > 0.95 gives a much more focused picture:

Cross Correlation Network of 116 NASDAQ company stocks and funds on 16 Apr 2014 with correlation higher than 0.95, based on 5-minute granularity sampling. Colors are based on market sector; node size is based on capital value.

Now we can inspect it by zooming in and filtering based on sectors. For example, Health Care and Pharma:

Cross Correlation Network of 26 NASDAQ company stocks in the Health Care and Pharma sector on 16 Apr 2014 with correlation higher than 0.95, based on 5-minute granularity sampling. Node size is based on capital value; edge width is based on Rsq value.

At this stage we can ask ourselves various questions. For example, what does a pair of highly correlated stocks look like, and why?

Sometimes it makes perfect sense, e.g. for FOXA (the Class A shares) and FOX (the Class B shares):

Fox

In other cases you might find that the intra-day correlation is not representative but occasional (or caused by a rare common event), so there is a need to switch to the annual scale.

If we build a portfolio of Pharma companies, should we take them all? If that does not really make sense, then which part? A representative from each cluster? Well, we can run a clustering algorithm, run a centrality algorithm, and probably choose based on those considerations.
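Picking one representative per cluster can be sketched even without a graph library: take the node with the highest degree (the most strong edges) inside each pre-labeled cluster. The tickers and cluster labels below are invented for illustration, and the cluster labels are assumed to come from an earlier run (e.g. Gephi’s modularity).

```python
from collections import defaultdict

# hypothetical edges surviving the Rsq threshold, plus cluster labels
edges = [("PFE", "MRK"), ("PFE", "JNJ"), ("PFE", "BMY"), ("MRK", "JNJ"),
         ("GILD", "AMGN")]
cluster = {"PFE": "A", "MRK": "A", "JNJ": "A", "BMY": "A",
           "GILD": "B", "AMGN": "B"}

degree = defaultdict(int)                  # degree centrality: edge count
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

best = {}                                  # cluster label -> representative
for node, label in cluster.items():
    if label not in best or degree[node] > degree[best[label]]:
        best[label] = node
print(best)                                # → {'A': 'PFE', 'B': 'GILD'}
```

A weighted variant would sum the Rsq values instead of counting edges, which is closer to the “Weights” already imported into Gephi.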

 

All those things and more in the next parts…