Back in early 2011 I was working on the concept of a touch pad for the blind.
Unfortunately, it went nowhere significant. Several meetings with entrepreneurs (back then I had no MBA myself) led to nothing.
I am not sure whether anything like this exists today (I hope it does), but I am still posting parts of that work, with small additions, since our world has changed for good and we now have commoditized smartphones and Kinect.
The idea: a mobile system giving blind people real-time vision abilities by translating the surrounding environment into a touch-based representation on a sensing pad.
The system contains 3 main modules:
- Environment Capture Module (camera(s), laser distance meter matrix)
- Signal Processing Unit (a mobile computing device, e.g. a notebook or smartphone)
- Representation Matrix: a touch-based display
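To make the three-module pipeline concrete, here is a minimal sketch of one frame flowing through it. Everything here is an illustrative assumption: the function names, the synthetic depth data standing in for a real sensor, and the 16x16 pad size.

```python
# Hypothetical sketch: one frame through the three modules.
# The 64x48 depth frame and 16x16 pad size are illustrative assumptions.

def capture_depth_frame(width=64, height=48):
    """Stand-in for the Environment Capture Module: returns a depth map
    in metres (a synthetic gradient here, instead of real sensor data)."""
    return [[(x + y) / (width + height) * 5.0 for x in range(width)]
            for y in range(height)]

def depth_to_pad(depth, pad_w=16, pad_h=16, max_depth=5.0):
    """Signal Processing Unit: downsample the depth map to the pad
    resolution and invert it, so nearer objects raise pins higher (0..1)."""
    src_h, src_w = len(depth), len(depth[0])
    pad = []
    for py in range(pad_h):
        row = []
        for px in range(pad_w):
            sy = py * src_h // pad_h          # nearest-neighbour downsample
            sx = px * src_w // pad_w
            d = min(depth[sy][sx], max_depth)
            row.append(1.0 - d / max_depth)   # near -> 1.0 (pin fully raised)
        pad.append(row)
    return pad

frame = capture_depth_frame()
pad = depth_to_pad(frame)   # 16x16 grid of pin heights in [0, 1] for the Matrix
```

The Representation Matrix would then drive each pin to its height in `pad`; the real Signal Processing Unit would of course do far more (filtering, range selection, object emphasis) than this nearest-neighbour downsample.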
Touch Sensing Display
A pad containing a matrix of mini-cylinders, each moving along its axis, driven by an electromagnetic source under a control circuit. See the technical details further below.
The Environment Capture Module (ECM)
- The Environment Capture Module should contain at least a depth detection module and may also contain an image capture module.
- There are several ways to implement depth detection, e.g. a laser transceiver matrix or dual-view phase detection.
- The module can be body-mounted or head-mounted.
The scope and depth characteristics can be adjusted so that the represented picture is derived from a different range:
- Far range: derived mostly from the camera; used for wide-view representation.
- Medium range: derived from both the camera and depth detection; used for orientation.
- Close range: based mainly on depth detection; used for hand feedback and recognition.
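The three bands above can be sketched as a simple source-selection rule. The band boundaries below are my own assumed values, not from the original design:

```python
# Hypothetical sketch of range-mode selection: which capture sources feed
# the pad representation at each distance band. Thresholds are assumptions.

CLOSE_M, MEDIUM_M = 0.6, 3.0   # assumed band boundaries in metres

def sources_for(distance_m):
    """Return the capture sources used for an object at this distance."""
    if distance_m <= CLOSE_M:
        return ("depth",)              # hand feedback and recognition
    if distance_m <= MEDIUM_M:
        return ("camera", "depth")     # orientation
    return ("camera",)                 # wide-view representation
```

In a real unit the bands would likely blend smoothly rather than switch at hard thresholds.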
There might also be a low-cost one-dimensional implementation, which is less informative but still helpful for close/medium-range edge detection.
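The one-dimensional variant could be as simple as flagging depth discontinuities along a single scan line. A minimal sketch, with the jump threshold as an assumed parameter:

```python
# Hypothetical sketch of the low-cost 1-D variant: a single row of depth
# readings, with edges flagged where neighbouring depths jump sharply.

def edges_1d(depth_row, jump_m=0.5):
    """Mark indices where depth changes by more than jump_m metres,
    i.e. likely object boundaries along the scan line."""
    return [i for i in range(1, len(depth_row))
            if abs(depth_row[i] - depth_row[i - 1]) > jump_m]

row = [2.0, 2.0, 0.8, 0.8, 0.8, 3.5]   # an obstacle spanning indices 2..4
print(edges_1d(row))                    # -> [2, 5]
```

Each detected edge could raise a single pin on a one-row pad, enough to feel where an obstacle begins and ends.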
Another ultra-small form factor solution might be: the matrix output faces the inner side of the palm, while depth sensing points outward from the external side of the matrix. Orientation is given by the arm's direction.
While the ECM and the Matrix have to be built from scratch and add to the cost structure of the product, the other parts can be (at least partially) covered by mobile devices (e.g. smartphones).
An ECM can be built today using depth cameras (e.g. Kinect-based). Body-mounted and head-mounted ECMs share the same constraints: stability, flexibility, comfort, weight, complexity.
The Matrix is still a challenge (at least I have not seen one anywhere at these scales and resolutions).
There can be motionless matrix generators that push rods within cylinders for physical orientation strength. Closely spaced generators can suffer electromagnetic cross-interference between neighbouring "pixels", so either the generation coils should be embedded in insulating material, or a "refresh wave" should build the picture incrementally.
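The "refresh wave" idea can be sketched as a scan that energises only one row of coils at a time, so neighbouring pixels are never driven simultaneously. The `drive_row` callback standing in for the coil driver hardware is my own assumed interface:

```python
# Hypothetical sketch of a "refresh wave": instead of energising all coils
# at once (risking EM cross-interference between neighbouring pixels), the
# picture is built incrementally, one row per step.

def refresh_wave(target, drive_row):
    """Update the pad row by row; drive_row(y, states) is the assumed
    hardware call that energises only that row's coils."""
    for y, row_states in enumerate(target):
        drive_row(y, row_states)   # only one row's coils active at a time

# Record the drive sequence instead of talking to real hardware:
driven = []
refresh_wave([[1, 0], [0, 1]], lambda y, s: driven.append((y, list(s))))
```

A real driver would also insert a settling delay between rows so each rod latches before its neighbours are pulsed.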
The Matrix can also use per-pixel signal generation if the rods are held in multiple stable positions (unlike a spring-based uni-stable architecture, which might save dynamic power, or bi-stable magnetic holders, which might save static power).
A bi-stable pixel produces a worse (saturated) picture, but is easier to implement. An example of a bi-stable matrix architecture:
Here the plastic rod (upper blue) moves up and down and is stable in two states: magnets (grey boxes) hold it at the top or at the bottom position.
In this case, per-pixel triggers are aligned between the top and bottom matrices; the two generated EM signals influence only each other, and the EM force between them, being stronger than the holder magnets, changes the state of the pixel.
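The control logic for such a bi-stable pixel is simple: pulse the coil only when the desired state differs from the current one, since the holder magnets keep the rod in place for free between pulses. A minimal sketch (class and field names are my own):

```python
# Hypothetical sketch of bi-stable pixel control: the rod sits at TOP or
# BOTTOM, held by permanent magnets; a coil pulse stronger than the holding
# force flips it. No static power is needed between pulses.

TOP, BOTTOM = 1, 0

class BistablePixel:
    def __init__(self, state=BOTTOM):
        self.state = state
        self.pulses = 0            # count coil pulses as an energy proxy

    def set(self, target):
        if target != self.state:   # pulse only on an actual state change
            self.pulses += 1       # EM force overcomes the holder magnets
            self.state = target

px = BistablePixel()
px.set(TOP)
px.set(TOP)      # already at TOP: no pulse, no power spent
px.set(BOTTOM)
```

This is where the static-power saving of the bi-stable architecture shows up: redundant updates cost nothing, at the price of only two displayable heights per pixel.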
An x/y grid is connected on one side to the matrix of coils wound around the dielectric cores of the rods, and on the other side to an AC signal mux controlled by the matrix controller.
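The point of the x/y grid is wiring economy: a pixel is driven only where a selected row line and a selected column line cross, so an N x M pad needs N + M drive lines instead of N * M. A minimal sketch of the addressing (names and interface are my own assumptions):

```python
# Hypothetical sketch of x/y grid addressing: a pixel's coil is energised
# only when both its row line and its column line are selected by the mux.

def address(pad_w, pad_h, x, y):
    """Return the (row_lines, col_lines) selection pattern for pixel (x, y)."""
    rows = [i == y for i in range(pad_h)]
    cols = [j == x for j in range(pad_w)]
    return rows, cols

rows, cols = address(16, 16, x=3, y=5)
# Exactly one row line and one column line are active, so only the
# coil at their intersection receives the full AC drive signal.
```

For a 16x16 pad this means 32 drive lines instead of 256, which is what makes the per-pixel coil matrix practical to wire at all.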
I hope blind people will have it one day.