Using VR to Bring the Low-Vision Community Back Into Real Visual World

Often when you hear about virtual reality advancements, it’s in the world of gaming or out-of-this-world experiences only possible through a headset. But what about the other side of the coin? Over the past few weeks, you’ve seen how VR is influencing the world of storytelling, enterprise, and gaming, but there’s another industry making great improvements to people’s quality of life.

Dr. Frank Werblin, Professor of Neuroscience at the University of California at Berkeley, and his team have developed a VR experience that brings the miracle of sight back to the low-vision community.

Virtual Reality takes people out of the real world and moves them into virtual space. IrisVision brings the low vision community back into the real visual world, a world that they have been denied for many years because the technology was not available at a reasonable cost.

Our Vision

We began this project 3 years ago with the intent of providing a low-cost, highly functional device to the low vision community. The low vision community is not well served by technology: when a low vision patient visits a clinic, the prescription is either a magnifying glass or a text reader that projects the image onto a large monitor. In other words, current technology confines the low vision patient to a visual bubble that extends no further than the reach of his arms. We wanted to bring the low vision patient back into the visual world. To be able to recognize expressions on the faces of his grandchildren across the room. To find items on the supermarket shelf. To read the music while playing the piano. To play cards with other adults and children. To sit across the room to watch TV rather than sitting inches from the screen. To read a smartphone display and email.

During my time as an undergrad at MIT and as a biomedical engineering PhD student at Johns Hopkins, I made many of the fundamental discoveries of how the retina works. Moving to the neuroscience department at UC Berkeley, I worked on devices to restore vision to the blind by inserting chips into the eye and by inserting genes into individual neuron types in the retina. As you can imagine, these techniques are generally extremely expensive, involve teams of physiologists, psychologists, engineers, and molecular biologists, and are extremely invasive. Due to their nature, they’re typically only offered to the almost-blind, and will be beyond the means of most people.

I realized early on that there was a much larger population of low-vision patients who could benefit from an inexpensive, non-invasive device to help them see. The Samsung Galaxy S6, and now the Galaxy S7, have made this possible on a worldwide distribution level.

The Design

We began by designing the interface between existing cameras, processors and displays. Much effort went into miniaturizing these components and optimizing power, but our device was still bulky and power hungry. Then, about 2 years ago, we had a brilliant revelation: the threshold had been crossed in smartphone technology. The Samsung Galaxy S6 had a display that could provide sufficient resolution for our project. Specifically, the Samsung Galaxy S6 had a 16 MP camera and a 2560 x 1440 pixel display. If you calculate the angular spacing between pixels, it is about 3 minutes of arc, taking into account that the screen is split into two images, one for each eye. Normal 20/20 vision can discern 1 minute of arc, so the inherent “resolution” of the Samsung display is about 20/60. This means that, for patients with vision worse than 20/60, for example 20/80 to 20/800, a magnified image on the Samsung display could improve their ability to discriminate. (Expectations are that future devices will continue to increase pixel density, greatly improving the resolution of the device.)
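The arithmetic above can be sketched in a few lines. Note the 64° per-eye field of view used here is an assumption chosen to reproduce the roughly 3 arcmin figure; the effective field of view of any particular headset will vary with its optics.

```python
def display_snellen(h_pixels, fov_deg):
    """Angular pixel pitch of one half-screen and the equivalent Snellen line.

    h_pixels: horizontal resolution of the full display (split between two eyes)
    fov_deg:  assumed horizontal field of view per eye, in degrees
    """
    pixels_per_eye = h_pixels / 2            # the screen is split, one half per eye
    arcmin_per_pixel = fov_deg * 60 / pixels_per_eye
    # 20/20 vision resolves ~1 arcmin, so the display's acuity "floor"
    # scales linearly: 3 arcmin per pixel -> roughly 20/60
    snellen_denominator = 20 * arcmin_per_pixel
    return arcmin_per_pixel, snellen_denominator

pitch, denom = display_snellen(h_pixels=2560, fov_deg=64)
```

With these assumed numbers, `pitch` comes out to 3.0 arcmin and `denom` to 60, matching the estimate in the text.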

Considering the Patient

We wanted to provide the patient with a convenient way to magnify objects of interest in the visual scene while maintaining situational awareness of the overall scene. For this we designed a magnification bubble as shown in the figure. The bubble has “soft edges” that merge into the scene so that the edges of the bubble do not obscure any details in the scene. The user can control the size and magnification of the bubble using the touchpad on the side of the Samsung Gear VR: the bubble size is increased by swiping up on the touchpad, and the magnification within the bubble is increased by swiping back. This allows the patient to keep his hands free so that he can perform other hand/eye coordinated activities like reading a book, scrambling eggs or playing the piano.
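A soft-edged magnification bubble like this can be approximated with a radial alpha blend between a magnified patch and the unmagnified scene. The sketch below is only an illustration of the general technique, not IrisVision's implementation; the function name and parameters are mine.

```python
import numpy as np

def soft_bubble(image, center, radius, mag, feather):
    """Blend a magnified patch into the scene with a soft radial edge.

    center:  (row, col) of the bubble center
    radius:  bubble radius in pixels
    mag:     magnification factor inside the bubble
    feather: width in pixels of the soft transition at the edge
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = center
    # Sample closer to the center to zoom in around it
    sy = np.clip(cy + (ys - cy) / mag, 0, h - 1).astype(int)
    sx = np.clip(cx + (xs - cx) / mag, 0, w - 1).astype(int)
    magnified = image[sy, sx]
    # Soft-edge weight: 1 inside the bubble, fading to 0 over `feather` pixels,
    # so the bubble boundary never hides detail in the surrounding scene
    dist = np.hypot(ys - cy, xs - cx)
    alpha = np.clip((radius - dist) / feather, 0, 1)
    if image.ndim == 3:
        alpha = alpha[..., None]
    return alpha * magnified + (1 - alpha) * image
```

In a headset, the touchpad gestures described above would simply adjust `radius` (swipe up) and `mag` (swipe back) between frames.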


Tuning the Device

In addition to bubble control, there are numerous adjustments that can be made to the display to accommodate the needs of the user. We have a “clinical menu” that allows for adjustment of bubble shape (it can be square), interpupillary distance, ambient level, contrast, inverted text (white letters on black background), and repositioning of the bubble within the scene.
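A clinical menu like this amounts to a per-patient settings profile. As a sketch only, with entirely hypothetical field names and defaults, it might look like:

```python
from dataclasses import dataclass

@dataclass
class ClinicalSettings:
    """Hypothetical per-patient profile; names and defaults are illustrative."""
    bubble_shape: str = "round"       # "round" or "square"
    ipd_mm: float = 63.0              # interpupillary distance
    ambient_level: float = 1.0        # overall brightness scaling
    contrast: float = 1.0             # contrast enhancement factor
    invert_text: bool = False         # white letters on a black background
    bubble_offset: tuple = (0, 0)     # reposition the bubble within the scene

# A clinician adjusts only what this patient needs
profile = ClinicalSettings(invert_text=True, contrast=1.5)
```

Storing the tuning as data rather than hard-coded behavior is also what makes the remote, telemedicine-style fitting described later in the article practical.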

Connections to the Digital World

The low vision patient has difficulty with the immediate environment, but also suffers from a disconnect from the digital world. To accommodate this need, we provide access, on what appears as a gigantic screen, to all digital media including news, streaming video, email, documents, and anything else that is available on the internet.

Clinical Evaluations

We have been testing and evaluating our system at the Johns Hopkins Low Vision Clinic for the last 3 years. Many of the crucial ideas underlying our development were contributed by Dr. Robert Massof at Johns Hopkins, who built the first modern low vision aid, the LVES, in the 1980s. That device was bulky and inefficient by today’s measure, but it provided the standard by which all subsequent devices have been tested.

See it in action:

Don’t just take my word for it. These people are living proof:

A Disruptive Technology

The modern smartphone has disrupted many industries by integrating some of its functions in new and exciting ways to perform previously unheard-of tasks (take Uber, for example). Consistent with that model, we have integrated the camera, display, processor, Wi-Fi, Bluetooth, head tracking, and voice control to create a low-cost, highly functional solution to a problem that affects more than 5 million people in the US and more than 70 million worldwide. We can supply these patients worldwide because Samsung does the manufacturing.

Telemedicine possibilities

Having an internet-connected solution offers many additional advantages. Right now, patients are “fitted” or “tuned” to the device when they go to the clinic to be treated. But because of internet connectivity, clinical telemedicine is now possible. This makes our technology available to people in remote areas for whom traveling to a clinic is not possible: a connected clinician in Baltimore can tune a patient in Bangladesh over the internet.

To learn more about the great work IrisVision is doing, visit

By Dr. Frank Werblin
Professor of Neuroscience, University of California at Berkeley
November 10, 2016