Researchers have developed display technology that adjusts for vision defects
The days of fiddling with bifocals and holding tablets at arm’s length may be numbered. As part of a joint project between MIT and UC Berkeley teams, new software alters a screen’s display according to the viewer’s lens prescription.
As announced last week, researchers from the Massachusetts Institute of Technology and the University of California at Berkeley have joined forces to produce a system that “predistorts” digital content for the individual observer, producing a correctly perceived image without corrective eyewear. Drawing on UC Berkeley’s School of Optometry and Computer Science Division and MIT’s Media Lab and Camera Culture Group, the team has developed technology that can not only account for, but also potentially diagnose, a user’s vision defects.
As lead author Fu-Chung Huang explains, the project’s significance lies in its approach: “instead of relying on optics to correct your vision, we use computation. This is a very different class of correction, and it is non-intrusive.” Project leader Brian Barsky has further suggested that the technology could one day remove the need for invasive eye treatments and mitigate the effects of lowered visual function:
We now live in a world where displays are ubiquitous, and being able to interact with displays is taken for granted … In some cases, [hard-to-treat vision defects] can be a barrier to holding certain jobs because many workers need to look at a screen as part of their work. This research could transform their lives.
Building on concepts from earlier studies, the software addresses a common visual defect — a mismatch between an object’s distance from the viewer and the distance at which the viewer’s eyes can focus — by simulating the object at the correct focal distance. Based on the viewer’s vision prescription, the program harnesses aspects of 3-D display technology to present pixels from different viewing angles so that a clear image forms, much as corrective lenses bend incoming light before it reaches the eye.
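The article does not detail the team’s algorithm, but the basic idea of “predistorting” content against a known optical error can be illustrated with a deliberately simplified sketch: model the viewer’s defocus as convolution with a point-spread function (a stand-in for the prescription), then prefilter the image with a Wiener-style inverse so that the eye’s blur approximately cancels it. This is a conceptual toy, not the researchers’ light-field method; the Gaussian blur model, filter, and parameter values below are all illustrative assumptions.

```python
# Toy sketch of prescription-based predistortion (NOT the MIT/Berkeley
# light-field algorithm): predistort, then simulate the eye's blur, and
# check the result is closer to the original than an uncorrected view.
import numpy as np

def gaussian_psf(shape, sigma):
    """Gaussian point-spread function standing in for defocus blur,
    centered at the origin (wrap-around) and normalized to sum to 1."""
    ys = np.minimum(np.arange(shape[0]), shape[0] - np.arange(shape[0]))
    xs = np.minimum(np.arange(shape[1]), shape[1] - np.arange(shape[1]))
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def blur(image, psf):
    """Simulate the defocused eye: circular convolution with the PSF."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)))

def predistort(image, psf, noise=1e-2):
    """Wiener-style prefilter: boost the frequencies the blur attenuates.
    (A real display would also have to clamp to its brightness range,
    which we skip here for simplicity.)"""
    H = np.fft.fft2(psf)
    W = np.conj(H) / (np.abs(H) ** 2 + noise)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))

rng = np.random.default_rng(0)
img = rng.random((64, 64))                      # stand-in image
psf = gaussian_psf(img.shape, sigma=1.5)        # stand-in "prescription"

plain_err = np.abs(blur(img, psf) - img).mean()
corrected_err = np.abs(blur(predistort(img, psf), psf) - img).mean()
# The predistorted-then-blurred image should match the original better
assert corrected_err < plain_err
```

The point of the toy is only the inversion idea; the actual system, as the article notes, goes further by steering different pixels toward different viewing angles rather than filtering a flat image.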
As earlier research discovered, using multiple on-screen pixels to simulate a single virtual pixel can drastically reduce an image’s resolution. By designing an algorithm that tailors images on liquid-crystal displays (LCDs), masking multiple perspectives while letting more light pass through, the team has managed to preserve image quality, a feat not previously achieved. Chris Dainty, a professor at the University College London Institute of Ophthalmology, praised the project for having cracked this fundamental problem: “[m]ost people in mainstream optics would have said, ‘Oh, this is impossible.’ But [the] group has the art of making the apparently impossible possible.”
The group plans to present the project’s fundamentals at the upcoming graphics conference SIGGRAPH. While the researchers stress that the technology is in its early developmental stages, their hopes for the implications and applications of their discoveries are high.
Images: PC World, MIT Media Lab/Camera Culture Group