At the Google I/O 2017 conference, Google announced its new Lens feature, designed to not only provide information about an image, but connect it to real-world information. This feature could potentially offer significant benefits to people who are blind or vision impaired.
While image recognition is not new for Google, which has dabbled in this area for several years with its Google Goggles feature, Lens takes it to a new level. Where earlier image recognition software could identify an image of, say, a flower, Lens can use contextual information to determine what type of flower it is.
Other examples demonstrated at Google I/O highlighted how Lens combines photos with GPS location data to give the user more accurate, context-aware results. It can also identify text and make use of that information: taking a photo of a restaurant, for example, can bring up information on the restaurant itself, its menu and online reviews.
For people who are blind or vision impaired, the Lens feature has the potential to provide significant benefits. While there are several effective apps available on mobile devices that deliver image recognition and OCR capabilities, Lens has the additional benefit of connecting the image with meaningful data that is likely to be useful while the user is in that specific location.
Lens is expected to be introduced to Google Assistant and Google Photos on Android later in the year. Additional information can be found in the Google Lens article on the CNet website, which also provides an overview of all the Google I/O announcements.