

ACT TF & YouDescribe – two highlights from CSUN 2018

It is with a bittersweet feeling that I write this article about CSUN 2018. Originally I was planning to be there to present the Internet of Things research I prepared for W3C. Sadly it wasn’t meant to be: a medical issue flared up, leading me to cancel my travel just after check-in. It’s a strange feeling to ask the airline to retrieve your luggage after it has already gone through!

While I didn’t make it to the conference myself, there were lots of amazing presentations, and after touching base with several people who did present, I wanted to pull together two things that really stood out for me.

Accessibility Conformance Test (ACT) Task Force leadership

Shadi Abou-Zahra from W3C gave a great presentation about the work of the Accessibility Conformance Test (ACT) Task Force, which focuses on supporting test tool developers, testing professionals, industry, manufacturers of technology and procurers of accessible technology, among others, in testing for accessibility. The goals of the ACT TF are to:

  • Reduce differing interpretations of WCAG
  • Make test procedures interchangeable
  • Develop a library of commonly accepted rules for WCAG
  • Establish a community of contributors

While WCAG-EM 1.0 is already in place to help people create an accessibility auditing process, its guidance is quite broad. The upcoming standards work on WCAG 2.1 and Silver highlights the need for a uniform, consistent set of test procedures within auditing processes.

I’ve personally seen the auditing processes of four companies, and while they all provide effective guidance, they’re also all significantly different – including my own approach. The work of the ACT TF will be an exciting development going forward and will really help bring people who work in the industry together around a consistent interpretation of the relevant standards.
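To give a flavour of what a shared rule library could cover, here is a minimal sketch of an automated check for images that are missing a text alternative (relevant to WCAG 1.1.1). To be clear, this is just my own illustration and not an ACT rule; the function name, the result shape and the pass/fail logic are all assumptions on my part.

```typescript
// Hypothetical check: do all <img> elements on the page have a text alternative?
// Intended to run in a browser context against the current document.

type Outcome = "passed" | "failed";

interface RuleResult {
  target: string;   // a short snippet identifying the element tested
  outcome: Outcome;
}

function checkImagesHaveTextAlternative(doc: Document): RuleResult[] {
  const results: RuleResult[] = [];
  doc.querySelectorAll("img").forEach((img) => {
    // Any alt attribute (even an empty one, for decorative images) or an ARIA label
    // counts as passing in this simplified sketch; a published rule would define
    // applicability and expectations far more carefully.
    const hasTextAlternative =
      img.hasAttribute("alt") ||
      img.hasAttribute("aria-label") ||
      img.hasAttribute("aria-labelledby");
    results.push({
      target: img.outerHTML.slice(0, 80),
      outcome: hasTextAlternative ? "passed" : "failed",
    });
  });
  return results;
}

// Example: log any failures for the current page.
checkImagesHaveTextAlternative(document)
  .filter((result) => result.outcome === "failed")
  .forEach((result) => console.warn("Missing text alternative:", result.target));
```

As I understand it, the rules the TF publishes are structured, tool-agnostic documents rather than code, precisely so that a check like this can be implemented consistently by different tools and human testers.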

YouDescribe

A second presentation I was sad to miss, but have since managed to catch up on, was about YouDescribe, a way to create audio described versions of YouTube videos for free. The website explains how it works like this:

“Sighted people view YouTube videos and record descriptions of what they see. When the video is played with YouDescribe, the descriptions are played back with the video. Underneath the hood, YouDescribe uses an exclusive API to store description clips and information about them. YouDescribe knows what video each clip belongs to and what time the clip should be played. Lots of other information is stored along with the descriptions, including who recorded it, when it was recorded, how popular it is, etc. YouDescribe is the first video service to allow anybody, anywhere, to record and upload video descriptions to the cloud. It provides a unique way for people to get descriptions for the instructional, informational, and entertainment videos offered on YouTube.”
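Purely to illustrate the idea of time-aligned description clips stored separately from the video, here is a rough sketch of how such a clip record and a playback lookup might be modelled. The field names and the clipsDueSoon function are my own assumptions for illustration, not YouDescribe’s actual API.

```typescript
// Hypothetical model of a description clip and a lookup a player might use.
// These names are illustrative guesses, not YouDescribe's real data model.

interface DescriptionClip {
  videoId: string;     // the YouTube video the clip belongs to
  startTime: number;   // seconds into the video at which the clip should play
  audioUrl: string;    // where the recorded description is stored in the cloud
  describedBy: string; // who recorded it
  recordedAt: Date;    // when it was recorded
  upvotes: number;     // a simple stand-in for "how popular it is"
}

// Return the clips that should start within the next `windowSeconds` of playback,
// so a player can cue them up alongside the video.
function clipsDueSoon(
  clips: DescriptionClip[],
  currentTime: number,
  windowSeconds = 1
): DescriptionClip[] {
  return clips
    .filter(
      (clip) =>
        clip.startTime >= currentTime && clip.startTime < currentTime + windowSeconds
    )
    .sort((a, b) => a.startTime - b.startTime);
}
```

The point is simply that the descriptions live as separate, timestamped clips associated with a YouTube video, so they can be played back alongside the original audio rather than being baked into the video itself.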


Audio described video is of real benefit to people who are blind or vision impaired, yet it is notably lacking here in Australia, so YouDescribe has the potential to make audio described content much easier both to create and to find. While YouDescribe is not a brand-new service, I had found it hard to track down concrete information about it until the CSUN presentation, which is one of the things that makes such conferences great.

Internet of Things report now available  

Also, while talking about CSUN 2018, I’d like to take this opportunity to sincerely apologise to anyone who, due to the late cancellation, was trying to find me or turned up to my presentation slot. If you would like to read the full report of my Internet of Things research prepared for W3C, you can find a link to it in the Publications list of the W3C Web of Things wiki.

Google Lens receives Assistant support and wider release

Last year, Google announced the launch of its new Lens feature, designed not only to provide information about an image, but also to connect it to real-world information. The exciting news is that the feature, initially limited to Google’s own Pixel smartphones, is now being rolled out to most Android users via an update to the Photos app. iPhone users will also receive Lens at a later date.

At the time I mentioned that for people who are blind or vision impaired, the Lens feature has the potential to provide significant benefits. While there are several effective apps available on mobile devices that can deliver image recognition and OCR capabilities, Lens has the additional benefit of connecting the image with meaningful data that is likely to be useful while the user is in that specific location. For example, a blind user could take a photo of a café and not only have the café itself identified, but also be given its menu along with information about the accessibility of the building.

In addition, the feature is being added to the Google Assistant. According to an article by Android Police, “Lens in Google Photos will soon be available to all English-language users on both Android and iOS. You’ll be able to scan your photos for landmarks and objects, no matter what platform you use. In addition, Lens in Assistant will start rolling out to “compatible flagship devices” over the coming weeks. The company says it will add support for more devices as time goes on.”

With the Google Assistant also receiving the ability to use Lens, it will be much easier for people with vision-related disabilities to simply speak to their phone to identify their surroundings. Additional information on the feature can be found at Google’s Lens information page.