Dr Scott Hollier - Digital Access Specialist Posts

Featured Post

Welcome to – Home of Dr Scott Hollier

The biggest change to web accessibility in a decade is nearly here – is your organisation ready?

As W3C puts the finishing touches on its new Web Content Accessibility Guidelines (WCAG) 2.1, it’s a great time to upskill your staff on what’s coming and maximise your support for people with disabilities.

Whether your needs are local to Australia or international, Scott can provide a range of consultancy and research services to make websites, apps and documents accessible. Scott can also be booked for speaking engagements on a variety of topics relating to disability, education, and current and future technologies, as well as his life story, discussed in his book ‘Outrunning the Night: a life journey of disability, determination and joy’, now available for purchase.

Scott’s credentials include a PhD in the field and two decades working across the corporate, government and not-for-profit sectors. Scott is also an active contributor to W3C research and has a personal understanding of digital access as a legally blind person. You can learn more about digital accessibility in the news items below.

Thank you for visiting!

Google Lens receives Assistant support and wider release

Last year, Google announced the launch of its new Lens feature, designed to not only provide information about an image, but connect it to real-world information. The exciting news is that the feature, initially limited to Google’s own Pixel smartphones, is now being rolled out to most Android users in an update to the Photos app. iPhone users will also receive Lens at a later date.  

At the time I mentioned that for people who are blind or vision impaired, the Lens feature has the potential to provide significant benefits. While there are several effective apps available on mobile devices that can deliver image recognition and OCR capabilities, Lens has the additional benefit of connecting the image with meaningful data that is likely to be useful while the user is in that specific location. For example, a blind user could take a photo of a café and not only have the café itself identified, but also receive its menu and information about the accessibility of the building.

In addition, the feature is being added to the Google Assistant. According to an article by Android Police, “Lens in Google Photos will soon be available to all English-language users on both Android and iOS. You’ll be able to scan your photos for landmarks and objects, no matter what platform you use. In addition, Lens in Assistant will start rolling out to “compatible flagship devices” over the coming weeks. The company says it will add support for more devices as time goes on.” 

With the Google Assistant also receiving the ability to use Lens, people with vision-related disabilities will find it much easier to simply speak to their phone to identify their surroundings. Additional information on the feature can be found at Google’s Lens information page.

WCAG 2.1 Candidate Recommendation – what it means for Level AA compliance

In January 2018, the hard work of W3C resulted in its new Web Content Accessibility Guidelines (WCAG) 2.1 standard reaching Candidate Recommendation stage. This means that, barring any significant issues in its real-world testing, the standard is close to completion and on track for its mid-2018 release date.

Last year I wrote an article titled WCAG 2.1 draft: reflections on the new guidelines and success criteria.  It’s fair to say that a lot has changed since then and that is ultimately a good thing. Importantly, the purpose of WCAG 2.1 is NOT to replace the well-established WCAG 2.0 standard, but rather to extend support for the mobile web. In my view, WCAG 2.1 as it stands does this very well with some additional success criteria added to existing guidelines, and a much-needed focus on supporting mobile platforms by providing two new guidelines.

With WCAG 2.1 reaching a near-complete stage, it’s a great time to look at all this in a bit more detail and focus on what additional work would be required by ICT professionals to meet WCAG 2.1 Level AA compliance.

Before we get started, it’s important to highlight two things: firstly, I’m not going to go through all the guidelines and success criteria associated with WCAG 2.0, just the WCAG 2.1 extensions.  If you’d like more information on the original WCAG 2.0, I’d be happy to provide you with some resources to get up-to-date with the international standard.

Secondly, I’m going to focus particularly on Level AA compliance. A number of the new WCAG 2.1 success criteria are Level AAA so for now I’ll leave those out given most policy and legislative frameworks don’t go to Level AAA.

Two new guidelines

So with those things in mind, let’s start with the two new guidelines. These additions bring the total number of guidelines in WCAG 2.1 to 14, with both new guidelines sitting under the Operable design principle. The new guidelines are:

  • 2.5 Pointer Accessible: Make it easier for users to operate pointer functionality.
  • 2.6 Additional sensor inputs: Ensure that device sensor inputs are not a barrier for users.

Guideline 2.5 is focused on making sure that whatever type of pointer is being used, such as a mouse pointer, a finger interacting with a touch screen, an electronic pencil/stylus or a laser pointer, all functions should work correctly. It may be the case that a person with a disability finds it easier to use one type of pointing device over another, so from an accessibility standpoint all options should be available.

Guideline 2.6 relates to the use of sensor inputs such as a gyroscope or accelerometer found on mobile devices. From an accessibility standpoint, it may be the case that users can’t operate these sensors – for example, being unable to tilt a mobile phone that is mounted on a wheelchair. As such, it is essential that people with disabilities are able to achieve the same outcome through the user interface if they are unable to use a sensor-based input method.

In addition to the two new guidelines, there are also new success criteria that have been added to existing guidelines. The following is a list of all Level A and AA WCAG 2.1 success criteria extensions.

WCAG 2.1 Level A

Success Criterion 2.4.11: Character Key Shortcuts

If a keyboard shortcut is implemented in content using only letter (including upper- and lower-case letters), punctuation, number, or symbol characters, then at least one of the following is true:

  • Turn off
  • Remap
  • Active only on focus
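As a hypothetical sketch (the function and configuration names here are my own, not from the standard), a single-character shortcut that satisfies the ‘turn off’ and ‘remap’ conditions might look like this:

```typescript
// Hypothetical sketch of Character Key Shortcuts: a single-character
// shortcut ("m" to mute) that the user can turn off entirely or remap
// to a different character, so it cannot clash with assistive technology.
interface ShortcutConfig {
  enabled: boolean; // "Turn off" option: user can disable the shortcut
  key: string;      // "Remap" option: user can choose a different character
}

function makeShortcutHandler(
  config: ShortcutConfig,
  action: () => void
): (key: string) => void {
  return (key: string) => {
    if (!config.enabled) return; // shortcut turned off by the user
    if (key !== config.key) return;
    action();
  };
}
```

A real implementation would attach this to keyboard events and could also skip firing unless the relevant component has focus, which is the third listed option.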

Success Criterion 2.4.12: Label in Name

For user interface components with labels that include text or images of text, the name contains the text presented visually.
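A minimal check for this criterion (my own illustrative helper, not an official test) simply compares the visible label with the computed accessible name:

```typescript
// Illustrative check for Label in Name: the accessible name should contain
// the text shown visually, so a speech-input user can say what they see.
function labelInName(visibleLabel: string, accessibleName: string): boolean {
  return accessibleName
    .toLowerCase()
    .includes(visibleLabel.trim().toLowerCase());
}
```

For example, a button that visually says ‘Search’ but carries an accessible name of ‘Go’ would fail, because a voice-control user saying “click Search” would find no match.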

Success Criterion 2.5.1: Pointer Gestures

All functionality that uses multipoint or path-based gestures for operation can be operated with a single pointer without a path-based gesture, unless a multipoint or path-based gesture is essential.

Success Criterion 2.5.2: Pointer Cancellation

For functionality that can be operated using a single pointer, at least one of the following is true:

  • No Down-Event
  • Abort or Undo
  • Up Reversal
  • Essential
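The ‘No Down-Event’ option is the most common pattern here: activation happens on the up-event, so a user can slide off a control to abort an accidental press. A hypothetical sketch:

```typescript
// Sketch of Pointer Cancellation ("No Down-Event" option): the action only
// fires when the pointer is released while still over the target, so the
// user can slide away from the control to abort. Names are my own.
function makePressTarget(action: () => void) {
  let pressed = false;
  return {
    pointerDown() {
      pressed = true; // nothing is triggered on the down-event
    },
    pointerUp(stillOverTarget: boolean) {
      if (pressed && stillOverTarget) action(); // activate on up-event only
      pressed = false; // releasing off-target aborts the action
    },
  };
}
```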

Success Criterion 2.6.1: Motion Actuation

Functionality that can be operated by device motion or user motion can also be operated by user interface components and responding to the motion can be disabled to prevent accidental actuation, except when:

  • Supported Interface
  • Essential

These essential WCAG 2.1 success criteria demonstrate the importance of catering for people with disabilities on the mobile platform. Success criteria 2.4.11 and 2.4.12 help users navigate and find content by ensuring that customised keyboard shortcuts don’t interfere with assistive technology and by improving label identification. Success criteria 2.5.1 and 2.5.2 explain the importance of making sure that multi-gesture commands can be achieved with a single pointer and that there is a consistent technique to cancel an action. The final success criterion in this list, 2.6.1, explains that commands reliant on the movement of a device should also be achievable through the standard user interface.

One of the concerns raised through the WCAG 2.1 process is that several of these success criteria include an ‘essential’ exception, which many view as a way for developers to avoid implementing accessibility by arguing that the design is essential to the functionality of the web content. However, I suspect that in practical terms it will be difficult for a developer to argue that other accommodations are not possible when it is obvious that alternative interface controls could have addressed the issue. It will be interesting to see under what circumstances the ‘essential’ argument is put forward as a way of avoiding some WCAG 2.1 success criteria.

WCAG 2.1 Level AA

Success Criterion 1.3.4: Identify Common Purpose

The meaning of each input field collecting information about the user can be programmatically determined when:

  • The input field has a meaning that maps to the HTML 5.2 Autofill field names; and
  • The content is implemented using technologies with support for identifying the expected meaning for form input data.
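In HTML this is achieved with the `autocomplete` attribute. As a sketch (the helper function is my own; the token values are from the HTML 5.2 autofill list):

```typescript
// Sketch of Identify Common Purpose: common input purposes mapped to
// HTML 5.2 autofill tokens. The mapping helper is hypothetical; the
// token strings ("name", "email", "tel", "postal-code") are real.
const autofillTokens: Record<string, string> = {
  fullName: "name",
  email: "email",
  phone: "tel",
  postcode: "postal-code",
};

function inputMarkup(purpose: string, type = "text"): string {
  const token = autofillTokens[purpose];
  return `<input type="${type}" autocomplete="${token}">`;
}
```

Marking fields up this way lets browsers and assistive technologies recognise what each field is for and fill it in on the user’s behalf.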

Success Criterion 1.4.10: Reflow

Content can be presented without loss of information or functionality, and without requiring scrolling in two dimensions for:

  • Vertical scrolling content at a width equivalent to 320 CSS pixels;
  • Horizontal scrolling content at a height equivalent to 256 CSS pixels;

Except for parts of the content which require two-dimensional layout for usage or meaning.

Success Criterion 1.4.11: Non-Text Contrast

The visual presentation of the following have a contrast ratio of at least 3:1 against adjacent color(s):

  • User Interface Components
  • Visual information used to indicate states and boundaries of user interface components, except for inactive components or where the appearance of the component is determined by the user agent and not modified by the author;
  • Graphical Objects
  • Parts of graphics required to understand the content, except when a particular presentation of graphics is essential to the information being conveyed.
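The 3:1 ratio here uses the same contrast formula that WCAG 2.0 already uses for text, so it can be computed directly from sRGB values:

```typescript
// WCAG contrast ratio between two sRGB colours, using the relative
// luminance formula from the WCAG 2.x definitions.
function relativeLuminance([r, g, b]: [number, number, number]): number {
  const lin = (c: number) => {
    const s = c / 255; // normalise the 0-255 channel to 0-1
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

function contrastRatio(
  a: [number, number, number],
  b: [number, number, number]
): number {
  // Sort so the lighter colour's luminance is always the numerator.
  const [l1, l2] = [relativeLuminance(a), relativeLuminance(b)].sort(
    (x, y) => y - x
  );
  return (l1 + 0.05) / (l2 + 0.05);
}
```

Black on white yields the maximum ratio of 21:1; a user interface component passes this criterion when the ratio against its adjacent colours is at least 3:1.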

Success Criterion 1.4.12: Text Spacing

In content implemented using markup languages that support the following text style properties, no loss of content or functionality occurs by setting all of the following and by changing no other style property:

  • Line height (line spacing) to at least 1.5 times the font size;
  • Spacing following paragraphs to at least 2 times the font size;
  • Letter spacing (tracking) to at least 0.12 times the font size;
  • Word spacing to at least 0.16 times the font size.

Exception: Human languages and scripts which do not make use of one or more of these text style properties in written text can conform using only the properties that are used.
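These multipliers translate into concrete values for any given font size, as this small sketch (my own helper, not part of the standard) shows:

```typescript
// Minimum spacing values (in px) that the Text Spacing criterion requires
// content to tolerate without loss of content or functionality, computed
// from the multipliers listed above.
function minTextSpacing(fontSizePx: number) {
  return {
    lineHeight: fontSizePx * 1.5,     // line spacing
    paragraphSpacing: fontSizePx * 2, // space following paragraphs
    letterSpacing: fontSizePx * 0.12, // tracking
    wordSpacing: fontSizePx * 0.16,
  };
}
```

For a typical 16px body font this works out to a 24px line height, 32px paragraph spacing, and letter and word spacing of roughly 1.9px and 2.6px.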

Success Criterion 1.4.13: Content on Hover or Focus

Where receiving and removing pointer hover or keyboard focus triggers additional content to become visible and hidden, respectively, the following are true:

  • Dismissable
  • Hoverable
  • Persistent

Exception: The visual presentation of the additional content is controlled by the user agent and is not modified by the author.

Success Criterion 2.6.2: Orientation

Content does not restrict its view and operation to a single display orientation, such as portrait or landscape, unless a specific display orientation is essential.

While many of the success criteria here may initially appear difficult to implement, the grouping of the 1.4.x success criteria makes it easier to see that they all relate to visual improvements. I’m particularly pleased that the infamous ‘double scrollbar’ scenario that screen magnification users face so often, particularly on small displays, is addressed to some degree here, along with contrast, text spacing and hover issues. It is again interesting to note the remarkably specific level of detail in some of these success criteria, and it’ll be interesting to see whether the real-world testing confirms these values prior to the final WCAG 2.1 release.

I’m also pleased to see Orientation make it this far; of all the new success criteria, I suspect this is the one that will resolve the most frustration for people with disabilities on a mobile device. When a mobile website or app forces the user to hold their device in portrait or landscape for no apparent reason, it is a great frustration and significantly affects how the device can be used: it can change the reach of buttons, make swipe gestures more awkward or affect the use of screen real estate. While there is another ‘essential’ exception creeping in here, it’s been my experience that few apps that lock in an orientation really need to do so, which leads me to be optimistic about the implementation of this one.

Where to from here?

From here, it’s likely that WCAG 2.1 will be released this year as promised, and the take-away message for ICT professionals is to get ready to consider the access implications of mobile-specific elements such as multiple input methods and sensor integration. For accessibility professionals, the biggest message is that testing the accessibility of web content on a mobile device is no longer optional, and new testing processes will need to be established.

That said, it’s been exciting through my W3C work to see the standard being developed and I’d like to say a big thank you to all the W3C staff, invited experts and volunteers who continue to give up their time to ensure that the mobile web receives a much-needed accessibility focus through the development of WCAG 2.1.

NOTE: at this time of writing, WCAG 2.1 is a Candidate Recommendation and is NOT an official W3C standard. As such, the guidelines and success criteria discussed in this article may differ from the final W3C recommendation. 

Perth Web Accessibility Camp 2018 highlights

The 2018 Perth Web Accessibility Camp was held on 15 February, hosted by Bankwest in their Perth CBD head office, and a fantastic day was had by all. About 140 people were in attendance, the largest number in the five years the Camp has been running.

Photo: Camp attendees in the Bankwest presentation room

David Masters from Microsoft Australia provided the keynote, discussing some of the innovative projects Microsoft is working on. The presentation included information about how Microsoft’s internal processes support accessibility standards, the Seeing AI app available for iOS devices to help blind users, and the upcoming Hearing AI app that will potentially help support people who are Deaf or hearing impaired. Ayesha Patterson gave a demonstration of the Seeing AI app, covering both its benefits and limitations.

Photo: David Masters presenting at the Camp

Greg Madson and Erika Webb from Vision Australia asked in their presentation, ‘Are web browsers keeping up with accessibility needs?’. They explained that the merger of many assistive technology providers into the company VFO has had a notable impact on the assistive technology industry, such as the creation of Fusion, a now-available product that combines ZoomText and JAWS. Furthermore, web browsers that have recently innovated, such as Firefox, have done so with accessibility being sacrificed in the process. Greg and Erika said it’s likely the issues with Firefox will be resolved by around May, which should restore accessibility. Other points included that Narrator in Windows 10 has been improved significantly and that, while browsers continue to evolve, the WebAIM screen reader survey indicated the preferred options for blind users are still JAWS with Internet Explorer or NVDA with Firefox.

Amanda Mace from Web Key IT did a great presentation on Accessible Gaming. Amanda explained that gaming has many health benefits and that it’s important to make sure people with disabilities can effectively participate: the ‘work = reward’ equation, the sense of community with other players and the ability to achieve can all make a big difference. Amanda indicated that there are 30 million gamers in the US alone, so it’s important to make sure that controls are designed in an accessible way, adjustable keyboard mappings are available, colour contrast options are included, and captions are provided for videos.

Another presentation that was very well received came from Vithya Vijayakumare of VisAbility, on the importance of 3D sound, also known as spatial audio – the capturing of audio in a similar way to how we hear the world. The option Vithya demonstrated was 3D binaural sound, and it was amazing how immersive it was when using the provided demo on headphones. Importantly, Vithya sees 3D sound as having great accessibility implications in real-world scenarios such as avoiding obstacles.

After lunch, a lot of fun was had with Great Debate V: ‘The Internet of Things is Awesome for Accessibility’. There was much discussion, enthusiasm and ranting by both teams that had three minutes per person to make their case. This year I was on the negative team and made the case that while interesting, it hasn’t quite reached awesome status yet. For the first time in a while the debate was voted to be a draw so perhaps it’s still a little too early to determine its awesomeness level.

Jason O’Neil gave a great presentation on how design systems can encourage accessible, on-brand colours. With a focus on colour contrast issues due to the large number of people with a colour vision impairment, Jason explained how the colour palette can be customised to ensure that the colours selected for a design will support accessibility.

My final highlight was the presentation by Kammi Rapsey from Media on Mars and David Doyle from DADAA, a case study about the development of the DADAA website. David made the case that the move to an accessible website transformed the organisation from talking about people with disabilities to talking to them, while Kammi stepped the audience through some of the challenges in getting the web industry on board for accessibility, and how the journey is worth it in the end.

Photo: Dr Scott Hollier presenting at the Camp

In addition to my Great Debate appearance, I gave a presentation about the Curtin research I undertook last year, which is related to my upcoming CSUN presentation. I discussed how IoT is not as new as you might think, but with connectivity, affordability, environmental data and the ease of communication with digital assistants it’s now a popular reality. I then went on to discuss some of the benefits and issues for students with disabilities, such as monitoring lecturers to improve their delivery, and issues with interoperability. Full details of the research paper on which my presentation was based can be found in the Publications section of the W3C Web of Things resource.

So that pretty much wraps up the Perth Web Accessibility Camp for another year. Many thanks to my colleagues on the organising committee who did a wonderful job and to all presenters and attendees for making it such a great day.

CES 2018: evolutionary, but not revolutionary, for digital access

The Consumer Electronics Show (CES) is the world’s largest consumer technology exhibition and showcases the products likely to appear in our shops in the coming months. While CES 2017 was considered revolutionary due to the Alexa digital assistant popping up everywhere, this year is more of a continuation of last year’s trends but with improved capabilities. This includes some helpful improvements for people with disabilities and some innovative proof-of-concept products that are likely to lead to digital access improvements in the future.

Roll-up TV

The announcement that received the most attention was LG’s rollable 65” OLED TV. As noted in the video below, the prototype could be useful in a house with limited space whereby you could use it as a full-sized TV for watching movies or shrink it down to be used as a computer monitor.

While this model is only for demonstration purposes, as critical features such as how to connect devices to it are still being worked out, it may prove to be significant in a few years’ time as the technology becomes more defined. From a vision impairment perspective, there are significant benefits to making a TV screen instantly bigger to see text and images, then instantly putting the screen back to a smaller size for other users. However, the implication I see as particularly exciting is its portability. Imagine having a mobile phone that can fit in your pocket but, with the press of a button, turn into a screen the size of a home TV. As a vision impaired person, I see this as a great step forward and I’m looking forward to seeing how this proof-of-concept evolves.

Digital assistants are now getting screens

At last year’s CES, Amazon’s Alexa stole the show, with the digital assistant being integrated into a variety of different devices. This year Google has struck back, with its Google Assistant now reaching 400 million devices. While Alexa-based digital assistants such as the Amazon Echo have had tremendous success in the US, Google has been the winner this year due to its international push, beating out Amazon in markets such as Australia where the Amazon Echo has only just been released.

In terms of access potential, the convenience of using a digital assistant has been available for a few years now. If you have a mobility impairment, being able to simply speak to a Google Home or Amazon Echo to turn on a light switch or play music has been a significant step forward, and likewise for people with a vision impairment, home automation using a digital assistant makes it much easier to achieve things. In the video below, the Google booth at CES 2018 even demonstrated the ability to connect a device that cooks popcorn!

However, while the ability to provide hands-free and non-visual commands for everyday tasks is a fantastic thing for people with vision and mobility impairments, the new trend in digital assistants likely to provide a further improvement is the addition of screens. While it’s great to make popcorn, the video about the Google booth also highlights that several manufacturers are providing displays for digital assistants to show information such as recipes while cooking. Although this feature is already available in a limited form using the Google Chromecast, the version of Alexa on Amazon’s Fire TV and some other Echo devices, the integration of a screen will have tremendous benefits for people who are Deaf or hearing impaired, as they will be able to see the information provided by the digital assistant. This opens new possibilities, such as the use of a Google Home-type device at an office reception desk, where results can be conveyed using both audio and visual feedback.

Other highlights

While there were many other products likely to have a profound impact on people with disabilities, such as driverless cars and drones that deliver people to their destination rather than packages, their availability to the public is unlikely this year. However, there are a few minor improvements to existing products which will benefit people with disabilities, such as the spread of wireless charging for mobile phones and improvements to Virtual and Augmented Reality.

While wireless charging may not seem particularly exciting or new, its inclusion in the latest iPhone models has been flagged as a signal for industry to include the feature in more affordable devices rather than just high-end phones. The other good news is that the charging technology is standard across different devices, meaning that charging mechanisms are likely to become more affordable. From a digital access perspective, wireless charging can be very helpful, especially for a person with a mobility impairment, as the phone can simply be placed on a table to charge rather than having to find and plug in a cable.

Virtual Reality and Augmented Reality are also not exactly new concepts, but their move towards affordability and standalone operation, without requiring a computer or phone to be attached, is a notable trend from CES this year. Benefits for digital access are particularly notable in the simulation of real-world environments for navigation and mobility, in rehabilitation, and as a good way for people without a disability to get a better understanding of a physical limitation. As these devices become less cumbersome and more affordable, their significance will continue to grow.

Overall, this year’s CES is more about evolutionary benefits than revolutionary ones, with improvements to digital assistants, wireless charging and virtual reality all likely to trickle into our homes during the course of the year. However, concepts such as the roll-up TV demonstrate that exciting things are on the horizon, and I’m still excited about the driverless car when the time comes that I can get one. Additional information on CES products can be found at the CNet CES 2018 news website.