
Dr Scott Hollier - Digital Access Specialist Posts

Registrations open for 2018 Perth Web Accessibility Camp

It’s that time of year again when the Perth web accessibility community comes together for its annual web accessibility camp. The event will be held on 15 February and registrations are now open.

Celebrating its fifth year, the Perth Web Accessibility Camp (PWAC) is a one-day event featuring a variety of presentations and activities relating to disability and technology. The keynote will be delivered by David Masters from Microsoft Australia, who will share details on how Microsoft is evolving its culture to be more inclusive, including growing the diversity of its workforce to include more people with disabilities.

I’ll also be presenting at the event, sharing information on the Internet of Things, covering international developments and research relating to the tertiary education sector. Other speakers come from organisations such as VisAbility, Web Key IT and Blind Citizens WA. Bankwest is the primary sponsor of the Camp and as such the event will be held in its Perth CBD office.

Additional information on registering and the programme can be found on the Perth Web Accessibility Camp 2018 website. If you’d like to get a better understanding of how the day works you can also read my highlights article from last year’s PWAC event.

Audio Description on Australian television – an interview with Chris Mikul

In my former role with Media Access Australia, I used to have a tradition whereby I’d ask a prominent accessibility specialist to provide some insights on an important topic for the year ahead. In continuing that tradition on my own website, it’s my great pleasure to introduce Chris Mikul.

Chris’s specialty area is digital access in television, and to say he is an expert is an understatement – he has been working in this area since joining the Australian Caption Centre in the 1980s and continues to provide guidance as an accessibility consultant.

Photo of Chris Mikul © 2017 Media Access Australia

In this interview, Chris discusses the somewhat haphazard journey of audio description on television both here in Australia and abroad, considering the possibilities and pitfalls of an audio described video future.

SH: Can you give us an overview of what Audio Description is?

CM: Audio description is the descriptive narration of a film, TV show, performance or other media for people who are blind or have vision impairment. It was first developed for American TV in the early 1980s, and is now available on television in the United Kingdom, United States, Canada, New Zealand, Korea and many European countries. But not, sadly, in Australia. If it’s well done, it’s a wonderful service that transforms the experience of media for people who need it.

SH: How common is AD in cinema and the Arts?

CM: Since 2014, audio description has been available in over a hundred cinemas owned by the four major cinema groups: Hoyts, Village, Greater Union Birch Carroll & Coyle and Reading. It’s delivered via headphones, which the consumer has to request before a screening. Unfortunately it’s not well publicised, so people who would benefit from it often don’t know about it, and sometimes the cinema employees aren’t clear about how it works either, so there can be problems when people request it. Audio description is also available for selected theatrical performances, art exhibitions, and so on.

SH: In Australia, AD on TV seems to be elusive, but there was a trial for it some years back.  Could you tell us a bit about the trial and other AD experiments in Australia to date?

CM: There have been two trials, both involving the ABC. The first took place over 17 weeks in 2012, during which two hours of audio described programs were broadcast on the main ABC channel each day. The ABC prepared a report for the Government, which was eventually made public, and this was supposed to be followed by the Government consulting with stakeholders about a future service on TV, but this simply didn’t happen. For the second trial, in 2016, audio description was provided for programs on the ABC’s online catch-up service, iview. This was judged a great success by consumers, but again, it hasn’t been followed by the introduction of a regular service, which is extremely frustrating.

SH: How does AD on TV in Australia compare with AD in other countries such as the UK?

CM: The levels overseas vary considerably. The United Kingdom has the highest mandatory levels – 20% of programs on most channels must be audio described, but some channels have elected to do 30%. Canada has a dedicated accessible channel on which everything is audio described and captioned. The levels in other countries that have it tend to be low, but in New Zealand they’re now up to 40 hours a week. That’s a bittersweet situation for me because I went over to New Zealand in 2011 to train the first audio describers there.

SH: Do you think legislation has an impact on whether or not AD is provided on TV?

CM: Having worked in the accessibility sector for almost 30 years, one thing that has unfortunately become clear to me is that legislation (sometimes combined with the threat of litigation over discrimination) has always been the main mechanism for attaining high and continuing levels of accessibility in media. The reason we do not have audio description on broadcast TV here is that the Australian government has shown no interest in making its delivery mandatory, and the broadcasters don’t want to provide it because of the costs and the technical work involved.

SH: Are there any other mechanisms such as websites and apps that might be able to make AD more common?

CM: There’s actually an interesting experiment being conducted at the moment by an Australian company called Big Access Media. They have developed an app that you can use to access audio description files for some children’s programs on Foxtel (the app synchronises the file with the program’s soundtrack). That’s a good development and I’m sure we’ll see more of these sorts of solutions in the future. The main issue with something like this is that a lot of blind people, particularly older people, don’t have access to the internet and don’t have smartphones. That’s why the blindness advocacy groups have always called for audio description to be available on TV, so everybody can experience it easily.

SH: What do you see as the future of AD?

CM: I think it will continue to grow around the world, although it will probably be a long time before it becomes as common as captioning. In Australia, it’s hard to say. The Department of Communications recently completed an investigation into the future of audio description, during which it consulted with advocacy groups, broadcasters and access service providers. I attended meetings in my capacity as a consultant for Media Access Australia (now the Centre for Inclusive Design). A report has been completed that sets out various options, so now it’s up to the Government to decide what they’re going to do, if anything.

SH: If people would like to experience AD, what organisations are best placed to help provide information about what’s on and where?

CM: There’s no single repository of information, and because the provision of the service is so patchy here, that creates its own problems. Cinemas owned by the groups I listed above should identify audio described screenings on their websites. Otherwise it’s really just a matter of contacting theatres, art galleries and the producers of other events and asking if there will be any audio description. That’s actually a good thing to do anyway, as it will increase awareness of the service among organisations that may never have even thought of it.    

SH: Chris, thank you so much for your time.

Thanks again to Chris for providing such fantastic insights and I’ll endeavour to keep you posted throughout the year as news relating to audio description and digital access more broadly continues to break.

Dr Scott Hollier to present on IoT education access research at CSUN 2018

The new year is off to an exciting start with confirmation that I’ll be presenting a paper about the Internet of Things (IoT) and education access at the CSUN 2018 Assistive Technology conference in San Diego, USA this March.

The presentation, titled ‘Internet of Things Education: Implications for Students with Disabilities’, is based on research I was involved in during 2017 at Curtin University. The presentation will focus on the findings of a report that investigated the impact of IoT on students with disabilities in tertiary education, the evolving benefits and issues of IoT, and potential future projects for consideration in this emerging space. The report was structured to support W3C processes as part of the Web of Things work, to which I contribute through the Research Questions Task Force.

My presentation session will be held on Friday 23 March at 2:20pm. If you are planning to go to the CSUN conference this year, please get in touch as it’d be great to see you there.

Many thanks to my colleagues at Curtin and W3C for their support during the project which has led to this opportunity.

W3C WAI – 2017 year in review

In my W3C WAI review last year, I mentioned that 2016 was a remarkable year of great change both professionally and personally. Taking the plunge into digital access consultancy as an individual has, for the most part, worked well, and I’ve enjoyed the variety of work in supporting organisations with their needs, undertaking some great research projects and continuing my teaching of the Professional Certificate in Web Accessibility course hosted by UniSA. However, the most enjoyable and rewarding part of my work this year has been a voluntary one: dedicating time to the work of the World Wide Web Consortium (W3C) Web Accessibility Initiative (WAI).

My role – the Research Questions Task Force

I’ve been asked by many over the past year as to what my involvement is with W3C WAI, so to kick off this round-up I’ll start with an overview of my own 2017 contribution. I’m an invited expert for the W3C WAI Accessible Platform Architectures (APA) Research Questions Task Force (RQTF), which I appreciate is a bit of a mouthful, so I’ll refer to it as just RQTF for the remainder of the article. The RQTF is a little bit like an advanced scouting party whereby we research current and emerging technologies to determine their accessibility implications. This in turn provides guidance to other groups in W3C WAI as to the key areas that need more formal developments, such as the creation of web standards. The process involves a lot of research: analysing what the current literature has to say on the various topics we explore and creating recommendations for other groups.

With the RQTF commencing just over a year ago, it’s been exciting to join the group at the very beginning, undertake literature reviews to support the group and put forward recommendations. Topics that the RQTF have researched this year include implications for accessible virtual reality, the Internet of Things, web authentication and an update to the current advice on the accessibility of CAPTCHAs, those annoying squiggly characters that can’t easily be read. Interestingly, on that last point, CAPTCHAs are not only largely inaccessible but also no longer help much with security, hence the need to explore the literature so that information such as this can be updated. Currently our findings are being polished up and are likely to be available in early 2018.

For me the RQTF is a perfect fit – it is a great use of my academic research background, it provides an opportunity to read about interesting research taking place around the world and I really enjoy working with others in the group. While it’s still a bit challenging to stay awake long enough to join the 10pm Wednesday teleconference call, an unfortunate time resulting from my UTC+8 time zone, the flexibility around my other work has meant I’ve been able to do it and make a meaningful contribution. Incidentally, if anyone reading this has a flair for research, can dedicate a few hours a week to do some reading and doesn’t mind a regular late-night phone call if you live in the Asia-Pacific region, please let me know as we’re always looking for more people to get involved.

WCAG 2.1

While my own involvement is more focused on the future, there’s a massive change taking place in the present – an update to the Web Content Accessibility Guidelines (WCAG) 2.0 standard that has been the definitive guide on how to make content accessible since 2008. With WCAG 2.0 adopted around the world as part of policy and legislative frameworks, W3C had a difficult decision to make for any future updates: do you create an entirely new standard and break all the existing adoption around WCAG 2.0, or do you keep all of WCAG 2.0 and just make some additional terminology tweaks and guidance to update it? The short answer is that W3C decided to do both. The latter is where WCAG 2.1 comes in, which does indeed retain all of WCAG 2.0 but adds important guidance on how the standard can be applied to the mobile web. The reason this is important is that when WCAG 2.0 was released in December 2008, the idea that a blind person could use a touch screen like the iPhone’s was considered ridiculous. Today, however, both Apple iOS and Google Android contain a wealth of built-in accessibility features. As such, the standard needs an update, and 2017 has seen rapid progress.

While I won’t go into too much detail here as to the specific changes to guidelines and success criteria, the biggest takeaway is that WCAG 2.1 will require that your website is checked for accessibility compliance on a mobile device. In addition, the new standard will provide guidance to mobile app developers as well. In terms of timeframes, there is now a largely ‘feature complete’ draft of WCAG 2.1 available with the final release due mid-2018.
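To make the mobile angle a little more concrete, here’s a minimal sketch of the kind of automated check a developer might run alongside a manual mobile review. It flags a viewport meta tag that disables pinch-to-zoom, a common mobile barrier that works against WCAG’s guidance on resizing and reflowing content. This is my own illustration, not something drawn from the standard or any particular audit tool, and the function name is hypothetical.

```typescript
// Hypothetical check (illustrative only): flag a viewport meta tag that
// suppresses pinch-to-zoom, a common mobile accessibility barrier.
function viewportBlocksZoom(doc: Document): boolean {
  const meta = doc.querySelector<HTMLMetaElement>('meta[name="viewport"]');
  if (!meta) {
    return false; // no viewport meta tag, so zooming isn't being suppressed here
  }
  // Simplistic string matching; real audit tools also parse maximum-scale values.
  return meta.content.toLowerCase().includes("user-scalable=no");
}

// Example usage in the browser:
if (viewportBlocksZoom(document)) {
  console.warn("Viewport meta tag prevents zooming, a mobile accessibility barrier.");
}
```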


Silver

While WCAG 2.1 represents one of the paths towards updating the web accessibility standard, the other path being taken during the year is Silver, also referred to as the Accessibility Guidelines or AG – and for those familiar with the periodic table, you’ll see where the codename Silver came from (Ag is the chemical symbol for silver).

Silver is a highly ambitious approach to updating the guidelines, hence the need for two development streams: updating the existing WCAG 2.0 while also creating something new. The reason it is ambitious is that it is endeavouring to unify many different W3C standards while keeping an eye on emerging technologies and their access implications. The existing standards being rolled into AG include WCAG, the Authoring Tool Accessibility Guidelines (ATAG) and the User Agent Accessibility Guidelines (UAAG). This means that web content, the tools that create content and the applications that present content, such as web browsers and media players, will all be covered by one standard that developers can check their work against for accessibility. Furthermore, the standard will also provide accessibility guidance on other products such as wearables, the Internet of Things, Virtual Reality, Augmented Reality and driverless cars. While the development of Silver is a big job, in my opinion it also makes a lot of sense: it is no longer practical to produce a new web accessibility standard every time a new technology becomes popular, so there needs to be an umbrella standard that ensures people with disabilities everywhere can embrace the benefits that these technologies – plus the ones we don’t know about yet – will offer.

The timeframe for Silver is a little more fluid with optimistic target dates tentatively set for 2020. During 2017 we have seen the Silver Task Force make a lot of progress in determining what is needed in Silver, so it’ll be interesting to see how 2018 goes as the work on the draft standard takes shape.

Cognitive Accessibility Roadmap and Gap Analysis First Public Working Draft

Another great development in 2017 has been the increased work in providing guidance on accessibility in relation to people with cognitive disabilities. One of the main criticisms of WCAG 2.0 is that it falls short in addressing the needs of people with cognitive disabilities. To address this, W3C WAI created the Cognitive and Learning Disabilities Accessibility Task Force, which has been doing a lot of work in 2017 to provide guidance to other working groups relating to the needs of people with cognitive disabilities. Most recently, the Task Force has published a first public Working Draft of the Cognitive Accessibility Roadmap and Gap Analysis. It explores user needs for people with cognitive or learning disabilities and identifies where additional web content authoring guidance is needed to help authors meet those needs. While WCAG 2.1 and Silver have captured most of the headlines this year, when reflecting on the achievements of the year I suspect it’s this work that will prove the most significant development in providing additional support to people with disabilities.

Other updates

In addition, there have also been updates to existing resources and standards, including WAI-ARIA 1.1, Core-AAM 1.1, DPub-ARIA 1.0 and DPub-AAM 1.0 becoming W3C Recommendations, an update to Easy Checks and updates to the Web Accessibility Tutorials. The ARIA updates will provide significant improvements to assistive technology interaction, and both the Easy Checks and Tutorials updates will help people taking their first steps into web accessibility with some guidance on what it’s about and how to perform basic checks.
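To give a flavour of what ARIA looks like in practice, here’s a minimal sketch of a disclosure (show/hide) widget that keeps its aria-expanded state in sync so screen readers can announce whether the panel is open. This is my own illustrative example rather than anything taken from the WAI-ARIA 1.1 specification, and the element IDs are hypothetical placeholders.

```typescript
// Illustrative disclosure widget: the button toggles a panel and keeps
// aria-expanded in sync so assistive technologies announce its state.
// The element IDs below are hypothetical placeholders.
const toggle = document.querySelector<HTMLButtonElement>("#details-toggle");
const panel = document.querySelector<HTMLElement>("#details-panel");

if (toggle && panel) {
  toggle.setAttribute("aria-expanded", "false");
  toggle.setAttribute("aria-controls", panel.id);
  panel.hidden = true;

  toggle.addEventListener("click", () => {
    const isExpanded = toggle.getAttribute("aria-expanded") === "true";
    toggle.setAttribute("aria-expanded", String(!isExpanded));
    panel.hidden = isExpanded;
  });
}
```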

The WAI work listed here is by no means a complete list but does give you some idea on the great things taking place in the international community to help people with disabilities get access to online content along with all the benefits that access provides. Many thanks to all the hard-working people involved in this work and I’m looking forward to continuing my involvement in 2018.

Google Maps gets new crowdsourcing feature to improve accessibility

Google is building on its initial efforts to provide accessibility features in Google Maps for wheelchair users by introducing crowdsourcing features. These allow Maps users to add accessibility information about their favourite places.

In a recent article written by Claudia Cahalane for AbilityNet, it is explained that “Google is asking the public – in particular its ‘local guides’ – to add accessibility information to Google Maps. It’s hoping that visitors to restaurants, theatres, offices and lots of other venues, will add info on whether entrances, toilets and spaces are suitable for wheelchair users.”

While wheelchair users are the primary group to benefit from the new feature, users of the Maps app can add other information, such as whether or not a venue is noisy, which is also likely to be helpful for people with a hearing impairment.

To add your accessibility information on an Android device:

  1. Open the Google Maps app
  2. Select the Settings icon in the top-left corner or swipe left-to-right
  3. Select Contributions
  4. Select Accessibility

The inclusion of the feature marks a significant expansion of the wheelchair maps accessibility information which was previously limited to Maps users in the United States.