VisAbility, one of Australia’s service providers for people who are blind or have low vision, recently launched a new resource titled My Vision, My Choice to help people with a vision disability navigate their use of the National Disability Insurance Scheme (NDIS).
The official VisAbility press release describes the My Vision My Choice website as “…a comprehensive online one-stop-shop, where you can access all you need to know about the NDIS if you live with vision impairment or blindness, are a family member, carer or provider. [the website] successfully brings together everything a person with vision impairment needs to make an informed choice about their NDIS plan in one place.”
The website contains a large amount of information including:
Access to the NDIS planning booklet
A comprehensive list of service provider roles
Personal stories and podcasts.
As the project progressed, I was involved in creating and implementing a community consultation programme to determine what content people would need from the resource. It was great to help design and run this process, which included focus groups and an online survey.
Since the launch of the resource there’s been great feedback from the community that the website is effectively meeting the needs of people with vision disabilities as they continue to work their way through the NDIS process.
The OZeWAI 2018 conference, hosted by the ABC in Sydney, has now ended and a great time was had by all. The three-day event is held every year as Australia’s dedicated national conference for digital access specialists and is renowned for its great community atmosphere and presentations, with this year being no exception. Here are some of my personal highlights from the three days.
The keynote, titled No Rights, No Responsibility, was delivered remotely by Nic Steen from Knowbility. The speaker made the point that it is important to ensure that people with disability are included in digital access processes, and that training is critical to making sure effective digital access is achieved.
Another great presentation was Here comes WCAG 2.1! by Amanda Mace from Web Key IT and Julie Grundy from Intopia. There was some great discussion of the new WCAG 2.1 Success Criteria, explaining the importance of things like reflow and ensuring that content on mobile devices works effectively for people who may not be able to move their device to activate various sensors. With this year marking the WCAG 2.1 release, it was a great introduction to how the new extensions build on the legacy WCAG 2.0 requirements.
Just before the lunch break on the first day it was my turn to present, discussing the W3C work on inaccessible CAPTCHA. In the presentation I talked about how traditional CAPTCHAs such as the use of text on bitmapped images and audio-based CAPTCHAs are not only inaccessible but also not secure. I also provided an update on the advice our group has been putting together as part of the CAPTCHA advisory note. It was great to have a chance to share the information.
Another session that I really enjoyed was Andrew Downie’s presentation titled The Graphics Divide – When the alt Attribute does not Suffice. I’m frequently asked in workshops about best practice for alternative text, and Andrew illustrated the point well using popular landmarks and providing relevant text descriptions. The key takeaway from his talk is that it’s relatively easy to use alternative text for WCAG compliance, but that doesn’t mean it’s accessible.
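That compliance-versus-accessibility gap can be sketched with a small example of my own (not from Andrew’s presentation): an automated check can only tell you whether an alt attribute exists, not whether its text is actually meaningful to a screen reader user. The snippet below, using Python’s standard-library HTML parser with hypothetical image names, flags only images where alt is missing entirely, since alt="" is legitimate for decorative images.

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Collect <img> tags whose alt attribute is missing altogether."""
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # alt="" is valid for decorative images, so only a *missing*
            # alt attribute is flagged here.
            if attrs.get("alt") is None:
                self.flagged.append(attrs.get("src", "(no src)"))

checker = AltChecker()
checker.feed(
    '<img src="opera-house.jpg" alt="Sydney Opera House at dusk">'
    '<img src="divider.png" alt="">'
    '<img src="uluru.jpg">'
)
print(checker.flagged)  # only uluru.jpg lacks an alt attribute
```

Note that an image with alt="photo" would sail through this check, and through most automated WCAG tools, while telling a blind user nothing, which is precisely Andrew’s point about the limits of compliance.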
A presenter I always enjoy is Greg Alchin, and he did a great job in discussing the importance of ePub. In a PDF-obsessed world, Greg made the point well that there are a lot of tools and readers available to make the most of the ePub format, which is essentially web pages compiled into a document format. While Greg acknowledged that the lack of a WYSIWYG editor that works well with ePub remains a limitation, it’s encouraging to hear that there are plans for ePub support in the Office suite in the future, which would go a long way towards addressing this issue.
On the second day I featured in a second presentation hosted by Sean Murphy from Cisco Systems, in which we discussed the accessibility implications of Artificial Intelligence and the Internet of Things. When Sean invited me to join him, he said it’d be great to structure it like a fireside chat, so we agreed that he would bring the questions and I would bring the fire. As such, I had my laptop next to me playing a YouTube video of ‘HD fireplace with crackle’ while we talked. Sean made several great points about how the quality of data heavily determines the effectiveness of AI, and how issues such as security still have a long way to go. I also talked about my Curtin research as it relates to the IoT needs of students in tertiary education.
The last presentation that really had an impact on me was Making Chatbots Accessible by Ross Mullen. Until this presentation I had always assumed that chatbots were largely a no-go zone for accessibility, but Ross explained that, with effort, both conversational support and accessibility can be achieved.
In addition to the presentations, it was also great to catch up with lots of familiar faces at the breaks and conference dinners. I also really enjoyed making new friends and meeting many of the alumni from the Professional Certificate in Web Accessibility course.
The W3C Web Accessibility Initiative (WAI) has released a new resource titled The Business Case for Digital Accessibility, designed to highlight the rationale for addressing digital access issues within an organisational strategic context.
According to the official announcement e-mailed to the WAI Interest Group mailing list, the primary purpose of the resource is to provide guidance on the “…direct and indirect benefits of accessibility, and the risks of not addressing accessibility adequately.”
Features of the resource include guidance on the following topics:
Enhance Your Brand
Extend Market Reach
Minimize Legal Risk
The resource also provides case studies and examples that demonstrate how continued investment in accessibility is good for organisations.
The Business Case for Digital Accessibility resource can be found at https://www.w3.org/WAI/business-case/. Many thanks to the W3C WAI Education and Outreach Working Group (EOWG) for their work on such a great resource.
The road to the Windows 10 October 2018 update has been a hard one for Microsoft, as the rollout had to be postponed due to a series of issues including the unintentional deletion of personal files. However, from an accessibility perspective, the update is great news: the built-in Narrator screen reader has received significant improvements, both in features and usability. Happily, my computer survived the update before Microsoft pulled it, providing a great opportunity to get acquainted with the significantly improved screen reader.
Keyboard shortcuts are now more familiar
When Narrator is started with the usual Windows + CTRL + Enter command, the first thing that now greets you is a message that the keyboard shortcuts have changed.
While this means that existing Narrator users will have to learn a new set of shortcut keys, for users of more popular screen readers such as JAWS and NVDA (which account for most blind users) the Narrator commands have become much more intuitive. This is a great move that streamlines the experience for people wanting to use Narrator, whether as an ad-hoc or permanent screen reader solution.
Narrator Quick Start Tutorial now included
When I realised that the Narrator commands I was used to no longer worked, I was initially a bit worried about the process of relearning everything. However, it turned out Microsoft had already considered this with the inclusion of a clever Quick Start tutorial wizard that breaks the learning process down into a few commands at a time. This is useful for everyone, but it’s especially useful for users new to screen readers. The tutorial wizard features about a dozen screens, each providing a sandboxed environment to learn some new commands and try them out before progressing to the next section.
The Quick Start screens are as follows:
Welcome: a screen that explains how the Quick Start guide works.
Explore your keyboard: this page provides an opportunity for input learning where you can try out a key and hear Narrator explain what it does.
Scan mode: explains how the arrow keys can be used to scan around the page.
Reading words and characters: explains how Narrator can read out individual words or characters for proofing and editing.
Headings: provides a window with sample headings to move around using the ‘H’ key.
Landmarks: explains how landmarks can be useful to move between navigation, main content and search options.
Entering text: explains how Scan Mode is disabled when editing text and provides an opportunity to try it out.
Buttons: explains how Narrator can interact with checkboxes and other controls.
The Narrator key: explains the significance of the Narrator key, which, as with other Windows screen readers, can be set to either CAPS LOCK or Insert.
Important Narrator commands: provides an overview of Narrator commands typically used in everyday tasks.
Try it out: provides an opportunity to try using the commands learnt through the Quick Start guide on a webpage.
Navigating Apps: highlights some general keyboard commands that are not necessarily Narrator-specific but likely to be useful.
Guide summary: an overview of the key points covered in the guide.
The great thing about the Quick Start guide is that most of the screens not only explain the functions but also let you try out the commands while remaining inside the tutorial wizard. This means that once users are comfortable with a command, they can activate the Next button and move on to the next feature. While other tutorials, such as Android’s TalkBack tutorial, are also effective in providing a practice environment away from direct interaction, Narrator has the bonus of not moving on until the user is ready to do so.
In terms of improvements to Narrator itself, I’ve noticed that it seems to work much better in picking up landmarks along with a faster and easier web browsing experience. It may be the case that such features were in the older version but were difficult to access with the keyboard commands, but the updated Narrator is certainly a step above in ease and usability compared to Windows 10 prior to the October 2018 update.
Is it better than JAWS or NVDA?
The big question likely to be asked by many is whether Narrator has now evolved to a point where it can be used in place of a commercial screen reader such as JAWS, or the excellent open-source NVDA, on Windows. In my opinion, Narrator has finally come of age, and for many blind and low-vision users the combination of familiar keyboard commands and an excellent tutorial may be enough for casual everyday use. That said, users who rely on a screen reader for critical work, such as research or interacting with technical information, will find Narrator lacking, and despite the improvements the update is unlikely to threaten the popularity of existing screen readers. Given that Narrator is already built into Windows and the keyboard commands are now much more familiar, I’d recommend trying it out when your computer receives the October update, but keep your usual screen reader handy, as you’ll likely need to return to it for heavy-duty computer use. Where I do think Narrator will be especially useful is for people recently diagnosed with an eye condition: they can use the Quick Start guide to get familiar with a screen reader, and the keyboard commands they learn are now largely transferable to other screen readers.
Last week I was given the great privilege of supporting the Australian Taxation Office (ATO) by continuing to upskill its staff through the delivery of three workshops across three cities over three days.
The workshops were designed in consultation with ATO staff to support their internal marketing and communications, IT, design and content teams, focusing initially on the personal journey of disability, then expanding to look at the user experience more broadly. The focus then shifted to how content can be prepared to ensure effective messaging for people with disability through accessible web, document and game content.
The three workshops, held in Brisbane, Canberra and Melbourne respectively, featured lots of great discussion about the experience of using a screen reader for the first time, the importance of captioned video content, and how to create an accessible game using multiple control mechanisms. As a result of the workshops, the teams will continue to maximise accessibility from marketing and communications, IT, design and publishing perspectives, so that the ATO can continue to make its community engagement, and in particular ato.gov.au, as effective as possible.
The opportunity to deliver the three workshops resulted from the ATO’s earlier commitment to internally transition to the new WCAG 2.1 standard. It is hoped that other government departments will follow the ATO’s lead by strengthening their messaging processes for the mobile web and beyond.