A recent exchange on Twitter has motivated me to write about the contribution published surveys on web site accessibility make towards understanding and addressing the problems that hold back web accessibility. I’ve read, and continue to read, many, many papers presenting the results of surveys of web sites, and I think we need surveys to look beyond just the data and instead delve more deeply into why the results are as they are. We’ve gone way beyond the point where a paper simply reporting that a study of x web sites from y sector revealed ‘disappointing’ levels of accessibility provides anything more than a minor contribution. Surveys need to look at process not product.
In the early days of web accessibility, post WCAG 1.0 release, published surveys of the accessibility of large numbers of web sites were relatively rare (I’m distinguishing these from reviews of a single site conducted by or on behalf of the development team, with the specific aim of identifying and repairing barriers present). So whenever a new survey emerged, it usually provided informative data on levels of conformance against WCAG 1.0, which took time to achieve any significant impact on the web design industry. The data allowed us to see how particular sectors were faring, and which checkpoints were most frequently not met.
The publicity surrounding a published accessibility survey that presented data showing how poorly sites were dealing with WCAG conformance could also be claimed to raise awareness of web accessibility in general, and more specifically shame the organisations in question into doing something about the barriers present on their site. The former effect probably did take place, although I’d like to see concrete evidence that surveys actually have a positive effect on the organisations whose sites were reviewed.
(Indeed, there have been concerns that surveys may have a negative impact on ‘usable accessibility.’ If the methodology used focuses excessively on a technical measure of accessibility that becomes a highly public ‘official’ ranking of each site’s performance – with rewards for finishing high up a ranking, there would be understandable pressures for site developers to design to satisfy the surveyors and not disabled people.)
Many published surveys have had severe limitations in methodology and scope – frequently conducted using automated tools only, using a subset of WCAG, and often of the Home page only; and very, very rarely have researchers extended their survey to contact each site’s organisation for follow-up data (a notable exception was a study by Ronald Milliman in 2002). Several published surveys have appeared as academic papers in a wide variety of journals (not just computing/HCI). As the topic was initially relatively uncovered in academic literature, an investigation into the accessibility of web sites in a particular sector – be it higher education, government, e-commerce, tourism, or whatever – made for an attractive publication topic. I should know, I wrote a couple! But at least in the early days we had some data to help say ‘hey, we all need to do a bit better here’.
Over time, we’ve had some very high impact surveys, such as the UK DRC Formal Investigation into web accessibility, published in 2004, which combined automated testing with manual inspections, evaluations with disabled people, and interviews with web developers. Just recently, at the Accessing Higher Ground 2009 conference, Terrill Thompson presented data from a longitudinal study that compared progress towards accessible content between organisations who had received technical support and those who hadn’t.
Additionally, publications have presented accessibility survey data as part of other valuable research activity, for example when testing new methodologies and tools for large scale accessibility evaluation (important in the real world for regulatory bodies as well as organisations with many web pages), and testing how effective particular evaluation methodologies could be in minimising false positives and false negatives (check the proceedings of W4A for papers by researchers such as Markel Vigo, Giorgio Brajnik and Bambang Parmanto).
Putting aside limitations of scope and evaluation methodology, surveys have not been so good at focusing on content providers and the content provision circumstances. This is to some extent understandable, as it’s easier to run an automated tool across multiple sites than identify, contact and gather data from all the content providers for each site.
Some exceptions are mentioned above – to add to these, my thesis focused on the impact of accessibility audits on the recipient organisations, and my research yielded a limited amount of information on how organisations responded. A few surveys have specifically looked at perceptions of, and attitudes to, web accessibility (for example Lazar et al’s 2004 paper on Improving web accessibility: a study of webmaster perceptions and Bloor Research’s 2009 survey), but we still don’t know very much about the organisational reasons as to why accessibility of a particular web site is not as good as it could be.
As time goes by, therefore, the impact of a survey that presents data on web accessibility diminishes, unless it adds something new to our understanding of the problem. In 2009, it’s not enough to simply claim that results are ‘disappointing’, and that web content authors must ‘do better’. WCAG has been here for 10 years, so it’s not as if we have no best practice; and people and resources promoting and supporting web accessibility are easy to find. A failure to acknowledge in a survey report that this has been a problem for many years does a disservice to everyone who has been advancing the cause of web accessibility (even if it also gives another stark reminder that there’s much work to do).
So if you conduct a survey of web sites and find accessibility barriers, don’t stop at reporting conformance levels. Find out why the barriers exist. Is it a lack of awareness or training amongst the providers of the content you surveyed? Is it a lack of involvement of disabled people in the design process? Is it a lack of resources, of money, a lack of will, or a lack of prominence of accessibility in an organisation’s business practice and philosophy? Is it sub-standard authoring tools, content management systems, or quality assurance systems? Is there another reason?
And whatever you find out, please share it with us!
5 thoughts on “Web accessibility surveys – results are frequently disappointing”
In case we are not depressed enough yet, here is a report of a study suggesting that web accessibility may have worsened between 2004 and 2008: http://www.springerlink.com/content/l23x52751j3v1557/?p=c1456fad679f4d79bf353d68ae78e12d&pi=5
I hope my blog article is not too depressing 🙂
Many thanks for the link to that survey paper – which I’ve just finished reading. The data it presents is useful in benchmarking progress (or not), and at least the authors give some thought to why things seem to have got worse. But this is speculation on their part, as there’s still no contact with the organisations concerned, nor any more in-depth analysis of the technical nature of the sites or the profile of content providers.
A better (and evidence-based) understanding of why things still aren’t as they should be is where we need to be focusing our efforts; and that should also apply to longitudinal/benchmarking studies.
When I regained my eyesight I was determined to figure out a way to solve these problems and to teach people how to make content accessible to the blind. It is not enough that we have these regulations; what is needed is for the average designer to actually experience what the blind experience when they enter their website or read their document. Only then can we bridge the gap of understanding.
But that is not enough, we need the tools to make content accessible and we need to make those tools so that anyone can make content accessible to the blind. Because the experts have failed miserably!
So I just created a format for accessibility which AFB TECH, the technology division of the American Foundation for the Blind, says is a “Raising the floor technology,” where I make content accessible to the simplest text to speech engine and the standard is for ALL of the content to be accessible to the blind.
According to Microsoft, in their latest book on Accessibility, it is impossible. So much for Microsoft. They have created a very complex system for making content accessible to the blind but it is an indirect process and it is so complicated that webmasters cannot figure it out!
The old standards were also so difficult that most webmasters could not make content accessible to the blind.
Then you have the placeholder text problem, which passes accessibility tests but for the blind creates content that is absolutely useless. Web design programs were able to claim that they created Section 508 compliant content because they had placeholder text that described a button. But when used in practice the web designer did not actually put in a real description of the button, and so now the internet is full of websites that repeat “Button 1” over and over again, which is absolutely useless to the blind.
So it is time to stop all this nonsense and make content accessible to the blind. So I developed a word processing program that did just that. I am approaching the software companies and I believe if this were implemented in cell phones we could bring accessibility to the entire world. I have laid it out to work in 126 languages so far and I am not limited by language.
Given the recent announcement by Cevit in India, where they are moving towards 7-bit programming to support Indian languages on phones and are going to distribute 50 million phones to the people of India, I was thinking that they could expand that to all of India, because my format is not only for the blind – it will work for the illiterate.
James G. Pepper
Does anyone have any leads on ‘accessible web survey tools’? I struggle (and have for the past 2 years) to find a web survey tool that people who use assistive technology can use – either as recipients of a survey, or to develop surveys – and that is robust enough for researchers (data analysis).