Recently, I’ve been doing work looking at accessibility implications of electronic assessment (e-assessment for short). E-assessment covers any use of electronic means, often a web interface, to ask questions of and gather information or evidence from a user in order to provide some form of assessment of their levels of knowledge, skills or competencies in a particular subject or activity.
From a technical perspective, this is related to electronic survey accessibility, which in turn could easily be seen as a real-world instance of accessible web form design plus accessible navigation, and therefore covered by a subset of WCAG 2.0. However, it's not as straightforward as that.
I’d initially thought e-assessment was a tool mainly used in colleges and universities – my bias being partly due to working in that sector, and also my impression that the tertiary education community seemed to be responsible for much of the research and development into e-assessment, such as that funded in the UK by the Joint Information Systems Committee (JISC). But I’m now aware that there’s a much, much wider scope of use – in schools, by professional standards bodies, organisations assessing employee capabilities, lifelong learning.
Despite this diversity of use, there are common constraints which can affect how accessibility of e-assessment is approached.
Balancing accessibility and fair competency assessment
E-assessment is about objectively measuring whether someone has sufficient knowledge or skill to meet a certain level of attainment. All candidates should have an equal chance of being objectively assessed, and accessibility barriers should not obstruct a disabled candidate from being able to demonstrate their competency. This can lead to problems when trying to figure out, for example, how to provide appropriate text alternatives for a graphic that forms part of a question, or when choosing between drag-and-drop and radio buttons as a question type.
The e-assessment author should know what knowledge and skill is being assessed, so that should be at the forefront when thinking about accessibility. Are someone's powers of visual interpretation of a photo, diagram or video being assessed? If so, is it reasonable to exclude someone who can't see by not describing the graphic or not providing audio description for a video? Is manual dexterity a critical part of the skill being assessed? If so, is it reasonable to exclude someone who has a tremor, or who is unable to use their hands, by using a drag-and-drop style of answer selection? These are questions that have to be answered by the assessment author before effective accessibility solutions can be applied.
The problem occurs when the method of assessment requires a capability that isn't necessary for the skill being assessed. So, in the above example, keyboard-inaccessible drag-and-drop questions are rarely justifiable.
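To illustrate, a selection question that might be authored as drag-and-drop can usually be offered equivalently as a group of radio buttons, which is operable by keyboard alone and exposed to assistive technology as plain text. A minimal sketch (the question wording and options here are invented for illustration):

```html
<!-- The same "pick one answer" interaction as a drag-and-drop target,
     but rendered as a fieldset of radio buttons: navigable with the
     arrow keys, and the legend and labels are read by screen readers. -->
<fieldset>
  <legend>Which of the following is a primary colour?</legend>
  <label><input type="radio" name="q1" value="a"> Green</label>
  <label><input type="radio" name="q1" value="b"> Blue</label>
  <label><input type="radio" name="q1" value="c"> Orange</label>
</fieldset>
```

Unless the dexterity involved in dragging is itself part of what is being assessed, nothing about the competency being measured is lost in this form.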
The assessment environment also presents accessibility challenges. In theory, e-assessment presents many accessibility wins, by supporting flexibility of delivery – alternative formats become easier to generate based on personal preferences – and easing maintenance tasks. In practice, flexibility can be difficult to achieve in a controlled environment. There is the question of the extent to which the interface used to present the e-assessment to candidates is accessible. Can a disabled candidate navigate through the questions, understand each question, and select and input an answer without undue difficulty? This should be assessed, and any issues acted upon, in advance.
But, also, will a candidate who needs a particular assistive technology or accessibility solution be able to use their own computer and AT? Or will they have to become familiar, potentially at short notice, with another AT that has been provided for them and might not be exactly suitable for their needs? The latter situation means a disadvantage for the disabled person being assessed. How do you provide the assessment environment – which might be a special locked-down browser – with sufficient accessibility support? Can you justify refusing to allow someone to bring and use their own computer and AT on the grounds of fairness to others taking the assessment?
The inaccessibility of the assessment environment was the central focus of the Latif vs PMI court case, where a ruling found in favour of a blind person claiming discrimination by a professional association.
We’ve been doing some work looking at the extent to which e-assessment software supports accessible assessment authoring, using W3C ATAG as a reference. This work highlighted some of the potential issues that an author might unwittingly introduce through insufficient or obscure accessibility prompting by the authoring tool. These issues can be managed in the short term by author training and support (covering general accessible design and the specific issues surrounding the authoring tool they use) and a suitable quality assurance process before assessments are presented to candidates. Longer term, of course, we need improved authoring tools.
But, at a recent event on e-assessment and accessibility held by Becta, I became more aware of the complex chain of organisations involved in the supply and delivery of an electronic assessment. At a university, a lecturer is likely to be responsible for creating and delivering their own assessments; but elsewhere there are organisations which are responsible for managing and validating the assessment process. They, or a third party, might author assessments which are then provided to schools and other organisations to administer to students.
So if the original authors are not aware of accessibility issues, there is a long chain down which a request for accessibility information – or adjustments – must pass; and there is no guarantee the request will get to the end of the chain. More informed procurement processes may help to ensure that organisations ask for, and receive, e-accessibility in an appropriate way. But in the short term, people administering e-assessments must make doubly sure in advance that they are aware of any potential accessibility barriers present, and take steps to manage their impact.
The extreme circumstances under which e-assessment takes place mean that developments like better accessibility profiling of users and assessments will hopefully have a positive impact, but a contextual and pragmatic approach to accessibility is essential. Thankfully there are people on the case helping to raise awareness, including Becta, and also Techdis, with their guidelines for accessible assessments.