Web Accessibility ♥ Python
PyCon US 2021

Curlylint


Curlylint is an experimental linter for HTML templates, with a particular focus on flagging accessibility issues.
Traditional accessibility testing tools rely on a web browser environment, which makes comprehensive automated checks possible, but is far removed from the moment developers author their templates.


Here’s what it can flag – linting a basic page.html Django template where a lang attribute is missing:

Screenshot of running curlylint on a template. Output: 1 error reported, with the error message.

When linting a template:

  1. Curlylint loads and parses the template source.
  2. It then runs its rules on the template syntax and HTML.
  3. Any errors are reported back as output.
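
As an illustrative sketch of those three steps – not curlylint's actual implementation, which uses its own template-aware parser – here is a toy single-rule linter built on the standard library's HTMLParser, flagging a missing lang attribute:

```python
from html.parser import HTMLParser


class LangAttributeChecker(HTMLParser):
    """Toy single-rule linter: flag <html> tags missing a lang attribute."""

    def __init__(self):
        super().__init__()
        self.errors = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the opening tag.
        if tag == "html" and "lang" not in dict(attrs):
            self.errors.append("html_has_lang: <html> is missing a lang attribute")


def lint(template_source):
    # 1. Load and parse the template source (Django tags pass through as text).
    checker = LangAttributeChecker()
    checker.feed(template_source)
    # 2. Rules ran during parsing; 3. errors are reported back as output.
    return checker.errors


print(lint("<html><body>{% block content %}{% endblock %}</body></html>"))
# → ['html_has_lang: <html> is missing a lang attribute']
```

A real linter additionally needs an AST of the template syntax itself, which is exactly why curlylint ships its own parser.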

Curlylint Demo


Try it for yourself by installing curlylint locally (pip install curlylint), or with the live demo – use one of the predefined examples, or try it out on your own templates.

A "parse only" mode doesn't lint, and just checks for syntax errors before exiting.

The rules include:

  • Disallows using Django's convenience form rendering helpers, for which the markup isn't screen-reader-friendly.
  • <html> elements must have a lang attribute, using a BCP 47 language tag.
  • Elements with ARIA roles must use a valid, non-abstract ARIA role.
  • <img> elements must have an alt attribute, either with meaningful text, or an empty string for decorative images.
  • The viewport meta tag should not use user-scalable=no, and maximum-scale should be 2 or above, so end users can zoom.
  • Enforces that autofocus is not used on inputs, as autofocusing elements can cause usability issues for sighted and non-sighted users.
  • Prevents using positive tabindex values, which are very easy to misuse, with problematic consequences for keyboard users.

More rules and configuration options are covered in the documentation.
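
These rules are configured per project. Based on curlylint's documented pyproject.toml support, a configuration might look like the following – rule names and values here are an assumption from the docs, so check them against your curlylint version:

```toml
[tool.curlylint.rules]
# Each key enables one rule; values configure its behavior.
aria_role = true
django_forms_rendering = true
html_has_lang = "en-GB"   # or `true` to accept any BCP 47 language tag
image_alt = true
meta_viewport = true
no_autofocus = true
tabindex_no_positive = true
```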

Why linting templates


There are two primary reasons:

  1. The sooner errors are caught, the easier they are to fix.
  2. Static analysis as-you-code is very low-friction for developers.

This concept of the “cost of fixing errors” is often represented by a chart of the “relative cost of fixing defects” (© NIST) depending on when they are introduced in a project:

Relative cost of fixing defects: Design = 1, Implementation = 6.5, Testing = 15, Maintenance = 100

Static analysis is a common approach to this problem – coding errors can't be caught any sooner than while developers are writing code, and friction is lowest when feedback arrives as code is authored, rather than in a separate environment.

More generally, this is part of a testing methodology called “Shift Left”, moving the quality assurance (QA) effort towards earlier project phases. This is particularly relevant for accessibility as a field, which historically was primarily the remit of testers.

Another clear advantage of linting templates is our ability to flag not just HTML issues, but also issues with template syntax. For example, Django’s as_table can lead to hard-to-navigate forms for screen reader users – in this case, feedback at the template level will be much more actionable (“don’t use as_table”, rather than “avoid using tables for forms layout”).
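
For illustration, here is the kind of pattern such a rule targets in a Django template, alongside a more screen-reader-friendly alternative rendering each field explicitly (the form variable name is hypothetical):

```html
{# Flagged: as_table lays the form out with <table> markup, #}
{# which screen readers announce as a data table. #}
<table>{{ form.as_table }}</table>

{# Preferred: explicit per-field markup, no layout table. #}
{% for field in form %}
  <div>
    {{ field.label_tag }} {{ field }} {{ field.errors }}
  </div>
{% endfor %}
```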

Drawbacks of linting templates


Since linters rely on static analysis, they can only find issues that are apparent without running the code.
This can lead to a false sense of security for developers – who may assume their testing tools are enough, even though those tools can only catch a small proportion of issues.

This is a common issue in the field of accessibility, where even the most powerful browser-based automated tools only catch 30 to 40% of issues.


In the case of Curlylint, another clear drawback is the cost of putting together those static analysis features, relative to their usefulness.
Curlylint has to use a custom template parser, as it needs an AST representation of both the template syntax and the HTML code – built-in parsers of various template languages tend to treat HTML as arbitrary strings.

This is in a sense an opportunity cost – an investment in browser-based testing tools has much more potential to find advanced issues, at a lower implementation cost.

Inspiration & alternatives for Curlylint


Curlylint started as a fork of jinjalint, but the main inspiration for the project comes from the React world with eslint-plugin-jsx-a11y, a static AST checker for accessibility rules on JSX elements.

For serious testing, the v.Nu HTML5 validator can be integrated with Django. For in-browser testing, Axe is the most well-established open-source option. Axe's makers, Deque, are also working on a similar Axe linter concept, which is unfortunately closed-source.

Kontrasto


Kontrasto is an early attempt at reaching the holy grail of automated color contrast enforcement, ridding the world of unreadable text on image backgrounds once and for all. Here is the problem statement as an image worth a thousand words:

Text on a background that is hard to read: “Can you read this text”

This is poor contrast by all standards, and particularly for WCAG, the most common accessibility compliance target. We can do better.

kontrasto Demo


Let’s see how Kontrasto processes images. You can try this here, or by installing Kontrasto locally (pip install kontrasto) and using its command-line interface.

In this demo, we display four contrast enhancements with different capabilities across three areas of each image.

How this works


Kontrasto extracts the dominant color of the image, and then calculates that color’s contrast ratio against dark and light alternatives. We can then select the alternative that has the highest contrast.
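
That selection step can be expressed directly with the WCAG 2.0 formulas – a minimal sketch, not Kontrasto's actual code:

```python
def relative_luminance(rgb):
    """WCAG 2.0 relative luminance of an sRGB color, channels in 0-255."""
    def linearize(channel):
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def contrast_ratio(color_a, color_b):
    """WCAG 2.0 contrast ratio, ranging from 1:1 up to 21:1."""
    la, lb = relative_luminance(color_a), relative_luminance(color_b)
    lighter, darker = max(la, lb), min(la, lb)
    return (lighter + 0.05) / (darker + 0.05)


def best_text_color(dominant_background):
    """Pick whichever of black or white text contrasts most with the background."""
    candidates = ((0, 0, 0), (255, 255, 255))
    return max(candidates, key=lambda text: contrast_ratio(text, dominant_background))


print(best_text_color((40, 40, 60)))  # dark background → white text
```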

To make this more suitable for real-world websites, the contrast calculation happens twice: once server-side, so users get the best possible contrast on page load, and then once in the browser, so the calculation only samples image pixels directly under the text.
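
The browser-side refinement can be sketched as follows – a hypothetical helper that averages only the pixels under the text's bounding box (Kontrasto's real sampling is more sophisticated):

```python
def dominant_color(pixels, text_box):
    """Average the pixels under the text's bounding box.

    pixels: 2D list of (r, g, b) tuples, indexed as pixels[y][x].
    text_box: (x0, y0, x1, y1) region covered by the text.
    """
    x0, y0, x1, y1 = text_box
    region = [pixels[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    # Per-channel integer average over the sampled region.
    return tuple(sum(channel) // len(region) for channel in zip(*region))


# A 2x2 "image": left column black, right column white.
image = [
    [(0, 0, 0), (255, 255, 255)],
    [(0, 0, 0), (255, 255, 255)],
]
print(dominant_color(image, (0, 0, 1, 2)))  # text over left column → (0, 0, 0)
```

The resulting color can then be fed to the contrast calculation to pick the text color for that exact region.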

For the best results possible, we also implement both the WCAG 2.0 contrast ratio score, and the experimental WCAG 3.0 contrast score, based on more modern research on color perception.

Drawbacks of automated contrast checks


The main issue with automating color contrast selection is that this is very much a problem of design constraints – as such, there is no one-size-fits-all solution.

Here are all of the parameters the text contrast depends on:

  • Text position on the image (depends on the browser viewport width)
  • Image composition where the text appears over the image (highly specific to the image asset being used)
  • Text color
  • Font size of the text
  • Font weight of the text
  • Base thickness of the font family

Most of those parameters will need to be fixed to a certain value in order to calculate contrast. And once contrast is calculated, any of those parameters could be equally worth changing to improve the contrast!

Here is the proposed calculation for WCAG 3.0, which surfaces some of those parameters.

Inspiration & alternatives for Kontrasto
