
Draft 1a: Navigability Use Cases (Pete Brunet, author)

  Note: Navigability will perhaps need to be addressed at multiple levels

Assistive Technology (AT) users need to be able to navigate within sub-components of documents containing specialized content such as math, music, or chemical markup. Typically these specialized components have content that needs to be “focused” on at different levels of granularity, e.g. a numerator within a numerator, an expression, a term, etc.

Question: Are there terms, in math for example, that can be used to define each level of granularity? If not, is it sufficient to just increment/decrement the level?

Within each level, functions are needed that, in response to AT commands, return the following “items” at that level of granularity (one possible shape for such an interface is sketched after the list):

  1. first/last item on a line
  2. first/last item within the next higher level of granularity
  3. first/last item in the document
  4. previous/current/next item
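
One possible shape for such an interface is sketched below in TypeScript. Everything here is an illustrative assumption (the names NavigableHandler, GranularityLevel, NavItem, and the method names are not taken from any existing specification); the sketch is only meant to make the list above concrete.

  // Hypothetical sketch; all names are illustrative assumptions.

  // A level of granularity: either named levels (if terms such as
  // "expression" or "term" can be agreed on) or a simple depth that
  // the AT increments/decrements.
  type GranularityLevel = number;

  // A navigable "item" returned to the AT, carrying enough information
  // for the AT to render it via TTS or Braille.
  interface NavItem {
    text: string;              // plain-text rendering of the item
    role: string;              // e.g. "math", "numerator", "chord"
    level: GranularityLevel;   // granularity level the item belongs to
  }

  interface NavigableHandler {
    // 1. first/last item on a line
    firstItemOnLine(): NavItem | null;
    lastItemOnLine(): NavItem | null;

    // 2. first/last item within the next higher level of granularity
    firstItemInParent(): NavItem | null;
    lastItemInParent(): NavItem | null;

    // 3. first/last item in the document
    firstItemInDocument(): NavItem | null;
    lastItemInDocument(): NavItem | null;

    // 4. previous/current/next item at the current level of granularity
    previousItem(): NavItem | null;
    currentItem(): NavItem | null;
    nextItem(): NavItem | null;
  }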

There are two scenarios to consider: a read-only scenario and one where the user is editing the document.

There are three system components that need to interact: the user agent (e.g. a browser), the AT, and the plugin/handler.

In the read-only case, the AT responds to some sort of “focus” change events and, depending on the “role” of what received focus, fetches the a11y info pertinent to that role and then formats/outputs a response tailored to an AT user, e.g. TTS/Braille. In the case of specialized content, the AT needs to use a handler, because the AT doesn't know how to deal with the specialized content.
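
A minimal sketch of that read-only flow, building on the NavigableHandler interface above, might look like the following; AccessibleNode, ScreenReader, HandlerRegistry, and onFocusChange are assumed names for illustration only.

  // Hypothetical read-only flow; every name here is an illustrative assumption.
  interface AccessibleNode { role: string; name?: string; }

  type OutputMode = "tts" | "braille";

  interface ScreenReader {
    outputMode: OutputMode;
    output(text: string, mode: OutputMode): void;
  }

  interface HandlerRegistry {
    // Returns a handler (see NavigableHandler above) if one is registered
    // for the given role, e.g. "math".
    getHandlerFor(role: string): NavigableHandler | undefined;
  }

  function onFocusChange(node: AccessibleNode, at: ScreenReader,
                         handlers: HandlerRegistry): void {
    const handler = handlers.getHandlerFor(node.role);
    if (handler) {
      // Specialized content: the AT cannot interpret it directly, so it
      // asks the handler for the current item and renders that.
      const item = handler.currentItem();
      at.output(item ? item.text : "", at.outputMode);
    } else {
      // Ordinary content: the AT formats the node's own a11y info.
      at.output(node.name ?? "", at.outputMode);
    }
  }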

The user will want UI commands for each of the items listed above, plus commands to move to a higher or lower level of granularity.

In the case of editable content there may also be a desire for separate cursors, e.g. one that remains at the POR (the point of regard, i.e. the caret when editing) and one that moves around for review purposes.
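
One minimal way such a dual-cursor arrangement could be represented, reusing the hypothetical NavItem type from the sketch above:

  // Hypothetical dual-cursor state for editable content; names are illustrative.
  interface ReviewState {
    // Point of regard: follows the caret while the user is editing.
    porCursor: NavItem | null;
    // Review cursor: moved by AT review commands without disturbing the caret.
    reviewCursor: NavItem | null;
  }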

The AT will already have UI input commands for most of the above functions, but probably not for changing to higher/lower levels of granularity. Let's assume ATs add such commands; in response, the AT would call the handler to change the level of granularity. The AT handles the UI commands and in turn calls the handler to return an item at the current level of granularity. The AT will already have told the handler about the output mode, e.g. Braille or TTS. Armed with those three things: level of granularity, mode of output, and which item (first, last, previous, current, next), the handler knows what to do.
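
As a sketch of that exchange, again under assumed names (HandlerSession, ItemRequest, and the method names are illustrative, not an existing API) and reusing the types defined in the earlier sketches:

  // Hypothetical AT-to-handler exchange; all names are illustrative assumptions.
  type ItemRequest = "first" | "last" | "previous" | "current" | "next";

  interface HandlerSession {
    setOutputMode(mode: OutputMode): void;       // told once, e.g. "braille"
    incrementGranularity(): GranularityLevel;    // move one level finer (assumed direction)
    decrementGranularity(): GranularityLevel;    // move one level coarser
    getItem(which: ItemRequest): NavItem | null; // item at the current level
  }

  // Example: the AT handles a "next item" UI command by asking the handler
  // for the next item at the current level and rendering the result.
  function onNextItemCommand(session: HandlerSession, at: ScreenReader): void {
    const item = session.getItem("next");
    if (item) {
      at.output(item.text, at.outputMode);
    }
  }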

In the case of editable content, the UA provides the input UI for the user. This editing capability would most likely be provided via a plugin. We need an example of such a plugin so we can evaluate what a11y features need to be added to the existing editors.