
Minutes of the Open A11y Expert Handlers SIG Call 2008/08/11

Attendance

  • Neil Soiffer (NS/chair)
  • Pete Brunet (PB)
  • Vladimir Bulatov (VB)
  • Glenn Gordon (GG, Freedom Scientific)
  • Gregory J. Rosmaita (GJR/scribe)
  • Janina Sajka (JS)
  • regrets: Alexander Surkov

Agenda Review

Approval of Last Meeting's Minutes



Minutes of Expert Handlers Conference Call 2008/08/11

Resources for Review:

GG: chief technical officer for Freedom Scientific; worked on JAWS for a long time; wanted to join the call to talk about philosophy - perhaps we can come up with a model that is appropriate for all

Brainstorming Session with Glenn Gordon of Freedom Scientific

NS: (summarizes Expert Handlers)

NS: haven't decided yet where Expert Handlers lives: an application such as Firefox (modifying the DOM), an accessibility call that is an intermediary between AT and DOM, or something called off on the side

NS: issues that have arisen are: info that Expert Handlers needs to know in order to give user an appropriate string (speech or braille); for braille, might be dimensions of display, for speech, need to know TTS standards supported so pauses, etc. can be embedded in text returned to AT

VB: for braille and TTS, it is a matter of language and localization of the special knowledge domain markup

NS: for math, between 20 and 60 different braille math codes; many natural languages also have localized codes within them

NS: one idea - a callback interface: the expert handler passes a generic interface to the AT and can then query the AT for particulars (braille display dimensions, what language tables are supported; for TTS, what standards: SAPI4, SAPI5, SSML, Java Speech API); competing idea: when the AT says "give me your braille string" it passes all info about the braille display as part of the original call - need to ensure that there is a state model for the interface
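
For illustration, a minimal sketch (in Python, with all names invented, not any real API) contrasting the two styles NS describes: a pull-style callback interface versus pushing the rendering context in the original call.

    from dataclasses import dataclass
    from typing import Protocol

    @dataclass
    class BrailleContext:
        cells_per_line: int   # dimensions of the braille display
        braille_code: str     # which braille translation table the AT supports

    @dataclass
    class SpeechContext:
        tts_standard: str     # e.g. "SAPI5", "SSML", "JSAPI" -- what the AT's TTS accepts

    class ATCallback(Protocol):
        # Pull style: the handler holds this callback and queries the AT for particulars.
        def braille_context(self) -> BrailleContext: ...
        def speech_context(self) -> SpeechContext: ...

    def braille_string_pull(fragment_xml: str, at: ATCallback) -> str:
        ctx = at.braille_context()            # handler asks the AT what it needs to know
        return translate_to_braille(fragment_xml, ctx)

    def braille_string_push(fragment_xml: str, ctx: BrailleContext) -> str:
        # Push style: the AT supplies the display particulars up front in the original call.
        return translate_to_braille(fragment_xml, ctx)

    def translate_to_braille(fragment_xml: str, ctx: BrailleContext) -> str:
        raise NotImplementedError             # the domain-specific translation itself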

JS: there are levels of expert markup that AT doesn't handle in a generic way, but end users need access to these specialized knowledge domain languages; need a mechanism for hand-off, so the user gets higher quality braille, speech, or magnification; these domains are too arcane and too small for AT vendors to implement - hence Expert Handlers

GG: this is where I came in; PB laid out 2 options: push info about the AT or pull info; reaction: don't like either model; want to talk through to see if there is another layer of indirection; would rather not be a black box

NS: what would allow an AT to stand out is that it can have its own proprietary black box; start with an open source handler for musical notation, modify it and ship it as part of JAWS, or write one from scratch; how to talk to Expert Handlers? how to discover Expert Handlers?

GG: can Expert Handlers take these domain specific markups and turn them into economical form oriented towards speech and braille, providing clues on what to do with it

NS: in some sense, yes; if talking about speech you want a textual string

GG: with semantic information

NS: semantics for math - could leave unicode characters untranslated to speech and hope AT does it; problem with unicode translation support, but could have a multi-level app

JS: structural markup important for navigation -- DAISY Model

GG: if say "here is all info to provide speech and braille so can format accordingly" does that mean need to delegate navigation to Expert Handlers, because AT knows nothing about it

NS: how much can one know about navigation; if return long string and user wants to change, how?

NS: our notion is that you again delegate to Expert Handlers to understand navigation; for music by note, by phrase, by bar, etc.

NS: in math - might want to navigate via a tree - expose tree view

NS: Expert Handlers really knows how to navigate; might be generic navigation, but would be limited

GG: for Expert Handlers to work with us, has to be generic

NS: would you give up focus to Expert Handlers which would pass to AT or do you have keys that perform specific functions according to the markup language of the document; if keys remapped, remapping should be extended to Expert Handlers

GG: who remaps?

NS: end user

GG: in browser mode without visible caret?

NS: my answer would be no

VB: user interaction should be familiar to user of specific AT; how to communicate graphics - don't understand GG question about coarser and

GG: if the application has focus and responds to cursor keys, fine; if the AT is the mediator, we need to translate our keymappings to the functions to be performed

VB: go up, go down, go left, go right -- may have to go down into fraction, to next expression, containers and sub-containers;

GG: are Expert Handlers open source?

VB: general Expert Handlers framework is open source

NS: trying to define standard by which anyone can develop expert handler

VB: language localization -- Expert Handlers for small language groups that AT can't handle; whatever knows the ontology can perform the task - that is the Expert Handler; should be global markup; the ability to extend functionality is essential

JS: within-document concept - beginning and end - the application layer and the AT know that for handling this content they can use this handler or that handler, or go out on the web and get a handler; alternative input navigation is also a concern

NS: IE7 has a registry key - this thing belongs to this namespace, here is app to launch for it

GG: not using OBJECT or EMBED tag

NS: IE specific "behaviors"

NS: need some way to register: for an object in this namespace, here is the Expert Handler that handles it
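
A purely illustrative, platform-neutral sketch of such a namespace-to-handler registration (the handler names and the mapping are invented; IE's actual mechanism is the registry key NS mentions).

    from typing import Optional

    # Mapping from markup namespace to the Expert Handler registered for it
    # (handler names invented; the MathML and SVG namespace URIs are the real ones).
    HANDLER_REGISTRY = {
        "http://www.w3.org/1998/Math/MathML": "MathExpertHandler",
        "http://www.w3.org/2000/svg": "GraphicsExpertHandler",
    }

    def find_handler(namespace_uri: str) -> Optional[str]:
        # Which Expert Handler, if any, claims this namespace?
        return HANDLER_REGISTRY.get(namespace_uri)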

GJR: AT should be cognizant of its capabilities and user preferences

GG: can only speak for Freedom Scientific - the browser is not the place to maintain info about the TTS or braille device

GG: Expert Handlers living in browser not going to be clean

NS: more generic - should handle any DOM

JS: browser example useful, but far more extended

GJR: all Document Object Models - XML stands out because most specialized markup tends to use XML

JS: not just navigation and read only, but editable content

GG: could there be a way that AT gathers domain specific ML and passes that to Expert Handlers; interface for Expert Handlers of form next, previous, etc.

NS: 2 ways to look at it: if AT gathers up ML and passes it off or points Expert Handlers at ML

NS: navigation is trickier - if you have a graphics object and want to move out to the next graphical item, how do you say that to the AT if the Expert Handler is not communicating with the DOM

NS: DOM can tell where one is and what state things are in

GG: forced to do cross-process work or bi-lateral communication

NS: from implementation point of view: if passing ML and Expert Handlers returning something to AT, doesn't care about app - generic handler works for everything

GG: doesn't matter if application in sync if navigating, but if interacting, have to keep everything synced

NS: having Expert Handlers work directly with the app DOM takes the onus off the AT, but for each app the Expert Handler would need to be rewritten; architectural trade-offs

GG: philosophy - architecturally better to push an XML format to the user with all properties about braille and speech; formatted as a string, not an interface - if you want to add another attribute, you can just add it
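
A purely illustrative example of the kind of self-describing XML string GG proposes pushing; every element and attribute name here is invented.

    rendered_fragment = """\
    <handler-output domain="math" lang="en">
      <speech format="SSML">x equals negative b, plus or minus ...</speech>
      <braille code="Nemeth" cells="40">(Nemeth cells would go here)</braille>
    </handler-output>
    """
    # Because this is a string rather than a fixed interface, another attribute
    # (say, a magnification hint) can be added later without breaking callers.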

GG: if want to be in browser/app, have to pass info to you

JS: string say braille engine x, speech module y, etc.

GG: if dealing with multiple processes, want a way to get info across easily

JS: does need to be tweakable

NS: navigation is the hard one -- getting speech or braille is static

GG: no strong opinion, concept of model has changed a bit

GJR: depends upon native navigation support in underlying ML; some will be sequential, some will be stacked (like SVG)

JS: VB dealing with an extreme case - math text not linear; harder than tables

GG: using TABLE as an example; able to say generically move to next cell, previous cell, etc.; can we break navigation down to primitives and ask for primitives

NS: like that example -- common enough to make it work; the work wasn't easy, but there are other navigational features - table header query, automatic announcement

GJR: should be users' choice - should be able to set client-side within AT; can also control repetition through aural CSS, as defined in the CSS3-Speech module, which is what FireVox and Opera+Voice key off of

GG: turning the SML into a generic one; if we could change it to a generic container with speech or braille commands (text formatted for TTS or braille), can you then add enough attributes to the container so the AT can determine navigation based on those elements: a character container within a word container within a line container, within a sentence container, etc. -- the AT can then say, given the way this is divided, it can move through them and dive down into them
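
A hypothetical illustration of that generic-container idea; the element and attribute names are invented.

    navigable_fragment = """\
    <container level="sentence">
      <container level="line">
        <container level="word"><container level="char">x</container></container>
        <container level="word"><container level="char">=</container></container>
      </container>
    </container>
    """
    # An AT that already moves by char/word/line/sentence can reuse those commands
    # here, diving into or skipping over containers as the user asks.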

GG: if moving by expression, and one has marked expressions, one could move to the next expression regardless of where one is in the tree; could move to outermost, innermost, etc.

NS: graphics aren't a tree

GG: could you mark it up so that have an order by which one can walk -

GJR: have to use @order from Access Module - can designate order of traversal by IDREF or targetrole (cycle between all objects with role="foo")
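
A hedged reconstruction, for illustration only, of the Access Module markup GJR refers to; the key binding and role value are invented.

    access_markup = """\
    <access key="m" targetrole="math" order="document" />
    """
    # order designates the traversal order; targetrole cycles between all objects
    # bearing that role, per GJR's remark above.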

VB: we are using special input and output devices - can reinforce navigation through sound

NS: input not usually coming from keyboard, but touch-pad

VB: graphics themselves handled with different input device -- would like integrated state

GG: would you like keystrokes passed to you from AT

VB: need to speak through and display through braille

GG: hard for our model: the user at some level is used to pressing one key to hear the current line, current char, etc.; if we don't map all of that - moving focus to something and speaking it - then when moving focus to a new element and wanting to hear the first char of the element, that can pose problems

VB: develop that communication

JS: at what point does it move from points to lines?

VB: example: speech and tactile graphics, screen reader synchronization with braille display, or use one to do review and other to keep static point

GG: if all of this comes to fruition, to what degree do we envision different Expert Handlers for different UAs; to what degree do screen readers need to do something special

VB: instead of hard coded commands, send XML strings back and forth; Expert Handlers

NS: pass both pieces of info - here is the key that was pressed, and here is what I'd like it to mean; if it doesn't make sense, ignore it; if it does make sense (mixture of text and non-text), it is a way to get the AT's native key-bindings used by the Expert Handler

GG: could go with that except for giving Expert Handlers the key

GG: like better the idea of offering the navigation the user wants; the Expert Handler gets mapped to something close; if one says "previous word" in the case of math, where would it go?

NS: words and letters are often the same in math - typically "word" and "letter" mean the same thing in the math case; "line" - a vertically displayed line like a numerator, or the beginning of an expression

JS: or previous equation on page

NS: leave equation to AT

GG: add additional parameters; navigation commands that work in special domains, but work in a bunch of them

NS: real challenge; one possibility is to have a fixed set of generic ones (generic1, generic2, generic3)

GG: concern about passing keys is that it takes away the SR's ability to remap keys

NS: suggesting pass both: key and meaning; the Expert Handler should ask "do I have something that maps to this meaning?" - if it does, reuse/map to that; if the meaning is meaningless in a graphic, then it is free to choose whatever meaning it wants

NS: like idea of passing both pieces of info

GG: can see advantage
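
A minimal sketch of the "pass both" idea as discussed above, with all names hypothetical.

    def handle_key(handler, key: str, at_meaning: str) -> str:
        # The AT forwards the raw key plus the meaning its own keymap assigns.
        if handler.supports_meaning(at_meaning):
            # e.g. the AT's "next-word" maps naturally onto "next token" in math
            return handler.navigate_by_meaning(at_meaning)
        # The meaning (say, "next-line") may be meaningless in a graphic, so the
        # handler is free to give the raw key a domain-specific meaning instead.
        return handler.navigate_by_key(key)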

NS: gives the ability for user mappings to have meaning in the SML; should always be able to take standard stuff and make the app work properly with it; in the case of Expert Handlers, one should be able to write an Expert Handler for a "regular text document", but if it doesn't have the functionality to make that work smoothly, that's a problem; an embedded object is an example - how to align with the baseline?

NS: notion of passing both would work for text - all textual info available from Expert Handlers

NS: give me the speech for this - state for what one is doing

NS: math issue - if say move to next char, what will happen -- what will be spoken or displayed on braille display

NS: user initiated, but if script initiated?

VB: for graphics, next subsequent,

NS: if a navigation command is done by level (word or line), it should speak the line or word in text, but that is not necessarily applicable to other MLs

JS: if navigate note by note, want next note; if measure by measure, then want next measure; similar to heading navigation

GG: have to answer "what is the unit the AT can move in"; then the AT allows the user to move within movements

NS: at top of hour - incredibly useful - could GG come back next week?

GG: can come back on 18 August; appreciate the enthusiasm

NS: very useful conversation to have with actual implementors; solidifies thoughts

GG: would be happy if you ponder this: there is more incentive to put time into this if we can put value-add on top of it -- don't want to parse when the Expert Handler can handle it

NS: for math in braille, the AT could be the one that provides a nice UI for the user to select which braille code to use

GG: would be nice if have features that can be put on top and genericized and implemented in short period of time; any/every company wants competitive advantage

NS: support for a notation; if someone writes an Expert Handler, it should be able to simply plug in

GJR: expert handlers should be agnostic; the level of granularity necessary to provide meaningful interaction between the user of an AT and a specific markup language is highly dependent upon the type of specialized content being described, as well as the parameters and structures inherent to the specialized knowledge domain for which the specialized markup language has been designed; Expert Handlers, therefore, need the ability to cache ontologies specific to each type of specialized content, in order to enable full interactivity with the specialized content; ontologies can be provided through Web Ontology Language (OWL), Resource Description Framework (RDF), and/or Simple Knowledge Organization System (SKOS) in order to provide an assistive technology with meaningful, and appropriately structured, API calls and mappings; XML communication and state control could be handled by State Chart XML (SCXML): State Machine Notation for Control Abstraction
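
A minimal, illustrative SCXML fragment (carried here as a string) of the sort GJR suggests for handler/AT state control; the state and event names are invented.

    handler_state_chart = """\
    <scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="browse">
      <state id="browse">
        <transition event="enter.expert.content" target="expert"/>
      </state>
      <state id="expert">
        <transition event="leave.expert.content" target="browse"/>
      </state>
    </scxml>
    """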

GG: each AT does different things with MSAA and IA2 than the others even though there is a standard; would like this to be the same sort of framework

JS: speaking as chair, I don't have a problem with that

GJR: would prefer an AT-agnostic ARIA-type approach - extensible, yes, but proprietary, no - built on a common framework; State Chart XML (SCXML): State Machine Notation for Control Abstraction and Element Traversal could be leveraged for this, as well as ARIA, of course; there is also the new revision of XML Events 2 to consider, as well as the XHTML2 WG's plans to modularize XML Events into a Handler Module, Listener Module, and Script Module (note: the script module has an implements attribute that can point to a local script or to the namespace in which the SML resides)

GG: not being handed everything on a silver platter; added level - knowing what needs to be spoken and its context lets the user choose a different voice or a sound cue (earcon)

PB: interface gives access to object; what you do with it is something else

GG: differentiation helps case for working on Expert Handlers

NS: next meeting on 18 August 2008 - GG, can you join?

GG: yes, will be able to join next week





