
I have a preference for the way accessible web-mapping is presented and I think I need to put it out there.  I have had this “argument” with other developers on my team (you know who you are), and while I understand their viewpoint, I am leaning toward the opposite camp.

Call me a dreamer, but I would like to work towards an interface that doesn’t separate the two streams of users I am addressing with my research: the visually impaired, and everyone else.  I think that to truly make the web accessible to all, there should be no division.  I mean sure, you should have the ability to choose your preferences and tailor your web experience in a way that best suits you, but I don’t think it should be two totally separate applications.

I see the textual description as a complement to the visual map, an enhancement.  As a sighted user, I may also want to interact with the textual component, not just the visual map, and vice versa.  May I remind you that “visually impaired” does not only mean people who are blind.

But will this approach to design create an application that, in the end, just frustrates all users?  This is a risk, and maybe the fact of the matter is that it would be better for all users to have the separate streams.  I don’t technically know the answer; I just have a personal ideal solution… and it’s my research, I can conduct it how I want to, can’t I?  I will look into whether there has been any research done on this and keep you posted.  But in the meantime, you could put in your 2 cents…


Here are my delicious tags…in case you are interested.


I received a comment today pointing me to Tactile Map Automated Production (TMAP), a web-based application for producing tactile maps.

This reminded me that I have not blogged about the articles I have read so far concerning tactile maps:

Navigating maps with little or no sight: An audio-tactile approach, by R. Dan Jacobson

This paper outlines the benefits of tactile maps and the general purpose behind them, as well as the shortcomings of basic tactile maps.  It is a good introduction to the use of tactile maps by the visually impaired.

Creating Tactile Maps for the Blind using a GIS, by Jerry Clark and Deanna Durr Clark

This paper addresses the problem of orienting blind students to a school campus.  It proposes a system that uses GIS and a coordinate digitizer to create “tactilely-enhanced” paper maps. The conclusions I was able to take from this paper are that keeping it simple is important; users would not be interested in every specific point on a map.  It also concluded that an overall site map was beneficial as an introduction, with underlying, more specific maps users can access if desired. This aligns with the approach I’m thinking of taking in my design. Customization was also deemed important.

BATS: The Blind Audio Tactile Mapping System, by Peter Parente and Gary Bishop

This paper outlines an application created to allow students with visual impairments to explore and understand spatial information.  It is useful for understanding how these students learn the concepts of spatial information, such as compass direction, relative distance, perimeter, etc.

Teaching visually impaired children to make distance judgements from a tactile map, by Simon Ungar, Mark Blades and Christopher Spencer

This paper researches the ability of children to make distance judgments based on the scale of a map.  It shows the value of educating visually impaired children in reading maps: they are able to make sense of the concepts and draw information from the map.  This shows that although visually impaired people cannot see the map, they can still benefit from its use.

Thanks to Greg, I now have a collaborator… or as I like to think of it, a new best friend.  Jon Pipitone is a Master’s student working with Steve Easterbrook; he is interested in creating a climate change modelling application that is intuitive and interactive (did I get that right?). Hopefully we will be of some help to one another.

We met yesterday, and I thoroughly enjoyed being able to talk to someone about my Web-Mapping Accessibility topic for longer than 10 minutes before their eyes glazed over.  I now have many things to think about, and they are a little better formed in my mind than before.  Here are a few of the things I was able to take from the conversation:

  • I’ve never actually discussed how this textual description will be presented.  It was pointed out that I already seem to be settled on supplying the description through the longdesc attribute.  That is what I have to do in order to satisfy accessibility standards in their most basic form, but I see this as being much more than just a longdesc, definitely more interactive.  So maybe I have to stop talking about the longdesc so much.  I see the description displayed, just as a map is.  At first the most basic, general description is offered, and you are able to drill down into the information depending on what aspect of the map data you are interested in (see the sketch after this list).  The organization will of course be tricky, as we are working with huge amounts of data, so I need to know my audience better…
  • Creating detailed use cases would be a logical next step in order to understand what people are looking for when they come to a map.  These cases could then provide me with the basic structure of the description.  But do I use my current NPRI mapping application as the subject, or do I use a basic map, with land mass, water and streets, nothing more?
  • I need to figure out my process.  I always thought I would start by getting a clear understanding of the base map, or at least a section of it such as the province of Ontario, and then gradually add the layers one by one to see how each affects the description: a bottom-up approach.  But building the use cases and understanding the users of the current mapping application I’m working on (NPRI) would require me to start by looking at the NPRI data first and then continue drilling down into the map: a top-down approach.
  • Jon kept asking why my description can’t just be a bunch of lats and longs… to which I just kept crying “It can’t, it just can’t!”  Hmmm, maybe I should come up with more eloquent reasoning.
  • I also ran through my plea for a unified map, one that does NOT siphon users into two streams and therefore two separate interfaces, one accessible and one not.
  • I see a possible user study in my future, based on the Verbosity game I mention below.  I could present subjects with various images of a map and ask them to describe them.  From these descriptions I would hopefully be able to pick out commonly used keywords. I’m enrolled in Steve Easterbrook’s course CSC2130 Empirical Research Methods, where I’ll get to explore this idea further.
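
Since I keep waving my hands about this drill-down description, here is a minimal sketch (in TypeScript) of the kind of structure I have in mind.  Everything in it, the node names, the summaries, the layers, is purely illustrative and not real NPRI data.

```typescript
// A rough sketch of the drill-down idea: one general summary at the top,
// with more specific descriptions nested underneath, one node per aspect
// of the map a reader might want to explore. All content here is made up.
interface DescriptionNode {
  summary: string;              // the short, general description
  details?: DescriptionNode[];  // finer-grained descriptions to drill into
}

const ontarioSketch: DescriptionNode = {
  summary: "Map of Ontario showing NPRI reporting facilities.",
  details: [
    {
      summary: "Base map: provincial boundary, major lakes, and highways.",
      details: [
        { summary: "Lake Ontario and Lake Erie form the southern boundary." },
      ],
    },
    {
      summary: "NPRI layer: facilities clustered around the Golden Horseshoe.",
    },
  ],
};

// Walk the tree so a reader can start at the top-level summary and
// expand only the branches they care about.
function flatten(node: DescriptionNode, depth = 0): string[] {
  const lines = ["  ".repeat(depth) + node.summary];
  for (const child of node.details ?? []) {
    lines.push(...flatten(child, depth + 1));
  }
  return lines;
}

console.log(flatten(ontarioSketch).join("\n"));
```

The point is just that a screen reader, or any user, would get the general description first and could then choose which aspects of the map to expand.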

Ok, now the next step is to coin some sort of nickname for Jon, because every best friend requires a nickname.

…but am now addicted to Verbosity.  Thanks Jorge.

I was looking into the descriptive game Peekaboom and found a collection of games from Carnegie Mellon at www.gwap.com. By playing the games, the descriptive words used are stored so that images will be *properly* described for search engines.  My favourite was Verbosity, which is comparable to the game show Password.  You are paired with an anonymous player and have to guess the word from each other’s clues. I found myself getting very frustrated at the other players when they gave me clues I didn’t get, or when they couldn’t make a guess from my obviously intelligent hints! If it’s this hard to describe “bit” or “limb”, how will I ever describe a map… *sigh*

I had some problems with the Peekaboom games; maybe there just weren’t enough players online, as you need a few in order to play.  I think it is the old-school version of the gwap games though.  They explain the problem of the lack of meaningful descriptions for images:

One of the major accessibility problems is the lack of descriptive captions for images. Visually impaired individuals commonly surf the Web using screen readers, programs that convert the text of a webpage into synthesized speech. Although screen readers are helpful, they cannot determine the contents of images on the Web that do not have descriptive captions — but the vast majority of images are not accompanied by proper captions and therefore are inaccessible to the blind. Today, it is the responsibility of Web designers to caption images. We want to take this responsibility off their hands.

The article Peekaboom: A Game for Locating Objects in Images, by Luis von Ahn, Ruoran Liu and Manuel Blum, will be a good resource. Allowing users to propose descriptions, add keywords, etc. to maps would allow a lot of the visual inferences that are missing from the data behind the map to be applied.
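
To make that idea a little more concrete, here is a small sketch, again in TypeScript, of how player-proposed descriptions of a map could be mined for the keywords most players agree on.  The example descriptions are invented.

```typescript
// A sketch of aggregating player-proposed descriptions, in the spirit of
// Verbosity/Peekaboom. The descriptions below are made up for illustration.
const proposedDescriptions = [
  "factory near the lake shore, west of the highway",
  "industrial site by the lake, beside the highway",
  "plant on the lake shore next to a highway",
];

// Count how often each word appears across all proposals.
function keywordCounts(descriptions: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const text of descriptions) {
    for (const word of text.toLowerCase().match(/[a-z0-9]+/g) ?? []) {
      counts.set(word, (counts.get(word) ?? 0) + 1);
    }
  }
  return counts;
}

// Keep only the words that at least two players used.
// Note: this still includes stop words such as "the";
// filtering those out would be a further step.
const common = [...keywordCounts(proposedDescriptions)]
  .filter(([, count]) => count >= 2)
  .map(([word]) => word);

console.log(common);
```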

It occurs to me that I have never addressed the issue on this blog.

Basically, maps online are not accessible.  According to the Treasury Board, this is because the image of the map needs a longdesc attached to it in order to meaningfully describe the map.  This requirement can be found right at the beginning, under checkpoint 1.1 of the W3C Web Content Accessibility Guidelines:

1.1 Provide a text equivalent for every non-text element (e.g., via “alt”, “longdesc”, or in element content). This includes: images, graphical representations of text (including symbols), image map regions, animations (e.g., animated GIFs), applets and programmatic objects, ascii art, frames, scripts, images used as list bullets, spacers, graphical buttons, sounds (played with or without user interaction), stand-alone audio files, audio tracks of video, and video. [Priority 1]
For example, in HTML:
  • Use “alt” for the IMG, INPUT, and APPLET elements, or provide a text equivalent in the content of the OBJECT and APPLET elements.
  • For complex content (e.g., a chart) where the “alt” text does not provide a complete text equivalent, provide an additional description using, for example, “longdesc” with IMG or FRAME, a link inside an OBJECT element, or a description link.
  • For image maps, either use the “alt” attribute with AREA, or use the MAP element with A elements (and other text) as content.
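
Applied to a map image, the most basic way to satisfy checkpoint 1.1 is an alt text plus a longdesc.  A minimal TypeScript/DOM sketch, with hypothetical file names, might look like this:

```typescript
// Minimal sketch of checkpoint 1.1 applied to a static map image.
// The file names and text are hypothetical placeholders.
const mapImage = document.createElement("img");
mapImage.src = "ontario-npri-map.png";              // the rendered map
mapImage.alt = "Map of NPRI facilities in Ontario"; // short text equivalent
// longdesc points at a separate page holding the full text description
mapImage.setAttribute("longdesc", "ontario-npri-map-description.html");
document.body.appendChild(mapImage);
```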

The creation of a map requires a multitude of data, raw data that the viewer never sees except in its graphical form.  How do we turn that raw data into a meaningful text description, and also include the information that we automatically infer when viewing the map?  The data behind an interactive map is constantly changing as you pan, zoom, and turn layers on and off, so the text description will need to be dynamic in order to keep up (a rough sketch of what I mean follows).
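
Here is a sketch of what “dynamic” could mean, assuming a hypothetical map object that can report its visible extent and layers (the real application’s API will of course differ): the text description is simply rebuilt on every view change.

```typescript
// Hypothetical view model: what the map is showing right now.
interface MapLayer {
  name: string;
  featureCount: number; // features currently inside the visible extent
  visible: boolean;
}

interface MapView {
  extentName: string;   // e.g. "Ontario" or "downtown Ottawa"
  zoomLevel: number;
  layers: MapLayer[];
}

// Rebuild the textual description from the current state of the map.
function describeView(view: MapView): string {
  const parts = [`Map of ${view.extentName} at zoom level ${view.zoomLevel}.`];
  for (const layer of view.layers.filter((l) => l.visible)) {
    parts.push(`${layer.name} layer: ${layer.featureCount} features in view.`);
  }
  return parts.join(" ");
}

// Called whenever the user pans, zooms, or toggles a layer,
// so the text stays in sync with the visual map.
function onViewChanged(view: MapView, descriptionElement: HTMLElement): void {
  descriptionElement.textContent = describeView(view);
}
```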

I fear there are many more to go 😦

I presented to Greg’s group on December 4th.  I was pleasantly surprised at the amount of interest it generated… although it could have been brought about through pity due to my obvious nervousness.  Nevertheless, I’m sure each time it will get just a little bit easier.  I got some great feedback and ideas… if only I could remember it all, as I seem to have blacked the whole traumatic event out of my memory. Ok, no more complaining, I will suck it up.

There are so many things I haven’t even taken into account, and I need to figure out the scope of this project.  One attendee (I wish I could remember her name) brought up the point about all of the things sighted people automatically infer from a map, just from experience.  How can we capture all of this?

Jorge spoke of a game where one player is blindfolded and the other has to describe a picture of some sort to them.  Studies such as these will come in useful when deciding on a textual description of a map.

And of course, as expected, I got the text-based search comment.  I’m not sure if my argument against this is strong enough, must work on it.

I’ve uploaded the presentation; I’m due to give another one next week to the CSs at Environment Canada.  I also have a bunch of papers I need to put up on here, so stay tuned.