You are currently browsing the category archive for the ‘Uncategorized’ category.

On Saturday September 24th there will be a free(!) Accessibility Camp in Toronto. This is the first time this event will be held in Toronto. I’ve never attended one before, but if you have any interest in building for an accessible web (and you should!) then I urge you to register and attend.

Register for the event here:

Hope to see you there 🙂


That’s all for now. Just an announcement. Going to take some time to let it sink in and then I’ll be back 🙂

So at my lowest point today the convo went a little something like this:

Me: *Staring blankly out the window with a sad face on trying to emote “pay attention to me! pay attention to meeeeee!”*

A.T.: *sigh* What’s wrong?

Me: ARGHHHHHH! whiney whine whine.

A.T.: *blank stare*

Me: I just don’t wanna work on my thesis anymore!!!!! wahhhhh. *sad face*

A.T.: Don’t pout.  I’m going to nap in the park.

Me: Fine!

– temper tantrum over…but hmm maybe I’ll blog about it.

Feeling better now, although I did have to temporarily switch my music choice of Caribou to a Beyonce/Lady Gaga mix…yeah that’s how bad it got.

After today I have three weeks left to basically finish my thesis.  It has been three weeks already and time has flown!  I have a working copy *sort of* which I will post online today, no matter how nervous and uncomfortable that makes me.  I am going to start updating the version daily just like Aran (cause Aran has some good ideas sometimes). If you do happen to take the extra step and open it…which right now I hope you don’t…please be reassured that eventually it will be longer, make much more sense, and sound a lot more professional.  Ok, enough of that…I’m actually much more confident in my content than I seem right now, but hey, highs and lows man, highs and lows.

My plan of attack for next week is library and lab.  I have written out my rough thoughts/notes/points and now I want to expand on each more thoughtfully and concretely.

This blog post has been a nice little break. I’m sure you are all going to be sitting on the edge of your seats until the end of the (my) work day when I post up my thesis! yay! (see, I told you…totally bipolar)

Ok, now that dreadful work is out of the way 😉 just thought I’d post an update on my progress.  General results are done and in; I’ll try to get them up in an appealing format on Sunday for your viewing pleasure.  Now I just need to figure out what they all mean, and then write a thesis paper about it…small potatoes.  Ok, ok, don’t take my nonchalance to heart, I’m actually quite nervous now that I’m at this point.  I’m taking 5 weeks off of work (without pay! ack) in order to write my research paper, and I am freaking out a bit about now having a set-in-stone deadline…part-time students have a lot more wiggle room when it comes to this.

Does anyone have any tips as to a process that worked for you, or mini-deadlines I should set for myself to ensure I stay on track?  I am not on campus much, so I wasn’t always there to witness the struggle firsthand.  I am going to *try* to be uber-organized and focused; Greg has already started the tough love, which I appreciate, most of the time!  Pretty much everyone I have befriended at dcs has finished their Masters, sooooo if you could give me one piece of advice (or two, or three), what would it be?  Shoot.

The good news is over 100 people participated in my study, which resulted in over 300 map descriptions, and they are still accumulating!

The bad news is over 100 people participated in my study, which resulted in over 300 map descriptions, and now I have to sort and analyze the data.

That’s what I’m doing now, and although it is a daunting task I am finding it fascinating.  The design of this study entailed a lot of planning and discussion (thanks Greg and Jon).  To finally see the results of something you have worked on for so long and feel connected to is pretty rewarding.  There’s a lot of data to get through, but it’s been fun so far (although I have just started).  I know not everyone else is as invested in my topic, but I find myself wanting to tell somebody whenever I come across something interesting, or validating, or curious, or…well, I just want to tell people about everything 🙂  I am looking forward to generating my results, so I can actually do just that, and see what the web accessibility community thinks.

So this post was just an update really, to whoever follows, to let you know where I’m at.  I’ve given myself about a month to establish results.

Data from Star Trek TNG with a cat and computer.  Data is saying "No Spot, you may not 'has cheezburger.'  Not until you are able to ask in a manner that is grammatically correct and lacking typos."


Here are my delicious tags…in case you are interested.


I received a comment today pointing me to Tactile Map Automated Production (TMAP), a web-based application for producing tactile maps.

This reminded me that I have not blogged about the articles I have read so far concerning tactile maps:

Navigating maps with little or no sight: An audio-tactile approach R. Dan Jacobson

This paper outlines the benefits of tactile maps, and the general purpose behind them.  It also outlines the shortcomings of basic tactile maps.  It is a good introduction to the use of tactile maps for the visually-impaired.

Creating Tactile Maps for the Blind using a GIS Jerry Clark, Deanna Durr Clark

This paper addresses the problem of orienting blind students to a school campus.  It proposes a system that uses GIS and a coordinate digitizer to create “tactilely-enhanced” paper maps. One conclusion I was able to take from this paper is that keeping it simple is important: users would not be interested in every specific point on a map.  It also concluded that an overall site map was beneficial upon introduction, with underlying, more specific maps users can access if desired. This aligns with the approach I’m thinking of taking in my design. Customization was also deemed important.

BATS: The Blind Audio Tactile Mapping System Peter Parente Gary Bishop

This paper outlines an application created to allow students with visual impairments to explore and understand spatial information.  It is useful for understanding how these students learn the concepts of spatial information, such as compass direction, relative distance, perimeter, etc.

Teaching visually impaired children to make distance judgements from a tactile map Simon Ungar, Mark Blades and Christopher Spencer

This paper researches the ability of children to make distance judgments based on the scale of a map.  It shows the value of educating visually-impaired children in reading maps.  They are able to make sense of the concepts and draw information from it.  This shows that although visually-impaired people cannot see the map, they can still benefit from its use.

Thanks to Greg, I now have a collaborator…or as I like to think of it, a new best friend.  Jon Pipitone is a Masters student working with Steve Easterbrook, who is interested in creating a Climate Change modelling application that is intuitive and interactive (did I get that right?). Hopefully we will be of some help to one another.

We met yesterday, and I thoroughly enjoyed being able to talk to someone about my Web-Mapping Accessibility topic for longer than 10 minutes before their eyes glazed over.  I now have many things to think about that are a little more well-formed in my mind than before.   Here are a few of the things I was able to take from the conversation:

  • I’ve never actually discussed how this textual description will be presented.  It was pointed out that I seem to have already settled on displaying the description in the longdesc attribute.  This is what I have to do in order to satisfy accessibility standards, in its most basic form.  I see this as being much more than just a longdesc though, definitely more interactive.  So maybe I have to stop talking about the longdesc so much.  I see it displayed, just as a map is.  At first the most basic, general description is offered, and you are able to drill down into the information depending on what aspect of the map data you are interested in.  The organization of course will be tricky, as we are working with huge amounts of data, so I need to know my audience better….
  • Creating detailed use cases would be a logical next step in order to get to know and understand what people are looking for when they come to a map.  These cases could then provide me with the basic structure of the description.  But do I use my current NPRI mapping application as the subject, or do I use a basic map, with land mass, water and streets, nothing more?
  • I need to figure out my process.  I always thought I would start out by getting a clear understanding of the base map or at least a section of it such as the province of Ontario, and then gradually add the layers one by one, to get a clear understanding of how they could affect the description – Bottom-up approach.  But through the use cases and understanding the users of the current mapping-application I’m working on, NPRI would require me to start by looking at the NPRI data first and continue drilling-down into the map – Top-down approach.
  • Jon kept asking why my description can’t just be a bunch of lats and longs…to which I just kept crying “It can’t, it just can’t!”  Hmmm, maybe I should come up with more eloquent reasoning.
  • I also ran through my plea for a unified map, one that does NOT siphon users into two streams, and therefore two separate interfaces, accessible and non-accessible.
  • I see a possible user study in my future, based on the verbosity game I listed below.  I could present subjects with various images of a map and ask them to describe it.  Within these descriptions I would hopefully be able to pick out common keywords used. I’m enrolled in Steve Easterbrook’s course CSC2130 Empirical Research Methods, where I’ll get to explore this idea further.
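The drill-down presentation from the first point could be sketched as a tree of description nodes, where each level reveals a little more detail.  This is just a hypothetical structure to think with; the class name and the example text are mine, not anything implemented:

```python
# A minimal sketch of a drill-down map description: a tree of
# description nodes, where each deeper level adds more detail.
# All names and example text here are hypothetical.

class DescriptionNode:
    def __init__(self, summary, children=None):
        self.summary = summary
        self.children = children or []

    def describe(self, depth=0):
        """Return this node's summary, plus indented child
        summaries up to `depth` levels further down."""
        lines = [self.summary]
        if depth > 0:
            for child in self.children:
                lines.extend("  " + line for line in child.describe(depth - 1))
        return lines

# Example: a general description that drills down into layers.
ontario = DescriptionNode(
    "Map of Ontario showing NPRI pollutant release sites.",
    [
        DescriptionNode("Water: Lake Ontario borders the south."),
        DescriptionNode(
            "NPRI layer: 12 release sites visible.",
            [DescriptionNode("Largest site is near Hamilton.")],
        ),
    ],
)

# depth=0 gives only the general overview; depth=1 adds one
# level of detail, and so on, mirroring drilling into the map.
print("\n".join(ontario.describe(depth=1)))
```

The nice thing about a tree is that it maps naturally onto the bottom-up vs. top-down question: a base-map-first process builds the tree from the root, while an NPRI-data-first process grows it from a data layer outward.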

Ok, now the next step is to coin some sort of nickname for Jon, because every best friend requires a nickname.

…but am now addicted to Verbosity.  Thanks Jorge.

I was looking into the descriptive game Peekaboom and found a collection of games from Carnegie Mellon.  By playing the games, the descriptive words used are stored so that images will be *properly* described for search engines.  My favourite was Verbosity, which is comparable to the game show Password.  You are paired with an anonymous player and have to guess the word from each other’s clues. I found myself getting very frustrated at the other players when they gave me clues I didn’t get, or when they couldn’t make a guess over my obviously intelligent hints! If it’s this hard to describe “bit” or “limb”, how will I ever describe a map…*sigh*

I had some problems with the Peekaboom games; maybe there just weren’t enough players online, as you need a few in order to play.  I think it is the old-school version of the gwap games, though.  They explain the problem of the lack of meaningful descriptions for images:

One of the major accessibility problems is the lack of descriptive captions for images. Visually impaired individuals commonly surf the Web using screen readers, programs that convert the text of a webpage into synthesized speech. Although screen readers are helpful, they cannot determine the contents of images on the Web that do not have descriptive captions — but the vast majority of images are not accompanied by proper captions and therefore are inaccessible to the blind. Today, it is the responsibility of Web designers to caption images. We want to take this responsibility off their hands.

The article Peekaboom: A Game for Locating Objects in Images by Luis von Ahn, Ruoran Liu and Manuel Blum will be a good resource. Allowing users to propose descriptions, add keywords, etc. to maps would allow a lot of the visual inferences that are missing from the data behind the map to be applied.
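The gwap approach could translate to maps fairly directly: gather free-form descriptions from many users and keep only the keywords that independent describers agree on.  A minimal sketch of that aggregation step, with made-up sample descriptions and an arbitrary agreement threshold:

```python
# A rough sketch of the Peekaboom/ESP idea applied to maps:
# collect free-form descriptions from many users and keep only
# the keywords that several independent users agree on.
# The sample data and threshold below are made up for illustration.

from collections import Counter

def agreed_keywords(descriptions, min_agreement=2):
    """Count each word at most once per description, then keep
    words that appear in at least `min_agreement` descriptions."""
    counts = Counter()
    for text in descriptions:
        for word in set(text.lower().split()):
            counts[word] += 1
    return {word for word, n in counts.items() if n >= min_agreement}

descriptions = [
    "a lake north of the city",
    "large lake with a highway running east",
    "city beside a lake",
]
print(sorted(agreed_keywords(descriptions)))  # ['a', 'city', 'lake']
```

A real version would also strip stop words like “a” and “the”, but even this crude overlap shows how agreement between players can separate the genuinely descriptive words from noise.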

It occurs to me that I have never addressed the issue on this blog.

Basically, maps online are not accessible.  According to Treasury Board, this is because the image of the map needs a longdesc attached to it in order to meaningfully describe the map.  This can easily be found right at the beginning, under checkpoint 1.1 of the W3C Web Content Accessibility Guidelines:

1.1 Provide a text equivalent for every non-text element (e.g., via “alt”, “longdesc”, or in element content). This includes: images, graphical representations of text (including symbols), image map regions, animations (e.g., animated GIFs), applets and programmatic objects, ascii art, frames, scripts, images used as list bullets, spacers, graphical buttons, sounds (played with or without user interaction), stand-alone audio files, audio tracks of video, and video. [Priority 1]
For example, in HTML:
  • Use “alt” for the IMG, INPUT, and APPLET elements, or provide a text equivalent in the content of the OBJECT and APPLET elements.
  • For complex content (e.g., a chart) where the “alt” text does not provide a complete text equivalent, provide an additional description using, for example, “longdesc” with IMG or FRAME, a link inside an OBJECT element, or a description link.
  • For image maps, either use the “alt” attribute with AREA, or use the MAP element with A elements (and other text) as content.

The creation of a map requires a multitude of raw data that the viewer never sees except in its graphical form.  How do we turn that raw data into a meaningful text description, and also include the information that we automatically infer when viewing the map?  Because the data behind an interactive map is constantly changing as you pan, zoom, and turn layers on and off, the text description will need to be dynamic in order to keep up.
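One way to picture that dynamic description: regenerate the text from the current map state (centre, zoom, visible layers) on every interaction, instead of attaching a single static longdesc.  A rough sketch, where the state fields and the wording are hypothetical, not any particular mapping API:

```python
# A sketch of a dynamic text description: rather than one fixed
# longdesc, the description is regenerated from the current map
# state every time the user pans, zooms, or toggles a layer.
# The state fields and wording here are hypothetical.

def describe_view(state):
    parts = [f"Map centred on {state['centre']} at zoom level {state['zoom']}."]
    visible = [name for name, on in state["layers"].items() if on]
    if visible:
        parts.append("Visible layers: " + ", ".join(visible) + ".")
    else:
        parts.append("No data layers are currently visible.")
    return " ".join(parts)

state = {"centre": "Toronto, Ontario", "zoom": 8,
         "layers": {"NPRI sites": True, "waterways": False}}
print(describe_view(state))

# Toggling a layer changes the description on the next render.
state["layers"]["waterways"] = True
print(describe_view(state))
```

The same idea extends to panning and zooming: each interaction updates the state, and the description is simply re-derived from it, which keeps the text and the graphic telling the same story.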