Applications for Mobile Internet Access for Persons with Disabilities



IGF 2010

VILNIUS, LITHUANIA

15 SEPTEMBER 10

SESSION 182

1130

APPLICATIONS FOR MOBILE INTERNET; ACCESS FOR PERSONS WITH DISABILITIES





********



Note: The following is the output of the real-time captioning taken during the Fifth Meeting of the IGF, in Vilnius.  Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors.  It is posted as an aid to understanding the proceedings at the session, but should not be treated as an authoritative record.



********





JONATHAN CHARLES: Welcome to our session today, broadcast by the BBC and the European Broadcasting Union with the support of the Dynamic Coalition on Access and Diversity at UNESCO.  We're here to discuss the issue of whether we can create a golden age of accessibility.  We live in a world which would have been unimaginable even two or three years ago in terms of the number of apps now available.  Someone was telling me the other day that they think on average 3 to 400 new apps are created every week, 3 to 400.  And there is now an app that you can download that will tell you which apps you should download and which ones you should ignore.  An app sorter; that's the way we are going, but it is changing things very, very quickly.

What we're doing here today is discussing this whole issue of how apps will enable better accessibility.

We have a distinguished panel with us.  They are going to do a number of presentations.  My name, by the way, is Jonathan Charles, and I'm a presenter with BBC World News Television, the international arm of the BBC, the international broadcasting section of the BBC.  You'll all get a chance to put your views to the panel, and we hope to have some lively debate.  Feel free to inject a little bit of humor as well.  I always find that helps things go a lot more smoothly, but feel free.  

Every so often I'll be saying to you, okay, it's your turn now to have your views put forward, it's your turn to ask questions.  I apologize for the noise in this room.  We obviously get a lot of noise from next door, but they're probably getting our noise too, so I wouldn't worry too much about it.  We'll split it into three sections, but we'll start by discussing what the situation is today on the ground, how things are developing very quickly, and we're going to bring you some examples of how things are changing.  And I'd like to start by asking my colleague from the BBC, Gareth Ford Williams, to do a presentation on the BBC, because the BBC has been at the forefront of making sure content is accessible.



>> GARETH FORD WILLIAMS: Thank you.  Can everyone hear me?



>> You'll have to pull the microphone a bit closer to you.



>> GARETH FORD WILLIAMS: I'm not used to this.  I'm Gareth Ford Williams.  I actually work full-time on accessibility for the BBC on IPTV.  If I get -- have I got a clicker or something where I can move on to the next slide?  Please.  Thank you.



>> JONATHAN CHARLES: Let's see if it works.



>> GARETH FORD WILLIAMS: Oh, it works.  One of the things that we've done in the last couple of years is a big piece of research, because we wanted to understand a little bit more about the audiences and the issues around accessibility for our digital services, particularly those delivered by IP.  We ended up doing a big piece of work with Channel 4 and with over 550 licence fee payers with disabilities to talk about what their experiences are, and we also backed it up with a lot of desk research to try and understand what the numbers were.  Some of the numbers around the U.K. we found interesting: we found 21% of people are not on-line in the U.K.  Well, this was about 18 months ago.  They were mostly an older audience, a down-market audience, people with less ready money.  And this has stopped working.  (laughter)

All right.  We wanted to know within that group what the spread was when compared with disabled people, and there is another slide.  Yes, so to understand a little bit more about our disabled audience: we found that there are roughly 11 million adults.  Now, that is a very rough figure, because whenever you look at statistics around this there are virtually no statistics to cross-reference the different groups, so it's up to 11 million.  More likely to be an older audience, less likely to be in work.  Heavy media consumers, and lots of people don't see themselves as disabled.  They see themselves as the person and not the disability, and don't recognize the label "disabled", which was quite interesting for us as a broadcaster.

When we cross-referenced Internet use we came up with a statistic that was quite shocking for us: there was a large group within these 11 million people that were not using the Internet, much bigger than in the general population.  And so what it equates to is, as we call it, bums on seats.  There was a reach opportunity here of up to 6.4 million licence fee payers to access BBC content.

So what we wanted to do was to find out what they were saying.  You always end up in these situations asking some of the daftest questions, and probably the first daft question you ask is, what do you want, and the answer is, the same as everybody else.  No surprises there.  So we looked back at some of the services we were doing, and we were looking at things like the BBC iPlayer, where we've taken a standards-based broadcaster's approach to accessibility, and it has won certain accolades for being accessible.  We found that the users really liked it, but we knew there were problems beyond that, even if they were saying that they really liked the services when they were presented directly to them.

We found that the users in that group felt that it helped make them feel less isolated; there were a lot of things about home shopping, all sorts of great benefits.  We found there were communities that had been growing up around the Internet with our disabled audiences as well; one great example is the BSL communities that have grown up via webcams, because sign language is a great visual medium.  We were looking at this saying there is so much to gain.  Where are the problems?  It wasn't age: when we tried to do a control against age, that wasn't the answer.  So we found mostly it was around an understanding of what the Internet was, what PCs could do.  People often didn't understand that there was more you could gain than just consuming stuff through digital text.  Because it was a more down-market audience, and a significantly large proportion of people with a disability, or registered with a disability, in the U.K. are out of work, the costs weighed against the benefits of actually having access to this were a huge imbalance for the user.  People also found that PCs were a huge barrier.  They're a huge enabler for some, but other people, you know, they see a device that has a start button for shutting it down.  They just simply don't get it.  It was very, very confusing, particularly again when we're talking about a predominantly older group of users.

So the current inclusion models didn't work.  We wanted to know what was wrong with them, even though we are a massive supporter of them.  At the moment we look at this as kind of a standards-based and, I think it's quite a cruel way to describe it, one-size-fits-all approach that supports assistive technology, which is great for the users that really understand it.  Whoops.  And -- a horrible slide, I'm afraid, with lots and lots of text on it.  But we found that a lot of people just didn't understand this technology.  It's far too complicated.  They understood the service but they didn't get the browser, didn't understand the desktop; assistive technologies were confusing, they didn't understand the technology, and for us it got worse.  When we applied these methodologies, which are the standard methodologies for accessibility, and we tested things like iPlayer, even though it's got a huge amount of good feedback, we were finding large groups of people saying, I can't access it.

I'm dyslexic, and it's horrible, white on black.  People with ASD and low literacy were finding it massively complicated to get their heads around.  So we realized we needed an additional model, and so we started looking, as part of the iPlayer project initially but now this has become a BBC-wide on-line proposition, at personalization: how can people change the content to make it more accessible.  We needed to know who depended on personalization as well.  We found that particularly we're looking at people with mild to moderate disabilities, particularly vision-related; we're looking at autistic spectrum disorders and other cognitive disorders.  Simple things like color changes, changing the font, changing the size, and allowing those settings to migrate across all the services, suddenly opened the content up.

So we've been building a solution to this, which is called the Accessibility Toolkit, the ATK.  It is a work in progress.  We have 3 million pages of original content to try and deal with, built on multiple legacy systems.  It's big, but we're determined to make this work.  It will allow you to identify the kind of issues that you have and, in very, very quick steps, change tons of content to be able to, you know, improve your access.  Because it's server side, it doesn't matter what browser you're on; you can use your BBC identity, so you're not setting up on one machine and then having to set up another machine.  You might be in a library and import your settings there.

The great thing is there are no downloads.  It's a framework system, so we can now start looking at building additional bits and pieces, which are server side, which will optimize accessibility for assistive technology users as well as non-assistive technology users.  So we're looking at things that will help screen readers to quickly identify not just if there is a media player present but how many are present and where the start buttons are.  So there's a lot of stuff that we can do.
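
To make the kind of server-side personalization Gareth describes concrete, here is a minimal sketch in TypeScript.  Everything in it is hypothetical: the real Accessibility Toolkit is not public, so the profile fields, names and the stylesheet approach are illustrative assumptions, not BBC code.

```typescript
// A minimal sketch of server-side accessibility personalization,
// assuming a hypothetical preference store keyed to a user identity.
// The real BBC Accessibility Toolkit (ATK) is not public and will differ.

interface AccessibilityProfile {
  fontScale: number;                                    // 1.25 = 125% text size
  colourScheme: "default" | "high-contrast" | "cream";  // e.g. dyslexia-friendly palette
  simplifiedLayout: boolean;                            // fewer page regions
}

// Because the profile lives with the user's identity on the server,
// it follows them to any browser or library machine: no downloads.
const profiles = new Map<string, AccessibilityProfile>();

function getProfile(userId: string): AccessibilityProfile {
  return (
    profiles.get(userId) ?? {
      fontScale: 1.0,
      colourScheme: "default",
      simplifiedLayout: false,
    }
  );
}

// The server injects a per-user stylesheet, so the same content is
// re-presented for each user without configuring every device.
function renderStylesheet(userId: string): string {
  const p = getProfile(userId);
  const palettes = {
    "default": "color:#000;background:#fff",
    "high-contrast": "color:#ff0;background:#000",
    "cream": "color:#222;background:#fdf6e3",
  };
  return `body{font-size:${p.fontScale * 100}%;${palettes[p.colourScheme]}}`;
}

// Example: a user picks larger text and a cream background once,
// and every page on every machine then honours it.
profiles.set("user-42", { fontScale: 1.25, colourScheme: "cream", simplifiedLayout: true });
console.log(renderStylesheet("user-42"));
```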

The problem is, going back to our users, they turned around and told us, this is great, but can we have it everywhere, please?  (chuckle) Because as soon as you leave the BBC's sites, all of your settings disappear.  And they said, oh, yeah, we want it on everything that's connected as well.  So we started looking at IPTV.  We're already exploring our mobile services.  But I think this goes beyond what we're doing as a service provider, because it still doesn't tackle the problem of the browser and the desktop and the device.

So the thing -- and this is my last slide on it.  I think what we're talking about, when we're looking beyond the browser, we're looking at applications, we're looking at more platforms, particularly IPTV and mobile, where there's increased access to IP services, goes beyond the technical standards and platform-native accessibility features, which are all brilliant.  I would never knock them.  I think some of the stuff appearing on Android and on the Apple platforms is absolutely fantastic.  But to us as a service provider it's around service modality, which is around the provision of subtitles, audio description, signing.  One of the things we're working on with Oxil, but I don't know whether Oxil is here today, is looking at how to improve text-to-speech, because we're finding a lot of text-to-speech users struggle with text-to-speech.

Personally, I'm dyslexic, and I'm yet to find a text-to-speech system that is less dyslexic than I am.  We're also looking at personalization without walls: how does this kind of system expand beyond the BBC, or how does it interface with other systems?  So we're looking at things like the Access For All standards around this.

And also, the last one, which I think is the key around this, is the whole thing around connectivity.  Applications are appearing on connected devices, and different devices have different levels of access.  It's all data.  If we go back to the standards-based approach, where at the end of the day separation of content from presentation is a fundamental deliverable, then this gives us a huge opportunity for allowing non-accessible devices to be accessed via accessible devices.  So I think that's another discussion, but it's (chuckle) something that I just felt like saying.  But at the end of the day I think there will be a big litmus test for a lot of this stuff that's going on.  It's very nice and it's all very exciting.  And that's basically it.



>> JONATHAN CHARLES: Thank you very much.  Gareth, thank you very much indeed.

(Applause)

I know I work for the BBC, but I use a lot of BBC services myself, including the iPlayer and the on-line site and it's fantastic to be able to personalize everything.  It does make a big difference to the experience of actually using a site, when you can get rid of everything you don't want and make it work for you, and I think that's the great advantage in terms of access that the BBC site has.

The BBC isn't the only broadcaster doing a lot of work on this.  NHK in Japan, I know, is also working very hard on these questions.  I'm delighted to say that here with us today we've got the director of advanced broadcasting platforms in the research division, Science and Technology Research Laboratories, at NHK, the Japanese public broadcaster, and he has a presentation to make to us today.  I hand it over to you.  He's at the back, by the way.  He's going to come forward and stand at one of the remote microphones.  Yeah.



>> Sorry to be back here, because of the demonstration.  So, our (off microphone) NHK, and I would like to introduce some topics related to this workshop.  I do research on broadcasting and communication hybrid systems, which introduce Internet and cloud computing technology into broadcasting, so as to enhance broadcasting services.  (audio difficulties) I'll introduce (off microphone) systems here (off microphone).  In Japan, Japanese captioning is available on broadcasting services for deaf and elderly people.  English, Korean and Chinese captioning is strongly required, but it is impossible by broadcasting alone.  So we are developing new services that synchronize captions from the Internet with broadcasting.  I'll show a demonstration of this service.  Just wait.  Okay.  Just a minute, please.  (off microphone) This is an example of a (off microphone) programme (off microphone) NHK.  It was such a case.  Of course (off microphone) is available here.  In our system (off microphone).

This is a caption service example here.  Here (off microphone) this is the caption menu.  So with this service you can (off microphone); these captions are coming from the Internet.  For example, unfortunately I have no (off microphone) language here today, but here I introduce (off microphone) captioning is available (off microphone) programmes.  Likewise, any language can be provided with this system.  Okay.

(off microphone) just a simple example.  Today it is impossible (off microphone) -- please change the screen.  Yes.  This is an example of (off microphone) Japanese sign language (off microphone).  Okay.  Thank you very much.  (off microphone) Internet and cloud computing is (off microphone) our broadcasting service but (off microphone) Internet services.  Thank you very much.
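
The details of NHK's system were largely inaudible here, so the following TypeScript sketch only illustrates the general technique the speaker describes: caption cues for extra languages are fetched over the Internet and displayed when the broadcast clock reaches their timestamps.  The cue format, URLs and clock source are all assumptions, not NHK's actual design.

```typescript
// Illustrative sketch of synchronizing Internet-delivered captions
// with a broadcast programme. Not NHK's actual implementation.

interface CaptionCue {
  timeMs: number; // presentation time relative to programme start
  text: string;   // caption text, in whatever language the service offers
}

// Extra-language captions come over IP, since the broadcast channel
// itself may only carry (say) Japanese captions.
async function fetchCaptions(url: string): Promise<CaptionCue[]> {
  const res = await fetch(url);
  return (await res.json()) as CaptionCue[];
}

// Show each cue when the broadcast clock reaches its timestamp.
// `broadcastClockMs` would come from the receiver; it is a callback
// here so the sketch stays self-contained.
function playCaptions(
  cues: CaptionCue[],
  broadcastClockMs: () => number,
  show: (text: string) => void
): void {
  const pending = [...cues].sort((a, b) => a.timeMs - b.timeMs);
  const timer = setInterval(() => {
    const now = broadcastClockMs();
    while (pending.length > 0 && pending[0].timeMs <= now) {
      show(pending[0].text);
      pending.shift();
    }
    if (pending.length === 0) clearInterval(timer);
  }, 100);
}
```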



>> JONATHAN CHARLES: Thank you very much indeed.  Very interesting changes there; I was particularly taken with that multi-lingual subtitling.  That really is absolutely fascinating.  Let's move from the traditional broadcasters and what they're doing to another part of the technology industry.  I'm delighted to welcome Patrik Fältström from Cisco Systems.  Patrik?



>> Does that mic not work?



>> Hello?  Yes?



>> It's working fine.



>> Thank you very much.  Patrik Fältström, Cisco.  We at Cisco are most well-known for making equipment and software that you buy once, you put it in the cupboard and forget about it, and you don't actually remember you have the equipment there until ten years later something doesn't work, and the people that installed the gear don't even remember it.  Every year we discover some routers that have been (off microphone) for more than ten or 12 years without (off microphone), and I think the record we have was built around 1992 or 1993 and is still (off microphone).  Various problems in them.  But we at Cisco also do things like (off microphone), which is the software that we're using here, that you can see on the screen.  And this is an example of a (off microphone) application that (no audio).



>> JONATHAN CHARLES: While we're waiting I'll say hello to Greg Fields from RIM, the designers and manufacturers of the BlackBerry -- and we are actually getting some captioning now, which is very good, but I wonder whether we'll get it on Patrik's mic.  Greg Fields was hoping to join us.  He can't get into the debate right now, probably some congestion, but hopefully he'll join us.



>> (off microphone) hear what I'm saying?  We develop this application to look the same regardless of the (off microphone) system or platform you're using, and that is the (off microphone) way to support the users.  To some degree there is the same kind of situation (off microphone) at the BBC, the development service.  But the other one, for the user, (off microphone) the BBC application should work the same way as the WebEx application, which means the coordination of the look and feel, and where the (off microphone) are and where the (off microphone) are, should be the same in two different applications from two different vendors.  In some cases even two vendors which are competitors, and you might understand that the interest of (off microphone) is not very high.

It is also the case that (off microphone) manufacturer (off microphone), regardless of whether that's (off microphone) or whether that's open source, have choices in a (off microphone).  (off microphone) up at the top of the menu on the screen, while in a lot (off microphone) you have the menus on (off microphone).  That seems like a simple thing.  So the question is, when you develop an application, like (off microphone), should the menus be at the top of the screen or at the top of the window?  And my personal view is that they should be wherever it (off microphone), regardless of whether it's (off microphone), all the different operating systems.

Now, (inaudible) which enables you to run applications on multiple systems: (off microphone) you're using as a programmer, that library is placing, for example, (off microphone) and windows in the specific place according to that library, which means that the application will behave the same on all platforms but (off microphone) like applications on the platform.  Okay.

So what I'm trying to say here is that one of the biggest problems I think we have today is that the libraries (off microphone) that application developers are using are not good enough.  The actual libraries we have must be better.  They must be developed and created so that when you compile an application to run on a Mac, it looks like a Mac application, and when the same application is compiled (off microphone), it looks like a Windows application.  On top of that there are several good things with this.  First of all, (off microphone) many different applications on the same operating system, whether or not that's (off microphone) the details of the (off microphone); the various applications (off microphone) interface of the operating system and (off microphone) specific functionality in the application itself.

So, for example, if I am going to watch TV, I'm interested in selecting what channel I'm going to see.  I don't want to learn how to start and stop the programme; that should work the same way in all applications.  Now, of course, that is a good thing for users, as we just heard my friends from the BBC saying, but there is a (off microphone) more important thing, and that is all the various (off microphone) software that specifically disabled people are using for text-to-speech, the various types of both hardware and software (off microphone) that are used by people that cannot use the normal user interface in terms of keyboard and screen, et cetera.  So it's not only humans that determine the user interface, it's (off microphone) better.

And (off microphone) develop better libraries for application developers: libraries that (off microphone), but also libraries that support from the very beginning the needs and tools that disabled people can handle.
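
Patrik's argument is essentially about where platform conventions should be encoded.  A minimal TypeScript sketch of the idea follows; the type names and the two placement conventions are invented for illustration, not taken from any real toolkit.

```typescript
// Sketch: the application declares its menu logically, and the library
// decides platform-appropriate placement, so the app looks native
// everywhere and assistive technologies find controls where they
// expect them. All names here are invented.

type Platform = "mac" | "windows" | "linux";

interface MenuItem {
  label: string;
  action: () => void;
}

interface PlacedMenu {
  location: "global-menu-bar" | "window-menu-bar";
  items: MenuItem[];
}

// The library, not the application, encodes each platform's convention:
// on the Mac the menu belongs at the top of the screen, on Windows and
// most Linux desktops at the top of the window.
function placeMenu(platform: Platform, items: MenuItem[]): PlacedMenu {
  const location =
    platform === "mac" ? "global-menu-bar" : "window-menu-bar";
  return { location, items };
}

// Application code stays identical on every platform:
const menu = placeMenu("mac", [
  { label: "Open Channel...", action: () => console.log("open") },
  { label: "Quit", action: () => console.log("quit") },
]);
console.log(menu.location); // "global-menu-bar" on a Mac build
```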



>> JONATHAN CHARLES: Patrik, thank you very much indeed.  Just before we go on to our next speakers -- we will have a big debate at the end -- has anybody got questions so far for the speakers we've heard, the BBC, NHK or Cisco Systems?  If you could go to -- we should have a microphone on one of the stands there.  We'll get a microphone into a central position from somewhere.  David will.  Thank you very much.  And if you come up to the microphone, first say who you are and where you're from.



>> Hi, I'm Mary Ladd and I'm from (off microphone), and I'm a software developer.  I liked what he was saying from Cisco, but when we say we should have libraries, I mean, maybe Cisco and your partners, you know, Microsoft, you know, Google, you know, the different mobile application people.  So is there any effort happening to give us those tools?  That would be great.



>> Thank you for asking that.  That's what I forgot to tell you about.  So absolutely, even though developing those libraries is a very boring task.  You can never really sell the libraries and make a lot of money.  It's more interesting to write the application.  So I think it's the responsibility of every application developer to increase the quality of the libraries and, as was said, give back, for example, to the community in the form of open source.  We at Cisco are of course spending some time in giving back to these libraries, but because we are new on user interface issues, I admit that we have not been doing that so much yet, because so far for WebEx, for example, we have been fighting just to make the application work at all, okay, which to some degree is pretty important.

But what we have been doing, and what we have a lot of experience on, is GCC, the compiler.  Because we are one of the manufacturers in the world that run operating systems on the most weird hardware you can ever think of, we're one of the companies in the world that probably submit the most code to GCC, the compiler that the developers are using, to make sure that there are no bugs in it.  For all our hardware, regardless of what it is, we are compiling everything on the same computer.  Okay.  So we are cross-compiling to 25 different hardware architectures and we are very much supporting GCC.  But I agree with you: don't just wait for Cisco and others to do it.  We should do our part and I hope other people will do their part.



>> JONATHAN CHARLES: Patrick.  Thank you.  Any other questions?  Yes, the lady there.



>> AUDIENCE MEMBER: Hi, my name is Maria Casey and I'm an ISOC ambassador.  I'm from Ireland and I'm speaking for myself.  My question is for Gareth from the BBC, who spoke first.  I was looking at your PowerPoint slides and I was noticing you do cater for a lot of disabilities, which is great for the BBC, but my question is, when you're trying to provide access, which disabilities do you cater for the most and which gets the priority and the most dedicated (off microphone) and high priority (off microphone)?



>> (laughter)



>> GARETH FORD WILLIAMS: How long have you got?  There are some very big questions there.  I think it's a really hard thing to say one group has higher priority over another, so you have to frame this within a certain context.  When you're looking at things like the consumption of broadcast content, which is AV, on whatever platform it arrives, via satellite or Freeview or however it's received, you have to take into consideration that people with sensory disabilities are going to have the hardest time trying to consume that content.

So traditionally, the BBC started subtitling back in the 1970s, I think, and we're doing 100% subtitling on all our content.  I think there's an official figure of 10% on audio description, and I think we're hitting nearly 20%, and there's a lot of sign support, so a lot of our content is signed.  So from a broadcaster's point of view that's one consideration.  Then from a service provider's point of view, where we're getting involved in the development of new platforms and we have on-line services, you've got to take a much more pragmatic approach, because you're turning around and saying, actually, no, we're talking about not just the sensory disabilities but cognitive and physical as well.  And you've got to open this up and say, actually, we've got to get everyone, because they're all licence fee payers, and if they're all licence fee payers they have the right to demand value, and you can't prioritize one against the other.  So you try to understand where you can actually bring value: you look at it from a technology point of view, what can we do that will enable, from a service point of view, from a development point of view as well -- the technology we're building on, like our templating system, for example, or the ATK, which is the personalization system.  That's not a service as such.

But if we're then developing a Web page, do we have, as was already pointed out, a single way that we mark up all our pages, a single way that we present the navigation, global and local, on all our pages?  There's a lot, and it takes time; it's taken us years to develop our own set of standards and guidelines, using W3C, using expertise that's already out there.  And where there are holes, looking at it from a broadcaster's point of view, there are always holes, it's coming up with our own solutions and then sharing them with other broadcasters.  So it's one of those things where you can't turn around and say one group is more important than another, but you can recognize where one group will have a harder time accessing certain types of content, and then you need specific strategies to get around it.

So at the moment there's a big project going on in the BBC called Games Grid, which is looking at all of the gaming that goes on on our on-line services.  We're trying to put together an accessibility strategy for gaming: we're looking at interaction patterns, assistive technology, whether there is AV in it, whether it needs subtitling, trying to understand what it would need to make gaming as inclusive as possible.  I'm sorry it's not a direct answer, but those are very big questions.



>> JONATHAN CHARLES: Gareth, very big indeed.  Are there any more questions?  We'll get you a microphone.  Hold on, David is approaching you.  David Wood.  Okay.  Thank you.



>> AUDIENCE MEMBER: Thank you, yes, I have another question for the gentleman from the BBC.  I'm a Linux user and you mentioned you were always trying to make your services independent of a browser.  Would they also be available in Firefox running on Linux?  Have you touched on that?  I also wanted to ask if any of this interface software is being open sourced, and the reason is very practical.  I mean, a lot of the projects that have that kind of licensing are able to benefit from feedback and also code contributions from the community.  So it speeds up your development, and the whole feedback loop is much richer because there are technical people who are able to help you innovate and include more of your audience.  Thank you.



>> JONATHAN CHARLES: Gareth?



>> GARETH FORD WILLIAMS: There's a couple of questions there.  I think the first thing to say is yes, we do have browser support standards, and we do support a lot of browsers.  I mean, I think it was only a couple of years ago that we dropped Netscape 5.  We really do test and make sure that our on-line services are available on as many different browsers and platforms as possible, and where there are issues, we have our own Web site that will tell you, okay, where you have a problem you may be able to correct it within the browser.  So we have a Web site, My Web My Way, which takes you through the different things you can do to get the most benefit on your platform.  We test constantly: to get the iPlayer onto the Wii, all sorts of things at the minute, so we know that people can access it better.

I think -- what was the second question, Jonathan?



>> JONATHAN CHARLES: Second question was about open source.



>> GARETH FORD WILLIAMS: Oh, open sourcing.  Again, there's kind of a philosophy within the BBC that we build stuff and give it away.  You know, we're not a commercial organisation.  Occasionally there are issues around doing that, and we can't get involved in areas where there are fair trading issues.  For instance, the BBC would never be allowed to build a screen reader, but we can occasionally look at where there are holes in assistive technology provision and actually put a case together, and the idea eventually, yes, once we've got it working on our systems, is to engage with the wider community.

I think we recently open-sourced a JavaScript library, which has a lot of ARIA implemented in it already.  It's called Glow.  By adopting that library you get all of those additional benefits from ARIA through it.  So I think open sourcing, yes, is an extremely important way forward for us, and we are doing it where it is legally possible (chuckle), but we're not always developing things on our own, so we have to take our partners' rights into consideration on this as well.



>> JONATHAN CHARLES: And commercial competitors.



>> GARETH FORD WILLIAMS: And commercial competitors, yes.  We can't damage the marketplace.



>> JONATHAN CHARLES: Gareth, thank you very much indeed.  Any final questions before we move to our next presentation?  You'll have a chance later to ask more questions.  Let's move to the second part of our debate today, which is about the technical accessibility requirements in this new app world we're in.  I would like to welcome Shadi Abou-Zahra from the World Wide Web Consortium; we'd like to find out what it has actually done, which is a fair amount.



>> SHADI ABOU-ZAHRA: Hello.  Good morning.  As mentioned, my name is Shadi Abou-Zahra, from the World Wide Web Consortium.  I'll be talking about mobile Web accessibility.  You have the slides.



>> Yes, do we have it?  It's called W3C apparently.  Do we have that?  Do we have W3C -- here we go -- ah, and you'll need a clicker.



>> SHADI ABOU-ZAHRA: So, back again, sorry, but mobile Web accessibility.  So I wanted to step back a little bit.  At W3C we obviously develop standards so this discussion will get a little bit technical, but to step a little bit back and look at mobile technology as a whole, one of the questions I think in this workshop was will apps provide a golden age for accessibility?

My idea is that it already has.  I think mobile in itself is an accessibility improvement.  Technology as a whole, as it progresses, provides more accessibility.  And if we look a little at the mobile devices themselves and at the number of accessibility features already inside them, you might not even have thought about those as being accessibility features: the idea of being able to have alerts in different modalities.  You can have visual alerts, you could have them as sounds if you can't see the visual alerts, or you could have the phone vibrate to give you an alert in a tactile way if you can't see or hear the content.

Now, that sounds very basic, but those are a lot of the feedback mechanisms that people with disabilities have been using for ages, like, for instance, the doorbell that will blink so that people with hearing disabilities know that the doorbell has been pressed.  And this kind of stuff is making its way into mobile technology.

Look at, for instance -- I think I pressed one too much, sorry about that -- the profiles and the customization options: being able to customize your phone to set the colors, for instance, or to set the font size.  So my phone will have different settings than your phone, so that it accommodates my particular needs or my particular accessibility preferences.

And the list goes on.  Touch screens and on-screen keyboards: again, those are accessibility features that have been used by people with disabilities for ages, but they're now making their way into mobile devices as a whole, because they're easier for everyone to use and provide more flexibility.  And so on: text-to-speech, speech recognition.  Hands-free computing is the buzzword, so that when you're driving or doing some other task, you can still use your phone or the services on the phone.
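
As a small illustration of the alert modalities Shadi lists, here is a TypeScript sketch that delivers one event visually, audibly or by vibration according to a user profile.  It uses standard Web APIs (the Vibration API and Web Audio) where they exist, but it is an illustration of the idea, not a production notification system.

```typescript
// Sketch: the same alert delivered in the modality the user can perceive.

interface AlertPreferences {
  visual: boolean;    // flash the screen, like a blinking doorbell
  audible: boolean;   // short beep
  vibration: boolean; // tactile, for users who cannot see or hear it
}

function notify(message: string, prefs: AlertPreferences): void {
  if (prefs.visual) {
    document.body.style.outline = "8px solid orange"; // simple visual flash
    setTimeout(() => { document.body.style.outline = ""; }, 500);
  }
  if (prefs.audible) {
    const ctx = new AudioContext();
    const osc = ctx.createOscillator();
    osc.connect(ctx.destination);
    osc.start();
    osc.stop(ctx.currentTime + 0.2); // 200 ms beep
  }
  if (prefs.vibration && "vibrate" in navigator) {
    navigator.vibrate(200); // 200 ms pulse (Vibration API)
  }
  console.log("alert:", message);
}

// A deaf user might choose visual plus vibration only:
notify("New message", { visual: true, audible: false, vibration: true });
```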

When we look at the services side on the phone, so not only the device itself but the services behind it, we also see a lot of accessibility features, and again you may not have thought of those as being accessibility use cases.  On phones you have different types of services.  The Web is currently predominantly text-based, but the Web does not need to be text-based.  There are (off microphone)-based services, and they are emerging increasingly, and voice-based services are another idea of having the Web.

There's the idea of the Web having resources, services that are linked; you can link between them, jump between them.  You also have increasingly multimedia or video-based services.  I think there was an example shown earlier about YouTube.  Especially if we're talking about broadcasting, I think of the video-based services, and W3C is working at the moment on video on the Web, to help link between video-based resources.

Instant communication -- I know we'll talk later about realtime text as an example of instant communication, but I think we all know those features and services.  Twitter, for example, is a text-based communication service.  The funny thing is that being able to send text messages is something people with hearing disabilities have been doing ever since the inception of the mobile phone, or the inception of text messaging, because that's a comfortable way, or one of the ways, in which they can communicate.

Now, opening it up to the wider public is just one example of how text messaging services spread.  And not only that: when mobile phones started to appear with cameras and video-based services, people with hearing disabilities started to send sign language videos to each other, and that is now spawning a new service called Twideo, in which instead of sending text messages you can send small videos out to the world, and so on.  So the list goes on and on, but things like mapping and location-based services, for example, can help guide people who cannot see the area, for instance, but also people with physical disabilities.  I use Google street maps a lot, even though that was not a service that was developed for accessibility, but very often I can see on the street maps whether where I'm going has any stairs or steps at the entrance, whether there is a disabled parking place nearby.  I can see all this on-line and I can benefit from accessibility features.

So the idea here is that mobile technology as a whole provides incredible possibility, and I think we've only started to scratch the surface of what's possible and what kind of accessibility will emerge in mobile devices and mobile technology as a whole.  But there's a lot of work that needs to be done.  Mobile technology is often not as accessible as it should be or as it could be.  So there are a couple of layers, a couple of things, that need to be tackled.  First of all, the hardware is incredibly important in mobile devices.  For instance, you need to think about the phone being able to vibrate.  You need to think about the camera on the other side so you can exchange sign language videos or communicate in sign language; if there is no video on the other side of the phone it gets very difficult, and so on.  So the hardware is one layer.

The next is operating system accessibility, having accessibility native in the operating system.  On the desktop we don't talk a lot about it because it's getting better, but on mobile it still needs to go a long way.  For instance, until very recently some of the mobile phones, I think, would only relay the menus down to the third level to the text-to-speech system, so if your application was at a lower level, you would not have access to it.  That's an example of operating system accessibility support that needs to be done.  The next layer is the application programming interface, so that's all the dialogue boxes and the libraries and so on that Patrik has been talking about, and there again a lot of work needs to be done on mobile.  And obviously the browsers and the software, the apps on the mobile, need to consider accessibility as well.  Finally, the actual point that I want to talk about a little more is the accessibility of the actual content, so all the services that you're providing and how to make those accessible.

So just to make sure that we're all on the same page, and just to realign: there are shared experiences, or shared needs, between mobile device users and people with disabilities in general.  For instance, one of the big requirements of accessibility is having text alternatives for images or other non-text content, for instance for videos and sound, the idea being that you may not be able to see the image.



>> JONATHAN CHARLES: Let me just stop you a minute, Shadi.  We seem to have lost captioning again on your microphone.  Can we do something about it?



>> SHADI ABOU-ZAHRA: Test test test.



>> JONATHAN CHARLES: We've lost captioning on all microphones.



>> I can get Roy.  Hold on.



>> JONATHAN CHARLES: We'll try and ring someone.  Get it sorted out.  We'll see if we can bring back captioning.  Just bear with us.  I apologize for all this.

(pause)

Okay.  Someone is coming along to try to sort that out.  If you'll just bear with us for a minute, and we'll attempt to deal with that.  Sorry, we do have some people who are relying on the captioning, so we do have to make sure that we have it.  I'm sure you'll understand that.



>> JONATHAN CHARLES: Here's Andrea Saks.



>> ANDREA SAKS: Hi, I'm the Dynamic Coalition on Accessibility and Disability coordinator.  Last time at the IGF not every workshop was captioned, and this time every workshop is.  And I thought I was loud, but now I can be loud.  Not every workshop was captioned last time.  This time it is.  It's an incredibly big job, so consequently it does require patience, because they've never done it before, and Roy -- it's back, it's back, hooray!  All we need is Roy (singing).  All we need is Roy.  Roy is a master.



>> JONATHAN CHARLES: Roy, thank you very much.  Indeed, let's carry on.  Shadi, back to you.



>> SHADI ABOU-ZAHRA: Okay.  Thank you, so I was talking about the shared needs between mobile device users and people with disabilities, and I was bringing the example of text alternatives.  So a description for an image so that if you cannot see the image, for instance, it can be read out loud to you or you can feel it in braille.

Now, text alternatives for non-text content are also incredibly important for mobile users, because you very often don't have the bandwidth, or it's very expensive, so you don't download the images, you don't download videos, and you want to read the text alternatives for them.  Being able to access the content using just the keyboard, or the keyboard interface, because you may not have a mouse on the phone, a joystick or whatever, and so on.  So the list goes on and on.
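
To make the text-alternatives requirement concrete, here is a toy audit in TypeScript that flags images with no text equivalent.  It is only a sketch of the principle; it is not a WCAG conformance checker, and real evaluation tools do far more.

```typescript
// Toy audit: every <img> should carry a text alternative. An empty
// alt="" is legitimate for purely decorative images; a missing
// attribute is the accessibility failure.

function auditTextAlternatives(root: Document): string[] {
  const problems: string[] = [];
  root.querySelectorAll("img").forEach((img) => {
    if (img.getAttribute("alt") === null) {
      problems.push(`Image without text alternative: ${img.src}`);
    }
  });
  return problems;
}

// Usage, e.g. in a browser console:
// auditTextAlternatives(document).forEach((p) => console.warn(p));
```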

One of the points also is separating the presentation from the structure, so that you can get to the content or adapt it according to the delivery channel.  I think that's one of the very important points of accessibility and of mobile content as well.

So, some of the standards we have at W3C: we have the domain called the Web Accessibility Initiative, where I work, and we have a whole set of standards for accessibility.  The first and probably most known is the Web Content Accessibility Guidelines, which describe what accessible Web content is and how to make the content prepared and situated so it's accessible for people with disabilities.

The next is the User Agent Accessibility Guidelines; user agents are things like browsers and media players, like the iPlayer from the BBC, for instance.  That would be a typical thing that would follow those guidelines in order to provide accessibility, or relay those accessibility features in the Web sites.  There are also the Authoring Tool Accessibility Guidelines, which cover all the tools used to produce the content.  I think it's incredibly important to remember that we need to facilitate the production of accessible content and make it easier, especially when we're talking about social media and user-generated content and so on.  And last but not least, ARIA, which was mentioned briefly by Gareth: it's a way to make those complex applications, which we're seeing increasingly used in mobile services, accessible to the broadest audience possible.
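
For a flavour of what ARIA adds, here is a short TypeScript sketch that turns a generic element into a keyboard-operable toggle button.  The role and aria-pressed attributes are real ARIA; the widget itself is an invented example, not code from Glow or any other library.

```typescript
// Sketch: a custom widget announces its role and state to assistive
// technology and stays operable by keyboard alone.

function makeToggleButton(el: HTMLElement, label: string): void {
  el.setAttribute("role", "button");        // announce it as a button
  el.setAttribute("aria-label", label);     // accessible name
  el.setAttribute("aria-pressed", "false"); // current state
  el.tabIndex = 0;                          // reachable without a mouse

  const toggle = () => {
    const pressed = el.getAttribute("aria-pressed") === "true";
    el.setAttribute("aria-pressed", String(!pressed));
  };
  el.addEventListener("click", toggle);
  el.addEventListener("keydown", (e) => {
    if (e.key === "Enter" || e.key === " ") {
      e.preventDefault();
      toggle(); // keyboard operability, not just mouse clicks
    }
  });
}
```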

Another area of W3C, called the Mobile Web Initiative, focuses on the mobile Web.  Some of their standards include the so-called Mobile Web Best Practices, and it's no surprise that the Mobile Web Best Practices actually have a lot of the requirements from the Web Content Accessibility Guidelines: for instance, as I mentioned, the color contrast requirements, text alternatives, having structures and so on.  But they also have some additional requirements that are specific to mobile, for instance things like battery lifetime or processing speed and so on.  And they also have, new in production, the Mobile Web Application Best Practices, which is (off microphone) to actual on-line applications rather than more static content.

So we put those standards in a mixer together, and out came some guidance for developers on how to actually combine these two sets of standards in order to provide mobile and accessible services, content and apps at the same time.  The resource that you actually need to go to is W3.org/wai/mobile.  That's the Web site which contains a lot of the information about how to make mobile Web apps accessible to the broadest audience possible and also usable on phones.  Thank you.



>> JONATHAN CHARLES: Shadi, thank you very much indeed.

(Applause).



>> JONATHAN CHARLES: And there is obviously an awful lot of work being done on all of this, not just by NGOs, private sector groups and all sorts of associations: not just the broadcasting unions but the telecommunications union, the ITU, which has been heavily involved in today's session.  All sorts of organisations are contributing to this.  I'd like now to call upon (off microphone) from the Real-Time Text Taskforce.  He's going to explain what its role is; it is a not-for-profit organisation.



>> Can everybody understand me?



>> JONATHAN CHARLES: Yes.



>> Okay.  The slide.  Okay.  My name is (off microphone).  I'm the director of the task force and an ambassador as well.  Thank you for having me come here.  And -- let me try to get this.  Okay.  Good.

As we see in the presentation here, the mobile Internet is becoming more and more important for all of us; we are moving from fixed Internet to mobile.  And if you want to be part of society, you actually need the Internet for your professional and personal life.  We all communicate using the Internet.  We have our contacts, we also have our e-mails, information, our entertainment.  So it has quite an important impact on our lives.

Also, as you see right now, we're moving more and more to mobile devices, and more or less everybody has a mobile phone.  Let me ask: which of you does not have a mobile phone here?  Nobody?  Like I said.  A nice thing about mobile phones is that they are becoming more and more powerful as well, which allows you to do more and more things: the iPhone and Android phones, your BlackBerry, all those smartphones.  And as Shadi told you, more and more service features are coming as well, so more applications.

Now, there are two possibilities.  If you have a disability and a phone or application cannot work for you, that is a bad thing; or mobile devices and applications can make your daily life easier, and that's the good thing you want to happen.  So mobile devices should be able to offer the same level of access and usability for the mobile Internet as everybody else enjoys.  Of course, it's not always possible to (off microphone) some disabilities, so you have to be able to use mobile devices to make your daily life easier: services which make it easier to do things which would otherwise be more difficult, supporting applications and services.

Then we come to the (off microphone).  When you are developing applications, you use open standards as well, as we mentioned before several times.  But it's also very important that the people building applications make sure the users who are going to use them are involved (off microphone), for they know what kind of things people with disabilities need.  They know what actually helps, and they can test and play with it, so make sure (off microphone) in that case.

So I'm going to talk about realtime text and the good things about it.  At least with the -- please put the mouse on the picture, behind the computer; the movie will be playing; click the mouse when it -- now you actually see realtime text: as you are typing you can immediately see what is happening, as if you're looking over the writer's shoulder.  This allows text to be used in the same equal way as (off microphone) being used.  No need to wait.  You don't need to wait, because as I'm talking I start my line, and you actually know that's the beginning of my line, instead of having to wait.  So realtime text is (off microphone) text, and that is a very powerful feature, and I will talk more about it.
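
The defining property of realtime text, that every edit is transmitted as it happens rather than when a message is "sent", can be sketched in a few lines of TypeScript.  Standardized realtime text actually travels as T.140 over RTP (RFC 4103); the WebSocket and message format below are only assumptions to keep the illustration self-contained.

```typescript
// Sketch: transmit the text as it is typed, so the reader watches the
// line form, as if looking over the writer's shoulder.

function attachRealtimeText(input: HTMLInputElement, socketUrl: string): void {
  const socket = new WebSocket(socketUrl);
  input.addEventListener("input", () => {
    // Send on every edit, instead of waiting for a "send" button
    // as instant messaging does.
    socket.send(JSON.stringify({ kind: "rtt", text: input.value }));
  });
}

// The receiving side repaints the partner's line on every packet:
function showIncoming(socket: WebSocket, display: HTMLElement): void {
  socket.addEventListener("message", (event) => {
    const msg = JSON.parse(event.data as string);
    if (msg.kind === "rtt") {
      display.textContent = msg.text;
    }
  });
}
```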

Now we have an impression of what realtime text looks like.  The Real-Time Text Taskforce foundation was founded (off microphone), and the mission of the Real-Time Text Taskforce is to make sure we have (off microphone) harmonized text communication (off microphone) solutions, just as available as voice.  You can use realtime text anywhere, with any person (off microphone), just like we now use a phone: whatever brand of phone (off microphone), you pick up the phone, you expect the phone to work.  It might be a different language being spoken, but the phone works.  It happens.  (off microphone) promoting it and making sure that the expertise is bundled there, including accessibility needs and requirements, so you know how to create it, how to implement it.  And of course (off microphone) to promote design and help implementation (off microphone).  Of course we want people to build and use it, and see realtime text (off microphone) participate in the task force, including if you have a (inaudible) speech disability.

Everybody is welcome, because I know some friends who actually have a visual impairment who can actually read (off microphone) text.  They actually like it, instead of having to wait for everybody.  Specifically (off microphone) to focus on it, and technology (off microphone), so we always make sure that with a different (off microphone) technology we can connect to one standard of realtime text.  That's all (off microphone); now it has a name, and if you're going to implement it, it's important to know that.  But the bottom line is no more (off microphone) effect.  Can somebody click -- click?  Okay.

Well, you see (off microphone) speaking different languages.  This is the technology level; the hope is to make sure everybody speaks the same language using realtime text (off microphone) conversation.  Please, next page.

Now, I'll give you some applications using realtime text.  One is in creation in AEGIS, the research project (off microphone) and inclusion, and they are looking into how to implement open standards and make mobile devices accessible, for example.  They have been doing research on what kind of realtime text (off microphone) systems are the best to implement, and then they started to develop it, and it's based on the mobile (off microphone).  You'll know, the users know, it means that Nokia phones, for example, (off microphone) phones can be used, and the BlackBerry.  So text communication at this moment is (off microphone), but it's not finished yet.  The working prototype is expected at the end of this year.

There's another open source effort going on; that's a (off microphone) phone.  At this moment it's only on iPhones or Android phones, but the (inaudible) task force is working on implementing the realtime text part of it.  It's expected to be finished at the end of this year or the beginning of next year; (off microphone) the Web site so you can try and play with it, and hopefully use it.

There is an example of an existing application using realtime text on the BlackBerry.  (off microphone) and the open -- task force, more than 1,000 deaf people are using it right now.  I actually have it on my BlackBerry as well (off microphone).  And it's using (off microphone).

Click.  These are some examples of realtime text: a deaf woman with her child (off microphone), so they can use text messages, but they must use (off microphone) because you really need (off microphone) to know you're there, what's going on, to have a conversation.

And of course for me it's (off microphone) essential to have it, but you're all welcome to have something similar as well, because with text there's (off microphone) not as much background noise.  I mean, if I'm going to text communicate with (off microphone), I don't want everybody to hear what I'm saying: oh, yes, dear, love you.  Keep it private.  So there are many reasons to use realtime text, and of course some might love to hear it.  There's another existing application that uses realtime text, especially for the Android phone, and this is realtime text included with video and audio; it's called total conversation, because if you decide the connection is strong enough you can have video on it as well.  That combination is done by Opics United Kingdom, you guys know about it.  And it's starting at the end of this month and it's called (off microphone) mobile.

You can see it here in the pictures (off microphone), and hopefully a lot of deaf and hard of hearing people will be involved in testing it and improving its reliability.  It's a good beginning, anyway.

(off microphone) It is not personal, this part, but a lot of younger people are saying they hate the old text phone.  It's a clunky old steam-powered thing of telephony; they don't want to use it anymore.  But if you use a text application with video and audio on a mobile phone, it can be a touch phone, a mobile text phone, (off microphone) to communicate and call each other, like most hearing people can.  And it means that people that need accessibility can join the world of IP (off microphone) as well.

And I will give you some examples of additional services made possible.  For example, live subtitling (off microphone), like the captioning you see right now, is an example of using realtime text.

Also, it allows more one-to-one interpretation as well, because if you don't understand what a person is saying, you put the microphone to it and an interpreter located in America or France will type the text, and then you understand the person; the message comes (off microphone) the Internet into your eyes, and you can have subtitles all the time, realtime text.  That's my dream, actually.  Of course, you all know the menus: when you dial some (off microphone), that's press 1 for (off microphone), press 2 if it's stolen, press 3 if (off microphone), press 4, press 5, if you can still remember it all.  If you forget, with realtime text you immediately get the menu on the screen as text.  I think it's much quicker.

Sometimes there may be a situation where you need realtime text.  You want to dial 911; there's a burglar hiding under your bed.  If you don't want to speak out loud, it's very easy to text it.  So realtime text in that situation would be quite useful.  I hope you won't be in that situation.

I've spoken to a number of younger people as well.  I sound old this way, but they laughed (off microphone) limitation, and (off microphone) realtime text: yes, much quicker, much more live, much more direct.  Wonderful.  So obviously (off microphone) for all, it's important and everybody benefits from realtime text.

If you have questions, or if you want more information, you can look at the Real-Time Text Taskforce site, www.realtimetext. (inaudible)

(Applause)

>> JONATHAN CHARLES: Andrei, thank you very much.  I can imagine a lot of young people will love that, whether they have hearing or not.  Andrea wants to say something.  Andrea Saks.



>> ANDREA SAKS: Thank you.  I'm the Dynamic Coalition on Accessibility and Disability coordinator for the IGF, and I also organize the Joint Coordination Activity on Accessibility and Human Factors for the ITU.  But beyond that I'm a telecommunications specialist for the deaf.  My father was the founder of the text phone that he wants to throw in the bin.  I don't really mind that, because I do understand; Andrei has been telling me for years -- and the one in Holland is based on a very bad code.  There are actually some still in use, and older people can't use the phone.  It's not in the bin yet, because it has a full keyboard.  Realtime text started with telephony, but that's not the point.  Why not use SMS instead of realtime text?  Why not use instant messaging?  The simple reason is, as you're hearing my voice, you're hearing every word as I speak it.

With realtime text you don't have the timing issues that you have with IM.  If everybody uses the same standardization of the current realtime text, which is now available and usable on the Internet as well as in mobile telephony and standardized by 3GPP, we can have real conversations.  When you're in an IM conversation, how many of you have typed really quickly and changed the subject, but the other person didn't keep up and you're completely lost?

Now, imagine if you were a person with a learning disability -- I'm dyslexic too, so we're in the same boat -- or you were autistic, or something else where your difficulty understanding the written word was worse.  That's a big problem.  And also, they're not interoperable.  You have Google, you have AOL, you have Microsoft.  We can't talk on all of them together.  And interoperability is really important, and that's something you didn't mention, so I'm going to spank you.  You have to include the fact that we need to have interoperability with text telephony, and realtime text works.

So call it text telephony -- I'm not done yet.  You've got to let me say one other thing.  The other problem is with SMS: the transmission.  I mean, how long does it take for that to get to another phone?  We don't actually know.  If the wireless is down, it could take three days.  If it's really fast, good broadband, it takes an instant.  Who will pick up the phone and read it?  We don't know.  Does the doctor read it, does he have his vibrate alert on?  We don't know.  But with realtime text you're live, you're talking.  It is so important that we get this particular technology into the Internet, into VoIP, into what he was describing as total conversation.  Total conversation is an ITU standardized service description.  It's not a standard, because it can use other standards from other areas.  There's a combination of standards in realtime text, which also utilizes standards from bodies like the IETF as well as the ITU.  So even though I do a lot of work with the ITU, I believe that everything should work together.

So proprietary standards are really my big bugaboo at the moment, especially when we have videophones in one country that can't even communicate within the same country because they're on different service providers, not to mention the divide with the U.S.  And if it's proprietary because you want to sell a lot of phones -- okay, guys, wouldn't it be so much better if we had more people using everything, and we could all sign over our videos?  

And I want to mention Gunnar's REACH112 project.  He has total conversation in 20 countries in Europe using realtime text, video and voice.  So a deaf person, a voiceless person, a deaf-blind person, an ordinary person, can use anything they've got that hooks into that and be able to contact the emergency services from wherever they are, and this applies to relay services too.

So I'm not going to smack you for saying TTYs are out, because they are and they aren't.  I'm just going to elaborate on what you were trying to say.



>> JONATHAN CHARLES: You have one minute to respond.



>> Let me indeed respond to that.  The nice thing about realtime text is that it's very easy to translate realtime text into TTY phone (off microphone) protocols.  For example, in the Netherlands it actually has (off microphone) allow connected TTYs, because indeed most phones are too small and too difficult to use if you are old.  There are some exceptions as well.  So I agree, I could have made that more clear.  (off microphone) support is also possible.

The second thing is interoperability.  Yes, I mentioned one form of realtime text (off microphone), but I didn't want to go too much into it.  The important part of the function of realtime text software is indeed interoperability.  So you have to make sure it works.  That's a very good thing to mention as well.



>> JONATHAN CHARLES: I'll leave you two to slug that one out afterwards like two old boxers in the ring.  I'd like to introduce now Arun Mehta.  He does fantastic work; if you want to see it, it's worth going to his Web site, skid.org.  There are amazing things on there, and he'll talk today on what has been done and what needs to be done in terms of standards and guidelines.  Arun.



>> ARUN MEHTA: Thank you.  Can I have the first slide?



>> JONATHAN CHARLES: Yes, you probably need that.



>> ARUN MEHTA: Okay.  My focus in my work with disabilities is now a lot with children with mental challenges, and in addressing that, one effectively deals with all disabilities.  Gareth talked about this one-size-fits-all not working.  We have an extreme case where you require a different kind of software for each person, because each mentally challenged child has a completely unique combination of abilities and disabilities, depending upon how the damage happened, when it happened, et cetera.

So among the mentally challenged, many are nonverbal and many have sensory issues, which means they're very sensitive to bright lights, loud sounds or maybe even touch.  I've been in computers now for decades, and I have rarely been so excited about a new technology as I am about smartphones, because there is so much in there that would be amazingly useful for children with mental challenges.

Now, we have on the smartphone a large number of inputs and outputs, and all of them can be great for a mentally challenged child.  A mentally challenged child works very closely with a caregiver, very often the mom, who gives up her professional life to look after the child.  It's very, very hard.  So on a smartphone, for example, the camera can act for the caregiver as a remote eye, to see what's happening in the vicinity of the child.  Likewise the mic to hear what's happening, the speaker to talk back either to the child or whoever the child is trying to communicate with.  The GPS (off microphone) where the child is.

And another very important thing about the smartphone, particularly for children, is that the phone is cool.  You know, what he was talking about -- big, clunky things.  Well, so far what we have been trying to get children to use is a netbook like this, which is portable, but it's not something that you want to take with you to the market or to the club or wherever else you're going just in order to be able to speak.  But a smartphone, and the Apple smartphone especially -- everyone is interested to know what you're doing and they like it, so that's important too.

Let me give you a very, very brief introduction to persons with mental challenges and where that comes from.  Of course everyone is unique, but a very common problem is that there are fewer interconnections between different parts of the brain.  So trying to get different parts of the brain to work together is a problem, and I have a whole lot of things listed here: for instance writing, which requires many muscles coordinating, or more complex language, which is another problem, as is flexibility.  

But when different parts of the brain aren't working well together -- for example, hearing and viewing may not coordinate very well, and I would be interested to know how the TV people handle that.  A person cannot be following what's happening on the screen at the same time as understanding what is being said, which is why, for example, mentally challenged people will often view the same video again and again and again, because they get different pieces each time that they have to try and combine.

Let me go on.  Now, here I'm trying to summarize and extract -- this is a whole presentation in itself -- how one best would write software that helps children with mental challenges: using facts, rules, only essential information.  Very important: no distractions.  You don't want, you know, lots of animations and flashing and all kinds of weird signs, unless they're significantly contributing to the information.  I mean, a lot of these things are probably good for everybody.  

Software needs to be forgiving of errors, and that's true for everyone, but even more so for people who are dyslexic and, you know, people who have other similar problems.  You really must have a very high level of automation.  You start the (off microphone) and forget about it.  You shouldn't have to keep interacting with the app.

And then there's the imagery we're using -- it's much harder to comprehend line drawings than it is to comprehend pictures, so that's another thing to bear in mind when writing apps for persons with mental challenges.

Okay.  Now, if we are trying to make smartphones more friendly to the needs of the mentally challenged, since every child is different, one is going to need to write fresh apps pretty much for every child.  We would like to take it to the point where the caregiver can do this herself, and eventually also teach this to the child, so that you are in a position to solve your own problems and the problems of people like you.  As they say, it's better to teach a person how to catch fish than to give a person a fish.

So I'm very excited about things like Google's App Inventor, and I would like to see these become a standard that makes it really easy to write apps that can work across platforms.  That would be really cool.

Patrick talked about how we need to standardize libraries and the interfaces that people use.  I totally support that, but another thing that is very important is that the inputs and the outputs should also be standard, so that you don't have to revise everything when you're writing for a different kind of phone.  It was discussed that making the interface customizable is hugely important, so you can throw out stuff that doesn't work for you and reduce the information to the minimum that does.  And, as was also discussed, having a similar interface across applications helps hugely, so when you're learning a new app you don't have to completely figure out how the damn thing works.
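
As a rough illustration of the kind of standard, customizable input/output layer being argued for here -- all names are hypothetical, not any real platform's API -- an app could address abstract channels that a caregiver remaps or switches off per child:

```python
# Hypothetical sketch of a standard, customizable input/output layer of
# the kind argued for above; none of these names are a real platform API.
# Apps write to abstract channels, and a caregiver can remap or disable
# a channel per child without the app being rewritten for each phone.
from typing import Callable, Dict

class ChannelRegistry:
    def __init__(self):
        self._outputs: Dict[str, Callable[[str], None]] = {}

    def register(self, name: str, handler: Callable[[str], None]) -> None:
        self._outputs[name] = handler

    def disable(self, name: str) -> None:
        # "Throw out stuff that doesn't work for you": drop a channel
        # entirely instead of overwhelming the child with it.
        self._outputs.pop(name, None)

    def emit(self, message: str) -> None:
        # Every remaining channel presents the same information.
        for handler in self._outputs.values():
            handler(message)

registry = ChannelRegistry()
registry.register("screen", lambda m: print("SCREEN:", m))
registry.register("speech", lambda m: print("SPEECH:", m))  # stand-in for TTS
registry.disable("speech")  # e.g. for a child with sound sensitivity
registry.emit("Time for lunch")
```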

Okay.  This is my last slide.  I hugely believe that rather than trying to fix the product, which is what we are doing a lot, it's much better to fix the process that makes the bad product.  And an excellent way of doing that is to employ developers who are disabled on the team.  You know, that is the best way to get accessible apps, and some of the best software that has been written for blind people has been written by blind people themselves, as Fernando and others will tell us.

The problem, of course, is that you don't have that many disabled developers, and the reason is that the entire educational system -- the higher education system, engineering colleges, for example -- is typically not accessible.  I am not aware of a single engineering programme in India, for example, that is open to blind people.  That really needs to change, you know?

Another very important thing is that if you want developers who are disabled to be able to work with you on app writing, then the developer tools need to be accessible, and that is often a serious problem.  So that's another area that smartphone people should pay attention to.

I think it's hugely important to focus on those with the most severe disabilities.  We had a lady asking what our priorities should be, and I think, you know, one very, very important segment is the deaf/blind.  These are people who have extreme information and communication challenges, and something like this could work quite well on a smartphone if it has good feedback.  We've also been discussing in the DCAD how we might work to develop something for the deaf/blind, maybe something with Morse code.  You can tap it on the screen and the feedback (off microphone) back to you.  So that might be a way to go.  
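
As a sketch of how the Morse idea might work -- the 250 ms cutoff and the truncated table are illustrative assumptions, not a finished design -- taps on the screen can be classified by duration and decoded, with a real app confirming each letter back through the phone's vibration motor:

```python
# Illustrative sketch of the Morse idea: classify taps as dots or dashes
# by duration and decode them into letters. The 250 ms cutoff and the
# truncated table (A-T only) are assumptions; a real app would confirm
# each letter back through the phone's vibration motor.
MORSE = {".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
         "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
         "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
         ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T"}

def classify_tap(duration_ms: float) -> str:
    # Short press = dot, long press = dash.
    return "." if duration_ms < 250 else "-"

def decode_letter(tap_durations_ms) -> str:
    # A pause between letters would end a group; for simplicity the
    # whole input here is treated as a single letter.
    symbol = "".join(classify_tap(d) for d in tap_durations_ms)
    return MORSE.get(symbol, "?")

print(decode_letter([100, 400]))       # dot, dash -> A
print(decode_letter([400, 100, 400]))  # dash, dot, dash -> K
```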

Now, as has also been mentioned, these kinds of things aren't just for persons who are disabled.  Depending on how you use the smartphone, you might be disabled, like when you're driving a car, or, for example, a teenager who wants to be texting with the phone in the pocket.  You are deaf/blind in that situation.  So an app that would work in that situation might also be quite interesting for teenagers.  This is something that needs to be looked at: every user of the smartphone, depending on the usage, is disabled in some way.  So accessible apps are more user-friendly for everyone.

And my last point really is that here in the Dynamic Coalition -- the speakers that you've listened to before me, and also in the other sessions that we are holding -- these are some of the best experts that you will ever come across in the area of technology for persons with access needs, and I think that we really should take advantage of the expertise available in groups like this.



>> JONATHAN CHARLES: Can I just ask you something?  I saw it in various interchanges you had in the run-up to this.  You think there's also something about the way apps work on cell phones that particularly suits children with mental disabilities, something to do with the fact that they can touch the screen, and that motivates them?



>> ARUN MEHTA: Thank you for pointing that out.



>> JONATHAN CHARLES: I thought that was very interesting and I wonder if you'd say more.



>> ARUN MEHTA: One of the problems a person with mental challenges might have is to make the connection between the movement of the mouse and the movement of the cursor on the screen.  You know, it's not an obvious connection for many people.  Likewise, pressing a key on the keyboard and having something happen on the screen -- again, there is a level of abstraction.  With a cell phone you're touching directly, and that's good for kids.  They love pointing and touching.



>> JONATHAN CHARLES: I think that's an interesting point about how cell phones operate, especially for people with disabilities.  We've heard from all our speakers now, so again, it's your turn to ask your questions.  We'd love to hear from you.  Does anyone have any questions now, not just for the three speakers we heard in the last session, but for anyone?  Yes, again the lady and then the gentleman.  We'll get you a microphone.  David is running that way.



>> AUDIENCE MEMBER: Just for the last speaker there -- is it Arun Mehta?  Just to make a point about one thing you said.  You think the access (off microphone) should be made for people with the biggest disabilities, which is something I agree with, actually.  But isn't it true that a lot of the time the phrase "the needs of the many outweigh the needs of the few" will actually come about, and it's the majority of people, the numbers of people affected with a minor disability, who will still get the most funds or the most help, and the people with the most severe disability, just because there might only be a few of them, might be ignored -- or maybe not ignored, but there's no aid for them.



>> ARUN MEHTA: Again, how much time do you have?  (laughter) Well, we've got about 15 minutes.



>> In India, autism is not a disability under the Persons with Disabilities Act.  When the head of the Autism Society of India, a woman with an extremely severely disabled daughter, wrote to the government to ask that (off microphone) be included, the reply that came back was that the government had decided not to do so, because to do so -- this is a quote -- "would take attention and resources away from those whose needs are greater."  This is the kind of attitude that you find very often, and this is something, you know, that one has to fight.  Persons with disabilities have to fight all the time, for every little thing, and the more severe your disability, the bigger the fight is.  And that's why they need a lot of people to help them.  

I mean, if you have a communication disability you cannot organise, you cannot come here and do a talk, so you need other people to do it for you.  And so this is therefore a request to everybody to pay a little bit more attention there.

The point that I made about app writing becoming simpler -- that's a huge, huge thing.  It is really becoming a lot simpler to write software now.  So this consideration that, you know, the money is limited and we can't do it, is less and less of an excuse, but still it is a problem.

What we want to do, and this is something that we are trying to start now, is an on-line self-help group for persons with mental challenges, where we treat the caregiver and the child as a team, and where every caregiver of a mentally challenged person becomes an expert in the field, because there is so little help available.

So we want this knowledge and information to enrich the entire community, to reach more people, so that an older caregiver who has had decades of experience in bringing up a mentally challenged child or children can be a great help to a young mother who has just discovered she has a child with disabilities.  And we would like the caregivers, really, to be able to develop apps themselves, for two reasons: one, the funding problem isn't there if you can develop your own apps; and two, once you teach the caregiver, you have a good chance that this knowledge might actually also reach the mentally challenged child, who then might become an absolutely great developer.  I mean, Silicon Valley is full of persons with autism -- you can look up "the geek syndrome."  Computers attract people with mental challenges.  So we would like them to become developers, testers, you know, all of these things.



>> JONATHAN CHARLES: Briefly, Shadi, I think you want to come in here.



>> SHADI ABOU-ZAHRA: Yeah, I do want to chip in a little bit on the prioritization question.  I think that's a very, very important, very good question, and I think one has to differentiate a little bit.  What I was hearing Arun say is: when you're developing policies or when you're researching, don't forget those disabilities we really know very little about and that have repeatedly been forgotten in many areas.  I'd completely support that.

Even when it comes to implementation -- like implementing, you know, a Web site or a service or an application -- I think it gets very tricky to start slicing and segmenting the individual groups and people and to try to prioritize them.  I think Gareth touched a bit on that.  

So the idea, what we do at the W3C Web Accessibility Initiative, is that we try to combine the requirements we know for people with different disabilities into unified requirements.  There needs to be a base level, so even if you decide to implement, say, a different combination of mechanisms geared to a specific set of people, still the default mechanism should be at least priority Double-A -- a decent level of accessibility -- so you can try to reach as many people as possible, and on top of that you could go beyond it.  But it gets very difficult if you start from the thought of, no, I'm going to pick and select who I'm going to cater to, because what very often happens, as Arun pointed out, is that the most lucrative groups, in whatever context that is, get addressed and the others get forgotten.



>> JONATHAN CHARLES: Shadi, thank you very much.  Listening to what we've been doing is Greg Fields from RIM, the developer of the BlackBerry, and I think he can hear us and join us now.  Greg Fields, can you hear us?  Your contribution, please.  

If you can't speak, Greg, you could give us something on the chat line....  We'll try again maybe just before we end.  

A gentleman wanted to ask a question, if you could go to the microphone or we'll bring a microphone to you, and if you say who you are that will be great.



>> AUDIENCE MEMBER: Thank you.  I'm Mr. Lundy from Italy.



>> Thank you.



>> AUDIENCE MEMBER: Just three quick questions.  The first one is related to speech recognition and (off microphone) translation.  We've heard here in this panel the promise that in the future we could each speak in our own language and, through speech recognition and (off microphone) translation, understand each other.  I'm afraid that at the moment we are quite far from that stage.  If you read what happens on the big screen with speech recognition, it's not really very good, apart from those who speak very good English and speak with (off microphone), and the automatic translators are not there yet either, although all these technologies are probably becoming more and more efficient.

And the second question is related to the integration of (off microphone) and graphics on computers and mobile phones, and whether these could be (off microphone) to help not only disabled people but also people who -- I think about indigenous people, whose languages often (off microphone) do not have an alphabet, so they cannot (no audio) -- all these things can create problems at the level of education and pedagogy.  I'm from a generation that was frightened when it had to do orthographic exercises at school, and today the children probably don't know this type of training, and probably it's a (off microphone) I want to follow up.



>> JONATHAN CHARLES: Thank you very much.  We don't have very long.  Who wants to answer one of those questions?  Gareth, you were straining at the leash.



>> (laughter) Was I?



>> JONATHAN CHARLES: You are now.



>> GARETH FORD WILLIAMS: I am now.  I don't think -- this isn't automatic speech recognition; there's someone at the other end listening and typing this.  But I wasn't here last year, and I actually do buy into the whole idea that text-to-speech and speech-to-text and translation in between is something that will happen.  How accurate that will be, how different languages and different modes of language and different structures of language interface with each other -- we see this quite often when we've tried to build things like on-screen avatars for people doing sign language, because BSL has a completely different language structure to spoken language, and it also has a different vocabulary in places as well as a limited vocabulary.  You try to automate the signing and you end up with sign-supported English with gaps.  So sometimes there are disconnects between languages, and that's an issue for the future.

Now, please, the rest of the question.



>> AUDIENCE MEMBER: One is this whole issue that has been coming up over the past few interventions here, the idea of mainstreaming, the idea that if you develop something for the disabled, it actually opens up an awful lot --



>> GARETH FORD WILLIAMS: Oh, absolutely.  One of the things -- subtitle everything, everything you've got, because it's the richest form of metadata, and if it's time-coded into your video you can search directly into content.  As soon as you make things accessible you find there are so many additional benefits.  W3C's work is absolutely brilliant, and we adopted and kind of adapted it -- it's semantic markup.  As soon as you make your content more readable for a screen reader, immediately Google loves it and your rankings (off microphone) go up in search engines.  So you find that everything you do has a knock-on benefit.  And that's the history of accessibility.
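
A small sketch of the search benefit described here -- hypothetical code with a deliberately minimal SRT parser, not the BBC's actual tooling -- indexing time-coded subtitles so that a keyword query returns the points in the video where the word is spoken:

```python
# Illustrative sketch of subtitles as searchable metadata: index an SRT
# file so a keyword query returns the timestamps where a word is spoken,
# letting you jump straight into the programme. The SRT parsing here is
# deliberately minimal, not production-grade.
import re

def index_srt(srt_text):
    index = []
    # SRT cues: a number, "HH:MM:SS,mmm --> HH:MM:SS,mmm", then text lines.
    for block in srt_text.strip().split("\n\n"):
        lines = block.splitlines()
        if len(lines) >= 3 and "-->" in lines[1]:
            start = lines[1].split("-->")[0].strip()
            index.append((start, " ".join(lines[2:])))
    return index

def search(index, keyword):
    return [(ts, text) for ts, text in index
            if re.search(rf"\b{re.escape(keyword)}\b", text, re.I)]

srt = """1
00:00:01,000 --> 00:00:04,000
Welcome to the accessibility session.

2
00:00:05,000 --> 00:00:08,000
Subtitles make video searchable."""

print(search(index_srt(srt), "searchable"))  # -> [('00:00:05,000', ...)]
```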



>> JONATHAN CHARLES: Emily Taylor is in the room now.  Emily?  And Emily comes from Oxil.  She was going to give a presentation but she was at another meeting.



>> EMILY TAYLOR: Sorry to be late.  I was in the critical Internet resources main session, so apologies for that.  As you say, Jonathan, I work with a company, Oxford Information Labs, Oxil, and we're lucky enough to be working with the BBC on their Accessibility Toolkit, and particularly text-to-speech, which is one point I'd like to pick up from the recent discussion of mainstreaming.  We're very well aware that even just in the U.K. there are 60 million adults with a reading age of less than 11, and so clearly those people will need tools designed for people with disabilities, such as text-to-speech.  These tools have an enormous role to play in enhancing the on-line experience of those who are less advantaged.

I'd like to highlight the BBC as the best example of a client from our point of view, because as experts in accessibility products, we struggle with the lack of standards -- and by that I don't just mean technical standards but a standard way of everybody doing this -- so clients don't know what to ask for or what to expect, and that does lead to a culture of box-ticking.  But the BBC actually care about what the text-to-speech sounds like, because their research shows -- and I'm sure Gareth has covered this -- that people who are less advantaged, in that economic bracket, will simply not understand a word if it is not correctly pronounced.  So this is a plea for quality rather than box-ticking: to understand that tools developed for disabled people have mainstream application and can enhance the Web experience for everybody.  Thank you.
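
One common way to protect pronunciation quality in a text-to-speech pipeline -- sketched here with hypothetical lexicon entries and a stand-in speak() function, not any vendor's or Oxil's actual API -- is to run text through a reviewed exception lexicon before it reaches the engine:

```python
# Hypothetical sketch of guarding pronunciation quality in a TTS
# pipeline: run text through a reviewed exception lexicon before it
# reaches the engine. The lexicon entries and the speak() stand-in are
# illustrative assumptions, not any vendor's actual API.
import re

PRONUNCIATION_LEXICON = {
    # word -> respelling the engine is known to say correctly (assumed)
    "Oxil": "Oxill",
    "IGF": "I G F",
}

def normalize_for_tts(text: str) -> str:
    for word, respelling in PRONUNCIATION_LEXICON.items():
        text = re.sub(rf"\b{re.escape(word)}\b", respelling, text)
    return text

def speak(text: str) -> None:
    # Stand-in for a real TTS call; just shows what would be spoken.
    print("TTS input:", normalize_for_tts(text))

speak("Welcome to the IGF session, with thanks to Oxil.")
```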



>> JONATHAN CHARLES: Emily, thank you very much.  Andrew Miller, perhaps I could call on you, our parliamentarian, and we'll bring you a mic, unless you have a loud voice.  

Actually, if you were reading the text of Emily's comment there, she said 16 million adults have a reading age of 11.  It came up as 60 million, but that is the entire population of the U.K.  (laughter) Make of that what you like.  16 million is probably nearer.



>> ANDREW MILLER: Andrew Miller, member of the British parliament.  I can't help but praise the BBC, because I feature quite extensively on their technology page today, but that's another story entirely.  One of the things I think is critical is that we get the Dynamic Coalition to think about how we get all of these presentations, and all the many other similar organisations that are working around the world, rolled together in a common document set, so that we can help people -- the software developers and so on -- realise that there are big, big markets out there.  

The opportunities that these technologies create go well beyond some of the narrow areas for which the first thought processes came about, and the applications that we've seen demonstrated today are ones that will become mainstream.  They'll become mainstream not because a lot of commercial guys out there are going to say that they've suddenly taken a particular interest in a particular type of disability, but because they see huge markets out there.  So the Dynamic Coalition has a hugely important job to do to help get that word out.



>> JONATHAN CHARLES: I'm going to give the last word to the Dynamic Coalition in the form of Andrea.



>> ANDREA SAKS: Thank you very much; you're absolutely right.  And we don't have a lot of resources, actually.  This is another problem.  Only from the ITU, and on occasion for specific requests -- like when we had to pay for captioning, as we did this time -- did we get actual money.  But the problem is more than that.  People have to join the Dynamic Coalition.  It's on the ITU Web site.  It's also on the IGF Web site.  We have telephone meetings.  They are accessible, they are captioned.  You are welcome to join.  I can't do it without you, and you're absolutely right.  So I run it, but it is not me.  It is everybody.  It's a democratic group -- and yes, since I'm being thrown out there to speak, more later.



>> JONATHAN CHARLES: You've been following this, writing down the key points.  Give us a 60-second version of the main thought stream you got out of this.



>> Eight pages I got here.



>> JONATHAN CHARLES: I thought that was the second page.



>> I think the BBC has been tremendous in trying to do a job that's impossible.  One size does not fit all.  They did a great survey, finding out that maybe some of the reasons aren't the reasons that we think -- that people don't understand how the Internet works.  That's very true.  I used to teach older adults; believe me, it's not a myth.  And open source is important -- furthermore, we can take standards and make them open source, and make the libraries, as Patrick pointed out, that will enable software developers to maybe get convergence and interoperability and be on the same page.  And also -- I'm bound to leave something out -- I appreciate so much what Mr. Cato showed regarding high-definition TV, that we can get captioning in all kinds of different languages.  

This is important: captioning on demand.  Patrick, I appreciate the fact that you're working with older technology -- who says if it isn't broken, don't fix it?  It's extraordinary.  The ITU has done another service description, called Y.1901, which definitely covers the accessibility features; I'm glad to see some of those have actually been done.  Mainstreaming, that's the key.  Making caregivers, and children who have a mental disability, the new experts -- that's important.  Universal design, designed in from the beginning so we don't have to add on afterwards: if a person who is disabled can use it, everybody can use it.  If it can interoperate, everyone in the world can use it, and we have a unified system for ICTs, because we are a global, global place.  We need to stop proprietary exclusion.



>> JONATHAN CHARLES: That was great.  Thank you to our panelists.  I'd like to thank David Wood from the EBU, Alexander -- and of course everyone on DCAD who emailed each other and spoke to each other.  Thank you for taking part.

(Applause)

>> JONATHAN CHARLES: There are, of course, many more sessions on this issue of access for the disabled during this conference.



>> Can I just advertise ours -- what time is it? -- it's called "From Athens to Vilnius."  It's tomorrow at 3:00, and it's a synopsis of what's going on in the whole of the IGF: what we've done, what we still need to do.  We will be passing out brochures as soon as Cynthia gets them.  Please attend.  Your input is extremely important.  Thank you.



>> JONATHAN CHARLES: Thank you very much.

(Applause)





*******