Boston University researchers grabbed a $900,000 USD grant from the National Science Foundation to capture 3,000 American Sign Language gestures and create a “Deaf Dictionary” that will “interpret” signed video requests for information. While the idea is good, we certainly feel nobody on the National Science Foundation grant committee is Deaf or has any idea why this sort of project is doomed to fail.

As students, instructors, and authors of books about American Sign Language, we are quite certain this noble experiment will not work in the real world, for several reasons:

  1. There are too many minute differences between signs that cannot reliably be distinguished on video, such as “girl” and “aunt” and other “too close to call” pairs.
  2. Regional differences in signing style and cross-cultural synonyms.
  3. PSE (Pidgin Signed English) pretending to be ASL.
  4. The speed of the signer itself indicating meaning and context.
  5. Curse words will make up 90% of the examples and searched-for definitions.

The technology is not yet sophisticated enough to interpret signs in real time, so an ASL video dictionary cannot thrive in practice or performance.

The National Science Foundation, while well intentioned, wasted nearly a million dollars funding a foolhardy project that will never find everyday purchase in the real world.
