The internet community is all agog over news of a “lipreading computer” that can “read” many different languages.
Computers that can read lips are already in development, but this is the first time they have been ‘taught’ to recognise different languages. The discovery could have practical uses for deaf people, for law enforcement agencies, and in noisy environments. Led by Stephen Cox and Jake Newman of UEA’s School of Computing Sciences, the groundbreaking research will be presented at a major conference in Taiwan on Wednesday April 22.
The technology was developed by statistical modelling of the lip motions made by a group of 23 bilingual and trilingual speakers. The system was able to identify which language was spoken by an individual speaker with very high accuracy. These languages included English, French, German, Arabic, Mandarin, Cantonese, Italian, Polish and Russian.
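The press release gives no detail of the model itself, so here is a heavily simplified, purely illustrative sketch of the general idea of “statistical modelling” for language identification: fit a simple probabilistic model per language over (entirely synthetic, made-up) lip-motion feature values, then label an unseen sequence with whichever language’s model assigns it the highest likelihood. Every feature, mean, and language pairing below is an assumption for illustration, not the UEA system.

```python
# Toy sketch of statistical language ID, NOT the UEA system: fit a 1-D
# Gaussian per language over synthetic "lip-motion" feature values, then
# classify an unseen sequence by maximum log-likelihood.
import math
import random

random.seed(0)  # deterministic synthetic data

def make_features(mean, n=200):
    """Synthetic stand-in for lip-motion features of one language."""
    return [random.gauss(mean, 1.0) for _ in range(n)]

def fit(samples):
    """Fit a 1-D Gaussian (mean, variance) to training features."""
    m = sum(samples) / len(samples)
    v = sum((x - m) ** 2 for x in samples) / len(samples)
    return m, v

def log_likelihood(samples, model):
    """Total Gaussian log-likelihood of a feature sequence under a model."""
    m, v = model
    return sum(-0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
               for x in samples)

# Hypothetical training data for two of the nine languages.
train = {"English": make_features(0.0), "French": make_features(2.0)}
models = {lang: fit(feats) for lang, feats in train.items()}

# Classify an unseen feature sequence by picking the best-scoring model.
unseen = make_features(2.1, n=50)
best = max(models, key=lambda lang: log_likelihood(unseen, models[lang]))
print(best)
```

Real systems would use multidimensional features tracked from video and far richer sequence models, but the train-per-language, score-by-likelihood loop is the same shape.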
We’re calling poppycock on this entire idea, just as we did on the “Deaf Video Dictionary” project. We know this “lipreading by computer byte” sounded great in the pitch sessions, but it will be an abject failure if it ever tries to find meaningful work in the real world.
You might be able to train a computer to lipread a few people, but you will never be able to train a computer to read every set of lips in the world, because every set of lips is as unique as a fingerprint. Except that lips move in context with teeth and tongues and spit and facial hair and lipstick and twitches, while fingerprints are flat, unmoving, and dead.
While we applaud the notion of this lipreading technology, the greatest danger in promoting an invention that will never work is that its failures tend to taint, condemn, define and frustrate the few actually revelatory inventions that just might make our lives better while saving us from our failed folly.