The Evolution of Voice Typing


From the typewriter to the facsimile machine, we have come a long way technologically. We now have Alexa and Siri, which can tell us about the weather, the score, the latest movies and the stores around us in a moment, even while we are cooking or driving. A recent survey reported that nearly 40 per cent of adults use a virtual assistant on their smartphones, and this figure is expected to increase by a staggering 200 per cent by 2025. So, how did we come this far? This post traces the evolution of voice typing, drawing on research sources available online.

The beginning – 1950s

The earliest devices that can loosely be called predecessors of modern voice typing could recognise digits spoken in a human voice. IBM changed the scene when it came up with its Shoebox machine, which could understand just sixteen English words. Considering that computing technology was still nascent at the time and the internet would not gain prominence for almost another thirty years, this marked a great beginning for voice typing.

1970s phase – a period of great leaps

It is reported that speech recognition technology made major strides in the 1970s, thanks to interest and funding from the U.S. Department of Defense. The DoD's DARPA Speech Understanding Research (SUR) program, which ran from 1971 to 1976, was one of the largest of its kind in the history of speech recognition and was responsible, among other things, for Carnegie Mellon's "Harpy" speech-understanding system. Harpy could understand 1,011 words, approximately the vocabulary of an average three-year-old.

The late 1980s and early 1990s – from talking dolls to voice recognition for the masses

Cut to 1987, when Worlds of Wonder's Julie doll made its debut with the tagline "the doll that understands you". It used voice recognition and could be trained to respond to a child's speech. The 1990s then brought great progress in computing and the internet: interactive voice response systems on telephones gained prominence, and voice-typing software came to market, though it remained prohibitively expensive.

Google’s introduction of voice-typing

While Apple was making giant strides in innovation, it was Google, equipped to handle the large-scale storage of data needed to train its voice-typing applications, that first came up with a tool for Windows Vista and, later, iPhone users. Mobile phones, given their ease of use and the disincentive of typing on a small keyboard, became the ideal testing ground for Google's voice typing. Apple wasn't far behind, introducing Siri as a voice assistant for iOS in 2011.

In November 2014, Amazon unveiled its home assistant Alexa, which has become hugely popular not just as a voice-recognition device but as an all-in-one entertainment and information hub. The gains this technology offers are compounded for practising professionals, including litigators, advocates and corporate professionals, and specific tools catering to the needs of niche industries are now available. MySteno is one such tool, with dictation features tailored specifically to Indian legal practice.

What the future holds

The Internet of Things is already weaving technology into every sphere of our lives, with the lights in our homes taking voice instructions from a phone wherever we happen to be. Given the immense progress in artificial intelligence that has driven the development of innovative voice-typing tools, it may not be far-fetched to expect predictive text based on our intent and real-time cognitive recognition. In this era of modernity, change is the only constant.
