In my earlier post, I introduced my new gaze-ocr package for easy clicking and text editing in any app or website (demo video). It took months of experiments and tweaks to make the system as robust as it is today. In this post, I’ll take you under the hood to see how it was built and where I’d like to take it next.
Continue reading Gaze OCR: Under the Hood
In my previous post, I introduced my new gaze-ocr package for easy clicking or text editing in any app or website (demo video). If you haven’t upgraded the screen-ocr and gaze-ocr packages recently, go do that now. On Windows, I’ve added support for the built-in Windows Runtime OCR, which is incredibly fast (~40X faster than Tesseract!) and also very accurate. Be sure to follow the instructions to install the necessary dependencies, which include Python 3.7 or 3.8 (3.9 isn’t quite ready yet). NatLink now supports Python 3 (32-bit only), but you need to follow special installation instructions while it is in beta. Upgrading is worth your time: WinRT is so fast that it opens up the possibility of processing the entire screen instead of just the area near the gaze point — although in practice I still find it helpful to restrict it somewhat.
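If you’re curious what this looks like in code, here’s a rough sketch of reading on-screen text around a point with screen-ocr. The constructor and method names below are assumptions based on my memory of the package’s README rather than a verified example, so check the project page for the real API.

```python
# Rough sketch only: treat the factory and method names as assumptions,
# not the verified screen-ocr API for any particular release.
import screen_ocr

# On Windows, the "quality" reader is intended to pick the WinRT backend
# when its dependencies are installed (assumed behavior).
reader = screen_ocr.Reader.create_quality_reader()

# OCR a region around a screen coordinate, e.g. the current gaze point.
results = reader.read_nearby((960, 540))
print(results)  # inspect the recognized words and their bounding boxes
```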
I learned about WinRT OCR thanks to a comment from Ivan on my previous post. This is why I love open source software — I always learn from others once I share my work!
In a later post, I’ll share more details on all the experiments and tweaks that have gone into making this package as robust as it is today.
User interfaces revolve around clicking on-screen text: descriptive links, named buttons, or editable text that can be selected and moved around. As a power user of voice control, I often bypass this with commands that simulate keyboard shortcuts or operate APIs directly. But this is only feasible for a small number of heavily-used apps and websites: it takes too long to add custom commands for everything. I’ve seen several ways to handle this long tail, but they all have issues. Speaking the on-screen text directly requires disambiguation if the same text occurs in multiple places. Numbering the clickable elements adds clutter and takes time to read. Implementations of both of these methods tend to only work in one app or another, leading to an inconsistent experience. Head and eye tracking can control the cursor anywhere, but they are tiring to use, and their accuracy isn’t good enough for precise text selection. As it turns out, however, the pieces for an effective system do exist — they just need to be put together.
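To make the combination concrete, here is a purely illustrative sketch of the core idea: OCR supplies the text and coordinates of every visible word, and the gaze point disambiguates which occurrence you meant. The data structures and function below are hypothetical stand-ins, not the actual gaze-ocr code.

```python
# Illustrative sketch of combining OCR results with a gaze point to pick
# a click target. These types and names are hypothetical, not gaze-ocr's.
import math
from dataclasses import dataclass

@dataclass
class WordBox:
    text: str
    left: float
    top: float
    width: float
    height: float

    @property
    def center(self):
        return (self.left + self.width / 2, self.top + self.height / 2)

def find_word_near_gaze(spoken, gaze_point, ocr_words):
    """Return the center of the OCR'd word matching `spoken` that lies
    closest to the gaze point, or None if nothing matches."""
    matches = [w for w in ocr_words if w.text.lower() == spoken.lower()]
    if not matches:
        return None
    gx, gy = gaze_point
    best = min(matches,
               key=lambda w: math.hypot(w.center[0] - gx, w.center[1] - gy))
    return best.center

# Two occurrences of "Submit"; the gaze point picks the intended one.
words = [WordBox("Submit", 100, 200, 60, 20),
         WordBox("Submit", 900, 700, 60, 20)]
print(find_word_near_gaze("submit", (880, 690), words))  # (930.0, 710.0)
```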
A lot has changed in the open-source speech control world in just the last year, much less the 5+ years since I started writing this blog. My own involvement has shifted towards longer-term projects and engaging the community through chat rooms on Gitter (I’m @wolfmanstout). Since a lot of people still discover hands-free computing through my blog, I want to help them get oriented in this new world. To that end, I’ve changed the handsfreecoding.org homepage into a structured introduction to my blog entries, along with some key updates and information about alternative approaches. Even if you’re a long-time reader, I encourage you to take a look and see if you learn something new! I plan to keep this new page up-to-date, although I’ll continue to complement that with (occasional) new blog posts. Please let me know what you think in the comments, and if you have any suggestions!
For the past several years, we Dragon users have had to endure increasingly poor native support for text manipulation in third-party software such as Firefox and Chrome. For a while, Select-and-Say only worked on the most recent utterance; then it stopped working entirely. As someone who writes a lot of emails, I found it painful to lose this functionality, and the workaround of transferring to a text editor is slow and messes up the formatting when composing an inline reply to someone’s email. Nuance offers the Dragon Web Extension, which supposedly fixes these issues, but in practice it has earned its 2 out of 5 star rating by slowing down command recognition, occasionally hanging the browser, and not working in key web apps such as Google Docs. Over the past few months, I’ve been working to integrate Dragonfly with the accessibility APIs that Chrome and Firefox natively support, which brings this functionality back — and much more. As of today, Windows support is now available, and I’m here to tell you how to leverage it and what’s under the hood.
Continue reading Enhanced text manipulation using accessibility APIs
For years, I’ve been approaching speech recognition like a backend engineer: I have a flexible coding style for managing my grammars, I’ve implemented a lot of functionality, and I’ve added some helpful integrations. But embarrassingly, until recently, I hadn’t put much thought into the User Experience. This all changed after I received an email from Kim Patch, the author of Utter Command, a set of extensions to Dragon that has been around for decades.
Continue reading Utter Command: Why I Rewrote My Entire Grammar
Thanks to the work of several volunteers and an anonymous helper at Nuance, the latest version of Dragon NaturallySpeaking (DPI 15) now works with NatLink and Dragonfly! I’ve been testing it for the last couple weeks and it works well with only a few minor issues to work around.
Continue reading Dragon 15 now works with NatLink and Dragonfly
Last week, Mozilla announced the first official releases of DeepSpeech and Common Voice, their open source speech recognition system and speech dataset! They seem to have made a lot of progress on DeepSpeech in little time: they had a target of <10% word error rate and achieved 6.5%! This is a very strong result — for comparison, Google boasts a 4.9% WER (albeit on different datasets). See their technical post for more details on how they pulled it off.
For this post, I’ll cover the basic information you’ll need to get it up and running on a Linux guest VM running on VirtualBox on a Windows host, since that’s my home setup. Note that the engine has not yet been integrated into any sort of real-time system, so what you’ll have at the end of this is a developer’s sandbox to play with — not something you can start using day-to-day. I do hope to eventually get it integrated into my daily workflow, but that’s going to take much more time.
UPDATE (12/25): If you are using Windows 10, consider running DeepSpeech natively on WSL (Windows Subsystem for Linux) instead of in a VM; you won’t have to compile from source, and recognition will be faster. Instructions here: https://fotidim.com/deepspeech-on-windows-wsl-287cb27557d4. If you run into problems with processor limitations, see the info below on how to adjust CPU optimizations when compiling from source.
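Once everything is installed (in the VM or under WSL), transcribing a short recording from Python looks roughly like the sketch below. This is based on my recollection of the v0.1 example client, so treat the constructor arguments and tuning constants as assumptions and double-check them against the README for your release.

```python
# Rough sketch based on the DeepSpeech v0.1 example client; the argument
# order and the constants below are assumptions -- verify against the
# README for the release you installed.
import scipy.io.wavfile as wav
from deepspeech.model import Model

N_FEATURES = 26   # MFCC features per frame (assumed default)
N_CONTEXT = 9     # context frames on each side (assumed default)
BEAM_WIDTH = 500  # decoder beam width (assumed default)

ds = Model('models/output_graph.pb', N_FEATURES, N_CONTEXT,
           'models/alphabet.txt', BEAM_WIDTH)

fs, audio = wav.read('my_recording.wav')  # expects 16 kHz, 16-bit mono audio
print(ds.stt(audio, fs))
```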
Continue reading Mozilla DeepSpeech: Initial Release!
Firefox has gained a lot of exciting updates recently that make it very competitive with Chrome. Try it out if you haven’t already (I use the developer edition). Because both browsers now use the same extension API, I’ve just published my hands-free browsing extensions to both the Firefox and Chrome repositories.
The second is a fork of Vimium that I’m calling Modeless Keyboard Navigation (get for Firefox or Chrome) to avoid confusion with Vimium. Unlike Vimium’s, its keyboard shortcuts can be used at any time, and the default bindings use modifier keys (think Emacs, not Vim). I find this much faster for voice control, where mode switching means a round-trip to Dragon.
Hope you find them useful! If you’ve discovered or created any useful browser extensions that help with voice control, please post them in the comments.
I learned about a couple of very exciting new developments this week in open source speech recognition, both coming from Mozilla. The first is that a year and a half ago, Mozilla quietly started working on an open source, TensorFlow-based DeepSpeech implementation. DeepSpeech is a state-of-the-art deep-learning-based speech recognition system designed by Baidu and described in detail in their research paper. Currently, Mozilla’s implementation requires that users train their own speech models, a resource-intensive process that needs expensive, closed-source speech data to produce a good model. But that brings me to Mozilla’s more recent announcement: Project Common Voice. Their goal is to crowd-source the collection of 10,000 hours of speech data and open source it all. Once this is done, DeepSpeech can be used to train a high-quality open source recognition engine that can easily be distributed and used by anyone!
This is a Big Deal for hands-free coding. For years I have increasingly felt that the bottleneck in my hands-free system is that I can’t do anything beneath the limited API that Dragon offers. I can’t hook into the pure dictation and editing system, I can’t improve the built-in UIs for text editing or training words/phrases, I’m limited to getting results from complete utterances after a pause, and I can’t improve Dragon’s OS-level integration or port it to Linux. If an open source speech recognition engine becomes available that can compete with Dragon in latency and quality, all of this becomes possible.
To accelerate progress towards this new world of end-to-end open source hands-free coding, I encourage everyone to contribute their voice to Project Common Voice, and share Mozilla’s blog post through social media.