Dictating Code

For a site titled Hands-Free Coding, I haven’t written much about How To Actually Write The Code. It turns out this is easier than you might expect. Before reading this post, please familiarize yourself with my getting started guide and how to move around a file quickly.

There are two basic approaches to dictating code: using custom grammars such as Dragonfly, or using VoiceCode (not to be confused with VoiceCode.io for Mac, which I just discovered and haven’t used yet). VoiceCode is much more powerful out-of-the-box, but is also harder to extend and more restrictive in terms of programming language and environment. You might say that VoiceCode is Eclipse, and Dragonfly is Emacs. You could also consider Vocola for your custom grammars; it is more concise but not quite as flexible, because you can’t execute arbitrary Python. Since I prefer Dragonfly, I’ll cover that approach.

The multi-edit module is a good place to start. I follow a few fundamental techniques for dictating code:
1) I use prefix commands to dictate keywords in any style. For example, I can say “score test word” to print “test_word”, or “camel test word” to print “testWord”.
2) I use short made-up words to dictate common symbols. For example, I can print “()” with “leap reap”. I made these words up over time, but if I were starting new I would probably use a standard language such as ShortTalk.
3) I use templates in my text editor to quickly generate boilerplate syntax, such as the skeleton of a for loop. In particular, since I use Emacs, I use the yasnippet package.
4) I rely on automatic formatting in my text editor to keep the code neat.
5) Most importantly, I structure my grammar so that I can dictate continuously, instead of having to stop and start after every keyword or symbol. This is the hardest part of my setup, because there are many trade-offs between supporting continuous commands and keeping performance high, which I will cover in another post.
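The first two techniques can be sketched in plain Python, with no Dragonfly dependency. This is only an illustration of the kind of formatting function a prefix command might call; the function and dictionary names are made up for this sketch, using the examples above:

```python
def format_words(style, words):
    """Format dictated words in the identifier style named by a prefix command."""
    if style == "score":    # "score test word" -> test_word
        return "_".join(words)
    if style == "camel":    # "camel test word" -> testWord
        return words[0] + "".join(w.capitalize() for w in words[1:])
    raise ValueError("unknown style: %r" % style)

# Short made-up spoken words for common symbols (technique 2).
SYMBOLS = {"leap": "(", "reap": ")"}

print(format_words("score", ["test", "word"]))  # test_word
print(format_words("camel", ["test", "word"]))  # testWord
```

In a real grammar, Dragonfly hands you the recognized prefix and the dictated words, and an action pastes the formatted result into the active window.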

If you follow these basic techniques, the biggest problem that remains is misrecognized words. You can avoid this in your own code by preferring easily recognized identifiers, but it’s much harder when working with someone else’s code or library. I find that the best way to combat this issue is to use Dragon’s built-in Vocabulary Editor. The moment you find yourself spelling out a word, stop right away and add this word to the Vocabulary Editor. If the problem is a variable or function with multiple misrecognized words, add the whole phrase to your vocabulary. For example, if you regularly use a class named SimDataManager, add “sim data manager” to your vocabulary and then you can type it in any style using the prefix commands.

I have also experimented with a fancier solution to this problem, where I dynamically add words from nearby code into my Dragonfly grammar. Unfortunately, I haven’t found a way to seamlessly integrate this into my vocabulary without incurring a significant performance penalty, so I only call upon this dynamic grammar explicitly. So it’s not quite as powerful as you might expect, and most of the time I rely on the built-in vocabulary. It’s better than nothing, though, so I will cover this in a later post.
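To give a rough idea of the dynamic grammar, here is a minimal sketch of harvesting speakable words from nearby code (plain Python; the function name and word-splitting rules here are just for illustration, and the real integration with Dragonfly is more involved):

```python
import re

def extract_vocabulary(source):
    """Collect speakable words from the identifiers in a chunk of source code."""
    words = set()
    for ident in re.findall(r"[A-Za-z_][A-Za-z0-9_]*", source):
        # Split snake_case and camelCase identifiers into component words.
        parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", ident)
        words.update(part.lower() for part in parts)
    return words

print(sorted(extract_vocabulary("simDataManager = get_value()")))
# ['data', 'get', 'manager', 'sim', 'value']
```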

That covers the basics, but much of the challenge of writing code is editing code you (or someone else) have already written. I’ll save that for later posts!

48 thoughts on “Dictating Code”

  1. You mentioned Dragonfly and Voicecode as the major options for voice coding. Have you tried Vocola/Unimacro and what are your thoughts on that option? I’m hoping to get a true voice coding setup going and am debating trying Dragonfly or Vocola/Unimacro. Any thoughts would be appreciated.

    1. I added a little snippet about Vocola. Vocola is more concise than Dragonfly, but it’s not as powerful, so you will probably end up using both if you go that route. I’d rather have all my grammars in one place, so I just use Dragonfly. Also, Vocola does have support for continuous command recognition, but I think it is “all or none”, i.e. you don’t have the fine-grained control of Dragonfly where you can specify exactly which commands can be repeated within the same sequence, which I think is pretty important for large grammars. Vocola is definitely more friendly to non-programmers, though.

      1. Vocola 2 allows turning on continuous command recognition on a per-grammar basis.
        Dragonfly is currently slightly more flexible in that you can specify that some commands can only occur in specific places in a sequence. I thought about adding this to Vocola (I’m the maintainer) but really can’t come up with any decent use case for it.

        1. The primary use case for me is that I only want a very small subset of commands to work after a command that contains arbitrary dictation. Otherwise (at least with Dragonfly) I find that Dragon is heavily biased towards hearing those command words instead of the actual words I’m dictating.

          1. Yeah, Dragon is designed so that it attempts to minimize the words in the variables. This lets you do things like “score my variable equals score my variable plus one” to get my_variable = my_variable + 1
            Because Dragonfly reparses the input it gets from Dragon, it could change this rule to treat dictation more like .* in regexps (i.e., be greedy).

            I would naïvely assume you would want the Dragon behavior for examples like the above, and would add “long” forms like “code score ” that are not part of a command sequence grammar so you can include things like “plus” in identifiers when necessary. Yes, you have to pause more in that case but that case should be much rarer than the common case where your identifiers don’t contain command words.
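The minimal-versus-greedy distinction above can be illustrated with ordinary regexes (this is only an analogy, not Dragon’s actual parsing):

```python
import re

# "total" is the dictated identifier; "plus" doubles as a command word.
utterance = "score total plus delta plus one"

# Minimal (Dragon-like): the dictation slot captures as few words as possible.
minimal = re.match(r"score (.+?) plus", utterance).group(1)  # 'total'

# Greedy (.*-like): the dictation slot swallows as much as it can.
greedy = re.match(r"score (.+) plus", utterance).group(1)    # 'total plus delta'
```

With minimal matching, an identifier containing “plus” gets cut short at the first command word, which is why a separate “long” form that escapes command words can be useful.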

          2. Yes, that’s right. I have special escape words I can use when I need to actually type out a word that is normally a command.

  2. There are several other approaches to writing code. A lot depends on the frequency of punctuation (lisp is horrible but most other languages are quite manageable) and the types of unpronounceable symbols you have.
    If you don’t have too many camel case or unpronounceable symbols, straightforward dictation plus a few templates for things like for loops works surprisingly well, given some punctuation variants with formatting properties.

    1. When writing lisp, I highly recommend either paredit or smartparens if you’re using Emacs. These will automatically generate parentheses and let you manage nesting. I still find it a bit clunky, though.

      1. As far as I know, those only handle at most half the parentheses, namely the closing ones.

        I never found a great solution to handling Lisp, but I have not been all that motivated because I only use it for the occasional elisp.

  3. FYI, you can indeed execute arbitrary Python code from Vocola 2 — you just have to do it in an extension. Extensions are quite easy to write, but this is a case where Vocola is arguably less concise than Dragonfly.

  4. Technically, you are not dictating continuously when you issue multiple voice commands in the same utterance. Commands are not dictation — DNS treats them very differently.

  5. Hi,

    Do Dragonfly and Natlink with Dragon naturally speaking version 15 because at the top of the Unimacro site it says this!

    Thanks

    Rob

      1. Hi,

        Sorry there was a missing word in my first message. I was saying do Dragonfly and Natlink work with dragon naturally speaking 15 now?

        I don’t know how often the Unimacro site is updated and thought you might know?

        Also I saw in this post from 2014 you were using Dragonfly on Windows 7 http://handsfreecoding.org/?p=9

        Does it work on windows 10 now?

        Thanks,

        Rob

  6. Hey,

    I am a newb to this but could the first 3 of the techniques you use above not be done with custom commands in dragon naturally speaking professional?

    I have used it for things like camel case, boilerplate text of code snippets and opening brackets like you did above with one word.

    Is the big difference the chaining of commands so it is faster?

    Thanks,

    Rob

  7. For that particular technique, the big difference is indeed the chaining of commands. I will often say that command after performing a move command, and this way I don’t have to wait.

    Of course, there are also many other benefits to using Python to implement your grammar. For example, I have voice commands that integrate with my eye tracker, and commands that directly interact with my browser (both covered in other posts).

  8. Hi,

    Thanks for the info.

    Do you know if you can install Dragonfly on Windows 10 yet?

    Are there any preset grammars to use in Dragonfly when you first install it or do you have to create them all yourself?

    Thanks,

    Rob

    1. AFAIK Dragonfly works fine on Windows 10 (I don’t think there were ever any compatibility issues with that).

      Dragonfly may come with a couple preset grammars, but everything I’m using I created myself. You can find information about getting started with basic grammars in my earlier posts.

  9. Thanks for the info.

    I think I will try and install Dragonfly.

    When I try to install python 2.7.12.exe from the link on the unimacro
    http://qh.antenna.nl/unimacro/installation/installation.html

    my anti virus software quarantines the file. Where did you find the python 2.7.12.exe because I don’t know if it’s safe to download from there now?

    The only python 2.7.12 I could find on the python site is a .msi file, not a .exe file.

    I am starting a coding course soon, so this would be helpful because I’m quadriplegic. I use the head mouse nano as a mouse. Are there voice commands that would integrate with that, like you have integrated commands with your eye tracker?

    Thanks,

    Rob

    1. You can also install the standard ActiveState python (that’s actually what I use).

      As for integrating with head mouse, I think this would actually be pretty easy because you can just use the regular mouse integration (e.g. commands to click the mouse). Using eye tracking was a bit more tricky because the eye tracker I was using didn’t have a “behave like mouse” mode.

      1. Thanks for the info.

        Is it the most recent version of active state you are using or another version?

        Does the version of dragon you use have select and say in any app?

        I’ve been trying to use dragon professional individual 14 in sublime text 3 and none of the normal commands are working for example after dictating something saying select that, correct that or delete that does nothing.

        Is there a workaround for this and all the other applications dragon doesn’t support?

        If I was able to get Dragonfly installed with dragon v14 I’m guessing the custom commands would work in these apps dragon doesn’t support by default.

        But is there a custom command that can be written for unsupported apps like Sublime text 3, in Dragonfly that would give the same functionality as saying select that to select what had just been dictated?

        Thanks,

        Rob

  10. You should use the most recent ActiveState Python that is still 2.x (not 3).

    Yes, Dragon 12 does have Select-and-Say in any app, and is the last version to have that. From what I understand, there is a workaround for this built by Mark Lillibridge, called Vortex. From what I remember, it is somewhat tied to Vocola, so I got it to work but it didn’t integrate very well with my setup. You can give it a try; I believe it is built into the latest versions of Vocola.

    Indeed, custom commands will work regardless of Select-and-Say support. The only version of Dragon where that is broken is the very latest one (Dragon 15).

    I think it would be fairly difficult to create a custom command that does the equivalent of “select that”, but I could be wrong. In general the commands I write don’t try to bridge between the worlds of dictated text and commands, which is what that would demand.

      1. Let me preface this by saying that I have not used Vocola extensively, and there are definitely folks who have a powerful hands-free coding setup who rely on Vocola instead of Dragonfly.

        I think the main benefit of Vocola is that it has a simpler syntax which makes it easier to get up and running with a custom-built grammar. The downside is that it does not have the flexibility of Python. While you can run external processes from Vocola, that’s much more clunky than being able to call arbitrary Python without forking a new process. Also, because Python gives you more control over how you weave your grammar, you can do fancy stuff like have careful control over which commands can be repeated in a sequence without pauses. I rely on this heavily, documented here: http://handsfreecoding.org/2015/04/25/designing-dragonfly-grammars/

        I’m using Dragon 12.

        1. Vocola is actually more flexible than it first appears. Here’s a post regarding this:
          http://www.knowbrainer.com/forums/forum/messageview.cfm?catid=25&threadid=19217&highlight_key=y#141607

          Vocola does allow for chaining of commands:
          http://vocola.net/v2/CommandSequences.asp

          as well as writing full on Python code through an extension mechanism which is definitely more flexible than just calling a process:
          http://vocola.net/v2/Extensions.asp

          You can definitely also have pretty fine-grained control over how you can repeat commands etc., although it is probably easier to set up some of those relationships in Dragonfly.

  11. Hello, I saw that you mentioned voicecode.io and that you had not tried it out yet. I was wondering if you have tried it out since, and/or have any thoughts on how it compares?
    I am looking at setting up a voice programming setup for myself and am trying to decide if I want to pursue 1) the voicecode.io route or 2) the DNS + dragonfly route
    Thank You!

    1. I’m afraid I still haven’t tried voicecode.io. One consideration in favor of voicecode is that Nuance has not maintained compatibility with Dragonfly in the most recent version of NaturallySpeaking for Windows (15). If you go that route please let us all know how it goes!

  12. Here’s another approach which does not require Dragon Naturally Speaking (or Windows/Mac), but hooks directly into the IDE:

    https://github.com/OpenASR/idear

    It’s a work-in-progress, but I think this approach has a lot of potential, as the system has access to the AST and is aware of your file/symbol names etc., and could adapt the grammars accordingly.

  13. Thanks for the amazing website!

    I find the Dragonfly-Dragon installation & command generation daunting. For the coding I do it seems like I can do everything I need if I could only get Dragon never to capitalize words or insert spaces. Do you know if this is possible?

    Also, I feel like I might be missing something, but is the big advantage of the Dragonfly-Dragon interface that formatting is done automatically, or that you can have a single short command do a lot in the editor (which can also be done using the vocabulary editor in Dragon)?

    [I code in the statistical language R in the Rstudio editor]

    Thank you!

    1. There is no way to globally get Dragon to eliminate capitalization and spaces. You can, however, do this while you dictate. For example, to dictate “gmail.send_email” try saying “no caps gmail dot send underscore email”. It works perfectly well for small amounts of dictation, although I wouldn’t want to write a whole program this way.

      The big advantage of Dragonfly is it gives you full control, and you can use that control to define whatever kind of command grammar you can dream of, and have it do whatever kind of actions you want, because it is running arbitrary Python. Practically speaking, this means grammars that have you talking less and doing more. If you find Dragonfly daunting, I would recommend looking at Vocola, which has similar power but is designed to be more user-friendly.

        1. Sure! Also, you can use “no space” to eliminate spaces between words. As my example illustrated, this often isn’t needed if you are speaking the punctuation.

  14. Hello James,

    I have been coding in R with Dragon for a while, but recently installed Vocola and Dragonfly. I’m a little puzzled as to where to begin with Dragonfly after I installed it.

    I also have one other question, about Section 3.2 of http://vocola.net/programming-by-voice-FAQ.html, “Why is programming by voice hard to learn / build systems for?”

    Sample editing command might be “go 14 leap semicolon erase”, meaning go to the start of the visible line whose line number’s last two digits are 14, move the cursor forward to the next semicolon, then erase 1 character (namely, the semicolon).

    Is there example of the syntax for this command in Vocola/Dragonfly? I have not been able to find an example of the code in any of the libraries.

    Thanks so much. I’m grateful you’re running the site,
    Best, Matt

    1. Matt, this is a pretty complicated command that would require integration with the editor to implement. I don’t use this exact grammar, although I do have the capability to do what’s described in that command, and my grammar is available here:
      https://github.com/wolfmanstout/dragonfly-commands/blob/master/_repeat.py#L1116

      My equivalent command would be “line 14 after semi snap”, although I would be more likely to use “east” instead of “after semi”, assuming the semicolon is at the end of the line.

      The most relevant part is the Emacs portion, but as you will see, I structure my grammar in such a way that everything gets compiled into a few top-level rules, so unfortunately it’s not as simple as just copying and pasting a few lines of code.

  15. Hello, I am new to using Vocola/Dragonfly and I am a little puzzled. Is there a way to edit the Dragonfly scripts that others have written? For example, I like some of the features in the Caster Dragonfly add-in, but I’d like to remove a few of them too. And how do you use Unimacro? I haven’t found any documentation for it, but it looks like an easy-to-use alternative to Dragonfly with a little more power than Vocola.
    Thanks, Matt

    1. Yes, you would need to copy the scripts into your own grammar directory and then edit them locally. Or better yet, clone the repositories so you can track your changes.

  16. I’m curious, is there a way to lock an application into using only lowercase?

    The Dragon VBScript has the no-caps-on
    and VBA has LCase Command.

    Neither of these works satisfactorily. It looks like it should be doable in Dragonfly, but it might really slow down the performance. Is there an easier way to do this in Vocola?

    Thanks, Matt

  17. What is the syntax for using SendDragonKeys in Vocola? For example, Microsoft Excel sometimes does not play well with Dragon/Vocola, so I would like to send the command using SendDragonKeys to get the desired outcome.
    Thanks, Matt

  18. Hello, I’m back with some more questions!

    I was wondering if there is a problem writing spoken forms for action objects that are pre-existing in Dragon, so as not to trigger the dictation box. For example, dictating “colon” for “:”, which would otherwise trigger the dictation box to open when coding. I understand this may not be practical for all dictations, but am curious if there is a conflict that I have not recognized. I guess I’m just forcing Python to “act” as another Dragon-supported program.

    Also, when you make up words, do these have to be added to Dragon as a new word after you code for it?

    Lastly, when I make these changes in the multi-edit file, will this be available for any Python interface, i.e. Command Prompt, GUI, etc.?

    I tried looking at the ShortTalk link but the site is unfortunately down for maintenance right now. Considering I’m very new to this and don’t know what Emacs is, perhaps I should stick with writing my own for now… Let me know if you’ve any suggestions and thanks for your time!

    Tiff

    1. If you simply want to be able to dictate into unsupported applications, you don’t need to create a grammar for that. Just go into Miscellaneous options in Dragon and unselect “Use the dictation box for unsupported applications”.

      When you make up words, Dragon will “guess” a reasonable pronunciation. It usually does a pretty good job, but if you want to train it, you can do so using “train word”. You do not have to add trained words to your vocabulary (which you usually don’t want to do if it is just used in a command).

      The multi-edit configuration is a global configuration, so it should be usable in any window.

      I wouldn’t recommend downloading any software from the shorttalk site. I just mentioned it as a potential basis for designing a grammar. I’m actually now much more excited about this other grammar that I plan to write a blog post about: http://redstartsystems.com/humanmachinegrammar#UC

  19. Is there a way to make lower case the default for all words? I am using RStudio and the closest way I can force Dragon to keep jumping back into lowercase is to use the HeardWord command after every command in Vocola. This is a very brute force method, and not always the best route.
    Any ideas on a better syntax?
