Archives August 2014
So I got tired of paying work this afternoon and decided I would work on getting a DBus service started for Listener. The idea is that a DBus service will handle all the context management, microphone setup, playback, etc., and that client software (such as the main GUI, or apps that want to allow voice coding without going through low-level-grotty simulated typing) can use it to interact with the engine.
But how does one go about exposing objects on DBus in the DBus-ian way? It *seems* that object-paths should produce a REST-like hierarchy where each object I want to ...
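For concreteness, one way such a REST-like hierarchy might look (the bus name and object paths below are hypothetical illustrations, not Listener's actual interface):

```
ca.example.Listener                 (bus name, hypothetical)
  /Listener                  → service root: start/stop the engine
  /Listener/Microphone       → microphone setup and levels
  /Listener/Playback         → playback of recorded utterances
  /Listener/Context          → dictation-context manager
  /Listener/Context/<name>   → one object per language-model context
```

Each object would expose its own methods and signals, so a client could address, say, a single dictation context directly by its path.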
So at this point Listener can generate a language model from a (Python) project, and the dictation results from that are reasonable but by no means perfect. Still, you can dictate text as long as you stick to the words and statements already used in the project. Some obvious reworks are now showing up:
- we need the dictionaries to chain, and we likely need to extract a base "Python" statement-set by processing a few hundred projects (yay open source)
- we need to have separate statement, dictionary, etc. storage for the automatically generated and actual user-generated stuff so that on ...
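The chaining idea in the first bullet can be sketched with Python's stdlib `ChainMap`: per-project entries shadow a shared base "Python" dictionary. The entries below are made-up examples, not Listener's actual storage format.

```python
from collections import ChainMap

# Hypothetical pronunciation entries: word -> phoneme string.
project_dict = {'listener': 'L IH S AH N ER'}                    # project-specific
base_python_dict = {'def': 'D EH F', 'import': 'IH M P AO R T'}  # shared base set

# Lookups try the project dictionary first, then fall back to the base set.
vocabulary = ChainMap(project_dict, base_python_dict)

vocabulary['listener']  # found in the project dictionary
vocabulary['import']    # falls back to the base Python set
```

The same shadowing scheme would also keep auto-generated and user-corrected entries in separate layers, so regenerating the model never clobbers the user's own additions.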
Since the point of Listener is to allow for coding, the language model needs to reflect how one would dictate code. Further, to get reasonable accuracy I'm hoping to tightly focus the language models so that your language model for project X reflects the identifiers (and operators, etc) you are using in that project. To kick-start that process, I'm planning to run a piece of code over each project and generate from it a language model where we guess what you would have typed to produce that piece of code.
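A minimal sketch of that guessing step, assuming a simple heuristic that splits snake_case and camelCase identifiers into the words the user would likely have spoken; the function name and splitting rules here are my own illustration, not Listener's actual code:

```python
import re

def guess_spoken_words(identifier):
    """Guess the words a user would dictate to produce an identifier."""
    words = []
    for part in identifier.split('_'):
        # Break camelCase words, ALL-CAPS runs, and digit groups apart.
        words.extend(re.findall(r'[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+', part))
    return [w.lower() for w in words]

guess_spoken_words('setup_microphone')  # ['setup', 'microphone']
guess_spoken_words('DBusService')       # ['d', 'bus', 'service']
```

Running something like this over every identifier in a project would yield the word stream from which the project's n-gram language model gets trained.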
That immediately runs into issues: do you think ...
I spent the day working on Listener. The bulk of the work was just cleanup: getting TODO items set up, fixing the setup script, etc. The actual task of voice dictation didn't move very far (I got a trivial "correct that" event working, but it doesn't actually make anything happen yet).
I also started thinking about how to integrate with other applications (and the desktop). That will likely be done via DBus services: one for the "send keys" virtual-keyboard service and another for the per-session voice-dictation context.