Archives September 2014
So today I started into the basic command-and-control part of the Listener system. The original tack I took was to create escaped commands inside the text stream using the "typing" substitutions (the stuff that converts ,comma into ','):
some text \action_do_something more text
But that got rather grotty rather fast once I looked at corner cases (e.g. when you want to type \action itself to document the mechanism). So I reworked it into two levels of operation: the first pre-processes the stream to find commands and splits the text apart, so that interpretation sees a sequence of commands-and-text. That should allow ...
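A minimal sketch of that pre-processing pass might look like the following. The `\word` command syntax and the function name are my own stand-ins for illustration, and this version deliberately punts on the escaping corner case (literal `\action` in the text):

```python
import re

# Hypothetical sketch: split a dictation stream into (kind, value) tokens,
# where "\something" marks an embedded command.  Escaping is not handled.
COMMAND = re.compile(r'\\(\w+)')

def split_commands(text):
    """Yield ('text', chunk) and ('command', name) tokens in order."""
    pos = 0
    for match in COMMAND.finditer(text):
        if match.start() > pos:
            yield ('text', text[pos:match.start()])
        yield ('command', match.group(1))
        pos = match.end()
    if pos < len(text):
        yield ('text', text[pos:])
```

The interpreter can then walk the token sequence, typing the text chunks and dispatching the command tokens, instead of scanning for escapes mid-stream.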
So as of earlier this evening I've got the Listener service hooked up such that I can dictate code into Eric (via an Eric plugin talking to the service over DBus). There's still an enormous amount that doesn't work, but that first-light moment has made me rather happy; instead of a collection of little tools/toys, there's something with the rough shape of a working project.
The actual Eric code so far is about 150 lines, with a lot of that being boilerplate for an Eric plugin. I'll likely do quite a bit ...
[Update] I got it working by going down to the "connection" level and registering the callback there. The code below is updated with the working version... as of now I've got some basic "voice coding" possible, but lots more to do to make it practical and useful.
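The "connection level" registration that finally worked might be sketched roughly like this with dbus-python. All of the bus, interface, and signal names below are invented for illustration, and the little handler factory is just a convenient way to keep the editor-facing side testable:

```python
# Hypothetical sketch: instead of connecting through a remote proxy object,
# add the signal receiver directly on the session-bus connection.

def make_result_handler(editor):
    """Return a callback that inserts each recognized utterance into `editor`.

    `editor` only needs an `insert(text)` method, so a test double works.
    """
    def on_result(text):
        editor.insert(str(text))
    return on_result

def connect_listener(handler):
    """Register `handler` for recognition-result signals on the connection."""
    import dbus
    from dbus.mainloop.glib import DBusGMainLoop
    DBusGMainLoop(set_as_default=True)
    bus = dbus.SessionBus()
    bus.add_signal_receiver(
        handler,
        signal_name='FinalResult',              # assumed signal name
        dbus_interface='com.example.Listener',  # assumed interface name
    )
    return bus
```

Keeping the insertion logic behind `make_result_handler` means the plugin's editor wiring can be exercised without a live bus.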
Plowing ahead with integrating Listener into Eric (my primary IDE). That seemed to be going swimmingly: I got a new plugin created, set it up to notice new editors, disconnect old ones, and (in theory) process new results by inserting the recognized and interpreted text at the current position. Then I tried to ...
Sometimes as you're developing a project it's easy to lose sight of the end goal. I've spent so long on giving Listener context setup, code-to-speech converters, etc. that the actual point of the whole thing (i.e. letting you dictate text into an editor) hadn't actually been built. Today I started on that a bit more. I got the spike-test for a DBus service integrated into the main GUI application, so that you can get events from the same Listener that's showing you the app-tray icon and the results. I disabled the sending of ...
So I decided to play a bit with how to get raw audio samples out of gstreamer pipelines. This is normally done (in C or whatever) with an `appsink` element in your gstreamer pipeline. You hook up the pipeline, watch for new buffers, and then you can map each one into a Numpy array (or whatever you like really). I haven't actually got a use-case for this right now; when I sat down I was toying with the idea of running neural nets on the samples to try to detect phonemes, but that will be some other day.
For today, consider ...
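The appsink approach could be wired up roughly as below. The pipeline string is just one plausible capture chain (mono S16LE from the default source), the phoneme comment marks where real processing would go, and `bytes_to_samples` is a stdlib stand-in for mapping the buffer into Numpy:

```python
import struct

def bytes_to_samples(data):
    """Interpret raw little-endian signed 16-bit PCM bytes as a list of ints."""
    count = len(data) // 2
    return list(struct.unpack('<%dh' % count, data[:count * 2]))

def on_new_sample(sink):
    """appsink 'new-sample' callback: map the buffer and hand off the samples."""
    from gi.repository import Gst
    sample = sink.emit('pull-sample')
    buf = sample.get_buffer()
    ok, info = buf.map(Gst.MapFlags.READ)
    if ok:
        try:
            samples = bytes_to_samples(info.data)
            # ... run phoneme detection (or whatever) on `samples` here ...
        finally:
            buf.unmap(info)
    return Gst.FlowReturn.OK

def main():
    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst, GLib
    Gst.init(None)
    pipeline = Gst.parse_launch(
        'autoaudiosrc ! audioconvert ! '
        'audio/x-raw,format=S16LE,channels=1 ! '
        'appsink name=sink emit-signals=true'
    )
    sink = pipeline.get_by_name('sink')
    sink.connect('new-sample', on_new_sample)
    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()

if __name__ == '__main__':
    main()
```

Setting `emit-signals=true` on the appsink is what makes the `new-sample` signal fire; without it you'd have to poll `pull-sample` yourself.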