Author archives: Mike C. Fletcher

Interpretation of Dictation

Sometimes as you're developing a project it's easy to lose sight of the end goal. I've spent so long giving Listener context setup, code-to-speech converters, etc. that the actual point of the whole thing (i.e. letting you dictate text into an editor) hasn't been built. Today I started on that a bit more. I got the spike-test for a dbus service integrated into the main GUI application, so that you can get events from the same Listener that's showing you the app-tray icon and the results. I disabled the sending of ...

Continue reading

Using GStreamer AppSink from Python

So I decided to play a bit with how to get raw audio samples from GStreamer pipelines. This is normally done (in C or whatever) with an `appsink` element in your GStreamer pipeline. You hook up the pipeline, watch for the buffer, and then you can map it into a NumPy buffer (or whatever you like really). I haven't actually got a use-case for this right now; when I sat down I was toying with the idea of running neural nets on the samples to try to detect phonemes, but that will be some other day.
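
For a flavour of the approach, here's a minimal sketch assuming GStreamer 1.0 via PyGObject plus NumPy; the pipeline string and handler name are illustrative, not from the post:

    import gi
    gi.require_version( 'Gst', '1.0' )
    from gi.repository import Gst
    import numpy

    Gst.init( None )
    # a test source feeding raw 16-bit mono audio into an appsink
    pipeline = Gst.parse_launch(
        'audiotestsrc num-buffers=16 ! audioconvert ! '
        'audio/x-raw,format=S16LE,channels=1 ! appsink name=sink'
    )
    sink = pipeline.get_by_name( 'sink' )
    sink.set_property( 'emit-signals', True )

    def on_new_sample( appsink ):
        """Map the incoming buffer and view it as a NumPy array"""
        sample = appsink.emit( 'pull-sample' )
        buffer = sample.get_buffer()
        success, info = buffer.map( Gst.MapFlags.READ )
        if success:
            samples = numpy.frombuffer( info.data, dtype=numpy.int16 )
            print( 'got %d samples' % (len(samples),) )
            buffer.unmap( info )
        return Gst.FlowReturn.OK

    sink.connect( 'new-sample', on_new_sample )
    pipeline.set_state( Gst.State.PLAYING )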

For today, consider ...

Continue reading

Python-dbus needs some non-trivial examples

So I got tired of paying work this afternoon and decided I would work on getting a dbus service started for Listener. The idea here is that there will be a DBus service which does all the context management, microphone setup, playback, etc., and which client software (such as the main GUI, and apps that want to allow voice coding without going through low-level grotty simulated typing) can use to interact with it.
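
As a rough sketch of the shape, using python-dbus (the bus name and object path below are hypothetical placeholders, not Listener's actual names):

    import dbus
    import dbus.service
    import dbus.mainloop.glib
    from gi.repository import GLib

    # python-dbus needs a mainloop wired up before connections are made
    dbus.mainloop.glib.DBusGMainLoop( set_as_default=True )

    class Context( dbus.service.Object ):
        """One dictation context, exposed at its own object path"""
        @dbus.service.method( 'com.example.Listener.Context', out_signature='s' )
        def GetName( self ):
            return 'default'

    bus = dbus.SessionBus()
    name = dbus.service.BusName( 'com.example.Listener', bus )
    context = Context( bus, '/com/example/Listener/contexts/default' )
    GLib.MainLoop().run()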

But how does one go about exposing objects on DBus in the DBus-ian way? It *seems* that object-paths should produce a REST-like hierarchy where each object I want to ...

Continue reading

Seem to need hidden-markov-models for text extraction...

So in order to "seed" listener with text-as-it-would-be-spoken for coding, I've built up a tokenizer that will parse through a file and attempt to produce a model of what would have been said to dictate that text. The idea being that we want to generate a few hundred megabytes of sample statements that can then be used to generate a "python coding" or "javascript coding" language model. Thing is, this is actually a pretty grotty/nasty problem, particularly dealing with "run together" words, such as `asstring` or `mkdtemp`. You can either be very strict, and only allow specifically defined ...
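
To see why it's grotty, here's a toy illustration (not Listener's tokenizer, and the vocabulary is made up) of how ambiguous a run-together identifier becomes once you split it against a set of known words:

    # a tiny, made-up vocabulary; real dictionaries make this far worse
    KNOWN = set( ['as', 'string', 'asst', 'ring'] )

    def splits( word, prefix=() ):
        """Yield every way to split word into known fragments"""
        if not word:
            yield prefix
            return
        for i in range( 1, len(word)+1 ):
            head = word[:i]
            if head in KNOWN:
                for result in splits( word[i:], prefix + (head,) ):
                    yield result

    print( list( splits( 'asstring' ) ) )
    # [('as', 'string'), ('asst', 'ring')] -- two plausible readings,
    # which is why a statistical model (e.g. an HMM) is tempting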

Continue reading

Project-to-language model almost working

So at this point Listener can generate a language model from a (Python) project, and the dictation results from that are reasonable, though by no means perfect; still, you can dictate text as long as you stick to the words and statements already used in the project. There are some obvious reworks now showing up:

  • we need the dictionaries to chain (see the sketch after this list), and we likely need to extract a base "Python" statement-set by processing a few hundred projects (yay open source)
  • we need to have separate statement, dictionary, etc. storage for the automatically generated and actual user-generated stuff so that on ...
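
The chaining idea in the first bullet might look something like this (a sketch using Python 3's collections.ChainMap, with made-up pronunciation entries):

    from collections import ChainMap

    # a (hypothetical) base Python dictionary and a per-project overlay
    base_python = { 'def': ['D EH F'], 'import': ['IH M P AO R T'] }
    project = { 'appsink': ['AE P S IH NG K'] }

    # lookups try the project dictionary first, then fall back to the base
    chained = ChainMap( project, base_python )
    print( chained['def'] )      # falls through to the base dictionary
    print( chained['appsink'] )  # found in the project dictionary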

Continue reading

Automatic Language Model Creation Started

Since the point of Listener is to allow for coding, the language model needs to reflect how one would dictate code. Further, to get reasonable accuracy I'm hoping to tightly focus the language models so that your language model for project X reflects the identifiers (and operators, etc.) you are using in that project. To kick-start that process, I'm planning to run a piece of code over each project and generate from it a language model where we guess what you would have typed to produce that piece of code.
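
The guessing step might look something like this in miniature (a toy sketch; the real rules would have to be far richer than splitting on underscores and camelCase):

    import re

    def spoken( identifier ):
        """Guess the words one would dictate to produce an identifier"""
        # put a space at each lower-to-upper camelCase boundary,
        # then treat underscores as spaces
        parts = re.sub( r'([a-z0-9])([A-Z])', r'\1 \2', identifier )
        return parts.replace( '_', ' ' ).lower().split()

    print( spoken( 'add_signal_watch' ) )  # ['add', 'signal', 'watch']
    print( spoken( 'GstAppSink' ) )        # ['gst', 'app', 'sink']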

Which immediately runs into issues; do you think ...

Continue reading

Listener is Crawling Forward

I spent the day working on Listener. The bulk of the work was just cleanup, getting TODO items set up, fixing the setup script, etc. The actual task of voice dictation didn't get moved very far (I got a trivial "correct that" event working, but it doesn't actually make anything happen).

I also started thinking about how to integrate with other applications (and the desktop). That will likely be done via DBus services, one for the "send keys" virtual-keyboard service and another for the per-session voice-dictation context.

Continue reading

Create your own virtual keyboard with Python

So at some point I need the voice dictation client to be able to do basic interactions with applications on the desktop (think typing text and the like). So how do I go about doing that? I want to be compatible with Wayland when it shows up, but still work on X (since that's where I'm working now). That would seem to preclude using X event sending. What about making a "virtual keyboard" that actually sends the events through the Linux kernel event subsystem?
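
Here's the shape of the idea, sketched with python-evdev's UInput wrapper (the actual spike talks to uinput more directly, and you need permission on /dev/uinput for any of this to work):

    from evdev import UInput, ecodes as e
    import time

    # create a virtual keyboard; with no capabilities given, UInput
    # enables all key codes by default
    ui = UInput()
    time.sleep( 1 )  # give the desktop time to notice the new "keyboard"

    ui.write( e.EV_KEY, e.KEY_A, 1 )  # key down
    ui.write( e.EV_KEY, e.KEY_A, 0 )  # key up
    ui.syn()   # flush the events to the kernel
    ui.close()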

The resulting spike-test (using uinput) is checked into the listener project. It seems to ...

Continue reading

GStreamer Level Plugin Monitoring

So you have an audio stream where you'd like to get a human-friendly readout of the current audio level. You add a level component, but how do you actually get the level messages it generates?

    # the bus here comes from the pipeline that contains the level element
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect( 'message', self.on_level )

It really seems that you *should* be able to use element.connect(), but there doesn't seem to be a suitable signal on the level element to which to connect. So, you wind up having to process all of the bus messages and look for your level message...

    def on_level( self, bus, message ):
        """Level message was received"""
        if message ...
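
For completeness, a hedged sketch of that filtering using GStreamer 1.0 / PyGObject method names (a separate illustration, not the post's actual handler):

    def on_message( self, bus, message ):
        """Process every bus message, keeping only level messages"""
        structure = message.get_structure()
        if structure and structure.get_name() == 'level':
            # the structure carries per-channel 'rms', 'peak' and 'decay'
            # fields (values in dB); to_string() displays them without
            # fighting GValueArray handling across PyGObject versions
            print( structure.to_string() )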

Continue reading

wx in a VirtualEnv (for RunSnakeRun)

Since I got asked about this in email I'll post it here for the google-verse. Say you want to allow your developers to use RunSnakeRun running in a virtualenv on an Ubuntu distribution. You'll recall that normally to run RSR as a utility you do:

$ sudo apt-get install python-wxgtk2.8
$ pip install --user SquareMap RunSnakeRun
$ runsnake

That gets a bit more complex when you want to put RSR in a virtualenv (the question was actually how to make this work on many, many workstations using Puppet, but you Puppet people can figure that out). Building wxPython is not ...
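
One common workaround (not necessarily the full post's answer) is to let the virtualenv see the system-wide wx rather than building wxPython inside it:

$ sudo apt-get install python-wxgtk2.8
$ virtualenv --system-site-packages rsr-env
$ source rsr-env/bin/activate
(rsr-env)$ pip install SquareMap RunSnakeRun
(rsr-env)$ runsnake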

Continue reading