So at this point Listener can generate a language model from a (Python) project, and the dictation results from that are reasonable but by no means perfect. Still, you can dictate text as long as you stick to the words and statements already used in the project. There are some obvious reworks now showing up:
- we need the dictionaries to chain, and we likely need to extract a base "Python" statement-set by processing a few hundred projects (yay open source)
- we need separate statement, dictionary, etc. storage for the automatically generated and the actual user-generated stuff, so that on a git pull you can rebuild the auto-generated data without losing your custom dictionaries and recorded/corrected phrases
- we need some way to weight certain statements (e.g. meta-commands) highly
- we need to start working on "context command" mechanisms so that the dictation actually generates the expected text
- we need to use the base (large) dictionary a *lot*, as every time there's a missed word we'll want to look up its pronunciation there
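For that last point, here's a minimal sketch of what the base-dictionary lookup could look like, assuming a CMUdict-style file (one `word  PH ON EME` entry per line, alternate pronunciations written `word(2)`, comment lines starting with `;;;`). The function names and exact handling are mine, not Listener's actual code:

```python
def load_base_dictionary(lines):
    """Parse CMUdict-style lines into {word: [pronunciations]}.

    Each pronunciation is a list of phoneme strings; alternates
    like "read(2)" are folded into the same word's list.
    """
    pronunciations = {}
    for line in lines:
        if not line.strip() or line.startswith(';;;'):
            continue  # skip blank lines and comments
        word, phones = line.split(None, 1)
        word = word.split('(')[0].lower()  # "read(2)" -> "read"
        pronunciations.setdefault(word, []).append(phones.split())
    return pronunciations

def lookup_missing(words, base):
    """Return {word: pronunciations} for missed words the base dictionary knows."""
    return {w: base[w] for w in words if w in base}
```

Loading the full dictionary once and probing it per missed word keeps the common case (a handful of unknown words per correction) cheap.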
Anyway, not going to get it all done today, but at least there's something approaching a voice-dictation system poking through.
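As a rough illustration of the statement-set extraction mentioned above, a first corpus pass over project source could lean on the stdlib `tokenize` module; the per-line phrase format here is my assumption, not Listener's actual output:

```python
import io
import tokenize

def source_to_phrases(source):
    """Turn Python source into per-line word sequences for a
    language-model training corpus: NAME tokens (identifiers and
    keywords) are kept, punctuation and literals are dropped."""
    by_line = {}
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME:
            by_line.setdefault(tok.start[0], []).append(tok.string)
    return [' '.join(words) for _, words in sorted(by_line.items())]
```

Running this over a few hundred open-source projects and merging the resulting phrases would give the base "Python" statement-set a reasonable starting shape.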