Here, for instance, is a radical (but not particularly new) idea in computer language design:
The computer records all interactions with the programmer and shares its understanding of the meanings and implications of any given set of events/contexts with every other computer in the system.
With every spare bit of processing and communication power, the computer attempts to resolve all of the various statements the user/programmer has made, using contextual clues to determine the final form of the "software" (interface).
Computers share recipes in the form of more detailed plain-text descriptions of a particular concept, including exceptions to the common case that various aspects of a situation might indicate. This sharing is two-way, and it allows contextual frameworks to be queried for commonalities for statistical modelling.
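To make the recipe-sharing idea concrete, here is a minimal sketch in Python. The `Recipe` structure and `common_concepts` query are my own hypothetical names, not part of any existing system: each recipe is a plain-text description of a concept plus exceptions keyed by the contextual cue that triggers them, and the two-way query finds the concepts two machines could compare notes on.

```python
from dataclasses import dataclass, field

@dataclass
class Recipe:
    """A shared, plain-text description of a concept (hypothetical structure)."""
    concept: str
    description: str
    # Exceptions to the common case, keyed by the contextual cue that triggers them.
    exceptions: dict = field(default_factory=dict)

def common_concepts(local, remote):
    """Two-way query: which concepts do both machines already describe?"""
    return {r.concept for r in local} & {r.concept for r in remote}

# One machine knows "sort" with an exception for locale-aware text;
# another knows "sort" and "search". They can compare notes on "sort".
a = [Recipe("sort", "order items ascending", {"locale text": "use collation rules"})]
b = [Recipe("sort", "order items ascending"), Recipe("search", "find an item")]
print(common_concepts(a, b))  # {'sort'}
```

The interesting part is the exception table: the common case stays a one-line description, and context only comes into play when a cue matches.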
The software is never finished; it just learns as it goes. "No, I meant" not only changes the current state but also trains the given computer, and every computer with which it shares the information. (iterative/assumptive)
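The "No, I meant" loop can be sketched in a few lines. Everything here is illustrative, assuming a toy `Interpreter` class of my own invention: a correction revises the local reading of a phrase and then propagates the lesson to peers, who adopt it unless they already learned something different.

```python
class Interpreter:
    """Toy sketch of a machine that revises its reading of a phrase
    when corrected, and shares the lesson with its peers."""

    def __init__(self):
        self.meanings = {}   # phrase -> learned meaning
        self.peers = []      # other Interpreters it shares information with

    def interpret(self, phrase):
        # Fall back to a guess when nothing has been learned yet.
        return self.meanings.get(phrase, f"best guess for {phrase!r}")

    def no_i_meant(self, phrase, meaning):
        # Correct the current state...
        self.meanings[phrase] = meaning
        # ...and train every peer, without clobbering what a peer
        # has already learned from its own user.
        for peer in self.peers:
            peer.meanings.setdefault(phrase, meaning)

a, b = Interpreter(), Interpreter()
a.peers.append(b)
a.no_i_meant("open it", "open the most recent file")
print(b.interpret("open it"))  # open the most recent file
```

The `setdefault` is one possible policy choice: shared corrections fill gaps rather than override a peer's own training, which is roughly the "assumptive" half of iterative/assumptive.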
Obviously all of this should be hooked up to voice dictation and eye tracking, but it's not a prerequisite for making the system useful.