Dreaming about binary-hop

Starting out, I imagined bunny hop as classifying binary images, like
the classic machine learning task of MNIST handwriting recognition.

Classification sucks though; it's not what Hopfield nets are trying to
do: they gradually fade whatever you gave 'em, bit by bit, into exactly
one of their memories. So you would have to wait a long time for your
extra classifier bytes to settle into an identifiable answer. Much
better would be to have an input that improves gradually by having
some bits flipped.

Separately, it needs to be automatic, in the sense of being passive,
for it to be a super power that is different from just being able to
program normally. This requirement has two facets: it must be
programmable (a minimal lisp itself), and autonomous once started,
similar to the lispusers DONZ program in interlisp.

DONZ gives your shrunken windows (iconified  windows in modern window
manager   parlance)  a clock and list of interjections,  so they will
sporadically   call  out  to you by name  with  hopefully   important
reminders  about themselves.   Other lispusers programs  have special
defaults.   The defaults are silly and on a random timer, but you can
do what you want with the inputs  and programs  and reminder  timers.
Speech  bubbles pop up from the shrunken  windows as they chatter  at
you. 

One more consideration: I want to use a streaming approach to inputs.

While reasoning about black and white images, it occurred to me that
instead of working on entire 64x64 images at once, I could layer my
net and stream through, considering 8x64 bands of the image (with
8x64 memories), and then afterwards reason across the bands, maybe a
subregion of the bands.
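
As a minimal sketch of the layout I have in mind (assuming the 64x64
image is stored row-major as a 4096-bit bit-vector; image-band and
*image* are names I'm inventing here, not settled code), pulling out
bands looks like:

(defun image-band (image k)
  "Return band K (rows 8k .. 8k+7) of IMAGE as a fresh 512-bit vector."
  (subseq image (* k 512) (* (1+ k) 512)))   ; 8 rows x 64 columns = 512 bits

; streaming is then just a loop over bands, each band handed to its own
; band-level net, with *image* being whatever holds the picture:
; (loop for k below 8 collect (image-band *image* k))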

Now we can identify these 8x64 bands with ~ lines of ASCII text in
common lisp, since 8x64/7 ~ 73 characters of ASCII (one short line),

or with single names in interlisp, since 8x64/16 = 32 interlisp
characters (*IIRC* interlisp uses 16 bits per character: 8 bits to
specify a character set, and then 8 bits to specify a character within
that character set, since in general many non-ASCII characters are
supported). That is the size of a name that you would write, such as a
variable name.
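
Sanity-checking that arithmetic at a repl:

; (floor (* 8 64) 7)  =>  73, 1   ; ~73 seven-bit ASCII characters per band
; (/ (* 8 64) 16)     =>  32      ; 32 sixteen-bit interlisp characters per band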

So.  So so so so. Up to here I think I have enough  equipment  to use
bunny hop to reimplement interlisp's do-what-i-mean's spellcheck in a
quirky way, which I am naming do-what-I-hop. 

One application of interlisp DWIM is spell-checking, so I could write

<- (lis 1 2)

lis (in EVAL) -> LIST ? yes
(1 2)

(Where yes is a single tap of y)

In 7-bit ASCII, this spelling mistake looks like this (upper-casing
for brevity):

Input:  1001100100100110100110000000

Memory: 1001100100100110100111010100

; Aside: (loop for ch across "LIST" do (format t "~2,7,'0r" (char-code ch)))
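
To make the convergence concrete, here's a rough common lisp sketch (a
toy of my own, not the eventual bunny hop code; string->bits, store and
recall are names I'm inventing here) of a one-memory Hopfield net over
those 28-bit patterns, with bits encoded as +1/-1: store "LIST"
Hebbian-style, then recall from the padded "lis".

(defun string->bits (s n)
  "7-bit ASCII codes of S, upper-cased, zero-padded out to N bits, as +1/-1."
  (let ((bits (make-array n :initial-element -1)))   ; padding bits are 0, i.e. -1
    (loop for ch across (string-upcase s)
          for base from 0 by 7
          do (dotimes (k 7)
               (setf (aref bits (+ base k))
                     (if (logbitp (- 6 k) (char-code ch)) 1 -1))))
    bits))

(defun store (patterns n)
  "Hebbian weight matrix (zero diagonal) for +1/-1 PATTERNS of length N."
  (let ((w (make-array (list n n) :initial-element 0)))
    (dolist (p patterns w)
      (dotimes (i n)
        (dotimes (j n)
          (unless (= i j)
            (incf (aref w i j) (* (aref p i) (aref p j)))))))))

(defun recall (w state &optional (sweeps 5))
  "Nudge STATE towards a stored memory by repeated asynchronous sign updates."
  (let ((n (length state)))
    (loop repeat sweeps
          do (dotimes (i n)
               (let ((field (loop for j below n
                                  sum (* (aref w i j) (aref state j)))))
                 (setf (aref state i) (if (minusp field) -1 1)))))
    state))

; e.g. (recall (store (list (string->bits "LIST" 28)) 28)
;              (string->bits "lis" 28))
; settles on the "LIST" pattern above, since the padded "lis" starts out
; closer to that memory than to its negation.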

What I'm imagining for do-what-I-hop is to passively converge entered
names to memories in the background while the line is being entered. 

NB: DWIM is not a spellcheck per se. It's interlisp's facility for
defining comprehension of different language rules (such as fuzzily
checking entered text against names in present packages). For example,
FOR loops and record accesses are implemented with special rules using
DWIM. But passive memory-based autocorrect is one starting point for
me.

From there, if I have lists of automatched lisp names, that's enough
for the evalqt to eval me some lisp, whether a small separate lisp or
interlisp/common lisp as such.

I was going  to write  about  how I'm envisaging  streams  and copied
streams and concatenated  copies of streams working,  but this stream
of consciousness has gone on for some time already now. 

* In for a penny - jotting a second idea

A way to match lines of code I am currently writing against memories
of codes I have written before (yes I am writing codes deliberately
for Martin) in another alternate-universe DWIM way.

Imagine I have previously written (in CL)

(reduce (lambda (a b) (concatenate 'string a b)) (maplist 'car *my-strings*))

 ; please ignore my bad style in applying reduce and maplist here

In order to try and recognise this as a memory, I am imagining
separating the leaf structure from the names, and then
do-what-i-hopping each of those separately. So the above reduce would
be stored in two networks (there's a rough sketch after the two lists
below) as

(i) the leaf locations

(t (t (t t) (t t t t)) (t t t))

and (ii) the names

(reduce lambda a b concatenate 'string a b maplist 'car *my-strings*)
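
Here is a rough sketch of that separation (quoted-p, leaf-structure and
leaf-names are names I'm making up for this note, not existing code);
quoted sub-forms like 'car are treated as single leaves so they come
out as one name, matching the lists above.

(defun quoted-p (form)
  "True for quoted sub-forms like 'car, which we treat as single leaves."
  (and (consp form) (eq (car form) 'quote)))

(defun leaf-structure (form)
  "Replace every leaf of FORM with T, keeping the nesting."
  (if (or (atom form) (quoted-p form))
      t
      (mapcar #'leaf-structure form)))

(defun leaf-names (form)
  "Flatten FORM into the list of its leaves, in order."
  (if (or (atom form) (quoted-p form))
      (list form)
      (mapcan #'leaf-names form)))

; (leaf-structure '(reduce (lambda (a b) (concatenate 'string a b))
;                          (maplist 'car *my-strings*)))
; => (T (T (T T) (T T T T)) (T T T))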

With  the goal of the ai recognising-  oh.  We're  doing  this  again
later. 

Using   do-what-I-mean    cleverness   on  the   names   (and   maybe
cross-referencing  them to function  / value cell meanings  from  the
leaf locations and other cleverness). 

I think this is a powerful and small (sm0l) ai to extrapolate forward
lines of code being written. 

Proof of course being in the pudding...

1) Bunny hop migration into interlisp (since I'm twiddling bits, and I
think Paolo fortuitously published a bit-twiddling package for
interlisp)

2) do-what-i-hop spellcheck edition

3) do-what-i-hop leaf-by-niggle edition * (Aside from having the word
leaf, this Tolkien story isn't really an appropriate reference here.
Well, if you try hard enough, anything can be believed.)