I need to get this post out there. I found it hard to improve on my initial hacked-together version of Hopfield network deep learning.

The gist of a Hopfield network is that you have a binary array, and edges, which you form as a Lagrangian

L = T - V

where T is in some sense the edges and V is your input/output. You start with a series of memories (binary vectors the size of V). Along with an activation function F, which in this case should be a rectified polynomial, these wholly determine your symmetric edges T.
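
For concreteness, here is a minimal sketch of the classic quadratic case (F(x) = x^2 rather than a higher rectified polynomial), where T reduces to the familiar Hebbian outer-product rule with bits coded as +/-1 spins. All of the names are mine, not the package's:

;; Classic Hebbian construction: T_ij = sum over memories of
;; m_i * m_j, with bits coded as +/-1 and a zero diagonal.
(defun bit->spin (b) (if (= b 1) 1 -1))

(defun make-weights (memories n)
  "MEMORIES is a list of N-length bit vectors; returns an NxN
symmetric weight matrix."
  (let ((tij (make-array (list n n) :initial-element 0)))
    (dolist (m memories tij)
      (dotimes (i n)
        (dotimes (j n)
          (unless (= i j)
            (incf (aref tij i j)
                  (* (bit->spin (bit m i))
                     (bit->spin (bit m j))))))))))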

These are chosen (i.e. read Hopfield's papers) to meet a condition called the Lyapunov condition (inb4 spelling). This condition means energy is a useful measure: it never increases under updates, so the system descends to a stable state from any starting point V.
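
In the quadratic case the Lyapunov function is just the usual Hopfield energy, E(s) = -1/2 * sum_ij T_ij s_i s_j; every accepted single-bit flip strictly lowers it, which is what guarantees the descent terminates. A sketch, continuing the illustrative names above:

;; Quadratic Hopfield energy; asynchronous updates never increase
;; it, so the state descends to a fixed point.
(defun energy (tij state n)
  (let ((e 0))
    (dotimes (i n (* -1/2 e))
      (dotimes (j n)
        (incf e (* (aref tij i j)
                   (bit->spin (bit state i))
                   (bit->spin (bit state j))))))))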

Normally you use Monte Carlo: update one random bit at a time until the system converges.
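
That loop might look like the following, again assuming the quadratic case and the helpers above. A real implementation would detect convergence (e.g. no flips for some number of consecutive tries) rather than capping the step count:

;; Set a random bit to the sign of its local field; that choice
;; never raises the energy.
(defun local-field (tij state n i)
  (let ((h 0))
    (dotimes (j n h)
      (incf h (* (aref tij i j) (bit->spin (bit state j)))))))

(defun mc-step (tij state n)
  "Update one random bit in place; return T if it changed."
  (let* ((i (random n))
         (new (if (minusp (local-field tij state n i)) 0 1)))
    (prog1 (/= new (bit state i))
      (setf (bit state i) new))))

(defun converge (tij state n &optional (max-steps 100000))
  (dotimes (step max-steps state)
    (mc-step tij state n)))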

Right now I'm imagining V is an (expt 2 13) bit-array that I identify with a 90x90 binary bitmap plus a 92-bit flag (90*90 = 8100, and 8100 + 92 = 8192; not realised here).

In this case I made two rough memories: the first has its first half set to 1s, the second has its second half set to 1s. Then, setting V so that its first quarter is 1s, the Monte Carlo updates fill in the second quarter of V with 1s; but if it's the last quarter that starts with 1s, they fill in the third quarter from the other memory instead (bit by bit per update).
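
Wired together with the sketches above, the toy setup is roughly this (sizes and names illustrative, not the package's actual code):

;; Memory 0: first half 1s; memory 1: second half 1s. Seeding the
;; first quarter of V should recall memory 0.
(defun demo (&optional (n 46))
  (let ((m0 (make-array n :element-type 'bit :initial-element 0))
        (m1 (make-array n :element-type 'bit :initial-element 0))
        (v  (make-array n :element-type 'bit :initial-element 0)))
    (dotimes (i (floor n 2))
      (setf (bit m0 i) 1
            (bit m1 (- n 1 i)) 1))
    (dotimes (i (floor n 4))            ; seed: first quarter only
      (setf (bit v i) 1))
    (let ((tij (make-weights (list m0 m1) n)))
      (converge tij v n)
      v)))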

While it eventually converges to one of the memories, bits could conceivably be flipped and unflipped more than once at different times during the convergence. If you don't like MC single-bit flips (which are considered more realistic), breadth-first updates are logically similar, as sketched below.
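
A breadth-first (synchronous) sweep computes every bit's new value from the current state, then writes them all at once. One caveat: fully synchronous updates can settle into a two-state oscillation instead of a fixed point, which is part of why the single-flip version keeps the cleaner convergence guarantee. Sketch:

;; Synchronous sweep: all bits updated from the same snapshot.
(defun sync-step (tij state n)
  (let ((next (copy-seq state)))
    (dotimes (i n)
      (setf (bit next i)
            (if (minusp (local-field tij state n i)) 0 1)))
    (replace state next)))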

This is sufficient, and more interesting than XOR or a 4x4 rule matrix, I think.

Now I'm thinking about what rules and inputs could be collected throughout the course of our days, and loaded onto various personal devices.

The whole thing, as such, could just be moved from special scope into a closure, but I don't get a lot from doing that (I tried to write improved versions, but they mostly just did more navel-gazing).
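
For what it's worth, the closure version would look something like this: the state lives in a LET instead of special variables, and you hand back a function. Illustrative only, using the hypothetical helpers above:

;; Network state captured in a closure rather than special scope.
(defun make-hopfield (n memories)
  (let ((tij (make-weights memories n))
        (state (make-array n :element-type 'bit :initial-element 0)))
    (lambda (v)
      (replace state v)          ; seed from V
      (converge tij state n)     ; relax to a stable state
      state)))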

I'm really excited about this package.

Messy example:

#<"binry-hop" package>
binry-hop>
(SETTING MEMORIES)
(TRAINING)
(SETTING TOP QUARTER)
(TOP ~QUARTER SEEDED)
(MEMORY 0 SAMPLE)
(1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0)

INITIAL-SAMPLE
(1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0)

MC-UPDATES
POTENTIAL-SAMPLE
(1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0)

(BOT ~QUARTER SEEDED)
(MEMORY 1 SAMPLE)
(0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1)

INITIAL-SAMPLE
(0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1)

MC-UPDATES 
POTENTIAL-SAMPLE 
(0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 1
1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1)
NIL
binry-hop>