Reply by June 30, 2017
Anyway the history does go back quite far even with things like cryogenic associative memory.  
https://pdfs.semanticscholar.org/1055/ea451c09078daa3281aecba7c78800d0274d.pdf
Reply by June 30, 2017
Whatever dude.  I think there are some prior papers about accelerating the solution of sets of simultaneous equations with random projections. That is probably related to what is going on here. 
Anyway anyone who wants a fast associative memory based on simple mathematical precepts can proceed. 
The claim in the title of the post is that it is a matter of the most utter simplicity. Of course you humans have been fully aware of how to make such a sort of associative memory for generations.  
Reply by June 30, 2017
I see no prior papers. However, there is a well known process - first you have to shove it down their throats, then they will turn around and tell you it is obvious.
Reply by Steve Pope June 30, 2017
<bitterlemon40@yahoo.ie> wrote:

> I meant part of the associative memory algorithm results in the solution
> of a set of simultaneous linear equations faster than O(n^3), when
> learning the pattern to symbol associations.
This is because your linear equations do not have arbitrary coefficients, so techniques such as associative memory, hashing, or lookup tables provide a speedup. Not a new discovery.

Steve
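For instance (a minimal numpy sketch of that kind of speedup, not taken from any particular paper): when the coefficient matrix is fixed across solves, the O(n^3) work can be paid once up front, after which every right-hand side costs only O(n^2):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))   # fixed (non-arbitrary) coefficient matrix

# Pay the O(n^3) cost once.  (An LU factorization would be the
# numerically preferred choice; the explicit inverse just keeps this
# sketch short.)
A_inv = np.linalg.inv(A)

# Every subsequent right-hand side then costs only a matrix-vector
# product, O(n^2):
b = rng.standard_normal(n)
x = A_inv @ b

assert np.allclose(A @ x, b)
```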
Reply by June 29, 2017
I meant that part of the associative memory algorithm results in the solution of a set of simultaneous linear equations faster than O(n^3), when learning the pattern to symbol associations.
I guess a FSM is nice too.
I was just broadly hinting that the associative memory algorithm provided might contain things of interest to those doing DSP.
Reply by rickman June 28, 2017
bitterlemon40@yahoo.ie wrote on 6/28/2017 8:05 PM:
> I started writing some PSK31 software one time and it went very well
> until I got to the actual symbol decoding part. I'll see if I can try
> again using associative memory for decoding:
> https://groups.google.com/forum/#!topic/artificial-general-intelligence/C-LJSnjaz2c
>
> Anyway it is very interesting from a DSP point of view, particularly
> getting a particular value under coherency and almost cancelled to
> nothing Gaussian noise under incoherency. Part of the deal is you get a
> low computational cost way to solve a set of simultaneous equations.
When you say "low computational cost", I don't follow the need. PSK31 runs at 31.25 baud. What level of computation do you feel this is going to require to do the symbol translation?

Seems to me a fairly simple state machine would do the job very nicely. Better yet, a VERY simple finite state machine can be used to separate the characters, and a somewhat sparse 1024-entry lookup table will convert the Varicode symbols to ASCII characters. Why would you need associative memory?

I'd be happy to provide some pseudo code or something if you are interested. But maybe I am missing something about Varicode that makes it more complex?

-- 
Rick C
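As an illustration of the state-machine-plus-table idea, here is a minimal Python sketch, assuming the standard Varicode framing (codewords never contain "00" internally, and characters are separated by at least two consecutive zero bits). The table below is a tiny illustrative subset - verify every entry against a full published Varicode table before relying on it:

```python
# A handful of short Varicode codewords (illustrative subset only --
# check against the published PSK31 Varicode table).
VARICODE = {
    "1": " ",
    "11": "e",
    "101": "t",
    "111": "o",
    "1011": "a",
    "1101": "i",
    "1111": "n",
}

def decode_varicode(bits):
    """Trivial FSM: split the bit stream on '00', look each codeword up."""
    out = []
    code = ""
    zeros = 0
    for b in bits:
        if b == "0":
            zeros += 1
            if zeros == 2:       # "00" marks the end of a character
                if code:
                    out.append(VARICODE.get(code, "?"))
                code = ""
            # further zeros are idle time between characters: ignore
        else:
            if zeros == 1:       # a lone 0 was part of the codeword
                code += "0"
            zeros = 0
            code += "1"
    if code:                     # flush a trailing codeword, if any
        out.append(VARICODE.get(code, "?"))
    return "".join(out)
```

For example, `decode_varicode("101001100")` splits the stream into the codewords `101` and `11`. A real decoder would use the full ~10-bit-deep table (hence the 1024 entries) instead of this dict.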
Reply by June 28, 2017
I started writing some PSK31 software one time and it went very well until I got to the actual symbol decoding part. I'll see if I can try again using associative memory for decoding: https://groups.google.com/forum/#!topic/artificial-general-intelligence/C-LJSnjaz2c

Anyway it is very interesting from a DSP point of view, particularly that you get a particular value back under coherency, while under incoherency the Gaussian noise terms nearly cancel to nothing. Part of the deal is you get a low computational cost way to solve a set of simultaneous equations.
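A rough sketch of that coherent/incoherent behaviour (my own toy version of a random-projection associative memory, not the specific algorithm from the linked post): store scalar values by superposing value-weighted random ±1 key vectors; the dot product with a stored key then recovers its value coherently, while the other stored pairs contribute only small Gaussian-like crosstalk:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096          # key dimension; crosstalk shrinks as n grows
m = 10            # number of stored (key, value) pairs

# Random +/-1 key vectors and the scalar values to associate with them.
keys = rng.choice([-1.0, 1.0], size=(m, n))
values = rng.standard_normal(m)

# Store: superpose all value-weighted keys into one memory vector.
memory = (values[:, None] * keys).sum(axis=0)

# Recall: the matching key's terms add coherently (n * value), while the
# other m - 1 terms add incoherently, leaving crosstalk with standard
# deviation on the order of sqrt((m - 1) / n).
recalled = memory @ keys[3] / n
print(recalled, values[3])    # close, up to the small crosstalk noise
```

With these sizes the crosstalk standard deviation is around sqrt(9/4096) ≈ 0.05 per unit of stored value, so the recalled number sits close to the stored one.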