Posted by ●July 23, 2018
On Tuesday, July 24, 2018 at 2:27:03 AM UTC+12, Steve Pope wrote:
> <email@example.com> wrote:
> >On Tuesday, July 17, 2018 at 4:40:50 AM UTC+12, Rob Gaddi wrote:
> >> Linked in today's Ganssle. Interesting stuff.
> >I am guessing the Tensor Tech is doing for NNs what DSP processors did
> >for DSP. However, I am guessing you could use the Tensor one to do DSP
> >as well.
> Reducing everything to 8 bits seems limiting.
> I notice one of the architects is Dave Patterson .. I worked with
> him on a CPU once, 20 years ago.
8 bits is rather limiting!
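To make the limitation concrete, here is a small sketch (my own illustration, not how the TPU itself quantizes — the function names are made up for the example) of round-tripping weights through 8-bit integers with a single scale factor:

```python
# Sketch of why 8-bit quantization is limiting: round-tripping weights
# through int8 loses fine-grained differences below the step size.
import numpy as np

def quantize_int8(x):
    """Affine-quantize a float array to int8 with a single scale factor."""
    scale = np.max(np.abs(x)) / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
err = np.max(np.abs(w - w_hat))
print(f"max round-trip error: {err:.6f} (step size {s:.6f})")
```

Any structure in the weights finer than half the step size is simply gone after the round trip.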
Posted by Kevin Neilson●July 23, 2018
On Monday, July 16, 2018 at 10:40:50 AM UTC-6, Rob Gaddi wrote:
> Linked in today's Ganssle. Interesting stuff.
> Rob Gaddi, Highland Technology -- www.highlandtechnology.com
> Email address domain is currently out of order. See above to fix.
I was confused about a couple of items. The multiplier array is 256x256
multipliers--I assume that is broken down into submatrices, with a submatrix for
each layer? I'm also confused about the "spiral" demo as to exactly what the inputs
are, but I see there is another link about that so I'll check that out.
Posted by Steve Pope●July 23, 2018
Kevin Neilson <firstname.lastname@example.org> wrote:
>I was confused about a couple of items. The multiplier array is 256x256
>multipliers--I assume that is broken down into submatrices, with a
>submatrix for each layer?
If it's like the CPU I once worked on with one of the authors, there
would be one or more very large crossbars interfacing among the
arithmetic units and between them and the memory busses. You would
try to get high utilization of the multipliers for the intended
applications, using very extensive simulations to guide the design.
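For a flavor of the utilization question, here's a back-of-the-envelope model (my own sketch, not Google's actual scheduler or crossbar design): if the 256x256 multiplier array processes matrix multiplies in 256-wide tiles, each dimension gets padded up to a tile multiple, and small or skinny layers fill the array poorly.

```python
# Back-of-the-envelope utilization of a 256x256 MAC array on a matmul of
# shape (M, K) x (K, N), assuming each dimension is padded to a multiple
# of the tile size. Illustrative model only, not the actual TPU scheduler.
import math

ARRAY = 256  # the array discussed above: 256x256 multipliers

def utilization(m, k, n, tile=ARRAY):
    useful = m * k * n  # multiply-accumulates the layer actually needs
    tiles = (math.ceil(m / tile) * math.ceil(k / tile) * math.ceil(n / tile))
    issued = tiles * tile ** 3  # MACs issued once every dimension is padded
    return useful / issued

for m, k, n in [(256, 256, 256), (1000, 1000, 1000), (1, 1024, 1024)]:
    print(f"({m}, {k}, {n}) -> {utilization(m, k, n):.1%}")
```

Even in this crude model, a batch of one (the (1, 1024, 1024) case) lights up only one row of the array, which is one reason batch size matters so much for these accelerators.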
Posted by ●July 26, 2018
Quantization is the enemy of evolution. Eight-bit precision would not work well
with what I do. It is lucky, though, that biological systems are heavily quantized,
especially bacteria and viruses. If they weren't, we simply wouldn't exist. The
crossover mechanism we use is a weak optimizer, but it does make the cost landscape
less rough than the one asexual microbes have to contend with.
Hence we can adapt to pathogens despite having a far longer time between generations
and a far lower population count.
In a non-quantized artificial system a perturbation in any of the basis directions
gives a smoothly changing alteration in cost. A mutation in all dimensions gives a
new cost that is a summary measure of multiple clues. Following mutations downhill
in cost means following multiple clues about which way to go. If there were
quantization in, say, many basis directions, a small movement in those directions
would give you no information about whether the movement was good or bad. You would
get no clues in those directions, which is obviously detrimental.
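The lost-clues point shows up in a toy numeric example (my own sketch, with an assumed quadratic cost and made-up numbers): a small mutation always moves a smooth cost, but once the parameters snap to a coarse grid, mutations smaller than the step size change nothing, so the search gets no directional signal.

```python
# Toy illustration: compare the cost signal from a small mutation under
# smooth vs. coarsely quantized parameters. Below the quantization step,
# the mutation is invisible to the cost function.
import numpy as np

def cost(w):
    return float(np.sum(w ** 2))  # simple bowl-shaped cost landscape

def quantize(w, step=0.5):
    return np.round(w / step) * step  # snap each coordinate to a coarse grid

w = np.array([0.1, -0.2, 0.3, 0.6, -0.8, 1.1, -1.3, 0.9])
eps = 0.01 * np.array([1.0, -1.0, 1.0, 1.0, -1.0, 1.0, -1.0, 1.0])  # small mutation

smooth_delta = cost(w + eps) - cost(w)                     # nonzero: a usable clue
quant_delta = cost(quantize(w + eps)) - cost(quantize(w))  # zero: no clue at all

print(f"smooth cost change:    {smooth_delta:+.6f}")
print(f"quantized cost change: {quant_delta:+.6f}")
```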
A point here being that artificial evolution on digital computers can be far more
efficient than biological evolution. If you accept that back-propagation is, in
some sense, a form of evolution (at a slight stretch), then you can see that a GPU
cluster can build in a few weeks the capacity to do vision that took biological
evolution many millions of years to create.
I have some kind of code here: