Reply by July 26, 2018
Quantization is the enemy of evolution.  Eight-bit precision would not work well with what I do.  It is lucky, though, that biological systems are heavily quantized, especially bacteria and viruses, since that slows their adaptation; if they weren't, we simply wouldn't exist.  The crossover mechanism we use is a weak optimizer, but it does make the cost landscape less rough than what asexual microbes have to contend with.

Hence we can adapt to pathogens despite having a far longer time between generations and a far lower population count.  
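To make the crossover point concrete, here is a toy sketch in Python (my own illustration, not code from the Thunderbird repository linked below): with uniform crossover on a separable cost, the expected cost of a child is exactly the average of its parents' costs, so recombination blends solutions rather than leaping blindly across a rough landscape.

import random

def uniform_crossover(mom, dad):
    # Each gene is copied from one parent chosen at random.
    return [random.choice(pair) for pair in zip(mom, dad)]

def cost(x):
    # Separable toy cost: for this, E[cost(child)] is exactly the
    # average of cost(mom) and cost(dad).
    return sum(v * v for v in x)

random.seed(0)
mom = [random.uniform(-1, 1) for _ in range(16)]
dad = [random.uniform(-1, 1) for _ in range(16)]
kid = uniform_crossover(mom, dad)
print(cost(mom), cost(dad), cost(kid))  # child's cost sits near the parents' average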

In a non-quantized artificial system, a perturbation in any of the basis directions gives a smoothly changing alteration in cost.  A mutation across all dimensions gives a new cost that is a summary measure of multiple clues, so following mutations downhill in cost means following multiple clues about which way to go.  If there were quantization in many of the basis directions, a small movement along those directions would give you no information about whether the movement was good or bad.  You would get no clues in those directions, which is obviously detrimental.
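A minimal sketch of that effect (my own toy example, with an invented grid step, not anyone's production code): the same mutation-driven hill climber makes steady progress on a smooth cost, but stalls once the cost is snapped to a coarse grid, because sub-grid mutations return no clues.

import random

def cost(x):
    # Smooth toy cost: sum of squares, minimum at the origin.
    return sum(v * v for v in x)

def quantized_cost(x, step=0.5):
    # Snap each coordinate to a grid before scoring: mutations smaller
    # than the grid step usually change nothing, so they carry no signal.
    return cost([round(v / step) * step for v in x])

def hill_climb(cost_fn, dims=10, iters=5000, sigma=0.01):
    x = [random.uniform(-1, 1) for _ in range(dims)]
    best = cost_fn(x)
    for _ in range(iters):
        trial = [v + random.gauss(0, sigma) for v in x]
        c = cost_fn(trial)
        if c < best:  # follow the clue downhill
            x, best = trial, c
    return best

random.seed(1)
print("smooth cost:   ", hill_climb(cost))            # steadily decreases
print("quantized cost:", hill_climb(quantized_cost))  # stalls almost at once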

A point here being that artificial evolution on digital computers can be far more efficient than biological evolution.  If you accept that back-propagation is, at a slight stretch, a form of evolution, then you can see that a GPU cluster can build in a few weeks a capacity for vision that took biological evolution many millions of years to create.
I have some code along these lines here:
https://github.com/S6Regen/Thunderbird    
Reply by Steve Pope July 23, 2018
Kevin Neilson  <kevin.neilson@xilinx.com> wrote:

>I was confused about a couple of items. The multiplier array is 256x256
>multipliers--I assume that is broken down into submatrices, with a
>submatrix for each layer?
If it's like the CPU I once worked on with one of the authors, there would be one or more very large crossbars interfacing among the arithmetic units, and between them and the memory busses.  You would try to get high utilization of the multipliers with the intended applications, using very extensive simulations to design this.

Steve
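For a flavor of what such a simulation measures, here is a deliberately tiny sketch (my own toy model with invented parameters, not the actual design flow of that CPU): multipliers contend for memory buses through the crossbar, each bus grants one access per cycle, and you read off how often the multipliers actually get fed.

import random

def utilization(n_mults=8, n_buses=4, cycles=10_000, p_request=0.6):
    # Each cycle, every multiplier may request a random bus; a bus can
    # grant only one request, so the other requesters stall that cycle.
    random.seed(0)
    granted = 0
    for _ in range(cycles):
        requests = {}
        for m in range(n_mults):
            if random.random() < p_request:
                requests.setdefault(random.randrange(n_buses), []).append(m)
        granted += len(requests)  # one grant per bus with any requesters
    return granted / (cycles * n_mults)

print(f"multiplier utilization: {utilization():.1%}")

A real design exploration would sweep the crossbar width and buffering against traces of the intended applications; this just shows why contention, not the multiplier count, often sets the achievable utilization.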
Reply by Kevin Neilson July 23, 2018
On Monday, July 16, 2018 at 10:40:50 AM UTC-6, Rob Gaddi wrote:
> Linked in today's Ganssle.  Interesting stuff.
>
> https://cloud.google.com/blog/big-data/2017/05/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu
>
> --
> Rob Gaddi, Highland Technology -- www.highlandtechnology.com
> Email address domain is currently out of order.  See above to fix.
I was confused about a couple of items. The multiplier array is 256x256 multipliers--I assume that is broken down into submatrices, with a submatrix for each layer? I'm also confused about the "spiral" demo as to exactly what the inputs are, but I see there is another link about that so I'll check that out.
Reply by July 23, 2018
On Tuesday, July 24, 2018 at 2:27:03 AM UTC+12, Steve Pope wrote:
> <gyansorova@gmail.com> wrote:
>
> >On Tuesday, July 17, 2018 at 4:40:50 AM UTC+12, Rob Gaddi wrote:
>
> >> Linked in today's Ganssle.  Interesting stuff.
>
> >https://cloud.google.com/blog/big-data/2017/05/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu
>
> >I am guessing the Tensor Tech is doing for NNs what DSP processors did
> >for DSP. However, I am guessing you could use the Tensor one to do DSP
> >as well.
>
> Reducing everything to 8 bits seems limiting.
>
> I notice one of the architects is Dave Patterson .. I worked with
> him on a CPU once, 20 years ago.
>
> Steve
8 bits is rather limiting!
Reply by Steve Pope July 23, 2018
<gyansorova@gmail.com> wrote:

>On Tuesday, July 17, 2018 at 4:40:50 AM UTC+12, Rob Gaddi wrote:
>> Linked in today's Ganssle. Interesting stuff.
>https://cloud.google.com/blog/big-data/2017/05/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu
>I am guessing the Tensor Tech is doing for NNs what DSP processors did
>for DSP. However, I am guessing you could use the Tensor one to do DSP
>as well.
Reducing everything to 8 bits seems limiting.

I notice one of the architects is Dave Patterson .. I worked with him on a CPU once, 20 years ago.

Steve
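For what that 8-bit reduction means concretely: the linked article describes quantizing floating-point activations and weights into 8-bit integers over a chosen value range, with the multiply-accumulates carried out at wider integer precision. A minimal sketch of range-based quantization follows (the function names and the exact scheme are my own illustration, not the TPU's documented arithmetic):

def quantize(values, num_bits=8):
    # Map floats onto unsigned num_bits-wide integers over their min/max range.
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (2 ** num_bits - 1) or 1.0  # avoid /0 if all values equal
    return [round((v - lo) / scale) for v in values], scale, lo

def dequantize(codes, scale, lo):
    # Recover approximate floats from the integer codes.
    return [c * scale + lo for c in codes]

x = [0.01, -1.5, 0.7, 3.2]
codes, scale, lo = quantize(x)
print(codes)                         # integers in 0..255
print(dequantize(codes, scale, lo))  # close to x, but only 256 distinct levels

Only 256 levels per value is exactly the limitation being complained about here, but it is also what lets a 256x256 array of integer multipliers fit on a single die.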
Reply by July 22, 2018
On Tuesday, July 17, 2018 at 4:40:50 AM UTC+12, Rob Gaddi wrote:
> Linked in today's Ganssle.  Interesting stuff.
>
> https://cloud.google.com/blog/big-data/2017/05/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu
>
> --
> Rob Gaddi, Highland Technology -- www.highlandtechnology.com
> Email address domain is currently out of order.  See above to fix.
I am guessing the Tensor Tech is doing for NNs what DSP processors did for DSP. However, I am guessing you could use the Tensor one to do DSP as well.
Reply by Kevin Neilson July 20, 2018
On Monday, July 16, 2018 at 10:40:50 AM UTC-6, Rob Gaddi wrote:
> Linked in today's Ganssle.  Interesting stuff.
>
> https://cloud.google.com/blog/big-data/2017/05/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu
>
> --
> Rob Gaddi, Highland Technology -- www.highlandtechnology.com
> Email address domain is currently out of order.  See above to fix.
Thanks; that was pretty interesting. It's not really as complex as I'd imagined.
Reply by Rob Gaddi July 16, 2018
Linked in today's Ganssle.  Interesting stuff.

https://cloud.google.com/blog/big-data/2017/05/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu

-- 
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order.  See above to fix.