
Make Hardware Great Again

Jeff Brower | June 29, 2020 | 5 comments

By now you're aware of the collective angst in the US about 5G. Why is the US not a leader in 5G? Could that also happen -- indeed, is it happening -- in AI? If we lead in other areas, why not 5G? What makes it so hard?

This hand-wringing has reached the highest levels of the US government. Recently the Wall Street Journal reported on a DoJ-promoted plan [1] to help Cisco buy Ericsson or Nokia, to give the US a leg up in 5G. This is not a new plan, having been around since 2018. Plan B is a partnership between a major US operator, a cloud company, and small radio hardware providers -- for example, Microsoft Azure and AT&T plus Parallel Wireless and Altiostar -- to create "software based 5G" (i.e. OpenRAN).

Would either of these work? The answer is no, because they don't address the root problem: hardware. The promise of 5G lies in "edge computing": automated vehicles, smart factories, and AI applications such as image and speech recognition that offer ultra high performance, adhere to privacy restrictions, and avoid storing personal data in the cloud. Meeting this promise takes a massive increase in hardware complexity and, most importantly, in computational performance. Before the US can lead in 5G and AI it needs to regain leadership in hardware. That starts with the semiconductor industry and its cornerstone, CPU chips.


CPU chips are the foundation of all cloud computing run by big tech, whether they want to admit it or not. Every time a Fortune 500 company buys an ad with Google, Facebook, or Amazon, and before you see it on your desktop, laptop, or smartphone, that ad is generated on a cloud server built with CPUs based on Intel x86 or ARM. Intel is the only major CPU chip company remaining in the US. For computational performance purposes, we can't count AMD CPUs, which are functionally compatible with Intel's (although with enhanced capabilities in some cases [2]). We can't count ARM, a UK company (or Japanese company, depending on who you ask). For 5G and AI technologies, which require extreme computational performance, we can include US companies Nvidia and Xilinx, who make "computational CPUs" [3] that run in servers, bringing the total to three. But that's it ... mobile device SoCs [4] made by Apple, Broadcom, and Qualcomm don't add to US strength in semiconductors; they are "appliance CPUs" not even sold to external customers. Over the last 30 years there have been dozens of CPU chip companies in the US, with famous names like Texas Instruments, MIPS [5], Cyrix, Mostek, Zilog, Cavium, Motorola, and AT&T Microelectronics, and many smaller outfits and startups, such as Maxim, Wintegra, Lattice, and Actel. It's also worth remembering there were multiple thriving semiconductor areas in addition to Silicon Valley, including Dallas and Boston. Not anymore.

How did we end up in this situation? When you hear big techs talk about "software only" and "serverless", you might suspect some marketing fuzzy talk, and you would be right. Over the last 20 years, the US tech industry fell hook, line, and sinker for "software defined solutions", a paradigm that assumes Intel and ARM CPU chips are all that's needed. This thinking is fine for huge server farms run by big techs, who care about collecting your search queries, hosting your social media, and billing you for IT services. But it's woefully inadequate in the face of the challenges posed by 5G and AI. Big techs, with their monolithic, advertising-focused grip on the CPU market, are essentially blocking CPU innovation. They are replacing hard work and relentless R&D in CPU design with marketing slogans implying only their software matters. In the real world, no cloud technologies exist without hardware, which has to be designed and then manufactured. Just as leadership in finance, derivatives, and high frequency trading does not make a country strong and independent, neither does leadership in social media, apps, and cloud farms. The pandemic has shown clearly the importance of supply chains and the intrinsic value of making things. Are we to repeat that mistake with 5G and AI?

CPU innovation has always been a never ending conflict between high performance and low power consumption. These two design objectives are like cobra and mongoose: the moment they meet, they fight. They put the "hard" into hardware. One way to think about this is to compare a single cloud server, filled with Intel x86 CPUs and Nvidia GPUs and consuming 1000+ Watts, with the human brain, a fraction of the size and consuming 40 Watts. How many such servers does it take to obtain the performance of one brain? 10,000? More? Even if we assume a rough number, 80 years after the first computers we still have no idea how to connect 10,000 servers, how to power them with 40 Watts, or what software to run to make them equal a brain. The limitations of silicon have brought Moore's law almost to a standstill and we are still not even close. Do we need to be obsessed with sub 10 nm resolution, 3D ICs, exotic new IC materials, and other incremental improvements? We can still use silicon, but we need to change the fundamental design approach, and devise new metrics, a new law, for CPUs that incorporate AI and deep learning [6]. According to brain biology, the future of CPU innovation lies in massive parallelism, slower connections to conserve power, and memories 1000s of times larger but with tolerance for high error rates -- all things perfected already by evolution. If only we knew how.
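
To put rough numbers on this comparison, here is a back-of-envelope sketch in Python, using the figures above (1000 Watts per server, 40 Watts for the brain) and treating the 10,000-server count as a purely hypothetical guess rather than a measured equivalence:

    # Back-of-envelope power comparison, using the figures quoted above.
    # The 10,000-server count is a hypothetical guess, not a measurement.

    SERVER_WATTS = 1000          # one cloud server full of x86 CPUs and GPUs
    BRAIN_WATTS = 40             # approximate human brain power budget
    SERVERS_PER_BRAIN = 10_000   # hypothetical server count to match one brain

    farm_watts = SERVER_WATTS * SERVERS_PER_BRAIN   # total farm power draw
    efficiency_gap = farm_watts / BRAIN_WATTS       # how many times more power the farm burns

    print(f"Server farm power: {farm_watts / 1e6:.1f} MW")   # 10.0 MW
    print(f"Human brain power: {BRAIN_WATTS} W")
    print(f"Efficiency gap:    {efficiency_gap:,.0f}x")      # 250,000x

Even under these charitable assumptions the gap works out to roughly 250,000x -- five orders of magnitude -- which is the scale of the problem silicon CPU design is up against.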

As it stands, who will lead this difficult CPU innovation? Obviously not Intel, who seem to be banking their future on obtaining waivers to sell to Huawei. What about ARM? ARM has done a good job of reducing power consumption, but massively parallel, high performance architecture is not their forte. What about Texas Instruments, who at one time was the world leader in combining high performance and low power consumption into small chip packages? Unfortunately they shot themselves in both feet a few years back by exiting the CPU market. TI did not foresee the rise of AI, and neither Intel nor TI foresaw the rise of deglobalization.

The US government's real problem is to revitalize the computer hardware industry, starting with CPU chips. DoC and DoJ guys would be well advised to think about how to (i) reorganize Intel and AMD to refocus on CPU innovation and break their subordination to big techs, (ii) fund neural net chip R&D at universities and startups, and (iii) incentivize Texas Instruments to re-enter the CPU market (or broker a sale of their CPU technology to an outfit like Amazon or Tesla), along with other strategies aimed at the building blocks of 5G and AI. A government-backed "Hardhattan Project" for 5G and AI, organized as a pairing of Cisco and Ericsson at the top, server providers Dell and HP in the middle, and key chip players as the foundation, would both get the ball rolling now and provide a framework for expansion over time.

[1] DoJ-promoted plans involving telecom must be taken seriously. Bill Barr was general counsel and executive VP at Verizon (previously GTE) from 2000 to 2008.

[2] A good read on this: Why AMD Is Intel's Only Competitor

[3] Nvidia chips are known as GPUs and Xilinx chips as FPGAs.

[4] SoC = System on a Chip

[5] Somehow, all MIPS IP is now owned by China. Wave Computing and Imagination were involved; the former is bankrupt, the latter is controlled by Chinese investors.

[6] Neuromorphic computing is a step in this direction.


Comment by umeshdeshmukh, June 30, 2020

Hi Jeff, good article. I want to point out some of the projects which are interesting:

1. The OpenROAD project is trying to fully automate the physical design flow.

https://theopenroadproject.org/

2. Electronics Resurgence Initiative -- a DARPA project that includes government, industry, and universities.

https://www.darpa.mil/work-with-us/electronics-resurgence-initiative

ERI summit video playlist:

https://www.youtube.com/playlist?list=PL6wMum5UsYvbgCsYe_QDMtI6HZvDeZ6MD

This includes AI, novel computer architecture approaches, carbon nanotube based 3D ICs, etc. Really interesting.

3. RISC-V 

The RISC-V project started at UC Berkeley. It offers a free instruction set architecture, so you can freely innovate around it, and there are already companies shipping products containing RISC-V CPUs, e.g. Nvidia, Western Digital, Microsemi, etc.

https://riscv.org/

Many of the above projects started in the US or at US universities, and companies are a major part of them, so I don't think the US is behind in hardware research. What is your opinion? Do you think such initiatives should have been taken earlier?

- Umesh

Comment by jbrower, June 30, 2020

Umesh-

Great links Umesh. I agree that university and startup R&D is in decent shape. Unfortunately their results are not filtering up to US semiconductor outfits. Because Intel effectively has no US competition, they are intent on reinforcing their monopoly rather than spending urgently on radical CPU design R&D. They make the occasional acquisition (Movidius, Nervana, Altera, Habana), then you never hear of them again. One exception is that Altera has morphed into FPGA-based inference accelerator PCIe cards. But these are still far from the GPU level of widespread acceptance, and I don't think Intel has the long term drive/hunger to compete with Nvidia, as long as everyone puts GPUs into x86 servers anyway.

Besides Intel, who else would spend massively on AI-centric CPUs?

-Jeff

PS. I do give Intel credit for their neuromorphic computing R&D effort.

Comment by itshina, August 11, 2020

Great article.

Thank you for sharing very important information and highlighting important factors about hardware systems (https://www.ssla.co.uk/embedded-hardware/).

Really appreciate your efforts

Comment by rharding6464, October 20, 2020

Hello.

I am working on C805

Comment by aclark, October 24, 2020

Jeff, this is a really well thought out piece. I really liked the mongoose/cobra analogy.

I really can't add too much to the discussion. As a hardware designer (not chips), I am a bit dismayed with the attitude that the target really doesn't matter since we can abstract everything with a compiler.

In my world, we don't need megacore, go-fast parts all the time, but clearly the world of "DSP" devices has shrunk. Obviously, parallelism goes a long way in scaling performance given Moore's Law.

I really like the latest ADI 2156x DSPs. It seems like so many of the rest of the manufacturers just want to wrap some peripherals around a Cortex-A and call it a day.

I know I am a bit off topic with this comment, but we have industry gaps in the mid-performance areas too. It's not just AI, 5G, etc.


Al Clark
Danville Signal



