
Make Hardware Great Again

Jeff Brower | June 29, 2020 | 3 comments

By now you're aware of the collective angst in the US about 5G. Why is the US not a leader in 5G? Could that also happen -- indeed, is it happening -- in AI? If we lead in so many other areas of technology, why not 5G? What makes it so hard?



This hand-wringing has reached the highest levels of the US government. Recently the Wall Street Journal reported on a top-level government plan to help Cisco buy Ericsson or Nokia, to give the US a leg up in 5G. This is not a new plan; it's been around since 2018. Plan B is a partnership between a major US operator, a cloud company, and small radio hardware providers -- for example Microsoft Azure and AT&T plus Parallel Wireless and Altiostar -- to create "software based 5G" (i.e. OpenRAN).

Would either of these work? The answer is no, because they don't address the root problem: hardware. The promise of 5G lies in "edge computing", for automated vehicles, smart factories, and AI applications such as face and speech recognition that adhere to privacy restrictions and avoid storing personal data in the cloud. Meeting this promise takes a massive increase in hardware complexity and, most importantly, in computational performance. Before the US can lead in 5G and AI it needs to regain its leadership in hardware. That starts with the semiconductor industry and its cornerstone, CPU chips.


CPU chips are the foundation of all cloud computing run by big tech, whether they want to admit it or not. Every time a Fortune 500 company buys an ad with Google, Facebook, or Amazon, before you see it on your desktop, laptop, or smartphone, that ad is generated on a cloud server running an Intel x86 or ARM CPU. Intel is the only major CPU chip company remaining in the US. We can't count AMD, as its CPUs are functionally compatible with Intel's (although with enhanced capabilities in some cases [1]), and we can't count ARM, a UK company. For 5G and AI technologies, which require extreme computational performance, we can include US companies Nvidia and Xilinx, who make "computational CPUs" [2] that run in servers, bringing the total to three. But that's it ... mobile device SoCs [3] made by Apple, Broadcom, and Qualcomm don't add to US strength in semiconductors; they are "appliance CPUs" not even sold to external customers.

Over the last 30 years there have been dozens of CPU chip companies in the US, with famous names like Texas Instruments, Cyrix, Mostek, Zilog, Cavium, Motorola, and AT&T Microelectronics, and many smaller outfits and startups, such as Maxim, Wintegra, Lattice, and Actel. It's also worth remembering there were multiple thriving semiconductor areas in addition to Silicon Valley, including Dallas and Boston. But not anymore.

How did we end up in this sad situation? Over the last 20 years, the US tech industry fell hook, line, and sinker for "software defined solutions", assuming that Intel and ARM CPU chips were all that would ever be needed. That thinking is fine for big tech companies running huge server farms, who care mostly about collecting your search queries, hosting your social media, and collecting monthly fees for IT services, but it's woefully inadequate in the face of the challenges posed by 5G and AI. Essentially, we have replaced hard work, relentless R&D, and constant innovation in CPU design with a marketing slogan. Just as leadership in finance does not make a country strong and independent, neither does leadership in social media technology and server farms. The pandemic has shown clearly the importance of supply chains and the intrinsic value of making things. Are we to repeat that mistake with 5G and AI?

CPU innovation has always been a never-ending conflict between high performance and low power consumption. These two design objectives are like cobra and mongoose: the moment they meet, they fight. They put the "hard" into hardware. Another way to think about this is to compare a single cloud server, filled with Intel x86 CPUs and Nvidia GPUs and consuming 1000+ Watts, with the human brain, a fraction of the size and consuming 40 Watts. How many such servers does it take to match the performance of one brain? 10,000? More? Even if we assume a rough number, 80 years after the first computers we still have no idea how to connect 10,000 servers, how to power them with 40 Watts, or what software to run to make them equal one brain. The limitations of silicon have brought Moore's law almost to a standstill, and we are still not even close. Do we need to be obsessed with sub-10 nm resolution, 3D ICs, exotic new IC materials, and other incremental improvements? We can still use silicon, but we need to change the fundamental design approach, and devise new metrics, a new law, for CPUs that incorporate AI and deep learning [4]. According to brain biology, the future of CPU innovation lies in massive parallelism, slower connections to conserve power, and memories 1000s of times larger but with tolerance for high error rates -- all things perfected already by evolution. If only we knew how.
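To put that gap in rough numbers, here is a minimal back-of-envelope sketch. The server count, per-server wattage, and brain wattage are the illustrative figures from the paragraph above, not measurements:

```python
# Back-of-envelope comparison: a hypothetical 10,000-server farm vs. one human brain.
# All figures are the rough, illustrative numbers from the discussion above.

SERVERS_NEEDED = 10_000      # guess at servers required to match one brain
WATTS_PER_SERVER = 1_000     # a loaded x86 + GPU cloud server
BRAIN_WATTS = 40             # approximate power budget of the human brain

farm_watts = SERVERS_NEEDED * WATTS_PER_SERVER
efficiency_gap = farm_watts / BRAIN_WATTS

print(f"Server farm draw: {farm_watts / 1e6:.0f} MW")                    # 10 MW
print(f"Brain draw: {BRAIN_WATTS} W")
print(f"Power gap for comparable performance: {efficiency_gap:,.0f}x")   # 250,000x
```

Even under these generous assumptions, the power gap is on the order of five or six orders of magnitude, which is the point: incremental process improvements won't close it.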

As it stands, who will lead this difficult CPU innovation? Obviously not Intel, who seem to be banking their future on obtaining waivers to sell to Huawei. What about ARM? ARM has done a good job of reducing power consumption, but massively parallel, high performance architecture is not their forte. What about Texas Instruments, who at one time was the world leader in combining high performance and low power consumption into small chip packages? Unfortunately, they shot themselves in both feet a few years back by exiting the CPU market. TI certainly did not foresee the rise of AI, and neither Intel nor TI foresaw the rise of deglobalization.

The US government's real problem is how to revitalize the computer hardware industry, starting with CPU chips. DoC and DoJ officials would be well advised to think about how to (i) reorganize Intel and AMD to refocus on CPU innovation, (ii) fund neural net chip R&D at universities and startups, and (iii) incentivize Texas Instruments to re-enter the CPU market (or broker a sale of their CPU technology to an outfit like Amazon or Tesla), among other strategies aimed at the building blocks of 5G and AI. A government-backed "Hardhattan Project" for 5G and AI -- organized as a pairing of Cisco and Ericsson at the top, server providers Dell and HP in the middle, and key chip players as the foundation -- would both get the ball rolling now and provide a framework for expansion over time.

[1] A good read on this: Why AMD Is Intel's Only Competitor
[2] Nvidia chips are known as GPUs and Xilinx chips as FPGAs
[3] SoC = System on a Chip
[4] Neuromorphic computing is a step in this direction

Comment by umeshdeshmukh, June 30, 2020

Hi Jeff, good article. I want to point out some projects which are interesting:

1. The OpenROAD project is trying to fully automate the physical design flow

https://theopenroadproject.org/

2. Electronics Resurgence Initiative -- a DARPA project that includes government, industry, and universities

https://www.darpa.mil/work-with-us/electronics-resurgence-initiative

ERI summit video playlist:

https://www.youtube.com/playlist?list=PL6wMum5UsYvbgCsYe_QDMtI6HZvDeZ6MD

This includes AI, novel computer architecture approaches, carbon nanotube based 3D ICs, etc. Really interesting.

3. RISC-V 

The RISC-V project started at UC Berkeley. It offers a free instruction set architecture, so you can freely innovate around it, and there are already companies shipping products containing RISC-V CPUs, e.g. Nvidia, Western Digital, Microsemi, etc.

https://riscv.org/

Many of the above projects started in the US or at its universities, and companies are a major part of them, so I don't think the US is behind in hardware research. What is your opinion? Do you think such initiatives should have been taken earlier?

- Umesh

Comment by jbrower, June 30, 2020

Umesh-

Great links, Umesh. I agree that university and startup R&D is in decent shape. Unfortunately, their results are not filtering up to US semiconductor outfits. Because Intel effectively has no US competition, they are intent on reinforcing their monopoly rather than spending urgently on radical CPU design R&D. They make the occasional acquisition (Movidius, Nervana, Altera, Habana), and then you never hear of them again. One exception is Altera, which has morphed into FPGA-based inference accelerator PCIe cards. But these are still far from the GPU's level of widespread acceptance, and I don't think Intel has the long term drive/hunger to compete with Nvidia, as long as everyone puts GPUs into x86 servers anyway.

Besides Intel, who else would spend massively on AI-centric CPUs?

-Jeff

PS. I do give Intel credit for their neuromorphic computing R&D effort.

Comment by itshina, August 11, 2020

Great article.

Thank you for sharing very important information and highlighting important factors about hardware systems.

Really appreciate your efforts.
