The IBM Roadrunner supercomputer, with its petaflop capacity, can be scaled down in size and power consumption to a desktop form factor by expanding the standard microchip to the size of a full 12-inch silicon wafer. Network all the processing cores and memory on the wafer together, with InfiniBand, optics, or other technologies, and you have a system of systems on a single wafer-sized silicon chip. If this does not provide enough processing power, any number of these wafers, each with its networked processor cores, can be stacked vertically and networked together. Power supply and heat dissipation will have to be dealt with. The design can be optimised using electronic design automation software, soft computing, and computational intelligence technologies. Thus, you can produce petaflop processors in a desktop form factor. These could be called MacroProcessors on MacroChips.

Ian Martin Ajzenszmidt
Shrinking the IBM Roadrunner Supercomputer to desktop form factor - MacroProcessors on MacroChips
Started by ●June 12, 2008
Reply by ●June 12, 2008
Reply by ●June 12, 2008
PFC wrote:
>> Power supply and heat dissipation will have to be dealt with.
>
> Ahem.

That's easy. I wrote down how, but I can't find it on my messy desk.

Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by ●June 12, 2008
On Jun 13, 10:30 am, PFC <li...@peufeu.com> wrote:
>> Power supply and heat dissipation will have to be dealt with.
>
> Ahem.

On Jun 13, 6:37 am, Neal <nealcr...@gmail.com> wrote:
> On Jun 12, 1:26 pm, iajzens...@yahoo.com.au wrote:
>> [original proposal snipped]
>
> One question... what are your thoughts on yield for such a proposal?
>
> Neal

The technique described in the proposal is also known as Wafer Scale Integration and has been around for at least 20 years. A Wikipedia article on wafer-scale integration can be found at http://en.wikipedia.org/wiki/Wafer-scale_integration. It states: "The vast majority of the cost of fabrication (typically 30%-50%) is related to testing and packaging the individual chips. Further cost is associated with connecting the chips into an integrated system (usually via a printed circuit board). Wafer-scale integration seeks to reduce this cost, as well as improve performance, by building larger chips in a single package - in principle, chips as large as a full wafer. Of course this is not easy, since given the flaws on the wafers a single large design printed onto a wafer would almost always not work. It has been an ongoing goal to develop methods to handle faulty areas of the wafers through logic, as opposed to sawing them out of the wafer. Generally, this approach uses a grid pattern of sub-circuits and 'rewires' around the damaged areas using appropriate logic. If the resulting wafer has enough working sub-circuits, it can be used despite faults."

This Wikipedia article also states: "Wafer-scale integration, WSI for short, is a yet-unused system of building very-large integrated circuit networks that use an entire silicon wafer to produce a single 'super-chip'. Through a combination of large size and reduced packaging, WSI could lead to dramatically reduced costs for some systems, notably massively parallel supercomputers."

The following abstract from http://www.springerlink.com/content/xg4r721l1hxl4r78/ states:

Alessandro Zorat
(1) Department of Computer Science, State University of New York at Stony Brook, 11794 Stony Brook, New York, USA
(2) Present address: Istituto di Ricerca Scientifica e Tecnologica, Loc. Pantè di Povo, 38050 Trento, Italy

Received: 25 July 1986

Abstract: "With the advent of wafer-scale integration (WSI), the placement of several processors on a single VLSI wafer is becoming a realistic possibility. To avoid the problems of a very low yield inherent in any silicon component of (very) large area, redundant components will be used. In this article we examine three different solutions for reconnecting the nonfaulty processors so that the resulting network is a square grid. We then present results of simulations for various percentages of faulty processors, which show that a small amount of redundancy in the interprocessor paths and a simple backtrack-based algorithm can produce a resulting grid that, while not necessarily optimal, includes most of the nonfaulty processors. This research was supported by the National Science Foundation, under grants ECS-80-25376 and ECS-83-05195."

Ian Martin Ajzenszmidt
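Neal's yield question can be made concrete with a back-of-envelope Poisson yield model, which is one common first approximation for defect-limited yield. The numbers below (0.2 defects per cm² and roughly 700 cm² of usable area on a 12-inch wafer) are illustrative assumptions, not figures from this thread, but they show why a monolithic wafer-sized die is hopeless while a tiled, fault-tolerant design is not:

```python
import math

def die_yield(defect_density_per_cm2: float, area_cm2: float) -> float:
    """Poisson yield model: probability that a die of the given area is defect-free."""
    return math.exp(-defect_density_per_cm2 * area_cm2)

# Illustrative assumptions (not data from the thread):
d0 = 0.2            # defects per cm^2
wafer_area = 700.0  # usable area of a 12" (300 mm) wafer, cm^2

# Treating the whole wafer as one monolithic die:
monolithic = die_yield(d0, wafer_area)

# Tiling the wafer into 1 cm^2 sub-circuits and rewiring around the bad ones,
# as in the grid schemes described above:
tile_yield = die_yield(d0, 1.0)
expected_good_tiles = tile_yield * wafer_area

print(f"monolithic wafer yield: {monolithic:.1e}")   # effectively zero
print(f"per-tile yield:         {tile_yield:.1%}")   # roughly 82%
print(f"expected good tiles:    {expected_good_tiles:.0f} of {wafer_area:.0f}")
```

The point of the tiled approach is visible in the numbers: the same defect density that makes a monolithic wafer-scale die essentially unmanufacturable still leaves the large majority of small sub-circuits intact, which is exactly why the redundancy-and-rewiring schemes in the cited abstract are needed.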
Reply by ●June 12, 2008
iajzenszmi@yahoo.com.au wrote:
> [WSI background, Wikipedia excerpts, and Zorat abstract snipped]
>
> Ian Martin Ajzenszmidt

My questions are: Why now? and Why here? There is very little interest in this group in the design of computer chips, although we're generally happy to use whatever available part suits the task.

Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by ●June 13, 2008
On Thu, 12 Jun 2008 21:10:43 -0400, Jerry Avins <jya@ieee.org> wrote:
> PFC wrote:
>>> Power supply and heat dissipation will have to be dealt with.
>>
>> Ahem.
>
> That's easy. I wrote down how, but I can't find it on my messy desk.

What did you write it down on? Was it too big to fit into the margin of a book?

> Jerry
Reply by ●June 13, 2008
Ben Bradley wrote:
> On Thu, 12 Jun 2008 21:10:43 -0400, Jerry Avins <jya@ieee.org> wrote:
>> PFC wrote:
>>>> Power supply and heat dissipation will have to be dealt with.
>>> Ahem.
>> That's easy. I wrote down how, but I can't find it on my messy desk.
>
> What did you write it down on? Was it too big to fit into the margin of a book?

It might have fit, but I don't write in books or use an underliner (except to note errata). So I don't really know.

Jerry

P.S. I'm glad to see that you got my drift.
--
Engineering is the art of making what you want from things you can get.
Reply by ●June 13, 2008
iajzenszmi@yahoo.com.au wrote:
> [original proposal snipped]
>
> Ian Martin Ajzenszmidt

Long ago, WSI initiatives worked with wafers where only a few percent of the devices were functional, and needed to offer schemes to map and use the good stuff. Now yields are pretty high.

If most of a 12" wafer works, and each Pentium's worth of slice area takes 100 W, the cooling will be real fun. Not just that, but the slice will be running at a little over a volt, so all those watts equate to a comparable number of amps. On a modern PCB, very high current switching supplies sit alongside the chips, pumping that current in. You have both quantity and distance problems with a 12" slice. Also, that heavy high-current wiring is going to have real problems keeping out of the way of the thermal management.

The seemingly throwaway line "Power supply and heat dissipation will have to be dealt with" is the very heart of the problem. I know people who had a huge struggle in the 70s and 80s to get anyone to take their WSI proposals seriously, and they have really interesting possibilities to offer. I really can't see how this can fly now.

Regards,
Steve
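Steve's "watts equate to amps" point can be sketched with a quick calculation. The tile area, per-tile power, and core voltage below are assumed round numbers for illustration, not data from the thread:

```python
import math

# Illustrative assumptions (round numbers, not data from the thread):
wafer_diameter_mm = 300.0  # a 12" wafer
tile_area_cm2 = 1.5        # one processor tile ("a Pentium's worth" of area)
tile_power_w = 100.0       # assumed power per tile
core_voltage = 1.1         # "a little over a volt"

# Wafer area: radius in cm (diameter / 2 / 10), squared, times pi.
wafer_area_cm2 = math.pi * (wafer_diameter_mm / 20.0) ** 2  # ~707 cm^2

tiles = wafer_area_cm2 / tile_area_cm2
total_power_w = tiles * tile_power_w
total_current_a = total_power_w / core_voltage

print(f"tiles on the wafer: {tiles:.0f}")
print(f"total power:        {total_power_w / 1000:.1f} kW")
print(f"supply current:     {total_current_a / 1000:.1f} kA")
```

With these assumptions the wafer dissipates tens of kilowatts and draws tens of kiloamps at core voltage, which is why the power delivery and cooling, not the logic, dominate the feasibility question.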
Reply by ●June 13, 2008
> The seemingly throwaway line "Power supply and heat dissipation will
> have to be dealt with" is the very heart of the problem. I know people
> who had a huge struggle in the 70s and 80s to get anyone to take their
> WSI proposals seriously, and they have really interesting possibilities
> to offer. I really can't see how this can fly now.
>
> Regards,
> Steve

On a much more realistic scale, I would like to see some stacked chips, like a DSP with a RAM chip stacked on top, or an FPGA with some RAM stacked on top too. This would simplify design a lot, since you'd get a verified design and all the signal integrity problems aren't your problem anymore. Besides, it would allow the use of smaller, cheaper packages with fewer pins, since RAM usually consumes a lot of IO pins. Of course, then, it has less flexibility...

It's already done in mobile applications, but that's the kind of chip you can't get unless you want 100k of them.

WSI has another problem, which is that all chips must use the same process, and the process used for RAM isn't likely to be the same as the process used for the CPUs or FPGAs, for instance.
Reply by ●June 13, 2008
PFC wrote:
> On a much more realistic scale I would like to see some stacked chips,
> like a DSP with a RAM chip stacked on top, or an FPGA with some RAM
> stacked on top too. [rest snipped]

Things pull in different directions. Mobile device people ask for multi-chip packages. Then, when they find they are a bit thicker, they don't like them. :-)

It seems the signal integrity problems can be quite problematic. Multi-chip packages are most interesting for high-performance things, often where parts of the system are in different processes. It seems a number of proposals for parts grow from market demand and then die from the problems of achieving adequate signal integrity for the higher devices in the stack.

> WSI has another problem, which is that all chips must use the same
> process, and the process used for RAM isn't likely to be the same as
> the process used for the CPUs or FPGAs, for instance.

I think that is only a big issue if you want to blend DRAM on the slice. Typical modern high-performance processors already have more high-performance SRAM than logic.

I recall a Signetics seminar in the early days of the 68000. Someone in the audience asked how the company aimed to cope with the rising per-package power of successive device generations. A guy from Signetics said (I thought quite pragmatically) that if we can get 100 W out of a power transistor, we'll figure out a way to get as much out of a CPU chip. Almost everyone in the audience laughed at this as absurd. I wonder if he moved to Intel. :-) Engineers' expectations of what is practical change over time, I guess.

Steve






