DSPRelated.com
Forums

Anyone using Linux?

Started by Eric March 14, 2017
On 3/14/2017 7:08 PM, Tim Wescott wrote:
> On Tue, 14 Mar 2017 23:01:30 +0000, eric.jacobsen wrote:
>
>> On Tue, 14 Mar 2017 22:42:23 +0000 (UTC), spope33@speedymail.org
>> (Steve Pope) wrote:
>>
>>> Tim Wescott <seemywebsite@myfooter.really> wrote:
>>>
>>>> On Tue, 14 Mar 2017 22:28:42 +0000, Steve Pope wrote:
>>>>
>>>>> Something about running an ARM with "no OS" is freaky.
>>>>
>>>> The LPC811M001 from NXP has 8K of flash and 2K of RAM (and 16 pins).
>>>> You could shoe-horn an RTOS in there, but it would take up
>>>> significant space.
>>>>
>>>> Less than $0.58 each if you buy 1000 of 'em from Digikey. $1.26 in
>>>> onesies.
>>>
>>> Thanks -- that sounds like a pretty neat product. I'll have to check
>>> out the datasheet.
>>>
>>> I've been on two significantly sized "no OS" projects; one used an
>>> 8051 and the other a 68020. One of the two had sufficient
>>> programming discipline that "no OS" worked out well, but I still in
>>> retrospect wished that I had written a synchronization kernel. It
>>> was mostly the interplay between timer interrupts and other types of
>>> interrupts that led to trickiness.
>>>
>>> Steve
>>
>> For "embedded"-ish stuff, up until a few years ago the vast majority
>> of my experience was on "bare metal" (i.e., no OS) projects. I've
>> always found them much simpler, but only for cases where they made
>> sense, i.e., the processor really only did one thing and had to do it
>> fast. Even on larger projects where we had to populate a micro or two
>> for monitor-and-control purposes, they were always bare metal (no
>> OS). That's not difficult with smaller processors, and it makes many
>> aspects of debug easier.
>>
>> So I always resisted working with an embedded or real-time OS,
>> because they always seemed to just be in the way. The extreme other
>> end of that is working on a platform with a full OS (Linux) with
>> networking support, etc., which makes connecting for development and
>> debug easy and opens up a whole crazy world of additional support.
>>
>> I think anything in between is a little sketchy, but the amount of
>> CPU and OS power on the cheapie platforms does make them attractive
>> for cases where there's enough oomph to do the job.
>
> I find that when you have things on a processor that are working at
> vastly different time scales -- e.g., a human interface that needs to
> respond within 50ms, and a control loop that needs to respond in
> 100us -- then an RTOS is a huge help.
An RTOS actually does little for you in a case like this. You can still screw up royally using an RTOS. What is important is understanding how multitasking works and needs to work. An RTOS doesn't remove you from that requirement. The rest is not so complex actually.
> They're also good when you have more than one person touching the
> software, because you can let the lower-ranked software engineers
> loose on the less time-critical parts of the code base with a lower
> likelihood that they'll do something that'll prevent the fast stuff
> from doing its job on time.
I think you are showing a lack of understanding of how multitasking has to work.
> (Lower, but not zero -- if they can turn off interrupts, they can
> screw things up royally, and I've seen it done and the results blamed
> on me!)
They can also do their job properly and create a priority inversion if
the project is not architected properly.

--
Rick C
rickman  <gnuarm@gmail.com> wrote:

>On 3/14/2017 7:08 PM, Tim Wescott wrote:
>
>> I find that when you have things on a processor that are working at
>> vastly different time scales -- e.g., a human interface that needs to
>> respond within 50ms, and a control loop that needs to respond in
>> 100us -- then an RTOS is a huge help.
>
>An RTOS actually does little for you in a case like this. You can still
>screw up royally using an RTOS. What is important is understanding how
>multitasking works and needs to work. An RTOS doesn't remove you from
>that requirement. The rest is not so complex actually.
>
>> They're also good when you have more than one person touching the
>> software, because you can let the lower-ranked software engineers
>> loose on the less time-critical parts of the code base with a lower
>> likelihood that they'll do something that'll prevent the fast stuff
>> from doing its job on time.
>
>I think you are showing a lack of understanding of how multitasking has
>to work.
One does not need to be doing multi-tasking for a synchronization kernel
to be useful.

Steve
On Wed, 15 Mar 2017 14:33:28 -0400, rickman wrote:

> On 3/14/2017 7:08 PM, Tim Wescott wrote:
<< snip >>
>> They're also good when you have more than one person touching the
>> software, because you can let the lower-ranked software engineers
>> loose on the less time-critical parts of the code base with a lower
>> likelihood that they'll do something that'll prevent the fast stuff
>> from doing its job on time.
>
> I think you are showing a lack of understanding of how multitasking
> has to work.
I've been there and done that. I made the above assertion based on
experience with teams of three to six software engineers of widely
varying skill sets, working on over a dozen boards with different
functions and a common software base.

If I don't understand the intersection of RTOS features and project
management in this regard, then the universe shares my misunderstanding,
and follows it.

<< snip >>

--
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work! See my website if you're interested
http://www.wescottdesign.com
On Thursday, March 16, 2017 at 5:51:33 AM UTC+13, rickman wrote:
> On 3/14/2017 7:09 PM, Tim Wescott wrote:
> > On Tue, 14 Mar 2017 22:55:01 +0000, eric.jacobsen wrote:
> >
> >> On Tue, 14 Mar 2017 15:59:20 -0500, Tim Wescott
> >> <seemywebsite@myfooter.really> wrote:
> >>
> >>> On Tue, 14 Mar 2017 17:33:29 +0000, eric.jacobsen wrote:
> >>>
> >>>> On Tue, 14 Mar 2017 11:08:42 -0400, Eric <Eric@spamspamorspam.com>
> >>>> wrote:
> >>>>
> >>>>> Just curious about how much Linux is being used for embedded DSP
> >>>>> apps. If you're using it, what are your normal development tools?
> >>>>
> >>>> Lately I've done a number of projects similar to what Tim
> >>>> described, using ARM Cortex cores running some flavor of Linux or
> >>>> other. Generally they're good enough that an executable ports
> >>>> easily across the different common flavors of Linux, so it doesn't
> >>>> matter too much which one you're developing under.
> >>>>
> >>>> Unlike what Tim described, though, I do the development on
> >>>> Windows, using Eclipse for the IDE and cross-compiling. Eclipse
> >>>> has some very good remote-development tools, so doing the
> >>>> development and compiling on Windows and then running the
> >>>> executable and debug on the ARM platform is actually pretty easy.
> >>>>
> >>>> I had a client that also wanted the same process that was running
> >>>> on ARM cores to also run on x86/IA32/IA64. Although it was a
> >>>> simple port, it did make me set up a native Linux tool flow on the
> >>>> other platforms since it isn't as easy to cross-compile from
> >>>> Windows onto those platforms with the libraries I was using.
> >>>>
> >>>> And, yes, the apps I'm talking about are DSP apps, they're just
> >>>> running on ARM cores or whatever. These days a lot of processors
> >>>> are good enough that they're fine for the task.
> >>>
> >>> I'm developing on Linux by preference, and the processors I'm
> >>> developing for are deeply embedded using ARM Cortex M<whatever>
> >>> cores running little RTOS kernels or no OS at all. It sounds like
> >>> you may be developing for ARM Cortex A<whatever> cores that are
> >>> actually _executing_ Linux.
> >>
> >> Yes, A8s or better or whatever. I think having a clock speed >1GHz
> >> does help the situation somewhat. ;)
> >
> > Depending on what your requirements are. The best quote I've heard
> > to date on real-time development is "Real Fast does not mean Real
> > Time".
>
> If you want real time, ditch the OS. Or better yet, ditch the CPU.
> Real men use FPGAs.
>
> --
> Rick C
Yes, but they are only better if you are doing parallel operations. That's when they win out.
On 3/21/2017 12:30 PM, gyansorova@gmail.com wrote:
> On Thursday, March 16, 2017 at 5:51:33 AM UTC+13, rickman wrote:
>> On 3/14/2017 7:09 PM, Tim Wescott wrote:
>
> << snip >>
>
>> If you want real time, ditch the OS. Or better yet, ditch the CPU.
>> Real men use FPGAs.
>>
>> --
>> Rick C
>
> Yes, but they are only better if you are doing parallel operations.
> That's when they win out.
Why are they "only" better doing parallel operations? Every CPU chip
has parallel I/O units we call "peripherals". There is no reason FPGAs
are limited from doing sequential operations. I designed a test fixture
that had to drive a 33 MHz SPI-like control interface, another high
speed serial data interface, and step through test sequences, all in an
FPGA. Most CPUs couldn't even do the job because the SPI-like interface
wasn't standard enough. Then they would have had a hard time pumping
data in and out of the UUT.

FPGAs start to lose steam when the sequential task gets very large. If
you want to run Ethernet or USB you are likely going to want a CPU of
some sort, but even that can be rolled into an FPGA with no trouble. I
believe it is the ARM Cortex-M1 that is designed for FPGAs, along with a
host of other soft cores.

A lot of myth has grown up around FPGAs because of the way they have
been used and the lack of understanding by most people. It is CPUs that
are hard to use, because performing parallel tasks on a sequential
processor is fraught with peril. Out of necessity users have learned to
cope, using fairly complex rules for making it all work. With FPGAs it
can be much, much simpler.

Then there is the issue of marketing. FPGA makers have targeted a
market which can utilize their largest and most profitable parts, so
they have yet to realize they can also market to the lower end, where
FPGAs need to be more like MCUs. Only one company makes an FPGA with
analog. That needs to change.

--
Rick C
In article <oas04t$1vf$1@dont-email.me>, rickman  <gnuarm@gmail.com> wrote:

>> On Thursday, March 16, 2017 at 5:51:33 AM UTC+13, rickman wrote:
>>> If you want real time, ditch the OS. Or better yet, ditch the CPU.
>>> Real men use FPGAs.

>Every CPU chip has parallel I/O units we call "peripherals". There is
>no reason FPGAs are limited from doing sequential operations. I
>designed a test fixture that had to drive a 33 MHz SPI-like control
>interface, another high speed serial data interface, and step through
>test sequences, all in an FPGA. Most CPUs couldn't even do the job
>because the SPI-like interface wasn't standard enough. Then they would
>have had a hard time pumping data in and out of the UUT.

>FPGAs start to lose steam when the sequential task gets very large. If
>you want to run Ethernet or USB you are likely going to want a CPU of
>some sort, but even that can be rolled into an FPGA with no trouble. I
>believe it is the ARM Cortex-M1 that is designed for FPGAs, along with
>a host of other soft cores.
Why sure, there are freeware CPU cores (8051 on up) for FPGAs.

But, in between winding my own transformers and mining my own tantalum
in east Congo so as to fabricate my own capacitors, I might not have
time left to roll my own CPU-containing FPGAs. I might just, like, buy
a CPU.

S.
On 3/22/2017 1:10 AM, Steve Pope wrote:
> In article <oas04t$1vf$1@dont-email.me>, rickman <gnuarm@gmail.com> wrote:
>
> << snip >>
>
> Why sure, there are freeware CPU cores (8051 on up) for FPGAs.
>
> But, in between winding my own transformers and mining my own
> tantalum in east Congo so as to fabricate my own capacitors, I might
> not have time left to roll my own CPU-containing FPGAs. I might just,
> like, buy a CPU.
As I mentioned, there are a few apps that are easier on a CPU chip.
It's just not many who can distinguish which those are.

--
Rick C
On Wed, 22 Mar 2017 03:56:02 -0400, rickman <gnuarm@gmail.com> wrote:

>On 3/22/2017 1:10 AM, Steve Pope wrote:
>
> << snip >>
>
>As I mentioned, there are a few apps that are easier on a CPU chip.
>It's just not many who can distinguish which those are.
There are a number of people here (including Steve) who've done
hardware (HDL) and software work, and are familiar with hardware
implementations, software implementations, and the tradeoffs involved.
I've mixed both, including FPGA and silicon design for hardware, and
everything from bare-metal CPU software on DSPs to Linux apps for
software. I probably have a lot more hardware design experience than
software, personally.

I don't think it's as hard to sort out as you're making it out to be,
and the tradeoffs have changed over the years. What I'm seeing is that
the economics (e.g., part costs), power consumption, and development
costs have been moving in favor of CPUs as a trend for quite a while.
As CPUs get cheaper and more powerful and consume less power, they
encroach on more and more of the territory that used to favor FPGAs or
custom silicon.

FPGAs and custom silicon still have their place, it's just shrinking.

And the FPGAs that have multiple ARM cores on them have looked
extremely attractive to me since they came out, but I've not yet
personally had a project where even that made sense. They're certainly
out there, though.

---
This email has been checked for viruses by Avast antivirus software.
https://www.avast.com/antivirus
On 3/22/2017 2:10 PM, eric.jacobsen@ieee.org wrote:
> On Wed, 22 Mar 2017 03:56:02 -0400, rickman <gnuarm@gmail.com> wrote:
>
> << snip >>
>
> There are a number of people here (including Steve) who've done
> hardware (HDL) and software work, and are familiar with hardware
> implementations and software implementations and the tradeoffs
> involved. I've mixed both, including FPGA and silicon design for
> hardware, and bare metal CPU software on DSPs to Linux apps for
> software. I've probably a lot more hardware design experience than
> software, personally.
>
> I don't think it's as hard to sort out as you're making it out to be,
I'm not saying it's hard. I find that *many* have opinions of FPGAs
that are far, far out of date. It *is* hard to make informed decisions
when your information is not accurate.

There is also a huge bias regarding the ease of developing sequential
software vs. HDL. People so often make it sound like debugging an FPGA
is grueling work. I find the opposite. I can test 99.999% of an FPGA
design before I ever hit the bench by using simulation. I think it is
much harder to do that with CPUs. So instead there is a huge focus on
highly complex debuggers that attempt to unravel the sequential
execution of many functions. No small feat. No thanks, if there is any
way I can avoid it.
> and the tradeoffs have changed over the years. What I'm seeing is
> that the economics (e.g., part costs), power consumption, and
> development costs have been moving in favor of CPUs as a trend for
> quite a while. As CPUs get cheaper and more powerful and consume
> less power, they encroach on more and more of the territory that used
> to favor FPGAs or custom silicon.
That is a perfect example. Why is power consumption moving in favor of CPUs? I expect you have not looked hard at the low power FPGAs available. FPGAs are not a stationary target. They are advancing along with process technology just like CPUs.
> FPGAs and custom silicon still have their place, it's just shrinking.
>
> And the FPGAs that have multiple ARM cores on them have looked
> extremely attractive to me since they came out, but I've not yet
> personally had a project where even that made sense. They're
> certainly out there, though.
If you don't like or need the parallel functionality of FPGAs, why
would you like multiple processors? Parallel CPUs are *much* harder to
use effectively than FPGAs. In CPUs it is always about keeping the CPU
busy. With multiple processors that is a much more difficult task, and
it still has all the issues of making a sequential CPU process parallel
functionality. In reality there is little that is better about CPUs
over FPGAs.

I am working on a small CPU right now, but that is only because I
mistakenly ordered too many launchpads and might as well use one up --
oh, and it has an on-board LCD readout. Otherwise I'd be using an FPGA
board that plugs directly into a USB port and displaying the results on
the computer. In fact, I need to order a couple more of those.

--
Rick C
On Wed, 22 Mar 2017 15:47:02 -0400, rickman <gnuarm@gmail.com> wrote:

>On 3/22/2017 2:10 PM, eric.jacobsen@ieee.org wrote:
>
> << snip >>
>
>> I don't think it's as hard to sort out as you're making it out to be,
>
>I'm not saying it's hard. I find that *many* have opinions of FPGAs
>that are far, far out of date. It *is* hard to make informed decisions
>when your information is not accurate.
Absolutely, and likewise for the CPU side.
>There is also a huge bias regarding the ease of developing sequential
>software vs. HDL. People so often make it sound like debugging an FPGA
>is grueling work. I find the opposite. I can test 99.999% of an FPGA
>design before I ever hit the bench by using simulation. I think it is
>much harder to do that with CPUs.
It is not harder, and the tools tend to be more plentiful and mature because it's a larger, more varied market.
>So instead there is a huge focus on highly complex debuggers to
>attempt to unravel the sequential execution of many functions. No
>small feat. No thanks if there is any way I can avoid it.
That's that bias you were talking about earlier.
>> and the tradeoffs have changed over the years. What I'm seeing is
>> that the economics (e.g., part costs), power consumption, and
>> development costs have been moving in favor of CPUs as a trend for
>> quite a while. As CPUs get cheaper and more powerful and consume
>> less power, they encroach on more and more of the territory that
>> used to favor FPGAs or custom silicon.
>
>That is a perfect example. Why is power consumption moving in favor of
>CPUs? I expect you have not looked hard at the low power FPGAs
>available. FPGAs are not a stationary target. They are advancing along
>with process technology just like CPUs.
Generally in the past, if there was a lot of DSP to be done and a tight
power budget, that was an argument heavily in favor of FPGAs or custom
silicon, since power needed to be spent only on circuits dedicated to
the specific task. Basically, the number of switching gates per unit
time was smaller with an FPGA than with a CPU.

The trend has been that the small, low-power micros have become more
and more efficient, and while FPGAs have also been improving, the rate
has not been as fast. So the borderline between which might be more
favorable for a given task has generally been moving in favor of small,
powerful CPUs.
>> FPGAs and custom silicon still have their place, it's just shrinking.
>>
>> And the FPGAs that have multiple ARM cores on them have looked
>> extremely attractive to me since they came out, but I've not yet
>> personally had a project where even that made sense. They're
>> certainly out there, though.
>
>If you don't like or need the parallel functionality of FPGAs why would
>you like multiple processors?
Who said they didn't like parallelism?

Well-written requirement specs never say "must be implemented in
parallel", because requirements shouldn't say anything about
implementation. Requirements often say: must cost <$, must consume
<Watts, must do this much processing in this much time. Whether it is
parallel or serial or FPGA or CPU shouldn't matter to the system
engineer, as long as the requirements are met and the important
parameters minimized or maximized.

Parallel or serial or FPGA or CPU is an implementation decision, and
sometimes requirements and applications favor one over the other. My
observation is that CPUs are contenders more today than they've ever
been. I'm currently spec'ing a job that will likely be multiple CPUs
because that appears to be the quickest path with the least risk. I
love FPGAs; they just don't still fit in all the places that they used
to.
> Parallel CPUs are *much* harder to use effectively than FPGAs.
No, they're not. They're very easy, especially these days with high-speed serial connections between them.
> In CPUs it is always about keeping the CPU busy.
Why? I don't care if a core sits on its thumbs for a while if it isn't burning excessive power and is otherwise a superior solution. I don't think I've ever seen a requirement document that said, "resources must be kept busy". Many CPUs have convenient sleep modes, like clock management in silicon or FPGAs.
> With multiple processors it is a much more difficult task and still
> has all the issues of making a sequential CPU process parallel
> functionality. In reality there is little that is better about CPUs
> over FPGAs.
There's that bias that you mentioned before again. They're different, and many aspects of CPUs are FAR better in a project than FPGAs. Some things about FPGAs are better. You apparently don't see both sides.
>I am working on a small CPU right now, but that is only because I
>mistakenly ordered too many launchpads and might as well use one up,
>oh, and it has an on board LCD readout. Otherwise I'd be using an FPGA
>board that plugs directly into a USB port and displaying the results on
>the computer. In fact, I need to order a couple more of those.