DSPRelated.com
Forums

Anyone using Linux?

Started by Eric March 14, 2017
On 3/22/2017 4:37 PM, eric.jacobsen@ieee.org wrote:
> On Wed, 22 Mar 2017 15:47:02 -0400, rickman <gnuarm@gmail.com> wrote: > >> On 3/22/2017 2:10 PM, eric.jacobsen@ieee.org wrote: >>> On Wed, 22 Mar 2017 03:56:02 -0400, rickman <gnuarm@gmail.com> wrote: >>> >>>> On 3/22/2017 1:10 AM, Steve Pope wrote: >>>>> In article <oas04t$1vf$1@dont-email.me>, rickman <gnuarm@gmail.com> wrote: >>>>> >>>>>>> On Thursday, March 16, 2017 at 5:51:33 AM UTC+13, rickman wrote: >>>>> >>>>>>>> If you want real time, ditch the OS. Or better yet, ditch the CPU. >>>>>>>> Real men use FPGAs. >>>>> >>>>>> Every CPU chip >>>>>> has parallel I/O units we call "peripherals". There is no reason FPGAs >>>>>> are limited from doing sequential operations. I designed a test fixture >>>>>> that had to drive a 33 MHz SPI like control interface, another high >>>>>> speed serial data interface and step through test sequences, all in an >>>>>> FPGA. Most CPUs couldn't even do the job because the SPI like interface >>>>>> wasn't standard enough. Then they would have had a hard time pumping >>>>>> data in and out of the UUT. >>>>> >>>>>> FPGAs start to loose steam when the sequential task gets very large. If >>>>>> you want to run Ethernet or USB you are likely going to want a CPU of >>>>>> some sort, but even that can be rolled in an FPGA with no trouble. I >>>>>> believe it is the ARM CM1 that is designed for FPGAs along with a host >>>>>> of other soft cores. >>>>> >>>>> Why sure there are freeware CPU cores (8051 on up) for FPGA's. >>>>> >>>>> But, in between winding my own transformers, and mining my own >>>>> tantulum in east Congo so as to fabricate my own capacitors, >>>>> I might not have time left to roll my own CPU-containing FPGA's. >>>>> I might just, like, buy a CPU. >>>> >>>> As I mentioned, there are a few apps that are easier on a CPU chip. >>>> It's just not many who can distinguish which those are. >>>> >>> >>> There are a number of people here (including Steve) who've done >>> hardware (HDL) and software work, and are familiar with hardware >>> implementations and software implementations and the tradeoffs >>> involved. I've mixed both, including FPGA and silicon design for >>> hardware, and bare metal CPU software on DSPs to Linux apps for >>> software. I've probably a lot more hardware design experience than >>> software, personally. >>> >>> I don't think it's as hard to sort out as you're making it out to be, >> >> I'm not saying it's hard. I find that *many* have opinions of FGPAs >> that are far, far out of date. It *is* hard to make informed decisions >> when your information is not accurate. > > Absolutely, and likewise for the CPU side. > >> There is also a huge bias regarding the ease of developing sequential >> software vs. HDL. People so often make it sound like debugging an FPGA >> is grueling work. I find the opposite. I can test 99.999% of an FPGA >> design before I ever hit the bench by using simulation. I think it is >> much harder to do that with CPUs. > > It is not harder, and the tools tend to be more plentiful and mature > because it's a larger, more varied market. > >> So instead there is a huge focus on >> highly complex debuggers to attempt to unravel the sequential execution >> of many functions. No small feat. No thanks if there is any way I can >> avoid it. > > That's that bias you were talking about earlier.
Not bias, fact. It is a lot easier to debug in simulation where I can touch everything going on than in hardware where I am isolated by the debugger.
>>> and the tradeoffs have changed over the years. What I'm seeing is >>> that the economics (e.g., part costs), power consumption, and >>> development costs have been moving in favor of CPUs as a trend for >>> quite a while. As CPUs get cheaper and more powerful and consume >>> less power, they encroach on more and more of the territory that used >>> to favor FPGAs or custom silicon. >> >> That is a perfect example. Why is power consumption moving in favor of >> CPUs? I expect you have not looked hard at the low power FPGAs >> available. FPGAs are not a stationary target. They are advancing along >> with process technology just like CPUs. > > Generally in the past if there was a lot of DSP to be done and a tight > power budget, it was an argument heavily in favor of FPGAs or custom > silicon since power only needed to spend on circuits dedicated to the > specific task. Basically, the number of switching gates per unit > time was smaller with an FPGA than a CPU. The trend has been that > the small, low-power micros have become more and more efficient, and > while FPGAs have also been improving, the rate has not been as fast. > So the borderline between which might be more favorable for a given > task has generally been moving in favor of small, powerful CPUs.
You aren't up to date on FPGA technology.  There are devices that will perform as well as or better than CPUs on a power basis.
>>> FPGAs and custom silicon still have their place, it's just shrinking. >>> >>> And the FPGAs that have multiple ARM cores on them have looked >>> extremely attractive to me since they came out, but I've not yet >>> personally had a project where even that made sense. They're >>> certainly out there, though. >> >> If you don't like or need the parallel functionality of FPGAs why would >> you like multiple processors? > > Who said they didn't like parallelism? Well-written requirement > specs never say, "must be implemented in parallel", because > requirements shouldn't say anything about implementation. > Requirements often say, must cost <$, must consume <Watts, must do > this much processing in this much time. Whether it is parallel or > serial or FPGA or CPU shouldn't matter to the system engineer as long > as the requirements are met and important parameters minimized or > maximized.
Earlier you said FPGAs were not needed unless the task required parallelism.  Oops, that was someone else.
> Parellel or serial or FPGA or CPU is an implementation decision and > sometimes requirements and applications favor one over the other. My > observation is that CPUs are contenders more today than they've ever > been. I'm currently spec'ing a job that will likely be multiple CPUs > because it appears to be the quickest path with the least risk. I > love FPGAs, they just don't still fit in all the places that they used > to.
What is risky about FPGAs? I assume you have work to do that requires the large code base that comes with an OS or complex comms like Ethernet or USB?
>> Parallel CPUs are *much* harder to use >> effectively than FPGAs. > > No, they're not. They're very easy, especially these days with > high-speed serial connections between them.
Coordinating the assignment of tasks across multiple processors is difficult to manage dynamically.
>> In CPUs it is always about keeping the CPU >> busy. > > Why? I don't care if a core sits on its thumbs for a while if it > isn't burning excessive power and is otherwise a superior solution. > I don't think I've ever seen a requirement document that said, > "resources must be kept busy". > > Many CPUs have convenient sleep modes, like clock management in > silicon or FPGAs. > >> With multiple processors it is a much more difficult task and >> still has all the issues of making a sequential CPUs process parallel >> functionality. In reality there is little that is better about CPUs >> over FPGAs. > > There's that bias that you mentioned before again.
Again, not bias, experience. I suppose if you use hardware with gobs of excess capacity it makes the problems easier to deal with, but performing parallel tasks on sequential hardware requires a special level of analysis that you just don't need to do when designing with FPGAs.
> They're different, and many aspects of CPUs are FAR better in a > project than FPGAs. Some things about FPGAs are better. You > apparently don't see both sides.
I never said CPUs don't have uses or advantages. I just think most people don't fully understand FPGAs or appreciate what can be done with them. Nothing you have said shows me any different. -- Rick C
On Wed, 22 Mar 2017 18:13:49 -0400, rickman <gnuarm@gmail.com> wrote:

>>> There is also a huge bias regarding the ease of developing sequential
>>> software vs. HDL.  People so often make it sound like debugging an FPGA
>>> is grueling work.  I find the opposite.  I can test 99.999% of an FPGA
>>> design before I ever hit the bench by using simulation.  I think it is
>>> much harder to do that with CPUs.
>>
>> It is not harder, and the tools tend to be more plentiful and mature
>> because it's a larger, more varied market.
>>
>>> So instead there is a huge focus on
>>> highly complex debuggers to attempt to unravel the sequential execution
>>> of many functions.  No small feat.  No thanks if there is any way I can
>>> avoid it.
>>
>> That's that bias you were talking about earlier.
>
>Not bias, fact.  It is a lot easier to debug in simulation where I can
>touch everything going on than in hardware where I am isolated by the
>debugger.
Not fact. Opinion.
>>>> and the tradeoffs have changed over the years. What I'm seeing is >>>> that the economics (e.g., part costs), power consumption, and >>>> development costs have been moving in favor of CPUs as a trend for >>>> quite a while. As CPUs get cheaper and more powerful and consume >>>> less power, they encroach on more and more of the territory that used >>>> to favor FPGAs or custom silicon. >>> >>> That is a perfect example. Why is power consumption moving in favor of >>> CPUs? I expect you have not looked hard at the low power FPGAs >>> available. FPGAs are not a stationary target. They are advancing along >>> with process technology just like CPUs. >> >> Generally in the past if there was a lot of DSP to be done and a tight >> power budget, it was an argument heavily in favor of FPGAs or custom >> silicon since power only needed to spend on circuits dedicated to the >> specific task. Basically, the number of switching gates per unit >> time was smaller with an FPGA than a CPU. The trend has been that >> the small, low-power micros have become more and more efficient, and >> while FPGAs have also been improving, the rate has not been as fast. >> So the borderline between which might be more favorable for a given >> task has generally been moving in favor of small, powerful CPUs. > >You aren't up to date on FPGA technology. There are devices that will >perform as well or better than CPUs on a power basis.
No, I'm up to date. It is a broad tradeoff space where clock rate and overall complexity and a lot of other things come into play. It's been a long-term trend, and vendors used to point it out regularly in both spaces.
>>>> FPGAs and custom silicon still have their place, it's just shrinking. >>>> >>>> And the FPGAs that have multiple ARM cores on them have looked >>>> extremely attractive to me since they came out, but I've not yet >>>> personally had a project where even that made sense. They're >>>> certainly out there, though. >>> >>> If you don't like or need the parallel functionality of FPGAs why would >>> you like multiple processors? >> >> Who said they didn't like parallelism? Well-written requirement >> specs never say, "must be implemented in parallel", because >> requirements shouldn't say anything about implementation. >> Requirements often say, must cost <$, must consume <Watts, must do >> this much processing in this much time. Whether it is parallel or >> serial or FPGA or CPU shouldn't matter to the system engineer as long >> as the requirements are met and important parameters minimized or >> maximized. > >Earlier you said FPGAs were not needed unless the task required >parallelism. Opps, that was someone else. > > >> Parellel or serial or FPGA or CPU is an implementation decision and >> sometimes requirements and applications favor one over the other. My >> observation is that CPUs are contenders more today than they've ever >> been. I'm currently spec'ing a job that will likely be multiple CPUs >> because it appears to be the quickest path with the least risk. I >> love FPGAs, they just don't still fit in all the places that they used >> to. > >What is risky about FPGAs? I assume you have work to do that requires >the large code base that comes with an OS or complex comms like Ethernet >or USB?
In this particular case there are many risk areas, not the least of which are available talent, tools, vendor support, library support, etc., etc., etc., and above all schedule risk.  In this particular case the majority of these point heavily in favor of a CPU solution.  For jobs where there is a lot of integration of myriad tasks that are easily handled by plopping in public-domain libraries, it gets harder to allocate schedule and talent to doing those in an FPGA, where they are a bit harder to find.  I hope you don't think the talent pool of FPGA people is larger than the talent pool of competent C coders.
>>> Parallel CPUs are *much* harder to use >>> effectively than FPGAs. >> >> No, they're not. They're very easy, especially these days with >> high-speed serial connections between them. > >Coordinating the assignment of tasks in multiple processors is a >difficult task to manage dynamically.
On the contrary, multi-threading is supported in most C compilers these days and is native in most of the free C tools and libraries. I do it all the time and don't find it difficult at all. Do you think it's trivial to schedule multiple blocks across different clock boundaries and trigger events in an FPGA? I don't really see much difference there.
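[For readers following the thread, a minimal sketch of the kind of thing being described above, assuming a POSIX system with pthreads; the worker function, block size, and test signal are invented for illustration and are not from any project mentioned here.]

    /* Minimal sketch: hand a block of samples to a worker thread and
     * wait for the result.  Assumes POSIX; build with something like
     * gcc demo.c -lpthread -lm */
    #include <pthread.h>
    #include <stdio.h>
    #include <math.h>

    #define BLOCK 1024

    struct job {
        float  samples[BLOCK];
        double energy;                  /* result written by the worker */
    };

    static void *measure_energy(void *arg)   /* hypothetical DSP task */
    {
        struct job *j = arg;
        double acc = 0.0;
        for (int i = 0; i < BLOCK; i++)
            acc += (double)j->samples[i] * j->samples[i];
        j->energy = acc;
        return NULL;
    }

    int main(void)
    {
        struct job j;
        for (int i = 0; i < BLOCK; i++)
            j.samples[i] = sinf(2.0f * 3.14159265f * i / 64.0f);

        pthread_t worker;
        pthread_create(&worker, NULL, measure_energy, &j);  /* runs concurrently */
        /* ... main thread could service I/O here ... */
        pthread_join(worker, NULL);                         /* wait for the result */

        printf("block energy = %f\n", j.energy);
        return 0;
    }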
>>> In CPUs it is always about keeping the CPU >>> busy. >> >> Why? I don't care if a core sits on its thumbs for a while if it >> isn't burning excessive power and is otherwise a superior solution. >> I don't think I've ever seen a requirement document that said, >> "resources must be kept busy". >> >> Many CPUs have convenient sleep modes, like clock management in >> silicon or FPGAs. >> >>> With multiple processors it is a much more difficult task and >>> still has all the issues of making a sequential CPUs process parallel >>> functionality. In reality there is little that is better about CPUs >>> over FPGAs. >> >> There's that bias that you mentioned before again. > >Again, not bias, experience.
How broad is your DSP software implementation experience? It sounds like not much.
> I suppose if you use hardware with gobs >of excess capacity it makes the problems easier to deal with, but >performing parallel tasks on sequential hardware requires a special >level of analysis that you just don't need to do when designing with >FPGAs.
Not at all, it just requires being able to think about it in the context of the available hardware.  Even in an FPGA or other hardware implementation there's always a tradeoff between parallelism and serialization when managing resource utilization and time available.  If you just wantonly make everything parallel in an FPGA you may easily wind up spending WAY more than you need to on an excessively large FPGA when some serializing could have been utilized.

This goes exactly to my point: faster, cheaper, lower-power CPUs move the threshold in that overall parallel-serial tradeoff, WHICH YOU HAVE TO DO IN AN FPGA ANYWAY, so that a CPU implementation occupies more of the space in the serialization direction than it used to.  Basically, where in the past you might have just done more serialization of the architecture in an FPGA to reduce recurring cost, now it just makes more sense to plop in a CPU and save money, power, and development time.

But, if you want to use an FPGA instead because it's "better", feel free; your solution may easily cost more, use more power, and take longer to develop and debug than mine.  My clients like it when we optimize the resources they care about, like cost and time.  Maybe yours don't.  Sometimes a client doesn't care as much.
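[To make that parallel/serial continuum concrete, a hedged sketch with invented tap count and coefficients: the C loop below is the fully serialized end of the spectrum, one multiply-accumulate per iteration, while an FPGA designer chooses how many of those multiplies to instantiate per clock.]

    /* Fully serialized FIR: one multiply-accumulate per iteration.
     * In an FPGA the same arithmetic could use 1, 4, or all NTAPS
     * multipliers per clock; that choice is the parallel/serial tradeoff. */
    #include <stdio.h>

    #define NTAPS 8

    static const float coeff[NTAPS] = {
        0.02f, 0.08f, 0.16f, 0.24f, 0.24f, 0.16f, 0.08f, 0.02f
    };

    /* x points to the newest NTAPS samples, newest first */
    static float fir(const float *x)
    {
        float acc = 0.0f;
        for (int k = 0; k < NTAPS; k++)    /* serial: reuses one "multiplier" */
            acc += coeff[k] * x[k];
        return acc;
    }

    int main(void)
    {
        float hist[NTAPS] = { 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f };
        printf("impulse response sample: %f\n", fir(hist));   /* equals coeff[0] */
        return 0;
    }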
>> They're different, and many aspects of CPUs are FAR better in a >> project than FPGAs. Some things about FPGAs are better. You >> apparently don't see both sides. > >I never said CPUs don't have uses or advantages. I just think most >people don't fully understand FPGAs or appreciate what can be done with >them. Nothing you have said shows me any different.
Sometimes I suspect that you're a fairly short person, because a lot seems to go over your head.
On 3/22/2017 7:03 PM, eric.jacobsen@ieee.org wrote:
>>>> So instead there is a huge focus on
>>>> highly complex debuggers to attempt to unravel the sequential execution
>>>> of many functions.  No small feat.  No thanks if there is any way I can
>>>> avoid it.
>>>
>>> That's that bias you were talking about earlier.
>>
>> Not bias, fact.  It is a lot easier to debug in simulation where I can
>> touch everything going on than in hardware where I am isolated by the
>> debugger.
>
> Not fact.  Opinion.
Your opinion is that my facts are just opinion.
>>>>> and the tradeoffs have changed over the years. What I'm seeing is >>>>> that the economics (e.g., part costs), power consumption, and >>>>> development costs have been moving in favor of CPUs as a trend for >>>>> quite a while. As CPUs get cheaper and more powerful and consume >>>>> less power, they encroach on more and more of the territory that used >>>>> to favor FPGAs or custom silicon. >>>> >>>> That is a perfect example. Why is power consumption moving in favor of >>>> CPUs? I expect you have not looked hard at the low power FPGAs >>>> available. FPGAs are not a stationary target. They are advancing along >>>> with process technology just like CPUs. >>> >>> Generally in the past if there was a lot of DSP to be done and a tight >>> power budget, it was an argument heavily in favor of FPGAs or custom >>> silicon since power only needed to spend on circuits dedicated to the >>> specific task. Basically, the number of switching gates per unit >>> time was smaller with an FPGA than a CPU. The trend has been that >>> the small, low-power micros have become more and more efficient, and >>> while FPGAs have also been improving, the rate has not been as fast. >>> So the borderline between which might be more favorable for a given >>> task has generally been moving in favor of small, powerful CPUs. >> >> You aren't up to date on FPGA technology. There are devices that will >> perform as well or better than CPUs on a power basis. > > No, I'm up to date. It is a broad tradeoff space where clock rate > and overall complexity and a lot of other things come into play. > It's been a long-term trend, and vendors used to point it out > regularly in both spaces.
Hardly. I am in the process of building a "zero power" clock that will run off ambient energy *and* update with WWVB... with an FPGA. That's low power.
>>>>> FPGAs and custom silicon still have their place, it's just shrinking. >>>>> >>>>> And the FPGAs that have multiple ARM cores on them have looked >>>>> extremely attractive to me since they came out, but I've not yet >>>>> personally had a project where even that made sense. They're >>>>> certainly out there, though. >>>> >>>> If you don't like or need the parallel functionality of FPGAs why would >>>> you like multiple processors? >>> >>> Who said they didn't like parallelism? Well-written requirement >>> specs never say, "must be implemented in parallel", because >>> requirements shouldn't say anything about implementation. >>> Requirements often say, must cost <$, must consume <Watts, must do >>> this much processing in this much time. Whether it is parallel or >>> serial or FPGA or CPU shouldn't matter to the system engineer as long >>> as the requirements are met and important parameters minimized or >>> maximized. >> >> Earlier you said FPGAs were not needed unless the task required >> parallelism. Opps, that was someone else. >> >> >>> Parellel or serial or FPGA or CPU is an implementation decision and >>> sometimes requirements and applications favor one over the other. My >>> observation is that CPUs are contenders more today than they've ever >>> been. I'm currently spec'ing a job that will likely be multiple CPUs >>> because it appears to be the quickest path with the least risk. I >>> love FPGAs, they just don't still fit in all the places that they used >>> to. >> >> What is risky about FPGAs? I assume you have work to do that requires >> the large code base that comes with an OS or complex comms like Ethernet >> or USB? > > In this particular case there are many risk areas, not the least of > which are available talent, tools, vendor support, library support, > etc., etc., etc., not the least of which is schedule risk In this > particular case the majority of these point heavily in favor of a CPU > solution. For jobs where there is a lot of integration of myriad > tasks that are easily handled by plopping in public-domain libraries, > it gets harder to allocate schedule and talent to doing those in an > FPGA where they are a bit harder to find.
You list risk issues that are real, then you mention schedule risk, which is not a source of risk but a consequence.  Meanwhile you tie it to FPGAs in no useful way.  I have already conceded the use cases with complex sequential activities such as Ethernet interfaces.
> I hope you don't think the talent pool of FPGA people is larger than > the talent pool of competent C coders.
A project only needs one engineer for each slot. There are plenty enough to go around. Does it matter if you have 100,000 to choose from or only 50,000?
>>>> Parallel CPUs are *much* harder to use >>>> effectively than FPGAs. >>> >>> No, they're not. They're very easy, especially these days with >>> high-speed serial connections between them. >> >> Coordinating the assignment of tasks in multiple processors is a >> difficult task to manage dynamically. > > On the contrary, multi-threading is supported in most C compilers > these days and is native in most of the free C tools and libraries. > I do it all the time and don't find it difficult at all.
It is supported, but you can't just use it willy-nilly.  It requires care in establishing priorities and using resources.  There are many pitfalls.
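[One concrete example of the kind of pitfall being referred to, as a sketch, again assuming POSIX threads and an invented shared counter: state touched by more than one thread needs explicit synchronization or the result is a data race.]

    /* Two threads share a sample counter.  Without the mutex the
     * read-modify-write races and counts get lost. */
    #include <pthread.h>
    #include <stdio.h>

    static long samples_seen = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *producer(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);     /* omit this and the total is wrong */
            samples_seen++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, producer, NULL);
        pthread_create(&b, NULL, producer, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("samples seen: %ld\n", samples_seen);   /* 200000 with the lock */
        return 0;
    }

[Establishing thread priorities on top of this, the other half of the caution above, is OS-specific and brings its own pitfalls such as priority inversion.]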
> Do you think it's trivial to schedule multiple blocks across different > clock boundaries and trigger events in an FPGA? I don't really see > much difference there.
Actually yes. Clock boundary crossing is a very simple design issue. But mostly designs are much improved if a single fast clock is used and all the slow clocks are emulated.
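[For readers who haven't met the single-clock pattern, a rough software analogy of what "emulating the slow clocks" means, with invented rates: everything runs from one fast tick, and the slower rates become enable flags derived from a counter rather than separate clocks.]

    /* Software analogy of the single-clock / clock-enable pattern:
     * one fast tick, and the slower "clocks" are enables derived from it. */
    #include <stdio.h>

    int main(void)
    {
        const unsigned fast_hz = 48000;             /* the one real clock   */
        for (unsigned tick = 1; tick <= fast_hz; tick++) {
            int en_1k  = (tick % 48) == 0;          /* 1 kHz "clock" enable */
            int en_1hz = (tick % fast_hz) == 0;     /* 1 Hz  "clock" enable */

            if (en_1k) {
                /* work that would otherwise live in a separate 1 kHz domain */
            }
            if (en_1hz)
                printf("one second of fast ticks elapsed\n");
        }
        return 0;
    }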
>>>> In CPUs it is always about keeping the CPU >>>> busy. >>> >>> Why? I don't care if a core sits on its thumbs for a while if it >>> isn't burning excessive power and is otherwise a superior solution. >>> I don't think I've ever seen a requirement document that said, >>> "resources must be kept busy". >>> >>> Many CPUs have convenient sleep modes, like clock management in >>> silicon or FPGAs. >>> >>>> With multiple processors it is a much more difficult task and >>>> still has all the issues of making a sequential CPUs process parallel >>>> functionality. In reality there is little that is better about CPUs >>>> over FPGAs. >>> >>> There's that bias that you mentioned before again. >> >> Again, not bias, experience. > > How broad is your DSP software implementation experience? It sounds > like not much.
If you say so.
>> I suppose if you use hardware with gobs >> of excess capacity it makes the problems easier to deal with, but >> performing parallel tasks on sequential hardware requires a special >> level of analysis that you just don't need to do when designing with >> FPGAs. > > Not at all, it just requires being able to think about it in the > context of the available hardware. Even in an FPGA or other hardware > implementation there's always a tradeoff between parellelism and > serialization when managing resource utilization and time available. > If you just wantonly make everything parallel in an FPGA you may > easily wind up spending WAY more than you need to on an excessively > large FPGA when some serializing could have been utilized. This goes > exactly to my point; faster, cheaper, lower-power CPUs move the > threshold in that overall parell-serial tradeoff, WHICH YOU HAVE TO DO > IN AN FPGA ANYWAY, where a CPU implementation occupies more of the > space in the serialization direction than it used to. Basically, > where in the past you might have just done more serialization of the > architecture in an FPGA to reduce recurring cost, now it just makes > more sense to plop in a CPU and save money, power, and development > time.
I think you are confused about what I wrote. Processors are inherently sequential. Parallelism is emulated. That adds lots of complexity to make it all work without conflicts. Adding a handful of processors does nothing to improve the issue and actually makes it more complex because of task scheduling on multiple processors.
> But, if you want to use an FPGA instead because it's "better", feel > free, but your solution may easily cost more, use more power, and take > longer to develop and debug than mine. My clients like it when we > optimize the resources they care about, like cost and time. Maybe > yours don't. Sometimes a client doesn't care as much.
Or using an FPGA may cost less in both recurring and non-recurring costs, use less power and get through development in less time with fewer bugs to be discovered in the field. My clients like it when my stuff just works.
>>> They're different, and many aspects of CPUs are FAR better in a >>> project than FPGAs. Some things about FPGAs are better. You >>> apparently don't see both sides. >> >> I never said CPUs don't have uses or advantages. I just think most >> people don't fully understand FPGAs or appreciate what can be done with >> them. Nothing you have said shows me any different. > > Sometimes I suspect that you're a fairly short person, because a lot > seems to go over your head.
Ok, if you feel the need to sling personal insults I guess you have nothing left to say. -- Rick C
On Wed, 22 Mar 2017 23:38:06 -0400, rickman <gnuarm@gmail.com> wrote:

>>>> Generally in the past if there was a lot of DSP to be done and a tight
>>>> power budget, it was an argument heavily in favor of FPGAs or custom
>>>> silicon since power only needed to spend on circuits dedicated to the
>>>> specific task.  Basically, the number of switching gates per unit
>>>> time was smaller with an FPGA than a CPU.  The trend has been that
>>>> the small, low-power micros have become more and more efficient, and
>>>> while FPGAs have also been improving, the rate has not been as fast.
>>>> So the borderline between which might be more favorable for a given
>>>> task has generally been moving in favor of small, powerful CPUs.
>>>
>>> You aren't up to date on FPGA technology.  There are devices that will
>>> perform as well or better than CPUs on a power basis.
This is where you missed the point. FPGAs beating CPUs on power has been kind of the expected result for many tasks, but CPU solutions have been eating into the territory and moving the threshold. So stating the obvious, default historical state as an argument by existence proof, or something, I'm not sure, doesn't add anything.
>> No, I'm up to date. It is a broad tradeoff space where clock rate >> and overall complexity and a lot of other things come into play. >> It's been a long-term trend, and vendors used to point it out >> regularly in both spaces. > >Hardly. I am in the process of building a "zero power" clock that will >run off ambient energy *and* update with WWVB... with an FPGA. That's >low power.
I'm a little puzzled about the relevance of that. For one thing it is a single parameter in a single usage case. It is easy to cite single examples for any particular corner case, but they're hardly relevant to the rest of the tradeoff space. And, duh, too. As I have attempted previously to say, dedicated clocked hardware usually has an advantage power-wise, and CPUs have historically been the underdogs in power consumption, especially for *small* *slow* tasks where all of the constantly-clocking gates of a CPU that may not be helping complete the task continue to consume power. So it is not at all surprising, expected, in fact, that it is not difficult at all to use less power with an FPGA than a CPU in many tasks. But, the tradeoff space has been trending for a long time for smaller, lower-power, lower-cost CPUs, so that many tasks that would otherwise have easily gone to an FPGA are now well within reason of using a CPU. Yes, FPGA power has gone down, too, but the changes in the tradeoff space are still real. But, I really shouldn't have to explain that to somebody who claims to understand the tradeoffs and yet gives a not-very-relevant example.
>>>>>> FPGAs and custom silicon still have their place, it's just shrinking. >>>>>> >>>>>> And the FPGAs that have multiple ARM cores on them have looked >>>>>> extremely attractive to me since they came out, but I've not yet >>>>>> personally had a project where even that made sense. They're >>>>>> certainly out there, though. >>>>> >>>>> If you don't like or need the parallel functionality of FPGAs why would >>>>> you like multiple processors? >>>> >>>> Who said they didn't like parallelism? Well-written requirement >>>> specs never say, "must be implemented in parallel", because >>>> requirements shouldn't say anything about implementation. >>>> Requirements often say, must cost <$, must consume <Watts, must do >>>> this much processing in this much time. Whether it is parallel or >>>> serial or FPGA or CPU shouldn't matter to the system engineer as long >>>> as the requirements are met and important parameters minimized or >>>> maximized. >>> >>> Earlier you said FPGAs were not needed unless the task required >>> parallelism. Opps, that was someone else. >>> >>> >>>> Parellel or serial or FPGA or CPU is an implementation decision and >>>> sometimes requirements and applications favor one over the other. My >>>> observation is that CPUs are contenders more today than they've ever >>>> been. I'm currently spec'ing a job that will likely be multiple CPUs >>>> because it appears to be the quickest path with the least risk. I >>>> love FPGAs, they just don't still fit in all the places that they used >>>> to. >>> >>> What is risky about FPGAs? I assume you have work to do that requires >>> the large code base that comes with an OS or complex comms like Ethernet >>> or USB? >> >> In this particular case there are many risk areas, not the least of >> which are available talent, tools, vendor support, library support, >> etc., etc., etc., not the least of which is schedule risk In this >> particular case the majority of these point heavily in favor of a CPU >> solution. For jobs where there is a lot of integration of myriad >> tasks that are easily handled by plopping in public-domain libraries, >> it gets harder to allocate schedule and talent to doing those in an >> FPGA where they are a bit harder to find. > >You list risk issues that are real, then you mention schedule risk which >is not a source of risk, but a consequence. Meanwhile you tie it to >FPGAs in no useful way.
Schedule risk is a real risk that many (most) clients care deeply about. Some things take longer to do in one technology or other or have more uncertainty in duration than others. The relevance to the discussion should be pretty obvious.
>I have already conceded usages of complex sequential activities such as >Ethernet interfaces.
I don't care as much about the peripherals, we're talking primarily about implementing DSP tasks. Peripherals may or may not be important depending on their role in a system, and that's true for either FPGA or CPU.
>> I hope you don't think the talent pool of FPGA people is larger than >> the talent pool of competent C coders. > >A project only needs one engineer for each slot. There are plenty >enough to go around. Does it matter if you have 100,000 to choose from >or only 50,000?
Staffed projects much? Doesn't sound like it.
>>>>> Parallel CPUs are *much* harder to use >>>>> effectively than FPGAs. >>>> >>>> No, they're not. They're very easy, especially these days with >>>> high-speed serial connections between them. >>> >>> Coordinating the assignment of tasks in multiple processors is a >>> difficult task to manage dynamically. >> >> On the contrary, multi-threading is supported in most C compilers >> these days and is native in most of the free C tools and libraries. >> I do it all the time and don't find it difficult at all. > >It is supported, but you can't just use it willy nilly. It requires >care in establishing priorities and using resources. There are many >pitfalls.
You can't do much of anything willy-nilly in this business and expect to be successful.  But multi-threading in C is not any harder than dealing with buffering and data scheduling across clock or other time boundaries in an FPGA.  I've done plenty of both, and multi-threading isn't the hurdle you're making it out to be.
>> Do you think it's trivial to schedule multiple blocks across different >> clock boundaries and trigger events in an FPGA? I don't really see >> much difference there. > >Actually yes. Clock boundary crossing is a very simple design issue. >But mostly designs are much improved if a single fast clock is used and >all the slow clocks are emulated.
Uh, yeah, but you can't always do that, and depending on the characteristics of the multi-rate boundaries and block interfaces and data behavior it can be difficult trying to schedule data to get across those boundaries in time. Time and rate boundaries are a common area for difficult hardware bugs. I don't think either software or hardware is really harder or easier than the other, but they are a bit different. Regardless, I see no advantage to FPGAs in this area.
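[A sketch of the software side of that same boundary problem, illustrative only and with an invented buffer size: a single-producer, single-consumer ring buffer is the usual way to carry data between two rates or two threads, and its full/empty tests are exactly the kind of detail that bites in either technology.]

    /* Single-producer / single-consumer ring buffer carrying samples
     * across a rate or thread boundary.  Power-of-two size makes the
     * index wrap a cheap mask.  (A real cross-thread version also needs
     * atomic handling of the head/tail indices.) */
    #include <stdio.h>

    #define RB_SIZE 8                       /* must be a power of two */
    #define RB_MASK (RB_SIZE - 1)

    struct ring {
        float    buf[RB_SIZE];
        unsigned head;                      /* written by the producer */
        unsigned tail;                      /* written by the consumer */
    };

    static int rb_put(struct ring *r, float x)
    {
        if (r->head - r->tail == RB_SIZE)   /* full */
            return -1;
        r->buf[r->head++ & RB_MASK] = x;
        return 0;
    }

    static int rb_get(struct ring *r, float *x)
    {
        if (r->head == r->tail)             /* empty */
            return -1;
        *x = r->buf[r->tail++ & RB_MASK];
        return 0;
    }

    int main(void)
    {
        struct ring r = { {0}, 0, 0 };
        for (int i = 0; i < 5; i++)
            rb_put(&r, (float)i);           /* producer side */

        float x;
        while (rb_get(&r, &x) == 0)         /* consumer side */
            printf("%.1f ", x);
        printf("\n");                       /* prints 0.0 1.0 2.0 3.0 4.0 */
        return 0;
    }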
>>>>> In CPUs it is always about keeping the CPU >>>>> busy. >>>> >>>> Why? I don't care if a core sits on its thumbs for a while if it >>>> isn't burning excessive power and is otherwise a superior solution. >>>> I don't think I've ever seen a requirement document that said, >>>> "resources must be kept busy". >>>> >>>> Many CPUs have convenient sleep modes, like clock management in >>>> silicon or FPGAs. >>>> >>>>> With multiple processors it is a much more difficult task and >>>>> still has all the issues of making a sequential CPUs process parallel >>>>> functionality. In reality there is little that is better about CPUs >>>>> over FPGAs. >>>> >>>> There's that bias that you mentioned before again. >>> >>> Again, not bias, experience. >> >> How broad is your DSP software implementation experience? It sounds >> like not much. > >If you say so. > > >>> I suppose if you use hardware with gobs >>> of excess capacity it makes the problems easier to deal with, but >>> performing parallel tasks on sequential hardware requires a special >>> level of analysis that you just don't need to do when designing with >>> FPGAs. >> >> Not at all, it just requires being able to think about it in the >> context of the available hardware. Even in an FPGA or other hardware >> implementation there's always a tradeoff between parellelism and >> serialization when managing resource utilization and time available. >> If you just wantonly make everything parallel in an FPGA you may >> easily wind up spending WAY more than you need to on an excessively >> large FPGA when some serializing could have been utilized. This goes >> exactly to my point; faster, cheaper, lower-power CPUs move the >> threshold in that overall parell-serial tradeoff, WHICH YOU HAVE TO DO >> IN AN FPGA ANYWAY, where a CPU implementation occupies more of the >> space in the serialization direction than it used to. Basically, >> where in the past you might have just done more serialization of the >> architecture in an FPGA to reduce recurring cost, now it just makes >> more sense to plop in a CPU and save money, power, and development >> time. > >I think you are confused about what I wrote. Processors are inherently >sequential. Parallelism is emulated. That adds lots of complexity to >make it all work without conflicts. Adding a handful of processors does >nothing to improve the issue and actually makes it more complex because >of task scheduling on multiple processors.
So you dislike parallelism in processors but you like it in FPGAs? It's actually a continuum, as I attempted to explain earlier, as architecture decisions in FPGAs necessarily involve trading off how much the implementation should be parallel or serial (i.e., hardware reuse). So you should do the special level of analysis when designing with FPGAs so that you aren't buying more FPGA than you need. Extending it the other direction to serializing the task doesn't mean you need to stop when you've sorted out how to put it in an FPGA, you can see whether it can be done as well or better in a CPU. If it can and it's cheaper or smaller or uses less power or somehow better meets whatever requirements are important, why would you not do that? That analysis isn't any harder or really any different in deciding to go to a CPU instead. It really isn't difficult at all, and people who are used to working with CPUs do it all the time, too. Resource requirements and tradeoffs are a normal part of most projects, and restricting one's vision to one particular solution space when others are available makes it easier for competitors to kick your butt when they pick a better solution that was outside of your narrowed vision. But if it is a conceptual hurdle for you I can see why that would influence you to stick with what you understand.
>> But, if you want to use an FPGA instead because it's "better", feel >> free, but your solution may easily cost more, use more power, and take >> longer to develop and debug than mine. My clients like it when we >> optimize the resources they care about, like cost and time. Maybe >> yours don't. Sometimes a client doesn't care as much. > >Or using an FPGA may cost less in both recurring and non-recurring >costs, use less power and get through development in less time with >fewer bugs to be discovered in the field. My clients like it when my >stuff just works.
Absolutely and likewise, only I'll plug in an FPGA when that's best, and it often is, or a CPU when that's best, and it often is. Sounds like you're reluctant to provide the option. When all you have is a hammer, all your problems look like nails. Even when they aren't.
>>>> They're different, and many aspects of CPUs are FAR better in a >>>> project than FPGAs. Some things about FPGAs are better. You >>>> apparently don't see both sides. >>> >>> I never said CPUs don't have uses or advantages. I just think most >>> people don't fully understand FPGAs or appreciate what can be done with >>> them. Nothing you have said shows me any different. >> >> Sometimes I suspect that you're a fairly short person, because a lot >> seems to go over your head. > >Ok, if you feel the need to sling personal insults I guess you have >nothing left to say.
Cheers.
On 3/23/2017 12:40 AM, eric.jacobsen@ieee.org wrote:
>>>> You aren't up to date on FPGA technology.  There are devices that will
>>>> perform as well or better than CPUs on a power basis.
>
> This is where you missed the point.  FPGAs beating CPUs on power has
> been kind of the expected result for many tasks, but CPU solutions
> have been eating into the territory and moving the threshold.  So
> stating the obvious, default historical state as an argument by
> existence proof, or something, I'm not sure, doesn't add anything.
You are saying this is how it was, but not so much anymore. I'm saying you have not kept up with FPGA technology. There are some exceedingly low power FPGAs.
>>> No, I'm up to date. It is a broad tradeoff space where clock rate >>> and overall complexity and a lot of other things come into play. >>> It's been a long-term trend, and vendors used to point it out >>> regularly in both spaces. >> >> Hardly. I am in the process of building a "zero power" clock that will >> run off ambient energy *and* update with WWVB... with an FPGA. That's >> low power. > > I'm a little puzzled about the relevance of that. For one thing it is > a single parameter in a single usage case. It is easy to cite single > examples for any particular corner case, but they're hardly relevant > to the rest of the tradeoff space. > > And, duh, too. As I have attempted previously to say, dedicated > clocked hardware usually has an advantage power-wise, and CPUs have > historically been the underdogs in power consumption, especially for > *small* *slow* tasks where all of the constantly-clocking gates of a > CPU that may not be helping complete the task continue to consume > power. So it is not at all surprising, expected, in fact, that it is > not difficult at all to use less power with an FPGA than a CPU in many > tasks. But, the tradeoff space has been trending for a long time for > smaller, lower-power, lower-cost CPUs, so that many tasks that would > otherwise have easily gone to an FPGA are now well within reason of > using a CPU. Yes, FPGA power has gone down, too, but the changes in > the tradeoff space are still real.
The "tradeoff space"???
> But, I really shouldn't have to explain that to somebody who claims to > understand the tradeoffs and yet gives a not-very-relevant example.
You definitely need to explain your terminology. The point is that I don't think you are up to speed with some of the very low power FPGA devices available.
>>>>>>> FPGAs and custom silicon still have their place, it's just shrinking. >>>>>>> >>>>>>> And the FPGAs that have multiple ARM cores on them have looked >>>>>>> extremely attractive to me since they came out, but I've not yet >>>>>>> personally had a project where even that made sense. They're >>>>>>> certainly out there, though. >>>>>> >>>>>> If you don't like or need the parallel functionality of FPGAs why would >>>>>> you like multiple processors? >>>>> >>>>> Who said they didn't like parallelism? Well-written requirement >>>>> specs never say, "must be implemented in parallel", because >>>>> requirements shouldn't say anything about implementation. >>>>> Requirements often say, must cost <$, must consume <Watts, must do >>>>> this much processing in this much time. Whether it is parallel or >>>>> serial or FPGA or CPU shouldn't matter to the system engineer as long >>>>> as the requirements are met and important parameters minimized or >>>>> maximized. >>>> >>>> Earlier you said FPGAs were not needed unless the task required >>>> parallelism. Opps, that was someone else. >>>> >>>> >>>>> Parellel or serial or FPGA or CPU is an implementation decision and >>>>> sometimes requirements and applications favor one over the other. My >>>>> observation is that CPUs are contenders more today than they've ever >>>>> been. I'm currently spec'ing a job that will likely be multiple CPUs >>>>> because it appears to be the quickest path with the least risk. I >>>>> love FPGAs, they just don't still fit in all the places that they used >>>>> to. >>>> >>>> What is risky about FPGAs? I assume you have work to do that requires >>>> the large code base that comes with an OS or complex comms like Ethernet >>>> or USB? >>> >>> In this particular case there are many risk areas, not the least of >>> which are available talent, tools, vendor support, library support, >>> etc., etc., etc., not the least of which is schedule risk In this >>> particular case the majority of these point heavily in favor of a CPU >>> solution. For jobs where there is a lot of integration of myriad >>> tasks that are easily handled by plopping in public-domain libraries, >>> it gets harder to allocate schedule and talent to doing those in an >>> FPGA where they are a bit harder to find. >> >> You list risk issues that are real, then you mention schedule risk which >> is not a source of risk, but a consequence. Meanwhile you tie it to >> FPGAs in no useful way. > > Schedule risk is a real risk that many (most) clients care deeply > about. Some things take longer to do in one technology or other or > have more uncertainty in duration than others. The relevance to the > discussion should be pretty obvious.
Risk to the schedule is a result. Schedules don't slip by themselves. They slip because some task was not as expected for some reason. That reason is then the risk factor to consider when evaluating a technology. There is no way to assign schedule risk to a technology unless you can show the specific issue with that technology that would cause a schedule slip. There is no inherent extra schedule risk in working with FPGAs. Same with cost.
>> I have already conceded usages of complex sequential activities such as >> Ethernet interfaces. > > I don't care as much about the peripherals, we're talking primarily > about implementing DSP tasks. Peripherals may or may not be > important depending on their role in a system, and that's true for > either FPGA or CPU. > >>> I hope you don't think the talent pool of FPGA people is larger than >>> the talent pool of competent C coders. >> >> A project only needs one engineer for each slot. There are plenty >> enough to go around. Does it matter if you have 100,000 to choose from >> or only 50,000? > > Staffed projects much? Doesn't sound like it. > >>>>>> Parallel CPUs are *much* harder to use >>>>>> effectively than FPGAs. >>>>> >>>>> No, they're not. They're very easy, especially these days with >>>>> high-speed serial connections between them. >>>> >>>> Coordinating the assignment of tasks in multiple processors is a >>>> difficult task to manage dynamically. >>> >>> On the contrary, multi-threading is supported in most C compilers >>> these days and is native in most of the free C tools and libraries. >>> I do it all the time and don't find it difficult at all. >> >> It is supported, but you can't just use it willy nilly. It requires >> care in establishing priorities and using resources. There are many >> pitfalls. > > You can't do much of anything will-nilly in this business and expect > to be successful. But multi-threading in C is not any harder than > dealing with buffering and data scheduling across clock or other time > boundaries in an FPGA. I've done plenty of both, and multi-threading > isn't the hurdle you're making it out to be.
That statement is simply not correct. Data scheduling across clock boundaries is trivial. I've never had any real concerns with it. If I understand the problem, the issues are easy to resolve. I have no idea why you think otherwise. In multitasking on a CPU you have to make sure the CPU is available for the high-priority tasks. This can get very complex when sharing resources.
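To make the kind of care being described here concrete, below is a minimal POSIX-threads sketch; the thread roles, priority numbers, and timing are hypothetical and not taken from any project in this thread. A high-priority sample pump shares a buffer with a low-priority logger, using SCHED_FIFO priorities and a priority-inheritance mutex so the low-priority work can't stall the real-time work.

  /* Hypothetical sketch only: a high-priority sample pump shares a buffer
   * with a low-priority logger.  SCHED_FIFO priorities plus a
   * priority-inheritance mutex keep the logger from causing priority
   * inversion.  Error checks are omitted for brevity. */
  #include <pthread.h>
  #include <sched.h>
  #include <stdio.h>
  #include <unistd.h>

  static pthread_mutex_t buf_lock;
  static int shared_buf[256];

  static void *sample_pump(void *arg)   /* high priority: must not be starved */
  {
      (void)arg;
      for (;;) {
          pthread_mutex_lock(&buf_lock);
          shared_buf[0]++;              /* stand-in for moving real samples */
          pthread_mutex_unlock(&buf_lock);
          usleep(1000);                 /* pretend 1 kHz block rate */
      }
      return NULL;
  }

  static void *logger(void *arg)        /* low priority: best-effort work */
  {
      (void)arg;
      for (;;) {
          pthread_mutex_lock(&buf_lock);
          printf("count %d\n", shared_buf[0]);
          pthread_mutex_unlock(&buf_lock);
          sleep(1);
      }
      return NULL;
  }

  int main(void)
  {
      pthread_mutexattr_t ma;
      pthread_mutexattr_init(&ma);
      /* Priority inheritance: if the logger holds the lock, it is boosted
       * so it can't block the pump indefinitely. */
      pthread_mutexattr_setprotocol(&ma, PTHREAD_PRIO_INHERIT);
      pthread_mutex_init(&buf_lock, &ma);

      pthread_attr_t hi, lo;
      struct sched_param hp = { .sched_priority = 80 };  /* illustrative values */
      struct sched_param lp = { .sched_priority = 10 };

      pthread_attr_init(&hi);
      pthread_attr_setinheritsched(&hi, PTHREAD_EXPLICIT_SCHED);
      pthread_attr_setschedpolicy(&hi, SCHED_FIFO);
      pthread_attr_setschedparam(&hi, &hp);

      pthread_attr_init(&lo);
      pthread_attr_setinheritsched(&lo, PTHREAD_EXPLICIT_SCHED);
      pthread_attr_setschedpolicy(&lo, SCHED_FIFO);
      pthread_attr_setschedparam(&lo, &lp);

      pthread_t t1, t2;
      pthread_create(&t1, &hi, sample_pump, NULL);
      pthread_create(&t2, &lo, logger, NULL);
      pthread_join(t1, NULL);           /* never returns in this sketch */
      pthread_join(t2, NULL);
      return 0;
  }

Build with gcc -pthread; on Linux, SCHED_FIFO generally needs root or CAP_SYS_NICE, which is part of the care being talked about.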
>>> Do you think it's trivial to schedule multiple blocks across different >>> clock boundaries and trigger events in an FPGA? I don't really see >>> much difference there. >> >> Actually yes. Clock boundary crossing is a very simple design issue. >> But mostly designs are much improved if a single fast clock is used and >> all the slow clocks are emulated. > > Uh, yeah, but you can't always do that, and depending on the > characteristics of the multi-rate boundaries and block interfaces and > data behavior it can be difficult trying to schedule data to get > across those boundaries in time. Time and rate boundaries are a > common area for difficult hardware bugs. I don't think either > software or hardware is really harder or easier than the other, but > they are a bit different. Regardless, I see no advantage to FPGAs in > this area.
Actually you can always use the high-speed clock for the lower-speed data. I have no idea why you find it hard. A customer was working on a design with so many separate clocks that his FPGA didn't have enough primary or even secondary clock lines. I explained to him how he could cross each alternate clock domain's signals into the highest-speed clock domain as soon as they entered the chip. He tried it and found it worked like a charm. It's not so much that FPGAs have "advantages"; it's that they don't have as many problems.
>>>>>> In CPUs it is always about keeping the CPU >>>>>> busy. >>>>> >>>>> Why? I don't care if a core sits on its thumbs for a while if it >>>>> isn't burning excessive power and is otherwise a superior solution. >>>>> I don't think I've ever seen a requirement document that said, >>>>> "resources must be kept busy". >>>>> >>>>> Many CPUs have convenient sleep modes, like clock management in >>>>> silicon or FPGAs. >>>>> >>>>>> With multiple processors it is a much more difficult task and >>>>>> still has all the issues of making a sequential CPUs process parallel >>>>>> functionality. In reality there is little that is better about CPUs >>>>>> over FPGAs. >>>>> >>>>> There's that bias that you mentioned before again. >>>> >>>> Again, not bias, experience. >>> >>> How broad is your DSP software implementation experience? It sounds >>> like not much. >> >> If you say so. >> >> >>>> I suppose if you use hardware with gobs >>>> of excess capacity it makes the problems easier to deal with, but >>>> performing parallel tasks on sequential hardware requires a special >>>> level of analysis that you just don't need to do when designing with >>>> FPGAs. >>> >>> Not at all, it just requires being able to think about it in the >>> context of the available hardware. Even in an FPGA or other hardware >>> implementation there's always a tradeoff between parellelism and >>> serialization when managing resource utilization and time available. >>> If you just wantonly make everything parallel in an FPGA you may >>> easily wind up spending WAY more than you need to on an excessively >>> large FPGA when some serializing could have been utilized. This goes >>> exactly to my point; faster, cheaper, lower-power CPUs move the >>> threshold in that overall parell-serial tradeoff, WHICH YOU HAVE TO DO >>> IN AN FPGA ANYWAY, where a CPU implementation occupies more of the >>> space in the serialization direction than it used to. Basically, >>> where in the past you might have just done more serialization of the >>> architecture in an FPGA to reduce recurring cost, now it just makes >>> more sense to plop in a CPU and save money, power, and development >>> time. >> >> I think you are confused about what I wrote. Processors are inherently >> sequential. Parallelism is emulated. That adds lots of complexity to >> make it all work without conflicts. Adding a handful of processors does >> nothing to improve the issue and actually makes it more complex because >> of task scheduling on multiple processors. > > So you dislike parallelism in processors but you like it in FPGAs?
Of course. Parallelism in FPGAs is only as problematic as the problem being solved. In sequential processors it creates many problems because it is emulated.
> It's actually a continuum, as I attempted to explain earlier, as > architecture decisions in FPGAs necessarily involve trading off how > much the implementation should be parallel or serial (i.e., hardware > reuse). So you should do the special level of analysis when designing > with FPGAs so that you aren't buying more FPGA than you need. > Extending it the other direction to serializing the task doesn't mean > you need to stop when you've sorted out how to put it in an FPGA, you > can see whether it can be done as well or better in a CPU. If it can > and it's cheaper or smaller or uses less power or somehow better meets > whatever requirements are important, why would you not do that? That > analysis isn't any harder or really any different in deciding to go to > a CPU instead. It really isn't difficult at all, and people who are > used to working with CPUs do it all the time, too. Resource > requirements and tradeoffs are a normal part of most projects, and > restricting one's vision to one particular solution space when others > are available makes it easier for competitors to kick your butt when > they pick a better solution that was outside of your narrowed vision.
The issue is that while it is relatively easy to trade off parallelism in FPGAs, it is harder in sequential CPUs because all the parallel tasks actually run sequentially. I'm sure you are aware of the issues this creates.
> But if it is a conceptual hurdle for you I can see why that would > influence you to stick with what you understand.
More ad hominem. Do you have to make it so personal?
>>> But, if you want to use an FPGA instead because it's "better", feel >>> free, but your solution may easily cost more, use more power, and take >>> longer to develop and debug than mine. My clients like it when we >>> optimize the resources they care about, like cost and time. Maybe >>> yours don't. Sometimes a client doesn't care as much. >> >> Or using an FPGA may cost less in both recurring and non-recurring >> costs, use less power and get through development in less time with >> fewer bugs to be discovered in the field. My clients like it when my >> stuff just works. > > Absolutely and likewise, only I'll plug in an FPGA when that's best, > and it often is, or a CPU when that's best, and it often is. Sounds > like you're reluctant to provide the option. > > When all you have is a hammer, all your problems look like nails. > Even when they aren't.
That would be true if that were the case. I have experience with every level of computing with the possible exception of massively parallel multi-CPU machines. Heck, I worked on DSP chips when they were actually rack cabinets. I've also programmed many sizes of CPUs across a wide range of size, weight and power. But more recently as FPGAs have gotten out of their rut of being bloated power hungry porcupines, I've come to realize there are a lot more apps that would be better suited to FPGAs than most people realize.
>>>>> They're different, and many aspects of CPUs are FAR better in a >>>>> project than FPGAs. Some things about FPGAs are better. You >>>>> apparently don't see both sides. >>>> >>>> I never said CPUs don't have uses or advantages. I just think most >>>> people don't fully understand FPGAs or appreciate what can be done with >>>> them. Nothing you have said shows me any different. >>> >>> Sometimes I suspect that you're a fairly short person, because a lot >>> seems to go over your head. >> >> Ok, if you feel the need to sling personal insults I guess you have >> nothing left to say. > > Cheers.
-- Rick C
On Thu, 23 Mar 2017 01:43:14 -0400, rickman <gnuarm@gmail.com> wrote:

>On 3/23/2017 12:40 AM, eric.jacobsen@ieee.org wrote: >> On Wed, 22 Mar 2017 23:38:06 -0400, rickman <gnuarm@gmail.com> wrote: >> >>> On 3/22/2017 7:03 PM, eric.jacobsen@ieee.org wrote: >>>> On Wed, 22 Mar 2017 18:13:49 -0400, rickman <gnuarm@gmail.com> wrote: >>>> >>>>> On 3/22/2017 4:37 PM, eric.jacobsen@ieee.org wrote: >>>>>> On Wed, 22 Mar 2017 15:47:02 -0400, rickman <gnuarm@gmail.com> wrote: >>>>>> >>>>>>> On 3/22/2017 2:10 PM, eric.jacobsen@ieee.org wrote: >>>>>>>> On Wed, 22 Mar 2017 03:56:02 -0400, rickman <gnuarm@gmail.com> wrote: >>>>>>>> >>>>>>>>> On 3/22/2017 1:10 AM, Steve Pope wrote: >>>>>>>>>> In article <oas04t$1vf$1@dont-email.me>, rickman <gnuarm@gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>>> On Thursday, March 16, 2017 at 5:51:33 AM UTC+13, rickman wrote: >>>>>>>>>> >>>>>>>>>>>>> If you want real time, ditch the OS. Or better yet, ditch the CPU. >>>>>>>>>>>>> Real men use FPGAs. >>>>>>>>>> >>>>>>>>>>> Every CPU chip >>>>>>>>>>> has parallel I/O units we call "peripherals". There is no reason FPGAs >>>>>>>>>>> are limited from doing sequential operations. I designed a test fixture >>>>>>>>>>> that had to drive a 33 MHz SPI like control interface, another high >>>>>>>>>>> speed serial data interface and step through test sequences, all in an >>>>>>>>>>> FPGA. Most CPUs couldn't even do the job because the SPI like interface >>>>>>>>>>> wasn't standard enough. Then they would have had a hard time pumping >>>>>>>>>>> data in and out of the UUT. >>>>>>>>>> >>>>>>>>>>> FPGAs start to loose steam when the sequential task gets very large. If >>>>>>>>>>> you want to run Ethernet or USB you are likely going to want a CPU of >>>>>>>>>>> some sort, but even that can be rolled in an FPGA with no trouble. I >>>>>>>>>>> believe it is the ARM CM1 that is designed for FPGAs along with a host >>>>>>>>>>> of other soft cores. >>>>>>>>>> >>>>>>>>>> Why sure there are freeware CPU cores (8051 on up) for FPGA's. >>>>>>>>>> >>>>>>>>>> But, in between winding my own transformers, and mining my own >>>>>>>>>> tantulum in east Congo so as to fabricate my own capacitors, >>>>>>>>>> I might not have time left to roll my own CPU-containing FPGA's. >>>>>>>>>> I might just, like, buy a CPU. >>>>>>>>> >>>>>>>>> As I mentioned, there are a few apps that are easier on a CPU chip. >>>>>>>>> It's just not many who can distinguish which those are. >>>>>>>>> >>>>>>>> >>>>>>>> There are a number of people here (including Steve) who've done >>>>>>>> hardware (HDL) and software work, and are familiar with hardware >>>>>>>> implementations and software implementations and the tradeoffs >>>>>>>> involved. I've mixed both, including FPGA and silicon design for >>>>>>>> hardware, and bare metal CPU software on DSPs to Linux apps for >>>>>>>> software. I've probably a lot more hardware design experience than >>>>>>>> software, personally. >>>>>>>> >>>>>>>> I don't think it's as hard to sort out as you're making it out to be, >>>>>>> >>>>>>> I'm not saying it's hard. I find that *many* have opinions of FGPAs >>>>>>> that are far, far out of date. It *is* hard to make informed decisions >>>>>>> when your information is not accurate. >>>>>> >>>>>> Absolutely, and likewise for the CPU side. >>>>>> >>>>>>> There is also a huge bias regarding the ease of developing sequential >>>>>>> software vs. HDL. People so often make it sound like debugging an FPGA >>>>>>> is grueling work. I find the opposite. I can test 99.999% of an FPGA >>>>>>> design before I ever hit the bench by using simulation. 
I think it is >>>>>>> much harder to do that with CPUs. >>>>>> >>>>>> It is not harder, and the tools tend to be more plentiful and mature >>>>>> because it's a larger, more varied market. >>>>>> >>>>>>> So instead there is a huge focus on >>>>>>> highly complex debuggers to attempt to unravel the sequential execution >>>>>>> of many functions. No small feat. No thanks if there is any way I can >>>>>>> avoid it. >>>>>> >>>>>> That's that bias you were talking about earlier. >>>>> >>>>> Not bias, fact. It is a lot easier to debug in simulation where I can >>>>> touch everything going on than in hardware where I am isolated by the >>>>> debugger. >>>> >>>> Not fact. Opinion. >>> >>> You opinion is that my facts are just opinion. >>> >>> >>>>>>>> and the tradeoffs have changed over the years. What I'm seeing is >>>>>>>> that the economics (e.g., part costs), power consumption, and >>>>>>>> development costs have been moving in favor of CPUs as a trend for >>>>>>>> quite a while. As CPUs get cheaper and more powerful and consume >>>>>>>> less power, they encroach on more and more of the territory that used >>>>>>>> to favor FPGAs or custom silicon. >>>>>>> >>>>>>> That is a perfect example. Why is power consumption moving in favor of >>>>>>> CPUs? I expect you have not looked hard at the low power FPGAs >>>>>>> available. FPGAs are not a stationary target. They are advancing along >>>>>>> with process technology just like CPUs. >>>>>> >>>>>> Generally in the past if there was a lot of DSP to be done and a tight >>>>>> power budget, it was an argument heavily in favor of FPGAs or custom >>>>>> silicon since power only needed to spend on circuits dedicated to the >>>>>> specific task. Basically, the number of switching gates per unit >>>>>> time was smaller with an FPGA than a CPU. The trend has been that >>>>>> the small, low-power micros have become more and more efficient, and >>>>>> while FPGAs have also been improving, the rate has not been as fast. >>>>>> So the borderline between which might be more favorable for a given >>>>>> task has generally been moving in favor of small, powerful CPUs. >>>>> >>>>> You aren't up to date on FPGA technology. There are devices that will >>>>> perform as well or better than CPUs on a power basis. >> >> This is where you missed the point. FPGAs beating CPUs on power has >> been kind of the expected result for many tasks, but CPU solutions >> have been eating into the territory and moving the threshold. So >> stating the obvious, default historical state as an argument by >> existence proof, or something, I'm not sure, doesn't add anything. > >You are saying this is how it was, but not so much anymore. I'm saying >you have not kept up with FPGA technology. There are some exceedingly >low power FPGAs.
Which covers perhaps one corner of the possible cases. The point is it's a broad space.
>>>> No, I'm up to date. It is a broad tradeoff space where clock rate >>>> and overall complexity and a lot of other things come into play. >>>> It's been a long-term trend, and vendors used to point it out >>>> regularly in both spaces. >>> >>> Hardly. I am in the process of building a "zero power" clock that will >>> run off ambient energy *and* update with WWVB... with an FPGA. That's >>> low power. >> >> I'm a little puzzled about the relevance of that. For one thing it is >> a single parameter in a single usage case. It is easy to cite single >> examples for any particular corner case, but they're hardly relevant >> to the rest of the tradeoff space. >> >> And, duh, too. As I have attempted previously to say, dedicated >> clocked hardware usually has an advantage power-wise, and CPUs have >> historically been the underdogs in power consumption, especially for >> *small* *slow* tasks where all of the constantly-clocking gates of a >> CPU that may not be helping complete the task continue to consume >> power. So it is not at all surprising, expected, in fact, that it is >> not difficult at all to use less power with an FPGA than a CPU in many >> tasks. But, the tradeoff space has been trending for a long time for >> smaller, lower-power, lower-cost CPUs, so that many tasks that would >> otherwise have easily gone to an FPGA are now well within reason of >> using a CPU. Yes, FPGA power has gone down, too, but the changes in >> the tradeoff space are still real. > >The "tradeoff space"??? > >> But, I really shouldn't have to explain that to somebody who claims to >> understand the tradeoffs and yet gives a not-very-relevant example. > >You definitely need to explain your terminology. > >The point is that I don't think you are up to speed with some of the >very low power FPGA devices available.
I am, but they don't cover all the needs. Neither do CPUs, nor other forms of FPGAs, nor the FPGAs with silicon CPU cores already in them, nor plopping a soft-core CPU into an FPGA.
>>>>>>>> FPGAs and custom silicon still have their place, it's just shrinking. >>>>>>>> >>>>>>>> And the FPGAs that have multiple ARM cores on them have looked >>>>>>>> extremely attractive to me since they came out, but I've not yet >>>>>>>> personally had a project where even that made sense. They're >>>>>>>> certainly out there, though. >>>>>>> >>>>>>> If you don't like or need the parallel functionality of FPGAs why would >>>>>>> you like multiple processors? >>>>>> >>>>>> Who said they didn't like parallelism? Well-written requirement >>>>>> specs never say, "must be implemented in parallel", because >>>>>> requirements shouldn't say anything about implementation. >>>>>> Requirements often say, must cost <$, must consume <Watts, must do >>>>>> this much processing in this much time. Whether it is parallel or >>>>>> serial or FPGA or CPU shouldn't matter to the system engineer as long >>>>>> as the requirements are met and important parameters minimized or >>>>>> maximized. >>>>> >>>>> Earlier you said FPGAs were not needed unless the task required >>>>> parallelism. Opps, that was someone else. >>>>> >>>>> >>>>>> Parellel or serial or FPGA or CPU is an implementation decision and >>>>>> sometimes requirements and applications favor one over the other. My >>>>>> observation is that CPUs are contenders more today than they've ever >>>>>> been. I'm currently spec'ing a job that will likely be multiple CPUs >>>>>> because it appears to be the quickest path with the least risk. I >>>>>> love FPGAs, they just don't still fit in all the places that they used >>>>>> to. >>>>> >>>>> What is risky about FPGAs? I assume you have work to do that requires >>>>> the large code base that comes with an OS or complex comms like Ethernet >>>>> or USB? >>>> >>>> In this particular case there are many risk areas, not the least of >>>> which are available talent, tools, vendor support, library support, >>>> etc., etc., etc., not the least of which is schedule risk In this >>>> particular case the majority of these point heavily in favor of a CPU >>>> solution. For jobs where there is a lot of integration of myriad >>>> tasks that are easily handled by plopping in public-domain libraries, >>>> it gets harder to allocate schedule and talent to doing those in an >>>> FPGA where they are a bit harder to find. >>> >>> You list risk issues that are real, then you mention schedule risk which >>> is not a source of risk, but a consequence. Meanwhile you tie it to >>> FPGAs in no useful way. >> >> Schedule risk is a real risk that many (most) clients care deeply >> about. Some things take longer to do in one technology or other or >> have more uncertainty in duration than others. The relevance to the >> discussion should be pretty obvious. > >Risk to the schedule is a result. Schedules don't slip by themselves. >They slip because some task was not as expected for some reason. That >reason is then the risk factor to consider when evaluating a technology. > There is no way to assign schedule risk to a technology unless you can >show how the issue with the technology that would cause a schedule slip.
You're catching on. Slowly, but you're catching on.
> There is no inherent extra schedule risk in working with FPGAs.
Nor is there with CPUs, but depending on a project and what needs to be done, one may present more risk than the other due to associated items.
>Same with cost.
Yes, same with cost.
> >>> I have already conceded usages of complex sequential activities such as >>> Ethernet interfaces. >> >> I don't care as much about the peripherals, we're talking primarily >> about implementing DSP tasks. Peripherals may or may not be >> important depending on their role in a system, and that's true for >> either FPGA or CPU. >> >>>> I hope you don't think the talent pool of FPGA people is larger than >>>> the talent pool of competent C coders. >>> >>> A project only needs one engineer for each slot. There are plenty >>> enough to go around. Does it matter if you have 100,000 to choose from >>> or only 50,000? >> >> Staffed projects much? Doesn't sound like it. >> >>>>>>> Parallel CPUs are *much* harder to use >>>>>>> effectively than FPGAs. >>>>>> >>>>>> No, they're not. They're very easy, especially these days with >>>>>> high-speed serial connections between them. >>>>> >>>>> Coordinating the assignment of tasks in multiple processors is a >>>>> difficult task to manage dynamically. >>>> >>>> On the contrary, multi-threading is supported in most C compilers >>>> these days and is native in most of the free C tools and libraries. >>>> I do it all the time and don't find it difficult at all. >>> >>> It is supported, but you can't just use it willy nilly. It requires >>> care in establishing priorities and using resources. There are many >>> pitfalls. >> >> You can't do much of anything will-nilly in this business and expect >> to be successful. But multi-threading in C is not any harder than >> dealing with buffering and data scheduling across clock or other time >> boundaries in an FPGA. I've done plenty of both, and multi-threading >> isn't the hurdle you're making it out to be. > >That statement is imply not correct. Data scheduling across clock >boundaries is trivial. I've never had any real concerns with it. If I >understand the problem, the issues are easy to resolve. I have no idea >why you think it is otherwise.
Just a jump across a clock boundary is trivial. That's not what I'm talking about. Consider multiplexing four streams at different, arbitrary rates, rates that may even change in real time, each with its own input clock, into a single, fixed-rate stream. There can't be holes in the stream. Then, on the other end of the link, take the fixed-rate stream, separate it back into its lower-rate, variable-rate streams, and reproduce the different clocks that put them into the big stream. That's just one example; there are many. Multi-rate interfaces and buffering are often sources of timing issues, either in actual timing or in timing logic. These sorts of things have tripped up very experienced FPGA implementers in the details. These kinds of FPGA partitioning problems and multi-threading in software raise somewhat similar issues, but not entirely. Multi-threading has its own challenges, but it's definitely NOT harder than dealing with certain partitioning issues in FPGAs.
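For what it's worth, here is a small C sketch of the bookkeeping that kind of multiplexer needs, just to make the buffering and fill-word problem concrete. The tag layout, FIFO depth, and IDLE convention are invented for illustration and are not anyone's actual framing.

  /* Hypothetical sketch of the multiplexing problem described above: four
   * variable-rate inputs feed per-stream FIFOs, and a fixed-rate output
   * loop drains them round-robin, emitting tagged words and an IDLE fill
   * word when nothing is ready, so the output never has holes. */
  #include <stdint.h>
  #include <stdio.h>

  #define NSTREAMS 4
  #define FIFO_LEN 64               /* power of two for cheap wrap */

  struct fifo {
      uint16_t buf[FIFO_LEN];
      unsigned head, tail;          /* head == tail means empty */
  };

  static struct fifo fifos[NSTREAMS];

  static int fifo_put(struct fifo *f, uint16_t w)
  {
      unsigned next = (f->head + 1) & (FIFO_LEN - 1);
      if (next == f->tail) return -1;   /* full: caller must back-pressure */
      f->buf[f->head] = w;
      f->head = next;
      return 0;
  }

  static int fifo_get(struct fifo *f, uint16_t *w)
  {
      if (f->tail == f->head) return -1; /* empty */
      *w = f->buf[f->tail];
      f->tail = (f->tail + 1) & (FIFO_LEN - 1);
      return 0;
  }

  /* One fixed-rate output slot: stream tag in bits 16+, data below.
   * Tag 7 is an IDLE word the far end simply discards. */
  static uint32_t mux_next_word(void)
  {
      static unsigned rr;                /* round-robin pointer */
      for (unsigned i = 0; i < NSTREAMS; i++) {
          unsigned s = (rr + i) % NSTREAMS;
          uint16_t w;
          if (fifo_get(&fifos[s], &w) == 0) {
              rr = (s + 1) % NSTREAMS;
              return ((uint32_t)s << 16) | w;
          }
      }
      return (uint32_t)7 << 16;          /* nothing ready: send IDLE */
  }

  int main(void)
  {
      fifo_put(&fifos[0], 0x1111);       /* pretend two inputs arrived */
      fifo_put(&fifos[2], 0x2222);
      for (int slot = 0; slot < 6; slot++)
          printf("slot %d: %06x\n", slot, (unsigned)mux_next_word());
      return 0;
  }

The demultiplexer on the far end would key on the same tags and drop the IDLE words; recreating the original clocks from the arrival pattern is the genuinely hard part and is not shown here.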
>In multitasking on a CPU you have to make sure the CPU is available for >the high priority tasks. This can get very complex when sharing >resources.
It can. When you push any technology you can be up for challenges. FPGAs are most certainly that way as well. So is silicon. So is RF. It's pretty much why there are engineers.
>>>> Do you think it's trivial to schedule multiple blocks across different >>>> clock boundaries and trigger events in an FPGA? I don't really see >>>> much difference there. >>> >>> Actually yes. Clock boundary crossing is a very simple design issue. >>> But mostly designs are much improved if a single fast clock is used and >>> all the slow clocks are emulated. >> >> Uh, yeah, but you can't always do that, and depending on the >> characteristics of the multi-rate boundaries and block interfaces and >> data behavior it can be difficult trying to schedule data to get >> across those boundaries in time. Time and rate boundaries are a >> common area for difficult hardware bugs. I don't think either >> software or hardware is really harder or easier than the other, but >> they are a bit different. Regardless, I see no advantage to FPGAs in >> this area. > >Actually you can always use the high speed clock for the lower speed >data. I have no idea why you find it hard. A customer was working on a >design with so many separate clocks that his FPGA didn't have enough >primary or even secondary clock lines. I explained to him how he can >immediately cross the clock boundary to the highest speed clock domain >as soon as each alternate clock signals entered the chip. He tried it >and found it worked a charm.
>It's not so much that FPGAs have "advantages", they don't have as many >problems.
Well, after working a LOT with FPGAs, silicon implementations, discrete logic implementations, and software implementations for DSP for thirty years, I can't agree. They all have their challenges and advantages and disadvantages, as one would expect. I think if you were right the world would be flocking to FPGA implementations, but it isn't and never has. FPGAs have a unique niche where they're the right choice, and I've worked in that niche for much of my career, but they ain't a good fit everywhere. My point has just been that the natural evolution of the technologies and the tools has made CPUs more attractive in many places that used to be ruled by FPGAs, or even ASICs. And you DO see that happening in the industry. Now there are CPUs and GPUs in many places where there used to be dedicated hardware.
>>>>>>> In CPUs it is always about keeping the CPU >>>>>>> busy. >>>>>> >>>>>> Why? I don't care if a core sits on its thumbs for a while if it >>>>>> isn't burning excessive power and is otherwise a superior solution. >>>>>> I don't think I've ever seen a requirement document that said, >>>>>> "resources must be kept busy". >>>>>> >>>>>> Many CPUs have convenient sleep modes, like clock management in >>>>>> silicon or FPGAs. >>>>>> >>>>>>> With multiple processors it is a much more difficult task and >>>>>>> still has all the issues of making a sequential CPUs process parallel >>>>>>> functionality. In reality there is little that is better about CPUs >>>>>>> over FPGAs. >>>>>> >>>>>> There's that bias that you mentioned before again. >>>>> >>>>> Again, not bias, experience. >>>> >>>> How broad is your DSP software implementation experience? It sounds >>>> like not much. >>> >>> If you say so. >>> >>> >>>>> I suppose if you use hardware with gobs >>>>> of excess capacity it makes the problems easier to deal with, but >>>>> performing parallel tasks on sequential hardware requires a special >>>>> level of analysis that you just don't need to do when designing with >>>>> FPGAs. >>>> >>>> Not at all, it just requires being able to think about it in the >>>> context of the available hardware. Even in an FPGA or other hardware >>>> implementation there's always a tradeoff between parellelism and >>>> serialization when managing resource utilization and time available. >>>> If you just wantonly make everything parallel in an FPGA you may >>>> easily wind up spending WAY more than you need to on an excessively >>>> large FPGA when some serializing could have been utilized. This goes >>>> exactly to my point; faster, cheaper, lower-power CPUs move the >>>> threshold in that overall parell-serial tradeoff, WHICH YOU HAVE TO DO >>>> IN AN FPGA ANYWAY, where a CPU implementation occupies more of the >>>> space in the serialization direction than it used to. Basically, >>>> where in the past you might have just done more serialization of the >>>> architecture in an FPGA to reduce recurring cost, now it just makes >>>> more sense to plop in a CPU and save money, power, and development >>>> time. >>> >>> I think you are confused about what I wrote. Processors are inherently >>> sequential. Parallelism is emulated. That adds lots of complexity to >>> make it all work without conflicts. Adding a handful of processors does >>> nothing to improve the issue and actually makes it more complex because >>> of task scheduling on multiple processors. >> >> So you dislike parallelism in processors but you like it in FPGAs? > >Of course. Parallelism in FPGAs is only problematic as your problem >being solved. In sequential processors it creates many problems because >it is emulated.
No, parallel processors. You seem to dislike parallel processors. Even a Raspberry Pi 3 has four cores, and as cheap as many of the ARM SoCs are, you can plop down parallel SoCs, too. Parallelism isn't just for FPGAs; there is a huge tradeoff space of ways to get things done these days.
>> It's actually a continuum, as I attempted to explain earlier, as >> architecture decisions in FPGAs necessarily involve trading off how >> much the implementation should be parallel or serial (i.e., hardware >> reuse). So you should do the special level of analysis when designing >> with FPGAs so that you aren't buying more FPGA than you need. >> Extending it the other direction to serializing the task doesn't mean >> you need to stop when you've sorted out how to put it in an FPGA, you >> can see whether it can be done as well or better in a CPU. If it can >> and it's cheaper or smaller or uses less power or somehow better meets >> whatever requirements are important, why would you not do that? That >> analysis isn't any harder or really any different in deciding to go to >> a CPU instead. It really isn't difficult at all, and people who are >> used to working with CPUs do it all the time, too. Resource >> requirements and tradeoffs are a normal part of most projects, and >> restricting one's vision to one particular solution space when others >> are available makes it easier for competitors to kick your butt when >> they pick a better solution that was outside of your narrowed vision. > >The issue is that while it is relatively easy to trade off parallelism >in FPGAs, it is harder in sequential CPUs because all the parallel tasks >actually run sequentially. I'm sure you are aware of the issues this >creates.
They don't run sequentially in parallel processors. But even with a single processor, if you have the processing bandwidth it still comes out in the wash. A signal goes into a box and has a lot of DSP done to it and comes out the other end. Can you tell whether it was done in parallel or not? No, you can't.
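A trivial C sketch of that point, with a made-up per-sample function standing in for the real DSP: the same block goes through the same math either serially or split across two threads, and the output is bit-identical either way, so nothing downstream can tell which was used.

  /* Hypothetical sketch: process() stands in for whatever per-sample DSP
   * the box does.  The serial and two-thread versions produce identical
   * output blocks. */
  #include <pthread.h>
  #include <string.h>
  #include <stdio.h>

  #define N 1024

  static float in[N], out_serial[N], out_parallel[N];

  static float process(float x) { return 0.5f * x + 1.0f; } /* stand-in DSP */

  struct span { float *src, *dst; int len; };

  static void *worker(void *arg)
  {
      struct span *s = arg;
      for (int i = 0; i < s->len; i++)
          s->dst[i] = process(s->src[i]);
      return NULL;
  }

  int main(void)
  {
      for (int i = 0; i < N; i++) in[i] = (float)i;

      /* Serial version: one loop over the block. */
      for (int i = 0; i < N; i++) out_serial[i] = process(in[i]);

      /* Parallel version: split the block in half across two threads. */
      struct span a = { in,         out_parallel,         N / 2 };
      struct span b = { in + N / 2, out_parallel + N / 2, N / 2 };
      pthread_t t1, t2;
      pthread_create(&t1, NULL, worker, &a);
      pthread_create(&t2, NULL, worker, &b);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);

      printf("identical: %s\n",
             memcmp(out_serial, out_parallel, sizeof out_serial) ? "no" : "yes");
      return 0;
  }

Of course this only holds when there is enough processing bandwidth to keep up with the input, which is exactly the tradeoff being argued about.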
>> But if it is a conceptual hurdle for you I can see why that would >> influence you to stick with what you understand. > >More ad hominem. Do you have to make it so personal?
Uh, personal experience or viewpoints affecting behavior is ad hominem?
>>>> But, if you want to use an FPGA instead because it's "better", feel >>>> free, but your solution may easily cost more, use more power, and take >>>> longer to develop and debug than mine. My clients like it when we >>>> optimize the resources they care about, like cost and time. Maybe >>>> yours don't. Sometimes a client doesn't care as much. >>> >>> Or using an FPGA may cost less in both recurring and non-recurring >>> costs, use less power and get through development in less time with >>> fewer bugs to be discovered in the field. My clients like it when my >>> stuff just works. >> >> Absolutely and likewise, only I'll plug in an FPGA when that's best, >> and it often is, or a CPU when that's best, and it often is. Sounds >> like you're reluctant to provide the option. >> >> When all you have is a hammer, all your problems look like nails. >> Even when they aren't. > >That would be true if that were the case. I have experience with every >level of computing with the possible exception of massively parallel >multi-CPU machines. Heck, I worked on DSP chips when they were actually >rack cabinets. I've also programmed many sizes of CPUs across a wide >range of size, weight and power. > >But more recently as FPGAs have gotten out of their rut of being bloated >power hungry porcupines, I've come to realize there are a lot more apps >that would be better suited to FPGAs than most people realize.
Yes, the breadth of coverage of FPGAs has been increasing, which is a good thing. There are places where it wasn't previously practical to put an FPGA, but it is now. That doesn't mean that CPUs aren't competing with FPGAs in places where they didn't before.
>>>>>> They're different, and many aspects of CPUs are FAR better in a >>>>>> project than FPGAs. Some things about FPGAs are better. You >>>>>> apparently don't see both sides. >>>>> >>>>> I never said CPUs don't have uses or advantages. I just think most >>>>> people don't fully understand FPGAs or appreciate what can be done with >>>>> them. Nothing you have said shows me any different. >>>> >>>> Sometimes I suspect that you're a fairly short person, because a lot >>>> seems to go over your head. >>> >>> Ok, if you feel the need to sling personal insults I guess you have >>> nothing left to say. >> >> Cheers. > > > >-- > >Rick C
Eric <Eric@spamspamorspam.com> writes:

> Just curious about how much Linux is being used for embedded DSP apps. > If you're using it, what are your normal development tools?
Eric, Lately I've been doing non-DSP things like equipment control libraries (has anyone heard of BACnet?), using linux for both the development and the final target. Folks will probably shake their heads, but my normal development tools are emacs, gdb, and gnumake. I don't even have a JTAG or IDE, other than gdb running inside emacs. Since the target OS is linux and has a network interface, I have been able to use gdbserver and gdb to debug on the target when necessary. Most of the time, due to the availability of compatible hardware on my desktop system, I build and test on my desktop linux, then only retarget the embedded linux for the final "does it still work" step. I have generated the cross-compile tools using yocto, bitbake, etc., so I build the target on my development system (F24). I have a gnumake-based build system I've used for, literally, decades, over multiple projects. In many of the DSP projects I've done in the past, you would just append Matlab or GNUOctave to the list of tools above. -- Randy Yates, DSP/Embedded Firmware Developer Digital Signal Labs http://www.digitalsignallabs.com
On 03/24/2017 07:19 PM, Randy Yates wrote:
> Eric <Eric@spamspamorspam.com> writes: > >> Just curious about how much Linux is being used for embedded DSP apps. >> If you're using it, what are your normal development tools? > > Eric, > > Lately I've been doing non-DSP things like equipment control libraries > (has anyone heard of BACnet?) lately using linux for both the > development as well as the final target. > > Folks will probably shake their heads, but my normal development tools > are emacs, gdb, and gnumake. I don't even have a JTAG or IDE, other than > gdb running inside emacs. Since the target OS is linux and has a network > interface, I have been able to use gdbserver and gdb to debug on the > target when necessary. Most of the time, due to the availability of > compatible hardware on my desktop system, I build and test on my desktop > linux, then only retarget the embedded linux for the final "does it > still work" step.
That's just stupid. Why would you want to use time-tested tools that have been generally stable for well over a decade when there's a perennial stream of latest and greatest tools that you can invest time and energy into learning only to find the project stalled with hundreds of logged bugs and no workarounds? -- Rob Gaddi, Highland Technology -- www.highlandtechnology.com Email address domain is currently out of order. See above to fix.
On Fri, 24 Mar 2017 22:19:02 -0400, Randy Yates wrote:

> Eric <Eric@spamspamorspam.com> writes: > >> Just curious about how much Linux is being used for embedded DSP apps. >> If you're using it, what are your normal development tools? > > Eric, > > Lately I've been doing non-DSP things like equipment control libraries > (has anyone heard of BACnet?) lately using linux for both the > development as well as the final target. > > Folks will probably shake their heads, but my normal development tools > are emacs, gdb, and gnumake. I don't even have a JTAG or IDE, other than > gdb running inside emacs. Since the target OS is linux and has a network > interface, I have been able to use gdbserver and gdb to debug on the > target when necessary. Most of the time, due to the availability of > compatible hardware on my desktop system, I build and test on my desktop > linux, then only retarget the embedded linux for the final "does it > still work" step.
I do more straight embedded than DSP these days, but with Eclipse in place of emacs, and the addition of OpenOCD ('cuz no linux on the target), my toolchain is the same. I understand that emacs is great if you have all the keystrokes memorized -- but I've come to like the point-and-click of Eclipse, and if you use their plain-jane environment it rarely breaks, and never for long. I try to maximize the amount of unit testing I do natively on the PC, to boot.
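As an illustration of that last point, a hypothetical host-side unit test: the module under test here (a made-up Q15 scaler) would normally live in its own source file shared unmodified with the firmware build, and the test is just a native gcc build with plain asserts.

  /* Hypothetical sketch of host-side unit testing of target code.  In a
   * real tree q15_scale() would live in its own .c/.h used by both the
   * firmware build and this native test. */
  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Module under test: made-up Q15 multiply with saturation. */
  static int16_t q15_scale(int16_t x, int16_t gain_q15)
  {
      int32_t y = ((int32_t)x * gain_q15) >> 15;
      if (y >  32767) y =  32767;     /* saturate instead of wrapping */
      if (y < -32768) y = -32768;
      return (int16_t)y;
  }

  int main(void)
  {
      assert(q15_scale(16384, 16384) == 8192);    /* 0.5 * 0.5 = 0.25 */
      assert(q15_scale(-32768, -32768) == 32767); /* -1 * -1 overflows, saturates */
      assert(q15_scale(0, 12345) == 0);
      puts("q15_scale: all tests passed");
      return 0;
  }

Nothing target-specific gets exercised this way, but the arithmetic, saturation, and corner cases are wrung out long before JTAG or OpenOCD ever enters the picture.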
> I have generated the cross-compile tools using yocto, bitbake, etc, > so I build the target on my development system (F24). > > I have a gnumake-based build system I've used for, literally, decades, > over multiple projects. > > In many of the DSP projects I've done in the past, you would just append > Matlab or GNUOctave to the list of tools above.
Scilab for me, but yes. -- Tim Wescott Control systems, embedded software and circuit design I'm looking for work! See my website if you're interested http://www.wescottdesign.com
Rob Gaddi  <rgaddi@highlandtechnology.invalid> wrote:

>On 03/24/2017 07:19 PM, Randy Yates wrote:
>> Eric <Eric@spamspamorspam.com> writes: >> Folks will probably shake their heads, but my normal development tools >> are emacs, gdb, and gnumake. I don't even have a JTAG or IDE, other than >> gdb running inside emacs. Since the target OS is linux and has a network >> interface, I have been able to use gdbserver and gdb to debug on the >> target when necessary. Most of the time, due to the availability of >> compatible hardware on my desktop system, I build and test on my desktop >> linux, then only retarget the embedded linux for the final "does it >> still work" step.
>That's just stupid. Why would you want to use time-tested tools that >have been generally stable for well over a decade when there's a >perennial stream of latest and greatest tools that you can invest time >and energy into learning only to find the project stalled with hundreds >of logged bugs and no workarounds?
Right. I've never been drawn towards using an IDE for the C, C++ and Verilog projects I've worked on. The text-based and command-line tools are very capable and reliable. Similarly, I somewhat dislike (and am less productive) using Simulink instead of, or in addition to, Matlab, but that's a trickier trade-off. Steve