
Future of Signal Processing

Started by Mannai_Murali 4 months ago · 18 replies · latest reply 4 months ago · 423 views

Are all signal processing algorithms in communication and control going to be replaced by machine learning and AI? Is it still worth looking into conventional signal processing topics like PLLs, synchronization, equalization, etc., using conventional signal processing algorithms?

Is it a must for a signal-processing-based communication engineer to learn machine learning and AI?

Any suggestions will be appreciated.

Thanks,

Reply by jbrower, June 9, 2024

Hi Mannai-

AI doesn't (can't) replace physics. After all possible information has been extracted from physical signals using available signal processing theory, then AI can be applied. You can look to evolution for examples of such boundaries: for example, the human cochlea converts sound waves to the frequency domain, equivalent to one of the most famous developments in signal processing, the FFT. From there, 3D "frequency images" can be further analyzed by convolutional neural networks, large language models, and so on.
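As a minimal sketch of that frequency-image idea (illustrative only; the frame length, hop size, and test tone are arbitrary choices, not anything from this post):

```python
import numpy as np

def frequency_image(x, frame_len=512, hop=256):
    """Frame-wise FFT magnitudes: a rough software analog of the cochlea's
    frequency analysis. Returns a (frames x bins) array a CNN could treat
    as an image."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))   # magnitude spectrum per frame
    return 20 * np.log10(mag + 1e-12)           # log scale

# Example: one second of a 440 Hz tone sampled at 16 kHz
fs = 16000
t = np.arange(fs) / fs
img = frequency_image(np.sin(2 * np.pi * 440 * t))
print(img.shape)   # (frames, frame_len // 2 + 1)
```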

As for DSP in the job market, signal processing is now taught as a foundational discipline, sort of like linear systems or EM fields, but not a "destination skill". In 2017 former Texas Instruments guys wrote about the future of DSP here:

  https://www.comsoc.org/publications/ctn/death-and-...

and in 2020 I wrote about what happened to Texas Instruments guys here:

  https://www.linkedin.com/pulse/dsps-dead-jeff-brow...

Researchers and engineers who know the latest AI and data science developments, and how to apply them within the Nvidia ecosystem, can earn USD 600k plus in the Bay Area. Probably the last remaining area where signal processing experts can come close to that is in communications, especially radio (5G, 6G, etc.).

No, it's not a must for a communication engineer to learn ML/AI, but it's highly advisable to gain familiarity with those areas, which will steadily become unavoidable. The communications engineer who can clearly see when ML/AI is hype and when it's truly applicable - now that guy is valuable.

-Jeff

Reply by Mannai_Murali, June 9, 2024

Thank you for the detailed reply.

Reply by omersayli, June 9, 2024

I agree that it is not a must, but the ability to make use of AI tools will be (or already is?) beneficial.

Maybe not the right place, but on the other hand, I don't think "evolution theory" has the explanatory power to account for biological information and complexity, i.e., through "pruning" of the infinite branches of a randomness(?) tree, which is said to be made possible by "time".

Reply by jbrower, June 9, 2024

Omersayli

"pruning of infinite branches of randomness" ... I don't think we'd even be discussing AI right now if there hadn't been some pruning ! haha


-Jeff


Reply by omersayli, June 9, 2024

Good, you know pruning then :)

So you know that one should also spend millions (maybe more) of dollars on special equipment (i.e., Nvidia cards), use math, develop algorithms and software, and do R&D for AI?

Reply by jbrower, June 9, 2024

No, actually not. Big AI's current direction is the complete opposite of evolution, which prizes efficiency above all else:

  https://www.linkedin.com/pulse/sum-all-inference-j...

  https://www.linkedin.com/posts/jeff-brower-1a51565...

-Jeff

Reply by omersayli, June 9, 2024

I like your photo; you were there at the Nvidia presentation, cool indeed!


I understand your blogs; I agree with your comment on the efficiency of the brain and also with your "wishful thinking" about other types of RAM, etc., but I don't see those as direction(s) yet?

I understand your efficiency-based products for AI, but when such a product is hopefully made, it would again be the result of engineers like you, scientists, technicians, etc. (accountants also :) That won't change my understanding of the 'design' process output for a "product".


Regards 

Omer

Reply by jbrower, June 9, 2024

Hi Omer, yes my thinking is wishful, indeed :-)  But the pressure is on Big AI to find more efficient approaches and create decentralized AI. They can't be using small nuclear reactors for each AI data center, even if Gates thinks that's a good idea. Not to mention no one is happy spending huge mega millions for GPUs.

With that level of pressure -- both climate and financial -- there will be breakthroughs soon enough.

-Jeff

Reply by zwitter689, June 9, 2024

Jeff, I like your answer; it is thoughtful and well spoken. I agree with you, but wonder if it is my experience and insight or my own reluctance to accept new paradigms. This is not meant as a criticism of your response.

Reply by LKopp, June 9, 2024

Hi Mannai-

Signal processing is indeed a frustrating discipline. It's like DNA for living bodies: everybody knows it's there and essential, but very few really care; it is now taken for granted and embedded in systems. It is only when "evolution" or "illness" is at stake that we look back to the fundamentals.

I am not talking about business, career making, or circuits; it is more general. And to a large extent, maybe it is a proof of maturity: the corpus of knowledge in SP has reached such a level of understanding that it is no longer a game of speculation and hot discussion; it is vastly predictable, but by a reduced set of experts. It has generated a sufficiently large amount of "tricks" that it is generally not necessary to think about it. It's part of the "mineralisation" of the technique: no more fun, and a lot of work to master the details.

Signal processing might rather be called "noise" processing or "uncertainties" processing, because it is the uncertainties that make SP so specific. And uncertainty is certainly something that will always be present in the future. The more you deal with a situation, the more what is initially called "noise" becomes a "nuisance" and then a "jamming process" that has to be handled.

Physics is also a domain where SP will always be essential and specific. Clocks, for instance, are a critical element. Embedding DSP in sensors seems to me an essential direction: for instance, a hydrophone that directly delivers a digital stream over an optical fiber. New physical devices appear regularly with increased accuracy (magnetic, gravity); they are used in arrays, sometimes over very large baselines (like Virgo and LIGO for gravitational waves). There are so many applications where SP is required. Communicating with and controlling drones on the Moon or on the sea bottom are places where everything has to be done. On the Moon, circuits have to survive cosmic rays, and the sea bottom is even farther away seen through the acoustic channel. OK, all this is not big business today, but without solving it there is no deployment to space and no future for humanity.

SP, DSP, and circuits have a great future. What is called AI is simply a proof of maturity of the decision-making process; it is a mechanization of the traditional Fisher methodology. Of course, most people don't know anything about the likelihood ratio; they are happy to use an "application" on their device.
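For readers who haven't met it, the likelihood ratio Laurent alludes to is the classical Fisher/Neyman-Pearson decision rule (standard textbook form, not anything specific to this post):

```latex
\Lambda(x) = \frac{p(x \mid H_1)}{p(x \mid H_0)} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \eta
```

Decide H1 when the ratio exceeds the threshold; in this reading, a trained classifier is one way of approximating that ratio directly from data.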

To summarize DSP engineers have matured tomato plants and we are now submerged by pizza..

I don't know if it helps but thank you for reading

Cheers

Laurent


Reply by kaz, June 9, 2024

"To summarize DSP engineers have matured tomato plants and we are now submerged by pizza.."

This is the best analogy, I have to admit.

Reply by omersayli, June 9, 2024

I remember the talk of the electronics society (supposedly IEEE?) membership of the scientist in the Terminator movie series :)

Reply by Mannai_Murali, June 10, 2024

Thanks

Reply by jekain314, June 10, 2024

A key to robust-to-disturbance high-speed data transfer is the multicarrier waveform. The preamble's individual tones with predefined phase/amplitude act to calibrate any frequency-dependent channel error. An AI approach to processing this occasional dataset might lead to better channel characterization: adaptive multipath delay, noise assessment, stream detection/use, receiver frequency/phase/time synchronization .... 

The batch-processing nature of the receiver's FFT means there is little feedback on the disturbance characterization of the channel during symbol sampling -- even though we are sampling within a symbol period at the signal bandwidth rate. An FFT alternative would allow oversampling, with a sample rate independent of the number of subcarriers, and per-sample processing that supports channel disturbance characterization. The oversampling allows skipping all over-threshold samples so as to mitigate PAPR, and it would offer a more information-rich dataset for channel characterization using AI.
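As a concrete (and deliberately simplified) sketch of the preamble-based calibration described above: per-subcarrier least-squares channel estimation followed by one-tap equalization. The subcarrier count, channel taps, and QPSK preamble are illustrative assumptions, not anything from this post; an ML-based estimator would replace the simple division with a learned mapping, which is where a richer, oversampled dataset could help.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                    # number of subcarriers (illustrative)

# Known preamble: unit-magnitude QPSK tones with predefined phases
preamble = np.exp(1j * np.pi / 2 * rng.integers(0, 4, N))

# Toy frequency-selective channel (3 multipath taps) plus noise
h = np.array([1.0, 0.35 + 0.2j, 0.15])
H = np.fft.fft(h, N)                      # true channel frequency response
noise = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
rx_preamble = H * preamble + noise        # received preamble tones (post-FFT)

# Least-squares channel estimate, one complex value per subcarrier
H_est = rx_preamble / preamble

# Use the estimate to equalize a subsequent data symbol
data = np.exp(1j * np.pi / 2 * rng.integers(0, 4, N))
rx_data = H * data + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
data_eq = rx_data / H_est                 # one-tap equalization per subcarrier

print(np.max(np.abs(data_eq - data)))     # small residual error
```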

Reply by bclevy, June 9, 2024

Ultimately, DSP will need to be used in combination with ML tools to properly train ML systems. See for example the discussion on Chad Spooner's blog on how to combine cyclic signal processing with ML:

Machine Learning – Cyclostationary Signal Processing
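As a toy illustration of the kind of cyclic feature that could feed an ML classifier (a naive estimator of the cyclic autocorrelation, not the methods described on Spooner's blog; the BPSK signal and 8-samples-per-symbol rate are arbitrary choices):

```python
import numpy as np

def cyclic_autocorr(x, alpha, lag):
    """Naive estimate of the cyclic autocorrelation R_x^alpha(lag):
    the mean of x[n] * conj(x[n - lag]) * exp(-j*2*pi*alpha*n)."""
    n = np.arange(lag, len(x))
    return np.mean(x[n] * np.conj(x[n - lag]) * np.exp(-2j * np.pi * alpha * n))

# Toy BPSK signal: 8 samples per symbol, so cycle frequencies at multiples of 1/8
rng = np.random.default_rng(1)
symbols = rng.choice([-1.0, 1.0], size=1000)
x = np.repeat(symbols, 8) + 0.1 * rng.standard_normal(8000)

# Feature vector: |R_x^alpha(4)| at a few candidate cycle frequencies
feats = [abs(cyclic_autocorr(x, alpha, lag=4)) for alpha in (0.0, 1/8, 0.17)]
print(feats)   # clearly nonzero at alpha = 0 and 1/8, near zero at 0.17
```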

Reply by kaz, June 10, 2024

Eventually it is a matter of replacing human skills with machines for the benefit of a few entrepreneurs. It started with mechanical robots that learned from human movements and then replaced them. Now it is AI learning human brain skills and then replacing them. I wonder if those entrepreneurs have any plans for managing the social problems they create. I dream that one day we get a tool to replace those who are controlling the course of history, and then see what they would do or say.

Reply by CharlieRader, June 13, 2024

To discuss the relationship of signal processing and artificial intelligence, we first need to define each of them.

I think of digital signal processing as three different things.

First, there is the process of mapping a problem to an algorithm or an architecture. This is a step that has, so far, required human creativity. That creativity may be assisted by machines, but it is rather uncommon for a machine to actually invent an algorithm or an architecture.

Second, there is the process by which data, which is the "signal" in signal processing, is processed to give answers to questions about the data. This is certainly signal processing, but no human does the processing any more. A hundred years ago, before computers were available, humans used algorithms to process signals.

The third part of signal processing is to look at data and recognize patterns that help understand the data. For example, the process of spectral estimation uses algorithms to compute the parameters of a model which could describe a large collection of signals as if they were created by playing white noise into a linear filter with the estimated parameters. The aim is not to eliminate the white noise, but to use imaginary white noise, structureless, as a starting point and to identify the parameters of a hypothetical class of linear systems that fit well with the observed data. Here, as with the first two cases, it takes human creativity to originate the models, other creativity to invent fast algorithms that derive the parameters from the observed signals, and, entirely by computers, the actual execution of those algorithms. This third part of signal processing could certainly be called pattern recognition.
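As a small sketch of that "white noise through a linear filter" model: estimating autoregressive (AR) parameters from a signal via the Yule-Walker equations. The model order and the synthetic AR(2) test signal are arbitrary choices for the example.

```python
import numpy as np

def yule_walker(x, order):
    """Estimate coefficients a[1..p] of the AR model
    x[n] = sum_k a[k] * x[n-k] + white noise, via the Yule-Walker equations."""
    x = x - np.mean(x)
    # Biased autocorrelation estimates r[0..p]
    r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])            # AR coefficients
    noise_var = r[0] - np.dot(a, r[1:])      # variance of the driving white noise
    return a, noise_var

# Test: data generated by a known stable AR(2) filter driven by white noise
rng = np.random.default_rng(0)
true_a = np.array([1.5, -0.75])
x = np.zeros(20000)
for n in range(2, len(x)):
    x[n] = true_a[0] * x[n - 1] + true_a[1] * x[n - 2] + rng.standard_normal()

a_hat, v = yule_walker(x, order=2)
print(a_hat)   # close to [1.5, -0.75]
```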

So let's go to that third subset of the field of signal processing. Everything artificial intelligence has done so far can be called pattern recognition. There has never, to my knowledge, been a serious demonstration of artificial intelligence showing creativity. But the methods of artificial intelligence could almost certainly recognize other kinds of patterns in data collection, beyond spectral estimation, and in fact that is where A.I. has been able to out-do humans.

I would like to give an example. My old colleague, Tom Stockham, wanted to take old recordings of the singer Enrico Caruso and make them sound more like Caruso by deleting artifacts of the primitive recording process. Stockham knew that the old recordings added noise to the recorded music and that the original music had a spectrum that was, in many cases, spectral lines. Anything in between the hypothetical spectral lines would usually be recording artifacts, and they could be filtered out. The process would not be a perfect reproduction because the spectral lines could themselves be distorted by the recording process. But the biggest part of the energy in the artifacts would be between the spectral lines. Tom's accurate understanding of the spectral structure of voice, musical instruments, and recording artifacts let him develop and use algorithms that permitted a computer to vastly improve the quality of the old recordings of the otherwise long-gone sounds of a great singer. I would be very surprised if modern A.I. could learn, from the data (a small number of surviving recordings), how to improve them.
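Purely as a toy illustration of the "between the spectral lines" idea (emphatically not Stockham's actual restoration method, and with made-up parameters), one can build a frequency-domain mask that keeps energy near hypothesized harmonics and attenuates everything in between; the hard part Charlie points out, knowing where the lines are and how the recording distorted them, is exactly what required human insight.

```python
import numpy as np

def harmonic_mask_filter(x, fs, f0, half_width_hz=10.0, floor=0.1):
    """Toy between-the-lines suppression: keep FFT bins within half_width_hz
    of each harmonic of f0, attenuate everything else by `floor`."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    dist = np.abs((freqs + f0 / 2) % f0 - f0 / 2)     # distance to nearest harmonic
    mask = np.where(dist <= half_width_hz, 1.0, floor)
    return np.fft.irfft(X * mask, n=len(x))

# Example: a 200 Hz tone with two harmonics buried in broadband noise
fs = 8000
t = np.arange(fs) / fs
clean = sum(np.sin(2 * np.pi * 200 * k * t) / k for k in (1, 2, 3))
noisy = clean + 0.5 * np.random.default_rng(0).standard_normal(len(t))
restored = harmonic_mask_filter(noisy, fs, f0=200.0)
```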

Reply by sami_aldalahmah, June 16, 2024

Hello,


I would argue that signal processing and DSP will become (if they are not already) essential components of the AI/ML processing pipeline.

AI is data driven and hence requires data-intensive computations, mainly for training. Currently, GPUs and cloud servers are used to carry out those computations, but with significant price tags and training time (delay). At some point, cost and training delay should be reduced. Reducing the training delay gives more time to explore different AI/ML models and a faster time to market, to state a few advantages. In my opinion, this can be done in (at least) two ways:

1- Reduce noise in the data, and hence reduce model over-fitting.

2- Data/signal pre-processing may provide new or better features for training, such as using the FFT or DWT to convert a 1D signal into a 2D image that can be fed to a convolutional neural network (a minimal sketch follows below). Of course, other types of processing can be used to extract meaningful features from the data.
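A minimal sketch of both points, assuming SciPy and PyWavelets (pywt) are available; the Wiener filter, wavelet choice, and test signal are illustrative stand-ins only:

```python
import numpy as np
import pywt
from scipy.signal import wiener

rng = np.random.default_rng(0)
fs = 1000
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(len(t))

# 1) Reduce noise in the data before training (a simple Wiener filter here)
x_denoised = wiener(x, mysize=11)

# 2) Convert the 1D signal into a 2D time-frequency "image" for a CNN,
#    here via a continuous wavelet scalogram (an FFT/STFT spectrogram works too)
scales = np.arange(1, 65)
coeffs, freqs = pywt.cwt(x_denoised, scales, 'morl', sampling_period=1.0 / fs)
image = np.abs(coeffs)          # shape: (len(scales), len(x)), ready for a 2D CNN

print(image.shape)              # (64, 2000)
```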