MSc Digital Audio Engineering Project Idea
Hi there guys, I've joined this site to help me learn and to keep me in the loop with digital-audio-related topics. I am currently an MSc student and I need to write a research paper. I did Mathematics for my BSc, so I am inclined to do a lot of maths in my research rather than coding. I am really interested in Artificial Reverberation. The area, however, is so saturated that it is hard to pick something specific to work on. As Audio Engineers, could anyone tell me what they would be interested to see explored further in Artificial Reverberation? Any suggestions would be of so much help. Thank you kindly!
#Convolution #Reverb
That's a really broad question, but here are a few things that jump to mind.
1. How can you make a stable IIR filter that approximates the sound of a given FIR "convolution reverb"? Specifically, look at something like the Jot reverb, and find a mathematical mapping from an FIR filter to a set of coefficients for that structure (a rough sketch of that kind of structure follows this list).
2. For reverb in a 3D-modeled space, how can you use graphics processing units (GPUs) to accelerate the generation of a reverb filter? (VR gaming is making this very important.) Newer Nvidia GPUs are designed with audio modeling in mind.
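To make the first idea concrete, here is a minimal MATLAB sketch of a 4-line feedback delay network (FDN), the kind of structure Jot's reverb generalizes. Every number in it (delay lengths, loop gain, input/output gains) is a placeholder rather than a tuned design; the research question would be how to map a measured FIR impulse response onto parameters like these.

% Minimal 4-line feedback delay network (FDN) -- all values below are placeholders
fs = 48000;
x  = [1; zeros(fs-1,1)];             % unit impulse, 1 second
N  = 4;
m  = [1499 1801 2113 2293];          % delay-line lengths in samples (mutually prime)
g  = 0.85;                           % loop gain; |g| < 1 keeps the network stable
A  = g * hadamard(N) / sqrt(N);      % orthogonal feedback matrix, scaled down by g
b  = ones(N,1);                      % input gains
c  = ones(1,N) / N;                  % output gains

v  = zeros(length(x), N);            % v(n,:) = samples written into the delay lines at time n
y  = zeros(size(x));
for n = 1:length(x)
    s = zeros(N,1);                  % current delay-line outputs (written m(k) samples ago)
    for k = 1:N
        if n > m(k)
            s(k) = v(n - m(k), k);
        end
    end
    y(n) = c * s;                    % mix the line outputs to the reverb output
    v(n,:) = (A * s + b * x(n)).';   % feedback mix plus input goes back into the lines
end
% y is the FDN impulse response; the fitting problem is choosing m, g, A, b, c
% so that y matches a measured FIR (convolution-reverb) impulse response.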
These are great suggestions. The first one sounds like a lot of linear algebra, so I think I would enjoy that too. Thank you so much!
Okay, do you know what "Convolutional Reverb" is? What "Fast Convolution" is?
Do you know what the Schroeder Reverb model is, or the Jot Reverb model? (There's a quick sketch of the Schroeder structure after these questions.) There might be some other more proprietary digital reverbs (like Alesis or Lexicon or some plugins) to look at.
What kinda reverb do you want to do? Something that runs on a DSP board in real time? Or something that processes a file?
What platform(s) and/or language(s) are you working with to do this with?
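For reference, the Schroeder model is just a bank of parallel feedback comb filters followed by a couple of series allpass filters. Here is a rough MATLAB sketch; the delay lengths and gains are placeholder values, not Schroeder's published ones.

% Minimal Schroeder reverberator: 4 parallel feedback combs into 2 series allpasses.
% Delay lengths and gains are placeholder choices, not a tuned design.
fs = 48000;
x  = [1; zeros(fs-1,1)];                 % impulse in; swap in an audio file to listen
combM = [1687 1601 2053 2251];           % comb delays (samples), mutually prime
combG = [0.773 0.802 0.753 0.733];       % comb feedback gains (< 1 for stability)
apM   = [347 113];  apG = 0.7;           % allpass delays and gain

y = zeros(size(x));
for k = 1:4                              % parallel combs: H(z) = z^-M / (1 - g z^-M)
    b = [zeros(1, combM(k)) 1];
    a = [1 zeros(1, combM(k)-1) -combG(k)];
    y = y + filter(b, a, x);
end
for k = 1:2                              % series allpasses: H(z) = (-g + z^-M) / (1 - g z^-M)
    b = [-apG zeros(1, apM(k)-1) 1];
    a = [1    zeros(1, apM(k)-1) -apG];
    y = filter(b, a, y);
end
% y is the reverberated signal (here, the impulse response of the reverb).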
I have actually not heard of fast convolution! I have, however, heard of convolution reverb. Thank you for the suggestions on the plugins. I was reading some research papers on reverb decay yesterday. I would prefer to do real-time DSP. Also, the languages that I am currently working with are C++ and MATLAB. I enjoy MATLAB thoroughly.
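For anyone else following along: fast convolution just means carrying out the convolution with FFTs instead of directly, which is what makes long convolution reverbs practical. A minimal MATLAB illustration (the file names here are made up; assume mono files at the same sample rate):

% Fast convolution: convolve in the frequency domain, O(N log N) instead of O(N*M).
[h, fs] = audioread('ir.wav');           % measured room impulse response (placeholder file)
[x, ~]  = audioread('dry.wav');          % dry input signal (placeholder file)
L = length(x) + length(h) - 1;           % length of the full linear convolution
y = real(ifft(fft(x, L) .* fft(h, L)));  % multiply spectra = convolve in time
% Same result as conv(h, x), just much faster for long impulse responses.
% Real-time convolution reverbs do this block by block (overlap-add), e.g. fftfilt(h, x).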
Here are a couple of dumb observer ideas:
1) Which is a better approach to reverb: traditional DSP or artificial intelligence?
2) In tiny spaces, what is the difference between a steel-body guitar and a wooden guitar?
I really like the first idea! If I were to do that, I would probably look at the theoretical side of both approaches and then do some practical work. Thank you so much!
Hi there Tappedtima
It's always difficult choosing a research topic! I'm not sure if you've experimented with any reverb toolkits. It might be useful to look at the Audacity audio editor: on the main menu, under Effect, there is a Reverb... option. You can generate a sample signal and then apply reverb effects to it. One of the nice things about Audacity is that it is open source, so (with a little effort) you can have a look at the way the reverb facility is coded! This might even inspire you to add a coding element to your mathematical approach :-).
Hi there!
I actually have, but only briefly. I am quite new to the applications side of things. However, I will check this out! Thank you!
How to eliminate reverb from a recorded source, e.g. to get more intelligibility (there are some plugins on the market for this).
Kind regards, Chris
Yes, while synthesizing reverb has been studied a lot, dereverberation is a hot topic, and to my knowledge there is no best practice for it yet. Most papers on the topic are recent, and firm conclusions have not been drawn. Interested?
To elaborate on the suggestion of dereverberation as a research topic:
Dereverberation isn't a new topic; the general problem is just under-determined, so there are as many solutions as there are ways of constraining the problem.
Take the basic convolution equation:
y[t] = (h * x)[t] + n[t]
where y is the measured output, h is the impulse response of the reverb, x is the dry signal, and n is additive noise. We want x; we have y.
If we know h (and h is invertible), then the problem is no longer under-determined: with no noise we can recover x exactly by inverse filtering, and with additive noise we can estimate it using Wiener deconvolution. Furthermore, if we can make a test measurement with a known signal, we can estimate h, and then use that known h to deconvolve future measurements where the signal is not known.
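A minimal MATLAB sketch of that known-h case, with a constant noise-to-signal ratio standing in for a proper per-bin estimate:

% Wiener deconvolution for the known-h case: estimate x from y = conv(h, x) + n.
% h and y are assumed given as vectors; 'nsr' is a placeholder constant
% noise-to-signal power ratio (a real implementation would estimate it per frequency bin).
L    = length(y);
H    = fft(h, L);
Y    = fft(y, L);
nsr  = 1e-3;
G    = conj(H) ./ (abs(H).^2 + nsr);     % Wiener deconvolution filter
xhat = real(ifft(G .* Y));               % estimate of the dry signal x
% With nsr = 0 this is plain inverse filtering, which blows up wherever |H| is small;
% the nsr term is what keeps the inversion well behaved in the presence of noise.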
The new solutions cover situations where we have less complete information (so-called "blind deconvolution"). What's the best solution if we don't know x, but we do know that x is speech? What if we don't know h, but we do know that h is room reverb? What if h is non-invertible, but we know something about the statistics of x?
In a lot of these cases, neural networks seem like promising solutions. I say that mostly because, when I hear a sound with reverb, I can easily imagine what that sound would be without the reverb, and my brain is nothing but a fleshy neural network. So I assume that, with a suitable training set, a computer could emulate that behavior.
I just did so much research on this and it is SO interesting. Thank you so much! This sounds like it's going to be a lot of fun!!!