I'm posting this here because both a) I'm amazed, and b) of the fantastic signal processing actually required to make something like this happen.

Someone has posted at http://vimeo.com/57685359 a tweaked video of REM's "Losing My Religion", in which they've changed the song from the original minor scale to a major one. Vocals, guitar chords, whole nine yards. Not a cover, just reprocessing the original.

The effect on the song is radical; the emotional impact entirely changed. But what I don't understand is how it was done. Anyone have a handle on the math/theory behind something like this?

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order. See above to fix.
Very OT: "Losing My Religion" on major scale
Started by ●January 21, 2013
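As background to the question: in pitch terms the minor-to-major change is small. Only the scale degrees that differ between the natural minor and the parallel major scales (the 3rd, 6th, and 7th) move, each up by one semitone. A sketch of the mapping (illustrative only, not taken from the video's actual process):

```python
# Illustrative sketch, not the video's actual process: which notes move
# when a song in a natural-minor key is retuned to the parallel major.
# Only scale degrees 3, 6, and 7 differ, each by one semitone upward.

SEMITONE = 2 ** (1 / 12)  # equal-temperament semitone ratio, ~1.0595

NATURAL_MINOR = [0, 2, 3, 5, 7, 8, 10]  # semitones above the tonic
MAJOR = [0, 2, 4, 5, 7, 9, 11]

def retune_ratio(semitones_above_tonic):
    """Frequency ratio that moves a natural-minor scale tone onto the
    parallel major scale (1.0 if the note is shared or out of scale)."""
    pc = semitones_above_tonic % 12
    if pc in NATURAL_MINOR:
        degree = NATURAL_MINOR.index(pc)
        return SEMITONE ** (MAJOR[degree] - NATURAL_MINOR[degree])
    return 1.0

print(retune_ratio(3))  # minor third -> major third: one semitone up
print(retune_ratio(7))  # the fifth is shared by both scales: 1.0
```

The hard part, of course, is not deciding *which* notes to move but isolating each note inside a finished mix so it can be moved at all, which is what the replies below dig into.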
Reply by ●January 21, 2013
On Mon, 21 Jan 2013 10:06:59 -0800, Rob Gaddi <rgaddi@technologyhighland.invalid> wrote:

>I'm posting this here because both a) I'm amazed, and b) of the
>fantastic signal processing actually required to make something like
>this happen.
>
>Someone has posted at http://vimeo.com/57685359 a tweaked video of
>REM's "Losing My Religion", in which they've changed the song from the
>original minor scale to a major one. Vocals, guitar chords, whole nine
>yards. Not a cover, just reprocessing the original.
>
>The effect on the song is radical; the emotional impact entirely
>changed. But what I don't understand is how it was done. Anyone have a
>handle on the math/theory behind something like this?

The last I saw (some demo/instructional video a few years ago), in Antares Autotune or similar software you can vary the pitch of an individual note within a polyphonic track (a recording with several notes playing at once). I presume it's a magic combination of FFT, windowing, resampling, and assorted magic coefficients.
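The resampling part of that guess is the easy piece to sketch. A naive pitch shift by resampling (assumed, not Autotune's actual algorithm) also shows why the other ingredients are needed: resampling alone changes the duration along with the pitch, so real tools pair it with a time-stretch stage.

```python
import numpy as np

def resample_pitch_shift(x, semitones):
    """Naive pitch shift by resampling with linear interpolation.
    Raises pitch by `semitones`, but also shortens the audio by the
    same ratio -- which is why real tools pair resampling with a
    time-stretch stage to restore the original duration."""
    ratio = 2.0 ** (semitones / 12.0)
    n_out = int(len(x) / ratio)
    t = np.arange(n_out) * ratio            # fractional read positions
    i = np.minimum(t.astype(int), len(x) - 2)
    frac = t - i
    return (1 - frac) * x[i] + frac * x[i + 1]

fs = 8000
t = np.arange(fs) / fs                      # one second of audio
a440 = np.sin(2 * np.pi * 440 * t)
up = resample_pitch_shift(a440, 1)          # one semitone up, ~466 Hz
print(len(up) / len(a440))                  # ~0.944: the audio got shorter
```

And note this shifts *everything* in the signal at once; it says nothing about lifting one note out of a chord, which is the genuinely hard part being discussed here.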
Reply by ●January 21, 2013
On Mon, 21 Jan 2013 10:06:59 -0800, Rob Gaddi <rgaddi@technologyhighland.invalid> wrote:

(snip)

>The effect on the song is radical; the emotional impact entirely
>changed. But what I don't understand is how it was done. Anyone have a
>handle on the math/theory behind something like this?

Like one of the comments: It's cool but I hate it.

I think it helps reveal the genius of doing it in a minor key in the first place.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
Reply by ●January 21, 2013
Rob Gaddi wrote:

(snip)

> The effect on the song is radical; the emotional impact entirely
> changed. But what I don't understand is how it was done. Anyone have a
> handle on the math/theory behind something like this?

It's probably done with Melodyne:

http://www.youtube.com/watch?v=jFCjv4_jqAY

Melodyne can split individual notes out from a guitar chord.

--
Les Cargill
Reply by ●January 22, 2013
Ben Bradley <ben_u_bradley@etcmail.com> wrote:
> On Mon, 21 Jan 2013 10:06:59 -0800, Rob Gaddi
> <rgaddi@technologyhighland.invalid> wrote:

(snip)

>>Someone has posted at http://vimeo.com/57685359 a tweaked video of
>>REM's "Losing My Religion", in which they've changed the song from the
>>original minor scale to a major one. Vocals, guitar chords, whole nine
>>yards. Not a cover, just reprocessing the original.

(snip)

> The last I saw (some demo/instructional video a few years ago) in
> Antares Autotune or similar software you can vary the pitch of an
> individual note within a polyphonic track (recording with several
> notes playing at once). I presume it's a magic combination of FFT,
> windowing, resampling, and assorted magic coefficients.

Most instruments generate harmonics, or more often partials (nearly, but not exactly, integer multiples of the fundamental).

For guitars, the stiffness of the strings shifts them from being exact multiples, but maybe not by much. For wind instruments, it is the end effect due to the cross-sectional area of the tube or the size of the holes that shifts them.

Seems like it might be doable with an FFT, but it would have to be done carefully to keep the harmonic content.

-- glen
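The stretched-partial point can be made concrete with the standard stiff-string formula f_n = n * f0 * sqrt(1 + B * n^2). The inharmonicity coefficient B below is illustrative, not a measured guitar value:

```python
import math

# Stiff-string partials: f_n = n * f0 * sqrt(1 + B * n**2), the
# standard inharmonicity formula.  Each partial lands slightly sharp
# of the exact integer multiple, and the error grows with n.
def partial_freq(n, f0, B):
    return n * f0 * math.sqrt(1 + B * n * n)

f0, B = 110.0, 1e-4  # low A string; B is a made-up small value
for n in range(1, 6):
    print(f"partial {n}: exact {n * f0:7.2f} Hz, "
          f"stretched {partial_freq(n, f0, B):7.2f} Hz")
```

Any pitch-shifting scheme that snaps partials back onto exact integer multiples would subtly change the instrument's character, which is one reason "just FFT it" isn't the whole story.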
Reply by ●January 22, 2013
On 1/22/13 12:56 AM, glen herrmannsfeldt wrote:

(snip)

> Most instruments generate harmonics, or more often partials
> (nearly, but not exactly, integer multiples of the fundamental).
>
> For guitars, the stiffness of the strings would shift them
> from being exact multiples, but maybe not so much.
>
> For wind instruments it is the end effect due to the cross section
> area of the tube or the size of the holes that shifts them.
>
> Seems like it might be doable with FFT, but would have to be done
> carefully to keep the harmonic content.

i have an idea how sinusoidal modeling works. i've done variants of it. just FFTing something is far from enough. a single sinusoid will light up a lot of FFT bins. choosing a good window is necessary to be able to identify a smear of bins that would correspond to a single sinusoidal component. sidelobes are hard to deal with: is it a sidelobe or is it a main lobe of a different sinusoid?

i do not know how melodyne does it. separating all of the sinusoids from the transients and the noise is a sufficiently hard problem in itself (there are some good papers on it). then tracking and connecting between frames the sinusoids with varying frequency is hard.

then grouping the sinusoids into a single musical note is another really hard problem to do correctly, particularly with multiple notes happening simultaneously. when harmonics of two different notes overlap (like the 2nd harmonic of G falling on top of the 3rd harmonic of the C immediately below it) then it gets even worse. and when overtones are not harmonic at all (like with bells). what to do when two notes are played simultaneously and share lots of harmonics, i just don't know on what basis a shared harmonic gets divided up.

it's pretty hard.

--
r b-j  rbj@audioimagination.com

"Imagination is more important than knowledge."
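The first step described above (window a frame, FFT it, pick out candidate sinusoids) can be sketched in a few lines. This is a toy illustration with known test frequencies, not Melodyne's method; the Hann window's ~-31 dB sidelobes are why the 5% magnitude floor below works here:

```python
import numpy as np

# Toy peak picking: window one frame, FFT it, and take local maxima as
# sinusoid candidates.  The Hann window trades a wider main lobe for
# ~-31 dB sidelobes, so the 5%-of-peak floor sits safely above them.
fs, N = 44100, 4096
t = np.arange(N) / fs
frame = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)

spec = np.abs(np.fft.rfft(frame * np.hanning(N)))

floor = spec.max() * 0.05
peaks = [k for k in range(1, len(spec) - 1)
         if spec[k] > spec[k - 1] and spec[k] > spec[k + 1]
         and spec[k] > floor]
print([round(k * fs / N) for k in peaks])  # two candidates near 440 and 660 Hz
```

Even in this clean two-sinusoid case the answers are only accurate to a bin (~10.8 Hz here); with real audio, the hard problems listed above (sidelobes vs. real components, transients, tracking across frames, grouping into notes) all sit on top of this.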
Reply by ●January 22, 2013
robert bristow-johnson <rbj@audioimagination.com> wrote:

(snip)

> i have an idea how sinusoidal modeling works. i've done variants of it.
> just FFTing something is far from enough. a single sinusoid will
> light up a lot of FFT bins. choosing a good window is necessary to be
> able to identify a smear of bins that would correspond to a single
> sinusoidal component. sidelobes are hard to deal with: is it a sidelobe
> or is it a main lobe of a different sinusoid?

I have wondered for some time now about doing an FFT on a whole CD track. It is O(N log N), so it shouldn't be all that hard to do. It could even be done in-core on most computers now, though it shouldn't be all that hard to do it like an external sort, keeping only some data in core. That removes the need for windows.

> i do not know how melodyne does it. separating all of the sinusoids
> from the transients and the noise is a sufficiently hard problem in
> itself (there are some good papers on it). then tracking and connecting
> between frames the sinusoids with varying frequency is hard.

Also, if you FFT the whole thing at once, there are no frames to deal with.

> then grouping the sinusoids into a single musical note is another
> really hard problem to do correctly, particularly with multiple
> notes happening simultaneously. when harmonics of two different
> notes overlap (like the 2nd harmonic of G falling on top of the
> 3rd harmonic of the C immediately below it) then it gets even worse.

If you FFT the whole thing, then the bins will be especially narrow. Maybe narrow enough to separate what are not perfectly harmonic. But then I have never tried, so I don't know what it would actually look like.

> and when overtones are not harmonic at all (like with bells).
> what to do when two notes are played simultaneously and share
> lots of harmonics, i just don't know on what basis a shared
> harmonic gets divided up.
>
> it's pretty hard.

Much easier to write about than actually do. Also, it depends on how good the result needs to be.

-- glen
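The "especially narrow bins" claim is easy to check with assumed numbers (a 5-minute mono track at 44.1 kHz, FFT'd in one go):

```python
# Bin spacing of a whole-track FFT: a 5-minute mono track at 44.1 kHz.
fs = 44100
n = fs * 5 * 60                  # 13,230,000 samples
bin_hz = fs / n                  # bin spacing = 1 / duration
print(bin_hz)                    # 1/300 Hz -- about 0.0033 Hz per bin

# For comparison, one semitone around A440 spans about 26 Hz,
# i.e. thousands of these bins.
semitone_width = 440 * (2 ** (1 / 12) - 1)
print(semitone_width / bin_hz)   # ~7850 bins per semitone
```

So the bins are indeed extremely narrow, but the time-frequency trade-off cuts the other way: a single whole-track FFT smears every note over the full duration, so it can't say *when* a note happened, which is what the frame-by-frame approaches buy back.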
Reply by ●January 22, 2013
glen herrmannsfeldt wrote:

(snip)

> I have wondered for some time now about doing an FFT on a whole CD
> track. It is O(N logN) so it shouldn't be all that hard to do.
> Could even be done in-core on most computers now, though it shouldn't
> be all that hard to do it like an external sort, keeping only some
> data in core. That removes the need for windows.

A ... five minute CD track isn't that large. You can keep all the data in memory these days: 211,680,000 bytes for a mono FFT using 8-byte double floats (13,230,000 samples x 16 bytes per complex bin).

(snip)

> If you FFT the whole thing, then the bins will be especially narrow.
> Maybe narrow enough to separate what are not perfectly harmonic.
> But then I have never tried, so I don't know what it would actually
> look like.

It's a bit too zoomed in.

(snip)

--
Les Cargill
Reply by ●January 24, 2013
On Tuesday, January 22, 2013 7:06:59 AM UTC+13, Rob Gaddi wrote:

(snip)

> The effect on the song is radical; the emotional impact entirely
> changed. But what I don't understand is how it was done. Anyone have a
> handle on the math/theory behind something like this?

a phase vocoder.
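For the curious, a bare-bones phase vocoder looks roughly like this. It is an assumed, minimal sketch (no transient handling, no phase locking, overlap-add amplitude ripple ignored), and not necessarily what was used on the song: time-stretch with mismatched analysis/synthesis hops, then resample to trade the length change for a pitch change.

```python
import numpy as np

def pv_stretch(x, ratio, n_fft=2048, hop=512):
    """Time-stretch x by `ratio` (pitch unchanged) via a phase vocoder."""
    win = np.hanning(n_fft)
    starts = np.arange(0, len(x) - n_fft, hop)
    spec = np.array([np.fft.rfft(win * x[s:s + n_fft]) for s in starts])
    mag, phase = np.abs(spec), np.angle(spec)
    # expected phase advance per analysis hop at each bin center
    omega = 2 * np.pi * np.arange(n_fft // 2 + 1) * hop / n_fft
    out_hop = int(round(hop * ratio))
    y = np.zeros(out_hop * len(starts) + n_fft)
    acc = phase[0].copy()
    pos = 0
    for k in range(len(starts)):
        if k > 0:
            # heterodyned phase increment, wrapped to +-pi, recovers each
            # bin's true frequency; advance phase by the synthesis hop
            dphi = phase[k] - phase[k - 1] - omega
            dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))
            acc += (omega + dphi) * (out_hop / hop)
        y[pos:pos + n_fft] += win * np.fft.irfft(mag[k] * np.exp(1j * acc))
        pos += out_hop
    return y

def pitch_shift(x, semitones):
    """Stretch, then resample back to ~original length: pitch moves."""
    r = 2.0 ** (semitones / 12.0)
    stretched = pv_stretch(x, r)
    t = np.arange(int(len(stretched) / r)) * r   # fractional read positions
    i = np.minimum(t.astype(int), len(stretched) - 2)
    frac = t - i
    return (1 - frac) * stretched[i] + frac * stretched[i + 1]

fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
octave_up = pitch_shift(tone, 12)   # 440 Hz in, ~880 Hz out, ~same length
```

This shifts the whole mix uniformly, though. Retuning individual notes inside a chord, as in the video, needs the note-separation machinery discussed earlier in the thread on top of something like this.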
Reply by ●January 24, 2013
On Tue, 22 Jan 2013 01:44:46 GMT, eric.jacobsen@ieee.org (Eric Jacobsen) wrote:

(snip)

>Like one of the comments: It's cool but I hate it.
>
>I think it helps reveal the genius of doing it in a minor key in the
>first place.
>
>Eric Jacobsen

Hi Eric,

I agree with you. I don't understand all the "key" lingo used here (the only key I know is the one I use to start my car), but there's definitely 'something' missing in the new version of that song. The original version had this sense of intriguing desperation (like an unsung plea for help) that's missing in the new version.

Ya' know what I like about that new version? ......Nothing.

[-Rick-]






