how to measure entropy of music?

Started by lucy December 2, 2004
I have vaguely heard about this method... and I am very interested in it... 
could anybody give me some pointers?

After learning how to measure entropy of music... I can begin to measure 
entropy of texts, etc.. that's going to be fun! 


is this some joke ?!


"lucy" <losemind@yahoo.com> wrote in message 
news:coo4it$2s2$1@news.Stanford.EDU...
>I have vaguely heard about this method... and I am very interested in it...
>could anybody give me some pointers?
>
> After learning how to measure entropy of music... I can begin to measure
> entropy of texts, etc.. that's going to be fun!
>
Funnily enough, no.

A quick google trawl finds:

http://pharos.cpsc.ucalgary.ca/Dienst/UI/2.0/Describe/ncstrl.ucalgary_cs/1990-390-14

Given more time, I may be able to find some more recent documents. But time is 
not my friend at the moment...

Richard Dobson




Someonekicked wrote:

> is this some joke ?!
>
> "lucy" <losemind@yahoo.com> wrote in message
> news:coo4it$2s2$1@news.Stanford.EDU...
>
>>I have vaguely heard about this method... and I am very interested in it...
>>could anybody give me some pointers?
>>
>>After learning how to measure entropy of music... I can begin to measure
>>entropy of texts, etc.. that's going to be fun!
>>
In article <coo4it$2s2$1@news.Stanford.EDU>,
 "lucy" <losemind@yahoo.com> wrote:

> I have vaguely heard about this method... and I am very interested in it...
> could anybody give me some pointers?
>
> After learning how to measure entropy of music... I can begin to measure
> entropy of texts, etc.. that's going to be fun!
Learn all you want and more at Brian Whitman's site:

<http://web.media.mit.edu/~bwhitman/>

Whitman is also the producer of Eigenradio (their motto: "Statistically
optimal music since 2003"). Go to their site and you'll find links to their
iTunes and WMP streams:

<http://eigenradio.media.mit.edu/>

--
Remove _me_ for e-mail address
On 2004-12-03 14:19:03 +0100, Ken Prager <prager_me_@ieee.org> said:
> <http://eigenradio.media.mit.edu/>
ROTFL!!! This is *really* cool.

--
Stephan M. Bernsee
http://www.dspdimension.com
>"lucy" <losemind@yahoo.com> wrote in message
>news:coo4it$2s2$1@news.Stanford.EDU...
>> After learning how to measure entropy of music... I can begin to measure
>> entropy of texts, etc.. that's going to be fun!
>
On Thu, 2 Dec 2004 17:53:45 -0500, "Someonekicked" <someonekicked@comcast.net> wrote:
>is this some joke ?!
The model of entropy that can be readily recognized: in music, how often can
an informed listener infer the next note in a phrase, i.e. how many bits are
needed to specify the next note. In sound, how many bits are needed to
specify the next value of the signal. For written language, the analog is
how many bits are needed to confirm an informed guess as to the next letter
in a text. Perhaps no more than three. For some TV shows these days, it's
only two.

For music, the question gets really challenging as one considers the entropy
of scores--the parts played by accompanying instruments and the choices of
those instruments require a lot of encoding. In this case, the number of
bits needed to feed a high-level orchestral synthesizer might establish a
lower bound. Using a predictor such as a Hidden Markov Model with the
Viterbi algorithm, or one of its descendants, might go a long way in
evaluating the predictability (a.k.a. entropy) of sounds. (This is already
the case in speech recognition.)

A really interesting question would be to rank famous composers by the
average entropy of their music--e.g. Bach, Mozart, the Beatles at the top;
Andrew Lloyd Webber, Elton John, Cole Porter next--well, you get the idea.

The basics are at:
http://www.music-cog.ohio-state.edu/Music829D/Notes/Infotheory.html

Thanks to the stimulation of your question, with a search based on the key
words hmm, entropy, and music, I have learned the question is of
significance today for several reasons:
1) Recognizing when TV commercials are playing.
2) Separating out background music behind speech in speech recognition.
3) Locating specific music contained in a large database of sound.

Try:
http://crl.research.compaq.com/publications/techreports/techreports.html
and search the page for Logan or music.
Also:

http://www.idiap.ch/publications/ajmera-rr-01-26.bib.abs.html
Speech/Music Discrimination using Entropy and Dynamism Features in a HMM
Classification Framework

http://www.speech.kth.se/qpsr/tmh/2004/04-46-041-059.pdf
Speech/Music Discrimination Using Discrete Hidden Markov Models

http://crl.research.compaq.com/publications/techreports/reports/2000-1.pdf
MUSIC SUMMARY USING KEY PHRASES

M. Brand, "Structure Discovery in Conditional Probability Models via an
Entropic Prior and Parameter Extinction," Neural Computation, July 1999

John Bailey
http://home.rochester.rr.com/jbxroads/mailto.html
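The "how many bits to specify the next note" idea above can be sketched
directly. Below is a minimal illustration (not from the thread, and much
simpler than the HMM approaches mentioned): it estimates per-note entropy of
a note sequence under a first-order Markov model, i.e. how surprising each
note is given the note before it. The note names are just arbitrary symbols
chosen for the example.

```python
import math
from collections import Counter, defaultdict

def markov_entropy_bits(notes):
    """Per-note entropy (bits) under a first-order Markov model:
    H = -sum_c p(c) * sum_n p(n|c) * log2 p(n|c),
    where c ranges over contexts (previous notes)."""
    transitions = defaultdict(Counter)
    for prev, nxt in zip(notes, notes[1:]):
        transitions[prev][nxt] += 1
    total = len(notes) - 1  # number of observed transitions
    h = 0.0
    for counter in transitions.values():
        context_total = sum(counter.values())
        p_context = context_total / total
        for count in counter.values():
            p = count / context_total
            h -= p_context * p * math.log2(p)
    return h

# A strictly repeating phrase is fully predictable: every context
# determines its successor, so the entropy is exactly 0 bits per note.
print(markov_entropy_bits(list("CDECDECDE")))  # 0.0

# Here C is followed by D or E with equal frequency, so some bits
# are needed to pin down the continuation.
print(markov_entropy_bits(list("CDCECDCE")))   # ~0.571
```

A real estimate would of course need far longer sequences and higher-order
context (or an HMM, as the post suggests) to say anything about actual music.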
John Bailey wrote:

> The basics are at:
> http://www.music-cog.ohio-state.edu/Music829D/Notes/Infotheory.html
>
> Thanks to the stimulation of your question, with a search based on the
> key words hmm, entropy, and music, I have learned the question is of
> significance today for several reasons:
> 1) Recognizing when TV commercials are playing.
> 2) Separating out background music behind speech in speech recognition.
I wonder if they could figure a way to identify background music and
separate it from the sound reaching my ears :-)

Regards,
Steve
John Bailey wrote:

> A really interesting question would be to rank famous composers by the
> average entropy of their music--eg Bach, Mozart, the Beatles at the
> top, Andrew Lloyd Webber, Elton John, Cole Porter next--well, you get
> the idea.
As a matter of interest, do you consider high entropy good or bad?
Presumably a completely random series of notes would have very high entropy,
while absolute silence has very low entropy. I wouldn't have thought either
was very enjoyable.

--
Timothy Murphy
e-mail (<80k only): tim /at/ birdsnest.maths.tcd.ie
tel: +353-86-2336090, +353-1-2842366
s-mail: School of Mathematics, Trinity College, Dublin 2, Ireland
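The two extremes in the question above are easy to check numerically. This
small sketch (mine, not from the thread) computes zeroth-order Shannon
entropy: notes drawn uniformly from a seven-note scale land near the maximum
of log2(7) ≈ 2.807 bits per symbol, while unbroken silence scores exactly
zero. The seven-letter "scale" is just an illustrative alphabet.

```python
import math
import random
from collections import Counter

def shannon_entropy_bits(seq):
    """Zeroth-order Shannon entropy in bits per symbol,
    from empirical symbol frequencies."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(0)
uniform = [random.choice("CDEFGAB") for _ in range(10000)]
silence = ["rest"] * 10000

print(shannon_entropy_bits(uniform))  # close to log2(7) ~ 2.807
print(shannon_entropy_bits(silence))  # 0.0
```

This only supports the point being made: neither extreme sounds like music,
so whatever makes music enjoyable presumably lives somewhere in between.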
On Sat, 04 Dec 2004 01:07:34 +0800, Steve Underwood <steveu@dis.org>
wrote:

>John Bailey wrote:
>
>> The basics are at:
>> http://www.music-cog.ohio-state.edu/Music829D/Notes/Infotheory.html
>>
>> Thanks to the stimulation of your question, with a search based on the
>> key words hmm, entropy, and music, I have learned the question is of
>> significance today for several reasons:
>> 1) Recognizing when TV commercials are playing.
>> 2) Separating out background music behind speech in speech recognition.
>
>I wonder if they could figure a way to identify background music and
>separate it from the sound reaching my ears :-)
Good point! If the predictor works then you can make a pretty good
canceller! Or better yet, a transmogrifier: if it's punk metal coming out of
the speaker and you want classical, you could put some filter and adaptation
circuits in that take the punk metal energy and convert it based on the
predictor rules for classical. Punk metal out of the speaker, classical into
your ear.

Or you could buy an MP3 player and some headphones, either way. ;)

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
On Fri, 3 Dec 2004 15:36:04 +0100, Stephan M. Bernsee
<spam@dspdimension.com> wrote:

>On 2004-12-03 14:19:03 +0100, Ken Prager <prager_me_@ieee.org> said:
>> <http://eigenradio.media.mit.edu/>
>
>ROTFL!!! This is *really* cool.
Very interesting. I doubt any of it is going to make it onto my MP3 player,
but it's an interesting concept.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org