Reply by Eric Jacobsen October 25, 2004
On 21 Oct 2004 12:45:49 -0700, kewlkarun@yahoo.com (KK) wrote:

>Stan Pawlukiewicz <spam@spam.mitre.org> wrote in message news:<cl5svb$2qp$1@newslocal.mitre.org>...
>> KK wrote:
>> > Greetings,
>> > I always assumed that MAP and ML performances differ, with the former
>> > being superior. However, under certain conditions ML approaches MAP
>> > performance. I know of the trivial case in which the variables to be
>> > decoded are uncorrelated, where ML is MAP. I am interested in knowing
>> > about other cases when this is true.
>> > Could anyone throw some light on this issue?
>> > Regards,
>> > KK
>>
>> Look at chapter 2 of Van Trees, volume 1. Van Trees shows that the MAP
>> estimate with a "diffuse" prior reduces to the ML estimate. For a set of
>> discrete symbols, a diffuse prior implies that the symbols are equally
>> likely.
>
>Hi,
>Thank you, Stan, for the reference. Just for the archives: the term
>MAP is misleading here. I was thinking of it as maximizing the bit
>marginals, whereas in the context of Stan's explanation it means
>maximizing the symbol (i.e., codeword) probability, which is
>equivalent to ML when all the codewords are equally likely.
>Regards,
>KK
FWIW, some knowledgeable folks (e.g., Divsalar, Benedetto) have pointed out
that MAP is a misnomer, as you said, since it isn't maximizing anything. So
some people prefer the term A Posteriori Probability (APP) instead.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
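To make the APP reading concrete, here is a minimal Python sketch, assuming
an invented toy posterior over 3-bit sequences: the decoder's product is the
per-bit posterior (or its LLR) itself, with no argmax anywhere.

import math

# Invented posterior over 3-bit sequences, for illustration only;
# every sequence not listed has probability zero.
posterior = {
    (1, 0, 0): 0.3,
    (0, 1, 0): 0.3,
    (0, 0, 1): 0.3,
    (1, 1, 1): 0.1,
}

for i in range(3):
    p1 = sum(p for seq, p in posterior.items() if seq[i] == 1)
    llr = math.log((1.0 - p1) / p1)  # log of P(bit=0 | y) / P(bit=1 | y)
    # An APP decoder reports these soft values; it does not maximize.
    print(f"bit {i}: P(bit=1 | y) = {p1:.2f}, LLR = {llr:+.3f}")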
Reply by KK October 21, 2004
Stan Pawlukiewicz <spam@spam.mitre.org> wrote in message news:<cl5svb$2qp$1@newslocal.mitre.org>...
> KK wrote:
> > Greetings,
> > I always assumed that MAP and ML performances differ, with the former
> > being superior. However, under certain conditions ML approaches MAP
> > performance. I know of the trivial case in which the variables to be
> > decoded are uncorrelated, where ML is MAP. I am interested in knowing
> > about other cases when this is true.
> > Could anyone throw some light on this issue?
> > Regards,
> > KK
>
> Look at chapter 2 of Van Trees, volume 1. Van Trees shows that the MAP
> estimate with a "diffuse" prior reduces to the ML estimate. For a set of
> discrete symbols, a diffuse prior implies that the symbols are equally
> likely.
Hi,
Thank you, Stan, for the reference. Just for the archives: the term
MAP is misleading here. I was thinking of it as maximizing the bit
marginals, whereas in the context of Stan's explanation it means
maximizing the symbol (i.e., codeword) probability, which is
equivalent to ML when all the codewords are equally likely.
Regards,
KK
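To illustrate the distinction KK draws, here is a small Python sketch
assuming a hypothetical posterior over 3-bit sequences (the probabilities
are invented): maximizing the bit marginals and maximizing the sequence
probability can pick different answers.

# Invented posterior over 3-bit sequences, for illustration only;
# every sequence not listed has probability zero.
posterior = {
    (1, 0, 0): 0.3,
    (0, 1, 0): 0.3,
    (0, 0, 1): 0.3,
    (1, 1, 1): 0.1,
}

# Codeword-level decision: the single most probable sequence.
seq_map = max(posterior, key=posterior.get)

# Bit-level decision: maximize each bit's marginal independently.
bit_marginals = [sum(p for seq, p in posterior.items() if seq[i] == 1)
                 for i in range(3)]
bitwise = tuple(int(m > 0.5) for m in bit_marginals)

print(seq_map)        # (1, 0, 0): a most probable sequence
print(bit_marginals)  # [0.4, 0.4, 0.4]
print(bitwise)        # (0, 0, 0): zero posterior probability as a sequence!

Bitwise decisions minimize the bit error rate, but the resulting sequence
here, (0, 0, 0), has zero posterior probability as a whole, which is exactly
why "MAP" is ambiguous unless you say what is being maximized.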
Reply by Stan Pawlukiewicz October 20, 2004
KK wrote:
> Greetings,
> I always assumed that MAP and ML performances differ, with the former
> being superior. However, under certain conditions ML approaches MAP
> performance. I know of the trivial case in which the variables to be
> decoded are uncorrelated, where ML is MAP. I am interested in knowing
> about other cases when this is true.
> Could anyone throw some light on this issue?
> Regards,
> KK
Look at chapter 2 of Van Trees, volume 1. Van Trees shows that the MAP
estimate with a "diffuse" prior reduces to the ML estimate. For a set of
discrete symbols, a diffuse prior implies that the symbols are equally
likely.
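A minimal numerical sketch of that reduction, in Python, assuming a made-up
4-symbol alphabet observed in Gaussian noise (the symbol values, noise
level, and priors here are invented for illustration):

import numpy as np

# Toy example: four equally spaced symbols observed in Gaussian noise.
symbols = np.array([-3.0, -1.0, 1.0, 3.0])
sigma = 1.0                     # noise standard deviation
y = 0.2                         # one noisy observation

# Likelihoods p(y | x) for each candidate symbol (common factors dropped).
likelihood = np.exp(-(y - symbols) ** 2 / (2 * sigma ** 2))

# MAP decision: argmax over x of p(y | x) * p(x).
prior_uniform = np.full(4, 0.25)               # diffuse prior
prior_skewed = np.array([0.1, 0.7, 0.1, 0.1])  # strongly favors -1.0

ml = symbols[np.argmax(likelihood)]
map_uniform = symbols[np.argmax(likelihood * prior_uniform)]
map_skewed = symbols[np.argmax(likelihood * prior_skewed)]

print(ml, map_uniform)  # both 1.0: a uniform prior can't move the argmax
print(map_skewed)       # -1.0: a strong prior pulls MAP away from ML

With the uniform (diffuse) prior, MAP and ML coincide decision by decision,
since scaling every likelihood by the same constant 1/4 cannot change the
argmax; only a non-uniform prior can pull the decision away from the ML
choice.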
Reply by KK October 19, 2004
Greetings,
I always assumed that MAP and ML performances differ, with the former
being superior. However, under certain conditions ML approaches MAP
performance. I know of the trivial case in which the variables to be
decoded are uncorrelated, where ML is MAP. I am interested in knowing about other
cases when this is true.
Could anyone throw some light on this issue?
Regards,
KK