Reply by January 27, 2020
On Thursday, April 11, 2019 at 11:58:41 AM UTC-7, Steve Pope wrote:
> The Event Horizon Telescope team of course deserves a huge amount
> of credit for their results.
>
> But it's interesting how they got there.
>
> Here's an article with some non-technical discussion:
>
> https://fivethirtyeight.com/features/forget-the-black-hole-picture-check-out-the-sweet-technology-that-made-it-possible/
>
> Their data is too noisy to actually create a black hole image.  So
> they augment the data with (wait for it..) models of what a black
> hole should look like, and come up with the most likely black hole
> that fits the data.
>
> I've been noticing that astrophysicists have been doing this
> kind of thing for a while.
Without actually answering your question, it seems that more than
astrophysicists do it.

Why do we use the DFT and DCT in DSP?  Because our signals tend to be
sinusoidal.  Or are the signals sinusoidal so we can use the DFT and
DCT?

There is a post asking about coding theory and convolutional codes or
other such coding systems.  The ones used are the ones that can be
computed fast enough.  Or is it that people design computational
systems to do the appropriate transform?

As for sinusoids, they were convenient in the analog days, as you can
generate and filter them with RLC circuits.  But now, in the digital
days, does that argument still hold?  If radio were invented today,
would modulated-sinusoid transmitters be used?

As for the cosmic microwave background, there is a very accurate fit
to the expected distribution.
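The DFT point can be made concrete in a few lines: a pure tone that
lands exactly on a DFT bin concentrates essentially all of its energy
in that single bin.  A toy NumPy sketch (numbers chosen only for
convenience):

import numpy as np

fs = 1000.0                              # sample rate, Hz (made up)
n = np.arange(1024)
x = np.sin(2 * np.pi * 125.0 * n / fs)   # 125 Hz tone: 125/1000*1024 = bin 128 exactly

X = np.fft.rfft(x)
mag = np.abs(X)
k = int(np.argmax(mag))
print(k)                                 # 128
print(mag[k] / mag.sum())                # ~1.0: one bin carries nearly all the energy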
Reply by Steve Pope May 16, 2019
robert bristow-johnson  <rbj@audioimagination.com> wrote:

>On 4/11/19 11:58 AM, Steve Pope wrote:
>> The Event Horizon Telescope team of course deserves a huge amount
>> of credit for their results.
>>
>> But it's interesting how they got there.
>>
>> Their data is too noisy to actually create a black hole image.  So
>> they augment the data with (wait for it..) models of what a black
>> hole should look like, and come up with the most likely black hole
>> that fits the data.
>
>ain't this sorta like Matched Filtering or MAP detection?
>
>like if that distant galaxy was broadcasting a symbol and we get to
>choose what symbol (and what parameters for that symbol) best fits the
>received signal.
Well, kinda sorta.  The signal from the Event Horizon Telescope is
nowhere near good enough to create an image without leaning heavily on
"priors".  It's much more than a matched filter.  It's more like, "if
a black hole looks like we think it looks like, then here is the black
hole that we would think is there".

And a matched filter is based on a known transmitted signal, not a
theoretical process (although as theories go, General Relativity is a
good one to bank upon).

I remember from when I took medical imaging in grad school how
important it was to perform the appropriate statistical test, and then
to communicate that to the radiologist.  The difference was, "is there
a tumor" vs. "if there is a tumor, what does it look like".  As you
can imagine, medical marketing tended to blur such distinctions.

Black hole imaging may also have a marketing component.  Still, it is
not as bad as the alleged fluctuations in the microwave background,
which clearly don't jibe with other data and rely on weird signal
processing.

"Astronomers have no limits."

Steve
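P.S.  To make the distinction concrete, here is a toy sketch (all
numbers made up by me; nothing here is from the EHT pipeline).  A
matched filter correlates against a waveform that is known exactly; a
MAP estimate blends a measurement with a statistical prior, so the
answer gets pulled toward what the prior expects:

import numpy as np

rng = np.random.default_rng(0)

# 1) Matched filter: the transmitted pulse is known *exactly*.
s = np.array([1.0, -1.0, 1.0, 1.0, -1.0])       # known pulse (made up)
r = 0.5 * s + rng.normal(0.0, 1.0, s.size)      # attenuated pulse in noise
print("correlation statistic:", r @ s)          # threshold this to detect

# 2) MAP estimate: the quantity is known only *statistically*.
# Scalar toy: observe y = x + noise, x ~ N(mu0, sig0^2), noise ~ N(0, sigN^2).
mu0, sig0, sigN = 1.0, 0.5, 1.0
y = 0.2                                         # a single noisy observation
x_map = (y / sigN**2 + mu0 / sig0**2) / (1.0 / sigN**2 + 1.0 / sig0**2)
print("MAP estimate:", x_map)                   # 0.84: pulled from 0.2 toward the prior mean 1.0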
Reply by robert bristow-johnson May 15, 2019
On 4/11/19 11:58 AM, Steve Pope wrote:
> The Event Horizon Telescope team of course deserves a huge amount
> of credit for their results.
>
> But it's interesting how they got there.
>
> Their data is too noisy to actually create a black hole image.  So
> they augment the data with (wait for it..) models of what a black
> hole should look like, and come up with the most likely black hole
> that fits the data.
ain't this sorta like Matched Filtering or MAP detection?

like if that distant galaxy was broadcasting a symbol and we get to
choose what symbol (and what parameters for that symbol) best fits the
received signal.

--

r b-j                         rbj@audioimagination.com

"Imagination is more important than knowledge."
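P.S.  "choose what symbol best fits" in code form: a toy MAP symbol
detector over a hypothetical four-waveform alphabet with unequal
priors, assuming additive white gaussian noise (everything here is
made up for illustration):

import numpy as np

rng = np.random.default_rng(1)

# hypothetical 4-symbol alphabet: each symbol is a short waveform
symbols = {
    "s0": np.array([ 1.0,  1.0,  1.0,  1.0]),
    "s1": np.array([ 1.0, -1.0,  1.0, -1.0]),
    "s2": np.array([ 1.0,  1.0, -1.0, -1.0]),
    "s3": np.array([ 1.0, -1.0, -1.0,  1.0]),
}
prior = {"s0": 0.4, "s1": 0.3, "s2": 0.2, "s3": 0.1}   # unequal priors, made up

sigma = 0.8
r = symbols["s2"] + rng.normal(0.0, sigma, 4)          # noisy received waveform

# MAP rule for white gaussian noise: maximize
#   log P(symbol) - ||r - symbol||^2 / (2 sigma^2)
score = {name: np.log(p) - np.sum((r - symbols[name]) ** 2) / (2 * sigma**2)
         for name, p in prior.items()}
print(max(score, key=score.get))                       # most likely "s2"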
Reply by Steve Pope April 13, 2019
Steve Pope <spope384@gmail.com> wrote:

>Phil Martel <pomartel@comcast.net> wrote:
>
>>The Wikipedia article indicates that the patch prior algorithm was not used
>>"Bouman developed an algorithm known as Continuous High-resolution Image
>>Reconstruction using Patch priors, or CHIRP.[19][16] This algorithm was
>>ultimately not used to create the image of the supermassive black hole
>>inside the core of the galaxy Messier 87.[20] An algorithm that was used
>>was the CLEAN algorithm[21] which was introduced by Jan Högbom
>>[Wikidata].[22]"
>>The CLEAN algorithm is described (a bit) here:
>>https://en.wikipedia.org/wiki/CLEAN_(algorithm)
>>It does not appear to involve a model of the expected black hole physics
>
>Thanks.  This must be a really recent edit.
And in an even more recent edit, the page now says both CHIRP and
CLEAN were used to produce the image.  The quotes in the article from
538 seem to clearly indicate black hole models were used.

(Don't expect an accurate Wikipedia page until the trolling settles
down.)

Steve
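P.S.  Since I asked earlier what a "patch prior" is: as far as I can
tell, the idea is to score candidate images by how plausible their
small patches are under a distribution learned from training images,
and to trade that score off against fit to the measured data.  A toy
sketch, with a single Gaussian standing in for the much richer mixture
models that CHIRP-class methods actually use (all numbers made up):

import numpy as np

rng = np.random.default_rng(2)

def patches(img, p=3):
    """All overlapping p-by-p patches, flattened to rows."""
    h, w = img.shape
    return np.array([img[i:i+p, j:j+p].ravel()
                     for i in range(h - p + 1) for j in range(w - p + 1)])

# "Train" the prior on patches from a smooth reference image.
train = np.outer(np.hanning(32), np.hanning(32))
P = patches(train)
mu = P.mean(axis=0)
icov = np.linalg.inv(np.cov(P, rowvar=False) + 1e-6 * np.eye(P.shape[1]))

def prior_penalty(img):
    """Sum of Mahalanobis distances of the image's patches from the
    learned patch distribution; lower = more plausible under the prior."""
    D = patches(img) - mu
    return float(np.einsum('ij,jk,ik->', D, icov, D))

noisy = train + rng.normal(0.0, 0.2, train.shape)
print(prior_penalty(train) < prior_penalty(noisy))   # True: prior prefers "clean"

# In a CHIRP-style reconstruction, a penalty like this would be
# minimized jointly with a data-fit term, ||A x - y||^2, that ties the
# image x to the interferometric measurements y.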
Reply by Steve Pope April 12, 2019
Phil Martel  <pomartel@comcast.net> wrote:

>The Wikipedia article indicates that the patch prior algorithm was not used
>"Bouman developed an algorithm known as Continuous High-resolution Image
>Reconstruction using Patch priors, or CHIRP.[19][16] This algorithm was
>ultimately not used to create the image of the supermassive black hole
>inside the core of the galaxy Messier 87.[20] An algorithm that was used
>was the CLEAN algorithm[21] which was introduced by Jan Högbom
>[Wikidata].[22]"
>The CLEAN algorithm is described (a bit) here:
>https://en.wikipedia.org/wiki/CLEAN_(algorithm)
>It does not appear to involve a model of the expected black hole physics
Thanks.  This must be a really recent edit.

Another article today said that Bouman's algorithm was used to combine
the four images from the four independent teams.

I suspect more edits are in the future for this Wiki article before it
settles down.

Steve
Reply by Phil Martel April 12, 2019
On 4/11/2019 14:58, Steve Pope wrote:
> The Event Horizon Telescope team of course deserves a huge amount
> of credit for their results.
>
> But it's interesting how they got there.
>
> Here's an article with some non-technical discussion:
>
> https://fivethirtyeight.com/features/forget-the-black-hole-picture-check-out-the-sweet-technology-that-made-it-possible/
>
> Their data is too noisy to actually create a black hole image.  So
> they augment the data with (wait for it..) models of what a black
> hole should look like, and come up with the most likely black hole
> that fits the data.
>
> I've been noticing that astrophysicists have been doing this
> kind of thing for a while.  First it was the cosmic microwave
> background, in which they combed over the data looking for things
> predicted by their theories (and even used Mexican Hat filters, which
> are well known to produce artifacts).  Gravitational wave detection,
> the same deal.
>
> These seem like useful approaches when you actually do know what
> you're looking for, say an incoming missile.  But when you're trying
> to confirm basic physics or astrophysics, it hardly seems like
> a neutral way to investigate theories experimentally, although it
> will be very useful in answering certain questions.
>
> According to the Wikipedia page for Caltech professor Katie Bouman, the
> algorithm uses a technique (unfamiliar to me) called patch priors.
>
> Does anybody have any insight as to what a patch prior algorithm is?
>
> https://en.wikipedia.org/wiki/Katie_Bouman
>
> Steve
The Wikipedia article indicates that the patch prior algorithm was not
used:

"Bouman developed an algorithm known as Continuous High-resolution
Image Reconstruction using Patch priors, or CHIRP.[19][16] This
algorithm was ultimately not used to create the image of the
supermassive black hole inside the core of the galaxy Messier 87.[20]
An algorithm that was used was the CLEAN algorithm[21] which was
introduced by Jan Högbom [Wikidata].[22]"

The CLEAN algorithm is described (a bit) here:
https://en.wikipedia.org/wiki/CLEAN_(algorithm)

It does not appear to involve a model of the expected black hole
physics.

--
Best wishes,
--Phil
pomartel At Comcast(ignore_this) dot net
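P.S.  Högbom's CLEAN is simple enough to sketch.  Iteratively: find
the brightest point of the residual ("dirty") image, record a small
point component there, and subtract a gain-scaled, shifted copy of the
point-spread function.  A toy 1-D version, my own simplification (real
interferometric CLEAN is 2-D, with a dirty beam derived from the
telescopes' uv coverage, and a final restoring "clean beam"):

import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, niter=500, threshold=1e-3):
    """Toy 1-D Hogbom CLEAN: peel off point components one at a time."""
    residual = np.asarray(dirty, float).copy()
    model = np.zeros_like(residual)
    half = len(psf) // 2
    for _ in range(niter):
        k = int(np.argmax(np.abs(residual)))   # brightest residual pixel
        peak = residual[k]
        if abs(peak) < threshold:
            break
        model[k] += gain * peak                # record a point component
        # subtract gain*peak copies of the PSF centered at k (edge-clipped)
        lo, hi = max(0, k - half), min(len(residual), k + half + 1)
        residual[lo:hi] -= gain * peak * psf[half - (k - lo): half + (hi - k)]
    return model, residual

# Two point sources blurred by a triangular "beam" (all numbers made up).
psf = np.array([0.25, 0.5, 1.0, 0.5, 0.25])
sky = np.zeros(64); sky[20] = 1.0; sky[45] = 0.6
dirty = np.convolve(sky, psf, mode="same")
model, residual = hogbom_clean(dirty, psf)
print(np.flatnonzero(model > 0.05))            # components recovered at pixels 20 and 45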
Reply by Steve Pope April 11, 2019
The Event Horizon Telescope team of course deserves a huge amount
of credit for their results.

But it's interesting how they got there.  

Here's an article with some non-technical discussion:

https://fivethirtyeight.com/features/forget-the-black-hole-picture-check-out-the-sweet-technology-that-made-it-possible/

Their data is too noisy to actually create a black hole image.  So
they augment the data with (wait for it..) models of what a black
hole should look like, and come up with the most likely black hole
that fits the data.

I've been noticing that astrophysicists have been doing this
kind of thing for a while.  First it was the cosmic microwave
background, in which they combed over the data looking for things
predicted by their theories (and even used Mexican Hat filters, which
are well known to produce artifacts).  Gravitational wave detection, 
the same deal.

These seem like useful approaches when you actually do know what
you're looking for, say an incoming missile.  But when you're trying
to confirm basic physics or astrophysics, it hardly seems like
a neutral way to investigate theories experimentally, although it
will be very useful in answering certain questions.

According to the Wikipedia page for Caltech professor Katie Bouman, the 
algorithm uses a technique (unfamiliar to me) called patch priors.  

Does anybody have any insight as to what a patch prior algorithm is?

https://en.wikipedia.org/wiki/Katie_Bouman

Steve