
Questions regarding Octave

Started by Eric Jacobsen March 23, 2008
On Mar 24, 3:18 pm, dbd <d...@ieee.org> wrote:
> On Mar 24, 9:56 am, robert bristow-johnson <r...@audioimagination.com> wrote:
> > On Mar 23, 11:11 pm, dbd <d...@ieee.org> wrote:
> > > On Mar 23, 7:49 pm, robert bristow-johnson <r...@audioimagination.com> wrote:
> > > > ...
> > > > yer not the only one, after all it seems intended to be a gnu freeware
> > > > counterpart to MATLAB.  i contacted Eaton (and got on the Octave
> > > > developer's list for a little while) to propose they do to Octave what
> > > > i have, for more than a decade, been trying to get MATLAB to do:
> > > > generalize handling array indices so that an array can have non-
> > > > positive integer indices; so that the indices of the upper-left corner
> > > > need not always be (1,1).
> > > >
> > > > fell on deaf ears.
> > >
> > > So, doesn't that make Octave an accurate counterpart to MATLAB?
> > >
> > > But, in the spirit of GNU freeware, what did they say when you offered
> > > to implement the change for them?
> >
> > i can't do it all by myself.  i downloaded the whole code base, even
> > got it to build on my Linux machine, but i could not, for the life of
> > me, figure out specifically where they were doing their index
> > calculations.  specifically, for arrays of two dimensions or more, they
> > *must* be subtracting 1 from the index and multiplying it by the row
> > length (or, in MATLAB/Octave i think it's the column length) and
> > adding that to the "less significant" index, where either they subtract
> > one and add it to the array base address, or they don't bother to
> > subtract 1 and the base address is fudged by 1.  also this should be
> > intimately related to the index bounds checking that either MATLAB or
> > Octave do.  i can't find that, and no one who was a regular developer
> > would help me because (like the Math Works) they didn't think that
> > this modification was a good thing to do, which is still just
> > inexplicable to me.
> >
> > the Octave source code is spaghetti.  it's shit.  i cannot believe
> > that people take any pride in code written like that.  (well, i dunno,
> > i see a lot of shit code where i work.)
>
> Your experience with the code seems to be good evidence that the cost
> benefit ratio is too high to justify the effort of the conversion.
it's not that i wouldn't find the benefit worth the effort, it's just that i cannot take on a project where i have to read and decode dozens of files of C++.

it should be constructed so that wherever the code needs to know the limits (or, in present Octave, the lengths) of each dimension of an array, that code lives in only a very few places, and all of it references those parameters (N x M for a 2-dim array) through a single class's member functions (what do they call that in C++?  i can't remember).  then, whenever the indices of an array are checked at the top end, the lower limits of the indices can also be checked, and the net offset is calculated as a linear combination of each index minus its lower limit.

but i cannot be confident that i'll know every place where this address arithmetic needs to be done.  someone who actually wrote the code that does the limit checking and address arithmetic might know all of the salient places to look.  so the first thing i would want to do is add, to the class that holds the array length for each dimension, an index base or origin value that is initialized to 1 when an object of that class is created.  then, wherever we currently have to subtract 1 from an index (usually to access an array element, but for whatever reason), we would instead subtract the index origin for that dimension of that particular instantiation of the array.  it should work the same as it does presently (perhaps a microscopic amount slower).  if someone who knows the code better than me can point to the places where that needs to be done, then i think it would be worth my effort to try to enhance Octave so we could have 0-based or even negatively based arrays.

i wonder how long the fft() with the present definition would stay in use once someone writes a "dft()" that does the same thing but assumes (and returns) the correct indexing.  we could pass it an array with indices from -512 to +511, and the dft() would know that (because array parameters like the origin and length of each dimension are available) and know where t=0 is.  and the DC value would be returned in bin #0.

(to me, it's just astonishing that it's 2008 and we don't have an fft() in the most common signal-processing modeling software that returns the DC value in bin #0!  that's just stupid.  and, like a lot of other software, i think our tools today are ickier and harder to use than what we had in the 90s.  i guess i'm just a Mac wuss, but even the OS X Mac stuff is harder to deal with than what we were using in the olden days.)

r b-j
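p.s. to make the offset arithmetic concrete, here is a rough sketch in Octave itself (not the actual C++ internals; the dimensions and origin values below are made up purely for illustration):

    M = 4;  N = 3;                        % an M-by-N array, stored column-major
    i = 2;  j = 3;                        % a subscript pair
    offset_now = (j - 1)*M + (i - 1);     % what 1-based indexing effectively computes

    org_i = -1;  org_j = 0;               % hypothetical per-dimension index origins
    i2 = 0;  j2 = 1;                      % a subscript pair legal under those origins
    offset_new = (j2 - org_j)*M + (i2 - org_i);   % same formula, with "1" replaced by the origin

and while the proposed dft() doesn't exist, its effect on data conceptually indexed from -512 to +511 can be approximated today with ifftshift(), a standard Octave function (the test signal below is arbitrary):

    N = 1024;
    n = (-N/2 : N/2-1).';            % conceptual time indices, -512 ... +511
    x = exp(-abs(n)/64);             % some test signal centered about n = 0
    X = fft(ifftshift(x));           % ifftshift rotates the n = 0 sample to the front
    dc = X(1);                       % the DC value, i.e. conceptual "bin 0" (stored at index 1)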
robert bristow-johnson <rbj@audioimagination.com> writes:
> [...]
Robert, what mechanism would you propose for selecting the indexing scheme while simultaneously preserving proper operation for legacy code?

--
%  Randy Yates                  % "...the answer lies within your soul
%% Fuquay-Varina, NC            %  'cause no one knows which side
%%% 919-577-9882                %  the coin will fall."
%%%% <yates@ieee.org>           %  'Big Wheels', *Out of the Blue*, ELO
http://www.digitalsignallabs.com
Hi,

  For plot zooming, please see my earlier reply in this thread.

  Another question that has come up is how to save figures
  to a file for documentation purposes.

  The answer is to use the "print" command:

  print -dpng "filename.png"

  placed just below the plot command that produced the figure
  you want to save.  This way you can save the figure to a file
  and then continue with your documentation.
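
  As a minimal, self-contained example (the signal and the file
  name here are just placeholders):

  t = 0:0.001:1;                  % one second of time samples
  plot(t, sin(2*pi*10*t));        % plot a 10 Hz sine wave
  xlabel("time (s)");
  ylabel("amplitude");
  print -dpng "sine_plot.png"     % save the current figure as sine_plot.png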

Thanks and Regards
Bharat Pathak

Arithos Designs
www.Arithos.com

On Mar 24, 6:58 pm, Randy Yates <ya...@ieee.org> wrote:
> robert bristow-johnson <r...@audioimagination.com> writes:
> > [...]
>
> Robert, what mechanism would you propose for selecting the indexing
> scheme while simultaneously preserving proper operation for legacy code?
>
> --
> % Randy Yates %
And providing proper operation with legacy data?

Dale B. Dalrymple