09-23-2015 12:15 PM
09-23-2015 01:28 PM
How accurate does the median need to be? How wide is the range of input values?
If all you want is an approximation of the median, you could maintain a fixed-size histogram in memory. You can even achieve a resolution higher than the number of bins by always filling two adjacent bins with fractional counts, weighted by the fractional position of the input with respect to the bin spacing. All my suggestions operate on fixed-size arrays and are thus very efficient.
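Since the thread is about a LabVIEW diagram, here is only a rough Python sketch of the histogram idea described above. The function name, bin count, and the `lo`/`hi` range bounds are all assumptions for illustration; in practice the range would come from your known input limits.

```python
import numpy as np

def approx_median(data, nbins=256, lo=0.0, hi=1.0):
    """Approximate median from a fixed-size histogram.

    Each sample is split fractionally between its two adjacent bins,
    which gives sub-bin resolution as described in the post above.
    lo/hi are assumed bounds on the input values.
    """
    hist = np.zeros(nbins)
    width = (hi - lo) / (nbins - 1)
    for x in data:
        pos = (x - lo) / width              # fractional bin position
        i = int(np.clip(np.floor(pos), 0, nbins - 2))
        frac = pos - i
        hist[i] += 1.0 - frac               # split the count between
        hist[i + 1] += frac                 # the two neighboring bins
    # the median is where the cumulative count crosses half the total
    cum = np.cumsum(hist)
    j = int(np.searchsorted(cum, cum[-1] / 2.0))
    return lo + j * width
```

The memory cost is fixed at `nbins` regardless of how many samples arrive, and the error is bounded by roughly one bin width.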
09-23-2015 03:33 PM
The longest range (history) over which I will need to calculate a median is about 200 samples. I have 8 channels of data and 1000 scan points per channel, so I will need to calculate 8000 medians. My data update rate is 20 Hz.
With respect to the accuracy of the median, an approximation might work, but I'm not sure.
09-23-2015 03:41 PM
Since you seem to know all the final sizes, it would help to do everything in place. Prepending data to a 2D array (as you do in your first code image) is the least efficient approach, because the array must be constantly reallocated from scratch.
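The thread concerns a LabVIEW diagram, but the allocation argument can be sketched in Python/NumPy. The sizes below are taken from the thread (8 channels, 200-sample history); the function names are hypothetical.

```python
import numpy as np

# Sizes from the thread: 8 channels, 200-sample history
n_channels, history = 8, 200

# Inefficient pattern (analogous to Build Array in a loop): every
# update allocates a brand-new 2D array and copies all old data.
def update_prepend(buf, col):
    return np.concatenate([col[:, None], buf], axis=1)[:, :history]

# Efficient pattern: allocate once, then overwrite one column in
# place via a circular write index; nothing is ever reallocated.
def make_buffer():
    return np.zeros((n_channels, history))

def update_inplace(buf, col, write_idx):
    buf[:, write_idx] = col
    return (write_idx + 1) % history
```

In LabVIEW terms, the second pattern corresponds to initializing the array once and using Replace Array Subset (or the In Place Element structure) rather than Build Array.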
@tysonl wrote:
The longest range (history) over which I will need to calculate a median is about 200 samples. I have 8 channels of data and 1000 scan points per channel, so I will need to calculate 8000 medians. My data update rate is 20 Hz.
I don't understand your calculation. Do you need a "rolling median" of the last 200 values, updated for each new point, or just a median of consecutive 200-point sections?
Do you need to display the medians as the data is acquired, or could they all be calculated in a post-processing step once all the data is available?
09-24-2015 09:13 AM
I am doing a rolling median on the data as it arrives. As is typical, the median is calculated over whatever data is available until the 200-sample buffer is filled. Once filled, the newest sample is pushed into the buffer and the oldest sample is popped off. This is basically a rolling median filter.
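The behavior described above (median over a partial buffer until it fills, then push-newest/pop-oldest) can be sketched in Python; the class name and window size are illustrative, not from the original LabVIEW code.

```python
from collections import deque
import numpy as np

class RollingMedian:
    """Rolling median over the last `window` samples.

    Until the buffer fills, the median is taken over whatever
    samples have arrived; afterwards, appending a new sample
    automatically drops the oldest one (deque with maxlen).
    """
    def __init__(self, window=200):
        self.buf = deque(maxlen=window)

    def update(self, x):
        self.buf.append(x)               # pushes newest, pops oldest
        return float(np.median(self.buf))
```

Note that `np.median` effectively sorts the buffer on every call (O(w log w) per update). At 8000 medians per update and 20 Hz, that may be the bottleneck; an order-maintaining structure (e.g. a sorted list or two heaps) would reduce the per-sample cost.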