01-13-2012 01:58 PM
Our system test data is saved as text.
Sample test run file (simplified):
"Test 1 of 100
Pass: reading 1 XX
Pass: reading 2 XX
.....
Pass: reading 120 XX
End of Test 1"
I want to read in multiple files and generate histograms for each of the readings. My problem is that there are 120 readings per test, and they have very little common structure from reading to reading.
Typical readings:
Pass: V/UHF Reference Downconverter Synth cal tone amplitude is 1.5 dB from nominal @ 35 MHz in narrow band
Pass: UHF Antenna Sample channel element 1 cal tone amplitude is -42.1 dBm (nominal -40 dBm)
The runs are generally in order, but I need to parse each line one at a time, pull the value(s) from it, and assign them to the proper 2D array. I'll end up with an array of 120 rows (readings) and as many columns as tests (hundreds). Once I have the 2D array, the histograms are easy.
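Since a LabVIEW diagram can't be shown inline here, below is a rough text-language sketch (Python) of the table-free approach: walk each run line by line, pull the first numeric value from every "Pass:" line, and stack the runs into a readings-by-tests 2D array. The sample run strings and the "first number on the line is the value" rule are assumptions for illustration; real readings (e.g. ones where the reading index itself appears as a number) would need a small table of per-reading patterns instead of one generic one.

```python
import re

# Hypothetical sample data: two test runs in the general format described above.
# Reading labels are letters here so the first number on each line is the value.
runs = [
    """Test 1 of 100
Pass: reading A cal tone amplitude is 1.5 dB from nominal @ 35 MHz
Pass: reading B cal tone amplitude is -42.1 dBm (nominal -40 dBm)
End of Test 1""",
    """Test 2 of 100
Pass: reading A cal tone amplitude is 1.7 dB from nominal @ 35 MHz
Pass: reading B cal tone amplitude is -41.8 dBm (nominal -40 dBm)
End of Test 2""",
]

# Matches a signed decimal number, e.g. 1.5 or -42.1
NUM = re.compile(r'-?\d+(?:\.\d+)?')

def parse_run(text):
    """Return the first numeric value on each 'Pass:' line, in reading order."""
    values = []
    for line in text.splitlines():
        if line.startswith('Pass:'):
            m = NUM.search(line)
            values.append(float(m.group()) if m else float('nan'))
    return values

# Each run becomes one column; transpose so readings[i][j] is
# reading i of test j -- 120 rows by N-tests columns in the real case.
columns = [parse_run(r) for r in runs]
readings = [list(row) for row in zip(*columns)]
print(readings)  # [[1.5, 1.7], [-42.1, -41.8]]
```

Each row of `readings` then feeds one histogram directly. The same structure maps onto LabVIEW: a line-reading loop, a Match Regular Expression node, and an auto-indexed 2D array output.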
I have two approaches
1. Parse each line through a series of VIs, each one specific to a single reading (120 VIs; that's ugly).
2. A large VI (attached) that parses a section of the test at a time; it only handles about 15 readings.
Neither of these scales well from the current situation of watching only a few problem readings to generating distributions of them all.
Can anyone point me in the right direction?
Norm
01-13-2012 02:21 PM
I want to read in multiple files and generate histograms for each of the readings. My problem is that there are 120 readings per test, and they have very little common structure from reading to reading.
Bullseye! There is no cheap solution to poor architecture. The test data should have been consistent by design. Lacking that, either approach you offered will work, and there is not a lot you can do about finding an elegant solution. It is a pig's ear and it will stay a pig's ear; no silk purse for you today.
01-13-2012 04:01 PM - edited 01-13-2012 04:02 PM
If it was a pig's ear, at least my dog would be happy. Thanks for the chuckle.
Norm