Covariance Matrix in the Levenberg-Marquardt Fit

Hi,
I'm using the Levenberg-Marquardt Fitting VI to fit a Gaussian curve. The VI is working fine. My problem now is how to find the error of the coefficients. I guess I can do that using the Covariance Matrix, but I'm not sure whether I can use the diagonal elements of the matrix directly as the errors, or whether I also have to take the weighted mean error (Residue) of the fitted model into account.

I also found a conversation in this forum where they suggest estimating the error by multiplying the diagonal elements by the MSE output and then taking the square root... but I cannot understand why.

Could someone help me to understand?!?

Many Thanks
Message 1 of 12
You can find out about the mathematics in Numerical Recipes in C:
http://www.fizyka.umk.pl/nrbook/bookcpdf.html
in Ch. 15.5 and 15.6

The covariance matrix gives you a variance (as the name says); the other conversion gives you a confidence interval.

It really depends what you need to know...

Have fun digging in the math.

Felix
Message 2 of 12
Hi Felix,

Thanks for the reply.

I had a look at the book. I knew that the diagonal terms of the covariance matrix are the variances of the coefficients, so if I take the square root of the variance I should get the standard deviation of each coefficient. But in LabVIEW the values are ridiculously low. I found in another conversation in this forum (called "covariance matrix and error estimates") that you have to multiply the diagonal elements by the MSE output (which I guess is the Residue, i.e. the weighted mean error of the fit) and then take the square root. Yesterday I tried that and the errors are much more reasonable... but I still cannot understand why.
Message 3 of 12
Do you have the Standard Deviation input wired? If not, LabVIEW calculates the covariance matrix assuming that the standard deviation of each of your data points is equal to one, so the diagonal terms of the matrix may not reflect the variance of the fitting coefficients. Keep in mind that the variance of a coefficient is the sum, over all data points, of each point's variance multiplied by the square of the effect that point has on the determination of the coefficient. So the suggestion to multiply the diagonal elements by the MSE is correct in the sense that, if you don't know the variance of your data points, the MSE is usually a good estimate of it. I suggest you wire the Standard Deviation input with an array of estimated standard deviations for your data points and see what you get.
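
In case a concrete version of that arithmetic helps, here is a rough Python/NumPy sketch (not LabVIEW code): the helper name and the numbers are made up, and `covariance` and `mse` simply stand in for the VI's Covariance Matrix and Residue outputs.

```python
import numpy as np

def coefficient_errors(covariance, mse):
    # The unweighted fit assumes each data point has a standard deviation of 1,
    # so scale the covariance diagonal by the noise-variance estimate (the MSE)
    # before taking the square root to get per-coefficient standard deviations.
    covariance = np.asarray(covariance, dtype=float)
    return np.sqrt(np.diag(covariance) * mse)

# Made-up 2x2 covariance and MSE, purely for illustration.
print(coefficient_errors([[0.5, 0.1], [0.1, 0.2]], 0.04))
```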
 
Reading suggestion: Data Reduction and Error Analysis for the Physical Sciences by P. R. Bevington et al. It covers the above subject in a very comprehensive way.
 
 
Gaétan
Message 4 of 12
The Nonlinear Curve Fit.vi computes the covariance matrix as inverse(J'*J), where J is the Jacobian of the weighted least-squares function. As you have discovered, some additional scaling is required to obtain the results you are looking for. The scaling needed is an unbiased estimate of the noise variance. MSE is a biased estimate, but for problems with many degrees of freedom it is a good one. MSE = SSE/N, where SSE is the sum of squared errors and N is the number of data points. A better estimate of the noise variance is SSE/DOF, where DOF (degrees of freedom) equals the number of data points minus the number of model parameters.
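
For anyone who wants to check the numbers outside LabVIEW, here is a minimal Python/SciPy sketch of the same recipe (my own illustration, not the VI's internals): build the covariance as inverse(J'*J) from the Jacobian at the solution, scale by SSE/DOF, and take square roots of the diagonal. The Gaussian model and synthetic data below are made up purely for demonstration.

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian(p, x):
    # p = [amplitude, center, width, offset]
    a, x0, w, c = p
    return a * np.exp(-((x - x0) ** 2) / (2.0 * w ** 2)) + c

def residuals(p, x, y):
    return gaussian(p, x) - y

# Synthetic data for illustration only.
rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 200)
y = gaussian([2.0, 0.5, 1.2, 0.1], x) + rng.normal(0.0, 0.05, x.size)

fit = least_squares(residuals, x0=[1.0, 0.0, 1.0, 0.0], args=(x, y))

J = fit.jac                        # Jacobian at the solution
cov = np.linalg.inv(J.T @ J)       # unscaled covariance, as described above
sse = np.sum(fit.fun ** 2)         # sum of squared residuals
dof = x.size - len(fit.x)          # data points minus model parameters
param_std = np.sqrt(np.diag(cov) * sse / dof)

print("coefficients:", fit.x)
print("std. dev. estimates:", param_std)
```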

We test the LabVIEW implementation of Nonlinear Curve Fit.vi against some freely available datasets from NIST.  
(http://www.itl.nist.gov/div898/strd/nls/nls_main.shtml)
Parameter std. dev. estimates from LabVIEW agree with the NIST results very nicely. To demonstrate this, I am attaching one of the NIST datasets (Lanczos3) and a test that demonstrates the std. dev. scaling mentioned above and compares it to the NIST results. I hope this gives you some confidence that this approach will give good results.

The reference Gaétan suggests is a good one.

-Jim
Message 5 of 12

Thanks Jim for the additional information on the subject.

Gaétan

 

Message 6 of 12

Excellent explanation Jim. 🙂

Of course the MSE is only related to the noise if your model is sufficient for the data. So if you have a banana and try to fit it to a straight line, the MSE is not a true description of the "real" noise in the data; it is more a reflection of the "wrongness" of the model. If the parameters don't make sense, their error estimates don't make sense either. ;)

Also, since we are dealing with nonlinear models, describing the parameter error as a "standard deviation" might not be appropriate, because the error might not be normally distributed or symmetric. For example, you could have a case where increasing a parameter from its best-fit value by a small amount gives you a big penalty in chi-square, while reducing the parameter changes chi-square only very little. So the best-fit parameter could be 5 with a confidence interval of [2...5.01].
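
To make the asymmetry point concrete, here is a quick Python sketch (my own toy example, nothing from the fit VI): it steps a single decay-rate parameter the same distance below and above its best-fit value, holding everything else fixed, and the chi-square penalty comes out noticeably different in the two directions.

```python
import numpy as np
from scipy.optimize import least_squares

def model(k, x):
    # Simple exponential decay, chosen only to illustrate the asymmetry.
    return np.exp(-k * x)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 4.0, 30)
y = model(1.5, x) + rng.normal(0.0, 0.02, x.size)

fit = least_squares(lambda p: model(p[0], x) - y, x0=[1.0])
k_best = fit.x[0]
chisq_best = np.sum(fit.fun ** 2)

# Step the parameter the same distance below and above the optimum and
# compare the chi-square penalty; for a nonlinear model the two differ.
for dk in (-0.2, +0.2):
    chisq = np.sum((model(k_best + dk, x) - y) ** 2)
    print(f"k = {k_best + dk:.3f}: delta chi-square = {chisq - chisq_best:.4f}")
```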

Still, a simple parameter error estimate is often sufficient as a first approximation.


@DSPGuy wrote:
To demonstrate this I am attaching one of the NIST datasets (Lanczos3) and test that demonstrates the std. dev. scaling mentioned above and compares to the NIST results. 

I think there is something wrong with your example. Maybe the dataset is wrong?

The model actually uses 6 parameters, and the model description inside the model VI lists 7 parameters (b1..b6, e), but you are only feeding it two parameters. The results don't agree with NIST at all. What am I doing wrong?

(There is also a typedef that's not included. Since it's not hooked up, I can just delete it.)

Message 7 of 12
Good points Christian.

Apologies for the example. I forgot to save default values after editing. I also got rid of the typedef constant instance. 🙁

Please let me know if you still have problems with it.

-Jim
Message 8 of 12
Thanks to everyone for the replies. You gave me very important information!

Thanks a lot!

Roberto

Message 9 of 12

I would like to make the following correction to Jim's comment: it is just a minus 1 in the formula, but only then are you producing a bias-free estimate of the noise variance:

.....

A better estimate of noise variance is SSE/DOF, where DOF is degrees of freedom and is equal to the number of data points minus (the number of model parameters - 1)

.....

Message 10 of 12