
Conversion from U32 to FXP on host side after FIFO DMA transfer

Solved!
Go to solution

Hi Guys, 

I have data acquired on the FPGA side from an Analog Input (voltage ±10 V).

Verification on the FPGA side gives, for example:

raw value = 0.010025, and after conversion to U32 = 6570

 

After the transfer to the host via DMA FIFO, I'm receiving:

the integer U32 = 6570, but after conversion with the Integer to Fixed-Point Cast function, instead of 0.010025 I get 0.1025.

Where is my mistake?

 

Tomasz_Poland_0-1701265543106.png

It seems that the conversion is permanently losing data.

 

Could you help, please?

Thank you

 

Message 1 of 14

Hi Tomasz,

 

do you use the very same FXP datatypes in all places? (Do you know we cannot debug images with LabVIEW!?)

 

Why don't you send the FXP directly from FPGA to host? Why do you need to convert to/from U32?

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 2 of 14

Hi Gerd,

I use a dedicated 4-channel module for receiving signals in the SENT (SAE 2010/2016) standard.

The signal is available in the FPGA as an integer U32.

Furthermore, I have a parallel signal from an Analog Input (Mod6/AI0). This signal is converted from FXP to U32, and then both are transferred via DMA FIFO to the host. The DMA FIFO is configured with U32 as the Data Type and 2 as the Number of Elements Per Write.

 

The FPGA code looks like below. If I verify the values at point 1 and point 2, they are "almost the same" (I guess the difference appears because the AI updates as fast as possible, so the value arriving at the FXP indicator is "fresher" than the value converted by the Fixed-Point to Integer Cast and shown on the Integer indicator).

 

Tomasz_Poland_1-1701288436748.png

 

On Host side the code looks like below:

Tomasz_Poland_2-1701289019289.png

 

The Integer indicator (point 3) shows the values provided from the FPGA, but decoding shows:

the FXP values are multiplied by 10 compared to the original data.

Tomasz_Poland_3-1701289383355.png

Where does the data get corrupted? Does the Integer to Fixed-Point Cast need some specific configuration?

Mine is configured as below:

Tomasz_Poland_4-1701289541846.png

 

 

Thank you for any feedback.

Tomasz

Message 3 of 14

You should not use a typecast to convert between the calibrated and raw data. See Working With Calibrated and Uncalibrated Data on NI R Series to learn how to do the conversion properly.

-------------------------------------------------------
Control Lead | Intelline Inc
Message 4 of 14

Hi Zyong,

I work with a cRIO-9068 and an NI 9222 as the AI C Series module, so I try to follow: Switching Between Calibrated Fixed-Point and Raw Integer Modes for FPGA I/O Node.

I see some differences, though.

According to the documentation C Series Module Properties Dialog Box for the NI 9222/9223 (FPGA Interface), I must calibrate on the host side:

Tomasz_Poland_0-1701333487606.png

Maybe I'm doing something wrong with the typecast conversion?

 

 

 

Message 5 of 14

The factor of 10 in your difference is the 10 V range.

The FPGA reads the ADC as ±1, but in real-world calibration terms this is ±10. Your host software applies this correction, so the values are not as read from the ADC but are calibrated to represent the actual voltage measured.

 

This is what is meant by RAW and CALIBRATED values.

 

The 0.006 is the ADC value; the 0.06 is the VOLTAGE.
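In other words (a minimal Python sketch of the idea, not NI's actual driver code, with the ±10 V range as a plain scale factor):

```python
RANGE_V = 10.0   # full-scale of the ±10 V analog input

def calibrate(raw_fraction):
    """Scale a raw ADC reading (normalized to ±1) to a voltage."""
    return raw_fraction * RANGE_V

print(round(calibrate(0.006), 3))  # 0.06 -- the factor of 10 seen on the host
```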

Message 6 of 14

So, on the host side, can I simply divide the FXP value by 10?

Message 7 of 14

Try using a "Number to Boolean Array" and then a "Boolean Array to Number" with the correct target datatype set, and see if your results are different.

 

What I said cannot be true if your decimal value is the same; I need to read more carefully. Where do the values of 0.06 and 0.006 in your post above come from? I think you are reporting values from multiple different sources, and there may be driver operations between the direct transmission and the values you're showing. Your incorrect typecast just happens to be approximately (but not quite) a factor of 10 off. The 0.06 and 0.006 do seem to be exactly a factor of 10 apart, though, which looks like a calibrated/raw difference. It's very hard to track down exactly which values you're reporting where.

 

Typecast is not the correct function for this binary conversion. On FPGA there is "Reinterpret number", which keeps the bit pattern but tells LabVIEW to interpret it as a new datatype. Typecast does NOT guarantee keeping the bit pattern at all.

 

Tip: You can create a VI with "Reinterpret number" on an FPGA target, open it on the host, and it will work. The function is just not in the palette outside of FPGA targets. It's right there next to the typecast.

 

Intaris_0-1701340396226.png
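Since LabVIEW diagrams can't be pasted as text, here is the distinction sketched in Python as an analogy (my own illustration, not an NI API): a reinterpret keeps the raw bits and changes only how they are read, whereas a numeric conversion preserves the value and changes the bits.

```python
import struct

# Reinterpret: pack a 32-bit float into its 4 raw bytes, then unpack
# those same bytes as an unsigned integer. The bits never change,
# only the interpretation does.
bits = struct.unpack('<I', struct.pack('<f', 1.0))[0]
print(hex(bits))  # 0x3f800000 -- 1.0 in IEEE-754 single precision

# A numeric conversion, by contrast, preserves the value, not the bits:
print(int(1.0))   # 1, whose bit pattern is 0x00000001
```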

 

Message 8 of 14
Solution
Accepted by Tom_wolf

Hi Tomasz,

 

You just forgot to configure the correct fixed-point representation of the constant wired to "Integer to Fixed-Point Cast" on the host side.

This should be Signed, word length: 24, integer word length: 5, as are all analog inputs of the NI 9222 module.

Here I see it is set to the wrong type:

raphschru_0-1701356571266.png

 

Change the representation of your constant, it should then reflect automatically in the Output Configuration of the Cast function:

raphschru_1-1701356799683.png
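To see why the representation matters, here is a small Python sketch (my own illustration, not NI's implementation) of how a signed FXP interprets a raw integer; a wrong integer word length scales the result by a power of two:

```python
def fxp_value(raw, word_length, int_word_length):
    """Interpret a raw integer's bit pattern as a signed fixed-point value."""
    raw &= (1 << word_length) - 1        # keep only the FXP's bits
    if raw >= 1 << (word_length - 1):    # two's-complement sign bit set?
        raw -= 1 << word_length          # sign-extend to a negative value
    return raw * 2.0 ** (int_word_length - word_length)

# Correct representation for the NI 9222 (Signed, word length 24, IWL 5):
ok = fxp_value(6570, word_length=24, int_word_length=5)

# The same bits read with a wrong integer word length come out scaled:
bad = fxp_value(6570, word_length=24, int_word_length=8)
print(bad / ok)  # 8.0 -- every extra integer bit doubles the value
```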

 

Also, I don't see why the "Integer to Fixed-Point Cast" and its inverse should not be used; the issue has nothing to do with calibrated vs. raw data.

 

Regards,

Raphaël.

 

PS: You should have kept posting on the same issue you created initially https://forums.ni.com/t5/LabVIEW/Two-data-types-transferred-via-DMA-FIFO-Hexadecimal-and-FXP/m-p/434...

since this is exactly the same problem. Otherwise, each time, new people will come in trying to help without knowing the history of your previous attempts.

Message 9 of 14

@raphschru wrote:

 

Also, I don't see why the "Integer to Fixed-Point Cast" and vice-versa should not be used

My take on this is that the cast allows for types of different widths, whereas "Reinterpret number" does not; it keeps the bit width. Since the configuration of this exchange is somewhat hidden (hence the confusion in this thread), there are more opportunities to make mistakes using the Type Cast than the reinterpret. I've been programming FPGAs for a while and have stopped using the Type Cast completely. It's more a matter of personal preference and experience than anything fundamental.

 

If I'm specifically looking to change the bit width, there are more "visible" ways to do it than via Type Cast. When debugging FPGA code, finding hidden configuration errors like that can be a major PITA, so I avoid them and make everything as self-explanatory as possible.

 

But as with a lot of things in any programming language, YMMV.

Message 10 of 14