LabVIEW FPGA Idea Exchange


Hi,

 

I'd like to request that NI enable the use of Xilinx XPM macros in Component-Level IP (CLIP), Socketed CLIP, and custom user IP:

 

Xilinx recommends XPM macros for Clock Domain Crossing (CDC) and FIFO/memory instantiation because they are more easily reconfigured and managed than classic IP and (especially in the case of CDCs) they auto-generate timing constraints, which makes it much easier to arrive at a correctly constrained design.
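
For illustration, here is what using one of these macros from VHDL looks like: a minimal sketch based on the instantiation templates in the documentation linked below, wrapped in an example entity of my own. The library clause is exactly what the elaboration step has to resolve (see the xelab patch further down):

library ieee;
use ieee.std_logic_1164.all;
library xpm;                  -- this is the library xelab must be told about (-L xpm)
use xpm.vcomponents.all;

entity cdc_example is
  port (
    src_clk  : in  std_logic;
    dest_clk : in  std_logic;
    src_bit  : in  std_logic;
    dest_bit : out std_logic
  );
end entity cdc_example;

architecture rtl of cdc_example is
begin
  -- single-bit CDC synchronizer; matching timing constraints are generated automatically
  sync_inst : xpm_cdc_single
    generic map (
      DEST_SYNC_FF   => 4,   -- synchronizing flip-flop stages in the destination domain
      INIT_SYNC_FF   => 0,   -- 0: disable simulation init values
      SIM_ASSERT_CHK => 0,   -- 0: disable simulation messages
      SRC_INPUT_REG  => 1    -- register src_in in the source clock domain
    )
    port map (
      src_clk  => src_clk,
      src_in   => src_bit,
      dest_clk => dest_clk,
      dest_out => dest_bit
    );
end architecture rtl;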

 

XPM Macros are available for all current Xilinx/AMD devices (7-Series to Versal):

XPM Macros in 7-Series Libraries:

https://docs.amd.com/r/2021.1-English/ug953-vivado-7series-libraries/Xilinx-Parameterized-Macros 

XPM Macros in UltraScale Libraries:

https://docs.amd.com/r/en-US/ug974-vivado-ultrascale-libraries/Xilinx-Parameterized-Macros 

XPM Macros in Versal (Premium/AI) Libraries:

https://docs.amd.com/r/en-US/ug1344-versal-architecture-libraries/Xilinx-Parameterized-Macros 

https://docs.amd.com/r/en-US/ug1485-versal-architecture-premium-series-libraries/Xilinx-Parameterized-Macros 

https://docs.amd.com/r/en-US/ug1353-versal-architecture-ai-libraries/Xilinx-Parameterized-Macros 

 

Unfortunately, XPM macros are only implemented in SystemVerilog at the lowest level, although VHDL instantiation templates do exist (see the documentation above).

AMD/Xilinx seems to have no plans to add pure VHDL XPM Macros:

https://support.xilinx.com/s/question/0D52E00006hpeAySAI/no-vhdl-simulation-models-for-xpms-?language=en_US 

Excerpt:

graces (AMD) on 2017-12-07
There's no plan to support VHDL model for XPM in future releases.

 

graces (AMD) on 2017-12-14

The Xilinx direction for any new development is in Verilog. It can be overridden with business justification though.

 

If you have a strong demand, I'd suggest that you open a Service Request with Technical Support and get a CR filed. The SR will be linked to the CR. If quite a few SRs are linked to the CR, the chance to get it implemented will be larger.

 

NI does not document that the FPGA module does not play nicely with CLIP that uses XPM macros, so I had to find out for myself what is needed to get them to work:

My preliminary result is that I can get a bitfile if I simply patch the xelab call that the FPGA module performs from

xelab.bat -m64 xil_defaultlib.conf12B308D38326465793B06F85282B8708 -L xil_defaultlib -L unisim -L unimacro -L xilinxcorelib -L secureip -snapshot my_clip_top_level_entity -dll -prj clipsyn.prj

to 

xelab.bat -m64 xil_defaultlib.conf12B308D38326465793B06F85282B8708 -L xil_defaultlib -L unisim -L unimacro -L xilinxcorelib -L secureip -L xpm -snapshot my_clip_top_level_entity xil_defaultlib.glbl -dll -prj clipsyn.prj

and add the line 

verilog xil_defaultlib "C:\NIFPGA\programs\Vivado2021_1\data\verilog\src\glbl.v"

to clipsyn.prj.

I'm using LabVIEW 2022Q3 with the 2022Q3 FPGA module, targeting an sbRIO-9629's Artix-7 with the bundled Vivado 2021.1.

So it seems that all that is needed to enable the use of XPM macros in (socketed) CLIP is a configuration option that performs this simple extension of the elaboration command's arguments and file list.

 

If NI decides not to support XPM macros for whatever reason, this should be documented very prominently in the CLIP sections of the user manuals. In that case NI should also consider providing macros of their own with the same functionality, at least for CDCs.

 

Using a netlist or user IP to wrap the XPM CDC macros does not work, since the design constraints are added by Tcl files (some of them only as timing exceptions) that are not retained in the netlist or IP; see

https://support.xilinx.com/s/question/0D54U00008aHc9ESAS/handleretainexport-xpm-cdc-macro-timing-constraints-for-ooc-design-exported-to-a-netlist-that-is-then-used-and-implemented-with-another-design?language=en_US 

So there is currently no workaround for using a design that depends on them.

 

Thanks for considering this.

It would be nice to have control over clock-independent assignments of signals from I/O nodes (without synchronising registers), without specifically having to use a clock for the connection.

 

Intaris_0-1719315020552.png

 

 

Image says it all.

 

We have tried using a static assignment on the top-level diagram, without using an SCTL, but it appears that does not work. The example links signals within a single CLIP, but the idea is actually aimed at making connections between multiple different CLIPs without the need for a specific VHDL wrapper for each individual configuration.

We need a way to simply reinterpret the bits in our FPGAs.  I currently have a situation where I need to change my SGL values into U32 for the sake of sending data up to the host.  Currently, the only way is to make an IP node.  That is just silly.  We should be able to use the Type Cast simply for the purpose of reinterpreting the bits.
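
For what it's worth, the IP node workaround is literally a wire: reinterpreting SGL bits as U32 involves no logic at all. A minimal VHDL sketch of what such a node contains (entity and port names are my own invention):

library ieee;
use ieee.std_logic_1164.all;

-- reinterpret the 32 bits of a SGL as a U32: no gates, just a renamed wire
entity sgl_to_u32 is
  port (
    sgl_in  : in  std_logic_vector(31 downto 0);  -- SGL bit pattern
    u32_out : out std_logic_vector(31 downto 0)   -- the same bits, read as U32
  );
end entity sgl_to_u32;

architecture rtl of sgl_to_u32 is
begin
  u32_out <= sgl_in;  -- zero cost: consumes no fabric resources
end architecture rtl;

That a cast this trivial requires external IP is exactly the point of the idea.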

It would be nice to be able to use logic operators on arrays in Single Cycle Timed Loops.

  17863i0D7A4F514670B8AB

Many data streams contain information for multiple channels or multiple samples. Today one must pack this data into larger integer types or manually interleave the data into multiple writes to the DMA FIFO API. It would be much simpler if DMA natively supported cluster and array data types. The local FIFO, Memory, and Register APIs already support this; extend it to DMA.
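
To make concrete what "pack this data into larger integer types" means today, here is a hedged VHDL sketch of the manual pattern at the bit level (entity and names are mine): two U32 channel samples concatenated into one U64 DMA element.

library ieee;
use ieee.std_logic_1164.all;

-- sketch: two 32-bit channel samples packed into one 64-bit DMA element
entity pack2x32 is
  port (
    chan0, chan1 : in  std_logic_vector(31 downto 0);
    packed       : out std_logic_vector(63 downto 0)
  );
end entity pack2x32;

architecture rtl of pack2x32 is
begin
  packed <= chan1 & chan0;  -- channel 1 in the upper half, channel 0 in the lower
end architecture rtl;

Unpacking on the other side is the mirror-image slicing, packed(31 downto 0) and packed(63 downto 32). Native cluster/array support would let the compiler generate exactly this bookkeeping automatically.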

I'm currently looking for a way to read out the FPGA version number from the FPGA.

All I found was a way to parse the *.lvbitx, but that's not what I want.

Are there any plans to store the version number in an FPGA register so that it can be read out at runtime?

 

Best regards

Thomas

We're starting development on an UltraScale device (KU40) and are missing the option to utilise the DSP48E2 primitive as we have for the DSP48 and DSP48E1.

 

Intaris_0-1670929335347.png

 

NXG seemed to have it, but as we all know, NXG is no more.

 

https://www.ni.com/docs/en-US/bundle/labview-nxg-fpga-module-cdl-api-ref/page/dsp48e2.html

 

Can we please have a DSP48E2 primitive for LabVIEW FPGA? I would really like access to the new features supported, including the wider multiplier.
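
For anyone wondering what the wider multiplier buys: the DSP48E1 multiplier is 25x18 bits, while the DSP48E2 offers 27x18. A hedged VHDL sketch of a registered multiply sized so that Vivado can map it onto a single DSP48E2 (names are mine):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity mul27x18 is
  port (
    clk : in  std_logic;
    a   : in  signed(26 downto 0);  -- 27 bits: two more than a DSP48E1 can take
    b   : in  signed(17 downto 0);
    p   : out signed(44 downto 0)   -- full 45-bit product
  );
end entity mul27x18;

architecture rtl of mul27x18 is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      p <= a * b;  -- inferred into one DSP48E2 on UltraScale parts
    end if;
  end process;
end architecture rtl;

A LabVIEW FPGA primitive would of course also expose the other DSP48E2 features directly, such as the wider pre-adder and wide XOR.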

Vision is available under LabVIEW 64-bit, and this makes sense since vision can generate very large amounts of data. I think now is the time to bring FPGA over to LabVIEW 64-bit as well. With FPGA systems you can also generate very large data sets. Also, with cards like the PCIe-1473R you have a vision card that generates lots of data but requires FPGA, so you can only use it in LabVIEW 32-bit. This is not a good thing. It has been 5 years since LabVIEW 64-bit was released; it is time to finish moving the add-ons over to 64-bit.

Old Title: FPGA Case Structure Needs To Display Enum Values

 

In LabVIEW the case structure can show enum values, while the FPGA case structure shows only the numeric value. I would like to see the example below possible in FPGA.

Untitled.png

Vision FPGA has been around for maybe ten years. Nowadays FPGAs have far more resources than their ancestors, but in Vision FPGA we can still only handle 8 pixels per cycle, which sometimes becomes a bottleneck.

 

Also, I think it would be good to add a signed 16-bit image data type to Vision FPGA; when we use U8 subtraction we lose information, because the output is only another U8 image. If we had an I16 datatype, lossless subtraction would be possible. Sometimes every bit counts.
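
The arithmetic in a nutshell: U8 minus U8 spans -255 to +255, which needs 9 signed bits, so an I16 result preserves everything. A hedged VHDL fragment of a lossless subtraction (declarations plus one assignment, signal names mine):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

signal pix_a, pix_b : unsigned(7 downto 0);
signal diff         : signed(15 downto 0);

-- widen both pixels before subtracting so negative results survive intact
diff <= resize(signed('0' & pix_a), 16) - resize(signed('0' & pix_b), 16);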

 

 

Malleable FPGA VIs import into the Desktop Execution Node with the same datatype as the FPGA VI's "malleable terminal".  The Desktop Execution Node does not mutate the input type to match the "malleable terminal" of the FPGA VI.  As a result, host VI test benches cannot iterate Type Specialization Structure cases in the malleable FPGA VI.

 

The "anything" input to this Assert Structural Type Match node is an I16, which breaks this case against an I16, which is the "malleable terminal" of this VI.

 

PIE5669450_0-1687372970404.png

 

The Desktop Execution Node only sees the I16, and coerces other datatypes.

 

PIE5669450_1-1687373121899.png

 

IMO the compiled Type Specialization Structure case is a critical unit test, which depends on the data type of the control wired to the "malleable terminal", so this is a critical limitation of the Desktop Execution Node.

 

I think the intended use-case for the DEN is to hook into an FPGA VI that's in a loop, and if so, the inputs to the malleable VI are selected by the calling VI.  So, maybe this isn't a limitation of the DEN itself, but of the DEN workflow.

 

Thanks for your consideration,

 

Steve K

Now that most numeric operators have the ability to saturate, it would be nice to be able to differentiate these operations visually. I know that the majority of the time you can determine this information easily with the context help, but this would make it much easier to spot. I tend to copy operators that are already being used in my VIs rather than grab a new one off the palette, so this would let me know which type of operator I'm copying.

 

18007i82E22C521A6F662A

I would like to access class attributes of my FPGA class hierarchy with property nodes. I prefer the property node API over VIs for data member access because it allows you to grab properties from across the hierarchy in a single node. This leads to a (much) cleaner block diagram and expedites development. For example, in the screenshot below the FXP attributes belong to the NI_9205 class, while the "_OP" attributes belong to the parent. I don't care for the Invoke Node API over a subVI, because the wiring work and diagram appearance are about the same IMHO.

 

2013-09-05_233500.jpg

 

Thanks,

 

Steve K

The LabVIEW FPGA module has supported static dispatch of LabVIEW Class types since 2009. This essentially means all class wires must be analyzable and statically determinable at compile-time to a single type of class. However, this class can be a derived class of the original wire type which means, for instance, invoking a dynamic dispatch method can be supported since the compiler knows exactly which function will always be called.

 

http://zone.ni.com/reference/en-XX/help/371599H-01/lvfpgaconcepts/fpgaclassesinvis/

 

This is not sufficient for many applications. Implementations that require message passing or other more event-oriented programming models tend to use enums and flattened bit vectors to pass different pieces of data around on the same wire. All of this packing and unpacking could be handled automatically by the compiler if we could use run-time dynamic dispatch to describe the application.

 

We call for the LabVIEW FPGA module to add support for true run-time dynamic dispatch to take care of this tedious, annoying, and downright boring job of figuring out how to pack and unpack bits everywhere. Who's with me?
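
To make the "packing and unpacking" concrete, here is a hedged VHDL fragment of the manual pattern being complained about (tag values, field widths, and names are all invented): a 2-bit tag selects how the remaining 32 bits of a shared wire are interpreted.

-- a hand-rolled tagged union on a single 34-bit wire
constant TAG_TEMP  : std_logic_vector(1 downto 0) := "00";  -- payload is a temperature
constant TAG_SETPT : std_logic_vector(1 downto 0) := "01";  -- payload is a setpoint

signal temp_value : std_logic_vector(31 downto 0);
signal msg        : std_logic_vector(33 downto 0);
signal payload    : std_logic_vector(31 downto 0);
signal is_temp    : std_logic;

msg     <= TAG_TEMP & temp_value;   -- pack: tag into msg(33 downto 32)
payload <= msg(31 downto 0);        -- unpack the flattened payload
is_temp <= '1' when msg(33 downto 32) = TAG_TEMP else '0';

Run-time dynamic dispatch would let the compiler derive all of this from a class hierarchy instead.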

When parsing message packets in my FPGA code, I often use a custom packing method, with multiple individual datasets packed into a single U64 (for example) for transport.

 

Untitled.png

 

With the introduction of Malleable VIs, I would love to be able to create my own packing and unpacking Malleable VIs, but I don't yet see how. The problem is that "Boolean Array To Number" does not allow attaching a datatype the way "To FXP" does. I would like to marry these two functions into a "Boolean Array To FXP" node where I can wire in my input datatype and have the output datatype automatically maintained in a malleable VI.
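
As a point of comparison, in VHDL-2008 this reinterpretation is a single conversion where the caller chooses the target type; a hedged fragment using the fixed-point package (the 16-bit word with 8 integer bits is just my example):

library ieee;
use ieee.std_logic_1164.all;
use ieee.fixed_pkg.all;  -- VHDL-2008 fixed-point package

signal bits : std_logic_vector(15 downto 0);   -- the "Boolean array" side
signal fxp  : sfixed(7 downto -8);             -- signed FXP: 16-bit word, 8 integer bits

fxp <= to_sfixed(bits, 7, -8);  -- same bits, reinterpreted with a binary point

A "Boolean Array To FXP" node with a type input would be the LabVIEW equivalent.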

 

 

In addition to the gates and math functions that are available on the FPGA palette, some basic functions should also be available (a VHDL sketch of the first one follows the list):

 

up/down counter

flip flops

mux/demux
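
For clarity, here is what the requested up/down counter primitive would do, as a hedged VHDL sketch (generic and port names are mine):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity updown_counter is
  generic (WIDTH : natural := 8);
  port (
    clk    : in  std_logic;
    rst    : in  std_logic;                       -- synchronous reset
    up     : in  std_logic;                       -- '1' counts up, '0' counts down
    enable : in  std_logic;
    count  : out unsigned(WIDTH-1 downto 0)
  );
end entity updown_counter;

architecture rtl of updown_counter is
  signal cnt : unsigned(WIDTH-1 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        cnt <= (others => '0');
      elsif enable = '1' then
        if up = '1' then
          cnt <= cnt + 1;
        else
          cnt <= cnt - 1;
        end if;
      end if;
    end if;
  end process;
  count <= cnt;
end architecture rtl;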

 

Thanks,

-Chuck Reed

 

I'd like the Select node to provide an Invert bubble on the Select Terminal, similar to how the Compound Arithmetic node allows the developer to invert any input.  I don't like crossing wires.

 

Select Node with crossed wires in 2012: 2013-09-30_153921.jpg
Select Node without crossed wires in a future release (optional invert shown): 2013-09-30_154017.jpg

 

Keep your Invert node comments to yourself 😉

 

Thanks!

 

-Steve K

 

With LVFPGA I work almost exclusively with fixed-point numbers, and having to convert my numbers to 8 bits or 16 bits just to use the Scale By Power Of 2 function isn't convenient.

 

17949iDCF3B4A5081C518C
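
For fixed-point, scaling by a power of two is purely a reindexing of the binary point and costs no logic, which is what makes the conversion detour so galling. A hedged VHDL-2008 fragment using the fixed-point package's scalb (signal names mine):

library ieee;
use ieee.fixed_pkg.all;  -- VHDL-2008 fixed-point package

signal x  : sfixed(7 downto -8);   -- 16-bit fixed point
signal x4 : sfixed(9 downto -6);   -- the same 16 bits after scaling by 2**2

x4 <= scalb(x, 2);  -- multiply by 4 by moving the binary point; zero hardware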

We need more FPGA Vision example code. I followed NI's introductory articles on image processing using FPGA and they sound great, but I was very disappointed when trying to find usable examples: there are only 5 examples on the IPNet, far fewer than what the intro articles suggest FPGA can do.

I posted this suggestion in the forums, but it is something I would like to see improved and included in the FPGA library. The idea is to multiplex multiple inputs/outputs to a single high-throughput math function. If someone has to do a lot of fixed point math on the FPGA, the resources are used up quickly. The multiply block is primarily what I would like to see this implemented for, but I think it would be useful with all of the high-throughput math functions.

 

In one project I quickly ran out of DSP48Es on my FPGA, and since I had many fixed-point multiplies with the same data type configuration, I created a state machine to step through the inputs, allowing me to replace 4 high-throughput multiplies with one multiply block shared across multiple operations. Sequential operations are possible by feeding the output of one operation into the input of another (I didn't implement that in the forum post below, but it can be done). I think LabVIEW could improve on this with pipelining of the multiplexed function, ease of setting the number of inputs/outputs and the data type, handshaking logic for operation in an SCTL, etc. LabVIEW could also show separate schematic figures for each of the multiplexed functions (for example, PCB layout software such as Eagle shows separate blocks on the schematic for each op-amp on a chip containing multiple op-amps).
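
A hedged VHDL sketch of the scheme (the 4:1 ratio, widths, and names are mine): a two-bit counter multiplexes four operand pairs through one multiplier and demultiplexes the products, so each output refreshes every fourth cycle.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity shared_mult is
  port (
    clk            : in  std_logic;
    a0, a1, a2, a3 : in  signed(17 downto 0);
    b0, b1, b2, b3 : in  signed(17 downto 0);
    p0, p1, p2, p3 : out signed(35 downto 0)
  );
end entity shared_mult;

architecture rtl of shared_mult is
  signal sel          : unsigned(1 downto 0) := (others => '0');
  signal prod         : signed(35 downto 0);
  signal a_mux, b_mux : signed(17 downto 0);
begin
  -- operand multiplexers: one multiplier serves four input pairs
  with to_integer(sel) select a_mux <=
    a0 when 0, a1 when 1, a2 when 2, a3 when others;
  with to_integer(sel) select b_mux <=
    b0 when 0, b1 when 1, b2 when 2, b3 when others;

  process (clk)
  begin
    if rising_edge(clk) then
      prod <= a_mux * b_mux;  -- the single shared DSP multiply
      -- demultiplex the result; prod lags sel by one cycle, hence sel - 1
      case to_integer(sel - 1) is
        when 0      => p0 <= prod;
        when 1      => p1 <= prod;
        when 2      => p2 <= prod;
        when others => p3 <= prod;
      end case;
      sel <= sel + 1;         -- step through the four channels
    end if;
  end process;
end architecture rtl;

The handshaking and pipelining mentioned above are what a library version would add on top of this bare-bones round-robin.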

 

http://forums.ni.com/t5/LabVIEW/Multiplexed-multiply-to-conserve-resources-on-FPGA/m-p/1668138/highlight/true#M595294