01-15-2024 01:47 AM
I have designed a Model Predictive Controller and now I want to deploy it to real-time hardware.
I have an NI USB-6363 and an NI RIO FPGA Module 9124.
I have figured out that I can use Simulink Real-Time with the NI DAQ Toolbox to implement the MPC in real time from Simulink. The problem is that Simulink is very slow, and I don't want to use it.
So my question is: can you give me some guidelines and resources to understand how I can use the S-Function / C code generated by Simulink in LabVIEW or on the NI RIO FPGA?
Will it execute faster than Simulink Real-Time?
My controller has the following settings:
Np = 8
Nc = 3
Output weight = 10
No constraints
I have looked into dedicated hardware solutions, but they are expensive. I am an MS student and can't buy any additional hardware; I need to use what is already available in my lab.
I can't buy a Speedgoat target machine or other hardware associated with Simulink Real-Time.
I would prefer to use the NI FPGA module rather than the USB-6363 because it is faster.
01-15-2024 03:02 AM - edited 01-15-2024 03:09 AM
The USB-6363 is a normal DAQ interface without any user-programmable part. The NI-9124, however, is a device that does not exist. There is an NI-9214 module, but that is a thermocouple input module, and I don't see how its maximum sampling rate of 68 S/s could match your request for high speed.
It is not clear what you want to do. You talk about real-time execution, but as far as I can tell, none of the hardware you have mentioned so far would let you do true real-time in any way.
Definitely not NI FPGA. For that you would need a cRIO, sbRIO, or similar hardware.
01-15-2024 03:11 AM - edited 01-15-2024 03:27 AM
@rolfk I am sorry, it's an NI cRIO-9024.
01-15-2024 09:08 AM
OK, the cRIO-9024 is indeed an FPGA-capable device. But it is a rather old one, built around a PowerPC CPU with VxWorks as its OS. https://www.ni.com/en/support/documentation/compatibility/17/real-time-controllers-and-real-time-ope...
As such it is only supported by LabVIEW 2019SP1 and earlier. https://www.ni.com/en/support/documentation/compatibility/21/ni-hardware-and-operating-system-compat...
Also, it is very unlikely that MATLAB ever supported compiling its Simulink-generated C files for the VxWorks OS, although I might be wrong. Even if it did, it would be difficult to set up, and getting support for it nowadays is probably almost impossible, since this is officially discontinued hardware.
Integrating the Simulink C code into the FPGA is definitely out of the question. You can only import VHDL code into a LabVIEW FPGA project; anything else has to be programmed in LabVIEW itself.
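To illustrate what reusing that generated C code would actually involve: Simulink Coder emits plain C with an initialize/step/terminate interface that your own code on the target has to call and feed with I/O. The sketch below assumes a model named "mpc" and hypothetical helper functions for I/O and timing, so treat it only as a rough picture of why you need a C toolchain and driver access on whatever target runs it:

    /* Sketch only: typical shape of the interface Simulink Coder generates for a
       model named "mpc". Actual structure and function names depend on the model
       and coder settings; read_ai(), write_ao() and wait_next_period() are
       hypothetical target-side helpers that would have to be written against the
       target OS and its drivers. */
    #include "mpc.h"   /* generated header declaring mpc_U, mpc_Y and the entry points */

    void control_task(volatile int *keep_running)
    {
        mpc_initialize();                   /* set up model states and parameters once */

        while (*keep_running) {
            mpc_U.y_measured = read_ai();   /* copy the plant measurement into the model inputs */
            mpc_step();                     /* one controller update at the model sample time */
            write_ao(mpc_Y.u_command);      /* push the computed control value to the output */
            wait_next_period();             /* target-specific timing, e.g. an RTOS timer */
        }

        mpc_terminate();
    }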
01-15-2024 07:10 PM
Thanks. So it means I should just try MATLAB Desktop Real-Time, because I want to implement the MPC controller in real time. I can't afford any expensive hardware.
Thank you
01-16-2024 03:27 AM - edited 01-16-2024 03:36 AM
@UmairMech wrote:
Thanks. So it means I should just try MATLAB Desktop Real-Time, because I want to implement the MPC controller in real time. I can't afford any expensive hardware.
That seems like your best chance. But please note that real-time does not necessarily mean faster. Real-time is about guaranteeing that something always completes within a certain time interval. A loop that iterates once per second can be perfectly real-time, as long as it is guaranteed that a single iteration never takes more than one second. That guarantee cannot be made on a normal desktop computer. Desktops may on average be a lot faster than many real-time systems, but a loop iteration can occasionally be delayed by several seconds because the virus scanner kicks in, the user opens a document in some application that monopolizes the whole system, or for any number of other reasons.
Your desktop OS is optimized to give fast reaction times to user interaction (or at least it attempts to; opening a typical MS Office application might make you feel they completely borked that), but it makes no special effort to give a process the same, often very limited, amount of CPU time on every iteration. One moment process X gets nearly 100% of the CPU; the next moment it is put on ice for an undetermined amount of time so that some "more important" operation can use those CPU resources.
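You can see this effect with a very small timing test on your desktop machine. The following is just a quick sketch in plain C (POSIX timing calls, nothing NI- or MATLAB-specific) that runs a nominal 1 ms loop and records the worst-case period; on an ordinary desktop OS the worst case will occasionally be far larger than the average:

    /* Minimal jitter test: request a 1 ms period and record how long each
       iteration really takes. POSIX only (clock_gettime/nanosleep); build
       with e.g.: gcc -O2 jitter.c -o jitter */
    #include <stdio.h>
    #include <time.h>

    static double elapsed_ms(const struct timespec *a, const struct timespec *b)
    {
        return (b->tv_sec - a->tv_sec) * 1e3 + (b->tv_nsec - a->tv_nsec) / 1e6;
    }

    int main(void)
    {
        const struct timespec period = { 0, 1000000L };   /* nominal 1 ms period */
        double worst = 0.0, sum = 0.0;
        int i, n = 10000;

        for (i = 0; i < n; i++) {
            struct timespec before, after;
            double actual;

            clock_gettime(CLOCK_MONOTONIC, &before);
            nanosleep(&period, NULL);                     /* stand-in for "do the control work" */
            clock_gettime(CLOCK_MONOTONIC, &after);

            actual = elapsed_ms(&before, &after);
            sum += actual;
            if (actual > worst)
                worst = actual;                           /* track the worst-case iteration */
        }

        printf("average period %.3f ms, worst case %.3f ms\n", sum / n, worst);
        return 0;
    }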
The most likely cause of your "slow" control loop is, however, the reading of the input value and the writing of the control value to the analog output. Each of these calls goes through many driver layers and switches into kernel mode and back at least once, since hardware may only be accessed from kernel mode. As a result, writing (or reading) a single value can easily take on the order of milliseconds. No real-time subsystem on Windows is going to solve that unless the entire hardware access can reliably run inside that real-time context itself. That is very advanced Windows magic, and the NI-DAQmx driver does not support being executed in such a context, so trying to run your Simulink routine in some isolated real-time context might only make things worse.
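To make that concrete, a naive single-point control loop through the NI-DAQmx C API looks roughly like the sketch below; every DAQmxReadAnalogScalarF64/DAQmxWriteAnalogScalarF64 call in the loop is a full trip through the driver stack into kernel mode and back. The "Dev1" channel names are assumptions, error handling is stripped out, and controller_step() is only a placeholder for the real controller update:

    /* Sketch of a naive single-point control loop through NI-DAQmx (C API).
       "Dev1/ai0" and "Dev1/ao0" are placeholder channel names; error checking
       of the DAQmx return codes is omitted for brevity. */
    #include <NIDAQmx.h>
    #include <stdio.h>

    /* Placeholder for the real controller update (e.g. a Simulink-generated
       step function); a toy proportional gain stands in for the MPC math here. */
    static double controller_step(double y) { return -0.5 * y; }

    int main(void)
    {
        TaskHandle ai = 0, ao = 0;
        float64 y = 0.0, u = 0.0;
        int i;

        DAQmxCreateTask("", &ai);
        DAQmxCreateAIVoltageChan(ai, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
        DAQmxCreateTask("", &ao);
        DAQmxCreateAOVoltageChan(ao, "Dev1/ao0", "", -10.0, 10.0,
                                 DAQmx_Val_Volts, NULL);
        DAQmxStartTask(ai);
        DAQmxStartTask(ao);

        for (i = 0; i < 1000; i++) {
            DAQmxReadAnalogScalarF64(ai, 10.0, &y, NULL);       /* driver call, kernel transition */
            u = controller_step(y);                             /* the actual controller math */
            DAQmxWriteAnalogScalarF64(ao, 1, 10.0, u, NULL);    /* another full driver round trip */
        }

        DAQmxClearTask(ai);
        DAQmxClearTask(ao);
        printf("done\n");
        return 0;
    }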
It either would not be able to access the hardware at all, only producing errors from the DAQmx calls, or it would have to remote-execute the DAQmx calls through some RPC mechanism between the real-time context and the normal Windows kernel. That is certainly not going to be faster, if it can even be made to work without special NI support, which is about as likely as hell freezing over.