12-03-2024 07:42 AM
Hi,
I am developing a cRIO application.
My setup is:
Development machine = Windows 11 PC running LabVIEW 2024 Q3 (24.3.2f2) 64-bit
Target = cRIO-9045
I am sometimes getting an error that prevents me from running code on the cRIO. The error message is "Deploying <fully qualified VI name here> loaded with errors on the target and was closed.", as can be seen below.
The VI named in the error message is a simple VI that has not been modified in weeks and has a solid run arrow, so why this error appears is a mystery; as far as I can tell it is a LabVIEW bug.
I encountered the error message above three times in the last week. Each time, performing a mass compile of the project folder resolved the issue, and I was then able to deploy the exact same top-level VI (no code changes whatsoever) to the cRIO. Please note that at other times the deployment worked fine.
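(As an aside: if, like me, you find yourself mass compiling repeatedly, the step can also be scripted with LabVIEWCLI instead of going through Tools > Advanced > Mass Compile. A rough sketch is below; the parameter names are from memory, so please double-check them against the LabVIEWCLI help, and the paths are placeholders.)

LabVIEWCLI -OperationName MassCompile -DirectoryToCompile "C:\path\to\project folder" -LogFilePath "C:\temp\MassCompileCLI.log"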
Today the error message appeared again. I mass compiled the project folder, but this time this did not fix the issue. Instead, I am now getting a similar, but slightly different error message, as seen below.
As can be seen, it is the same type of error message, but the VI in question has changed: from Remove Confirmed Elements.vi, a member of a certain lvclass inside a certain lvlib, to DAQ Configure.vi, a member of a different lvclass inside a different lvlib.
In short, the full VI names are something like: L1.lvlib:C1.lvclass:Remove Confirmed Elements.vi and L2.lvlib:C2.lvclass:DAQ Configure.vi.
As of right now I am unable to deploy the cRIO VI and test my code.
Questions
Finally, I just want to say that it is so frustrating dealing with situations like this. The point Darren made in his excellent presentation LUDICROUS ways to Fix Broken LabVIEW Code completely resonates with me: whenever NI runs surveys such as "What is the next feature you would like to see implemented in LabVIEW?", he is always forced to select "Other" and type "editor and run-time performance and stability improvements" (see between minutes 04:00 and 05:00).
If anyone from NI reads this: many of us have built a career out of using your tools. Please invest time and effort into improving the reliability of the LabVIEW development environment! It is not acceptable for code to fail to build and/or fail to deploy for no apparent reason. The tooling needs to be rock-solid; this is a must, not a luxury. Don't force us to transition to other toolchains or vendors, because, if pushed enough, we will.
Many thanks for any help!
12-03-2024 08:31 AM
Hi Petru,
Here I suggested an idea that somewhat reduced the issue (but did not solve it):
Deploying to cRIO is effectively impossible.
I agree that we should not have to do that, and that NI / Emerson should invest more resources in fixing bugs.
I sometimes find bugs like this and decide to spend some time reporting them to NI, attaching a sample project that reproduces the issue.
I feel this is the right thing to do to get the bug fixed, but at the same time it is quite frustrating that there is barely any feedback on whether NI R&D is taking it into account.
Regards,
Raphaël.
12-03-2024 08:43 AM - edited 12-03-2024 08:50 AM
I feel you. Dunno if this was solved in 24 Q3, but for 23 I had to add this .ini entry:
Modify the LabVIEW.ini file to add the following token: useCacheForDeployment=False. Then restart LabVIEW.
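The token goes under the [LabVIEW] section of that file, alongside whatever tokens are already there. On a default 64-bit install the file should be somewhere like C:\Program Files\National Instruments\LabVIEW 20xx\LabVIEW.ini (path assumed; adjust for your version and bitness). A minimal sketch of the edit:

[LabVIEW]
useCacheForDeployment=False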
And for logs you could check https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019LpCSAU&l=de-DE
But don't expect too much from that.
Good luck 🙂
12-03-2024 08:49 AM - edited 12-03-2024 08:51 AM
I also remember @joerg.hampel has an entry for this issue in his wiki:
https://dokuwiki.hampel-soft.com/kb/ni-rt/deployment#unclear_error_messages
The suggested solution is to have only one target per project.
12-03-2024 02:27 PM
Hi Raphaël and Quiztus,
Thanks for your very useful suggestions. It's heartening to see that others have experienced the same frustrations, and that there are a few possible workarounds. Before I posted the question, I too felt the "I am at my wits end" feeling expressed by BBSim in the Deploying to cRIO is effectively impossible thread.
It seems that the possible workarounds are:
- Add the useCacheForDeployment=False token to LabVIEW.ini and restart LabVIEW.
- Check the deployment/error logs, as described in the knowledge article linked above.
- Keep only one target per project, as suggested in the Hampel Software wiki entry.
I haven't tried any of these workarounds yet, but I will, and will report back with any findings.
A few pieces of information that I didn't mention in my initial post but which could be very relevant to this issue:
12-03-2024 03:51 PM - edited 12-03-2024 03:52 PM
A progress update. What I have done so far is:
4. Applied useCacheForDeployment=False to the LabVIEW.ini file and restarted LabVIEW.
5. Opened the project, opened the top-level VI and pressed Ctrl + R. The deployment failed again in the exact same way as in step 3 above.
6. Created a new project, added a single cRIO target to it, and added the top-level VI to the cRIO target (all dependent lvlibs and lvclasses were listed under the Dependencies section). Ran the top-level VI. The deployment failed again in the exact same way as in step 3 above. However, this time I noticed some bold text briefly appearing somewhere in the middle of the deployment screen.
7. Ran the VI again, got the same error. Then scrolled back up to see the bold text, which can be seen below.
The bold text referred to a CTL file inside an lvlib (a DQMH module) whose Main VI contained two Network-Published Shared Variables. All instances of reading/writing these NPSVs had already been disabled a few weeks ago (a Diagram Disable Structure had been drawn around them). Prompted by the error message, I removed the two NPSVs from the Main VI and from the lvlib.
8. Ran the top-level VI again. The deployment completed and the VI ran successfully.
9. Let the VI run for a few minutes to convince myself that it was running fine. It was.
10. Closed the project that contained a single cRIO target, and opened the main project that contains code under "My Computer" as well as 7 cRIO targets.
11. Opened the same top-level VI from under the appropriate cRIO target. Ran the VI. The deployment completed and the VI ran successfully again.
This is all the testing I have done so far. What stands out for me is:
- The bold text pointed to two Network-Published Shared Variables that had already been disabled with a Diagram Disable Structure weeks ago, yet the deployment only succeeded after I removed them from the Main VI and the lvlib entirely.
- The error message I got this evening was different from the one in my original post.
Because the error message was different from the very start of this evening's testing, I'm not sure whether anything written in this reply is relevant to the original post. I will keep an eye on things and report back here if the original error returns and/or I manage to fix it using one of the workarounds.
Thanks!