03-24-2012 11:51 PM
First, let me state that I think VIPM is a fantastic tool, and I applaud the folks at JKI for creating it and making a community version available to users.
Still, the inability to have multiple versions of the same package installed side by side is eventually going to cause people to encounter the LabVIEW developer's equivalent of DLL hell. I can easily see a situation arising where a developer needs to integrate two components, but one component relies on PublicReuseCodeLibrary version 1 and the other relies on an incompatible PublicReuseCodeLibrary version 2. That's going to be a tough situation when only one of them can be installed on the system at a time.
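To make the conflict concrete, here's a minimal Python sketch (the component names and version pins are hypothetical, and this is not how VIPM actually resolves dependencies) of why a single-version installer can't satisfy both components:

REQUIRES = {
    "ComponentA": {"PublicReuseCodeLibrary": 1},  # pinned to major version 1
    "ComponentB": {"PublicReuseCodeLibrary": 2},  # pinned to major version 2
}

def find_conflicts(requires):
    # Record the first major version each dependency is pinned to, and flag
    # any component that pins the same dependency to a different major.
    pinned, conflicts = {}, []
    for component, deps in requires.items():
        for dep, major in deps.items():
            if dep in pinned and pinned[dep] != major:
                conflicts.append((dep, pinned[dep], major))
            else:
                pinned.setdefault(dep, major)
    return conflicts

print(find_conflicts(REQUIRES))  # [('PublicReuseCodeLibrary', 1, 2)]

With only one installable version per package there is no assignment that satisfies both components at once.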
For my personal packages I started appending a version suffix to the package file name. It has worked reasonably well for me so far, but of course it looks a little odd having multiple listings of the same package in VIPM. Does NI have a recommended set of best practices to avoid VI Package Hell?
03-27-2012 10:36 AM
Hi Daklu,
I've never really experienced that. What kinds of incompatibilities between package versions have you run into that proved insurmountable? Could you change the top-level components that ended up broken so that they were all compatible with the latest version? Ideally, reuse libraries should never make a change that would break calling code. Also ideally, if one package relies on another that breaks, it should be updated to be compatible with the newest version. Obviously these things don't always happen, but in a perfect world...
Have you also proposed this discussion in the VIPM Community or the JKI forums? The users there might have seen this problem more frequently and have some other ideas.
David
03-27-2012 01:10 PM
"What kind of incompatibilities have you experienced between versions of packages that have been insurmountible? "
I haven't... yet. But it will happen to someone eventually if there aren't guidelines for preventing it.
I agree that in a perfect world, reuse libraries don't make breaking changes. In the real world, I don't believe that's a realistic goal. APIs evolve over time as needs change, and trying to maintain complete backwards compatibility over the long term results in a bloated API that is confusing and hard to use.
The approach I use is "planned obsolescence." Minor versions and bug fixes are released that maintain compatibility, but every so often a major version is released that breaks compatibility. This gives the developer an opportunity to trim the API fat that accumulates through the minor releases and keeps the package viable. Side-by-side installation is commonly used to facilitate planned obsolescence; the .NET Framework, the LabVIEW run-time, and many other "reuse packages" allow side-by-side installations.
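As a rough sketch of that rule (a hypothetical Python helper, not anything VIPM provides): minor and patch releases stay compatible, so a compatibility check only needs to look at the major version.

def is_compatible(installed: str, required: str) -> bool:
    # Versions are compatible only within the same major release; a major
    # rev signals a deliberate break.
    return installed.split(".")[0] == required.split(".")[0]

assert is_compatible("1.4.2", "1.0.0")      # minor/patch upgrades are safe
assert not is_compatible("2.0.0", "1.4.2")  # major rev = breaking change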
"Could you change the top level components that ended up broken so they all were compatible with the latest version?"
Not always. If PackageA and PackageB on the NI Tools Network rely on different versions of PackageC, which may be on the NI Tools Network or may be distributed by other means, A and B will not be able to coexist on the same computer. This problem cannot be resolved without changing the source code for A or B, which may not be an option and creates problems of its own.
There are two strategies I can think of for handling this conflict. One is for A and B to each create a copy of C, give it a different namespace, and redistribute it as wholly contained "private" code built into PackageA/B. This works well if C is entirely internal to A/B, and I can imagine using it if C were OpenG code. It breaks down if A/B need to expose parts of C externally, as when C is the Actor Framework and A/B are reusable Actor-based components, because C v1 and C v2 cannot be installed on the computer at the same time.
The other strategy is the side-by-side installation described earlier, which I think is the better solution: it at least provides a reasonable path for resolving situations where C is framework code. (Developers can write adapters to convert messages between versions.)
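To illustrate the two strategies, here's a rough Python sketch that uses classes as stand-ins for LabVIEW library namespaces (every name below is hypothetical):

class PackageC_v1:
    # Stands in for a PackageC.v1.lvlib namespace.
    def make_message(self, payload):
        return {"version": 1, "payload": payload}

class PackageC_v2:
    # Stands in for PackageC.v2.lvlib, with an incompatible message layout.
    def make_message(self, payload):
        return {"version": 2, "body": payload, "priority": 0}

# Strategy 1: vendoring. A and B each hold a private, renamed copy of C.
# Fine while C stays internal to A/B; breaks once their C types must mix.
a_private_c = PackageC_v1()
b_private_c = PackageC_v2()

# Strategy 2: side by side. Both namespaces coexist on the system, so a
# developer can write an adapter that converts messages between versions.
def adapt_v1_to_v2(msg_v1):
    return {"version": 2, "body": msg_v1["payload"], "priority": 0}

print(adapt_v1_to_v2(a_private_c.make_message("stop")))
# {'version': 2, 'body': 'stop', 'priority': 0}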
"Have you also proposed this discussion in the VIPM Community or the JKI forums?"
Not recently, because I already know technical solutions to the problem. The question is more about standards and practices for code released via the NI Tools Network. I ask because I am considering submitting LapDog packages to the NITN, and since it is framework code I need to make sure there is a resolution path for developers who decide to build code on it.
03-27-2012 01:24 PM
Is the root of the challenge that LabVIEW doesn't easily allow you to re-link to different versions of libraries and VIs? In most languages, relinking (to a different version of an API) might simply be changing a single line of code. But, in LabVIEW, there is no way to explicitly manage links -- that's abstracted away, for the sake of usability/simplicity. How do you manage which versions of libraries your code is linked to? Do you have an easy way to switch? Do you have tools for visualizing which versions you're calling?
This is an interesting topic, for sure.
03-27-2012 03:00 PM
"Is the root of the challenge that LabVIEW doesn't easily allow you to re-link to different versions of libraries and VIs?"
That is an associated difficulty, but I don't think it is the root of the problem. Even if relinking were relatively easy, if A and B depend on incompatible versions of external package C, I still have to wait until the authors have updated A and/or B to the same version of C. Easy relinking doesn't mean they'll actually *do* it in the timeframe I need, and until that is done there aren't a lot of good options open to me.
"In most languages, relinking (to a different version of an API) might simply be changing a single line of code."
It's been a while since I've done any text-based programming, but wouldn't this only work if 1) v2 maintained a fully compatible interface to v1, or 2) the API developer wrote significant mutation code to automatically convert v1 source into v2 source? Option 1 defeats the purpose of planned obsolescence, and option 2 is way too big a task for the typical open-source developer like me.
"How do you manage which versions of libraries your code is linked to? Do you have an easy way to switch? Do you have tools for visualizing which versions you're calling?"
I give my library names and package file names a major version suffix (e.g., LapDog.Messaging.v2.lvlib and lapdog_messaging2-w.x.y.z.vip). When a new version breaks compatibility, I rev the major version number in each of those names so the versions can be installed side by side (SxS). I had LapDog.Messaging v1 and v2 installed SxS and it worked well. The rule I use is, "breaking compatibility requires a major version rev, and major version numbers are rev'd only when compatibility is broken." (Of course, what constitutes "breaking compatibility" is open to some interpretation...)
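In pseudo-Python, the naming rule amounts to something like this (a hypothetical helper; the names mirror my LapDog example, but the code is just an illustration):

def artifact_names(product: str, version: str):
    # Bake the major version into both the lvlib name and the package file
    # name so two major versions never collide on disk or in a palette.
    major = version.split(".")[0]
    lvlib = f"{product}.v{major}.lvlib"
    vip = f"{product.lower().replace('.', '_')}{major}-{version}.vip"
    return lvlib, vip

print(artifact_names("LapDog.Messaging", "2.0.1.3"))
# ('LapDog.Messaging.v2.lvlib', 'lapdog_messaging2-2.0.1.3.vip')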
Ease of upgrading to a more recent version varies. If the APIs are fairly similar, a lot of the upgrade work can be done using find and replace. Dissimilar APIs will need a hefty amount of manual conversion. It can be a tedious and time-consuming process, but I really think it's the best way to go about it. For example, I've heard upgrading from Traditional DAQ to DAQmx can be a real bear. Imagine if installing DAQmx uninstalled Traditional DAQ and automatically tried to upgrade your source code. Maybe it gets it right... more likely it doesn't, and you have a bunch of broken VIs with no way to look at the original source code (other than setting up a second PC or VM with Traditional DAQ and the original source). Having human eyes and intelligence do the conversion is the best way to go about it, and to make that possible we need SxS installations.
I don't have any custom tools to see which (major) version I'm calling, but since each major version has a unique namespace and unique VIs, changing VI icon colors or adding a version glyph is always an option. My major versions have different palettes with a version glyph on them. I also keep the project's dependency branch open a lot so I can track what I'm linked to during development, especially if I'm upgrading code to a newer major revision of a reuse package. There are also the standard tools, such as Find All Instances and Find Callers. The VI Hierarchy window has great potential, but I find it hard to use efficiently, so I don't bother with it much.
---------------
If there is a root to the problem, I'd say it's simply that the topic hasn't been discussed much in the LabVIEW community and no consensus has been reached on how to address it. If it's decided SxS installations are the best path to take, then we can start figuring out tools to simplify the tasks associated with that model. It's no secret what my preferred solution is, and I'll continue doing SxS releases for as long as LapDog remains an out-of-channel distribution. But how the larger NI Tools Network ecosystem wants to manage the risk of VIP hell isn't my decision, hence the question...
03-27-2012 08:29 PM
Daklu wrote:
If PackageA and PackageB on the NI Tools Network rely on different versions of PackageC, which may be on the NI Tools Network or may be distributed by other means, A and B will not be able to coexist on the same computer. This problem cannot be resolved without changing the source code for A or B, which may not be an option and creates problems of its own.
We've seen this issue before when working on our internal products (e.g., ProductA relies on ReusePackage1_v1.0, ProductB relies on ReusePackage1_v2.0), but we've gotten around it for now by having a .vipc for each product, so if a developer is working on both ProductA and ProductB, they can roll forward/back between products by applying the associated .vipc as they go. (As an aside, it'd be neat to have a .vipc associated with the project, so when the developer opens the project it would ask whether to apply the .vipc, and when they close it, whether to update the .vipc.)
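In rough Python pseudo-terms, the workflow looks like this (the data structures are hypothetical; a real .vipc is applied through VIPM itself):

VIPC = {
    "ProductA": {"ReusePackage1": "1.0"},
    "ProductB": {"ReusePackage1": "2.0"},
}

installed = {}  # simulated machine state: package name -> version

def apply_vipc(product):
    # Roll the machine forward or back to the versions this product pins.
    for pkg, ver in VIPC[product].items():
        if installed.get(pkg) != ver:
            print(f"switching {pkg}: {installed.get(pkg)} -> {ver}")
            installed[pkg] = ver

apply_vipc("ProductA")  # switching ReusePackage1: None -> 1.0
apply_vipc("ProductB")  # switching ReusePackage1: 1.0 -> 2.0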
03-28-2012 01:31 AM
Daklu wrote: Ease of upgrading to a more recent version varies. If the APIs are fairly similar, a lot of the upgrade work can be done using find and replace. Dissimilar APIs will need a hefty amount of manual conversion. It can be a tedious and time-consuming process, but I really think it's the best way to go about it. For example, I've heard upgrading from Traditional DAQ to DAQmx can be a real bear. Imagine if installing DAQmx uninstalled Traditional DAQ and automatically tried to upgrade your source code. Maybe it gets it right... more likely it doesn't, and you have a bunch of broken VIs with no way to look at the original source code (other than setting up a second PC or VM with Traditional DAQ and the original source). Having human eyes and intelligence do the conversion is the best way to go about it, and to make that possible we need SxS installations.
This is not a valid example. DAQmx and Traditional DAQ are two totally different software products. Completely different interfaces and APIs. DAQmx is not v2 of Traditional DAQ. They might as well have been written by two different companies. That's how different they are.
03-28-2012 03:32 AM
MichaelAivaliotis wrote:
Daklu wrote: ...I've heard upgrading from Traditional DAQ to DAQmx can be a real bear... Having human eyes and intelligence do the conversion is the best way to go about it, and to make that possible we need SxS installations.
This is not a valid example... DAQmx is not v2 of Traditional DAQ. They might as well have been written by two different companies.
I agree, but I think Daklu was trying to use its differences as an illustrative example, not a practical one.
03-28-2012 11:17 AM
MichaelAivaliotis wrote:
This is not a valid example... DAQmx is not v2 of Traditional DAQ.
Chris is right; it's meant primarily to illustrate the difficulties we might encounter when we cannot install incompatible packages side by side.
"In the late 1990s, the NI-DAQ team recognized that the need to maintain API compatibility with previous releases increased the difficulty of adding new features and devices to Traditional NI-DAQ (Legacy). In addition, the Traditional NI-DAQ (Legacy) API, through its long evolution, had developed many problems that needed to be fixed. NI-DAQ developers were having difficulty intuitively extending the API and optimizing the increasingly broad spectrum of customer applications. NI concluded that a new API design and architecture would help NI-DAQ developers more easily add new features and new devices, fix many existing driver problems, and optimize performance at the same time." (DAQmx FAQ.)
I'm actually not that familiar with NI's line of DAQ products, Traditional DAQ, or DAQmx, but it seems to me the decision to call DAQmx a new version or a new product is open to interpretation. It's certainly marketed as a different product, but both are produced by the same company and DAQmx is positioned to replace Traditional DAQ. That screams "new version" to me. I'd guess the decision to market it as a new product was based on business issues (differentiating between them) and practical issues (allowing SxS installation) rather than functional ones (how different the API is).
Here's a slightly different way to look at it... The NITN and VIPM present everything as packages. As long as the product and the package align, all is good. However, packages are a distribution and deployment mechanism that does not always align with product definitions. I consider LapDog.Messaging a single product even though I created one package for v1 and another package for v2 to allow SxS installation. The Actor Framework is on v3, and it breaks backwards compatibility; I believe v3 will replace any previous AF versions. OpenG is on v4. Is everything there fully backwards compatible? (I honestly don't know.) Whenever a product update breaks backwards compatibility, there may be developers who need to have both versions installed. The 'one product = one package' deployment strategy doesn't allow that.
Creating a new package line and appending the major version number to the lvlib name is my workaround for SxS installation. Appending the version number to the package's Product Name is how I communicate to other developers that it is the same product. It's not a perfect solution. Sometimes it's hard to decide whether to rev the major version or not; I have to weigh the cost for end users of porting their code to the new library against the risk of not being able to install incompatible versions SxS. For instance, suppose shortly after I release v3 I discover a connector pane terminal is set to recommended instead of required. Changing it to required definitely breaks backwards compatibility, but it will make the API more robust and easier to use. Do I penalize the early adopters and force them through the pain of updating to v4, or do I issue v3.0.1 and hope everybody upgrades? (In the past I've gone the bug-fix route.)
Chris wrote: "We've seen this issue before when working on our internal products... but we've gotten around it for now by having a .vipc for each product."
Yeah, I've done that too. It works as long as ReusePackage1_v1 and ReusePackage1_v2 don't have to be in memory at the same time. There are workarounds that allow both package versions to coexist on the user's computer, but they involve copying the package source code into a unique namespace and redirecting the product source code to use the copy. That's not always a good solution.
[Doh! The lightbulb just went off re: Jim's relinking comment... ]
I started typing up some other thoughts about potential solutions, but my brain ran away with ideas of dynamic linking, exporting lvlib interfaces, namespaces, and all sorts of other things I couldn't quite get a handle on. I'll have to mull it over for a while...
03-28-2012 11:40 AM
Daklu,
To answer your specific question about tools on the LabVIEW Tools Network (which must comply with Compatible With LabVIEW): we always test each product against the newest versions of all its dependencies. If something is broken, we ask the developer to fix their tool so it supports the newest version as well. Each year we re-review products to make sure they work with the newest version of LabVIEW, and at the same time we check that they work with the newest versions of all other dependencies.
Now, for LabVIEW Tools Network tools that ARE the reuse libraries, we want them to handle any major changes gracefully. That could mean creating wrapper VIs for changed code, renaming the package so it can't conflict, or simply documenting in the release notes that it is incompatible. It's up to the developer how to fix it, as long as any changes are documented and a path to resolution is clear for anyone using the tool.
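For example, the wrapper option might look something like this, sketched in Python with hypothetical names (in LabVIEW these would be thin wrapper VIs preserving the old connector pane):

def read_sensor_v2(channel, timeout_ms=1000):
    # New API: adds a timeout parameter and returns (value, ok).
    return (channel * 1.0, True)

def read_sensor(channel):
    # Wrapper preserving the old v1 signature: delegates to the new
    # implementation and discards the status flag old callers never saw.
    value, _ok = read_sensor_v2(channel)
    return value

print(read_sensor(3))  # 3.0 -- old callers keep working unchanged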
I hope this sheds some light on our processes here, but let me know if you have any questions...
David