Creating a SQL interface for DataFinder seems to be a valid and easy approach to access DataFinder, at least at first sight.
While investigating this option, we carried out the following feasibility studies:
1) Offer direct access to DataFinder's database. This is the easiest way to offer an SQL interface to DataFinder, since e.g. the ODBC driver of the underlying database can be used directly. On the other hand, the DataFinder database is subject to change during the ongoing development of DataFinder, e.g. to compact the index, improve access performance, or extend capabilities; the database schema can therefore change in future versions (and has changed in the past). With that in mind, offering direct access does not seem to be a real option.
2) Offer a (constant) database view onto DataFinder's tables. If this alternative is taken, there is a trade-off: either optimize queries by assigning indexes to several columns of those views, which degrades index performance whenever data is added to the database, or leave the views unindexed and accept slower queries (a sketch of such a view follows this list).
3) Have DataFinder perform SQL queries against 'virtual tables'. In this approach DataFinder simulates a constant database schema, e.g. by offering three tables such as File, Group, and Channel and allowing queries against those tables (a sketch of such a query follows this list). DataFinder would then need to support the whole SQL syntax (including e.g. joins, ordering, nested queries, ...) and map these queries to the real underlying database schema. This can become a really complex approach.
4) Use some kind of ODBC driver toolkit. There are several toolkits for creating ODBC, ADO.NET, or JDBC drivers for non-SQL data stores, which could potentially be used to bridge between SQL and DataFinder's query language. These tools handle all the complex SQL features described in the previous item by requesting the underlying store to provide all the necessary data. In the case of DataFinder this can be up to the whole database index, meaning some or all of the content of DataFinder's index would have to be transferred to the client side to evaluate a query, which makes this not a real option either.
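To make option 2 a bit more concrete, here is a minimal sketch of what such a constant view might look like. The internal table and column names below (t_instance, t_property, and their columns) are purely hypothetical; the real DataFinder schema is not documented here and, as noted above, changes between versions.

```sql
-- Hypothetical sketch only: internal table/column names are invented for
-- illustration and do not reflect the real DataFinder schema.
CREATE VIEW v_channel AS
SELECT i.instance_id          AS channel_id,
       name_prop.string_value AS channel_name,
       unit_prop.string_value AS unit
FROM   t_instance i
JOIN   t_property name_prop
       ON  name_prop.instance_id = i.instance_id
       AND name_prop.prop_key    = 'name'
LEFT JOIN t_property unit_prop
       ON  unit_prop.instance_id = i.instance_id
       AND unit_prop.prop_key    = 'unit'
WHERE  i.hierarchy_level = 'channel';
```

The trade-off described in option 2 then becomes concrete: indexes on the property key and value columns would speed up queries against such a view, but every write to the index would also have to maintain those indexes.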
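For option 3, the table names File, Group, and Channel come from the proposal above; everything else (the column names and the FileId/GroupId join keys) is an assumption made only to illustrate the kind of SQL that DataFinder would have to parse and map onto its real schema.

```sql
-- Sketch of a query against the proposed virtual tables. "File" and "Group"
-- are quoted because GROUP is a reserved word in SQL; all column names are
-- hypothetical.
SELECT f.Name AS FileName,
       g.Name AS GroupName,
       c.Name AS ChannelName
FROM   "File" f
JOIN   "Group"   g ON g.FileId  = f.FileId
JOIN   "Channel" c ON c.GroupId = g.GroupId
WHERE  c.Unit = 'V'
ORDER  BY f.Name, g.Name;
```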
I had always considered option 3 as the approach to support. I agree that ultimately the entire SQL syntax should be supported, but the initial release doesn't have to support everything. I would be satisfied with queries similar to what we have now plus a few extra things. I would suggest some aggregate functions, for example:
Using the SQL aggregate functions, you can determine various statistics on sets of values. You can use these functions in a query and in aggregate expressions in the SQL property of a QueryDef object, or when creating a Recordset object based on an SQL query.
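As an illustration of the kind of aggregate query being asked for, here is a sketch against the hypothetical virtual Channel table from option 3 above; the column names (GroupId, Minimum, Maximum) are assumptions, not an existing DataFinder interface.

```sql
-- Hypothetical aggregate query: count the channels per group and report the
-- overall minimum/maximum values, assuming such columns were exposed.
SELECT GroupId,
       COUNT(*)     AS ChannelCount,
       MIN(Minimum) AS OverallMin,
       MAX(Maximum) AS OverallMax
FROM   Channel
GROUP  BY GroupId;
```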
If you want to query the DataFinder from LabVIEW, you should use the LabVIEW DataFinder Toolkit (released Summer of 2009), which was designed to do exactly that.
What I mean is that querying an SQL database from LabVIEW is easy to do, so as long as you're moving data for manipulation from SQL to LabVIEW/DIAdem, you're in business.
I want to clarify the request. I am sure Brad understands what I want, but Creese has introduced a completely different topic. The issue is accessing the DataFinder, not a "regular" database. However, I want the DataFinder access to work like a regular database.
Venting: on the topic of the toolkit and the method of building queries in DIAdem, both methods are clumsy for constructing complex queries. (What exactly are they teaching about databases in Germany??) Why invent a new query method when the world already uses another one? That is water under the bridge, as we say here.
The LabVIEW DataFinder Toolkit query-building VIs were designed to make it easy for customers who have never used SQL databases to query the DataFinder. R&D felt that the simplified VIs in the toolkit would be a much easier starting point for LabVIEW programmers not familiar with SQL. Since we market the DataFinder primarily as a self-managing database for engineers who don't know anything about databases, it was important that the LabVIEW API not rely heavily on SQL syntax or concepts.
There is another reason that many built-in features available through SQL are not available in the LabVIEW DataFinder API or in DIAdem's VBScript DataFinder API. When you're using SQL to address the contents of a database, you need to know ahead of time what the table structure and linking of the database you're querying looks like. But since we have changed/tweaked the database schema of the DataFinder with nearly every release to improve performance and add features, it would be a versioning nightmare for our customers if their queries were based directly on the DataFinder database schema. So we intentionally abstracted the customers away from the database schema details and exposed only those database requests/actions which we thought would be important and for which we would be willing to ensure compatibility moving forward. If R&D tried to always offer the same open view table to SQL-savvy customers as the underlying database schema evolved, this could potentially become a versioning nightmare for us or restrict us from offering new features and performance improvements.
I do understand your frustration, because I've had to personally wait years at a time to get pretty standard SQL features such as AND, OR, DISTINCT, and ORDER BY into the DataFinder queries, and there will always be things you can easily do with SQL and a static database schema that we either haven't yet implemented or have decided not to implement in the DataFinder. But the beauty and main appeal of the DataFinder is that you don't have to design a static schema and manage it yourself -- the DataFinder dynamically adds new properties it has never seen before as they show up in new data files, with no database management required. We make this possible by deploying our own custom database schema and surrounding the DataFinder database with special automatic processes that keep things ship-shape. But with each new DataFinder version R&D thinks up new methods of doing these tasks better, and that often results in changes both to our deployed custom database schema and to those various automatic processes.
This request was declined after thorough discussion. The reasons have been outlined by Stefan_R and Brad_Turpin, including the requirement to be able to change the structure of the index in order to provide optimizations and extensions in future versions.
Thanks for your understanding,
Marcus