12-08-2016 02:10 AM
Hello all,
I'm new to this forum, so please excuse me if something similar has already been asked and answered. I tried searching both the forum and the web in general, but didn't find useful information. Maybe I was using the wrong keywords...
Question: If the connection to the database is lost, what would be a feasible way to implement some sort of buffer for the measurement data that should be written into the database?
Background: I'm making a user interface application for controlling a machine. This UI application runs on a normal Windows PC and communicates with a PLC (which actually controls the machine) by triggering tasks for the PLC and by periodically requesting measurement values (temperatures etc. from sensors connected to the PLC). These measurement values are shown in the UI application (graphs, etc.). The UI application also stores the measurement values, approximately once per second, both in a) text-based files and b) a Microsoft SQL database. The database server is located on another computer. I use the Database Connectivity Toolkit to handle communication with the database. The UI application, and the whole machine, is now almost ready and working OK.
I have built the UI application on the Queued Message Handler design pattern. Data logging into files and into the database runs in a separate loop.
Problem: The customer wants that, in case of "connectivity problems" (i.e. the connection to the database is lost for some reason), there should be some sort of buffer holding up to about 4000 "DB Tools Insert Data" tasks until the connection is re-established or the maximum number of tasks is reached. If the connection is re-established, all the buffered data should be inserted into the database. If the connection is not re-established before the buffer reaches its maximum size, some sort of error message should be given.
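In pseudo-code, the behaviour I'm after is roughly the following (Python is used here only to illustrate the logic; the real application is LabVIEW with the Database Connectivity Toolkit, and `db_is_up`, `insert_into_db` and `report_error` are just placeholders, not real toolkit functions):

```python
# Sketch of the requested behaviour only; not LabVIEW / toolkit code.
from collections import deque

MAX_BUFFERED_INSERTS = 4000      # the ~4000 tasks mentioned above
pending = deque()                # holds insert tasks while the DB is unreachable

def log_measurement(record, db_is_up, insert_into_db, report_error):
    if db_is_up():
        # Flush anything buffered while the connection was down, oldest first...
        while pending:
            insert_into_db(pending.popleft())
        # ...then insert the new record as usual.
        insert_into_db(record)
    elif len(pending) < MAX_BUFFERED_INSERTS:
        pending.append(record)
    else:
        # Buffer is full and the connection is still down: notify the operator.
        report_error("Database unreachable and insert buffer is full.")
```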
What would be a feasible way to implement this kind of functionality?
I will appreciate all hints, tips, ideas, etc. about this!
Best regards,
// Timo Prusi
12-08-2016 04:00 AM
If you do the inserts inside a DB transaction, they either all succeed or none of them do (the transaction rolls back if it fails halfway through).
If it fails, save the data to a file in a "to DB" folder. When the connection is up again, send this "to DB" data to the DB and delete the file (so you don't send duplicates).
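Roughly like this in pseudo-code (Python just to show the idea; the `db` object, its `transaction()`/`insert()` calls and the record's `timestamp` field are placeholders, not Database Connectivity Toolkit calls):

```python
# Sketch only: "db" and its calls stand in for whatever your DB layer provides.
import glob
import json
import os

TO_DB_DIR = "to DB"   # local folder buffering data the DB hasn't received yet

def store_record(db, record):
    """Try a transactional insert; if the DB is unreachable, buffer to a file instead."""
    try:
        with db.transaction():               # all-or-nothing: rolls back on failure
            db.insert("measurements", record)
    except ConnectionError:
        os.makedirs(TO_DB_DIR, exist_ok=True)
        # File name taken from the record's timestamp (placeholder field).
        path = os.path.join(TO_DB_DIR, "%s.json" % record["timestamp"])
        with open(path, "w") as f:
            json.dump(record, f)

def flush_buffered(db):
    """Once the connection is back: send every buffered file to the DB,
    then delete it so the same data is never sent twice."""
    for path in sorted(glob.glob(os.path.join(TO_DB_DIR, "*.json"))):
        with open(path) as f:
            record = json.load(f)
        with db.transaction():
            db.insert("measurements", record)
        os.remove(path)                      # delete only after a successful insert
```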
/Y
12-08-2016 07:52 AM
You have a variant of the Producer/Consumer situation here. You are Producing data at some rate for some period of time, and (ultimately) want to Consume it in a Database. However, your Consumer is not reliable for "external reasons" (connection to the Database can be lost). So you design a "Monkey-in-the-Middle" Producer/Consumer-Producer/Consumer process -- as the data are being Produced, Consume them in a reliable form, say, a binary (or other) file on your local machine. Depending on the situation, either simultaneously or after this process has finished (and the program has exited), have another Producer read this intermediate file and have a Consumer Write it to the Database (if you do this "off-line", you can probably do it "all at once" without needing Producer/Consumer, as timing won't be an issue).
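Here is a rough sketch of that two-stage structure (Python's Queue and plain functions standing in for LabVIEW's parallel loops and queues; `read_sensors`, `insert_into_db` and `acquisition_running` are placeholders):

```python
# Stage 1: Producer -> local file (reliable).  Stage 2: file -> Database (later, off-line).
import json
import queue

measurements = queue.Queue()

def acquisition_loop(read_sensors, acquisition_running):
    """Producer: push one record onto the queue each time the sensors are read."""
    while acquisition_running():
        measurements.put(read_sensors())

def file_logger(local_path, acquisition_running):
    """Reliable Consumer: append every record to a local file as it arrives."""
    with open(local_path, "a") as f:
        while acquisition_running() or not measurements.empty():
            try:
                record = measurements.get(timeout=1)
            except queue.Empty:
                continue
            f.write(json.dumps(record) + "\n")
            f.flush()                        # keep the file safe even if the program dies

def upload_to_database(local_path, insert_into_db):
    """Second stage, e.g. run off-line after acquisition: read the intermediate
    file and write each record to the Database, all at once."""
    with open(local_path) as f:
        for line in f:
            insert_into_db(json.loads(line))
```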
Bob Schor
12-09-2016 01:22 AM
Thank you Yamaeda and Bob_Schor,
Both of you seem to suggest a similar approach: if the connection to the database is lost, write the data into a (temporary) file. Once the connection is open again, read the data from these temporary file(s) and add it to the database. Basically this is what I was thinking myself as well.
I will discuss this with our customer and hear how they would like this to work, i.e. whether adding the missed data should be done while the application is still running or afterwards as a separate step.
Once I have implemented this, I will post it here and mark your suggestions as solutions.
Thank you for your help!
//Timo
02-01-2017 06:08 AM
Hi,
Finally got this done! The actual implementation was easy and quickly done, but I only recently got to work on this again.
My implementation is pretty much what Yamaeda suggested: if inserting into the database fails, the data is written to a local buffer file, and once the connection is available again the buffered data is inserted into the database and the file is deleted.
Seems to be working OK, so thank you Yamaeda and Bob_Schor!