03-27-2013 05:37 PM
So, I'm building a small app. The display includes a graph and a start/pause command button.
The start/pause button toggles the state of the app. When the app enters the running state, it sends a TCP request for data to another device.
When the other device sends data back, a callback is triggered that:
- reads the reply
- processes the reply (converts ASCII to binary)
- plots the data on the graph
- sends a TCP request for more data.
This is intended to go on indefinitely until the user presses pause (the same button, now with different text and color). When the app enters the paused state, the callback drains the input buffer and then exits.
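In case it helps, here is a stripped-down sketch of the pattern I'm describing. The "SEND" request string and the ProcessAndPlot() helper are placeholders, not my real code, and error handling is omitted:

#include <userint.h>
#include <tcpsupp.h>

static unsigned int gTcpHandle;      /* from ConnectToTCPServer() */
static volatile int gRunning = 0;

/* hypothetical helper: ASCII-to-binary conversion and graph plotting */
static void ProcessAndPlot (const char *data, int nBytes)
{
    /* conversion and a PlotY() on the graph control would go here */
}

static void RequestMoreData (void)
{
    char request[] = "SEND\n";       /* hypothetical protocol command */
    ClientTCPWrite (gTcpHandle, request, sizeof request - 1, 1000);
}

/* registered via ConnectToTCPServer(); fires on TCP_DATAREADY */
int CVICALLBACK TcpCallback (unsigned int handle, int event,
                             int error, void *callbackData)
{
    char buf[5000];
    int  bytes;

    if (event == TCP_DATAREADY) {
        bytes = ClientTCPRead (handle, buf, sizeof buf, 1000);
        if (bytes > 0 && gRunning) {
            ProcessAndPlot (buf, bytes);
            RequestMoreData ();      /* keep the cycle going */
        }
        /* when paused, the read above just drains the buffer */
    }
    return 0;
}

int CVICALLBACK StartPause (int panel, int control, int event,
                            void *callbackData, int eventData1, int eventData2)
{
    if (event != EVENT_COMMIT)
        return 0;
    gRunning = !gRunning;
    SetCtrlAttribute (panel, control, ATTR_LABEL_TEXT,
                      gRunning ? "Pause" : "Start");
    if (gRunning)
        RequestMoreData ();
    return 0;
}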
First question: anyone want to express an opinion on the validity of this design? Might there be a better way to do this?
Problem: the pause button is, for lack of a better term, unreliable. Sometimes it works as desired, but sometimes the app keeps running (and ignores all user input), and other times it just hangs the program. This can be improved by increasing the timeout value of the TCP read command.
Second question: any ideas on where to look for the cause of this?
I should point out that I'm still not sure I fully understand the mechanics of the ClientTCPRead() function. Its behavior leads me to believe it's treating the input stream as something of a hybrid between packets and streaming. If anyone knows of any supplementary documentation beyond the help page, I'd love to see that.
Thanks. I think this is pretty close to working; I just need to nail down this last issue.
03-28-2013 11:05 AM
Hey mzimmers,
I can't immediately see any major problems with the architecture you've chosen, as long as you're still processing user events in a timely manner (and it sounds like you're aware of this issue since you mentioned blocking calls). To clarify, with the pause button, does its callback always get called and just not work properly, or does the callback sometimes not get called at all? That might tell us if the problem lies in how we process UI events, or in the code in the callback itself.
As for more documentation on the ClientTCPRead() function, we do have a Developer Zone tutorial on TCP communication in CVI that I'd like to show you if you haven't already seen it: http://www.ni.com/white-paper/3067/en. It provides a high-level overview of the TCP communication functions available in CVI, so hopefully it will be helpful in this case. To answer specifically how ClientTCPRead() works: it simply reads from the available TCP data until the specified number of bytes has been read into the buffer.
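As a minimal illustration (the buffer size and timeout are arbitrary, and tcpHandle would come from a prior ConnectToTCPServer() call), a read with a status check might look like this:

#include <stdio.h>
#include <tcpsupp.h>

int ReadReply (unsigned int tcpHandle)
{
    char buffer[5000];
    int  bytesRead;

    /* returns the byte count on success, or a negative TCP error code */
    bytesRead = ClientTCPRead (tcpHandle, buffer, sizeof buffer, 20000);
    if (bytesRead < 0)
        printf ("ClientTCPRead failed with error %d\n", bytesRead);
    return bytesRead;
}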
I hope this helps, and let us know if you need any more assistance!
03-28-2013 11:31 AM
Hi, Daniel -
Thanks for the reply. I don't have a definite answer to your question, but I suspect that the pause callback isn't getting called. The reason I can't say for sure is that the program works (mostly) for me over a VPN, but it doesn't at a user's site, where he's on a LAN, and I can't debug there.
Here's what I think was happening: originally, the TCP callback only read the data and then posted a deferred call that did everything else I mentioned above. I modified the code so that everything is done within the callback (I may undo this). So, at the site with the LAN, perhaps many cycles of the callback/send-for-more-data loop were getting queued so quickly that the user never had a chance to hit pause.
There are two problems with this idea:
1. I've always had a Delay() in my callback/send cycle. Theoretically, shouldn't this allow the pause callback to enter the queue ahead of any later callbacks?
2. Even without the delay, the response from the server is relatively slow; it takes some time to return, which should also leave a window for the pause event to get in.
Anyway, enough babbling from me. Let me ask this: would it be better if I went back to the TCP callback only doing the read, and set up a recurring async timer to check the state of the button and the input queue, and process accordingly?
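Something like this rough sketch is what I have in mind; the queue helpers are hypothetical stand-ins for whatever the TCP callback stores:

#include <asynctmr.h>

extern int  DequeueReply (void);     /* hypothetical: TCP callback queued it */
extern void PlotQueuedData (void);   /* hypothetical */
extern void RequestMoreData (void);  /* hypothetical */

static volatile int gRunning = 0;    /* toggled by the start/pause button */

int CVICALLBACK PollTimer (int reserved, int timerId, int event,
                           void *callbackData, int eventData1, int eventData2)
{
    if (event == EVENT_TIMER_TICK && gRunning) {
        if (DequeueReply ()) {       /* anything waiting from the TCP callback? */
            PlotQueuedData ();
            RequestMoreData ();
        }
    }
    return 0;
}

void StartPolling (void)
{
    /* fire every 100 ms, repeat indefinitely (-1), start enabled */
    NewAsyncTimer (0.1, -1, 1, PollTimer, NULL);
}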
Final note for this post: the behavior you describe for ClientTCPRead() is how I'd expect it to work, but I've observed different behavior. For example, if I post a read with a maximum size of 5000 and a timeout of 20 seconds (20,000 milliseconds), and the server sends a message shorter than 5000, the call returns well before the timeout, with a return value of ~3750, which is the correct size of the message read. It's almost as though the TCP read "knows" that the server is done sending (for the time being).
03-29-2013 11:20 AM
Mzimmers,
One reason the ClientTCPRead function might return before the specified timeout is if all of the data is available and no more is coming in. The function help says, "The function returns before the timeout period expires if a portion of the data could not be read or written to the port or an error occurs." It's possible that some of the data couldn't be read for some reason, or that an error occurred. You could check the return status of the function to determine whether an error occurred.
I also noticed your other post about the callbacks, and I agree with Roberto's statement that it's fine to perform actions in the callback. However, if this callback is going to be called many times, it does make sense to make that callback as "lightweight" as possible and perform actions in other functions as needed. This is especially useful for blocking calls; if you can do the TCP Read asynchronously in another thread, it will keep the rest of your application from being blocked during the read.
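As a rough sketch of that "lightweight" approach (simplified to a single static buffer, so it's illustrative rather than production-ready), the data-ready callback could do nothing but the read and defer everything else with PostDeferredCall():

#include <utility.h>
#include <tcpsupp.h>

static char gBuf[5000];
static int  gBytes;

/* runs later in the main thread: convert, plot, request more data */
void CVICALLBACK ProcessReply (void *callbackData)
{
    /* heavy lifting goes here, off the TCP callback's critical path */
}

/* lightweight TCP callback: read only, then defer the processing */
int CVICALLBACK TcpCallback (unsigned int handle, int event,
                             int error, void *callbackData)
{
    if (event == TCP_DATAREADY) {
        gBytes = ClientTCPRead (handle, gBuf, sizeof gBuf, 1000);
        if (gBytes > 0)
            PostDeferredCall (ProcessReply, NULL);
    }
    return 0;
}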
I would suggest trying the code the way you had it before and seeing whether the situation improves.
03-29-2013 11:42 AM
Hi, Daniel –
I can see that I'm not describing this situation with the ClientTCPRead well. Let me try to be more explicit:
1. I make the call with a 5000-character buffer and a 20-second timeout.
2. The function returns MUCH faster than the 20 seconds.
3. The return value is around 3700 characters (no error code), and the data in those 3700 characters appears to be valid.
So: no timeout reported (no -11 in the return value), and not a full buffer. I can reproduce this situation.
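For what it's worth, here is essentially the call pattern that reproduces it, with Timer() from the Utility Library added to put a number on "well before the timeout" (connection setup omitted):

#include <stdio.h>
#include <utility.h>
#include <tcpsupp.h>

void TimeTheRead (unsigned int tcpHandle)
{
    char   buf[5000];
    double start = Timer ();
    int    bytes = ClientTCPRead (tcpHandle, buf, sizeof buf, 20000);

    /* observed: ~3700 bytes back in well under a second, despite the
       20,000 ms timeout and the 5000-byte buffer */
    printf ("read %d bytes in %.3f s\n", bytes, Timer () - start);
}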
Regarding your other point: I'm experimenting right now, and will report back shortly.
Thanks for the help...I'm a little frustrated at the moment, but having you guys on the forums helping me out makes a big difference.
04-01-2013 04:50 PM
mzimmers,
I'm still looking into this and trying to figure out what might be happening. To clarify, is the server sending more than 5000 characters, or does the 3700 correspond to the total amount of data the server sent? That might help us narrow this down.
Also, how are you determining how quickly the function is returning?
04-01-2013 05:40 PM
Hi, Daniel -
3700 is the number of characters sent by the server. And my time measurements are informal and imprecise: I suspected something was going wrong with my original timeout value, which was closer to 1 second, so I just put in a huge interval (20 seconds), and the call returned in less than a second (again, with the 3700-character read).
I don't know if this is noteworthy, but...the 3700 represents a complete message from the server; in other words, the server considers itself done until it gets another request. Like I mentioned elsewhere, it's almost as though the read command has some "smarts" that tell it when to finish, but they're based on neither the timeout nor the buffer size. Mysterious...
04-02-2013 09:50 AM
Hey mzimmers,
Since the help for ClientTCPRead() describes the buffer size as the "maximum number of characters to read," I think the function returns as soon as it has consumed the data currently available on the connection, rather than blocking until the buffer fills or the timeout expires. Because the server sends its 3700 characters in one burst and then stops transmitting, the read returns with exactly that complete message. This is generally how we'd want a TCP read function to work, so that we can process each message as we receive it. For that reason, and since the 3700 characters are a complete message sent by the server, I believe this is intended and expected behavior for the ClientTCPRead() function.
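That said, since TCP is ultimately a byte stream, there's no hard guarantee that a message always arrives in one read. If you ever need to be defensive about that, you could accumulate reads until a terminator shows up; here's a sketch that assumes a hypothetical newline-terminated protocol:

#include <string.h>
#include <tcpsupp.h>

int ReadWholeMessage (unsigned int handle, char *buf, int bufSize)
{
    int total = 0;

    while (total < bufSize - 1) {
        int n = ClientTCPRead (handle, buf + total, bufSize - 1 - total, 5000);
        if (n <= 0)
            return n;               /* error or timeout */
        total += n;
        buf[total] = '\0';
        if (strchr (buf, '\n'))     /* assumed message terminator */
            break;
    }
    return total;
}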