
Serial write takes unexpectedly long when more than 7 bytes are written

Hi,

 

My VI is attached.

As you see, it's very simple.

 

- The output buffer FIFO is set to 128 bytes, which is generously more than I need.

- My baud rate is 2.5 Mbps.

- I write a string of 9 bytes, such as 012345678, and the execution time of the VI is around 40 µs.

  I assumed this was because of the blocking nature of the synchronous write, so I decided to switch to asynchronous write, since I need to go above 20 kHz.

- When I switch to asynchronous write, it actually gets worse: ~58 µs. It seems like asynchronous mode doesn't help at all.

 

That's my problem. I also ran some simple experiments to debug it.

 

- When my string is shorter than 8 bytes, everything is beautiful: asynchronous write takes roughly 15 µs.

  When I enter a string of 8 bytes or longer, it jumps up to 58 µs again.
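
To make the test case explicit, here is roughly what my VI does, expressed through the raw VISA C API. This is only a sketch: the ASRL1 resource name is just an example, and error checking is omitted.

```c
#include <visa.h>

int main(void)
{
    ViSession rm, port;
    ViUInt32  written;

    viOpenDefaultRM(&rm);
    viOpen(rm, "ASRL1::INSTR", VI_NULL, VI_NULL, &port); /* example resource name */

    viSetAttribute(port, VI_ATTR_ASRL_BAUD, 2500000);    /* 2.5 Mbps */
    viSetBuf(port, VI_IO_OUT_BUF, 128);                  /* 128-byte output buffer, as in my VI */

    /* timestamp A (any high-resolution timer) */
    viWrite(port, (ViBuf)"012345678", 9, &written);      /* the 9-byte write that takes ~40 us */
    /* timestamp B; B - A is the duration I am reporting */

    viClose(port);
    viClose(rm);
    return 0;
}
```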

 

What am I doing wrong? I'm stuck here.

 

Gorkem Secer.

 

Message 1 of 17

Where are you getting these times from? Regardless, coming from a Windows machine, numbers down in the µs range are really hard to believe.

 

Out of curiosity, what happens if you remove the setting of the I/O buffer?


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 2 of 17

My program runs on a PXI machine with LabVIEW RT.

 

I haven't tried removing the I/O buffer setting, but I did try leaving the buffer size input empty, meaning it should fall back to its default value of 4096.

However, that did not work out.

I'll try removing the I/O buffer setting.

 

Gorkem

 

Message 3 of 17

There is quite a bit of overhead in any function call, and VISA is no exception. In fact, the data has to pass through several driver layers before it arrives at the port. So your example will likely take almost the same time whether you write 9 bytes or a considerably larger string, because the overhead of calling all those layers is inherent and roughly fixed. But the solution is not to go and write a single-layer system: even if that cut the overhead down a bit, it would not give you orders-of-magnitude performance gains, and it would come at the cost of a totally inflexible, unmodular, and unmanageable interface.

 

You might have to investigate other solutions if you really need that high-speed control, such as going to FPGA. Or change your approach and write bigger blocks to the serial port at once, as in the sketch below.
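
To illustrate the bigger-blocks idea: since the per-call overhead is roughly fixed, you can batch several messages into a single write and pay that overhead once. A minimal sketch in C against the VISA API (the 9-byte message size and batch of 10 are arbitrary example values):

```c
#include <string.h>
#include <visa.h>

#define MSG_LEN 9   /* one 9-byte message, as in the original post */
#define BATCH   10  /* messages per write; arbitrary example value */

/* Collect BATCH messages in memory, then pay the VISA call
   overhead once for the whole block instead of once per message. */
static void send_batched(ViSession port, const char msgs[BATCH][MSG_LEN])
{
    char     block[BATCH * MSG_LEN];
    ViUInt32 written;

    for (int i = 0; i < BATCH; i++)
        memcpy(block + i * MSG_LEN, msgs[i], MSG_LEN);

    viWrite(port, (ViBuf)block, sizeof block, &written);
}
```

Of course this trades per-message latency for throughput, so whether it is acceptable depends on what your 20 kHz loop actually needs.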

 

As a rule of thumb, you can get timing accuracy down to a few hundred milliseconds under desktop OSes (soft, not hard: Windows will happily meet this in 99.9% of cases but can always have outliers in the second range), and down to maybe 100 µs to 1 ms on a real-time system. For anything faster than that, you have to look at hardware-based solutions such as FPGA control.

Rolf Kalbermatter
My Blog
Message 4 of 17

Thanks for your opinion.

Your guess is right: going above 8 bytes does not degrade the performance further (at least up to 15 bytes, which is what I have tried).

 

However, I can't see why the VISA abstraction layers would distinguish between 7 and 8 bytes performance-wise.

Even if it's an issue hidden inside the implementation of the VISA API, I think we should pin down the exact reason.

If I have a sufficient FIFO size, what makes the execution time jump so sharply?

Further, I also can't see why asynchronous write does not decrease the execution time.

It looks like asynchronous write doesn't actually provide non-blocking serial writes.

Message 5 of 17

This is speculation, but many hardware FIFOs (that is, at the chip level) are around 8 bytes deep. I would guess that when you write more than that, the driver has to fill the FIFO -> transmit -> fill it again -> transmit. In other words, this is just a hardware limitation.
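
The numbers are at least consistent with that picture. Assuming 8-N-1 framing (1 start + 8 data + 1 stop = 10 bits on the wire per byte), a quick check:

```c
#include <stdio.h>

int main(void)
{
    const double baud = 2.5e6;  /* 2.5 Mbps, as in the original post    */
    const double bits = 10.0;   /* assumed 8-N-1: start + 8 data + stop */
    double us_per_byte = bits / baud * 1e6;

    printf("per byte: %.1f us\n", us_per_byte);       /* 4.0 us */
    printf("9 bytes : %.1f us\n", 9.0 * us_per_byte); /* 36.0 us, near the ~40 us observed */
    return 0;
}
```

So if the driver has to wait for FIFO space once a write exceeds the FIFO depth, stalls on the order of the wire time (tens of microseconds) are exactly what you would expect.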

Message 6 of 17
I think shansen1 is correct. When you set the buffer size, you are setting the system buffer, not the fixed hardware FIFO of the UART. Check which UART your PC has.
Message 7 of 17

That sounds reasonable, but the weird thing is that the performance degrades when I write more than 7 bytes, not more than 8.

 

Furthermore, if this is the case, it seems pointless to have an output FIFO buffer and asynchronous write at all, because the driver would inevitably be feeding the UART in 8-byte chunks anyway.

 

------------------

I select asynchronous write under the Synchronous I/O Mode option in the right-click menu. That's enough to enable it, right?

Message 8 of 17

For any number of reasons, the driver might not want to, or even be able to, fill the 8-byte hardware FIFO entirely. For instance, it may have to work around bugs in certain hardware. That workaround might not be necessary for the specific hardware revision in your system, but the driver has to work across many different hardware systems.

 

The degree of timing control you are trying to achieve is simply beyond a software system if you require reliable, hard timing. It may be achievable on a simpler but still powerful embedded system with custom-made software drivers and an RTOS, but not on a more general-purpose RT OS, even if the hardware is quite powerful. And such custom-made solutions would be orders of magnitude more expensive to develop.

 

You can keep barking up this tree, but it is unlikely that NI can do much about it without redesigning parts of the RT system, which is pretty much out of the question: they license it from Ardence/IntervalZero and only adapt it where strictly necessary to work with their hardware. Most likely their license doesn't even allow them to customize it at will beyond what is strictly necessary to get it running on their own hardware.

 

As far as I can see, your options are to rethink the timing requirements, to adapt the software so that the larger delay isn't a problem, or to go with a hardware solution based on an FPGA board or similar.

 

As to the difference between asynchronous and synchronous write: it is mostly about which VISA API is called underneath. The LabVIEW function remains blocking for as long as it takes to pass the data to the OS driver. In synchronous mode, the LabVIEW VI calls the synchronous VISA API once, and that call simply waits until VISA returns. In the asynchronous case, LabVIEW calls the asynchronous VISA function and then keeps looping inside its own cooperative multithreading layer until VISA indicates that the asynchronous operation has completed. This exists mostly for historical reasons, from when LabVIEW didn't have OS-supported multithreading and all multithreading happened in LabVIEW's cooperative code scheduler. Nowadays, asynchronous VISA mode has almost no benefit but will generally cause significantly more CPU load.
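
For what it is worth, the two modes map roughly onto the following VISA C calls. This is a sketch only; error handling is omitted, and LabVIEW's internal polling is approximated here by a blocking wait on the completion event:

```c
#include <visa.h>

/* Synchronous mode: one call that blocks until VISA returns. */
static void write_sync(ViSession port, ViBuf data, ViUInt32 len)
{
    ViUInt32 written;
    viWrite(port, data, len, &written);
}

/* Asynchronous mode: post the job, then wait for the I/O-completion
   event. LabVIEW does this waiting in its own cooperative polling
   loop, which is where the extra CPU load comes from. */
static void write_async(ViSession port, ViBuf data, ViUInt32 len)
{
    ViJobId     job;
    ViEventType type;
    ViEvent     ev;

    viEnableEvent(port, VI_EVENT_IO_COMPLETION, VI_QUEUE, VI_NULL);
    viWriteAsync(port, data, len, &job);
    viWaitOnEvent(port, VI_EVENT_IO_COMPLETION, 1000, &type, &ev);
    viClose(ev);  /* event contexts must be closed */
    viDisableEvent(port, VI_EVENT_IO_COMPLETION, VI_QUEUE);
}
```

Either way, the caller does not get control back before the data has been handed over, which is why the asynchronous path buys you nothing here.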

 

 

 

 

Rolf Kalbermatter
My Blog
Message 9 of 17

@rolfk wrote:

 

As to the difference between asynchronous and synchronous write: it is mostly about which VISA API is called underneath. The LabVIEW function remains blocking for as long as it takes to pass the data to the OS driver. In synchronous mode, the LabVIEW VI calls the synchronous VISA API once, and that call simply waits until VISA returns. In the asynchronous case, LabVIEW calls the asynchronous VISA function and then keeps looping inside its own cooperative multithreading layer until VISA indicates that the asynchronous operation has completed. This exists mostly for historical reasons, from when LabVIEW didn't have OS-supported multithreading and all multithreading happened in LabVIEW's cooperative code scheduler. Nowadays, asynchronous VISA mode has almost no benefit but will generally cause significantly more CPU load.

 


http://digital.ni.com/public.nsf/allkb/ECCAC3C8B9A2A31186256F0B005EEEF7

 

^^ Just agreeing with Rolf and pointing out that the KB article says to use synchronous mode for high performance.

Message 10 of 17