04-23-2014 11:14 PM
04-24-2014 02:00 AM
@Hornless.Rhino wrote:
So I'm working on improving the efficiency/speed of my producer-consumer system. I have two producers and one consumer (which among other things, writes files to the disk). I currently have both producers feeding the one queue which is then processed by my currently single consumer. Each queue item is a cluster of multiple data types and contains everything needed to "consume" it.

There are certain cases where more complex queue items get placed which take longer to process and slow down the consumer process. I was thinking of adding a second consumer loop to run in parallel with the first to take the weight off the single loop so to speak.

My question is, would it be more efficient to have both consumers dequeueing from the same queue, or would it be more efficient to have each producer feeding its own queue. For the sake of the exercise assume that I can guarantee that the complex queue elements will only come from a specific producer.
Interesting. I don't know that either way is more efficient, assuming the two consumer loops are identical and you don't care which dequeue gets which element. (I assume you know that whichever dequeue is ready first will get the next element, making that element unavailable to the other dequeue.)
Any gurus want to help out? I'm very curious to hear what everyone has to say...
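To picture that dequeue behaviour in text form (a rough Python analogy, not LabVIEW, since I can't attach a diagram here; the job times and the None shutdown sentinel are all invented for the sketch), two consumers sharing one queue look roughly like this:

import queue, threading, time

work = queue.Queue()          # stands in for the shared LabVIEW queue refnum

def consumer(name):
    while True:
        item = work.get()     # blocks until an element is available
        if item is None:      # sentinel tells this worker to stop
            break
        time.sleep(item)      # pretend "processing"; complex items sleep longer
        print(f"{name} handled a {item}s job")

workers = [threading.Thread(target=consumer, args=(f"consumer {i}",)) for i in (1, 2)]
for w in workers:
    w.start()

for job in [0.1, 0.1, 1.0, 0.1, 0.1]:   # one "complex" 1.0s job among simple ones
    work.put(job)
for _ in workers:
    work.put(None)            # one sentinel per worker
for w in workers:
    w.join()

Whichever consumer is free grabs the next element, so the one 1.0 s "complex" job ties up only one consumer while the other keeps draining the simple jobs.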
04-24-2014 02:21 AM
Hi,
If it's not too hard, you could set up both scenarios and find out which one works better.
Anyway, a note: if you go down the two-consumers path, make sure your consumers call only reentrant VIs. Otherwise, your two loops will block each other.
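To show why that matters, here is a text-only analogy (Python, not LabVIEW; the names and timings are made up for the sketch): a shared non-reentrant subVI behaves like a function guarded by a single lock, so two consumer loops calling it run one at a time instead of in parallel.

import threading, time

subvi_lock = threading.Lock()      # stands in for the single, shared subVI instance

def non_reentrant_subvi():
    with subvi_lock:               # only one caller at a time, like a non-reentrant VI
        time.sleep(0.5)            # the actual work

def consumer_loop(n_items):
    for _ in range(n_items):
        non_reentrant_subvi()      # both loops contend for the same lock here

t1 = threading.Thread(target=consumer_loop, args=(4,))
t2 = threading.Thread(target=consumer_loop, args=(4,))
start = time.time()
t1.start(); t2.start()
t1.join(); t2.join()
print(f"took {time.time() - start:.1f}s")  # ~4s serialized instead of ~2s in parallel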
04-24-2014 03:37 AM
@Hornless.Rhino wrote:
So I'm working on improving the efficiency/speed of my producer-consumer system. I have two producers and one consumer (which among other things, writes files to the disk). I currently have both producers feeding the one queue which is then processed by my currently single consumer. Each queue item is a cluster of multiple data types and contains everything needed to "consume" it.

There are certain cases where more complex queue items get placed which take longer to process and slow down the consumer process. I was thinking of adding a second consumer loop to run in parallel with the first to take the weight off the single loop so to speak.

My question is, would it be more efficient to have both consumers dequeueing from the same queue, or would it be more efficient to have each producer feeding its own queue. For the sake of the exercise assume that I can guarantee that the complex queue elements will only come from a specific producer.
From a performance perspective, the Queue primitives won't care whether you create two queues or share one queue between two consumers. Having multiple slaves (consumers) processing from one queue is perfectly acceptable, as long as the order of consumption is not important.

If you create management code to hand off the 'complex' jobs to a dedicated secondary consumer, then you are not taking full advantage of the multiple-slave framework. It all depends on the ratio of simple to complex jobs, and also on the time it takes to complete them. For example, if a complex job comes in only once every 10,000 simple jobs, then a dedicated second consumer for the complex jobs will be largely idle and therefore under-utilised. If, however, the ratio is more like one complex job for every two or three simple jobs, then you could find the primary consumer is largely under-utilised instead.
The best balance is to allow both consumers to dequeue all job types and therefore both be working at maximum capacity.
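For contrast, here is what the hand-off approach amounts to, as a rough Python stand-in for the LabVIEW queues (the is_complex flag and the job contents are invented for the sketch): the producer's "management code" has to decide, per job, which queue to feed.

import queue

simple_q  = queue.Queue()   # drained by the primary consumer
complex_q = queue.Queue()   # drained by a dedicated secondary consumer

def producer_enqueue(job):
    # "management code": route complex jobs to the dedicated consumer
    (complex_q if job["is_complex"] else simple_q).put(job)

producer_enqueue({"is_complex": False, "data": 1})
producer_enqueue({"is_complex": True,  "data": 2})
print(simple_q.qsize(), complex_q.qsize())   # prints: 1 1

With a single shared queue there is no such split to tune; both consumers simply stay as busy as the workload allows.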
04-24-2014 05:06 AM
It is hard to give a good answer without really knowing what is happening in your consumer loop. Does order matter? Can some of these processes actually be done in parallel?
My first suggestion would be to make another loop (and a corresponding queue) just for writing to disk. Since that tends to be a "slow" process anyway, it should help relieve your consumer loop at least somewhat. But, again, I do not know what your "complex" commands require of you.
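Something along these lines, sketched in Python rather than LabVIEW (the file name, message, and None shutdown sentinel are invented): the main consumer enqueues a write job and moves on, and only the writer loop ever touches the disk.

import queue, threading

write_q = queue.Queue()

def file_writer():
    while True:
        job = write_q.get()
        if job is None:                      # sentinel: shut the loop down
            break
        path, data = job
        with open(path, "a") as f:           # the only place disk I/O happens
            f.write(data)

writer = threading.Thread(target=file_writer)
writer.start()

# In the main consumer, instead of writing directly, just enqueue the write:
write_q.put(("log.txt", "processed item 42\n"))
write_q.put(None)
writer.join()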
04-24-2014 06:25 AM
I like Gerd's idea; then the producers only need to worry about one queue.
Another solution is spawning an asynchronous VI to reduce the queue execution time (the work gets offloaded to a separate process). It's basically the same idea, but without a second consumer/queue.
As mentioned, which one is better is a matter of style and use-case scenario.
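Roughly like this in text form (Python threads standing in for asynchronously called VIs; the pool size, job times, and None sentinel are invented for the sketch): the dequeue loop only dispatches, so a slow item never holds up the next dequeue.

from concurrent.futures import ThreadPoolExecutor
import queue, time

work = queue.Queue()
pool = ThreadPoolExecutor(max_workers=4)    # bound how many jobs run at once

def process(item):
    time.sleep(item)                        # stand-in for the real processing

def dispatcher():
    while True:
        item = work.get()
        if item is None:
            break
        pool.submit(process, item)          # offload; loop is free to dequeue again

for job in [0.1, 1.0, 0.1]:
    work.put(job)
work.put(None)
dispatcher()
pool.shutdown(wait=True)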
/Y
04-24-2014 08:23 AM
If the queue is simply there to make sure you don't lose items as you pass them between producer and consumer, and you don't actually care about processing order, I'll echo Yameda's idea of using an asynchronous VI to process the data. You can just launch them from the consumer loop. Another possibility would be to launch an asynchronous VI only for the data that takes a long time to process.
That and/or moving your disk writing to another loop will probably give you a good, scalable architecture.
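Sketched the same way as the earlier examples (Python threads as a stand-in for asynchronously launched VIs; the is_complex flag and timings are invented): quick items are handled inline, and only the long-running ones get their own asynchronous handler.

import queue, threading, time

work = queue.Queue()
pending = []                               # keep handles so we can wait at shutdown

def handle(item):
    time.sleep(item["cost"])               # stand-in for the slow processing

def consumer():
    while True:
        item = work.get()
        if item is None:
            break
        if item["is_complex"]:
            t = threading.Thread(target=handle, args=(item,))
            t.start()                      # hand off, like an asynchronous VI call
            pending.append(t)
        else:
            handle(item)                   # quick items are still handled inline

for job in [{"cost": 0.05, "is_complex": False},
            {"cost": 1.0,  "is_complex": True}]:
    work.put(job)
work.put(None)
consumer()
for t in pending:
    t.join()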