03-28-2014 09:47 AM
Hello,
everything is fine, just a question out of curiosity:
using
#pragma omp for
before a for loop specifies that the iterations of the associated loop will be executed in parallel.
My question: let's assume I have a loop with 10000 iterations and two threads are created; will thread one calculate the first 5000 iterations while the second starts at index 5000, will they work on odd/even indices, or is it random, or....
Thanks!
04-02-2014 01:44 AM
Hi Wolfgang,
omp for and omp do divide loop iterations (if possible) equally among all threads.
If you want to know exactly how your loop works, look in the DLL to find out this information.
Bye,
Elisa
04-02-2014 02:02 AM
Of course I meant the documentation of your DLL.
04-02-2014 02:19 AM
ah, so now I am confused...
I am writing a program with CVI and am taking my first steps with OpenMP, so what documentation are you referring to?
04-02-2014 07:14 AM
CVI is a development environment and you use OpenMP as an external programming language, so I am referring to the documentation of your OpenMP DLL.
Maybe the following link is useful for you, look at the example-code:
http://de.wikipedia.org/wiki/OpenMP
Bye,
Elisa
04-02-2014 07:44 AM
Hello Elisa,
neither Wikipedia nor the CVI help addresses my question (nor does the OpenMP Application Program Interface specification); I checked them all before posting.
Never mind, it's an academic question that doesn't affect my project.
04-02-2014 09:25 AM
Hi,
If you don't specify a schedule using the schedule clause, the default is the static schedule, where the loop iterations are divided into chunks of (almost) equal size among the threads. So in your example, one thread will execute iterations 0 - 4999 and the second thread will execute iterations 5000 - 9999. You can always see which thread is executing which iteration with something like this:
#pragma omp parallel for
for (int i = 0; i < 10000; i++)
    printf ("Iteration %d executed by thread %d\n", i, omp_get_thread_num());
Each thread in the parallel region is given an id, with the 'master' thread having id 0; this is what omp_get_thread_num() returns. In the above case the 'chunk size' is 5000. You can change this by specifying the schedule(static, chunksize) clause:
#pragma omp parallel for schedule (static, 1000)
for (int i = 0; i < 10000; i++)
    printf ("Iteration %d executed by thread %d\n", i, omp_get_thread_num());
Here, each thread will execute 1000 loop iterations, then pick up the next available 'chunk' of 1000 iterations, and so on until all loop iterations are executed.
You can also specify the dynamic and guided schedules, which are useful when the loop iterations are not load balanced. See section 2.5.1 of the OpenMP 2.5 specification for more details.
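If it helps to see the schedules side by side, here is a minimal, self-contained sketch (not from this thread) that compares static and dynamic scheduling. It assumes a compiler with OpenMP support (e.g. gcc -fopenmp demo.c) and that more than one thread is available; the chunk size of 1000 and the printf reporting are purely illustrative:

/* Sketch: print which thread picks up each chunk of 1000 iterations
   under a static and under a dynamic schedule. */
#include <stdio.h>
#include <omp.h>

int main (void)
{
    int i;

    printf ("--- schedule(static, 1000): chunks assigned round-robin in advance ---\n");
    #pragma omp parallel for schedule(static, 1000)
    for (i = 0; i < 10000; i++)
        if (i % 1000 == 0)   /* report only the first iteration of each chunk */
            printf ("Iteration %5d executed by thread %d\n", i, omp_get_thread_num());

    printf ("--- schedule(dynamic, 1000): chunks handed out on demand ---\n");
    #pragma omp parallel for schedule(dynamic, 1000)
    for (i = 0; i < 10000; i++)
        if (i % 1000 == 0)
            printf ("Iteration %5d executed by thread %d\n", i, omp_get_thread_num());

    return 0;
}

With the static schedule the mapping of chunks to threads is fixed and reproducible from run to run; with the dynamic schedule each thread grabs the next free chunk as soon as it finishes its current one, so the assignment can differ between runs, which is what makes it useful for poorly load-balanced loops.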
04-02-2014 10:00 AM
Thanks!
Sounds like I could have found that out myself