From: Ruben Safir
Subject: [NYLXS - HANGOUT] Re: omp pthread madness
On 04/02/2015 10:26 AM, Jens Thoms Toerring wrote:
> ruben wrote:
>> On Wed, 01 Apr 2015 10:04:59 +0000, Jens Thoms Toerring wrote:
>>> The obvious problem is that you have (at least) 2 threads which
>>> uncoordinatedly write into shared, global variables.
>> How is it uncoordinated? I couldn't think of anything that is shared
>> that would upset the final result.
> It is uncoordinated because it's unpredictable in which sequence
> the threads will access these global variables and, worse, they
> may do it at the same time.
> Are you aware of the 'sig_atomic_t' type in C? The reason
> for its existence is that reads from or writes to even
> something as simple as an integer aren't "atomic", i.e. one
> thread may have updated parts of that integer (say, one of
> the bytes it consists of) when another thread gets scheduled
> and tries to read this integer, which is now in some
> strange state. So the second thread will get some more or
> less random value instead of what the programmer expected.
> Or let's have thread 1 having started to update the integer,
> then thread 2 comes along, updates all of it and then we're
> back in thread 1 that writes the parts it hadn't got around
> to before. Result? A random value in memory. What's special
> about 'sig_atomic_t' is that with that type this can't
> happen: you're guaranteed that a thread of execution can't
> be interrupted while it writes or reads it.
> Now, if this already can happen with integers, it clearly isn't
> any better with floats or doubles. And your program assumes
> obviously that the random values you write into the 'workspace'
> have a well-definen upper limit (looks like it's 1). Unccor-
> dinated writes to the same location may result in values that
> doesn't satisfy this condition. And in other cases values
> may be smaller. You probably won't notice it, but it can
> happen and add some additional noise to the results of
> your calculations.
> Another place where things can go wrong is with your
> 'inside_the_circle' variable. There's a non-vanishing
> chance that, for the line that increments it, the following
> happens: thread 1 reads its value in order to increment
> it. Along comes thread 2 and possibly increments it several
> times before thread 1 gets to finish what it set out to do.
> Then thread 1 increments the value it had read and writes it
> into memory, thereby destroying all the work the other thread
> had done in between.
> Or you may end up with a situation where the compiler
> optimizes your program in such a way that during the
> count_inside() function the value of 'inside_the_circle' is
> kept in a CPU register and only this copy is incremented.
> It's then only written out at the very end of the function.
> And then, when both threads run that function at the same
> time, they both increment their copy in the CPU register and
> write it out at the end, with the thread getting there last
> simply overwriting whatever any other thread had put there
> before.
> So you can't trust what this variable is set to when both
> threads are done with it.
> In any case, if you run your program several times with
> the same seed for the random generator you'll rather likely
> end up with different results. Your program has become
> nondeterministic, with its result depending on subtle timing
> differences in how the threads get scheduled or access
> your global variables. That's something one tries to avoid
> like the plague.
> Regards, Jens