Now imagine that this global counter is a reference count for some resource, and that each thread decrements the counter the same number of times as it increments it. The intention is that when the counter reaches zero, the resource is freed. If some of the increments or decrements are lost, this can easily lead either to a memory leak or to the resource being freed too early. To demonstrate this behavior, I modified the program code:
unsigned const increment_count=2000000;
unsigned const thread_count=2;
std::cout<<thread_count<<" threads, Final i="<<i
Three consecutive runs of the modified code produced the following output:
2 threads, Final i=0, increments=4000000
2 threads, Final i=4294345393, increments=4000000
2 threads, Final i=169708, increments=4000000
The first run has the same number of increments as decrements overall, but something clearly went wrong in the other two runs. In the second, some increments were lost, so the counter reached zero before all the decrements had been applied; the remaining decrements took the unsigned counter below zero and it wrapped around to a huge value. The resource would have been freed prematurely, leading to random crashes or other bizarre behavior. In the third run, the counter has a final value that is non-zero but less than the number of increments, so some decrements were lost and the counter never reached zero. In that case, the resource would never be freed and you would have a memory leak.
Choose the Right Synchronization Scheme
To avoid problems from race conditions in general, and data races in particular, you need to look carefully at the data being shared and the operations performed on it. Eliminating any unnecessary sharing removes the potential for problems with that data. By thinking carefully about the sharing that remains, you can choose an appropriate synchronization scheme, whether that is atomic variables, mutex locks, or something else.