Avoiding the Perils of C++0x Data Races : Page 3

Find out what dangers race conditions in general and C++0x data races in particular pose to your concurrent code, as well as the strategies for avoiding them.

Now imagine that this global counter is a reference count for some resource, and that each thread decrements the counter the same number of times as it increments it. The intention is that the resource is freed when the counter reaches zero. If some of the increments or decrements do not behave as expected, the count can easily end up wrong, leading either to a memory leak or to the resource being freed too early. To demonstrate this behavior, I modified the program code:
#include <thread>
#include <iostream>
#include <vector>

unsigned const increment_count=2000000;
unsigned const thread_count=2;

unsigned i=0;

void func()
{
    for(unsigned c=0;c<increment_count;++c)
    {
        ++i;
    }
    for(unsigned c=0;c<increment_count;++c)
    {
        --i;
    }
}

int main()
{
    std::vector<std::thread> threads;
    for(unsigned c=0;c<thread_count;++c)
    {
        threads.push_back(std::thread(func));
    }
    for(unsigned c=0;c<threads.size();++c)
    {
        threads[c].join();
    }

    std::cout<<thread_count<<" threads, Final i="<<i
             <<", increments="<<(thread_count*increment_count)<<std::endl;
}

Three consecutive runs of the modified code produced the following output:

2 threads, Final i=0, increments=4000000
2 threads, Final i=4294345393, increments=4000000
2 threads, Final i=169708, increments=4000000

The first run has the same number of increments as decrements overall, but something clearly went wrong in the other runs. In the second, the counter's final value of 4294345393 is just below 2^32 (4294967296), which indicates that the counter was decremented past zero and wrapped around, since unsigned arithmetic in C++ is modular. Had this been a real reference count, the resource would likely have been freed prematurely, leading to random crashes or other bizarre behavior. In the third run, the counter's final value is non-zero but less than the number of increments, so some decrements were lost. In that case, you would have a memory leak.

Choose the Right Synchronization Scheme

In order to avoid problems from race conditions in general and data races in particular, you need to look carefully at the data being shared and the operations performed on it. Eliminating any unnecessary sharing will remove the potential for problems related to that piece of data. By thinking carefully about the remaining sharing, you can decide on an appropriate synchronization scheme, whether that is using atomic variables, mutex locks, or something else.

Anthony Williams is the Technical Director for Just Software Solutions Ltd., where he spends most of his time developing custom software for clients, mostly for Windows, and mostly C++. He is the maintainer of the Boost Thread library and is also a member of the BSI C++ Standards Panel. His latest book, "C++ Concurrency in Action: Practical Multithreading" is currently available in the Early Access Edition from Manning's web site.