Simpler Multithreading in C++0x : Page 4

The new standard will support multithreading, with a new thread library. Find out how this will improve porting code, and reduce the number of APIs and syntaxes you use.


Waiting for Events
If you're sharing data between threads, you often need one thread to wait for another to perform some action, and you want it to do so without consuming any CPU time. If a thread is simply waiting for its turn to access some shared data, then a mutex lock can be sufficient. For waiting on an event, however, a mutex alone generally won't give you the semantics you want.

The simplest way to wait is to put the thread to sleep for a short period of time, then check whether the desired action has occurred when the thread wakes up. It's important to ensure that the mutex protecting the data that indicates the event has occurred is unlocked whilst the thread is sleeping:

std::mutex m;
bool data_ready;

void process_data();

void foo()
{
    std::unique_lock<std::mutex> lk(m);
    while(!data_ready)
    {
        lk.unlock();
        std::this_thread::sleep_for(
            std::chrono::milliseconds(10));
        lk.lock();
    }
    process_data();
}

This method may be simplest, but it's less than ideal for two reasons. Firstly, on average the thread will wait five ms (half of the ten-ms sleep) after the data is ready before it wakes to check, which may cause a noticeable lag in some cases. Though this can be improved by reducing the sleep time, doing so exacerbates the second problem: the thread has to wake up, acquire the mutex, and check the flag every ten ms, even if nothing has happened. This consumes CPU time and increases contention on the mutex, and thus potentially slows down the very thread performing the task being waited for!

If you find yourself writing code like that, don't: use condition variables instead. Rather than sleeping for a fixed period, the thread can sleep until it has been notified by another thread. This ensures that the latency between notification and wake-up is as small as the OS will allow, and reduces the waiting thread's CPU consumption to effectively zero for the entire wait. You can rewrite foo to use a condition variable like this:

std::mutex m;
std::condition_variable cond;
bool data_ready;

void process_data();

void foo()
{
    std::unique_lock<std::mutex> lk(m);
    while(!data_ready)
    {
        cond.wait(lk);
    }
    process_data();
}

Note that the above code passes in the lock object lk as a parameter to wait(). The condition variable implementation then unlocks the mutex on entry to wait(), and locks it again on exit. This ensures that the protected data can be modified by other threads whilst this thread is waiting. The code that sets the data_ready flag then looks like this:

void set_data_ready()
{
    std::lock_guard<std::mutex> lk(m);
    data_ready=true;
    cond.notify_one();
}

You still need to check that the data is ready though, since condition variables can suffer from what are called spurious wakes: the call to wait() may return even though the condition variable wasn't notified by another thread. If you're worried about getting this check wrong, you can pass that responsibility off to the standard library too, by telling it what you're waiting for with a predicate. The new C++0x lambda facility makes this really easy:

void foo()
{
    std::unique_lock<std::mutex> lk(m);
    cond.wait(lk,[]{return data_ready;});
    process_data();
}

What if you don't want to share your data? What if you want exactly the opposite: For each thread to have its own copy? This is the scenario addressed by the new thread_local storage duration keyword.
