
Master Managed Threading and Synchronization Techniques

While a “process” is the way that operating systems compartmentalize running applications from each other, a “thread” is a path of execution: the basic unit to which an operating system allocates processor time. A single process comprises one or more threads.

Preemptive multitasking operating systems such as Windows store all threads in a thread queue, and execute them by allocating a processor time slice to each thread consecutively. The OS suspends the currently executing thread when its time slice elapses, and resumes running another thread. As the system switches from one thread to another, it saves the thread context (all the information the thread needs to resume execution) of the preempted thread, and reloads the saved thread context of the next thread in the thread queue. Because each time slice is small, it appears to users as if multiple threads are executing simultaneously, even if there is only one processor.

Multiprocessor or multi-core systems achieve threading via multiprocessing, wherein different threads and processes can run literally simultaneously on different processors or cores.

On Windows, switching between threads is considerably cheaper than switching between processes, because threads within a process share the same address space.

Threading Basics
The Microsoft .NET runtime divides OS processes into managed sub-processes called application domains (see the System.AppDomain class documentation for details). Each managed process can have one or many application domains. Similarly, each managed process can have multiple managed threads. Managed threads can move freely across application domains within the same managed process.

Foreground and Background Threads
A managed thread can either be a foreground or background thread. The only difference is that a background thread does not keep the managed parent process running. When all foreground threads complete, the OS shuts the process down regardless of whether any background threads belonging to that process are still running. It stops all background threads at the point of process shutdown.
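As a minimal sketch of this behavior (the worker loop here is hypothetical), setting Thread.IsBackground to true before starting a thread is what makes it a background thread; without it, the process below would never exit:

```csharp
using System;
using System.Threading;

class BackgroundThreadDemo
{
    static void Main()
    {
        // This worker loops forever; only its background status
        // allows the process to shut down when Main returns.
        Thread worker = new Thread(() =>
        {
            while (true) { Thread.Sleep(100); }
        });
        worker.IsBackground = true;   // remove this line and the process hangs
        worker.Start();

        Console.WriteLine("Main (foreground) thread exiting; process shuts down.");
        // When the last foreground thread ends, the runtime stops
        // all remaining background threads and terminates the process.
    }
}
```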

Advantages/Use of Multithreading Apps
The main advantages of using multithreading are as follows:

  • Performing multiple tasks in parallel to save time and make full use of available CPU capacity.
  • Running long tasks or calculations in the background so that the application can still respond to user actions.

Programming Challenges
The main challenges faced while developing multithreading applications are:

  • Managing synchronization if threads access common resources.
  • Managing deadlocks and context switches when there are too many threads, because context switches consume resources.
  • Scheduling of threads to avoid unexpected results at runtime.

Thread Local Storage
Thread Local Storage (TLS) is the method by which each thread can store thread-specific data in memory. Threads need to store two main types of data: runtime (dynamically bound) data and load-time (statically bound) data.

The .NET framework also provides two ways to use managed TLS:

  • Thread-relative static fields specified at compile time. These provide the best performance.
  • Data slots for runtime/dynamically bound fields. Data is stored as type Object, so casting is required. This type has slower performance than static fields.

Managed TLS is unique to a thread and application domain. Two important points to note here are:

  • A thread needs to maintain an independent copy of its data in each application domain in which it operates.
  • Within an application domain, a thread cannot modify another thread’s data even if both threads are using the same memory locations.
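A minimal sketch of both managed TLS mechanisms follows (the field and slot names are illustrative). The [ThreadStatic] attribute gives each thread its own copy of a static field, while a named data slot stores dynamically bound data as Object:

```csharp
using System;
using System.Threading;

class TlsDemo
{
    // Compile-time (statically bound) TLS: each thread gets its own copy.
    [ThreadStatic]
    static int threadStaticCounter;

    static void Main()
    {
        // Runtime (dynamically bound) TLS via a named data slot.
        LocalDataStoreSlot slot = Thread.AllocateNamedDataSlot("perThreadName");

        ThreadStart work = () =>
        {
            threadStaticCounter++;   // increments only this thread's copy
            Thread.SetData(slot, "data for thread " +
                Thread.CurrentThread.ManagedThreadId);
            // Slot data comes back typed as Object, so a cast is required.
            string value = (string)Thread.GetData(slot);
            Console.WriteLine(value + ", static counter = " + threadStaticCounter);
        };

        new Thread(work).Start();
        new Thread(work).Start();
    }
}
```

Each thread prints a counter value of 1, confirming that neither thread sees the other's copy of the static field.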

Thread Management
The .NET framework offers numerous ways for developers to manage threads. For example, you can create threads using both Thread and ThreadStart types. The code below shows an example:

   namespace ThreadingInDotNet
   {
      class Program
      {
         static void Main(string[] args)
         {
            ThreadSafeType tst = new ThreadSafeType();
            Thread t1 = new Thread(new ThreadStart(tst.PrintThreadId));
            Thread t2 = new Thread(new ThreadStart(tst.PrintThreadId));
            t1.Start();
            t2.Start();
         }
      }

      class ThreadSafeType
      {
         public void PrintThreadId()
         {
            Console.WriteLine("Thread id : " +
               Thread.CurrentThread.ManagedThreadId);
         }
      }
   }

The ParameterizedThreadStart delegate was introduced in the .NET Framework 2.0. It lets you pass an object containing data to a thread when calling a Thread.Start method overload.
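Here's a brief sketch of that pattern (the method and message here are illustrative). The argument passed to Thread.Start arrives in the thread method typed as Object:

```csharp
using System;
using System.Threading;

class ParameterizedDemo
{
    static void Main()
    {
        Thread t = new Thread(new ParameterizedThreadStart(PrintMessage));
        // The argument to Start is delivered to the thread method.
        t.Start("Hello from the main thread");
    }

    // The parameter is typed as Object, so cast it as needed.
    static void PrintMessage(object data)
    {
        Console.WriteLine((string)data);
    }
}
```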

You can use the Managed Thread Pool to obtain information about background threads. The ThreadPool class provides applications with a pool of managed worker threads. Here are few points to note about using ThreadPool class:

  • All ThreadPool threads are background threads.
  • All ThreadPool threads are in the multithreaded apartment, which means multiple threads can access the same instance of an object at any given point in time, and the object itself is responsible for thread synchronization.
  • There can be only one ThreadPool per managed process.
  • There is a limit to the number of threads that can be queued and the number of threads per processor that can be active while using the ThreadPool class.
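A minimal ThreadPool sketch follows. Because pool threads are background threads, the example sleeps briefly so the queued items get a chance to run before the process exits:

```csharp
using System;
using System.Threading;

class ThreadPoolDemo
{
    static void Main()
    {
        for (int i = 0; i < 3; i++)
        {
            // Each work item is picked up by a pooled background thread.
            ThreadPool.QueueUserWorkItem(state =>
                Console.WriteLine("Pool thread " +
                    Thread.CurrentThread.ManagedThreadId +
                    " running item " + state), i);
        }
        // Keep the process alive long enough for the items to run.
        Thread.Sleep(1000);
    }
}
```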

Thread States
A thread is always in at least one of the possible states shown in Table 1.

Table 1. Possible Thread States: The table shows all the possible states for a thread.
Thread State Description
Unstarted The thread has been created but has not yet begun executing.
Running The thread is executing. This state begins when a program calls Thread.Start.
Aborted This state results from a Thread.Abort call. The thread is dead and will move to the Stopped state.
Suspended Thread execution has been halted by a Thread.Suspend call.
WaitSleepJoin The thread is blocked, and waiting for some condition to occur.
Stopped The thread has stopped, and cannot restart.
StopRequested, SuspendRequested, AbortRequested These are intermediate states that occur before a thread moves to one of the final states of Stopped, Suspended, and Aborted, respectively.
Background Occurs when the thread is a background thread.

Using this basic thread management information, you can explore the various techniques for thread synchronization.

Synchronization Techniques
Thread synchronization is the biggest challenge developers face when developing multi-threaded applications. COM components use apartments to synchronize access to resources, whereas managed development uses techniques such as locking and signaling to ensure thread-safe execution. For compatibility with the older COM model, the framework initializes all managed threads in the multi-threaded apartment (MTA) unless the thread's apartment state has been set prior to starting the thread. You can find all the code examples in this article in the downloadable sample code.

The following code invokes multiple threads, which in turn execute a method on an object. The method simply prints the ID of the executing thread. There are no synchronization issues, because the threads work on different objects.

   namespace ThreadingInDotNet
   {
      class Program
      {
         static void Main(string[] args)
         {
            ThreadSafeType tst1 = new ThreadSafeType();
            ThreadSafeType tst2 = new ThreadSafeType();
            Thread t1 = new Thread(new ThreadStart(tst1.PrintThreadId));
            Thread t2 = new Thread(new ThreadStart(tst2.PrintThreadId));
            t1.Start();
            t2.Start();
         }
      }

      class ThreadSafeType
      {
         int counter = 0;
         public void PrintThreadId()
         {
            for (int i = 0; i < 5; i++)
            {
               counter++;
               Console.WriteLine("Thread id : " +
                  Thread.CurrentThread.ManagedThreadId +
                  " added one to counter. Counter Value : " + counter);
            }
         }
      }
   }

In contrast, consider the following code, where two threads both work on the same object:

   namespace ThreadingInDotNet
   {
      class Program
      {
         static void Main(string[] args)
         {
            ThreadSafeType tst = new ThreadSafeType();
            Thread t1 = new Thread(new ThreadStart(tst.PrintThreadId));
            Thread t2 = new Thread(new ThreadStart(tst.PrintThreadId));
            t1.Start();
            t2.Start();
         }
      }

      class ThreadSafeType
      {
         int counter = 0;
         public void PrintThreadId()
         {
            for (int i = 0; i < 5; i++)
            {
               counter++;
               Console.WriteLine("Thread id : " +
                  Thread.CurrentThread.ManagedThreadId +
                  " added one to counter. Counter Value : " + counter);
            }
         }
      }
   }

The preceding code has two threads working on the same object instance, so results can be unpredictable, and errors can occur randomly. In this case, both threads execute the PrintThreadID method at the same time. Sometimes, both threads will update the value of the counter within a very small interval, so by the time either thread can print the counter value, it has already increased by two, as shown in the output below. You can see that the counter value printed moves from 6 directly to 8, and again from 8 to 10.

   Thread id : 3 added one to counter. Counter Value : 1
   Thread id : 4 added one to counter. Counter Value : 2
   Thread id : 4 added one to counter. Counter Value : 3
   Thread id : 3 added one to counter. Counter Value : 4
   Thread id : 3 added one to counter. Counter Value : 5
   Thread id : 4 added one to counter. Counter Value : 6
   Thread id : 4 added one to counter. Counter Value : 8
   Thread id : 3 added one to counter. Counter Value : 8
   Thread id : 4 added one to counter. Counter Value : 10
   Thread id : 3 added one to counter. Counter Value : 10

The .NET Framework provides various techniques for synchronizing access to shared resources. Some work on an "always exclusive" lock basis, while others allow exclusive locking only for writing. Some can provide thread synchronization for only a single process, while others provide system-level synchronization.

Here's a list of the various synchronization techniques discussed in this article:

  • Lock/Monitor
  • Mutexes
  • Interlocked Operations
  • Read-Writer Locks
  • Semaphores
  • Signaling using EventWaitHandle

Synchronization with Lock/Monitor
Locking is a managed synchronization technique where you can synchronize access to a region of code by taking and releasing a lock on a particular object. The points to remember are:

  • Lock implements Monitor to make sure that there are no orphan locks.
  • The Lock technique cannot be serialized across application domains.
  • A Lock has thread affinity.

The following code illustrates synchronization using locks:

   class ThreadSafeType
   {
      int counter = 0;
      private Object thisLock = new Object();
      public void PrintThreadId()
      {
         lock (thisLock)
         {
            for (int i = 0; i < 5; i++)
            {
               counter++;
               Console.WriteLine("Thread id : " +
                  Thread.CurrentThread.ManagedThreadId +
                  " added one to counter. Counter Value : " + counter);
            }
            Console.WriteLine(counter);
         }
      }
   }

The preceding code makes the ThreadSafeType class results more deterministic by introducing a lock in the method. Now, whichever thread gets the lock first will be able to increment the counter five times at one go.

The output from the preceding code looks like this:

   Thread id : 3 added one to counter. Counter Value : 1
   Thread id : 3 added one to counter. Counter Value : 2
   Thread id : 3 added one to counter. Counter Value : 3
   Thread id : 3 added one to counter. Counter Value : 4
   Thread id : 3 added one to counter. Counter Value : 5
   5
   Thread id : 4 added one to counter. Counter Value : 6
   Thread id : 4 added one to counter. Counter Value : 7
   Thread id : 4 added one to counter. Counter Value : 8
   Thread id : 4 added one to counter. Counter Value : 9
   Thread id : 4 added one to counter. Counter Value : 10
   10

Synchronization with Mutexes
A mutex is similar to a lock, but:

  • You can use mutexes to synchronize threads across processes.
  • A mutex has thread affinity, meaning that the thread owning the mutex must release it.
  • A mutex is an instantiable type.
  • There are two types of mutexes: local mutexes and Named System mutexes.

Consider the following code:

   class ThreadSafeType
   {
      int counter = 0;
      Mutex mutex = new Mutex();
      public void PrintThreadId()
      {
         mutex.WaitOne();
         for (int i = 0; i < 5; i++)
         {
            counter++;
            Console.WriteLine("Process/Thread id : " +
               Process.GetCurrentProcess().Id + " / " +
               Thread.CurrentThread.ManagedThreadId +
               " added one to counter. Counter Value : " + counter);
         }
         Console.WriteLine(counter);
         mutex.ReleaseMutex();
      }
   }

When you run the preceding code, the output looks like this:

   Process/Thread id : 1484 / 3 added one to counter. Counter Value : 1
   Process/Thread id : 1484 / 3 added one to counter. Counter Value : 2
   Process/Thread id : 1484 / 3 added one to counter. Counter Value : 3
   Process/Thread id : 1484 / 3 added one to counter. Counter Value : 4
   Process/Thread id : 1484 / 3 added one to counter. Counter Value : 5
   5
   Process/Thread id : 1484 / 4 added one to counter. Counter Value : 6
   Process/Thread id : 1484 / 4 added one to counter. Counter Value : 7
   Process/Thread id : 1484 / 4 added one to counter. Counter Value : 8
   Process/Thread id : 1484 / 4 added one to counter. Counter Value : 9
   Process/Thread id : 1484 / 4 added one to counter. Counter Value : 10
   10

Another common developer need is synchronizing access to files. The code below tries to write to a file without any synchronization mechanism implemented:

   // no synchronization example
   class ThreadSafeType
   {
      public void PrintThreadId()
      {
         using (StreamWriter sw = new StreamWriter("TestFile.txt", true))
         {
            for (int i = 0; i < 5; i++)
            {
               sw.WriteLine("Process/Thread id : " +
                  Process.GetCurrentProcess().Id + " / " +
                  Thread.CurrentThread.ManagedThreadId);
            }
         }
      }
   }

A single process running multiple threads will throw the following IOException:

   System.IO.IOException: The process cannot access the file
      'D:\TestFile.txt' because it is being used by another process.

You can solve the issue by using a mutex as follows (the mutex used here is a local mutex):

   // Using a local mutex for File Access
   class ThreadSafeType
   {
      Mutex mutex = new Mutex();
      public void PrintThreadId()
      {
         mutex.WaitOne();
         using (StreamWriter sw = new StreamWriter("TestFile.txt", true))
         {
            for (int i = 0; i < 5; i++)
            {
               sw.WriteLine("Process/Thread id : " +
                  Process.GetCurrentProcess().Id + " / " +
                  Thread.CurrentThread.ManagedThreadId);
            }
         }
         mutex.ReleaseMutex();
      }
   }

Here's the output in the file after implementing the local mutex and running the preceding code:

   Process/Thread id : 2560 / 3
   Process/Thread id : 2560 / 3
   Process/Thread id : 2560 / 3
   Process/Thread id : 2560 / 3
   Process/Thread id : 2560 / 3
   Process/Thread id : 2560 / 4
   Process/Thread id : 2560 / 4
   Process/Thread id : 2560 / 4
   Process/Thread id : 2560 / 4
   Process/Thread id : 2560 / 4

Unfortunately, when you run two instances of this process at the same time, you still get the IOException shown previously. Instead, for inter-process synchronization, you must use named system mutexes as shown in the code below:

   // Named System Mutex example
   class ThreadSafeType
   {
      Mutex mutex = new Mutex(false, "TestFile");
      public void PrintThreadId()
      {
         mutex.WaitOne();
         using (StreamWriter sw = new StreamWriter("TestFile.txt", true))
         {
            for (int i = 0; i < 5; i++)
            {
               sw.WriteLine("Process/Thread id : " +
                  Process.GetCurrentProcess().Id + " / " +
                  Thread.CurrentThread.ManagedThreadId);
            }
         }
         mutex.ReleaseMutex();
      }
   }

After running the named system mutex code, here's the output you'll find in the file:

   Process/Thread id : 3480 / 3
   Process/Thread id : 3480 / 3
   Process/Thread id : 3480 / 3
   Process/Thread id : 3480 / 3
   Process/Thread id : 3480 / 3
   Process/Thread id : 4552 / 3
   Process/Thread id : 4552 / 3
   Process/Thread id : 4552 / 3
   Process/Thread id : 4552 / 3
   Process/Thread id : 4552 / 3
   Process/Thread id : 3480 / 4
   Process/Thread id : 3480 / 4
   Process/Thread id : 3480 / 4
   Process/Thread id : 3480 / 4
   Process/Thread id : 3480 / 4
   Process/Thread id : 4552 / 4
   Process/Thread id : 4552 / 4
   Process/Thread id : 4552 / 4
   Process/Thread id : 4552 / 4
   Process/Thread id : 4552 / 4

Note that both the processes and their threads participate.

Interlocked Operations
The methods exposed by the System.Threading.Interlocked class provide synchronized access to a variable in shared memory across multiple threads and processes. They guard against dirty reads in preemptive multithreading scenarios, where a thread can be suspended after reading a memory location but before updating it, as part of the OS's thread context switching. The System.Threading.Interlocked methods ensure safety by performing atomic operations on variables. The methods include support for the following operations:

  • Add
  • Increment
  • Decrement
  • Read
  • Exchange (for setting the variable to a particular value)
  • CompareExchange (for comparing two values and performing an operation based on the result)

Here's an example:

   // Using the Interlocked class
   namespace ThreadingInDotNet
   {
      class Program
      {
         static void Main(string[] args)
         {
            ThreadSafeType tst = new ThreadSafeType();
            Thread th;
            for (int i = 0; i < 20; i++)
            {
               th = new Thread(new ThreadStart(tst.PrintThreadId));
               th.Start();
               Thread.Sleep(100);
            }
         }
      }

      class ThreadSafeType
      {
         // 0 means the lock is free; 1 means a thread holds it
         private int usingResource = 0;
         public void PrintThreadId()
         {
            // Atomically set usingResource to 1 only if it is currently 0
            if (Interlocked.CompareExchange(ref usingResource, 1, 0) == 0)
            {
               Console.WriteLine("Thread ID:" +
                  Thread.CurrentThread.ManagedThreadId + " acquired the lock");
               Thread.Sleep(500);
               Console.WriteLine("Thread ID:" +
                  Thread.CurrentThread.ManagedThreadId + " exiting lock");
               Interlocked.Exchange(ref usingResource, 0);
            }
            else
            {
               Console.WriteLine("   Thread ID:" +
                  Thread.CurrentThread.ManagedThreadId + " was denied the lock");
            }
         }
      }
   }

You can see the output of the program below:

   Thread ID:3 acquired the lock
      Thread ID:4 was denied the lock
      Thread ID:5 was denied the lock
      Thread ID:6 was denied the lock
   Thread ID:3 exiting lock
   Thread ID:7 acquired the lock
      Thread ID:8 was denied the lock
      Thread ID:9 was denied the lock
   Thread ID:7 exiting lock
   Thread ID:10 acquired the lock
      Thread ID:11 was denied the lock
      Thread ID:12 was denied the lock
   Thread ID:10 exiting lock
   Thread ID:13 acquired the lock
      Thread ID:14 was denied the lock
      Thread ID:15 was denied the lock
   Thread ID:13 exiting lock
   Thread ID:16 acquired the lock
      Thread ID:17 was denied the lock
      Thread ID:18 was denied the lock
   Thread ID:16 exiting lock
   Thread ID:19 acquired the lock
      Thread ID:20 was denied the lock
      Thread ID:21 was denied the lock
   Thread ID:19 exiting lock
   Thread ID:22 acquired the lock
   Thread ID:22 exiting lock

Exchange sets the variable to the specified value and returns the original value as a single atomic operation. In the more typical non-atomic pattern, where you check the current value before setting a new one, the executing thread could be preempted between the check and the set, allowing another thread to enter the code.
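The check-and-set idea can be sketched in a few lines (the flag field and messages here are illustrative). CompareExchange collapses the check and the set into one atomic step, so no other thread can slip in between them:

```csharp
using System;
using System.Threading;

class SpinFlagDemo
{
    static int flag = 0;   // 0 = free, 1 = taken

    static void Main()
    {
        // Atomically: if flag is 0, set it to 1; the original value is returned.
        if (Interlocked.CompareExchange(ref flag, 1, 0) == 0)
        {
            Console.WriteLine("Acquired the flag atomically.");
            Interlocked.Exchange(ref flag, 0);   // release, also atomic
        }
        else
        {
            Console.WriteLine("Flag was already taken.");
        }
    }
}
```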

Read-Writer Locks
Read-Writer locks help optimize synchronization where more threads perform read operations than write operations. Read-Writer locks allow multiple read-only threads to enter the code simultaneously, but allow only one writer thread at a time. You use the ReaderWriterLockSlim class to implement Read-Writer locks.

Consider the code sample below, in which three threads try to acquire a read lock while one thread tries to acquire a writer lock. The writer thread is able to acquire the lock only after all the reader threads have released their locks. Similarly, a reader thread can acquire the lock only after the writer thread releases it. However, multiple reader threads can acquire read locks at the same time:

   // Read-Writer Lock example
   namespace ThreadingInDotNet
   {
      class Program
      {
         static void Main(string[] args)
         {
            ThreadSafeType tst = new ThreadSafeType();
            Thread t1 = new Thread(new ThreadStart(tst.GetValue));
            Thread t2 = new Thread(new ThreadStart(tst.GetValue));
            Thread t3 = new Thread(new ThreadStart(tst.Increment));
            Thread t4 = new Thread(new ThreadStart(tst.GetValue));
            t1.Start();
            t2.Start();
            t3.Start();
            Thread.Sleep(500);
            t4.Start();
         }
      }

      class ThreadSafeType
      {
         ReaderWriterLockSlim counterLock = new ReaderWriterLockSlim();
         int counter = 0;

         public void Increment()
         {
            counterLock.EnterWriteLock();
            Console.WriteLine(" In Writer Lock, Thread ID : " +
               Thread.CurrentThread.GetHashCode());
            counter++;
            counterLock.ExitWriteLock();
            Console.WriteLine(" Exited Writer Lock, Thread ID : " +
               Thread.CurrentThread.GetHashCode());
         }

         public void GetValue()
         {
            counterLock.EnterReadLock();
            Console.WriteLine("In Reader Lock, Thread ID : " +
               Thread.CurrentThread.GetHashCode());
            Thread.Sleep(3000);
            Console.WriteLine("Thread ID : " +
               Thread.CurrentThread.GetHashCode() +
               " Counter Value : " + counter);
            counterLock.ExitReadLock();
            Console.WriteLine("Exited Reader Lock, Thread ID : " +
               Thread.CurrentThread.GetHashCode());
         }
      }
   }

Running the preceding code produces the following output:

   In Reader Lock, Thread ID : 3
   In Reader Lock, Thread ID : 4
   Thread ID : 3 Counter Value : 0
   Exited Reader Lock, Thread ID : 3
   Thread ID : 4 Counter Value : 0
   Exited Reader Lock, Thread ID : 4
    In Writer Lock, Thread ID : 5
    Exited Writer Lock, Thread ID : 5
   In Reader Lock, Thread ID : 6
   Thread ID : 6 Counter Value : 1
   Exited Reader Lock, Thread ID : 6

Semaphores
Semaphores control access to a resource or pool of resources by restricting the number of clients that can access the resource at any given moment. They can be either local or named system-wide. You use the System.Threading.Semaphore class to implement semaphores.

Semaphores do not enforce thread affinity. Threads enter the semaphore by calling the WaitOne method. The semaphore count is decremented each time a thread enters and incremented when the thread exits.

In the code sample below, five threads try to enter the code region, but the code restricts the maximum access count to three during the semaphore's construction:

   // local Semaphore example
   class Program
   {
      static Semaphore sph;
      static void Main(string[] args)
      {
         sph = new Semaphore(3, 3);
         Thread th;
         for (int i = 0; i < 5; i++)
         {
            th = new Thread(new ThreadStart(PrintThreadId));
            th.Start();
         }
      }

      static void PrintThreadId()
      {
         sph.WaitOne();
         Console.WriteLine("Thread ID:" +
            Thread.CurrentThread.ManagedThreadId + " enters semaphore");
         Thread.Sleep(1000);
         Console.WriteLine("Thread ID:" +
            Thread.CurrentThread.ManagedThreadId + " exiting semaphore");
         sph.Release();
      }
   }

Below is the output of the above code, which shows that the fourth thread (Thread ID 6) could enter the PrintThreadId method only after one of the first three threads exits the semaphore:

   Thread ID:3 enters semaphore
   Thread ID:4 enters semaphore
   Thread ID:5 enters semaphore
   Thread ID:3 exiting semaphore
   Thread ID:6 enters semaphore
   Thread ID:4 exiting semaphore
   Thread ID:7 enters semaphore
   Thread ID:5 exiting semaphore
   Thread ID:6 exiting semaphore
   Thread ID:7 exiting semaphore

Signaling using EventWaitHandle
Wait handles provide simple and powerful thread synchronization using signaling, where threads wait for each other's signals to bring them out of the wait state. Wait handles can be local or named system-wide, and have two modes: AutoReset and ManualReset. In AutoReset mode, a single thread is released and the event automatically resets. In ManualReset mode, threads keep being released until the event is reset manually. You'll see examples of both. Here's the AutoReset example:

   // AutoResetEvent example
   class Program
   {
      private static EventWaitHandle ewh =
         new EventWaitHandle(false, EventResetMode.AutoReset);
      static void Main(string[] args)
      {
         Thread th;
         for (int i = 0; i < 2; i++)
         {
            th = new Thread(new ThreadStart(PrintThreadId));
            th.Start();
         }
         Thread.Sleep(1000);
         ewh.Set();   // releases one waiting thread
         Thread.Sleep(1000);
         ewh.Set();   // releases the second waiting thread
      }

      static void PrintThreadId()
      {
         Console.WriteLine("Thread ID:" +
            Thread.CurrentThread.ManagedThreadId + " waiting");
         ewh.WaitOne();
         Console.WriteLine("   Thread ID:" +
            Thread.CurrentThread.ManagedThreadId + " enters code block");
      }
   }

The program outputs the following text:

   Thread ID:3 waiting
   Thread ID:4 waiting
      Thread ID:3 enters code block
      Thread ID:4 enters code block

As you can see, in AutoReset mode, each execution of the Set method releases one thread.

In contrast, consider the code below, which demonstrates using the ManualReset mode:

   // ManualResetEvent example
   class Program
   {
      private static EventWaitHandle ewh =
         new EventWaitHandle(false, EventResetMode.ManualReset);
      static void Main(string[] args)
      {
         Thread th;
         for (int i = 0; i < 2; i++)
         {
            th = new Thread(new ThreadStart(PrintThreadId));
            th.Start();
         }
         Thread.Sleep(1000);
         ewh.Set();   // releases all waiting threads until the event is reset
      }

      static void PrintThreadId()
      {
         Console.WriteLine("Thread ID:" +
            Thread.CurrentThread.ManagedThreadId + " waiting");
         ewh.WaitOne();
         Console.WriteLine("   Thread ID:" +
            Thread.CurrentThread.ManagedThreadId + " enters code block");
      }
   }

Finally, here's the output of the ManualReset mode program:

   Thread ID:3 waiting
   Thread ID:4 waiting
      Thread ID:4 enters code block
      Thread ID:3 enters code block

Choosing a Synchronization Technique
Choosing the appropriate synchronization technique isn't difficult, but because there are so many possibilities, it can be difficult to remember which technique is appropriate for any particular situation. Table 2 can help; it shows a summary of the various synchronization techniques, along with each technique's unique features.

Table 2. Synchronization Techniques: This quick and handy guide shows the main features of each synchronization technique to help you determine which technique might be the most appropriate in a given situation.
Technique Main Feature
Lock/Monitor Synchronizes access to code regions.
Mutex Same as Lock/Monitor but can work system-wide.
Interlocked Use when performing atomic arithmetic operations and comparisons; avoids dirty-read scenarios.
Read-Writer Useful when there are many read-only threads but fewer writer threads.
Semaphores Controls the number of threads that can access a resource at any point in time.
EventWaitHandle Threads participate in synchronization by signaling each other.

With all these synchronizing techniques in hand, you'll find that you can safely introduce threading operations to your applications, while also limiting the possibility of threading problems.
