

Cooperative Multithreading in BREW with IThread

Learn how to harness cooperative multithreading in BREW to avoid writing applications that block the CPU during lengthy operations.

My last article mentioned the BREW IThread as a possible solution to the challenges of dealing with blocking code on BREW. Whether you're porting existing code to BREW or writing new code, the IThread interface can be quite handy. Sadly, in my experience it's one of the most misunderstood and maligned interfaces BREW provides: developers tend to overlook it and hand-roll the same functionality (using AEECallback and ISHELL_Resume), often because they've decided that since BREW doesn't provide preemptive multithreading there's little point in using IThread. Unfortunately, this approach leads to additional testing and debugging, as well as code that requires additional documentation (or time invested in understanding code for which no documentation exists).

This article begins by explaining the basic principle behind cooperative multithreading and then shows you the basics of the IThread interface. Rather than presenting isolated code samples along the way, the article follows hellothread.c, which you can download and work through yourself; bits of the actual file are included here for your reference.

Understanding Cooperative Multithreading
Most readers are comfortable with the basic notion of a thread of execution: you create a thread using a system API such as pthread_create, passing a function pointer called the thread's main function. This function executes in a separate thread of execution, in the same memory space and at the same time as the main thread, and you can share data between threads using synchronization tools such as mutexes. When your thread's main function returns or you call the equivalent of pthread_exit, your thread terminates, and any threads waiting on it via pthread_join resume at that point. Under the hood, the native operating system (typically in conjunction with a user-level library) does the work of sharing the processor and memory among threads and of providing mutexes and the like. Because the host platform and OS support threads natively, there are typically some guarantees regarding the availability of resources; one thread can't starve another of CPU access, for example. These environments are called preemptive because the operating system can preempt one thread (or process) in favor of another to ensure that everything gets a fair share of resources.

Not so with the BREW application environment at present, where memory and the processor are shared cooperatively. That is, if your application is running, it owns the CPU for the duration of an event handler's execution; spend too long handling an event or callback, and the handset will reset, because its watchdog timer assumes that your application has crashed. Many developers criticize this approach as placing an unwelcome burden on application developers, and in truth it does make some kinds of applications (those requiring a guarantee of deterministic execution) impossible to create. The majority of applications, however, have no critical scheduling requirements, and in fact cooperative threads are not inordinately more complex than their preemptive kin. There are two key differences. First, threads on BREW must yield the processor occasionally to give other threads of execution a chance to run, which requires two function calls (or one, if you'd like to write a wrapper function). Second, because your threads yield the processor explicitly, you don't need synchronization interfaces when accessing shared data: only one thread is ever running while the shared data is accessed.
