NetKernel: Moving Beyond Java’s Concurrency

As multi-core and multi-CPU systems become more prevalent, the opportunity to perform several tasks at once is now a reality. Unfortunately, the way most systems are designed, it is not as easy as simply having another thread take on a task. The programming language you use needs to ask the execution environment to schedule work on system resources, and the constructs of a language influence how easy it is to take advantage of higher-order concurrency features.

Historically, you could hand things off to a “helper” through multiple threads. As the CPU waited for network or file I/O activity to complete on one thread, it could execute another thread to get some work done and switch back when the first one was ready. This certainly helped applications feel more responsive, but the computer was still really only doing one thing at a time.

Prior to the Java language and JVM, only sophisticated developers could wrap their heads around these concurrent-programming models and the opportunity for reuse never reached its full potential. Because the constructs were not directly part of the language, you needed to choose a threading library and stick with it. Any libraries built on top of those threading libraries were often incompatible with other approaches. POSIX standardization helped, but it remained complicated and out of reach for most software engineers.

Java elevated the concurrency game by introducing relatively easy, relatively cross-platform threading mechanisms at the language and JVM levels. It was exciting that Java supported threads, the Runnable interface and monitors. Some early JVMs supported native threads while others only offered “green threads,” which were behind-the-scenes JVM magic that approximated native thread concurrency. With these basic tools, it became simple to write fairly portable multi-threaded code:

Thread t1 = new Thread() {
    public void run() {
        System.out.println("Hello from Thread 1!");
    }
};
Thread t2 = new Thread() {
    public void run() {
        System.out.println("Hello from Thread 2!");
    }
};
t1.start();
t2.start();

The good news is that this was easy to do. The bad news is that this profligate use of Thread instances was expensive and did not scale well. Simply creating a ton of Threads would bring an application to its knees as it swapped contexts between all of them. Developers were encouraged to avoid creating new Thread objects and instead schedule Runnable instances on some kind of thread pool. There was no standard thread pool class in the JDK for several years, but many effective ones were developed and proliferated in the meantime.

Runnable r1 = new Runnable() {
    public void run() {
        System.out.println("It's good to be an r1 Runnable!");
    }
};
Runnable r2 = new Runnable() {
    public void run() {
        System.out.println("It's good to be an r2 Runnable!");
    }
};

// Create a ThreadPool with 3 threads waiting for something to do
ThreadPool tp = new ThreadPool(3);

tp.execute(r1);
tp.execute(r2);
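For comparison, the standard executor framework that later shipped in JDK 5's java.util.concurrent fills exactly this role; a minimal sketch of the equivalent:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolDemo {
    public static void main(String[] args) {
        // A fixed pool of 3 threads waiting for something to do, as above
        ExecutorService pool = Executors.newFixedThreadPool(3);
        pool.execute(new Runnable() {
            public void run() {
                System.out.println("It's good to be an r1 Runnable!");
            }
        });
        pool.execute(new Runnable() {
            public void run() {
                System.out.println("It's good to be an r2 Runnable!");
            }
        });
        pool.shutdown(); // let the pool exit once queued work completes
    }
}
```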

This model, in which a pool of threads handles client requests, scaled reasonably well and ultimately formed the base of server infrastructure for many organizations. Most servlet engines worked like this under the hood, but even thick Swing clients appeared more responsive by off-loading longer-running tasks, like querying a database or issuing an RMI request, to another thread (failure to do so resulted in unresponsive and unpopular Swing applications!).

It seemed as if the Java platform, like Prometheus before it, had stolen Multi-Threaded “fire” from the Gods of Concurrency and made it available for all Developerkind. The downside of giving people fire to play with is that it can burn! This model usually required the use of listeners for notification when a scheduled task was done. This decoupled the normal flow of a program, which usually makes things harder to follow. Also, after developers saw what they could do, they wanted to eke out ever more performance. Rather than straight task queuing, they wanted to have priority queues. Doug Lea famously introduced rich and exotic concurrency structures like Syncs, Barriers, Rendezvouses and the like to address these and other issues. This was all very exciting and sophisticated developers built powerful software, but average developers frequently still ran into problems.
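A minimal sketch of that listener style: the caller supplies a callback and the worker thread invokes it when the task finishes, outside the program's normal flow. TaskListener and runAsync are hypothetical names for illustration, not a standard API.

```java
// Hypothetical completion-listener pattern of the era
interface TaskListener {
    void taskDone(String result);
}

public class ListenerDemo {
    static void runAsync(final String input, final TaskListener listener) {
        new Thread(new Runnable() {
            public void run() {
                String result = input.toUpperCase(); // the "long-running" work
                listener.taskDone(result);           // notify out of the main flow
            }
        }).start();
    }

    public static void main(String[] args) throws InterruptedException {
        runAsync("hello", new TaskListener() {
            public void taskDone(String result) {
                System.out.println("Done: " + result);
            }
        });
        Thread.sleep(500); // crude wait so the demo thread can finish
    }
}
```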

The state-based nature of Object-Oriented programming is difficult to get right when multiple threads might attempt to update or retrieve this state. Multi-threaded errors are among the most difficult to debug because they often appear random and hard to reproduce. The Swing team understood this and decided to make Swing a single-threaded API. While some folks complained about this limitation, the decision was completely about making it simple for developers to create new components. Given the popularity of “Java Concurrency in Practice” over 10 years after Java was released, it is clear that many programmers still struggle with getting it right.
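A small illustration of the single-threaded rule: rather than touching components from arbitrary threads, work is handed to Swing's Event Dispatch Thread, here via SwingUtilities.invokeAndWait.

```java
import javax.swing.SwingUtilities;

public class EdtDemo {
    static volatile boolean ranOnEdt;

    public static void main(String[] args) throws Exception {
        // All Swing component access must happen on the single Event Dispatch
        // Thread; invokeAndWait schedules work there and blocks until it runs.
        SwingUtilities.invokeAndWait(new Runnable() {
            public void run() {
                ranOnEdt = SwingUtilities.isEventDispatchThread();
            }
        });
        System.out.println("Ran on EDT: " + ranOnEdt); // prints "Ran on EDT: true"
    }
}
```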

When you run into production code that looks like this, something is not right:

synchronized(new Object() {}) {
    // ... unsynchronized synchronized block here
    // DO NOT EMULATE THIS CODE! Why won't this do what
    // was intended?
}
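The block above synchronizes on a brand-new object that no other thread can ever see, so it excludes nothing. The fix is to share one lock object among all of the threads that must coordinate; a minimal sketch:

```java
public class SharedLockDemo {
    // One lock object visible to every thread that must coordinate
    private static final Object LOCK = new Object();
    private static int counter;

    static void increment() {
        synchronized (LOCK) {   // all callers contend on the same monitor
            counter++;
        }
    }

    static int run() throws InterruptedException {
        counter = 0;
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(new Runnable() {
                public void run() {
                    for (int j = 0; j < 1000; j++) increment();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join(); // wait for all four workers
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints 4000
    }
}
```

Synchronizing on a new object every time compiles and runs, but the count would be silently corrupted under load; with the shared lock it is always 4 threads x 1000 increments.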

Imagine moving away from low-level state management and into higher-order functionality like issuing web service calls, querying databases or transforming XML. These workflows impose new burdens on how processes are interrelated, how often a service can be invoked, etc. These processes often change based on business rules, so you do not want to have to rely on language-level constructs for coordinating all of these activities. In order to fully benefit from ThreadPools, Executor frameworks and similar constructs, you need to write your code around interfaces such as Runnable and understand the various contexts under which you might use the code. Sometimes you may wish to block on a step, sometimes you might not. Sometimes you might want to orchestrate asynchronous calls to multiple data sources and then transform the results into HTML. If not everything is separable into unrelated, executable blocks of code, you might find it difficult to scale to take full advantage of extra CPUs. Even if you somehow designed your applications to work that way, your code might be deployed into an environment like a modern application server that controls what threads can and cannot do.
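Here is how one of those orchestration steps looks with the standard JDK 5 constructs: two tasks are submitted in parallel and their results joined into HTML. The two Callables are hypothetical stand-ins for a web service call and a database query.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class OrchestrationDemo {
    static String render() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Hypothetical stand-ins for a web service call and a database query
        Future<String> header = pool.submit(new Callable<String>() {
            public String call() { return "<h1>Report</h1>"; }
        });
        Future<String> rows = pool.submit(new Callable<String>() {
            public String call() { return "<p>42 rows</p>"; }
        });
        // Both tasks may run in parallel; get() blocks until each is ready
        String html = header.get() + rows.get();
        pool.shutdown();
        return html;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(render()); // prints <h1>Report</h1><p>42 rows</p>
    }
}
```

Note how much ceremony surrounds even this toy pipeline: the code must be shaped around Callable and Future up front, which is exactly the coupling the paragraph above describes.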

Despite Java’s rich and powerful platform and language-level features, using them correctly and effectively is harder than it needs to be for many applications. These issues are pushing developers to consider alternate languages like Erlang, Scala, and Clojure. These functional and hybrid languages introduce new concepts like Software Transactional Memory and programming styles that minimize the amount of state that must be maintained and protected by concurrency locks. Scala and Clojure both run on the JVM, so it is possible to reuse a fair amount of your Java code, but Erlang requires new tools and runtime environments.

Rather than simply changing languages, if you are willing to look to a next-generation runtime environment, you might find that it is easier than you think to write software that takes full advantage of a system’s cores or CPUs while reusing much of what you have already written. The trick is to shift from an object-oriented mindset to one that focuses on information resources. Objects are still used as an implementation technology, but your code dependencies become resilient to change and remain concurrency-friendly!

Introducing NetKernel
1060 Research Ltd. has created a URI-based microkernel environment called NetKernel to help solve some of these problems, take advantage of multiple CPUs seamlessly, and build logically-connected, layered applications. NetKernel has a dual license allowing both open source and commercial development. This article focuses on how NetKernel simplifies building high-concurrency systems without much effort. A full treatment of NetKernel’s resource-oriented approach is beyond the scope of this article, but you are strongly encouraged to read more about it.

NetKernel’s magic starts with its well-designed microkernel architecture. It is an efficient environment that can do “real” work in a 10MB VM (many production NetKernel applications operate effectively in 64MB or less!). Out of the box, NetKernel uses only a handful of threads, although this can be tuned if your requirements demand more. Requests come in through a transport (usually HTTP, but others are possible) and are scheduled asynchronously on one of the microkernel threads. This is a key point to remember: everything is ultimately scheduled the same way, asynchronously, and for convenience’s sake synchronous patterns overlay this asynchronous backend. This is central to NetKernel’s ability to scale. Every incoming request is handed off to a kernel thread whether you intend it to be or not. You can control this behavior as you will see below, but the environment is already compelling from an operational, CPU-utilization perspective: the same application will probably take advantage of extra CPUs automatically, and you do not have to do anything!

Everything in NetKernel is URI-addressable. Files, specific functionality, everything! Internally, NetKernel uses a URI scheme for referencing behavior called active URIs. There are also data URIs for referring to things like files on disk. If you want to retrieve a file from disk, you just ask for its URI. If you want to invoke some XSLT processing, you schedule a request for the functionality that responds to XSLT requests (active:xslt). Behind the scenes, there is a module that advertises this URI. It will be asked to perform the transformation with specified inputs. While you do not need to care what is actually used to do the transformation, you may be interested to know that it is Saxon, the faithful Java XML transformation library. You can often wrap up existing code with a logical URI. Dependent code can invoke it without having to directly create or talk to your object instances. This way, if and when you decide to swap the implementation technology, clients do not have to care. Sure, you can approximate this flexibility with Java interfaces, but it is much cleaner and easier with just a logical name. This is one of the harder shifts for Java developers to make when looking at resource-oriented environments, but once you do, it is hard to go back to the old way of thinking.
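The Java-interface approximation mentioned above looks something like the following. Transformer and SaxonTransformer are hypothetical names invented for illustration, not NetKernel or Saxon APIs: callers depend only on the interface, and the implementation can be swapped behind it.

```java
// Approximating the logical-name indirection with a Java interface
interface Transformer {
    String transform(String input);
}

// A swappable implementation; a real one might delegate to an XSLT engine
class SaxonTransformer implements Transformer {
    public String transform(String input) {
        return "<transformed>" + input + "</transformed>";
    }
}

public class IndirectionDemo {
    public static void main(String[] args) {
        Transformer t = new SaxonTransformer(); // the only line naming the impl
        System.out.println(t.transform("doc")); // prints <transformed>doc</transformed>
    }
}
```

Even here, the client still compiles against a Java type and must obtain an instance somehow; a logical URI removes even that remaining coupling.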

When you ask NetKernel to run something, you create a subrequest and ask for it to be executed. Although it ultimately runs on a microkernel thread behind the scenes, from the client’s perspective it blocks until the activity is done, which is how most developers want code to behave unless they specify otherwise. NetKernel itself is written in Java, but uses the resource-oriented abstraction to invoke behavior. You are free to implement modules or write client code in just about any major language that runs on the JVM (e.g., Java, JavaScript, Groovy, Python, Ruby, BeanShell, and even Scala and Clojure are becoming options!).

See Sidebar 1 for information on running the examples and see Sidebar 2 to read about BeanShell.

Asynchronous Calls
While all of the subrequests shown so far block until they are complete, they do not have to. You can combine multiple steps into an asynchronous pipeline. Instead of getting results back, you get the equivalent of a Future object handle. By joining on these handles, the whole process will block, but each subrequest is potentially scheduled in parallel, taking advantage of extra CPUs (see Listing 1).

In this example, the image is not actually used after it is fetched, but it is still retrieved via HTTP. A proper pipeline would catch errors and exceptions as well, but this is just a demonstration of how easy it is to orchestrate asynchronous calls. The logical abstractions can represent invocation of all manner of elaborate backend processing, but the clients are protected from these details and can pick and choose when they want asynchronous or synchronous subrequest handling. Not only is this a tremendously simpler approach than trying to manage all of this by yourself, but it is likely to scale better too.

An important point is that the results of issuing these calls in a resource-oriented environment are immutable resource representations. In this way, NetKernel is very much a stateless request mechanism; a little bit like REST, a little bit like functional programming languages. It takes a while for OO programmers to change how they think about these ideas, but there are great benefits to doing so.

The point is not that you could not do this with regular Java concurrency constructs, but that it would be significantly more difficult to get right. Having a microkernel-based architecture of URI-addressable behavior is a powerful combination. You can easily imagine doing asynchronous federated queries across multiple data sources from heterogeneous backend systems (relational databases, web services, etc.).

While it is great to have a flexible, fluid, scalable environment at your disposal, you do not always want your system to hammer a particular resource as much or as fast as it can. There may be operational or legal limits to usage of a service, a library or a data source.

As an example, some commercial entity extractors have really obnoxious licensing terms. You can only use one thread at a time or risk needing to pay tens of thousands of dollars more for additional thread use (this is per-CPU licensing to the extreme!). At the very moment you are trying to take advantage of extra CPUs for the rest of your system, the lawyers are telling you not to in this case!

Let us assume you can access the needed functionality through an API as follows:

Results r = ExpensiveTool.extractEntity(myDocument);

The goal is to limit access to a single thread. Given what you know about Java threading, you might be tempted to do something like:

Results r = null;
synchronized(someLock) {
    r = ExpensiveTool.extractEntity(myDocument);
}

This solves the legal issue, but a code-level monitor is too blunt a weapon for this situation. If you decide for business reasons that paying the extra money is worth it, you cannot simply go from enforcing one thread to enforcing two or three, because a Java monitor is only a mutex. You need to change your locking mechanism to a counting semaphore (see java.util.concurrent.Semaphore in JDK 5 or later) or some other more sophisticated tool. And if you want to apply the limited-thread-use policy across a variety of resources, tools or systems, this suddenly starts to feel very complicated.
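A sketch of that semaphore approach: the permit count is the only thing that changes if you license more threads, but the guard code still has to be written and maintained by hand. ExpensiveTool is replaced here by a hypothetical stand-in method.

```java
import java.util.concurrent.Semaphore;

public class LicenseGate {
    // Permits = number of licensed concurrent callers; change 1 to 2 or 3
    // when more licenses are purchased
    private static final Semaphore PERMITS = new Semaphore(1);

    // Hypothetical stand-in for ExpensiveTool.extractEntity
    static String extractEntity(String doc) {
        return "entities(" + doc + ")";
    }

    static String guardedExtract(String doc) throws InterruptedException {
        PERMITS.acquire();          // block until a license slot is free
        try {
            return extractEntity(doc);
        } finally {
            PERMITS.release();      // always give the slot back
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(guardedExtract("myDocument")); // prints entities(myDocument)
    }
}
```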

NetKernel’s URI abstractions and throttling capabilities work well here. First, put the expensively-licensed tool behind a URI like active:expensive-tool. Rather than issuing the call directly, you could wrap the request with a call to throttle requests based on a configuration file:

req = context.createSubRequest("active:throttle");
req.addArgument("id", "throttle:expensive-tool");
req.addArgument("configuration", "ffcpl:/etc/MyThrottleConfig.xml");
req.addArgument("uri", "active:expensive-tool");
req.addArgument("doc", "");
handle = context.issueAsyncSubRequest(req);
result = handle.join();

The configuration file indicates how many requests you allow for that URI into the kernel at a time and how many requests to queue up before you start rejecting them:

<throttle>
  <concurrency>1</concurrency>
  <queue>10</queue>
</throttle>

Not only are you legally compliant, but if you do buy a second or third license, you do not have to change the policy mechanism; you just change the configuration file. You could also wrap requests for other resources with the same throttle definition if that made sense. It is very cool that you can trivially enforce throttles against arbitrary tools and services, whether they offer that capability themselves or not! The issue is not that the problem is intractable using language-level concurrency constructs; it is that they are hard to universalize, very detail-oriented, and painful to get right.

Concurrent Threading Is Useful Yet Nuanced
Java’s concurrent threading constructs are tremendously useful in the right hands, but they remain nuanced and error-prone. Language-level multithreading tools are often not the right abstractions for mixing arbitrary processing behavior with complicated system orchestrations and changing business rules. By shifting your focus to a resource-oriented mindset like that provided by NetKernel, you can easily take advantage of modern hardware and the surfeit of CPUs it offers while reusing large amounts of your existing code. Environments like NetKernel heavily leverage the language concurrency tools so you do not necessarily have to. Consider letting it do the heavy work!
