A More Complex Java/DTrace Example
Let's take a look at something a bit more complex. Perhaps you are seeing monitor contention that is affecting scalability in your application. One question you might have is: "How long do I wait from the time I attempt to acquire a contended monitor until I actually enter it?" You can create a little DTrace script to answer that question (see Listing 4).
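Although Listing 4 is not reproduced here, a script of that shape might look like the following sketch. It assumes the hotspot provider's monitor-contended-enter and monitor-contended-entered probes, as documented for the Java 6 (Mustang) JVM, and attaches to a running JVM with -p:

```d
#!/usr/sbin/dtrace -s

/* Fires when a thread starts blocking on a contended monitor. */
hotspot$target:::monitor-contended-enter
{
    /* Remember when this thread began waiting. */
    self->ts = timestamp;
}

/* Fires when the same thread finally acquires the monitor. */
hotspot$target:::monitor-contended-entered
/self->ts/
{
    /* Aggregate the wait time, per thread, into a
       power-of-two (quantize) histogram of nanoseconds. */
    @waits[tid] = quantize(timestamp - self->ts);
    self->ts = 0;
}
```

You would run this as, for example, `dtrace -s monwait.d -p <java-pid>`; when you press Ctrl-C, DTrace prints one histogram per thread ID.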
The output of this script shows that two threads had contended monitor events. It shows the thread ID, the values (which are nanosecond power-of-two buckets), and the count (the number of times a wait time fell into a particular bucket). Thread 14 was the most affected, with eight wait times longer than 131 microseconds.
So far, you have looked only inside the JVM. The real power of the DTrace/Java relationship is the ability to correlate events across the software stack (see Listing 5). If a method uses underlying packages that you suspect have JNI dependencies, you can identify those dependencies quite easily. For this example, you will check whether one of the methods of your demo application has any dependencies on libc (an operating system library).
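A sketch of such a script follows. It assumes the JVM was started with -XX:+ExtendedDTraceProbes (so the per-method hotspot probes fire), that the hotspot method-entry/method-return probes pass the method name as arg3 (pointer) and arg4 (length), and it uses the renderPath method named in the text; the libc library name may differ by platform:

```d
#!/usr/sbin/dtrace -s

/* Set a flag while this thread is inside renderPath... */
hotspot$target:::method-entry
/copyinstr(arg3, arg4) == "renderPath"/
{
    self->in = 1;
}

/* ...and clear it on the way out. */
hotspot$target:::method-return
/copyinstr(arg3, arg4) == "renderPath"/
{
    self->in = 0;
}

/* Count every libc function entered while the flag is set,
   i.e., native calls made under renderPath or its children. */
pid$target:libc.so.1::entry
/self->in/
{
    @calls[probefunc] = count();
}
```

On exit, DTrace prints each libc function name alongside how many times it was called from within renderPath.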
The output from this script shows the function name and the number of times it was called, but only from a thread that is currently in the renderPath method or its children. In this way, you can easily identify which native code your Java application is invoking through its use of other classes or packages. This particular case has a fairly high rate of native lock calls (mutex_lock). This is one of those areas that has always been difficult for Java developers to observe: no matter how well you write your Java code, you could be at the mercy of a scalability issue in JNI libraries. With DTrace, you can track these down with very little effort. In fact, for native code that may be involved in your application, a Solaris command, plockstat(1M), will report lock contention statistics (see Listing 6).
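As a rough illustration of the plockstat(1M) step (the exact flags you would choose depend on what you want to observe), you might attach it to the running process for a fixed interval:

```shell
# Watch lock events in process <pid> for 30 seconds:
#   -A  report both contention and hold events
#   -e  run for the given number of seconds, then exit
#   -p  attach to an existing process
plockstat -A -e 30 -p <pid>
```

The resulting report lists each lock (such as malloc_lock in libc.so), the number of contention events, and timing statistics, which is what the output discussed next is drawn from.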
This output (from a different application than Java2Demo, since it is more interesting) shows that you do indeed have a high contention rate on malloc_lock from libc.so. This could easily be corrected by using an alternate allocator such as libumem(3LIB). Of course, you could apply the jstack() action here to identify the Java code responsible for the native calls.
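On Solaris, switching to libumem typically requires no rebuild at all: you can interpose it at launch time with library preloading. A sketch (the application name is hypothetical):

```shell
# Preload the libumem allocator in place of libc's malloc
# for this process and its children, then start the app.
LD_PRELOAD=libumem.so.1 ./myapp
```

Because libumem uses per-CPU magazine caches, this alone can eliminate much of the malloc_lock contention that plockstat reported.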
No Hiding Place for Performance Problems
With just a few simple examples, you have seen how easy it is to gain tremendous insight into your Java application, the JVM, or any other layer of the software stack upon which your business tasks depend. With systemic observability, performance problems have nowhere to hide. It is pretty amazing that all the requisite components are completely free of charge. Oh happy day!