I’m thinking of reimplementing a computationally intensive scientific app in Java. What fraction of native C speed do people usually see with current Java compilers?
There is no single answer; it depends on your application, your compiler, your Java runtime, and the quality of its JIT. Many factors affect Java performance, including its partly interpreted nature, the overhead of just-in-time compilation, garbage collection, the security model, reflection, and the synchronization implementation, to name just a few. A computationally intensive scientific application will presumably make heavy use of floating point arithmetic, and floating point is one of Java's weak points. Before Java 1.2, all floating point was handled in a platform-independent manner, which often sacrificed the extra precision available from the native platform. This changed in Java 1.2, which uses strictly platform-independent floating point only if you apply the strictfp modifier to a class, interface, or method. Even so, floating point remains slow across Java versions. Programs performing purely integer operations that avoid excessive memory allocation and thread synchronization can come within a factor of 3 or 4 of C's speed when run on an exceptional JIT. Floating point-intensive programs, however, typically fare far worse, on the order of 10 or 20 times slower. You should write a test Java program that performs computations representative of your application and time it in your chosen Java runtime, then time an equivalent C program. That is the only way to know for sure whether your application will perform acceptably in Java. In general, Java is not well suited for scientific computing.
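Such a test program can be very simple. The sketch below is one way to time a representative floating point kernel; the class name FpBench and the dot-product loop are illustrative stand-ins for your own computations, not part of any standard benchmark. Note the warm-up loop, which gives the JIT a chance to compile the hot method before the timed run.

```java
// Minimal microbenchmark sketch for a floating point kernel.
// FpBench and dot() are hypothetical names; substitute a loop that
// resembles your application's real inner computation.
public class FpBench {

    // A representative floating point kernel: a naive dot product.
    // Declaring the method (or the whole class) strictfp would force
    // platform-independent floating point semantics.
    static double dot(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        double[] a = new double[n];
        double[] b = new double[n];
        for (int i = 0; i < n; i++) {
            a[i] = i * 0.5;
            b[i] = (n - i) * 0.25;
        }

        // Warm up so the JIT compiles the hot loop before measuring.
        double sink = 0.0;
        for (int i = 0; i < 10; i++) {
            sink += dot(a, b);
        }

        long start = System.nanoTime();
        double result = dot(a, b);
        long elapsedNs = System.nanoTime() - start;

        System.out.println("dot = " + result
                + " (warmup sink " + sink + ")");
        System.out.println("elapsed: " + elapsedNs + " ns");
    }
}
```

Write the equivalent loop in C, compile it with optimization enabled, and compare the timings on the same machine; the ratio for your own kernel is far more meaningful than any general rule of thumb.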