As we begin deploying the first real production applications built on DataRush, we find ourselves digging ever deeper into the best ways to improve performance (see Jim Falgout's JDJ cover story for some discussion of this). We are fortunate to have good tools and sharp engineers, but more information is always better. This interesting article discusses the JVM as a black box, and the challenges that perspective creates.
"...most important of all, so long as the JVM remains a black box, the "myths, legends and lore" will haunt us forever. Remember when all the Java performance articles went on and on about how methods marked "final" were better-performing and so therefore should be how you write your Java code? Now, close to ten years later, we can look back at that and laugh, seeing it for the micro-optimization it is, but if challenged on this idea, we have no proof. There is no way to create demonstrable evidence to prove or disprove this point. Which means, then, that Java developers can argue this back and forth based on nothing more than our mental model of the JVM and what "logically makes sense"."
"Some will suggest that we can use micro-benchmarks to compare the two options and see how, after a million iterations, the total elapsed time compares. Brian Goetz has spent a lot of time and energy refuting this myth, but to put it in some degree of perspective, a micro-benchmark to prove or disprove the performance benefits of "final" methods is like changing the motor oil in your car and then driving across the country over and over again, measuring how long until the engine explodes. You can do it, but there's going to be so much noise from everything else around the experiment--weather, your habits as a driver, the speeds at which you're driving, and so on--that the results will be essentially meaningless unless there is a huge disparity, capable of shining through the noise."
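To make the quoted point concrete, here is a minimal sketch of the kind of naive micro-benchmark the author warns against: timing a "final" method against a plain one over a million calls. The class and method names are hypothetical, invented for illustration. Any difference this reports is dominated by JIT warm-up, inlining decisions, and timer resolution, which is exactly the "noise" the analogy describes.

```java
import java.util.function.IntUnaryOperator;

// A naive micro-benchmark of the sort the quote criticizes.
// It compares a plain method against a "final" one, but the numbers
// it prints say more about JIT warm-up and inlining than about "final".
public class FinalBench {

    static class Plain {
        int inc(int x) { return x + 1; }        // ordinary virtual method
    }

    static class Sealed {
        final int inc(int x) { return x + 1; }  // "final" method, per the myth
    }

    // Times a million applications of op, in nanoseconds.
    static long time(IntUnaryOperator op) {
        long start = System.nanoTime();
        int acc = 0;
        for (int i = 0; i < 1_000_000; i++) {
            acc = op.applyAsInt(acc);
        }
        // Consume the result so the JIT cannot eliminate the loop outright.
        if (acc != 1_000_000) throw new AssertionError("unexpected result");
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        Plain p = new Plain();
        Sealed s = new Sealed();
        System.out.println("plain: " + time(p::inc) + " ns");
        System.out.println("final: " + time(s::inc) + " ns");
    }
}
```

Run it twice and the two figures will likely swap places; that instability, not any property of "final", is the real lesson, and it is why tools that control for warm-up and dead-code elimination (such as a proper benchmarking harness) are needed before drawing conclusions.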