In the olden days, floating-point arithmetic imposed a significant speed penalty compared to integer arithmetic. For this reason, many optimization guidebooks and IT veterans still recommend using integers instead of floating-point data. However, this adage is no longer true. On modern processors (e.g., the Pentium family), floating-point arithmetic is faster than integer arithmetic. Therefore, if you wish to accelerate a computation-intensive application, you should actually reverse this rule: use floating-point data instead of integers. Don’t get me wrong here: I don’t recommend using floating-point values as loop counters, for example. The lesson is that programmers shouldn’t heed obsolete adages and optimization myths that are no longer true.