Jacques Mattheij wrote a blog post about “The rise of the destructive programmer”:

I think it started when money first got directly involved. Some programmer in a bank somewhere figured out that rounding errors are an excellent source of income if the number of transactions is large enough. Nobody would ever miss all those half pennies. But that was an inside job and an easy one at that. That person is probably still alive and living the good life on a tropical island. As victimless crimes go it even has a certain charm.

While he makes some good points overall in the post, I find this one more than a little ridiculous. Yes, of course, one might deliberately choose to floor a number rather than apply natural rounding, but have you ever looked at an invoice? This happens all the time, and it certainly happened long before anyone moved to the digital world.

Apart from that, I think there’s a point to be made about why this approach settled into the digital world, where natural rounding could easily be implemented. Floating point values are notoriously bad at representing decimal amounts exactly, so whenever I’m dealing with financial data, I represent amounts as integers (say, cents) rather than floating point numbers. Why? Integers don’t lose accuracy, and as long as you check for overflow, you’re safe. I can only imagine that back in the days when floating point operations were even more costly, this was the approach developers took as well. This might very well be the source of the so-called “crime”: integer division, contrary to floating point division, discards the remainder, which floors the result for positive values. That’s how integers work.
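To make that concrete, here’s a minimal sketch in Python (the function names are mine, purely for illustration). It shows the classic floating point surprise, and how integer division on cent amounts silently drops the remainder unless you add an explicit rounding step:

```python
# Floating point can't represent most decimal fractions exactly:
print(0.1 + 0.2 == 0.3)  # False

def split_floor(total_cents: int, parts: int) -> int:
    """Naive per-part share: integer division discards the remainder."""
    return total_cents // parts

def split_rounded(total_cents: int, parts: int) -> int:
    """Natural (half-up) rounding, implemented purely on integers."""
    return (total_cents + parts // 2) // parts

total = 1001  # $10.01 held as integer cents
print(split_floor(total, 2))    # 500 -> the half penny vanishes
print(split_rounded(total, 2))  # 501 -> rounded up instead
```

Note that natural rounding costs only one extra addition even on integer hardware, which is exactly why its absence reads more like a default than a scheme.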

So in reality, this behavior is more likely a legacy of a lazy programmer who didn’t have the time, or didn’t want to put in the effort, to implement natural rounding on their integers, rather than some conscious wrongdoing. And we all know what happens to legacy behavior: it propagates. Think a little deeper before publishing this kind of slander.