Solution to JavaScript's 0.1 + 0.2 = 0.30000000000000004 problem?

*(cross-posted from a [discussion on The Hub][1])*

Open your JavaScript console and try this:

    > 0.1 + 0.2 == 0.3
    false
    > 0.1 + 0.2
    0.30000000000000004

You may ask yourself, [is floating point math broken?][2] The answer is that JavaScript uses floating-point math based on the [IEEE 754 standard][3], the same as Java's `double`. If you want the details on what that means, either quickly read [The Floating-Point Guide - What Every Programmer Should Know About Floating-Point Arithmetic][4] or fully digest [What Every Computer Scientist Should Know About Floating-Point Arithmetic][5].

But if I really want `0.1 + 0.2 == 0.3`, then what do I do? The Java [BigDecimal][6] class solved this problem, and there are similar solutions for JavaScript: a [GWT-compiled-to-JavaScript][7] version and a [JavaScript translation][8] of ICU4J's `com.ibm.icu.math.BigDecimal`. There are also other JavaScript libraries like [big.js][9]. Here is big.js in action:

    > Big(0.1).plus(0.2).eq(0.3)
    true
    > Big(0.1).plus(0.2).toString()
    "0.3"

How have other teams solved this problem? Is there a de facto JavaScript library to use that has been well tested?

[1]: https://thehub.thomsonreuters.com/thread/110833
[2]: http://stackoverflow.com/questions/588004/is-floating-point-math-broken
[3]: https://en.wikipedia.org/wiki/IEEE_754#Basic_formats
[4]: http://floating-point-gui.de/
[5]: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
[6]: http://docs.oracle.com/javase/7/docs/api/java/math/BigDecimal.html
[7]: https://github.com/iriscouch/bigdecimal.js
[8]: https://github.com/dtrebbien/BigDecimal.js
[9]: http://mikemcl.github.io/big.js/
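
For reference, printing more digits shows what is actually stored: none of the three literals is exactly representable as an IEEE 754 double, and the sum lands on a slightly different double than the `0.3` literal does. On a standard engine you should see something like:

    > (0.1).toFixed(20)
    "0.10000000000000000555"
    > (0.2).toFixed(20)
    "0.20000000000000001110"
    > (0.3).toFixed(20)
    "0.29999999999999998890"
    > (0.1 + 0.2).toFixed(20)
    "0.30000000000000004441"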

Answers

  • While I agree that BigDecimal solves the problem in terms of computational accuracy, beware if you are writing latency-critical code: arbitrary-precision arithmetic is considerably slower than native double arithmetic. Also, BigDecimal has several instance fields (at least 5 at a quick glance), so if your application holds a lot of BigDecimal instances, the memory footprint will be much larger than if you stuck with a double.
  • If it is monetary values, try something dumb like:

    > (10 + 20) / 100
    0.3

    i.e. convert dollars to cents, do the arithmetic in integer math, and convert back to dollars for display (see the cents sketch after this list).
  • It's an old and pretty well-known problem, one of the Awful Parts according to Crockford. The most commonly used solution/workaround I've seen is scaling: `1 + 2 === 3`. Depending on your application domain, you can choose a scale factor that keeps the numbers integers whenever you perform arithmetic operations. For example, when dealing with money, use a factor of 100 (cents). And when you actually need to present the number to the user, scale it back.
  • When it comes to comparing floating-point numbers, I always think of the [`epsilon`](http://floating-point-gui.de/errors/comparison/) approach, i.e. as long as two numbers differ insignificantly, they can be considered equal (see the epsilon sketch after this list). In short, the real numbers of mathematics are continuous and therefore **infinite**, while a floating-point number in a computer is represented by a **finite** number of bits and is therefore [discrete](http://en.wikipedia.org/wiki/Discrete_mathematics); the set of values it can represent is also **finite**.
  • Hello, try the following to get the value of your sum:

    // Count of digits after the decimal point in the number's string form
    function numberOfDecimals(decimalNumber) {
        return (decimalNumber.split('.')[1] || []).length;
    }

    // Scale both operands up to integers, add, then scale back down
    function totalSum(a, b) {
        var precisionA = numberOfDecimals(a.toString());
        var precisionB = numberOfDecimals(b.toString());
        var precision = precisionA > precisionB ? precisionA : precisionB;
        var x = Math.pow(10, precision || 2);
        return (Math.round(a * x) + Math.round(b * x)) / x;
    }
  • Crockford is pitching his DEC64 format, a decimal floating-point format for the next generation of application programming languages:
    https://github.com/douglascrockford/DEC64
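
Below is a minimal sketch of the dollars-to-cents scaling approach described in the answers above. The helper names `toCents` and `fromCents` are hypothetical, and the sketch assumes amounts have at most two decimal places (whole cents):

    // Hypothetical helpers for the integer-cents approach (a sketch, not a library API).
    // Assumes amounts have at most two decimal places.
    function toCents(dollars) {
        // Math.round absorbs any tiny floating-point error in dollars * 100
        return Math.round(dollars * 100);
    }

    function fromCents(cents) {
        return cents / 100;
    }

    var totalCents = toCents(0.1) + toCents(0.2); // 10 + 20 = 30, exact integer arithmetic
    fromCents(totalCents) === 0.3;                // true
    fromCents(totalCents).toFixed(2);             // "0.30" for display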
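
And here is a minimal sketch of the epsilon comparison mentioned in the answer above. The helper name `nearlyEqual` is hypothetical; `Number.EPSILON` requires ES2015, so older engines would need the literal value 2.220446049250313e-16 instead:

    // Hypothetical epsilon-comparison helper (sketch).
    // Number.EPSILON is the gap between 1 and the next representable double (ES2015+).
    function nearlyEqual(a, b) {
        return Math.abs(a - b) < Number.EPSILON;
    }

    0.1 + 0.2 === 0.3;           // false
    nearlyEqual(0.1 + 0.2, 0.3); // true

An absolute tolerance like this only suits values close to 1; the comparison page linked above describes a relative-epsilon check for the general case.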