
**Two Engineering Standards: Get Something Working**
Science is fine, but how can you get something done? First, there are standards. For example, IEEE Standard 754, Binary Floating Point Arithmetic, is the standard for computer encoding of real numbers. IEEE standards can cost an arm and a leg, but this one has been through review, so you can get a virtually identical draft for free.

JavaScript's numeric processing is built on such standards. For example, its Number type is IEEE 754 double precision, according to the ECMA standard. Because there's no perfect solution for encoding real numbers, this format is full of engineering compromises, and so are the algorithms that manipulate it, such as division. Because the representation isn't exact, operations invariably accumulate tiny errors. That might not seem like a big deal in most cases, but it becomes considerably more important once you see the implications, as in the following examples.

- The expensive FDIV defect in Intel's Pentium processor caused IEEE 754 division to return incorrect results. Intel eventually issued a public apology for shipping processors containing the bug, and the recall cost the company approximately $475 million.
- Daily interest on your debts and deposits can be computed with floating point arithmetic. No one wants to pay extra interest, or lose income, because of calculation errors, so the standard has to be good.

The IEEE format stores floating point numbers about as accurately as a decimal number with 15, 16 or 17 significant decimal digits; the exact count varies because of the translation between binary and decimal. To get the highest conversion accuracy, specify decimal numbers with 17 digits, and rely on at most 15 of them being right.
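You can watch that 15-to-17 digit rule in action with `Number.prototype.toPrecision`. Asking for more digits than the format reliably holds exposes the underlying binary value (a small illustration of the rule above):

```javascript
// At 15 digits, 0.1 still looks exact; beyond that, the nearest
// representable double shows through.
console.log((0.1).toPrecision(15)); // "0.100000000000000"
console.log((0.1).toPrecision(17)); // "0.10000000000000001"
console.log((0.1).toPrecision(20)); // "0.10000000000000000555"
```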

Listing 1 shows how the 17 digit limit can cause truncation and rounding. Here's the output of that listing:

```
Displaying PI
Value's Origin Value Displayed
True pi to 40 digits
3.141592653589793238462643383279502884197
Math.PI 3.141592653589793
40 digit pi literal 3.141592653589793
Variable = to a 100 digit literal 3.141592653589793
Displaying a Rational Number
1/3 expressed as a real number: 0.3333333333333333
Displaying The Impossible: 0.1
Calculated and displayed value in all cases: "0.1"
Literals used:
0.1
0.1000000000000000000000000001
0.100000000000000000000000001
0.1000000000000000000000000456
0.09999999999999999999999
```

In the first part of this output, the interpreter truncates pi to 16 digits regardless of the number of digits originally typed in. A literal string can of course be longer—but a string is not a number. In the second part of this output, the interpreter does its best to generate as many significant digits as possible to represent the true value of 1/3. The final part of the output shows how the interpreter handles several numbers whose value is very close to 1/10; however, there's no exact floating point representation for those values. In addition, the numbers supplied differ in accuracy only beyond the 20th digit. So the interpreter must transform them all into some floating point number very close to a tenth, which it turns out is the same value—0.1—in each case.
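You can confirm that the literals from the listing all collapse to the same floating point value by comparing them directly (a quick check using the same literals shown above):

```javascript
// Every one of these literals is closest to the same double,
// the one the interpreter displays as 0.1.
console.log(0.1000000000000000000000000001 === 0.1); // true
console.log(0.100000000000000000000000001 === 0.1);  // true
console.log(0.1000000000000000000000000456 === 0.1); // true
console.log(0.09999999999999999999999 === 0.1);      // true
```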

To generate this last piece of output, JavaScript converts each decimal literal to a floating point Number value and back again, which is how it prints the "0.1" output. A subtle question is: how does the interpreter choose which decimal representation to use when converting back? The ECMAScript specification requires the shortest decimal string that converts back to exactly the same floating point value. In this case, "0.1" is not only the shortest solution, but the ideal solution.
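That round-trip behavior can be sketched in a few lines:

```javascript
// toString() emits the shortest decimal string that parses back
// to exactly the same double.
const x = 0.1;
const s = x.toString();
console.log(s);                      // "0.1"
console.log(Number(s) === x);        // true -- exact round trip

// When no short decimal exists, you get the longer form:
console.log((0.1 + 0.2).toString()); // "0.30000000000000004"
```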