
JavaScript: Playing the Numbers Game


If you’re a Web developer, you may well have written something like the following input tag, which accepts a number that a user types in. Unhappily, you’ve probably also experienced the failure that can result:
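The original listing isn’t reproduced here, but a minimal sketch of the kind of tag the article has in mind might look like the following; the field names and the inline handler are illustrative, and the handler converts the typed value without checking it first:

    <form>
      <input type="text" name="age"
             onchange="this.form.twice.value = parseInt(this.value) * 2;">
      <input type="text" name="twice" readonly>
    </form>

Type “XXY” into the first field and the second field fills with NaN, because parseInt() cannot make a number out of that string.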

The problem is that user-typed strings don’t all convert to numbers: the user can enter “XXY” as easily as “23.1”. A correct solution, maybe with extra processing, is of course:
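Again, the article’s original listing isn’t shown; a minimal sketch of such a check, using the global parseFloat() and isNaN() functions (the doubleIt() name and form fields are illustrative), might be:

    <script type="text/javascript">
    function doubleIt(field) {
      var n = parseFloat(field.value);        // NaN if the string isn't numeric
      if (isNaN(n) || !isFinite(n)) {
        alert("Please enter a number, such as 23.1");
        return;
      }
      field.form.twice.value = n * 2;         // safe: n is a genuine Number
    }
    </script>
    <form>
      <input type="text" name="age" onchange="doubleIt(this);">
      <input type="text" name="twice" readonly>
    </form>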

Mostly, however, JavaScript’s string and number handling is flexible and invisible. But how flexible? Are there traps? Can you have an array of 3.5 elements? Can you add 1 to 1,000,000 and be sure to get 1,000,001, not 1.000E6? Will it display correctly? In the dim past, browsers had horrible numeric bugs. Can you relax yet, or are there still pitfalls? This article explores the limits of numeric processing by the script engine in modern browsers, versions 5.x onwards (and Netscape 4.7x and Mozilla 1.x). At the end you’ll know what’s safe and what’s not safe about numbers in browsers.

One Case of Science: Modeling the Real World
A number is a very handy concept. The most popular kinds are integers and reals. In your head you probably know what these numbers are. Trivial cases like 2 and 23.45 are easy to write down. Some, though, are not trivial. How do you write down the integer recently found by the Great Internet Mersenne Prime Search? It has more than four million digits. How do you write down the decimal value of pi (just under 3.141592654)? It has infinitely many digits. You can’t.

To write down numbers on paper or in a computer, you must use an encoding which represents the number. It’s important to realize that the encoded representation isn’t actually the number itself. All encodings have shortcomings, whether that’s a size limit or some other complexity. Is “2” an integer OR a real, or is it an integer AND a real? From your school days you may remember that you can represent a repeating digit with a dot; for example, by putting a dot over the 9 in 1.9 rather than writing 1.99999999 forever. It turns out that one-point-nine-recurring is exactly the same number as 2: it’s the convergent geometric sum 1 + (0.9 + 0.09 + 0.009 + …), which equals 2. So there are two ways to write 2. Three if you include 2.0000. Not all number concepts have a unique written form, even on paper! That’s highly inconvenient, and barely believable.

Two Engineering Standards: Get Something Working
Science is fine, but how can you get something done? First, there are standards. For example, IEEE Standard 754 (Binary Floating-Point Arithmetic) is the standard for computer encoding of real numbers. IEEE standards can cost an arm and a leg. This one’s been in review; you can get a virtually identical free draft.

JavaScript’s numeric processing is built on such standards. For example, its Number type is IEEE 754 double precision, according to the ECMA standard. Because there’s no perfect solution for encoding numbers, this format is full of engineering compromises, and so are the algorithms that manipulate it, such as division. Because the representation isn’t perfect, operations invariably accumulate tiny errors. That might not seem like a big deal in most cases, but the implications can matter a great deal, as the following examples show.

  • The expensive FDIV defect in Intel’s Pentium processor was caused by an IEEE 754 division error. Intel eventually issued a public apology for shipping processors containing this bug, which ended up costing them approximately $500 million.
  • Daily interest calculations on your debts and investments can be performed using floating point arithmetic. No one wants to pay extra interest or lose income because of calculation errors, so the standard must be good.

The IEEE format stores floating point numbers as accurately as a decimal number with either 15, 16, or 17 decimal digits. The number of digits varies due to translation between binary and decimal. To get the highest conversion accuracy, specify decimal numbers with 17 digits, and rely on at most 15 being right. Listing 1 shows how the 17 digit limit can cause truncation and rounding. Here’s the output of that listing:

Displaying PI
Value's Origin                     Value Displayed
True pi to 40 digits               3.1415926535897932384626433832795028
Math.PI                            3.141592653589793
40 digit pi literal                3.141592653589793
Variable = to a 100 digit literal  3.141592653589793

Displaying a Rational Number
1/3 expressed as a real number:    0.3333333333333333

Displaying The Impossible: 0.1
Calculated and displayed value in all cases: "0.1"
Literals used:
0.1
0.1000000000000000000000000001
0.100000000000000000000000001
0.1000000000000000000000000456
0.09999999999999999999999

In the first part of this output, the interpreter truncates pi to 16 digits regardless of the number of digits originally typed in. A literal string can of course be longer, but a string is not a number. In the second part of this output, the interpreter does its best to generate as many significant digits as possible to represent the true value of 1/3. The final part of the output shows how the interpreter handles several numbers whose value is very close to 1/10; however, there’s no exact floating point representation for those values. In addition, the numbers supplied differ in accuracy only beyond the 20th digit. So the interpreter must transform them all into some floating point number very close to a tenth, which turns out to be the same value, 0.1, in each case.

To generate this last piece of output, JavaScript converts a decimal to a floating point Number object and back again, which is how it prints the “0.1” output. A subtle question is: how does the interpreter choose which decimal representation to use when converting back? A hand-crafted algorithm inside the interpreter finds a short decimal string that converts back to exactly the same floating point value, yielding the best solution. In this case, “0.1” is not only the best solution, but the ideal solution.
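You can watch this round trip in any modern engine: each of the near-one-tenth literals below rounds to the same IEEE 754 double, and converting that double back to a string yields “0.1” every time:

    console.log(0.1000000000000000000000000001);          // 0.1
    console.log(0.0999999999999999999999999);             // 0.1
    console.log(0.1 === 0.1000000000000000000000000001);  // true - same double underneath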

Three Number Formats: JavaScript Number Implementation
JavaScript has a second number standard at work: a common format used in many computer languages called 32-bit two’s complement, or in C typically just int. This is a 4 octet (4 byte) value used to store small integers with no error. Although you probably benefit from this representation, you can’t declare it yourself in JavaScript. It’s hidden.

You can store JavaScript numbers in three ways: as decimal content of a string, as an IEEE double precision floating point, or as an integer.

Why does JavaScript have a hidden int-like type? Three reasons:

  1. For bit operations such as or (|) and shift (>>), which work on 32-bit integers (see the sketch after this list).
  2. For array use.
  3. For accuracy.
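A quick way to see the hidden integer form at work is to apply a bitwise operator, which pushes its operands through a 32-bit signed representation (a minimal sketch):

    console.log(5.9 | 0);           // 5  - the fractional part is discarded
    console.log(1 << 31);           // -2147483648 - the 32-bit sign bit appears
    console.log(4294967296 | 0);    // 0  - 2^32 does not fit in 32 bits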

Consider arrays. The ECMAScript standard for JavaScript allows 4294967295 array elements per array; that’s one less than 32 bits can hold. The remaining one is used for the length value. When you specify an array index, the JavaScript interpreter decides if the variable item holds a 32 bit integer or not.

var arr = [];
var item = 2;
arr[item] = 5;

If so, then the third statement is equivalent to:

arr[2] = 5;

But if item looks like a floating point value (or something other than a number), then the index is first converted to a string, making the statement equivalent to:

arr["2"] = 5;

This can confuse you greatly. It appears to allow negative and fractional indices, like arr[-6] and arr[2.35]. In fact, these unusual indices are converted to strings. Listing 2 (view output) shows non-integers being silently converted to strings while still looking like numeric indices.
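Listing 2 itself isn’t reproduced here, but a minimal sketch shows the effect: the odd-looking indices become ordinary string property names and never count as real array elements:

    var arr = [];
    arr[2.35] = "fraction";
    arr[-6]   = "negative";
    console.log(arr.length);      // 0 - neither index created an array element
    console.log(arr["2.35"]);     // "fraction" - it is just a string-keyed property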


The last few lines of Listing 2 illustrate what happens if you forget this is an illusion. The array slot at index 1.1 initially holds 1.1. The code takes the twentieth root and raises the result back to the twentieth power, which should return 1.1 again. But the tiny errors in floating point arithmetic add up and 1.100019, not 1.1, is the result. It is no use using this result to recall the 1.1 array element, because as an index it refers to an entirely different array member, arr[1.100019]. Be sure your array indices never take part in floating-point calculations.
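The original listing isn’t shown, but a minimal sketch of the trap looks like this; the exact error digits vary by engine, so don’t count on seeing 1.100019 in particular:

    var arr = [];
    arr[1.1] = "important data";                      // really the property "1.1"
    var index = Math.pow(Math.pow(1.1, 1/20), 20);    // 20th root, then 20th power
    console.log(index);        // typically something like 1.1000000000000005, not 1.1
    console.log(arr[index]);   // undefined unless index happens to equal exactly 1.1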

As well as array indices, integer values have accuracy needs of their own. Listing 3 (view output) shows what can happen when a for loop increments its counter by a real (fractional) value.
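Listing 3 isn’t reproduced here either, but a minimal sketch shows the classic symptom: ten additions of 0.1 never quite reach 1, so the loop below runs eleven times instead of the expected ten:

    var passes = 0;
    for (var i = 0; i < 1; i += 0.1) {   // 0.1 has no exact binary representation
      passes++;
    }
    console.log(passes);                 // 11 - after ten additions i is 0.9999999999999999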

You don’t want to deal with this every time you construct a loop. Fortunately, the JavaScript interpreter has a preference for integers. When the interpreter spots a new number, it first tries to read the number as an integer. If that’s possible, it stores the number as a 31 bit integer, not as a 64 bit IEEE 754 double. Yes, that’s 31 bits, not 32 bits. If the number can’t be read as an integer, or it won’t fit in 31 bits, then it is stored as a 64 bit IEEE 754 double. Similar logic applies to all calculations.

Numbers can get into scripts from many places in the browser. XML, XHTML, DOM, and CSS standards all provide places where numbers creep into scripts. In all cases, such numbers end up as String or Number types. In the latter case they may be stored as integers. But those other standards also allow very long number literals, which JavaScript will truncate to fit the Number type if necessary.
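For example, a value pulled out of markup arrives as a string and only becomes a Number when you convert it (the element id and attribute below are illustrative, not from the article):

    // assumes something like <div id="box" data-width="300.5"> in the page
    var raw = document.getElementById("box").getAttribute("data-width");
    console.log(typeof raw);       // "string"
    var width = Number(raw);       // now an IEEE 754 double
    console.log(width + 1);        // 301.5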

Four Values: Warning Numbers You Need to Know
The million dollar question is: when will your nice integers suddenly become error-prone double precision values? Answer: when their magnitude gets too big, or falls between zero and one. So one (1) is the first warning number.
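Anything below one means fractions, and most fractions have no exact binary form; the standard demonstration is:

    console.log(0.1 + 0.2);           // 0.30000000000000004
    console.log(0.1 + 0.2 === 0.3);   // false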

46,340 is the biggest integer which, when multiplied by itself, still fits in a 32-bit signed integer. If your script uses multiplication and your numbers get any bigger than this, you could cross over into error-prone floating point. This is your second warning number.
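You can check the boundary with simple arithmetic; the two squares below straddle the 32-bit signed limit of 2,147,483,647:

    console.log(46340 * 46340);   // 2147395600 - still inside the 32-bit signed range
    console.log(46341 * 46341);   // 2147488281 - just past 2147483647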

Now, 1,114,111 (seven digits), which is the same as 0x10FFFF, is the biggest Unicode value. Furthermore, 16,777,215 (eight digits), or #FFFFFF, is the biggest RGB color value. But 2,147,483,647 (ten digits) is the biggest 32-bit signed integer, and -2,147,483,648 is the smallest. Therefore, JavaScript can always store Unicode values and RGB colors as integers. 2,147,483,647 is the third warning number: any bigger than this and your number is stored in double precision form.
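Ordinary arithmetic stays exact just past this boundary, but bitwise operators reveal it immediately, because they wrap at 32 bits:

    console.log(2147483647 + 1);         // 2147483648 - still arithmetically exact
    console.log((2147483647 + 1) | 0);   // -2147483648 - bitwise operators wrap at 32 bits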


9,007,199,254,740,992 (sixteen digits) is the biggest floating point number that will still look like an integer when printed. All Date objects (calculated in milliseconds) are smaller than this, so they will always print like an integer. This is your fourth warning number.
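This value is 2^53, the point at which the 53 significant bits of a double run out of room for consecutive whole numbers; a quick check in a modern engine shows the gap opening up:

    console.log(9007199254740992 + 1);                    // 9007199254740992 - the +1 is lost
    console.log(9007199254740992 === 9007199254740993);   // true
    console.log(Number.MAX_SAFE_INTEGER);                 // 9007199254740991 in modern engines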

Finally, 1.7976931348623157e+308 is the largest double-precision number. Beyond this, there’s only Infinity.
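JavaScript exposes this limit as Number.MAX_VALUE, and anything that overflows it collapses to Infinity:

    console.log(Number.MAX_VALUE);                  // 1.7976931348623157e+308
    console.log(Number.MAX_VALUE * 2);              // Infinity
    console.log(isFinite(Number.MAX_VALUE * 2));    // false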

Five Drops of Wisdom: How to Stay Out of Trouble

  1. Most Web pages don’t need fractional numbers. Avoid them and stick with integers. Make sure your array indexes are whole numbers. Count money in cents, not in dollars. Count percentages in whole percents, not in fractions. Minimize use of division (/) and modulo (%) operations, as they lead straight to floating point in most cases. If you must divide, use the Math.round method afterwards to get your integers back.
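    For instance, a minimal sketch that splits a bill held in cents:

      var totalCents = 1000;                     // $10.00 held as an integer
      var share = Math.round(totalCents / 3);    // 333, not 333.3333333333333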
  2. If you can perform all the mathematics in your script as integer operations, your numbers will never contain any error. If both operands of the +, -, and * operators are integers, and no extreme values occur, then your results will remain integers. This is the normal, everyday case. If you must rely on floating point, use guard digits: extra decimal places specified beyond those you actually need. If you require five-digit decimal accuracy, use six, or better yet eight, decimal digits. Rounding error creeps into your five important digits far less rapidly when you use three guard digits. Do not rely on always using 17 decimal places; that won’t always save you. For example, using 22 digits, all browsers display:
      1000200000000000000001
    - 1000155000000000000001
    ------------------------
    =      44999999999967230

    In contrast, using 21 digits (or merely 7 digits, if you remove the trailing zeros), all browsers display:

      100020000000000000000
    - 100015500000000000000
    -----------------------
    =     4500000000000000

    Despite their differences, both examples are correct to 10 significant figures or more.

  3. Do not use numbers too close together or too far apart. Subtracting or comparing two numbers that are very close together is highly error prone. Adding tiny numbers to large numbers is a waste of time, as the tiny number’s contribution will simply vanish. Multiplying small and large numbers together is not a problem. For example, this line of code will not overflow the integer limits, or cause any accuracy problems, even though 12,345,678 is greater than the 46,340 warning number:
    var total = 2 * 12345678;  // = 24691356

    But the following line of code is likely to be inaccurate, because many significant digits are required to get the difference exactly right:

    var total = 0.1 - 0.09;    // = 0.010000000000000009
  4. Check your results with isFinite() and isNaN(). If you submit any numerical data via a form, you should always do these checks beforehand, anyway:
    var data = document.forms[0].elements[0].value;
    if (isFinite(data) && !isNaN(data))   // the global functions coerce the string to a number
      data = parseFloat(data);
  5. Use calculations parsimoniously. The less mathematics you do, the less error will creep in. Treat your floating point numbers gently, keep them safe, and don’t work them over and over. Your browser (and your users) will love you.