Representation of Floating Point Numbers

People sometimes complain about the inaccuracy of floating point arithmetic. To demonstrate the level of floating point inaccuracy, consider the following program:

   #include <iostream>
   using namespace std;

   int main()
   {
      float f1 = 2000.58;
      float f2 = 2000.0;
      cout << f1 - f2 << endl;
   }

On my machine, this program prints 0.579956 instead of 0.58. More complex calculations yield higher inaccuracy. What is going on here?

First, remember that rounding, approximation, and truncation are not the responsibility of C++; rather, they depend on the particular hardware your machine uses. Note also that floating point numbers are merely approximations defined by the IEEE standard. On most machines, type float occupies 32 bits. The IEEE standard requires that 1 bit be used for the sign (i.e., negative/positive values), 8 bits for the exponent, and the remaining 23 bits for the mantissa. Because a normalized mantissa always has the form 1.nnnn..., the leading 1 is implicit and need not be stored, so the 23 stored bits effectively provide 24 bits of precision.
