You can look this up in detail. Floating-point arithmetic is implemented either in your processor or in a math library, if you have it at all. Simply put, this representation splits a number into a fraction and an exponent, much as scientific notation does. The range of the exponent and the number of digits in the fraction are limited by the width of each field. In practice there are two common standardized representations (from the IEEE 754 standard): single precision, with 32 bits per value, and double precision, with 64.