The first syntax returns the current value of the floating-point precision field for the stream. The second syntax also sets it to a new value.
The floating-point precision determines the maximum number of digits written by insertion operations to express floating-point values. How it is interpreted depends on whether the floatfield format flag is set to a specific notation (either fixed or scientific) or is unset (the default notation, which is neither fixed nor scientific):
- Using the default floating-point notation, the precision field specifies the maximum number of meaningful digits to display in total, counting both those before and those after the decimal point. Notice that it is not a minimum, and therefore it does not pad the displayed number with trailing zeros if the number can be displayed with fewer digits than the precision.
- In both the fixed and scientific notations, the precision field specifies exactly how many digits to display after the decimal point, even if this includes trailing decimal zeros. The number of digits before the decimal point does not matter in this case.
This decimal precision can also be modified using the parameterized manipulator setprecision.
Parameters
- New value for the floating-point precision. This is an integral value of type streamsize.
Return Value
The value set as precision for the stream before the call.
The execution of this example displays something similar to:
3.1416
3.14159
3.1415900000
Notice how the first number written is just 5 digits long, while the second is 6, but not more, even though the stream's precision is now 10. That is because precision with the default floatfield only specifies the maximum number of digits to be displayed, but not the minimum.
The third number printed displays 10 digits after the decimal point because the floatfield format flag is in this case set to fixed.
| See also | |
| --- | --- |
| setprecision | Set decimal precision (manipulator function) |
| ios_base::width | Get/set field width (public member function) |
| ios_base::setf | Set specific format flags (public member function) |