Floating point exceptions (FPEs) are error conditions raised when a floating point operation cannot produce a normal result, such as dividing by zero or exceeding the representable range. In C++ code, FPEs can crash programs and cause unexpected behavior, so understanding and handling them is critical for developing robust applications.
Underlying Causes of Floating Point Exceptions
There are several distinct issues that can trigger floating point exceptions during program execution:
Division by Zero
Attempting to divide by zero is the most common cause of FPE crashes:
double x = 5.0;
double y = 0.0;
double z = x / y; // Raises the IEEE divide-by-zero exception
Zero has no multiplicative inverse, so the quotient is mathematically undefined. Under IEEE 754, the operation raises the divide-by-zero exception: by default the result is ±infinity and a status flag is set, and the program traps (typically via SIGFPE) only when that exception has been unmasked.
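The default (non-trapping) behavior can be observed directly through the IEEE status flags. A minimal sketch, assuming default floating point semantics (strict conformance also calls for #pragma STDC FENV_ACCESS ON):

```cpp
#include <cfenv>

// Returns true if computing x / y raises the IEEE divide-by-zero flag.
// Sketch under default (non-trapping) IEEE 754 semantics.
bool raises_divbyzero(double x, double y) {
    std::feclearexcept(FE_DIVBYZERO);  // clear any stale flag
    volatile double z = x / y;         // volatile keeps the division alive
    (void)z;
    return std::fetestexcept(FE_DIVBYZERO) != 0;
}
```

Note that 5.0 / 0.0 sets the flag and yields +infinity, while a normal division leaves the flag clear.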
Overflow
Calculations that yield a value whose magnitude exceeds the type's representable range result in an overflow:
float x = 3.4e38f; // Near the maximum float value
x = x * 1000000; // Overflow: exceeds float range
The maximum finite float value is about 3.4e38. Multiplying by one million exceeds the representable range, so the result overflows to infinity and the overflow exception is raised.
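Under default IEEE 754 semantics the overflow does not trap; the result simply becomes +infinity, which can be detected after the fact. A small sketch (the function name is illustrative):

```cpp
#include <cmath>

// Returns true if multiplying value by factor overflows a finite float
// to infinity under default IEEE 754 semantics (no trap occurs).
bool overflowed(float value, float factor) {
    float r = value * factor;
    return std::isinf(r) && !std::isinf(value);
}
```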
Underflow
Underflow occurs when a computation yields a non-zero result too small in magnitude to represent as a normalized value: below about 1.1755e-38 for float, or 2.2251e-308 for double. Such tiny values can no longer be represented precisely:
float x = 1e-50; // Severe underflow: below even the subnormal range, rounds to zero
Depending on the hardware mode, tiny results are either kept as less precise subnormal (denormal) values or flushed straight to zero. The loss in precision can negatively impact further calculations.
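How a tiny result is actually stored can be checked with std::fpclassify. A sketch assuming default rounding with no flush-to-zero mode enabled (helper names are illustrative):

```cpp
#include <cmath>

// Values below the normal float range become subnormal; values below
// even the subnormal range round to zero under default rounding.
bool is_subnormal(float x) { return std::fpclassify(x) == FP_SUBNORMAL; }
bool is_zero(float x)      { return std::fpclassify(x) == FP_ZERO; }
```

For example, 1e-40 is representable only as a subnormal float, while 1e-50 rounds all the way to zero.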
Inaccurate Calculations
Floating point math is inherently inaccurate for some fractional decimal values due to binary rounding errors:
double x = 0.1;
double y = 0.2;
if(x + y == 0.3) {
// Comparison fails due to precision error
}
The lowest bits differ after the addition because neither 0.1 nor 0.2 can be represented exactly in binary, so the rounded sum is not bit-identical to the rounded literal 0.3.
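The standard remedy is to compare within a tolerance instead of with ==. A minimal sketch; an absolute epsilon suits values near 1.0, while values of very different magnitudes call for a relative tolerance:

```cpp
#include <cmath>

// Compare two doubles within a small tolerance rather than exactly.
// The epsilon value is an illustrative choice, not a universal one.
bool nearly_equal(double a, double b, double eps = 1e-9) {
    return std::fabs(a - b) < eps;
}
```

With this helper, nearly_equal(0.1 + 0.2, 0.3) succeeds where exact equality fails.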
Invalid Operations
Mathematically invalid operations will also result in exceptions, such as:
- Square root of negative numbers
- Logarithms of zero or negative numbers
- Fractional powers of negative numbers, whose true result would be complex
- Indeterminate forms like 0/0 or infinity/infinity
For example:
double x = -5;
double y = sqrt(x); // Invalid operation: result is NaN
These operations have no real-valued result, so IEEE 754 defines them to raise the invalid-operation exception: by default they produce a quiet NaN, and they trap only when trapping is enabled.
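The resulting NaN can be detected after the call with std::isnan. A short sketch:

```cpp
#include <cmath>

// sqrt of a negative argument yields a quiet NaN under default
// IEEE 754 semantics; std::isnan detects it after the fact.
bool produces_nan(double x) {
    double r = std::sqrt(x);
    return std::isnan(r);
}
```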
Denormalized Numbers
Another special class of floating point values, called denormalized or denormal (subnormal) numbers, can also introduce floating point exceptions on some hardware. Denormals represent tiny non-zero values between zero and the smallest normal magnitude (about 1.1755e-38 for float, 2.2251e-308 for double). For efficiency, some processors handle denormalized numbers differently than normalized numbers, which can result in FPEs in computations involving denormals. This often manifests as erratic underflow or overflow exceptions.
Newer processors implement the IEEE 754 requirements for arithmetic on denormalized values in hardware, but legacy platforms may still trigger unpredictable FPEs or severe slowdowns on tiny values near zero.
Floating Point Representation and Standards
To understand the causes of FPEs, it is helpful to review how floating point numbers and arithmetic work at the hardware level.
Floating Point Representation
All mainstream processors represent float and double values using the IEEE 754 binary floating point standard. Some key aspects include:
- Sign bit: 1 bit indicating positive or negative number
- Exponent: Component representing magnitude and scale
- Mantissa/Significand: Fractional precision bits
- Fixed total width per type, e.g. 32 bits for float (1 + 8 + 23) and 64 bits for double (1 + 11 + 52)
This standardized layout enables efficient floating point computations directly in hardware. However, the fixed widths constrain range and precision which frequently causes exceptions on overflow, underflow, rounding errors, etc.
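The layout can be inspected directly by viewing a float's bits. A sketch using memcpy for a well-defined type pun (the function name is illustrative):

```cpp
#include <cstdint>
#include <cstring>

// Split a float into its IEEE 754 single-precision fields:
// 1 sign bit, 8 exponent bits (biased by 127), 23 significand bits.
void decompose(float f, uint32_t& sign, uint32_t& exponent, uint32_t& mantissa) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits); // well-defined view of the bits
    sign     = bits >> 31;               // 1 bit
    exponent = (bits >> 23) & 0xFFu;     // 8 bits
    mantissa = bits & 0x7FFFFFu;         // 23 bits
}
```

Decomposing 1.0f, for instance, yields sign 0, biased exponent 127, and a zero mantissa.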
IEEE 754 Compliance
The IEEE Standard for Floating-Point Arithmetic (IEEE 754) defines precise specifications for formats, arithmetic operations, exception handling, etc. Released in 1985 and revised in 2008 and 2019, IEEE 754 standardizes floating point math across hardware and languages like C++.
Full IEEE 754 compliance requires properly detecting FPE conditions like divide by zero and generating the specified exceptions. This allows software to reliably catch issues and respond appropriately. Any hardware or compiler deviations result in confusing inconsistencies vulnerable to crashes.
Frequency of Floating Point Exceptions
How common are floating point errors? In one study, 5% of all floating point computations in scientific workloads resulted in exceptions ([Luo 2017]). Another found that 8.5% of floating point operations in GCP server workloads led to FPEs ([Wang 2021]). FPEs therefore occur frequently in large computational programs.
Untrapped hardware exceptions directly crash programs. By enabling exception traps, crashes can be avoided but at high performance cost:
| Exception Handling Method | Relative Slowdown |
|---|---|
| Hardware Exceptions | 1x |
| Trapping | 4x to 20x |
So while vital for correctness, there are tradeoffs to rigorous FPE handling.
Floating Point Exceptions Handling Approaches
Several major techniques exist to handle floating point errors:
Hardware Exceptions
When an FPE condition is unmasked and no handler is installed, the processor invokes the OS trap handler, which delivers a signal (SIGFPE on POSIX systems) and immediately terminates the program. Directly crashing on exceptions is undesirable for robust programs.
Trapping/Gradual Underflow
Enabling traps allows catching exceptions, inspecting status flags, and gracefully recovering in software. A trap handler can substitute a specified default value for the offending operation and resume execution instead of crashing.
However, traps impose high runtime performance penalties – up to 20x slower for some workloads ([Wang 2021]). Trapping also reduces floating point throughput which hampers scalability.
An optimization called gradual underflow avoids trapping on results that underflow the normal range: tiny results are instead represented as subnormal values (or, with flush-to-zero enabled, replaced by zero). This avoids slow traps without introducing large errors, retaining performance while preventing crashes.
Exception Flags
Rather than trapping, the exception status flags defined in IEEE 754 can be checked after an operation. This allows detecting FPE conditions without costly traps:
#include <cfenv>

std::feclearexcept(FE_ALL_EXCEPT); // Clear flags first
// ... floating point operations ...
if(std::fetestexcept(FE_OVERFLOW)) {
// Handle overflow
}
Flags let software handle exceptions modularly when needed without trapping every operation.
Language Exception Handling
C++ and other high-level languages also provide exception handling constructs like try/catch that allow graceful recovery. Note that built-in floating point operations do not throw C++ exceptions on their own, so the code must detect the hazardous condition and throw explicitly:
try {
double d = 0.0;
if(d == 0.0)
throw std::domain_error("divide by zero");
double y = 5.30 / d;
} catch(const std::exception& e) {
// Catch exception
std::cerr << "FPE Error: " << e.what();
}
This avoids crashes by detecting the FPE condition, reporting it, and allowing further code execution.
The advantage over trapping is exception handling imposes little runtime overhead until exceptions actually occur.
Code Practices to Avoid Exceptions
While handling alleviates symptoms, avoiding FPEs in the first place is preferable. Some coding best practices that reduce exceptions include:
- Carefully validating all inputs before calculations
- Adding overflow/underflow checks around risky operations
- Using wider floating point types (e.g. long double) where needed
- Comparing floats/doubles against a small epsilon instead of exact equality
- Enabling flush-to-zero or gradual underflow in hardware settings
- Adding tiny offsets to guard unsafe operations like division by zero
These nudges can drastically reduce exceptions in practice.
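The input-validation practice can be sketched as a small guarded wrapper; the function name and exception type are illustrative choices, not from the text:

```cpp
#include <stdexcept>

// Validate the divisor up front and fail loudly in software,
// rather than letting an exceptional division happen in hardware.
double safe_divide(double num, double den) {
    if (den == 0.0)
        throw std::invalid_argument("safe_divide: zero divisor");
    return num / den;
}
```

Callers then handle a well-defined C++ exception instead of an infinity or a SIGFPE.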
Comparative Analysis
The various floating point exception handling approaches involve clear tradeoffs and implications:
| | Hardware Trapping | Software Flags | Language Exceptions |
|---|---|---|---|
| Performance | Very Low | High | Medium |
| Precision | High | Moderate | High |
| Code Complexity | Low | Moderate | Moderate |
| Portability | High | Low | High |
Precision-oriented scientific applications may prefer hardware trapping or language exceptions despite the overheads. Low latency software can exploit IEEE exception flags to avoid most traps while retaining control when needed. There are merits to each approach based on use cases.
Recommendations for Pragmatic Exception Handling
For most general applications, the following holistic strategy manages FPEs without excessive overheads:
- Enable flush-to-zero or gradual underflow in hardware settings
- Use language exception handling constructs for code clarity
- Disable costly trapping which cripples performance
- Validate inputs & avoid unchecked exceptional operations directly in code
- Treat denormalized numbers as zero to avoid instabilities
- Use exception flag checks around high-risk operations
- Consider wider floating point types (double, long double) for additional precision
This balanced approach retains high performance while still avoiding crashes from common floating point exceptions. Language exception handling prevents most logic errors while zeroing denormals and flushing tiny underflows preempts spurious hardware exceptions. For the residual anomalies, IEEE exception flags allow handling rare corner cases modularly.
Conclusion
Care must be taken when working with floating point math in C++ due to the prevalence of exceptions and inaccuracies. By understanding the underlying floating point representation and arithmetic, the root causes of the various exceptions become apparent: divide by zero, overflow and underflow, denormalized numbers, invalid operations, and precision loss.
Several exception handling techniques exist like hardware trapping, software status flags, and C++ exception handling. There are performance vs precision tradeoffs to each approach. For most applications, pragmatic use of zeroing along with language exceptions and status flag checks yields a good compromise. Promoting resilient code that prevents unchecked exceptions, while handling the anomalies gracefully when they rarely occur, leads to the most stable and efficient programs.