
The Fun of Floating Point Numbers in one Image

Programming with floating point is always fun. Here's a nice little screen capture summarizing the insanity that sometimes arises:

[Screen capture: the result of an effectively no-op float expression exceeding the 0.33F threshold]
.NET keeps 9 digits of precision internally, but typically only displays 7 digits of precision, so I had a hell of a time figuring out why a value from what's effectively a no-op was exceeding the 0.33F threshold I was looking for.
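For concreteness, here's a minimal C# sketch of the kind of check that went wrong; the `0.33F + 1F - 1F` expression is the one quoted in the comments below, and the result assumes plain single-precision rounding at each step (a different compiler or JIT is free to round differently, which is part of the fun):

```csharp
using System;

class FloatingPointFun
{
    static void Main()
    {
        float threshold = 0.33F;

        // Mathematically a no-op: add 1 and subtract it again. But 0.33 has no
        // exact binary representation, and the intermediate rounding of
        // 0.33F + 1F doesn't land back on 0.33F after 1F is subtracted.
        float value = 0.33F + 1F - 1F;

        Console.WriteLine(value > threshold);  // True: 0.33000004... > 0.33000001...
    }
}
```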

Losing equational reasoning is always fun, but this is even more bizarre than usual. Yay floating point!

Comments

svick said…
> .NET keeps 9 digits of precision internally, but typically only displays 7 digits of precision

That's not true on .NET Core/.NET 5+. When I run `(0.33F + 1F - 1F).ToString()` there, the output I get is "0.33000004".
Sandro Magi said…
What are you claiming is not true exactly? Because if you're disputing the claim that you quoted, then maybe you should take it up with Microsoft, whose official documentation says:

> All floating-point numbers have a limited number of significant digits, which also determines how accurately a floating-point value approximates a real number. A Single value has up to 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.

If you're claiming that the expression that I quoted produces different results on your runtime, then great, you just proved my point again: that floating point is a quagmire because different optimizations and compilations of floating point code can produce different results.
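To make the 7-versus-9-digit point from the quoted documentation concrete, here's a rough sketch that formats the same value with the standard "G7" and "G9" numeric format specifiers; the default `ToString()` behaviour also changed in .NET Core 3.0, which is why the two runtimes print different strings for the same bits:

```csharp
using System;

class DisplayPrecision
{
    static void Main()
    {
        float value = 0.33F + 1F - 1F;  // the expression from the comment above

        Console.WriteLine(value.ToString("G7"));  // "0.33"        -- 7 significant digits hide the extra bits
        Console.WriteLine(value.ToString("G9"));  // "0.330000043" -- all 9 internally maintained digits
        Console.WriteLine(value.ToString());      // "0.33" on .NET Framework; "0.33000004" (shortest round-trippable) on .NET Core 3.0+
    }
}
```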
