
The big problem with posits is that their relative error depends on the magnitude of the value. That's terrible for a lot of engineering work and scientific simulations where you need to present an error estimate that includes computational error (for ML it's probably fine).


Isn't that also the usual argument against subnormal numbers? Anyway, you're right. According to section 4.3 of this paper:

https://people.eecs.berkeley.edu/~demmel/ma221_Fall20/Dinech...

the relative error is bounded by 2^{-24} only on the interval [1.0e-6, 1.0e6] for the Posit32 format, whereas the same bound holds on the whole normal range [1.17e-38, 3.4e38] for the IEEE binary32 format.
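The binary32 side of that comparison is easy to check empirically. A minimal sketch (using NumPy, which the thread doesn't mention): round random doubles spanning the binary32 normal range to float32 and confirm the relative rounding error never exceeds 2^{-24}.

```python
import numpy as np

# Sample magnitudes across the binary32 normal range [1.17e-38, 3.4e38].
rng = np.random.default_rng(0)
exps = rng.uniform(-37, 37, size=100_000)
xs = rng.uniform(1.0, 2.0, size=100_000) * 10.0 ** exps

# Round each double to binary32, then measure the relative error.
rounded = xs.astype(np.float32).astype(np.float64)
rel_err = np.abs(rounded - xs) / np.abs(xs)

print(rel_err.max() <= 2.0 ** -24)  # True: bound holds everywhere sampled
```

For Posit32 the same experiment would show the bound failing once magnitudes leave roughly [1e-6, 1e6], because precision tapers off toward the extremes of the dynamic range.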


IMO, this isn't a big deal. If you want rigorous error estimates, you need to use some form of interval arithmetic (or ball arithmetic). Also, these types of engineering and scientific work are pretty much all 64 bit, while posits are mainly useful at <=32 bits. My ideal processor would have 64-bit floating point (with Inf/NaN behavior more like posits) and posits for 16 and 32 bit.



