Thoughts on Clojurescript and BigDecimal
Turns out, it's a yak barbershop.
So you're writing an awesome scientific or financial app in Clojurescript with a backend as a service, only to discover the numbers feel really strange, as if they're ever-so-slightly off. And they are off, just ever-so-slightly. As it turns out, Clojurescript does not support the Clojure core form or type bigdec, meaning your calculations are using the dreaded IEEE 754 floating point. In fact, any attempt to create a BigDecimal (or BigInteger) will result in a no-operation, returning the value provided.
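You can see the underlying problem in any Clojurescript REPL, since every number is an IEEE 754 double:

```clojure
;; ClojureScript REPL: all numbers are JS doubles, so the classic
;; floating-point surprises show up immediately.
(+ 0.1 0.2)          ;; => 0.30000000000000004
(= 0.3 (+ 0.1 0.2))  ;; => false
```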
Let's back up a bit here. BigDecimal is the arbitrary-precision best friend we all know and love for computing (or not, pick your poison), but BigDecimal isn't supported largely because Clojurescript doesn't have support for BigInteger. They're separate types, so why would that matter?
They're one and the same
Unfortunately, Clojure's implementations of BigInt and BigDecimal just wrap the Java versions. If we look at the Java implementation of BigDecimal, we can see that under the hood BigDecimal uses a BigInteger object for the unscaled value, and a couple of ints for the scale and precision.
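You can poke at this from a plain Clojure (JVM) REPL, since Clojure's M literals are just java.math.BigDecimal instances:

```clojure
;; Clojure (JVM) REPL: a bigdec literal is a java.math.BigDecimal,
;; and its guts are a BigInteger plus an int scale.
(type 1.23M)            ;; => java.math.BigDecimal
(.unscaledValue 1.23M)  ;; => 123 (a java.math.BigInteger)
(.scale 1.23M)          ;; => 2
```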
Going further, we can see BigInteger stores its magnitude as an array of big-endian 32-bit integers, plus a single int for the sign of the BigInteger.
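A bit of reflection from the Clojure REPL confirms those two fields (the names signum and mag come from the OpenJDK source):

```clojure
;; Reflecting on java.math.BigInteger: the sign int and the big-endian
;; int[] magnitude are right there in the declared fields.
(->> (.getDeclaredFields java.math.BigInteger)
     (map #(.getName ^java.lang.reflect.Field %))
     (filter #{"signum" "mag"})
     set)
;; => #{"signum" "mag"}
```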
Digging deeper, we can see a lot of thought has gone into the BigInteger implementation. Java's BigInteger makes use of some very fast algorithms like the Karatsuba algorithm and Toom-Cook multiplication, switching between them depending on the size of the operands. I'm speculating a bit here, but for Clojurescript to support bigint and bigdec forms, these operations (or similar) would have to be re-written.
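To give a feel for what "re-written" means, here is a toy Karatsuba multiplication in Clojure. It cheats by leaning on Clojure's own bigints for the sub-products and assumes non-negative operands, so it only sketches the recursion a real port would have to run over raw limb arrays:

```clojure
;; Toy Karatsuba: split each operand around 10^m, recurse on three smaller
;; products, and recombine. Real implementations work on binary limbs and
;; fall back to schoolbook multiplication below a threshold, as Java does.
(defn karatsuba [x y]
  (if (or (< x 100N) (< y 100N))
    (*' x y)                                   ; small enough: multiply directly
    (let [n  (max (count (str x)) (count (str y)))
          m  (quot n 2)
          p  (bigint (.pow (biginteger 10) m)) ; split point 10^m
          x1 (quot x p)  x0 (rem x p)
          y1 (quot y p)  y0 (rem y p)
          z2 (karatsuba x1 y1)
          z0 (karatsuba x0 y0)
          z1 (-' (karatsuba (+' x1 x0) (+' y1 y0)) z2 z0)]
      (+' (*' z2 p p) (*' z1 p) z0))))

(= (karatsuba 12345678901234567890N 98765432109876543210N)
   (*' 12345678901234567890N 98765432109876543210N))
;; => true
```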
Why not use Javascript's BigInt?
The BigInt implementation is fairly new to the ECMAScript spec, and adding it to Clojurescript could break existing implementations of Clojurescript. On top of that, the BigInt implementation uses Javascript's Number implementation under the hood, and just like Number, it suffers from precision defects since it uses the double-precision 64-bit binary IEEE 754 format and is therefore not arbitrary-precision.
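The snippet I ran looked something like the following (the exact "ridiculous integer" here is a stand-in; any value far beyond 2^53 behaves the same way):

```clojure
;; ClojureScript REPL: feeding a huge integer literal to js/BigInt.
;; The literal is compiled to a JS Number (an IEEE 754 double) first,
;; so the low-order digits are gone before BigInt ever sees them.
(str (js/BigInt 123456789012345678901234567890))
;; => a nearby integer, but not "123456789012345678901234567890"
```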
If we do the same thing in plain-old Clojure, we can see there is no loss in precision for our ridiculous integer.
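Roughly, in a JVM Clojure REPL:

```clojure
;; Clojure (JVM) REPL: literals that overflow a long are read as
;; arbitrary-precision bigints, so every digit survives.
123456789012345678901234567890
;; => 123456789012345678901234567890N
(* 123456789012345678901234567890 2)
;; => 246913578024691357802469135780N
```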
Update: Apparently, as some have pointed out, my repl was converting the integer to a js/Number before it ever reached the js/BigInt constructor. js/BigInt does work for arbitrary precision.
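In other words, the trick is to keep the value away from Number entirely, for example by handing js/BigInt a string:

```clojure
;; Passing a string keeps every digit, since nothing round-trips through
;; a 64-bit double on the way to BigInt.
(str (js/BigInt "123456789012345678901234567890"))
;; => "123456789012345678901234567890"
```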
What you can do about it
Right now, options are limited. One can't simply replicate the Java BigInteger implementation, because even integers in Javascript are really just IEEE 754 floating point under the hood, so building your own Clojurescript library for a bigint form is quite the challenge. It might be doable so long as you can dodge the accuracy problems of each integer in your array, or maybe you're wicked smart and can come up with a better BigInteger implementation using typed arrays in Clojurescript while ditching the older approach.
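For what it's worth, here is a hedged sketch of the typed-array direction: keep the magnitude as little-endian 16-bit limbs in a js/Uint16Array (16 bits leaves enough headroom that even limb products stay well below 2^53) and carry by hand. It is illustrative only, nowhere near a real library:

```clojure
;; Add two little-endian limb arrays with an explicit carry. Each limb is
;; 16 bits, so the intermediate sums stay exact even though the arithmetic
;; underneath is still IEEE 754 doubles.
(defn add-limbs [a b]
  (let [n   (max (.-length a) (.-length b))
        out (js/Uint16Array. (inc n))]
    (loop [i 0, carry 0]
      (if (< i n)
        (let [sum (+ (or (aget a i) 0) (or (aget b i) 0) carry)]
          (aset out i (bit-and sum 0xFFFF))
          (recur (inc i) (if (> sum 0xFFFF) 1 0)))
        (do (aset out n carry) out)))))

(array-seq (add-limbs (js/Uint16Array. #js [0xFFFF])
                      (js/Uint16Array. #js [1])))
;; => (0 1)   ; little-endian limbs, i.e. the value 0x10000
```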
Some people have taken to forking Clojurescript and adding Javascript's BigInt as a Clojurescript literal. Of course, this suffers from the caveats explained above, but it might be worth pursuing if you can deal with them.
Another option I've seen in the wild is using the Google Closure Library's goog.string.format function with something like "%.2f" to truncate the floating point values when they are sent to the view. Not exactly precise, but if you don't care about the odd 0.00000000000000004 showing up in your calculations, this could work for you. It is not my favourite option.
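A quick sketch of that workaround, assuming the usual goog.string.format require dance; note that it only hides the error at display time rather than fixing it:

```clojure
(ns example.views
  (:require [goog.string :as gstring]
            [goog.string.format]))   ; loading this namespace defines gstring/format

;; Round only at the edge, when the number becomes text for the view.
(gstring/format "%.2f" (+ 0.1 0.2))
;; => "0.30"
```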
The final and obvious solution would be to write a simple backend in Clojure. Certainly an underwhelming conclusion, but for applications where arbitrary precision matters, the architecture of your application should take into account how to do these calculations, even if that means adding another component to that architecture.
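To make that concrete, here is a minimal sketch (Ring is an assumption on my part, and the route and parameter names are made up): do the arithmetic with BigDecimal on the JVM and ship the result back as a string, so it never round-trips through a JS double.

```clojure
(ns example.api
  (:require [ring.util.response :as resp]))

;; Hypothetical handler: parse the operands as BigDecimals, add them exactly,
;; and return the result as text for the ClojureScript client to display as-is.
(defn add-handler [{:keys [params]}]
  (let [a (bigdec (get params "a"))
        b (bigdec (get params "b"))]
    (resp/response (str (+ a b)))))
```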