
BigDecimal optimal scale for encoding

I need to encode a BigDecimal compactly into a ByteBuffer to replace my current (rubbish) encoding scheme (writing the BigDecimal as a UTF-8 encoded String prefixed with a byte denoting the String length).

Given that a BigDecimal is effectively an integer value (in the mathematical sense) plus an associated scale, I am planning to write the scale as a single byte followed by a VLQ-encoded integer. This should adequately cover the range of expected values (i.e. max scale 127).
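For reference, here is a minimal sketch of the write path I have in mind. It is only an illustration under some simplifying assumptions: the unscaled value is non-negative and fits in a long, and the integer part is written as a little-endian base-128 varint (a real codec would also handle the sign and arbitrary-precision values):

    import java.math.BigDecimal;
    import java.nio.ByteBuffer;

    public final class BigDecimalCodec {

        // Writes the scale as one signed byte, then the unscaled value as a
        // base-128 varint (7 payload bits per byte, high bit = continuation).
        // Assumptions: scale fits in [-128, 127]; the unscaled value is
        // non-negative and fits in a long.
        static void encode(BigDecimal value, ByteBuffer buffer) {
            int scale = value.scale();
            if (scale < Byte.MIN_VALUE || scale > Byte.MAX_VALUE) {
                throw new IllegalArgumentException("scale out of range: " + scale);
            }
            buffer.put((byte) scale);

            long unscaled = value.unscaledValue().longValueExact();
            if (unscaled < 0) {
                throw new IllegalArgumentException("sign handling omitted in this sketch");
            }
            do {
                long chunk = unscaled & 0x7F;
                unscaled >>>= 7;
                buffer.put((byte) (unscaled != 0 ? chunk | 0x80 : chunk));
            } while (unscaled != 0);
        }
    }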

My question: When encountering large values such as 10,000,000,000 it is clearly optimal to encode this as the value 1 with a scale of -10, rather than encoding the integer 10,000,000,000 with a scale of 0 (which will occupy more bytes). How can I determine the optimal scale for a given BigDecimal? In other words, how can I determine the minimum possible scale I can assign to a BigDecimal without having to perform any rounding?

Please do not reference the term "premature optimisation" in your answers :-)

BigDecimal#stripTrailingZeros seems to do exactly that.
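For example, a quick check of what it does to the value from the question:

    import java.math.BigDecimal;

    public class StripTrailingZerosDemo {
        public static void main(String[] args) {
            BigDecimal big = new BigDecimal("10000000000");   // 10,000,000,000
            BigDecimal stripped = big.stripTrailingZeros();

            // unscaledValue = 10000000000, scale = 0
            System.out.println(big.unscaledValue() + ", scale " + big.scale());
            // unscaledValue = 1, scale = -10  (i.e. 1 x 10^10)
            System.out.println(stripped.unscaledValue() + ", scale " + stripped.scale());
        }
    }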


 