
Python increment float by smallest step possible predetermined by its number of decimals

I've been searching around for hours and I can't find a simple way of accomplishing the following.

Value 1 = 0.00531
Value 2 = 0.051959
Value 3 = 0.0067123

I want to increment each value by one unit in its last decimal place. However, the result must keep exactly the same number of decimal places as the original, and that number of decimals varies from value to value, hence my trouble.

Value 1 should be: 0.00532
Value 2 should be: 0.051960
Value 3 should be: 0.0067124

Does anyone know of a simple way of accomplishing the above in a function that can still handle any number of decimals?

Thanks.

Have you looked at the standard module decimal?

It sidesteps binary floating-point behaviour.

Just to illustrate what can be done.

import decimal

my_number = '0.00531'
mnd = decimal.Decimal(my_number)
print(mnd)
mnt = mnd.as_tuple()  # DecimalTuple(sign, digits, exponent)
print(mnt)
# add 1 to the last digit of the coefficient
mnt_digit_new = mnt.digits[:-1] + (mnt.digits[-1] + 1,)
dec_incr = decimal.DecimalTuple(mnt.sign, mnt_digit_new, mnt.exponent)
print(dec_incr)
incremented = decimal.Decimal(dec_incr)
print(incremented)

prints

0.00531
DecimalTuple(sign=0, digits=(5, 3, 1), exponent=-5)
DecimalTuple(sign=0, digits=(5, 3, 2), exponent=-5)
0.00532

Or a full version (after an edit it also handles carrying, so it works on '0.199' as well)...

from decimal import Decimal, getcontext

def add_one_at_last_digit(input_string):
    dec = Decimal(input_string)
    # set the precision to the number of significant digits, so that
    # next_plus() steps by exactly one unit in the last place
    getcontext().prec = len(dec.as_tuple().digits)
    return dec.next_plus()

for i in ('0.00531', '0.051959', '0.0067123', '1', '0.05199'):
    print(add_one_at_last_digit(i))

that prints

0.00532
0.051960
0.0067124
2
0.05200
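The reason this works: next_plus() returns the smallest representable number above the operand *under the current context precision*, so matching prec to the number of significant digits makes the step exactly one unit in the last place. A minimal illustration (using localcontext so the precision change stays local):

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 3  # '531' has 3 significant digits
    print(Decimal('0.00531').next_plus())  # 0.00532

with localcontext() as ctx:
    ctx.prec = 5  # more precision -> a smaller step
    print(Decimal('0.00531').next_plus())  # 0.0053101
```

Note that the answer above mutates the global context via getcontext(); wrapping the change in localcontext() avoids surprising later Decimal operations.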

As the other commenters have noted: you should not operate on floats, because a given number such as 0.1234 is converted into a binary internal representation, and you cannot process it further in the way you want. This is deliberately vaguely formulated; floating point is a subject in itself. This article explains the topic very well and is a good primer.
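To see the problem concretely, a quick demonstration of why floats cannot carry "exactly five decimal places":

```python
from decimal import Decimal

# Most decimal fractions have no exact binary representation; the
# error surfaces as soon as you compare or print enough digits.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Decimal(float) exposes the exact binary value that was stored:
# a long run of digits, not exactly 0.1.
print(Decimal(0.1))
```

This is why both answers insist on constructing Decimal from a *string*: Decimal('0.1') is exactly 0.1, while Decimal(0.1) inherits the float's error.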

That said, what you could do instead is keep the input as strings (i.e. do not convert it to float when reading the input). Then you could do this:

from decimal import Decimal

def add_one(v):
    # the exponent is negative: its magnitude is the number of
    # digits after the decimal point
    after_comma = Decimal(v).as_tuple()[-1] * -1
    add = Decimal(1) / Decimal(10 ** after_comma)
    return Decimal(v) + add

if __name__ == '__main__':
    print(add_one("0.00531"))
    print(add_one("0.051959"))
    print(add_one("0.0067123"))
    print(add_one("1"))

This prints

0.00532
0.051960
0.0067124
2
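The same step can be computed more directly with Decimal.scaleb(), which shifts 1 down by the operand's exponent instead of dividing by a power of ten. A sketch of this variant (the name add_one_scaleb is mine, not from the answer):

```python
from decimal import Decimal

def add_one_scaleb(v):
    d = Decimal(v)
    # one unit in the last place: 1 * 10**exponent
    step = Decimal(1).scaleb(d.as_tuple().exponent)
    return d + step

print(add_one_scaleb('0.051959'))  # 0.051960
print(add_one_scaleb('0.199'))     # 0.200
print(add_one_scaleb('1'))         # 2
```

Because the step already has the operand's exponent, the sum keeps the original number of decimal places, including the carry case '0.199' -> '0.200'.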

Update:

If you need to operate on floats, you can try a fuzzy approach to arrive at a close decimal representation. decimal offers a normalize() method that lets you reduce the precision of the decimal representation so that it matches the original number:

from decimal import Decimal, Context

def add_one_float(v):
    # round to 16 significant digits, then strip trailing zeros
    v_normalized = Decimal(v).normalize(Context(prec=16))
    after_comma = v_normalized.as_tuple()[-1] * -1
    add = Decimal(1) / Decimal(10 ** after_comma)
    return v_normalized + add

But please note that the precision of 16 is purely experimental; you need to play with it to see whether it yields the desired results. If you need guaranteed-correct results, you cannot take this path.
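What normalize() buys you here can be shown in isolation: rounding the exact binary value of a float to 16 significant digits and stripping trailing zeros recovers the short decimal the user typed (for this input; other inputs may need a different prec):

```python
from decimal import Decimal, Context

raw = Decimal(0.00531)  # the exact stored binary value, many digits long
clean = Decimal(0.00531).normalize(Context(prec=16))

print(raw)    # a long expansion, not exactly 0.00531
print(clean)  # 0.00531
```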
