
C - timeval to uint64_t

Morning!

I'm trying to get a nanosecond timestamp (uint64_t) from a timeval struct.

    struct timeval t;
    // initialized so that ....
    printf("t sec: %ld, micro sec: %ld\n", t.tv_sec, t.tv_usec);
    // .... prints "t sec: 4, micro sec: 130728"

    long int sec = (long int) t.tv_sec;
    // just did that because I thought that using time_t in calculations may cause errors
    // printf("%ld\n", sec) -> 4, so this seems to work as expected

    uint64_t t_int = (sec * 1000000 + t.tv_usec) * 1000; // should be nanoseconds
    printf("t_int (uint64_t) %llu\n", t_int);
    // prints "t_int (uint64_t) 18446744073545312320"

My timevals should definitely fit into uint64_t. Also, the cast to long int from time_t works, so all I'm doing is a simple calculation with two long ints, putting the result into a uint64_t, so I'm not sure why I'm getting an obviously wrong value. Any ideas? Is this just a wrong way of formatting (with %llu)?

Either:

uint64_t t_int = ((uint64_t)t.tv_sec * 1000000 + t.tv_usec) * 1000;

or:

uint64_t t_int = (t.tv_sec * 1000000ULL + t.tv_usec) * 1000;

The latter skips the cast and instead performs the promotion by having the right-hand operand of the * already be a type that's at least 64-bit (instead of casting the left-hand side up to one). Either works; it's a matter of what style you prefer.

Do not go through a temp var of type long like you did. long may be a 32-bit type and will not fit times past January 2038. If you need a temp var, make it time_t or int64_t or similar. But it's not needed.

Also, you may want to consider using int64_t (and changing the ULL to LL) instead of uint64_t, so that you can represent times before 1970.

You are calculating with long int width, which is 32 bits on your platform. The multiplication overflows, which is undefined behavior for signed types; in practice the result wraps to a negative value, and converting that negative value to uint64_t produces the huge positive number you see. Calculate with uint64_t or unsigned long long precision instead. To be pedantic, cast all values:

uint64_t t_int = ((uint64_t)sec * 1000000ull + (uint64_t)t.tv_usec) * 1000ull; 

But because of the usual arithmetic conversions, casting just one value to uint64_t is enough:

uint64_t t_int = ((uint64_t)sec * 1000000 + t.tv_usec) * 1000;
