
What is the maximum (and minimum) 16-bit double value?

I am working on audio data manipulation (from/to 16-bit WAV files), representing samples as double (64-bit) values.

Since I do a lot of amplitude-domain convolution, my resulting samples often have (positive or negative) values that go above the maximum that can be represented in 16 bits, and as a result they get truncated.

So I need to normalize my data before writing it to a WAV file.

But it isn't clear to me what the maximum (and minimum) double values are that can be represented in 16 bits.

Note: here I refer to the minimum value as the largest negative double number that can be represented in 16 bits.

Edit: by "16-bit double" I mean data read from a 16-bit WAV file and stored in my code as a double value. After amplitude convolution, this data can become greater than 1 or less than -1.


It's unclear what you mean by "16-bit double".

There is a numeric type double in C++. It is a floating point type. The C++ language doesn't define its maximum or minimum representable values (although it does define a minimum range, which implementations may exceed), but it is possible to inspect those limits using std::numeric_limits.

However, on most systems, double is a 64 bit type, namely the "double precision" floating point type as specified in IEEE-754 standard.


A 16 bit type can represent at most 2^16 different values.

If used to represent an unsigned integer, the range will be [0, 2^16).

If used to represent a signed integer, the range will depend on how the sign is represented. In the most common, 2's complement representation, the range will be [-2^15, 2^15).

WAV files

In the Microsoft WAVE format, the 16 bit samples are 2's complement signed integers. See the previous paragraph for their value range.

The simple answer is that your denominator (for normalizing 16-bit data) is 2^15 assuming signed PCM.

Dividing all incoming 16-bit data by 32767 is my solution for normalizing. A case could be made for 32768, since the data ranges from -32768 to 32767, but I've always used 32767.
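Putting the pieces together, here is one way the asker's write-out step might look: a peak-normalization sketch that rescales samples back into [-1, 1] when convolution has pushed them outside it, then converts to 16-bit PCM using the 32767 factor from this answer (the function name and clamping policy are illustrative, not from the original post):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Peak-normalize doubles into [-1, 1] and convert to signed 16-bit PCM.
std::vector<std::int16_t> to_pcm16(const std::vector<double>& samples) {
    // Find the absolute peak of the signal.
    double peak = 0.0;
    for (double s : samples) peak = std::max(peak, std::abs(s));
    // Only shrink when the signal exceeds full scale; never amplify.
    const double scale = (peak > 1.0) ? 1.0 / peak : 1.0;

    std::vector<std::int16_t> out;
    out.reserve(samples.size());
    for (double s : samples) {
        double v = s * scale * 32767.0;
        // Clamp defensively to the signed 16-bit range before converting.
        v = std::min(32767.0, std::max(-32768.0, v));
        out.push_back(static_cast<std::int16_t>(std::lrint(v)));
    }
    return out;
}
```

With a peak of 2.0 in the input, every sample is halved before scaling, so nothing truncates.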
