
Converting 32-bit number to 16 bits or less

On my mbed LPC1768 I have an ADC on a pin which, when polled, returns a 16-bit short value normalised to a floating-point value between 0 and 1. Documentation here.

Because it converts the value to a floating-point number, does that mean it's 32 bits? The number I get has six decimal places. Data Types here.

I'm running autocorrelation and I want to reduce the time the analysis takes. Is it correct that floating-point numbers are 32 bits long, and if so, is it correct that multiplying two 32-bit floating-point numbers takes a lot longer than multiplying two 16-bit short (non-decimal) values?

I am working with C to program the mbed.

Cheers.

I should be able to comment on this quite accurately. I used to do DSP processing work where we would "integerize" code, which effectively meant we'd take a signal/audio/video algorithm and replace all the floating-point logic with fixed-point arithmetic (i.e. Q_mn notation, etc.).

On most modern systems, you'll usually get better performance using integer arithmetic compared to floating-point arithmetic, at the expense of more complicated code you have to write.
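As a rough illustration of the fixed-point (Q_mn) approach mentioned above, here is a minimal sketch of a Q15 multiply in C. The name q15_mul is purely illustrative, not part of any library, and saturation of the one overflowing case (-1 × -1) is omitted for brevity.

```c
#include <stdint.h>

/* Multiply two Q15 fixed-point numbers (16-bit values with 15 fractional bits).
 * The 32-bit intermediate holds the full Q30 product before rescaling,
 * so the multiplication itself cannot overflow. */
static inline int16_t q15_mul(int16_t a, int16_t b)
{
    int32_t product = (int32_t)a * (int32_t)b;   /* Q15 * Q15 -> Q30 */
    return (int16_t)(product >> 15);             /* rescale back to Q15 */
}
```

A routine like this replaces a single floating-point multiply, which is why integerized code tends to be longer but faster on chips without an FPU.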

The chip you are using (Cortex-M3) doesn't have a dedicated hardware FPU: floating-point operations are emulated in software, so they are going to be expensive (take a lot of time).

In your case, you could just read the 16-bit value via read_u16(), shift the value right by 4 bits, and you're done. If you're working with audio data, you might consider looking into companding algorithms (a-law, u-law), which will give better subjective performance than simply chopping off the 4 LSBs to get a 12-bit number from a 16-bit number.
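A minimal sketch of that right shift, assuming the raw value came from mbed's AnalogIn::read_u16(). The re-centring around zero is just one common way to get a signed sample for integer maths, assuming the signal is biased at mid-scale; it is not part of the mbed API.

```c
#include <stdint.h>

/* raw is the value returned by mbed's AnalogIn::read_u16().
 * Dropping the 4 least-significant bits leaves a 12-bit sample (0..4095),
 * which is then re-centred around zero for signed integer processing. */
static inline int16_t adc_to_12bit(uint16_t raw)
{
    uint16_t sample = raw >> 4;      /* 16-bit -> 12-bit, 0..4095 */
    return (int16_t)sample - 2048;   /* roughly -2048..2047 */
}
```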

Yes, a float on that system is 32 bits, and is likely represented in IEEE 754 format. Multiplying a pair of 32-bit values versus a pair of 16-bit values may very well take the same amount of time, depending on the chip in use and the presence of an FPU and ALU. On your chip, multiplying two floats will be horrendously expensive in terms of time. Also, if you multiply two 32-bit integers, they could potentially overflow, so that is one potential reason to go with floating-point logic if you don't want to implement a fixed-point algorithm.
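Tying this back to the autocorrelation question: below is a minimal sketch of one (unnormalised) autocorrelation lag over 16-bit integer samples, assuming they are already stored as int16_t. Each 16-bit × 16-bit product fits comfortably in 32 bits, and the 64-bit accumulator sidesteps the overflow concern above. The function name autocorr_lag is just an illustrative choice.

```c
#include <stdint.h>
#include <stddef.h>

/* One lag of an unnormalised autocorrelation over n 16-bit samples.
 * Each product fits in 32 bits; the 64-bit accumulator keeps the
 * running sum from overflowing for any realistic n. */
int64_t autocorr_lag(const int16_t *x, size_t n, size_t lag)
{
    int64_t acc = 0;
    for (size_t i = 0; i + lag < n; i++) {
        acc += (int32_t)x[i] * (int32_t)x[i + lag];
    }
    return acc;
}
```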

If the processor has no dedicated hardware (a floating-point unit), it is correct to assume that multiplying two 32-bit floating-point numbers will take longer than multiplying two 16-bit short values.
