I am trying to write bits to a Uint8Array buffer in JavaScript. I want to do this (pseudocode):
// write(buffer, offset, value, size)
write(buffer, 0, 3, 2) // 3 == 11 in bits
write(buffer, 2, 0, 2) // 0 == 0 in bits
I would then expect the following to be true:
buffer[0].toString(2) === '1100'
However, I am getting:
buffer[0].toString(2) === '11'
This suggests to me that the zero case isn't being handled properly. (I wrote a zero that I wanted to take up 2 bits of space.)
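(One thing I checked along the way: toString(2) drops leading zeros, so padding the string makes the full byte visible:)

```javascript
// toString(2) never pads, so zero bits above the highest set bit
// are invisible unless you pad the string yourself.
const byte = 0b0011
console.log( byte.toString( 2 ) )                      // '11'
console.log( byte.toString( 2 ).padStart( 8, '0' ) )   // '00000011'
```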
The code I have to write bits to the buffer is this:
const write = function(buffer, offset, value, num) {
  let i = 0
  console.log('value:', value.toString(2))
  while (i < num) {
    const remaining = num - i                 // bits still to write
    const block_offset = offset & 7           // bit position within the current byte
    const byteOffset = offset >> 3            // index of the current byte
    const finished = Math.min(remaining, 8 - block_offset)  // bits written this pass
    const first = 0xFF << finished
    const mask = ~first                       // low `finished` bits set
    const writeBits = value & mask            // bits to write this pass
    value >>= finished
    const second = mask << block_offset
    const destMask = ~second                  // clears the destination bits
    const byte = buffer[byteOffset]
    buffer[byteOffset] = (byte & destMask) | (writeBits << block_offset)
    console.log(`---
offset: ${offset.toString(2)}
num: ${num.toString(2)}
remaining: ${remaining.toString(2)}
block_offset: ${block_offset.toString(2)}
byteOffset: ${byteOffset.toString(2)}
finished: ${finished.toString(2)}
first: ${first.toString(2)}
mask: ${mask.toString(2)}
writeBits: ${writeBits.toString(2)}
value: ${value.toString(2)}
second: ${second.toString(2)}
destMask: ${destMask.toString(2)}
byte: ${byte.toString(2)}
buffer[byteOffset]: ${buffer[byteOffset].toString(2)}`)
    offset += finished
    i += finished
  }
}
I am calling it like this:
let buffer = new Uint8Array(4096)
write(buffer, 0, 3, 2)
write(buffer, 2, 0, 2)
What it is logging is this:
value: 11
---
offset: 0
num: 10
remaining: 10
block_offset: 0
byteOffset: 0
finished: 10
first: 1111111100
mask: -1111111101
writeBits: 11
value: 0
second: -1111111101
destMask: 1111111100
byte: 0
buffer[byteOffset]: 11
value: 0
---
offset: 10
num: 10
remaining: 10
block_offset: 10
byteOffset: 0
finished: 10
first: 1111111100
mask: -1111111101
writeBits: 0
value: 0
second: -111111110100
destMask: 111111110011
byte: 11
buffer[byteOffset]: 11
Some things that seem off to me:

- A negative value? I get that I negated the bits, but I would simply expect toString(2) to show the flipped bits. What did I do wrong here?
- first seems wrong in the second iteration. Shouldn't it be 1111110000 instead of 1111111100?
- I don't really understand what second and destMask are doing yet; I still need to spend time on that.

But do you have any idea how to get this working? That is, so it writes 1100 to the first byte after these two calls?
Lance, if I were in your shoes...
I realize this answer doesn't directly address your question, but rather than dealing with the nuances of bit twiddling with array buffers, I'd take the lazy man's approach and make use of BigInt. Although I'm sure it involves a performance hit, it offers the huge benefit of making the bit stream easier to visualize, in addition to avoiding the subtleties of laying bits across the byte boundaries of the ArrayBuffer. Only after constructing the bit stream in BigInt would I then convert it to an ArrayBuffer. (E.g., see https://coolaj86.com/articles/convert-js-bigints-to-typedarrays/ )
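As a rough sketch of that final conversion step (my own helper name, not from the linked article), one simple route goes through the hex representation:

```javascript
// Convert a BigInt bit stream to a Uint8Array, most significant byte
// first. Minimal sketch; real code may want to pad to a fixed length.
function bigIntToUint8Array( value ) {
  let hex = value.toString( 16 )
  if ( hex.length % 2 ) hex = '0' + hex        // pad to whole bytes
  const bytes = new Uint8Array( hex.length / 2 )
  for ( let i = 0; i < bytes.length; i++ ) {
    bytes[ i ] = parseInt( hex.slice( i * 2, i * 2 + 2 ), 16 )
  }
  return bytes
}

console.log( bigIntToUint8Array( 0b101111111n ) )   // yields bytes [ 1, 127 ]
```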
Below is a simple example of stringing together a bit stream using BigInt.
Note that this function lays down any new values in the lower-order bits, shifting everything else toward the higher-order bits. Therefore, the very first bit must be a '1' in order to establish the head end of the bit stream. That is, if you start by adding, say, '00000', the value will essentially be truncated. The other option is to track the length of the bit stream, but that gets messy compared to simply starting with a '1'...
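To see that truncation concretely, here's a minimal sketch with plain BigInt shifts:

```javascript
// Pushing '00000' onto an empty (zero) stream leaves it at zero --
// the leading zeros simply vanish.
let stream = 0n
stream = ( stream << 5n ) + 0n
console.log( stream.toString( 2 ) )   // '0' -- the five bits are gone

// Starting with a '1' head bit preserves everything pushed after it.
stream = 1n
stream = ( stream << 5n ) + 0n
console.log( stream.toString( 2 ) )   // '100000' -- all five bits visible
```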
Additionally, if you're packing in bits of varying sizes and later need to unpack them, you'll need some scheme to determine how to unpack the bits. E.g., precede every number with '00' if 4 bits, '01' if 8 bits, '10' if 16 bits, and '11' if 32 bits...
In any event, here's the basic concept...
let bigIntBuffer = { buffer: 0n }

function pushBitsOnBuffer( biBuffer, value, size ) {
  biBuffer.buffer <<= BigInt( size );
  biBuffer.buffer += ( BigInt( value ) & ( 2n ** BigInt( size ) - 1n ) );
}

// To demonstrate, let's push some bits onto the buffer, separating
// by a single '0' bit.
pushBitsOnBuffer( bigIntBuffer, 1, 1 );       // '1'
pushBitsOnBuffer( bigIntBuffer, 0, 1 );
pushBitsOnBuffer( bigIntBuffer, 3, 2 );       // '11'
pushBitsOnBuffer( bigIntBuffer, 0, 1 );
pushBitsOnBuffer( bigIntBuffer, 7, 3 );       // '111'
pushBitsOnBuffer( bigIntBuffer, 0, 1 );
pushBitsOnBuffer( bigIntBuffer, 255, 8 );     // '11111111'
pushBitsOnBuffer( bigIntBuffer, 0, 1 );
pushBitsOnBuffer( bigIntBuffer, 65535, 16 );  // '1111111111111111'
pushBitsOnBuffer( bigIntBuffer, 0, 1 );

for ( let i = 0; i < 5; i++ ) {
  pushBitsOnBuffer( bigIntBuffer, 7, 3 );     // '111'
  pushBitsOnBuffer( bigIntBuffer, 0, 1 );
}

console.log( bigIntBuffer.buffer.toString( 2 ) );
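And if you adopt the size-tag idea mentioned above, a pack/unpack pair might look roughly like this (my own sketch; the names and format are illustrative, not a standard):

```javascript
// Sketch of the size-tag scheme: each value is preceded by a 2-bit tag,
// where 00 = 4 bits, 01 = 8 bits, 10 = 16 bits, 11 = 32 bits.
// The stream starts at 1n so the leading '1' marks the head end.
const SIZES = [ 4n, 8n, 16n, 32n ]

function packTagged( stream, value, tag ) {
  const size = SIZES[ tag ]
  stream = ( stream << 2n ) + BigInt( tag )    // write the tag first
  stream = ( stream << size ) + ( BigInt( value ) & ( ( 1n << size ) - 1n ) )
  return stream
}

function unpackAll( stream ) {
  const bits = stream.toString( 2 ).slice( 1 ) // drop the '1' head marker
  const values = []
  let pos = 0
  while ( pos + 2 <= bits.length ) {
    const tag = parseInt( bits.slice( pos, pos + 2 ), 2 )
    const size = Number( SIZES[ tag ] )
    values.push( parseInt( bits.slice( pos + 2, pos + 2 + size ), 2 ) )
    pos += 2 + size
  }
  return values
}

let tagged = packTagged( 1n, 9, 0 )      // 4-bit value, tag '00'
tagged = packTagged( tagged, 200, 1 )    // 8-bit value, tag '01'
console.log( unpackAll( tagged ) )       // [ 9, 200 ]
```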
Hope this helps.