I'm debugging a "Speech to Text" project and would like to save, on the server side, the audio bytes sent over websockets to a WAV file.
The audio comes from the microphone, recorded by the web browser.
On the server side, each time I receive a new chunk of audio data via the websocket, I append it to a list. When I'm done receiving, I do this:
import wave

full_audio_bytes = b''.join(audio_bytes)  # audio_bytes is my list of received chunks
with wave.open("myaudiofile.wav", "wb") as audiofile:
    audiofile.setsampwidth(16)
    audiofile.setnchannels(1)
    audiofile.setframerate(44100)
    audiofile.writeframesraw(full_audio_bytes)  # I tried `writeframes` too
I get this error: wave.Error: # channels not specified
OK, I couldn't find it at first in the mass of logs, but there was another error before that one: wave.Error: bad sample width.
That's because the setsampwidth method expects the width in bytes, not bits. With audiofile.setsampwidth(2) (2 bytes = 16 bits), it is actually able to write my WAV file!
Edited code:
full_audio_bytes = b''.join(audio_bytes)  # audio_bytes is my list of received chunks
with wave.open("myaudiofile.wav", "wb") as audiofile:
    audiofile.setsampwidth(2)  # sample width in bytes: 2 bytes = 16 bits
    audiofile.setnchannels(1)
    audiofile.setframerate(44100)
    audiofile.writeframes(full_audio_bytes)
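In case it helps anyone else, here is a self-contained sketch of the same round trip. It uses generated silence in place of the websocket chunks (the file name `check.wav` and the chunk sizes are just placeholders), writes the file, then reads the header back to confirm the parameters took effect:

```python
import wave

# Stand-in for the chunks received over the websocket:
# ten chunks of 441 frames of 16-bit mono silence (~100 ms total at 44100 Hz).
audio_bytes = [b"\x00\x00" * 441 for _ in range(10)]
full_audio_bytes = b"".join(audio_bytes)

with wave.open("check.wav", "wb") as out:
    out.setsampwidth(2)      # bytes per sample, not bits: 2 -> 16-bit PCM
    out.setnchannels(1)      # mono
    out.setframerate(44100)  # Hz
    out.writeframes(full_audio_bytes)

# Read the header back to verify what was written.
with wave.open("check.wav", "rb") as back:
    print(back.getsampwidth(), back.getnchannels(),
          back.getframerate(), back.getnframes())
# prints: 2 1 44100 4410
```

Note that writeframes (unlike writeframesraw) also patches the frame count in the header when the file is seekable, which is why the read-back reports the correct number of frames.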