MoviePy: importing audio from text-to-speech in memory
I'm trying to use text-to-speech from Azure in combination with MoviePy to create the audio stream for a video.
result = synthesizer.speak_ssml_async(xml_string).get()
stream = AudioDataStream(result)
The output of this process is:
<azure.cognitiveservices.speech.AudioDataStream at 0x2320cb87ac0>
However, MoviePy is not able to import this with the following command:
audioClip = AudioFileClip(stream)
This is giving me the error:
'AudioDataStream' object has no attribute 'endswith'
Do I need to convert the Azure stream to .wav? How do I do that? I need to do the entire process without writing .wav files locally (e.g. stream.save_to_wav_file), using only in-memory streams.
Can someone shed some light, please?
I wrote an HTTP-triggered Python function for you; just try the code below:
import os
import tempfile

import azure.functions as func
import azure.cognitiveservices.speech as speechsdk
from moviepy.editor import AudioFileClip

speech_key = "<speech service key>"
service_region = "<speech service region>"
temp_file_path = os.path.join(tempfile.gettempdir(), "result.wav")
text = "hello, this is a test"

def main(req: func.HttpRequest) -> func.HttpResponse:
    speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
    auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig()
    # audio_config=None keeps the synthesized audio in memory instead of
    # routing it to the default speaker.
    speech_synthesizer = speechsdk.SpeechSynthesizer(
        speech_config=speech_config,
        auto_detect_source_language_config=auto_detect_source_language_config,
        audio_config=None)
    result = speech_synthesizer.speak_text_async(text).get()
    if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
        stream = speechsdk.AudioDataStream(result)
        stream.save_to_wav_file(temp_file_path)
        myclip = AudioFileClip(temp_file_path)
        return func.HttpResponse(str(myclip.duration))
    return func.HttpResponse("Synthesis failed: " + str(result.reason), status_code=500)
The logic is simple: get the synthesized speech from the Speech service as an AudioDataStream, save it to a temp file, and use AudioFileClip to get its duration. Your original call failed because AudioFileClip expects a file path string (internally it calls .endswith on its argument), so it cannot accept an AudioDataStream object directly.
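If you really need to stay fully in memory, here is a possible sketch rather than a tested recipe: the SpeechSynthesisResult exposes the raw bytes as result.audio_data (by default a complete WAV), and the standard-library wave module can parse those bytes from a BytesIO without touching disk. The demo below uses a synthetic silent WAV built in memory in place of the Azure output, since I can't call the service here:

```python
import io
import struct
import wave

def wav_bytes_to_samples(wav_bytes):
    """Parse in-memory WAV bytes (e.g. result.audio_data from the Azure
    SDK -- an assumption about its format) into a list of floats in
    [-1, 1] plus the sample rate, without writing any file."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as wf:
        rate = wf.getframerate()
        width = wf.getsampwidth()
        raw = wf.readframes(wf.getnframes())
    if width == 2:  # 16-bit signed PCM
        ints = struct.unpack("<%dh" % (len(raw) // 2), raw)
        return [s / 32768.0 for s in ints], rate
    raise ValueError("unhandled sample width: %d" % width)

# Demo: build one second of silent 16 kHz mono WAV entirely in memory.
buf = io.BytesIO()
with wave.open(buf, "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(16000)
    wf.writeframes(b"\x00\x00" * 16000)

samples, rate = wav_bytes_to_samples(buf.getvalue())
print(len(samples), rate)
```

Once you have the samples, MoviePy's AudioArrayClip (from moviepy.audio.AudioClip) can wrap a numpy array of shape (n_samples, n_channels) with fps=rate, giving you a clip to composite without any temp file; combining that with the Azure output is untested on my side.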
If you still get some errors, you can check the error details in the function's logs.
Let me know if you have any further questions.