How to store text-to-speech audio in an audio file (iOS)
I am trying to create a video on iOS with text-to-speech (like TikTok does). The only way I could think of is to merge a video and an audio track with AVFoundation, but it seems impossible to write the audio of a text-to-speech utterance into a .caf file.
This is what I tried:
public async Task amethod(string[] _text_and_position)
{
    string[] text_and_position = (string[])_text_and_position;
    double tts_starting_position = Convert.ToDouble(text_and_position[0]);
    string text = text_and_position[1];
    var synthesizer = new AVSpeechSynthesizer();
    var su = new AVSpeechUtterance(text)
    {
        Rate = 0.5f,
        Volume = 1.6f,
        PitchMultiplier = 1.4f,
        Voice = AVSpeechSynthesisVoice.FromLanguage("en-us")
    };
    synthesizer.SpeakUtterance(su);
    Action<AVAudioBuffer> buffer = new Action<AVAudioBuffer>(asss);
    try
    {
        synthesizer.WriteUtterance(su, buffer);
    }
    catch (Exception error) { }
}
public async void asss(AVAudioBuffer _buffer)
{
    try
    {
        var pcmBuffer = (AVAudioPcmBuffer)_buffer;
        if (pcmBuffer.FrameLength == 0)
        {
            // done
        }
        else
        {
            AVAudioFile output = null;
            // append buffer to file
            NSError error;
            if (output == null)
            {
                string filePath = Path.Combine(Path.GetTempPath(), "TTS/" + 1 + ".caf");
                NSUrl fileUrl = NSUrl.FromFilename(filePath);
                output = new AVAudioFile(fileUrl, pcmBuffer.Format.Settings, AVAudioCommonFormat.PCMInt16, false, out error);
            }
            output.WriteFromBuffer(pcmBuffer, out error);
        }
    }
    catch (Exception error)
    {
        new UIAlertView("Error", error.ToString(), null, "OK", null).Show();
    }
}
This is the same code in Swift:
let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "test 123")
utterance.voice = AVSpeechSynthesisVoice(language: "en")
var output: AVAudioFile?
synthesizer.write(utterance) { (buffer: AVAudioBuffer) in
    guard let pcmBuffer = buffer as? AVAudioPCMBuffer else {
        fatalError("unknown buffer type: \(buffer)")
    }
    if pcmBuffer.frameLength == 0 {
        // done
    } else {
        // append buffer to file
        if output == nil {
            output = try? AVAudioFile(
                forWriting: URL(fileURLWithPath: "test.caf"),
                settings: pcmBuffer.format.settings,
                commonFormat: .pcmFormatInt16,
                interleaved: false)
        }
        try? output?.write(from: pcmBuffer)
    }
}
The problem with this code is that synthesizer.WriteUtterance(su, buffer); always crashes. After reading other posts, I believe this is a bug that results in the callback method (buffer) never being called.
Do you know of any workaround for this bug, or any other way to achieve what I am trying to do?
Thanks for your time, have a great day.
The error simply shows: An AVSpeechUtterance shall not be enqueued twice.
So stop making it speak and write at the same time. I used your code and commented out synthesizer.SpeakUtterance(su); and the error is gone.
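For reference, a minimal write-only sketch in Swift under that fix: the utterance is passed only to write(_:toBufferCallback:), never to speak(_:), so it is not enqueued twice. The output path and utterance text here are placeholders, and errors are swallowed with try? for brevity:

```swift
import AVFoundation

let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "test 123")
utterance.voice = AVSpeechSynthesisVoice(language: "en-US")

// Placeholder destination; pick any writable .caf path.
let fileURL = FileManager.default.temporaryDirectory.appendingPathComponent("tts.caf")
var output: AVAudioFile?

// Note: no speak(_:) call here — only write(_:toBufferCallback:),
// so the utterance is enqueued exactly once.
synthesizer.write(utterance) { buffer in
    guard let pcmBuffer = buffer as? AVAudioPCMBuffer,
          pcmBuffer.frameLength > 0 else {
        return // a zero-length buffer signals the end of synthesis
    }
    if output == nil {
        // Create the file lazily so it uses the synthesizer's own PCM format.
        output = try? AVAudioFile(forWriting: fileURL,
                                  settings: pcmBuffer.format.settings,
                                  commonFormat: .pcmFormatInt16,
                                  interleaved: false)
    }
    try? output?.write(from: pcmBuffer)
}
```

The resulting .caf file can then be combined with the video track, e.g. via an AVMutableComposition, to get the TikTok-style voiced video described in the question.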