
How can I synchronize a method call with ALSA playback?

I'm trying to write a program that will synchronize lights to playback of a basic WAV file. I've struggled through all the ALSA docs and the source for ffplay.c, and searched around on the internet, but it's difficult to figure out how to do what seems like a common and simple task.

Basically I want to do two things. The first is to read keypress events while the audio is playing and store the offsets in a file. The second is to take those cue files and load them later, this time on a different audio device such as a Raspberry Pi.

I'm struggling with how to account for latency, first when capturing the offset positions, and then again when playing back on a completely different hardware device.

I know snd_pcm_delay() is used by the ffmpeg suite to deal with some of this, but I'm struggling with even the basic technique. The playback mechanism isn't complicated: just a blocking write in a loop.

I'd post some code, but I don't have it with me at the moment, and it's just a mess of hacks that aren't working anyway.

So this turned out to be fairly easy in the end, though it wasn't easy to figure out. Using snd_pcm_delay() is the correct path.

The audio frame actually being played right now is the number of frames written minus snd_pcm_delay(). This gives you a fairly exact current playback position (the frame you should be hearing at this moment). Use it to calculate the positions of timestamps, and as the position index while the app waits for the current position to reach or pass the next cued event.

This is something GStreamer does in a much better documented and more extensible way, building a more sophisticated synchronization algorithm that lets the audio run on a separate thread while avoiding a lock every time the position clock is needed. I'd use GStreamer, if possible, to implement any similar application.

But if you don't need all of GStreamer's complexity and want to cut your dependencies, the basic method is the one described above.

