
Why is there latency in this C++ ALSA (Linux audio) program?

I am exploring sound generation using C++ in Ubuntu Linux. Here is my code:

#include <iostream>
#include <cmath>
#include <stdint.h>
#include <ncurses.h>

// to compile: make [file_name] && ./[file_name] | aplay

int main()
{
    initscr();

    cbreak();
    noecho();
    nodelay(stdscr, TRUE);

    scrollok(stdscr, TRUE);
    timeout(0);

    for (int t = 0;; t++)
    {
        // getch() is non-blocking here because of nodelay()/timeout(0)
        int ch = getch();
        if (ch == 'q')
        {
            break;
        }

        // one unsigned 8-bit sample per iteration, written to stdout for aplay
        uint8_t temp = t;
        std::cout << temp;
    }
}

When this code is run, I want it to generate sound until I press "q" on my keyboard, after which I want the program to quit. This works fine; however, there is a noticeable delay between pressing the key and the program quitting. This is not due to a delay in ncurses: when I run the program without std::cout<<temp; (i.e. no sound is generated), there is no latency.

Is there a way to amend this? If not, how are real-time responsive audio programs written?

Edits and suggestions to the question are welcome. I am a novice to ALSA, so I am not sure if any additional details are required to replicate the bug.

The latency in the above loop is most likely due to delays introduced by the ncurses getch function.

Typically for real-time audio you will want a real-time audio thread and a non-real-time user control thread. The user control thread can alter memory shared with the real-time audio thread, which forces the real-time audio loop to adjust its synthesis as required.
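
As a minimal sketch of that pattern (independent of gtkIOStream, and assuming the same write-bytes-to-stdout output as the question so it can still be piped into aplay), the control thread can flip a std::atomic flag that the audio loop checks on every iteration:

#include <atomic>
#include <cstdint>
#include <cstdio>
#include <thread>

// Shared state: written by the control thread, read by the audio thread.
std::atomic<bool> running{true};

// Audio loop: writes one unsigned 8-bit sample per iteration to stdout
// (pipe the program into aplay, as in the question). It never blocks on input.
void audioLoop()
{
    for (uint8_t t = 0; running.load(std::memory_order_relaxed); ++t)
        std::putchar(t);        // crude rising-ramp (sawtooth) signal
}

// Control loop: blocks on keyboard input and only touches the shared flag.
void controlLoop()
{
    int c;
    while ((c = std::getchar()) != 'q' && c != EOF) {}   // wait for 'q'
    running.store(false, std::memory_order_relaxed);
}

int main()
{
    std::thread audio(audioLoop);
    controlLoop();              // run the user-control loop on the main thread
    audio.join();
    return 0;
}

Note that this only removes the blocking dependency between user input and sample generation; any residual delay caused by stdout and aplay buffering is a separate issue.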

In this gtkIOStream example, a full duplex audio class is created. The process method in the class can have your synthesis computation compiled in. This will handle the playback of your sound using ALSA.
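
The exact gtkIOStream interface should be taken from its headers; purely to illustrate the callback shape being described, here is a small library-agnostic sketch (the class and member names below are made up for illustration, not the gtkIOStream API): a base class drives the audio loop and calls a virtual process() whenever it needs another block of samples, and the subclass fills that block with its synthesis.

#include <cmath>
#include <vector>

// Library-agnostic stand-in, NOT the gtkIOStream API: the base class owns the
// audio loop and asks the subclass for samples via a virtual process() method.
class AudioBase
{
public:
    virtual ~AudioBase() {}
    virtual int process(std::vector<float> &out) = 0;  // fill one block of samples
    void go(int blocks)
    {
        std::vector<float> buf(256);
        for (int b = 0; b < blocks; ++b) {
            if (process(buf) != 0)   // non-zero return stops the loop
                break;
            // a real full duplex class would hand buf to ALSA for playback here
        }
    }
};

// The synthesis lives in process(): here a 440 Hz sine, assuming a 48 kHz rate.
class SineSynth : public AudioBase
{
    double phase = 0.0;
public:
    int process(std::vector<float> &out) override
    {
        for (float &s : out) {
            s = 0.2f * std::sin(phase);
            phase += 2.0 * M_PI * 440.0 / 48000.0;
        }
        return 0;
    }
};

int main()
{
    SineSynth synth;
    synth.go(100);   // compute 100 blocks; a real audio class would play them
    return 0;
}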

To get user input, one possibility is to add a threaded method to the class by inheriting the FullDuplexTest class, like so:

class UIALSA : public FullDuplexTest, public ThreadedMethod {
    void *threadMain(void) {
        while (1) {
            // use getchar here to block and wait for user input
            // change the memory in FullDuplexTest to indicate a change in variables
        }
        return NULL;
    }
public:
    UIALSA(const char *devName, int latency) : FullDuplexTest(devName, latency), ThreadedMethod() {}
};

Then change all references to FullDuplexTest to UIALSA in the original test file (you will probably have to fix some compile time errors):

UIALSA fullDuplex(deviceName, latency);

Also you will need to call UIALSA::run() to make sure the UI thread is running and listening for user input. You can add the call before you call "go":

fullDuplex.run(); // start the UI thread
res=fullDuplex.go(); // start the full duplex read/write/process going.
