
Peer to Peer Audio Calling on Android: Voice breaks and lag (delay in receiving packets) increases

I am trying to build peer-to-peer audio calling on Android. I use an Android phone and a tablet to communicate. After receiving roughly 40 packets, the phone almost stops receiving; then it suddenly receives a burst of packets and plays them, and so on, with the waiting time growing each cycle. Similarly, the tablet receives and plays packets at first, but the delay keeps increasing, and after a while the voice starts breaking up as if packets were being lost. Any idea what is causing this?

Here is the application code. I simply run it on both devices, filling in the sender's and receiver's IP addresses in the RecordAudio class.

public class AudioRPActivity extends Activity implements OnClickListener {

    DatagramSocket socketS,socketR;
    DatagramPacket recvP,sendP;
    RecordAudio rt;
    PlayAudio pt;

    Button sr,stop,sp;
    TextView tv,tv1;
    File rf;

    boolean isRecording = false;
    boolean isPlaying = false;

    int frequency = 44100;
    int channelConfiguration = AudioFormat.CHANNEL_CONFIGURATION_MONO;
    int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;

    @Override
    public void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        tv = (TextView)findViewById(R.id.text1);
        tv1 = (TextView)findViewById(R.id.text2);

        sr = (Button)findViewById(R.id.sr);
        sp = (Button)findViewById(R.id.sp);
        stop = (Button)findViewById(R.id.stop);

        sr.setOnClickListener(this);
        sp.setOnClickListener(this);
        stop.setOnClickListener(this);

        stop.setEnabled(false);

        try
        {
            socketS = new DatagramSocket();
            socketR = new DatagramSocket(6000);
        }
        catch(SocketException se)
        {
            tv.setText(se.toString());
            finish();
        }
    }

    public void onClick(View v) {

        if(v == sr)
            record();
        else if(v == sp)
            play();
        else if(v == stop)
            stopPlaying();
    }

    public void play()
    {
        stop.setEnabled(true);
        sp.setEnabled(false);
        pt = new PlayAudio();
        pt.execute();
    }

    public void stopPlaying()
    {
        isRecording=false;
        isPlaying = false;
        stop.setEnabled(false);
    }

    public void record()
    {
        stop.setEnabled(true);
        sr.setEnabled(false);
        rt = new RecordAudio();
        rt.execute();
    }



    private class PlayAudio extends AsyncTask<Void,String,Void>
    {

        @Override
        protected Void doInBackground(Void... arg0)
        {
            isPlaying = true;
            int bufferSize = AudioTrack.getMinBufferSize(frequency, channelConfiguration, audioEncoding);

            byte[] audiodata = new byte[bufferSize];

            try
            {
                AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,frequency,channelConfiguration,
                                                        audioEncoding,4*bufferSize,AudioTrack.MODE_STREAM);
                audioTrack.setPlaybackRate(frequency);
                audioTrack.play();

                while(isPlaying)
                {
                    recvP=new DatagramPacket(audiodata,audiodata.length);
                    socketR.receive(recvP);
                    audioTrack.write(recvP.getData(), 0, recvP.getLength());
                }
                audioTrack.stop();
                audioTrack.release();
            }
            catch(Throwable t)
            {
                Log.e("Audio Track","Playback Failed");
            }
            return null;
        }
        protected void onProgressUpdate(String... progress)
        {
            tv1.setText(progress[0].toString());
        }

        protected void onPostExecute(Void result)
        {
            sr.setEnabled(true);
            sp.setEnabled(true);
        }

    }

    private class RecordAudio extends AsyncTask<Void,String,Void>
    {

        @Override
        protected Void doInBackground(Void... arg0)
        {
            isRecording = true;

            try
            {
                // The record buffer size should come from AudioRecord, not AudioTrack
                int bufferSize = AudioRecord.getMinBufferSize(frequency, channelConfiguration, audioEncoding);

                AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,frequency,channelConfiguration
                                                            ,audioEncoding,4*bufferSize);   
                byte[] buffer = new byte[bufferSize];
                audioRecord.startRecording();
                int r=0;
                while(isRecording)
                {
                    int brr = audioRecord.read(buffer,0,bufferSize);

                    sendP=new DatagramPacket(buffer,brr,InetAddress.getByName("sender's/receiver's ip"),6000);
                    socketS.send(sendP);
                    publishProgress(String.valueOf(r));

                    r++;
                }

                audioRecord.stop();
                audioRecord.release();

            }
            catch(Throwable t)
            {
                Log.e("AudioRecord","Recording Failed....");
            }


            return null;
        }

        protected void onProgressUpdate(String... progress)
        {
            tv.setText(progress[0].toString());
        }

        protected void onPostExecute(Void result)
        {
            sr.setEnabled(true);
            sp.setEnabled(true);
        }
    }
}

I have had trouble sending voice over the network at anything above 8000 Hz; 44100 sounded terrible. That may be what is happening in your case.
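To see why the sample rate matters so much, a quick back-of-the-envelope calculation (plain Java; 16-bit mono PCM, matching the question's format) shows the raw network load at each rate:

```java
public class PcmBandwidth {
    // Bytes per second of raw PCM: sampleRate * channels * bytesPerSample
    static int bytesPerSecond(int sampleRate, int channels, int bytesPerSample) {
        return sampleRate * channels * bytesPerSample;
    }

    public static void main(String[] args) {
        // 44100 Hz, mono, 16-bit: every second of audio is 88200 bytes of UDP payload
        System.out.println(bytesPerSecond(44100, 1, 2)); // 88200
        // 8000 Hz, mono, 16-bit: 16000 bytes per second, over 5x less network load
        System.out.println(bytesPerSecond(8000, 1, 2));  // 16000
    }
}
```

If the link (or the receiver) cannot sustain the higher rate, packets queue up and the playback delay grows exactly as described in the question.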

Another difficulty is that with UDP there is no guarantee about the order in which packets arrive. I have seen an implementation that reassembled them in the correct order, but I can't find it right now.
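One common way to do that reassembly (this is a sketch with illustrative names, not the implementation mentioned above): the sender prepends a 4-byte sequence number to each audio chunk, and the receiver holds a few packets in a small jitter buffer keyed by that number, releasing the lowest sequence number for playback:

```java
import java.nio.ByteBuffer;
import java.util.TreeMap;

public class SeqReorder {
    // Sender side: wrap an audio chunk with a 4-byte sequence-number header.
    static byte[] wrap(int seq, byte[] audio, int len) {
        ByteBuffer b = ByteBuffer.allocate(4 + len);
        b.putInt(seq).put(audio, 0, len);
        return b.array();
    }

    // Receiver side: a tiny jitter buffer. Packets are inserted as they
    // arrive; once DEPTH packets are buffered, the chunk with the lowest
    // sequence number is released for playback.
    static final int DEPTH = 3;
    final TreeMap<Integer, byte[]> buf = new TreeMap<>();

    byte[] offer(byte[] packet, int len) {
        ByteBuffer b = ByteBuffer.wrap(packet, 0, len);
        int seq = b.getInt();
        byte[] audio = new byte[len - 4];
        b.get(audio);
        buf.put(seq, audio);
        // Return null until the buffer is deep enough to absorb reordering.
        return (buf.size() >= DEPTH) ? buf.pollFirstEntry().getValue() : null;
    }
}
```

The buffer depth trades latency for tolerance to reordering; real VoIP stacks use RTP, which standardizes exactly this kind of sequence-numbered header.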
