Implement a simple MPEG-TS muxer using ffmpeg-lib

I have an application that records raw audio data in LPCM stored in a buffer. I would like to encapsulate the data in a transport stream and send that transport stream through UDP to a stream segmenter (according to HTTP Live Streaming specifications) on another host.

FFmpeg provides a command-line tool that does this, but only with a file as input: `ffmpeg -re -i output.aac -acodec copy -f mpegts udp://127.0.0.1:5555`.

My first thought was to use the FFmpeg API, in particular the libavformat library. Does libavformat provide a muxer I could use to encapsulate my LPCM audio into a transport stream, or do I have to implement one from scratch?

I have found this source code, https://github.com/FFmpeg/FFmpeg/blob/master/libavformat/mpegts.c , but I am not sure whether it actually does what I'm looking for.

Thanks for your help,

Based on your comment that the audio does not necessarily need to stay LPCM inside the TS, you will need to:

  1. Decode your audio / read the frames
  2. Encode it as something suitable for carrying in a transport stream, e.g. MP3 or AAC (I believe this is the list of supported codecs: https://github.com/FFmpeg/FFmpeg/blob/master/libavformat/mpegts.h#L45-L64 )
  3. Package it in a TS suited to your network conditions (packet sizing, etc.)
  4. Send it via UDP
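Steps 2–4 on the libavformat side might be sketched roughly as below. This is an untested outline, not a complete program: the URL and audio parameters are placeholders for this question's setup, error handling is trimmed, the actual LPCM→AAC encoding (libavcodec) is elided, and it assumes FFmpeg 5.1+ for the `ch_layout` API.

```c
/* Sketch: open FFmpeg's built-in "mpegts" muxer over UDP and declare one
 * AAC audio stream. The encode loop that would produce the AAC packets
 * is left out; each encoded packet would go to av_interleaved_write_frame(). */
#include <libavformat/avformat.h>

int main(void) {
    AVFormatContext *oc = NULL;
    const char *url = "udp://127.0.0.1:5555";  /* placeholder segmenter address */

    /* Ask libavformat for its MPEG-TS muxer by name. */
    if (avformat_alloc_output_context2(&oc, NULL, "mpegts", url) < 0 || !oc)
        return 1;

    /* One audio stream; the TS muxer accepts AAC among other codecs. */
    AVStream *st = avformat_new_stream(oc, NULL);
    st->codecpar->codec_type  = AVMEDIA_TYPE_AUDIO;
    st->codecpar->codec_id    = AV_CODEC_ID_AAC;
    st->codecpar->sample_rate = 44100;                       /* match your LPCM */
    av_channel_layout_default(&st->codecpar->ch_layout, 2);  /* stereo */

    /* The mpegts muxer writes through an AVIOContext, so open the UDP
     * "file" ourselves before writing the header. */
    if (avio_open(&oc->pb, url, AVIO_FLAG_WRITE) < 0)
        return 1;
    if (avformat_write_header(oc, NULL) < 0)
        return 1;

    /* ... encode LPCM frames to AAC here and send each AVPacket with:
     *     av_interleaved_write_frame(oc, pkt);              */

    av_write_trailer(oc);
    avio_closep(&oc->pb);
    avformat_free_context(oc);
    return 0;
}
```

The example linked below shows the full version of this flow, including the encode loop.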

There is a reasonable example of all this here: https://github.com/rvs/ffmpeg/blob/master/libavformat/output-example.c

As mentioned in the prior answer from szatmary, you could also just pipe this to ffmpeg, which may be the simplest option.

You can use the TS muxer directly via libavformat. Alternatively, you can pipe the audio to ffmpeg using `-i -`.
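The piping approach might look like the following sketch. The input flags (`-f s16le -ar 44100 -ac 2`) are assumptions about the raw buffer's format; for illustration it reads a second of silence from `/dev/zero` and writes a local `out.ts`, but replacing `out.ts` with `udp://127.0.0.1:5555` would stream to the segmenter instead.

```shell
# 44100 Hz * 2 channels * 2 bytes/sample = 176400 bytes = 1 s of s16le PCM.
# ffmpeg reads the raw PCM from stdin (-i -), encodes to AAC, and muxes to TS.
head -c 176400 /dev/zero | \
  ffmpeg -loglevel error -f s16le -ar 44100 -ac 2 -i - \
         -c:a aac -f mpegts -y out.ts
```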
