How to extract H264 frames using live555
There's simply no full example anywhere. In the live555 source tree there is the program testRTSPClient.cpp, which connects to an RTSP server and receives raw RTP packets but does nothing with them. It receives them through the DummySink class.
There is an example of how to use testRTSPClient.cpp to receive NAL units from H264, but live555 has custom sink classes specifically for each codec, so it seems a lot better to use them. Example: H264or5VideoRTPSink.cpp.
So if, in testRTSPClient.cpp, I substitute the instance of DummySink with an instance of a subclass of H264or5VideoRTPSink and make this subclass receive the frames, I think it might work.
If I just follow the implementation of DummySink, I need to write something like this:
class MyH264VideoRTPSink: public H264VideoRTPSink {
public:
  static MyH264VideoRTPSink* createNew(UsageEnvironment& env,
      MediaSubsession& subsession,  // identifies the kind of data that's being received
      char const* streamId = NULL); // identifies the stream itself (optional)

private:
  MyH264VideoRTPSink(UsageEnvironment& env, MediaSubsession& subsession, char const* streamId);
    // called only by "createNew()"
  virtual ~MyH264VideoRTPSink();

  static void afterGettingFrame(void* clientData, unsigned frameSize,
                                unsigned numTruncatedBytes,
                                struct timeval presentationTime,
                                unsigned durationInMicroseconds);
  void afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
                         struct timeval presentationTime, unsigned durationInMicroseconds);

  // redefined virtual functions:
  virtual Boolean continuePlaying();

  u_int8_t* fReceiveBuffer;
  MediaSubsession& fSubsession;
  char* fStreamId;
};
If we look at DummySink, it suggests that afterGettingFrame is the function that receives frames. But where is the frame received? How can I access it?
void DummySink::afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
                                  struct timeval presentationTime, unsigned /*durationInMicroseconds*/) {
  // We've just received a frame of data. (Optionally) print out information about it:
  // ...
}
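In DummySink the frame lands in the fReceiveBuffer member: continuePlaying() passes that buffer to fSource->getNextFrame(), so by the time afterGettingFrame runs, fReceiveBuffer holds frameSize bytes of one NAL unit. One practical detail when writing that buffer to a .h264 file: live555 delivers NAL units without Annex-B start codes, so each unit needs a 0x00000001 prefix to be playable. A minimal standalone helper (the function name is mine, not part of live555):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <ostream>
#include <sstream>
#include <string>

// Prepend the 4-byte Annex-B start code (00 00 00 01) to one NAL unit and
// write it to `out`. Dumping the raw buffer alone would not produce a
// playable .h264 elementary stream.
void writeNalAnnexB(std::ostream& out, const uint8_t* nal, size_t size) {
    static const uint8_t startCode[4] = {0x00, 0x00, 0x00, 0x01};
    out.write(reinterpret_cast<const char*>(startCode), sizeof(startCode));
    out.write(reinterpret_cast<const char*>(nal), static_cast<std::streamsize>(size));
}
```

Inside a DummySink-style afterGettingFrame this would be called as `writeNalAnnexB(file, fReceiveBuffer, frameSize)`, where fReceiveBuffer and frameSize are the names used in testRTSPClient.cpp.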
UPDATE:
I created my own H264 sink class: https://github.com/lucaszanella/jscam/blob/f6b38eea2934519bcccd76c8d3aee7f58793da00/src/jscam/android/app/src/main/cpp/MyH264VideoRTPSink.cpp, but its createNew is different from the one in DummySink:
createNew(UsageEnvironment& env, Groupsock* RTPgs, unsigned char rtpPayloadFormat);
There's simply no mention of what RTPgs is meant to be, nor rtpPayloadFormat. I don't even know if I'm on the right track...
The first confusion is between Source and Sink. The FAQ briefly describes the workflow:

'source1' -> 'source2' (a filter) -> 'source3' (a filter) -> 'sink'
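As a rough mental model of that chain (this is not live555 code; every class name below is invented for illustration): the sink drives the pipeline by asking its immediate source for a frame, and each filter, which is itself a source, pulls from whatever sits upstream of it:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Toy model of live555's pull-based dataflow.
struct Source {
    virtual ~Source() = default;
    // Deliver the next frame via `cb`; return false when the stream ends.
    virtual bool getNextFrame(std::function<void(std::string)> cb) = 0;
};

struct VectorSource : Source {  // 'source1': produces canned frames
    std::vector<std::string> frames;
    size_t pos = 0;
    explicit VectorSource(std::vector<std::string> f) : frames(std::move(f)) {}
    bool getNextFrame(std::function<void(std::string)> cb) override {
        if (pos >= frames.size()) return false;
        cb(frames[pos++]);
        return true;
    }
};

struct TagFilter : Source {  // 'source2'/'source3': a filter wraps another source
    Source& upstream;
    std::string tag;
    TagFilter(Source& up, std::string t) : upstream(up), tag(std::move(t)) {}
    bool getNextFrame(std::function<void(std::string)> cb) override {
        // Pull from upstream, transform, then hand the result downstream.
        return upstream.getNextFrame([&](std::string f) { cb(tag + f); });
    }
};

struct Sink {  // 'sink': consumes frames by repeatedly asking its source
    std::vector<std::string> received;
    void startPlaying(Source& src) {
        while (src.getNextFrame([&](std::string f) { received.push_back(f); })) {}
    }
};
```

The point of the model: the sink never knows how long the chain is; it only talks to the last source handed to it, which is exactly why the same DummySink works for any codec.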
The class H264VideoRTPSink is made to publish data through RTP, not to consume data.
In the case of the RTSP client sample testRTSPClient.cpp, the source, which depends on the codec, is created while processing the DESCRIBE answer, by calling MediaSession::createNew.
The sink does not depend on the codec. The startPlaying method on the MediaSink registers the callback afterGettingFrame to be called when data is received from the source. Then, when this callback is executed, you should call continuePlaying to register it again for the next incoming data.
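That register/consume/re-register cycle can be sketched with a standalone toy (again, not live555 code, just the shape of the pattern; each getNextFrame call arms the callback for exactly one frame):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Toy model of the MediaSink pattern: startPlaying arms afterGettingFrame
// for ONE frame; the callback must call continuePlaying to re-arm itself,
// otherwise delivery stops after the first frame.
struct ToySource {
    std::vector<std::string> frames;
    size_t pos = 0;
    // Deliver at most one frame to `sink` via its callback.
    template <class SinkT>
    void getNextFrame(SinkT& sink) {
        if (pos < frames.size()) sink.afterGettingFrame(frames[pos++]);
        // else: stream is over; live555 would invoke the source-closure handler
    }
};

struct ToySink {
    ToySource* fSource = nullptr;
    std::vector<std::string> received;

    void startPlaying(ToySource& src) {
        fSource = &src;
        continuePlaying();            // arm the first callback
    }
    bool continuePlaying() {
        if (fSource == nullptr) return false;
        fSource->getNextFrame(*this); // register for exactly one frame
        return true;
    }
    void afterGettingFrame(const std::string& frame) {
        received.push_back(frame);    // consume the frame...
        continuePlaying();            // ...then re-register for the next one
    }
};
```

Commenting out the final continuePlaying() call would leave `received` with a single frame, which mirrors the bug you get in live555 if the callback forgets to re-register.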
In DummySink::afterGettingFrame, the buffer contains the H264 elementary stream frames extracted from the RTP buffer.
In order to dump the H264 elementary stream frames you can have a look at h264bitstream.
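One caveat when dumping to a file: a decoder also needs the SPS and PPS NAL units, which an RTSP server usually sends out-of-band in the SDP, in the sprop-parameter-sets fmtp attribute, as comma-separated base64 strings (live555 exposes the raw string on MediaSubsession and also ships a parseSPropParameterSets helper). A standalone sketch of that decoding, with my own helper names:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Decode one base64 string (standard alphabet, '=' padding) into raw bytes.
std::vector<uint8_t> base64Decode(const std::string& in) {
    auto val = [](char c) -> int {
        if (c >= 'A' && c <= 'Z') return c - 'A';
        if (c >= 'a' && c <= 'z') return c - 'a' + 26;
        if (c >= '0' && c <= '9') return c - '0' + 52;
        if (c == '+') return 62;
        if (c == '/') return 63;
        return -1; // '=' padding or invalid character: skip it
    };
    std::vector<uint8_t> out;
    int buf = 0, bits = 0;
    for (char c : in) {
        int v = val(c);
        if (v < 0) continue;
        buf = (buf << 6) | v;
        bits += 6;
        if (bits >= 8) {
            bits -= 8;
            out.push_back(static_cast<uint8_t>((buf >> bits) & 0xFF));
        }
    }
    return out;
}

// Split a comma-separated sprop-parameter-sets value ("SPSb64,PPSb64")
// into decoded NAL units; these are typically the SPS and the PPS.
std::vector<std::vector<uint8_t>> parseSprop(const std::string& sprop) {
    std::vector<std::vector<uint8_t>> nals;
    size_t start = 0;
    while (start <= sprop.size()) {
        size_t comma = sprop.find(',', start);
        if (comma == std::string::npos) comma = sprop.size();
        nals.push_back(base64Decode(sprop.substr(start, comma - start)));
        start = comma + 1;
    }
    return nals;
}
```

Each decoded unit should be written to the output file with an Annex-B start code (00 00 00 01) before the first video frames, so the resulting .h264 file starts with SPS and PPS.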