
nghttp2: Using server-sent events to be used by EventSource

I'm using nghttp2 to implement a REST server which should use HTTP/2 and server-sent events (to be consumed by an EventSource in the browser). However, based on the examples it is unclear to me how to implement SSE. Using res.push() as in asio-sv.cc doesn't seem to be the right approach.

What would be the right way to do it? I'd prefer to use nghttp2's C++ API, but the C API would do as well.

Yup, I did something like that back in 2018. The documentation was rather sparse :).

First of all, ignore response::push because that's HTTP/2 push -- something for proactively sending unsolicited objects to the client before it requests them. I know it sounds like what you need, but it is not -- the typical use case would be proactively sending a CSS file and some images along with the originally requested HTML page.
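For contrast, here's a minimal sketch of what that push-based flow looks like with the asio API, in the spirit of asio-sv.cc. The pushed path, the CSS body and the port are invented for illustration, and the push() signature is recalled from the libnghttp2_asio headers rather than copied, so check it against your version:

#include <boost/asio/io_service.hpp>
#include <nghttp2/asio_http2_server.h>

using namespace nghttp2::asio_http2;

int main()
{
    server::http2 srv;

    srv.handle("/", [](const server::request&, const server::response& res) {
        boost::system::error_code ec;
        // Proactively push a stylesheet the client has not asked for yet.
        auto* css = res.push(ec, "GET", "/style.css");
        if (!ec && css) {
            css->write_head(200, {{"content-type", {"text/css", false}}});
            css->end("h1 { color: green; }");
        }
        // Then answer the original request as usual.
        res.write_head(200, {{"content-type", {"text/html", false}}});
        res.end(R"(<html><head><link rel="stylesheet" href="/style.css"></head>
<body><h1>pushed</h1></body></html>)");
    });

    boost::system::error_code ec;
    if (srv.listen_and_serve(ec, "::", "10081")) {
        return 1;
    }
    return 0;
}

That pattern saves round trips for static assets; it is not a way to stream an open-ended series of events, which is what SSE needs.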

The key thing is that your end() callback must eventually return NGHTTP2_ERR_DEFERRED whenever you run out of data to send. When your application somehow obtains more data to be sent, call http::response::resume().

Here's some simple code. Build it as g++ -std=c++17 -Wall -O3 -ggdb clock.cpp -lssl -lcrypto -pthread -lnghttp2_asio -lspdlog -lfmt. Be careful, modern browsers don't do HTTP/2 over a plaintext socket, so you'll need to reverse-proxy it via something like nghttpx -f '*,8080;no-tls' -b '::1,10080;;proto=h2'.

#include <boost/asio/io_service.hpp>
#include <boost/lexical_cast.hpp>
#include <boost/signals2.hpp>
#include <algorithm> // std::copy_n, std::min
#include <atomic>
#include <chrono>
#include <list>
#include <mutex>
#include <nghttp2/asio_http2_server.h>
#include <string>
#define SPDLOG_FMT_EXTERNAL
#include <spdlog/spdlog.h>
#include <thread>

using namespace nghttp2::asio_http2;
using namespace std::literals;

using Signal = boost::signals2::signal<void(const std::string& message)>;

// One Client per connected EventSource: it buffers outgoing SSE messages and
// feeds them to the response's generator callback on demand.
class Client {
    const server::response& res;
    enum State {
        HasEvents,
        WaitingForEvents,
    };
    std::atomic<State> state;

    std::list<std::string> queue;
    mutable std::mutex mtx;
    boost::signals2::scoped_connection subscription;

    // Copy as many queued events as fit into nghttp2's buffer; a partially
    // sent event stays at the front of the queue.
    size_t send_chunk(uint8_t* destination, std::size_t len, uint32_t* data_flags [[maybe_unused]])
    {
        std::size_t written{0};
        std::lock_guard lock{mtx};
        if (state != HasEvents) throw std::logic_error{std::to_string(__LINE__)};
        while (!queue.empty()) {
            auto num = std::min(queue.front().size(), len - written);
            std::copy_n(queue.front().begin(), num, destination + written);
            written += num;
            if (num < queue.front().size()) {
                queue.front() = queue.front().substr(num);
                spdlog::debug("{} send_chunk: partial write", (void*)this);
                return written;
            }
            queue.pop_front();
            spdlog::debug("{} send_chunk: sent one event", (void*)this);
        }
        state = WaitingForEvents;
        return written;
    }

public:
    Client(const server::request& req, const server::response& res, Signal& signal)
    : res{res}
    , state{WaitingForEvents}
    , subscription{signal.connect([this](const auto& msg) {
        enqueue(msg);
    })}
    {
        spdlog::warn("{}: {} {} {}", (void*)this, boost::lexical_cast<std::string>(req.remote_endpoint()), req.method(), req.uri().raw_path);
        res.write_head(200, {{"content-type", {"text/event-stream", false}}});
    }

    void onClose(const uint32_t ec)
    {
        spdlog::error("{} onClose", (void*)this);
        subscription.disconnect();
    }

    ssize_t process(uint8_t* destination, std::size_t len, uint32_t* data_flags)
    {
        spdlog::trace("{} process", (void*)this);
        switch (state) {
        case HasEvents:
            return send_chunk(destination, len, data_flags);
        case WaitingForEvents:
            return NGHTTP2_ERR_DEFERRED;
        }
        __builtin_unreachable();
    }

    void enqueue(const std::string& what)
    {
        {
            std::lock_guard lock{mtx};
            queue.push_back("data: " + what + "\n\n");
        }
        state = HasEvents;
        res.resume();
    }
};

int main(int argc [[maybe_unused]], char** argv [[maybe_unused]])
{
    spdlog::set_level(spdlog::level::trace);

    Signal sig;
    std::thread timer{[&sig]() {
        for (int i = 0; /* forever */; ++i) {
            std::this_thread::sleep_for(std::chrono::milliseconds{666});
            spdlog::info("tick: {}", i);
            sig("ping #" + std::to_string(i));
        }
    }};

    server::http2 server;
    server.num_threads(4);

    server.handle("/events", [&sig](const server::request& req, const server::response& res) {
        auto client = std::make_shared<Client>(req, res, sig);

        res.on_close([client](const auto ec) {
            client->onClose(ec);
        });
        res.end([client](uint8_t* destination, std::size_t len, uint32_t* data_flags) {
            return client->process(destination, len, data_flags);
        });
    });

    server.handle("/", [](const auto& req, const auto& resp) {
        spdlog::warn("{} {} {}", boost::lexical_cast<std::string>(req.remote_endpoint()), req.method(), req.uri().raw_path);
        resp.write_head(200, {{"content-type", {"text/html", false}}});
        resp.end(R"(<html><head><title>nghttp2 event stream</title></head>
<body><h1>events</h1><ul id="x"></ul>
<script type="text/javascript">
const ev = new EventSource("/events");
ev.onmessage = function(event) {
  const li = document.createElement("li");
  li.textContent = event.data;
  document.getElementById("x").appendChild(li);
};
</script>
</body>
</html>)");
    });

    boost::system::error_code ec;
    if (server.listen_and_serve(ec, "::", "10080")) {
        return 1;
    }
    return 0;
}

I have a feeling that my queue handling is probably too complex. When testing via curl, I never seem to run out of buffer space. In other words, even if the client is not reading any data from the socket, the library keeps invoking send_chunk, asking for up to 16 kB of data at a time. Strange. I have no idea how it behaves when pushing data more heavily.

My "real code" used to have a third state, Closed , but I think that blocking events via on_close is enough here.我的“真实代码”曾经有第三个 state, Closed ,但我认为在这里通过on_close阻塞事件就足够了。 However, I think you never want to enter send_chunk if the client has already disconnected, but before the destructor gets called.但是,我认为如果客户端已经断开连接,但在调用析构函数之前,您永远不想输入send_chunk
