
Rust vs Go concurrent webserver, why is Rust slow here?

I am trying to do some benchmarks on the multi-threaded web server example from the Rust book, and for comparison I built something similar in Go and ran benchmarks using ApacheBench. Although it's a simple example, the difference is too big: the Go web server does the same thing 10x faster. Since I expected Rust to be faster, or at least on the same level, I tried multiple revisions using futures and smol (though my goal was to compare implementations using only the standard library), but the results were almost the same. Can anyone here suggest changes to the Rust implementation to make it faster without using a huge thread count?

Here is the code I used: https://github.com/deepu105/concurrency-benchmarks

The tokio-http version is the slowest; the other 3 Rust versions give almost the same results.

Here are the benchmarks:

Rust (8 threads; with 100 threads the numbers are closer to Go's):

❯ ab -c 100 -n 1000 http://localhost:8080/
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:        
Server Hostname:        localhost
Server Port:            8080

Document Path:          /
Document Length:        176 bytes

Concurrency Level:      100
Time taken for tests:   26.027 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      195000 bytes
HTML transferred:       176000 bytes
Requests per second:    38.42 [#/sec] (mean)
Time per request:       2602.703 [ms] (mean)
Time per request:       26.027 [ms] (mean, across all concurrent requests)
Transfer rate:          7.32 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   2.9      1      16
Processing:     4 2304 1082.5   2001    5996
Waiting:        0 2303 1082.7   2001    5996
Total:          4 2307 1082.1   2002    5997

Percentage of the requests served within a certain time (ms)
  50%   2002
  66%   2008
  75%   2018
  80%   3984
  90%   3997
  95%   4002
  98%   4005
  99%   5983
 100%   5997 (longest request)

Go:

ab -c 100 -n 1000 http://localhost:8080/
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:        
Server Hostname:        localhost
Server Port:            8080

Document Path:          /
Document Length:        174 bytes

Concurrency Level:      100
Time taken for tests:   2.102 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      291000 bytes
HTML transferred:       174000 bytes
Requests per second:    475.84 [#/sec] (mean)
Time per request:       210.156 [ms] (mean)
Time per request:       2.102 [ms] (mean, across all concurrent requests)
Transfer rate:          135.22 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   1.4      2       5
Processing:     0  203 599.8      3    2008
Waiting:        0  202 600.0      2    2008
Total:          0  205 599.8      5    2013

Percentage of the requests served within a certain time (ms)
  50%      5
  66%      7
  75%      8
  80%      8
  90%   2000
  95%   2003
  98%   2005
  99%   2010
 100%   2013 (longest request)

I only compared your "rustws" and Go versions. In Go you have unlimited goroutines (even though you limited them all to a single CPU core), whereas in rustws you create a thread pool with 8 threads.

Since your request handler sleeps for 2 seconds on every 10th request, you are limiting the rustws version to 80 / 2 = 40 requests per second, which is exactly what you see in the ab results. Go is not subject to this arbitrary bottleneck, so it shows you the maximum request-processing throughput on a single CPU core.

I was finally able to get comparable results in Rust using the async_std library:

❯ ab -c 100 -n 1000 http://localhost:8080/
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:        
Server Hostname:        localhost
Server Port:            8080

Document Path:          /
Document Length:        176 bytes

Concurrency Level:      100
Time taken for tests:   2.094 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      195000 bytes
HTML transferred:       176000 bytes
Requests per second:    477.47 [#/sec] (mean)
Time per request:       209.439 [ms] (mean)
Time per request:       2.094 [ms] (mean, across all concurrent requests)
Transfer rate:          90.92 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   1.7      2       7
Processing:     0  202 599.7      2    2002
Waiting:        0  201 600.1      1    2002
Total:          0  205 599.7      5    2007

Percentage of the requests served within a certain time (ms)
  50%      5
  66%      6
  75%      9
  80%      9
  90%   2000
  95%   2003
  98%   2004
  99%   2006
 100%   2007 (longest request)

Here is the implementation:

use async_std::net::TcpListener;
use async_std::net::TcpStream;
use async_std::prelude::*;
use async_std::task;
use std::fs;
use std::time::Duration;

#[async_std::main]
async fn main() {
    let mut count: i64 = 0;

    let listener = TcpListener::bind("127.0.0.1:8080").await.unwrap(); // set listen port

    loop {
        count += 1;
        let (stream, _) = listener.accept().await.unwrap();
        task::spawn(handle_connection(stream, count)); // spawn a new task to handle the connection
    }
}

async fn handle_connection(mut stream: TcpStream, count: i64) {
    // Read the first 1024 bytes of data from the stream
    let mut buffer = [0; 1024];
    stream.read(&mut buffer).await.unwrap();

    // add a 2 second delay to every 10th request
    if count % 10 == 0 {
        println!("Adding delay. Count: {}", count);
        task::sleep(Duration::from_secs(2)).await;
    }

    let contents = fs::read_to_string("hello.html").unwrap(); // read the HTML file

    let response = format!("HTTP/1.1 200 OK\r\n\r\n{}", contents);
    stream.write_all(response.as_bytes()).await.unwrap(); // write_all ensures the whole response is sent
    stream.flush().await.unwrap();
}
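For completeness, a minimal `Cargo.toml` sketch that should build this example (the crate name and version pin are assumptions; note that the `#[async_std::main]` macro requires async-std's `attributes` feature):

```toml
[package]
name = "rust-async-ws" # hypothetical crate name
version = "0.1.0"
edition = "2018"

[dependencies]
# "attributes" enables the #[async_std::main] proc macro used above
async-std = { version = "1", features = ["attributes"] }
```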

