Rust futures / async-await strange behavior

Edit:

I swapped all of the future / async calls to instead just spawn a new thread, but the program still takes 15 seconds to run instead of the expected 1 second. What is the reason? total_send_time is the time taken to spawn a new thread plus the time spent sleeping before launching the next one. Note: I am trying to send requests uniformly at a fixed rate (see the pacing sketch after the code below).

Interval between calls: 0.001 s
Ran in 15391 ms, total time: 1.0015188 s

use std::time::Duration;
use std::ops::Sub;
use std::net::TcpStream;
use std::io::Read;

const NANOSECOND: u64 = 1_000_000_000;
const SECOND: u64 = 1;

const RPS: u64 = 1000;
const N: u64 = 1000;

fn send_request() {
    let mut stream = TcpStream::connect("127.0.0.1:8080").unwrap();
    let mut buffer = [0; 1024];
    stream.read(&mut buffer).unwrap();
}

fn main() {
    let duration: u64 = ((SECOND as f64 / RPS as f64) * NANOSECOND as f64) as u64;
    println!("Interval between calls: {} s", (SECOND as f64 / RPS as f64));
    let start = std::time::Instant::now();
    let mut total_send_time: u128 = 0;
    for i in 0..N {
        let start_in = std::time::Instant::now();
        std::thread::spawn(move || send_request());
        let time_to_sleep = ((duration as i128 - start_in.elapsed().as_nanos() as i128) as i128).abs();
        total_send_time += start_in.elapsed().as_nanos();
        if time_to_sleep > 0 {
            std::thread::sleep(Duration::from_nanos(time_to_sleep as u64));
            total_send_time += time_to_sleep as u128;
        }
    }
    println!("Ran in {} ms, total time: {} s", start.elapsed().as_millis(), total_send_time as f64 / NANOSECOND as f64);
}
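
A common way to pace requests uniformly, independent of how long each spawn happens to take, is to schedule every send against an absolute deadline computed from the start time, so per-iteration overhead cannot accumulate. A minimal std-only sketch of that pattern (the empty closure stands in for send_request()):

use std::thread;
use std::time::{Duration, Instant};

const RPS: u64 = 1000;
const N: u64 = 1000;

fn main() {
    let interval = Duration::from_nanos(1_000_000_000 / RPS);
    let start = Instant::now();
    for i in 0..N {
        // absolute deadline for the i-th request, measured from the start
        let deadline = interval * i as u32;
        let now = start.elapsed();
        if deadline > now {
            thread::sleep(deadline - now);
        }
        thread::spawn(|| {
            // send_request() would go here
        });
    }
    println!("Dispatched {} requests in {:?}", N, start.elapsed());
}

Note that std::thread::sleep only guarantees a minimum sleep time; the actual granularity depends on the OS scheduler.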

Original:

I am new to Rust and was reading up on futures and async/await, and built a simple TCP server with them. I then decided to write a quick benchmark by sending requests to the server at a constant rate, but I am running into some strange issues.

The code below should send a request every 0.001 seconds, and it does, except that the program reports strange run times. This is the output:

Interval between calls: 0.001 s
Ran in 15 s, total time: 1 s

Obviously getting the system time and calculating the time to sleep has some cost, but surely not 14 seconds. What have I done wrong?

use async_std::net::TcpStream;
use futures::AsyncReadExt;
use std::time::Duration;
use async_std::task::spawn;
use std::ops::Sub;

const RPS: u64 = 1000;
const N: u64 = 1000;

async fn send_request() {
    let mut stream = TcpStream::connect("127.0.0.1:8080").await.unwrap();
    let mut buffer = [0; 1024];
    stream.read(&mut buffer).await.unwrap();
}

#[async_std::main]
async fn main() {
    let duration: u64 = ((1.0 / RPS as f64) * 1_000_000_000.0) as u64;
    println!("Interval between calls: {} s", 1.0 / RPS as f64);
    let start = std::time::Instant::now();
    let mut total_send_time: u128 = 0;
    for _ in 0..N {
        let start_in = std::time::Instant::now();
        spawn(send_request());
        let time_to_sleep = ((duration as i128 - start_in.elapsed().as_nanos() as i128) as i128).abs();
        total_send_time += start_in.elapsed().as_nanos();
        if time_to_sleep > 0 {
            std::thread::sleep(Duration::from_nanos(time_to_sleep as u64));
            total_send_time += time_to_sleep as u128;
        }
    }
    println!("Ran in {} s, total time: {} s", start.elapsed().as_secs(), total_send_time / 1000000000)
}

You are not measuring the elapsed time correctly:

  1. total_send_time measures the duration of the spawn() call, but since the actual task is executed asynchronously, start_in.elapsed() does not tell you anything about how long the task actually takes (see the sketch after this list).

  2. The "ran in" time, as measured by start.elapsed(), is also not useful at all. Since you are using a blocking sleep operation, you are just measuring how much time your app has spent in std::thread::sleep().

  3. Last but not least, your time_to_sleep calculation is incorrect because of the issue mentioned in point 1; note also that .abs() turns a negative remaining time into a positive sleep instead of skipping it.
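
To address point 1: if you want to know how long each request actually takes, the clock has to start inside the spawned future. A minimal sketch (timed_request is an illustrative stand-in for the real request; it assumes the same async-std attributes feature your code already uses):

use std::time::{Duration, Instant};

// Start the clock inside the future, so the measured time covers the
// task itself rather than just the spawn() call.
async fn timed_request() -> Duration {
    let start = Instant::now();
    async_std::task::sleep(Duration::from_millis(1)).await; // simulated work
    start.elapsed()
}

#[async_std::main]
async fn main() {
    let handle = async_std::task::spawn(timed_request());
    // the JoinHandle resolves to the task's return value
    println!("Task took {:?}", handle.await);
}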

Edit

As I've already explained in my previous answer, your program takes 15 seconds to run because you do not calculate the time to sleep properly. There are also other mistakes, such as using a blocking sleep in an async function. Here is a corrected version:

use std::time::{Duration, Instant};

const TASKS: u64 = 1000;
const TASKS_PER_SECOND: u64 = 1000;

#[async_std::main]
async fn main() -> std::io::Result<()> {
    // 1_000_000 µs / TASKS_PER_SECOND = 1_000 µs between spawns
    let micros_per_task = Duration::from_micros(
        Duration::from_secs(1).as_micros() as u64 / TASKS_PER_SECOND
    );
    let mut spawn_overhead = Duration::default();

    let before_spawning = Instant::now();
    for _ in 0..TASKS {
        let task_start = Instant::now();
        async_std::task::spawn(task());
        let elapsed = task_start.elapsed();
        spawn_overhead += elapsed;

        if elapsed < micros_per_task {
            let sleep = micros_per_task - elapsed;
            // an async sleep yields to the executor instead of blocking the OS thread
            async_std::task::sleep(sleep).await;
        }
    }
    let elapsed_spawning = before_spawning.elapsed();
    println!("Took {}ms to spawn {} tasks", elapsed_spawning.as_millis(), TASKS);
    println!("Micros spent in spawn(): {}", spawn_overhead.as_micros());

    Ok(())
}

async fn task() {
    // stand-in for a real request: sleep asynchronously for one second
    async_std::task::sleep(Duration::from_millis(1000)).await;
}
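
Note that this corrected version only measures spawning; the tasks themselves are still running when the results are printed. If you also want to wait for them to finish, the JoinHandle values returned by spawn() can be collected and awaited, for example:

use std::time::{Duration, Instant};

#[async_std::main]
async fn main() {
    let start = Instant::now();
    // collect the JoinHandles so the tasks can be awaited after spawning
    let handles: Vec<_> = (0..1000)
        .map(|_| {
            async_std::task::spawn(async {
                async_std::task::sleep(Duration::from_millis(1000)).await;
            })
        })
        .collect();
    for handle in handles {
        handle.await; // a JoinHandle is itself a Future in async-std
    }
    println!("All tasks finished after {:?}", start.elapsed());
}

Since all 1000 tasks sleep concurrently, this should finish in roughly one second rather than 1000 seconds.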
