
Locust: save response in a file

I'm using the event hook "request_success" to store each response in a file.

import csv
from datetime import datetime

import locust.env
from locust import events

state_data = []  # shared buffer of per-request rows


class Print:  # pylint: disable=R0902
    """
    Record every response (useful when debugging a single locust)
    """

    def __init__(self, env: locust.env.Environment, include_length=False, include_time=False):
        self.env = env
        self.env.events.request_success.add_listener(self.request_success)

    def request_success(self, request_type, name, response_time, response_length, **_kwargs):
        users = self.env.runner.user_count
        data = [datetime.now(), request_type, name, response_time, users]
        state_data.append(data)


@events.init.add_listener
def locust_init_listener(environment, **kwargs):
    Print(env=environment)


@events.quitting.add_listener
def write_statistics(environment, **kwargs):
    with open("output/requests_stats_u150_c.csv", "a+") as f:
        csv_writer = csv.writer(f)
        for row in state_data:
            csv_writer.writerow(row)

However, with multiple workers, some of the lines overlap and I even lost some lines. Here's a sample of the corrupted output:

2021-03-18 17:08:28.019587,POST,login,262.42033099697437,50
2021-03-18 17:08:28.021776,POST,select_order,16.014199994970113,50
2021-03-18 17:08:28.028505,GET2021-03-18 17:08:28.030823,GET,home,3.924126998754218,50

The third entry got corrupted by a double write; I guess two workers tried to write at the same time.

Any idea how I can store all successful responses in a multi-worker Locust test?

It really depends on exactly what data you need. The workers already send all the request data to the master, and the master has a built-in ability to save stats to CSV. If you need more than that, I would still have the master do it: it's much easier to control writing to a file from one place than to coordinate multiple writers.
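(For reference, that built-in export is what you get from the --csv command line option, e.g. locust --csv=example, which makes the master write the aggregated stats files; the exact flag names can vary a bit across Locust versions.)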

You can use the report_to_master event on the workers to add whatever extra data you need to the payload reported to the master. Then, on the master, you can use the worker_report event to pull that data out of the worker payloads. I'd save it to a variable on the master and have another function periodically write the data to a file, so there's no write contention. You could start the writer from the init hook: spawn your own greenlet that writes the data and then sleeps for 2 seconds (that's roughly the interval workers report to the master at).
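In case it helps, here is a minimal sketch of that pattern. It assumes a distributed run and a Locust version that still provides the request_success event (as in the question's code); the buffer names, the 2-second flush interval, and the output path are placeholders for illustration:

import csv
from datetime import datetime

import gevent
from locust import events
from locust.runners import MasterRunner

worker_rows = []     # filled on each worker between reports (illustrative name)
collected_rows = []  # master-side buffer (illustrative name)


@events.request_success.add_listener
def buffer_request(request_type, name, response_time, response_length, **kwargs):
    # Runs on the workers: buffer the row instead of touching the file.
    worker_rows.append([datetime.now(), request_type, name, response_time])


@events.report_to_master.add_listener
def on_report_to_master(client_id, data, **kwargs):
    # Runs on each worker when it reports to the master:
    # piggyback the buffered rows on the payload.
    data["extra_rows"] = worker_rows[:]
    worker_rows.clear()


@events.worker_report.add_listener
def on_worker_report(client_id, data, **kwargs):
    # Runs on the master for every worker payload: collect the rows.
    collected_rows.extend(data.get("extra_rows", []))


def flush_rows():
    # Single writer greenlet on the master, so writes never interleave.
    while True:
        rows = collected_rows[:]
        collected_rows.clear()
        if rows:
            with open("output/requests_stats.csv", "a", newline="") as f:
                csv.writer(f).writerows(rows)
        gevent.sleep(2)  # roughly the interval workers report at


@events.init.add_listener
def start_writer(environment, **kwargs):
    if isinstance(environment.runner, MasterRunner):
        gevent.spawn(flush_rows)

You may also want a final flush on the quitting event (as in your original code) so rows buffered during the last interval aren't lost when the test stops.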
