
JMeter with Elasticsearch as data source

I am capturing HTTP traffic using Packetbeat. The captured traffic is stored in Elasticsearch and consists of SOAP requests (including request body, headers, etc.). In total I have about 500 million requests in the database at any given time.

My goal is to replay a specific timespan's worth of requests (~30 million requests) using JMeter. I would like to use something like the Throughput Shaping Timer ( https://jmeter-plugins.org/wiki/ThroughputShapingTimer/ ). So far I have no good idea how to get the data into a JMeter test plan. Any suggestions?

The standard CSV approach seems subpar because:
1. Generating a CSV file containing 30 million requests, including the request bodies, seems wasteful considering I already have the requests in a database
2. The timespan from which I select requests will change constantly, so I would have to generate lots of CSV files

Thanks!

Well, for a record set that large you need a data source holding millions of records, and CSV is the best option. I have created many test plans in my career and found the CSV data source to be very effective. I would recommend exporting the data from your Elasticsearch into CSV files and feeding those to JMeter.
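A minimal sketch of what such an export could look like, assuming the requests live under a `packetbeat-*` index pattern with the body stored under `http.request.body` (the endpoint, index pattern, and field layout are assumptions and depend on your Packetbeat version and mapping). It uses the Elasticsearch scroll API to page through the selected timespan and writes the bodies into a CSV file that JMeter's CSV Data Set Config can read:

```python
# Sketch: export a time window of captured requests from Elasticsearch to CSV
# for JMeter's CSV Data Set Config. Endpoint, index pattern and field names
# (packetbeat-*, @timestamp, http.request.body) are assumptions -- adjust them
# to your Packetbeat mapping.
import csv
import requests

ES_URL = "http://localhost:9200"   # assumed Elasticsearch endpoint
INDEX = "packetbeat-*"             # assumed index pattern
START = "2020-01-01T00:00:00Z"     # replay window start
END = "2020-01-01T01:00:00Z"       # replay window end

query = {
    "size": 1000,
    "query": {"range": {"@timestamp": {"gte": START, "lt": END}}},
    "sort": [{"@timestamp": "asc"}],
}

with open("requests.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "body"])

    # The initial search opens a scroll context so the ~30 million hits are
    # streamed in pages instead of being loaded into memory at once.
    resp = requests.post(f"{ES_URL}/{INDEX}/_search",
                         params={"scroll": "5m"}, json=query).json()
    scroll_id = resp["_scroll_id"]
    hits = resp["hits"]["hits"]

    while hits:
        for hit in hits:
            src = hit["_source"]
            # Field layout is an assumption; adjust to your mapping.
            body = src.get("http", {}).get("request", {}).get("body", "")
            writer.writerow([src.get("@timestamp"), body])
        # Fetch the next page for the same scroll context.
        resp = requests.post(f"{ES_URL}/_search/scroll",
                             json={"scroll": "5m", "scroll_id": scroll_id}).json()
        scroll_id = resp["_scroll_id"]
        hits = resp["hits"]["hits"]
```

Since the timespan is just two parameters here, regenerating the file for a different window is a single command rather than a manual export, which addresses the concern about constantly changing timespans. Note that JMeter reads the CSV line by line, so multi-line SOAP bodies may need to be collapsed or encoded (for example Base64) before being written out.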
