
Slow chunk response in Play 2.2

In my Play-Framework-based web application, users can download all the rows of different database tables in CSV or JSON format. The tables are relatively large (100k+ rows) and I am trying to stream the result back using chunking in Play 2.2.

The problem, however, is that although println statements show the rows being written to the Chunks.Out object, they do not show up on the client side! If I limit the number of rows sent back it works, but there is also a big delay at the beginning, which grows if I try to send back all the rows and causes a time-out or makes the server run out of memory.

I use the Ebean ORM, the tables are indexed, and querying them from psql doesn't take much time. Does anyone have any idea what the problem might be?

I appreciate your help a lot!

Here is the code for one of the controllers:

@SecureSocial.UserAwareAction
public static Result showEpex() {

    User user = getUser();
    if(user == null || user.getRole() == null)
        return ok(views.html.profile.render(user, Application.NOT_CONFIRMED_MSG));

    DynamicForm form = DynamicForm.form().bindFromRequest();
    final UserRequest req = UserRequest.getRequest(form);

    if(req.getFormat().equalsIgnoreCase("html")) {
        Page<EpexEntry> page = EpexEntry.page(req.getStart(), req.getFinish(), req.getPage());
        return ok(views.html.epex.render(page, req));
    }

    // otherwise chunk result and send back
    final ResultStreamer<EpexEntry> streamer = new ResultStreamer<EpexEntry>();
    Chunks<String> chunks = new StringChunks() {
            @Override
            public void onReady(play.mvc.Results.Chunks.Out<String> out) {

                Page<EpexEntry> page = EpexEntry.page(req.getStart(), req.getFinish(), 0);
                ResultStreamer<EpexEntry> streamer = new ResultStreamer<EpexEntry>();
                streamer.stream(out, page, req);
            }
    };
    return ok(chunks).as("text/plain");
}

And the streamer:

public class ResultStreamer<T extends Entry> {

private static ALogger logger = Logger.of(ResultStreamer.class);

public void stream(Out<String> out, Page<T> page, UserRequest req) {

    if(req.getFormat().equalsIgnoreCase("json")) {
        JsonContext context = Ebean.createJsonContext();
        out.write("[\n");
        for(T e: page.getList())
            out.write(context.toJsonString(e) + ", ");
        while(page.hasNext()) {
            page = page.next();
            for(T e: page.getList())
                out.write(context.toJsonString(e) + ", ");
        }
        out.write("]\n");
        out.close();
    } else if(req.getFormat().equalsIgnoreCase("csv")) {
        for(T e: page.getList())
            out.write(e.toCsv(CSV_SEPARATOR) + "\n");
        while(page.hasNext()) {
            page = page.next();
            for(T e: page.getList())
                out.write(e.toCsv(CSV_SEPARATOR) + "\n");
        }
        out.close();
    } else {
        out.write("Invalid format! Only CSV, JSON and HTML can be generated!");
        out.close();
    }
}


public static final String CSV_SEPARATOR = ";";
} 

And the model:

@Entity
@Table(name="epex")
public class EpexEntry extends Model implements Entry {

    @Id
    @Column(columnDefinition = "pg-uuid")
    private UUID id;
    private DateTime start;
    private DateTime finish;
    private String contract;
    private String market;
    private Double low;
    private Double high;
    private Double last;
    @Column(name="weight_avg")
    private Double weightAverage;
    private Double index;
    private Double buyVol;
    private Double sellVol;

    private static final String START_COL = "start";
    private static final String FINISH_COL = "finish";
    private static final String CONTRACT_COL = "contract";
    private static final String MARKET_COL = "market";
    private static final String ORDER_BY = MARKET_COL + "," + CONTRACT_COL + "," + START_COL;

    public static final int PAGE_SIZE = 100;

    public static final String HOURLY_CONTRACT = "hourly";
    public static final String MIN15_CONTRACT = "15min";

    public static final String FRANCE_MARKET = "france";
    public static final String GER_AUS_MARKET = "germany/austria";
    public static final String SWISS_MARKET = "switzerland";

    public static Finder<UUID, EpexEntry> find = 
            new Finder<UUID, EpexEntry>(UUID.class, EpexEntry.class);

    public EpexEntry() {
    }

    public EpexEntry(UUID id, DateTime start, DateTime finish, String contract,
            String market, Double low, Double high, Double last,
            Double weightAverage, Double index, Double buyVol, Double sellVol) {
        this.id = id;
        this.start = start;
        this.finish = finish;
        this.contract = contract;
        this.market = market;
        this.low = low;
        this.high = high;
        this.last = last;
        this.weightAverage = weightAverage;
        this.index = index;
        this.buyVol = buyVol;
        this.sellVol = sellVol;
    }

    public static Page<EpexEntry> page(DateTime from, DateTime to, int page) {

        if(from == null && to == null)
            return find.order(ORDER_BY).findPagingList(PAGE_SIZE).getPage(page);
        ExpressionList<EpexEntry> exp = find.where();
        if(from != null)
            exp = exp.ge(START_COL, from);
        if(to != null)
            exp = exp.le(FINISH_COL, to.plusHours(24));
        return exp.order(ORDER_BY).findPagingList(PAGE_SIZE).getPage(page);
    }

    @Override
    public String toCsv(String s) {
        return id + s + start + s + finish + s + contract + 
                s + market + s + low + s + high + s + 
                last + s + weightAverage + s + 
                index + s + buyVol + s + sellVol;   
    }
}

1. Most browsers wait for 1-5 kb of data before showing any results. You can check whether Play Framework actually sends data with the command curl http://localhost:9000 .

2. You create the streamer twice; remove the first one: final ResultStreamer<EpexEntry> streamer = new ResultStreamer<EpexEntry>();

3. You use the Page class for retrieving a large data set - this is incorrect. It actually does one big initial request and then one request per iteration. This is SLOW. Use a simple findIterate() instead.

Add this to EpexEntry (feel free to change it as you need):

public static QueryIterator<EpexEntry> all() {
    return find.order(ORDER_BY).findIterate();
}

Your new stream method implementation:

public void stream(Out<String> out, QueryIterator<T> iterator, UserRequest req) {

    if(req.getFormat().equalsIgnoreCase("json")) {
        JsonContext context = Ebean.createJsonContext();
        out.write("[\n");
        boolean first = true;
        while (iterator.hasNext()) {
            if(!first)                // comma between elements, not after the
                out.write(",\n");     // last one, so the JSON stays valid
            out.write(context.toJsonString(iterator.next()));
            first = false;
        }
        iterator.close(); // it's important to close the iterator
        out.write("\n]\n");
        out.close();
    } else // csv implementation here

And your onReady method:

            QueryIterator<EpexEntry> iterator = EpexEntry.all();
            ResultStreamer<EpexEntry> streamer = new ResultStreamer<EpexEntry>();
            streamer.stream(new BuffOut(out, 10000), iterator, req); // notice buffering here

4. Another problem is that you call Out<String>.write() too often. A call to write() means the server needs to send a new chunk of data to the client immediately, and every call to Out<String>.write() has significant overhead.

The overhead appears because the server needs to wrap the response in chunked framing - 6-7 bytes for each message (the chunked response format). Since you send small messages, the overhead is significant. Also, the server wraps your reply in TCP packets whose sizes will be far from optimal, and it performs some internal work to send each chunk, which also requires resources. As a result, download bandwidth will be far from optimal.
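To make the framing concrete, each write() becomes one chunk on the wire: a hex byte count, CRLF, the data, and a trailing CRLF (per the HTTP/1.1 chunked transfer coding). A minimal standalone sketch of that framing:

```java
public class ChunkFrame {

    // Frame one message as an HTTP/1.1 chunk: hex length, CRLF, data, CRLF.
    // Assumes ASCII data, so character count equals byte count.
    static String frame(String data) {
        return Integer.toHexString(data.length()) + "\r\n" + data + "\r\n";
    }

    public static void main(String[] args) {
        // 6 bytes of payload cost 5 extra bytes of framing
        System.out.print(frame("TEST0\n"));
    }
}
```

So a 6-byte line carries 5 bytes of pure framing overhead, before TCP/IP headers are even counted.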

Here is a simple test: send 10000 lines of text, TEST0 to TEST9999, in chunks. On my computer this takes 3 seconds on average, but with buffering it takes 65 ms. The download sizes are 136 kb and 87.5 kb respectively.
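Those numbers are consistent with the chunk framing alone. A rough back-of-the-envelope calculation (assuming one chunk per write(), standard chunked framing, and ignoring HTTP headers and the terminating zero-length chunk):

```java
public class ChunkOverhead {

    // Bytes on the wire for one chunk: hex length + CRLF + data + CRLF
    static int chunkedSize(int dataLen) {
        return Integer.toHexString(dataLen).length() + 2 + dataLen + 2;
    }

    public static void main(String[] args) {
        int payload = 0, unbuffered = 0;
        for (int i = 0; i < 10000; i++) {
            int len = ("TEST" + i + "\n").length();
            payload += len;
            unbuffered += chunkedSize(len);   // one chunk per write()
        }
        // Buffered: one chunk per ~1000-char flush, ~7 bytes of framing each
        int flushes = (payload + 999) / 1000;
        int buffered = payload + flushes * (3 + 2 + 2);
        System.out.printf("payload=%.1f kb, unbuffered=%.1f kb, buffered=%.1f kb%n",
                payload / 1024.0, unbuffered / 1024.0, buffered / 1024.0);
    }
}
```

This lands close to the measured 136 kb vs 87.5 kb, so most of the size difference really is chunk framing.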

Example with buffering:

Controller:

public class Application extends Controller {
    public static Result showEpex() {
        Chunks<String> chunks = new StringChunks() {
            @Override
            public void onReady(play.mvc.Results.Chunks.Out<String> out) {
                new ResultStreamer().stream(out);
            }
        };
        return ok(chunks).as("text/plain");
    }
}

The new BuffOut class. It's dumb, I know:

public class BuffOut {
    private StringBuilder sb;
    private Out<String> dst;

    public BuffOut(Out<String> dst, int bufSize) {
        this.dst = dst;
        this.sb = new StringBuilder(bufSize);
    }

    public void write(String data) {
        if ((sb.length() + data.length()) > sb.capacity()) {
            dst.write(sb.toString());
            sb.setLength(0);
        }
        sb.append(data);
    }

    public void close() {
        if (sb.length() > 0)
            dst.write(sb.toString());
        dst.close();
    }
}
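To see when BuffOut actually flushes, here is a small standalone demo; Play's Chunks.Out<String> is replaced with a hypothetical list-backed stand-in so the behaviour can be observed without a running server:

```java
import java.util.ArrayList;
import java.util.List;

public class BuffOutDemo {

    // Stand-in for Play's Chunks.Out<String>: records each emitted chunk
    static class RecordingOut {
        final List<String> chunks = new ArrayList<String>();
        void write(String s) { chunks.add(s); }
        void close() {}
    }

    // Same buffering logic as the BuffOut class above
    static class BuffOut {
        private final StringBuilder sb;
        private final RecordingOut dst;

        BuffOut(RecordingOut dst, int bufSize) {
            this.dst = dst;
            this.sb = new StringBuilder(bufSize);
        }

        void write(String data) {
            if ((sb.length() + data.length()) > sb.capacity()) {
                dst.write(sb.toString()); // flush before the buffer would overflow
                sb.setLength(0);
            }
            sb.append(data);
        }

        void close() {
            if (sb.length() > 0)
                dst.write(sb.toString()); // flush the remainder
            dst.close();
        }
    }

    public static void main(String[] args) {
        RecordingOut out = new RecordingOut();
        BuffOut buf = new BuffOut(out, 16);
        for (int i = 0; i < 10; i++)
            buf.write("TEST" + i + "\n"); // 6 chars per line
        buf.close();
        // A 16-char buffer holds two lines, so the third write forces a flush
        System.out.println(out.chunks.size() + " chunks instead of 10 writes");
    }
}
```

Ten writes collapse into five chunks with a 16-character buffer; with the 10000-character buffer used above, thousands of writes collapse into a handful of chunks.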

This implementation has a 3 second download time and a 136 kb download size:

public class ResultStreamer {
    public void stream(Out<String> out) {
        for (int i = 0; i < 10000; i++) {
            out.write("TEST" + i + "\n");
        }
        out.close();
    }
}

This implementation has a 65 ms download time and an 87.5 kb download size:

public class ResultStreamer {
    public void stream(Out<String> out) {
        BuffOut out2 = new BuffOut(out, 1000);
        for (int i = 0; i < 10000; i++) {
            out2.write("TEST" + i + "\n");
        }
        out2.close();
    }
}
