
Performance problem when deserializing a list with many objects in a JSON request

While deserializing a JSON request that contains a list with many objects (more than 100), the methods "com.fasterxml.jackson.core.json.UTF8StreamJsonParser#slowParseName" and "com.fasterxml.jackson.core.json.UTF8StreamJsonParser#_loadMore" are executed.

They consume a lot of time (45% of the whole computation). I suspect a buffer is being exceeded here. Can anyone confirm this? Is there some way to increase the size of this buffer, or another way to avoid this performance problem?
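For context, the post does not show the deserialization call itself; a setup that would produce this kind of profile is assumed to look roughly like the sketch below (the Item type and the method name are hypothetical, the Jackson calls are standard):

```java
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.InputStream;
import java.util.List;

public class RequestDeserializer {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Hypothetical payload element; the real request type is not shown in the post.
    public static class Item {
        public String name;
        public int quantity;
    }

    // Binds the request body (a JSON array with many objects) to a list.
    // Internally Jackson uses UTF8StreamJsonParser, which is where
    // slowParseName and _loadMore show up in the profile.
    public static List<Item> readItems(InputStream body) throws Exception {
        return MAPPER.readValue(body, new TypeReference<List<Item>>() {});
    }
}
```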

UTF8StreamJsonParser#_loadMore does the actual reading from the input, so I would expect (if everything is working as intended) this method to be the major contributor to the whole computation.

slowParseName handles special cases where field names contain escaping and/or cross a buffer boundary. The first case depends on the input and there is nothing you can do about it; the second should happen infrequently.

I don't think that increasing the buffer size (even if possible) would really improve performance.
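One way to check how much of the _loadMore time is real parsing work versus waiting on the underlying stream is to read the whole payload into memory first and let Jackson parse a byte array, so _loadMore no longer touches the network or disk. A minimal sketch, assuming a plain ObjectMapper and a generic target type:

```java
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.InputStream;
import java.util.List;
import java.util.Map;

public class BufferedParse {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Reads the whole body into memory first (InputStream#readAllBytes, Java 9+),
    // then lets Jackson parse from a byte array. If this is much faster than
    // parsing the InputStream directly, the time attributed to _loadMore was
    // mostly I/O wait rather than Jackson's own work.
    public static List<Map<String, Object>> parseBuffered(InputStream body) throws Exception {
        byte[] payload = body.readAllBytes();
        return MAPPER.readValue(payload, new TypeReference<List<Map<String, Object>>>() {});
    }
}
```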

Last but not least, to make any performance evaluation you need something to compare against.

How much time does it take for that request to be just read (and not processed)? (A rough sketch of such a baseline measurement follows after these questions.)

Have you tried another parser?

Does parsing the same source in NodeJS yield much different results?

Are there many GC pauses during the request processing?
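As a baseline of the kind suggested above, one could time merely reading the raw bytes of the payload and compare that to a full Jackson parse of the same bytes. A rough sketch, using a file as a stand-in for the request body (the path is hypothetical):

```java
import com.fasterxml.jackson.databind.ObjectMapper;

import java.nio.file.Files;
import java.nio.file.Path;

public class ParseBaseline {

    public static void main(String[] args) throws Exception {
        Path payloadFile = Path.of("request.json"); // hypothetical copy of the request body

        // Baseline: how long does it take just to read the bytes?
        long t0 = System.nanoTime();
        byte[] payload = Files.readAllBytes(payloadFile);
        long readNanos = System.nanoTime() - t0;

        // Full parse of the same bytes into a generic tree.
        ObjectMapper mapper = new ObjectMapper();
        long t1 = System.nanoTime();
        mapper.readTree(payload);
        long parseNanos = System.nanoTime() - t1;

        System.out.printf("read: %.2f ms, parse: %.2f ms%n",
                readNanos / 1_000_000.0, parseNanos / 1_000_000.0);
    }
}
```

Note this single-shot timing is only a crude indication; a real comparison should use a benchmark harness such as JMH to account for JIT warmup. For the GC question, running the service with -verbose:gc (or -Xlog:gc on Java 9+) makes long pauses visible in the log.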
