
Kryo out of memory error for flink siddhi

I am using flink siddhi and getting an out of memory error while processing large objects. In the output stream generated by siddhi cep I have an object with more than 200 fields, and I have some operators after that to process this object. [flink version 1.7.2]

java.lang.OutOfMemoryError: Java heap space
    at com.esotericsoftware.kryo.io.Input.readBytes(Input.java:307)
    at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.read(DefaultArraySerializers.java:42)
    at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.read(DefaultArraySerializers.java:25)
    at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
    at com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:143)
    at com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:21)
    at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
    at org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:315)
    at org.apache.flink.runtime.state.DefaultOperatorStateBackend.deserializeOperatorStateValues(DefaultOperatorStateBackend.java:592)
    at org.apache.flink.runtime.state.DefaultOperatorStateBackend.restore(DefaultOperatorStateBackend.java:378)
    at org.apache.flink.runtime.state.DefaultOperatorStateBackend.restore(DefaultOperatorStateBackend.java:62)
    at org.apache.flink.streaming.api.operators.BackendRestorerProcedure.attemptCreateAndRestore(BackendRestorerProcedure.java:151)
    at org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:123)
    at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.operatorStateBackend(StreamTaskStateInitializerImpl.java:245)
    at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:143)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:250)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:738)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:289)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:704)
    at java.lang.Thread.run(Thread.java:748)

The exception usually means that the JVM heap is too small for Flink/Siddhi to process your objects.

You can increase the JVM heap size by increasing Flink's total memory: https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/memory/mem_setup/#configure-total-memory

Edit your conf/flink-conf.yaml and set the heap sizes for the JobManager and TaskManager to the highest appropriate values in MB/GB.

jobmanager.memory.heap.size: 
taskmanager.memory.task.heap.size:
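
For example, a minimal sketch of those entries with illustrative values (2g/4g are placeholders, not recommendations; pick sizes that fit the memory actually available on your machines):

# conf/flink-conf.yaml
jobmanager.memory.heap.size: 2g
# heap available to your operators (and the Siddhi state they deserialize)
taskmanager.memory.task.heap.size: 4g

The file is read at startup, so restart the cluster (or resubmit the job on YARN/Kubernetes) for the new sizes to take effect.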
