
Java: Hazelcast: java.io.EOFException: Cannot read 4 bytes

For my web application, I have 2 instances that I have defined in the hazelcast xml. When I start one server it starts properly, but when I start the second server I get the following error:

SEVERE: [192.168.1.32]:5701 [dev] [3.5] java.io.EOFException: Cannot read 4 bytes!
com.hazelcast.nio.serialization.HazelcastSerializationException: java.io.EOFException: Cannot read 4 bytes!
    at com.hazelcast.nio.serialization.SerializationServiceImpl.handleException(SerializationServiceImpl.java:380)
    at com.hazelcast.nio.serialization.SerializationServiceImpl.toObject(SerializationServiceImpl.java:282)
    at com.hazelcast.spi.impl.NodeEngineImpl.toObject(NodeEngineImpl.java:200)
    at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:294)
    at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.processPacket(OperationThread.java:142)
    at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.process(OperationThread.java:115)
    at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.doRun(OperationThread.java:101)
    at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.run(OperationThread.java:76)
Caused by: java.io.EOFException: Cannot read 4 bytes!
    at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.checkAvailable(ByteArrayObjectDataInput.java:543)
    at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.readInt(ByteArrayObjectDataInput.java:255)
    at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.readInt(ByteArrayObjectDataInput.java:249)
    at com.hazelcast.cluster.impl.ConfigCheck.readData(ConfigCheck.java:217)
    at com.hazelcast.cluster.impl.JoinMessage.readData(JoinMessage.java:80)
    at com.hazelcast.cluster.impl.operations.MasterDiscoveryOperation.readInternal(MasterDiscoveryOperation.java:46)
    at com.hazelcast.spi.Operation.readData(Operation.java:451)
    at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:111)
    at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:39)
    at com.hazelcast.nio.serialization.StreamSerializerAdapter.read(StreamSerializerAdapter.java:41)
    at com.hazelcast.nio.serialization.SerializationServiceImpl.toObject(SerializationServiceImpl.java:276)
    ... 6 more

Can someone help me? I am not able to find anything :(

Here is my hazelcast.xml:

<!--
    The default Hazelcast configuration. This is used when:
    - no hazelcast.xml is present
-->
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.5.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

<group>
    <name>dev</name>
    <password>dev-pass</password>
</group>
<management-center enabled="false">http://localhost:8080/mancenter</management-center>
<network>
    <port auto-increment="true" port-count="100">5701</port>
    <outbound-ports>
        <ports>0</ports>
    </outbound-ports>
    <join>
        <multicast enabled="true">
            <multicast-group>224.2.2.3</multicast-group>
            <multicast-port>54327</multicast-port>
        </multicast>
        <tcp-ip enabled="false">
            <member>192.168.1.67</member>
            <member>192.168.1.75</member>
        </tcp-ip>
        <aws enabled="false">
            <access-key>my-access-key</access-key>
            <secret-key>my-secret-key</secret-key>
            <region>us-west-1</region>
            <host-header>ec2.amazonaws.com</host-header>
            <security-group-name>hazelcast-sg</security-group-name>
            <tag-key>type</tag-key>
            <tag-value>hz-nodes</tag-value>
        </aws>
    </join>
    <interfaces enabled="false">
        <interface>10.10.1.*</interface>
    </interfaces>
    <symmetric-encryption enabled="false">
        <algorithm>PBEWithMD5AndDES</algorithm>
        <salt>thesalt</salt>
        <password>thepass</password>
        <iteration-count>19</iteration-count>
    </symmetric-encryption>
</network>
<executor-service name="default">
    <pool-size>16</pool-size>
    <!-- Queue capacity. 0 means Integer.MAX_VALUE. -->
    <queue-capacity>0</queue-capacity>
</executor-service>
<queue name="default">
    <!--
        Maximum size of the queue. Any integer between 0 and
        Integer.MAX_VALUE. 0 means Integer.MAX_VALUE. Default is 0.
    -->
    <max-size>0</max-size>
    <!--
        Number of backups. If 1 is set as the backup-count for example,
        then all entries of the queue will be copied to another JVM for
        fail-safety. 0 means no backup.
    -->
    <backup-count>1</backup-count>

    <!--
        Number of async backups. 0 means no backup.
    -->
    <async-backup-count>0</async-backup-count>

    <empty-queue-ttl>-1</empty-queue-ttl>
</queue>
 <map name="persistent.*">
    <!--
       Data type that will be used for storing recordMap.
       Possible values:
       BINARY (default): keys and values will be stored as binary data
       OBJECT : values will be stored in their object forms
       NATIVE : values will be stored in non-heap region of JVM
    -->
    <in-memory-format>BINARY</in-memory-format>

    <!--
        Number of backups. If 1 is set as the backup-count for example,
        then all entries of the map will be copied to another JVM for
        fail-safety. 0 means no backup.
    -->
    <backup-count>1</backup-count>
    <!--
        Number of async backups. 0 means no backup.
    -->
    <async-backup-count>0</async-backup-count>
    <!--
        Maximum number of seconds for each entry to stay in the map. Entries that are
        older than <time-to-live-seconds> and not updated for <time-to-live-seconds>
        will get automatically evicted from the map.
        Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
    -->
    <time-to-live-seconds>0</time-to-live-seconds>
    <!--
        Maximum number of seconds for each entry to stay idle in the map. Entries that are
        idle(not touched) for more than <max-idle-seconds> will get
        automatically evicted from the map. Entry is touched if get, put or containsKey is called.
        Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
    -->
    <max-idle-seconds>0</max-idle-seconds>
    <!--
        Valid values are:
        NONE (no eviction),
        LRU (Least Recently Used),
        LFU (Least Frequently Used).
        NONE is the default.
    -->
    <eviction-policy>NONE</eviction-policy>
    <!--
        Maximum size of the map. When max size is reached,
        map is evicted based on the policy defined.
        Any integer between 0 and Integer.MAX_VALUE. 0 means
        Integer.MAX_VALUE. Default is 0.
    -->
    <max-size policy="PER_NODE">0</max-size>
    <!--
        When max. size is reached, specified percentage of
        the map will be evicted. Any integer between 0 and 100.
        If 25 is set for example, 25% of the entries will
        get evicted.
    -->
    <eviction-percentage>25</eviction-percentage>
    <!--
        Minimum time in milliseconds which should pass before checking
        if a partition of this map is evictable or not.
        Default value is 100 millis.
    -->
    <min-eviction-check-millis>100</min-eviction-check-millis>
    <!--
        While recovering from split-brain (network partitioning),
        map entries in the small cluster will merge into the bigger cluster
        based on the policy set here. When an entry merge into the
        cluster, there might an existing entry with the same key already.
        Values of these entries might be different for that same key.
        Which value should be set for the key? Conflict is resolved by
        the policy set here. Default policy is PutIfAbsentMapMergePolicy

        There are built-in merge policies such as
        com.hazelcast.map.merge.PassThroughMergePolicy; entry will be overwritten if merging entry exists for the key.
        com.hazelcast.map.merge.PutIfAbsentMapMergePolicy ; entry will be added if the merging entry doesn't exist in the cluster.
        com.hazelcast.map.merge.HigherHitsMapMergePolicy ; entry with the higher hits wins.
        com.hazelcast.map.merge.LatestUpdateMapMergePolicy ; entry with the latest update wins.
    -->
  <merge-policy>com.hazelcast.map.merge.PutIfAbsentMapMergePolicy</merge-policy>
     <map-store enabled="true">
        <factory-class-name>com.adeptia.indigo.services.hazelcast.PersistentMapStoreFactory</factory-class-name>
        <write-delay-seconds>0</write-delay-seconds>
    </map-store>

</map>

<multimap name="default">
    <backup-count>1</backup-count>
    <value-collection-type>SET</value-collection-type>
</multimap>

<list name="default">
    <backup-count>1</backup-count>
</list>

<set name="default">
    <backup-count>1</backup-count>
</set>

<jobtracker name="default">
    <max-thread-size>0</max-thread-size>
    <!-- Queue size 0 means number of partitions * 2 -->
    <queue-size>0</queue-size>
    <retry-count>0</retry-count>
    <chunk-size>1000</chunk-size>
    <communicate-stats>true</communicate-stats>
    <topology-changed-strategy>CANCEL_RUNNING_OPERATION</topology-changed-strategy>
</jobtracker>

<semaphore name="default">
    <initial-permits>0</initial-permits>
    <backup-count>1</backup-count>
    <async-backup-count>0</async-backup-count>
</semaphore>

<reliable-topic name="default">
    <read-batch-size>10</read-batch-size>
    <topic-overload-policy>BLOCK</topic-overload-policy>
    <statistics-enabled>true</statistics-enabled>
</reliable-topic>

<ringbuffer name="default">
    <capacity>10000</capacity>
    <backup-count>1</backup-count>
    <async-backup-count>0</async-backup-count>
    <time-to-live-seconds>30</time-to-live-seconds>
    <in-memory-format>BINARY</in-memory-format>
</ringbuffer>

<serialization>
    <portable-version>0</portable-version>
</serialization>

<services enable-defaults="true"/>
</hazelcast>
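
For reference, a member can be started from this configuration with something along these lines (a minimal sketch, assuming the file above is saved as hazelcast.xml on the classpath):

import com.hazelcast.config.ClasspathXmlConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class StartMember {
    public static void main(String[] args) {
        // Loads the XML above from the classpath. Both servers must use the
        // same group name/password and compatible Hazelcast versions to
        // form one cluster.
        ClasspathXmlConfig config = new ClasspathXmlConfig("hazelcast.xml");
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        System.out.println("Members: " + hz.getCluster().getMembers());
    }
}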

I had the same problem. I tried to store the following data structure into Hazelcast using Portables (row and cell are different Portable implementations):

row { cell { 'name' : 'cell_0_0', 'value' : 'cell_value_0_0' }, cell { 'name' : 'cell_0_1', 'value' : 1 } }, ...

The problem is that for the first cell Hazelcast stores a field type of UTF for the field named 'value', but while storing the second cell Hazelcast retrieves the already-stored field definition for 'value', which is UTF. So the field type is UTF rather than Int, and when the stored Portables were read back from the map, readUTF was used on a field that actually held an int. That caused the exception for me, because the stored field value and the stored field type did not correspond to each other.
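
To illustrate, here is a minimal sketch of a Portable that can produce this kind of mismatch (the Cell class, factory/class IDs, and field names are hypothetical, not my actual code):

import java.io.IOException;

import com.hazelcast.nio.serialization.Portable;
import com.hazelcast.nio.serialization.PortableReader;
import com.hazelcast.nio.serialization.PortableWriter;

public class Cell implements Portable {
    public static final int FACTORY_ID = 1;
    public static final int CLASS_ID = 2;

    private String name;
    private Object value; // sometimes a String, sometimes an Integer

    @Override
    public int getFactoryId() { return FACTORY_ID; }

    @Override
    public int getClassId() { return CLASS_ID; }

    @Override
    public void writePortable(PortableWriter writer) throws IOException {
        writer.writeUTF("name", name);
        if (value instanceof String) {
            // The first serialized cell registers "value" as UTF in the
            // class definition for this class ID.
            writer.writeUTF("value", (String) value);
        } else {
            // Later cells write the same field name as an int, which no
            // longer matches the stored field definition.
            writer.writeInt("value", (Integer) value);
        }
    }

    @Override
    public void readPortable(PortableReader reader) throws IOException {
        name = reader.readUTF("name");
        // Reading follows the stored (UTF) field definition, so an
        // int-encoded value leads to corrupt reads / EOFException.
        value = reader.readUTF("value");
    }
}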

EDIT: In your case, after the second instance starts, stored objects are exchanged and of course read. Perhaps the problem lies at this point.
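
A sketch of the workaround I mean: keep the type of the 'value' field stable across all instances, for example by always writing it as UTF (again with hypothetical names):

@Override
public void writePortable(PortableWriter writer) throws IOException {
    writer.writeUTF("name", name);
    // One stable field type: every instance now matches the class
    // definition recorded on first serialization.
    writer.writeUTF("value", value == null ? null : String.valueOf(value));
}

@Override
public void readPortable(PortableReader reader) throws IOException {
    name = reader.readUTF("name");
    value = reader.readUTF("value");
}

Alternatively, cells with different value types can be given different class IDs so that each gets its own class definition.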
