
Issue with Java heap on an AWS EC2 micro instance

I'm trying to get the DBpedia Spotlight Java package working on an AWS EC2 micro instance (info here: https://github.com/dbpedia-spotlight/dbpedia-spotlight/wiki/Run-from-a-JAR ).

The problem is that additional Java heap space is required, and I guess Amazon isn't so fond of giving it to me. Here is the command and output. I've tried -Xmx10G, etc., no dice. I guess Amazon micro instances might be limited in memory / heap space, but I'm really not sure how to go about changing it or whether that is even the issue. Thanks!

$ java -Xmx1024m -jar dbpedia-spotlight.jar en http://localhost:2223/rest
Jan 31, 2015 6:48:04 AM org.dbpedia.spotlight.db.memory.MemoryStore$ load
INFO: Loading MemoryTokenTypeStore...
Jan 31, 2015 6:48:05 AM org.dbpedia.spotlight.db.memory.MemoryTokenTypeStore createReverseLookup
INFO: Creating reverse-lookup for Tokens.
Jan 31, 2015 6:48:06 AM org.dbpedia.spotlight.db.memory.MemoryStore$ load
INFO: Done (1527 ms)
Jan 31, 2015 6:48:06 AM org.dbpedia.spotlight.db.memory.MemoryStore$ load
INFO: Loading MemorySurfaceFormStore...
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000ec7a8000, 153452544, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 153452544 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/ubuntu/dbpedia-spotlight-quickstart-0.6.5/hs_err_pid2347.log

If you ask Java to allocate more memory than is available, it's going to barf (longer, more technical answer here). A t1.micro has just over 600 MB of memory, and a t2.micro has 1000 MB of memory.

As a starting point, assuming a modern Ubuntu instance, you should be able to run a 350-400 MB heap on a t1.micro and 750-800 MB on a t2.micro.
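
As a rough check (a sketch, assuming a stock Ubuntu AMI; the heap value is just an example within the range above and may still be too small for the Spotlight models), you can see how much RAM is actually free and relaunch with a heap that fits:

$ free -m                                   # total/used/free RAM and swap, in MB
$ java -Xms256m -Xmx400m -jar dbpedia-spotlight.jar en http://localhost:2223/rest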

If that isn't enough, use a larger instance. Good next steps would be the t2.medium or m3.large. If it's really a memory hog, the r3.* servers have more memory.
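
If you go that route, one way to change the instance type (a sketch, assuming the AWS CLI is configured and the instance is EBS-backed; the instance ID is a placeholder) is to stop the instance, modify its type, and start it again:

$ aws ec2 stop-instances --instance-ids i-0123456789abcdef0
$ aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type "{\"Value\": \"t2.medium\"}"
$ aws ec2 start-instances --instance-ids i-0123456789abcdef0

The same change can also be made from the EC2 console while the instance is stopped.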

We run Spotlight on an m2.xlarge (it has 15 GB of RAM); our custom-entity model eats around 14 GB.

I think it's impossible to run this on a micro instance. You have to load all the models into memory, and a micro instance just can't handle it. When you set -Xmx/-Xms higher than the RAM available on the machine, the JVM will try to use swap to compensate. In this case that is horrendous, because the models consume so much memory.
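
For reference, enabling swap on Ubuntu looks roughly like this (a sketch; the 2 GB size and /swapfile path are illustrative), but as said above it is a poor fit here because the models are far larger than the instance's RAM:

$ sudo fallocate -l 2G /swapfile    # create a 2 GB swap file (size is illustrative)
$ sudo chmod 600 /swapfile          # restrict permissions as swapon requires
$ sudo mkswap /swapfile             # format it as swap space
$ sudo swapon /swapfile             # enable it
$ free -m                           # the Swap line should now show the added space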
