Memory leak with JDBC / SQLite

I have some code which accesses a SQLite database using JDBC.

I've noticed that every time a query is made, the memory usage increases - and it does not go down, even after the connection is closed.

Here's what I'm doing:

1) Closing the PreparedStatement

2) Closing the ResultSet

3) Closing the connection

Here's a screenshot of the heap dump analysis:

[Heap dump screenshot]

It shows a large number of java.lang.ref.Finalizer objects, along with many PreparedStatement and ResultSet objects.

Here's the code (it's in Scala, but it should map easily to Java):

import java.sql.{Connection, DriverManager, ResultSet}

val conn: Connection = DriverManager.getConnection(url)


// Gets strings by a query like SELECT .. WHERE foo = ?
def getStringsByQuery(query: String, param: String, field: String): Seq[String] = {

    val st = conn.prepareStatement(query)
    st.setString(1, param) //value of foo = ?
    st.setFetchSize(Integer.MAX_VALUE)
    st.setMaxRows(Integer.MAX_VALUE)

    //Holder of results
    var results = collection.mutable.Seq.empty[String]

    val rs: ResultSet = st.executeQuery()

    //add results to holder
    while (rs.next())
      results :+= rs.getString(field)

    rs.close() //closing ResultSet
    st.close() //closing PreparedStatement
    results
  }
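(As an aside, a more defensive variant of this method, not part of the original code, would close the statement and result set in finally blocks so the underlying SQLite handles are released even if the query throws. A sketch:)

def getStringsByQuerySafe(query: String, param: String, field: String): Seq[String] = {
  val st = conn.prepareStatement(query)
  try {
    st.setString(1, param)
    val rs = st.executeQuery()
    try {
      //accumulate results exactly as above
      var results = collection.mutable.Seq.empty[String]
      while (rs.next())
        results :+= rs.getString(field)
      results
    } finally rs.close() //always close the ResultSet
  } finally st.close()   //always close the PreparedStatement
}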

Here's the test I wrote to reproduce this:

test("detect memory leak") {

    log.info("Starting in 10 sec")
    Thread.sleep(10.seconds.toMillis)

    //Calls a method over and over to see if there's a memory leak or not..
    (1 to 1000).par.foreach(i => {
      val randomWord = getRandomWord() //this produces a random word
      val sql = "SELECT foo FROM myTable where bar = ?"
      val results = getStringsByQuery(sql, randomWord, "foo") //"foo" is the column selected by the query
    })

    conn.close() //close the connection
    log.info("Closed conn, closing in 30 sec")

    Thread.sleep(1.minutes.toMillis)
  }

When I run the test, memory usage steadily increases from 24.6 GB to 33 GB and never goes down (even though the ResultSet and PreparedStatement are being closed); even at the end, when the connection is closed and the thread sleeps for 1 minute, memory usage still doesn't drop.

Does anyone know what's going on here? I'd appreciate any help.

We run sqlite-jdbc in a production environment and we noticed similar behavior when inspecting heap dumps.

These objects are only on the heap because they haven't yet been run through the garbage collector. In the sense of https://blog.nelhage.com/post/three-kinds-of-leaks/ this is a "type 2 memory leak", in that the objects have been allocated and are living a bit longer than you expect.
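One way to confirm this yourself (an illustrative sketch, not taken from the original answer) is to compare used heap before and after explicitly requesting a GC once the work is done; if the retained PreparedStatement / ResultSet objects are merely uncollected rather than truly leaked, the usage should drop sharply:

//Illustrative only: measure used heap, hint a GC, measure again
def usedHeapMb(): Long = {
  val rt = Runtime.getRuntime
  (rt.totalMemory() - rt.freeMemory()) / (1024L * 1024L)
}

println(s"Used heap before GC: ${usedHeapMb()} MB")
System.gc()        //only a hint, but normally honoured in a test setting
Thread.sleep(2000) //give the GC and the finalizer thread time to run
println(s"Used heap after GC:  ${usedHeapMb()} MB")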

The problem this caused for us is that machines would retain memory on the Java heap for unexpectedly long periods of time, running into memory pressure during peak request times. We did a few different things to ensure that these objects would be collected more frequently.

  • Decrease the overall Java heap size. This ensures that the garbage collector sees more of a need to clean up these objects. Our workload is very intense on SQLite, so it's better for machine memory to go to off-heap memory through JNI rather than being allocated on the Java heap.
  • Tune the G1GC garbage collector to allocate more of the heap to the young generation, since these objects are usually very short-lived (see the example flags after this list): https://www.oracle.com/technetwork/articles/java/g1gc-1984535.html
  • Increase our garbage collector logging and run the logs through a service like GCEasy (https://gceasy.io/) to understand exactly what kinds of garbage collector runs were happening, for further tuning.
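As an illustrative sketch only (the specific values below are hypothetical and depend entirely on your workload; they are not taken from this answer), these points translate into JVM options along these lines:

-Xmx8g                            # smaller overall heap so the collector runs sooner
-XX:+UseG1GC
-XX:+UnlockExperimentalVMOptions  # required for the G1 new-gen sizing flags below
-XX:G1NewSizePercent=30
-XX:G1MaxNewSizePercent=60        # give short-lived objects more young generation
-Xlog:gc*:file=gc.log             # GC logging on JDK 9+ (on JDK 8: -XX:+PrintGCDetails -Xloggc:gc.log)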

These may not match your use case, but they are some options you can consider depending on what you're running into.

I closed the connection and the database object, created a new database object with the same parameters and a new connection, and finally called garbage collection.

Repeat the process as needed. I use a counter to repeat the process whenever half of my machine's memory is filled.

Now I can run without errors.
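A rough sketch of that approach (the JDBC URL and the 50% threshold below are illustrative, not taken from this answer):

import java.sql.{Connection, DriverManager}

val url = "jdbc:sqlite:/path/to/db.sqlite" //hypothetical path
var conn: Connection = DriverManager.getConnection(url)

//Recreate the connection and hint a GC once used heap passes ~50%
def maybeRecycleConnection(): Unit = {
  val rt = Runtime.getRuntime
  val usedFraction = (rt.totalMemory() - rt.freeMemory()).toDouble / rt.maxMemory()
  if (usedFraction > 0.5) {
    conn.close()                            //drop the old connection
    System.gc()                             //ask the JVM to collect the finalizable statement/result-set objects
    conn = DriverManager.getConnection(url) //reopen with the same parameters
  }
}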

I resolved this issue by porting my SQLite datasets to CQEngine. Couldn't be happier.
