Getting java.lang.OutOfMemoryError when storing large files to a MySQL database using Java's BufferedInputStream
I'm currently experimenting with storing large files in a MySQL 5.5 database using Java. My main class is called FileDatabaseTest. It has the following method:
import java.sql.*;
import java.io.*;
...
public class FileDatabaseTest {
...
    private void uploadToDatabase(File file, String description) {
        try {
            PreparedStatement stmt = connection.prepareStatement(
                "INSERT INTO FILES (FILENAME, FILESIZE, FILEDESCRIPTION, FILEDATA) " +
                "VALUES (?, ?, ?, ?)");
            stmt.setString(1, file.getName());
            stmt.setLong(2, file.length());
            stmt.setString(3, description);
            stmt.setBinaryStream(4, new FileInputStream(file));
            stmt.executeUpdate();
            updateFileList();
            stmt.close();
        } catch (SQLException e) {
            e.printStackTrace();
        } catch (FileNotFoundException e) { // thrown by the FileInputStream constructor
            e.printStackTrace();
        } catch (SecurityException e) { // thrown by the FileInputStream constructor
            e.printStackTrace();
        }
    }
...
}
The database has only one table, the "FILES" table, with the following columns:
ID - AUTO_INCREMENT, PRIMARY KEY
FILENAME - VARCHAR(100)
FILESIZE - BIGINT
FILEDESCRIPTION - VARCHAR(500)
FILEDATA - LONGBLOB
The program works fine when uploading small documents, but when I upload a file of around 20 MB, the upload process is very slow. So I tried wrapping the FileInputStream in a BufferedInputStream, as in the following code:
stmt.setBinaryStream(4, new BufferedInputStream(new FileInputStream(file)));
The upload process became very fast — almost like just copying the file to another directory. But when I tried to upload files larger than 400 MB, I got the following error:
Exception in thread "Thread-5" java.lang.OutOfMemoryError: Java heap space
at com.mysql.jdbc.Buffer.ensureCapacity(Buffer.java:156)
at com.mysql.jdbc.Buffer.writeBytesNoNull(Buffer.java:514)
at com.mysql.jdbc.PreparedStatement.escapeblockFast(PreparedStatement.java:1169)
at com.mysql.jdbc.PreparedStatement.streamToBytes(PreparedStatement.java:5064)
at com.mysql.jdbc.PreparedStatement.fillSendPacket(PreparedStatement.java:2560)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2401)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2345)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2330)
at FileDatabaseTest$2.run(FileDatabaseTest.java:312)
at java.lang.Thread.run(Thread.java:662)
So I tried using an embedded Apache Derby database instead of MySQL, and I didn't get the error. I was able to upload files of 500 MB to 1.5 GB to the Derby database using the BufferedInputStream. I also observed that when using the BufferedInputStream with the MySQL server to upload large files, the JVM eats a lot of memory, while with the Derby database the JVM's memory usage stays at around 85 MB to 100 MB.
I am relatively new to MySQL and I am just using its default configuration. The only thing I changed is the "max_allowed_packet" size, so that I can upload files of up to 2 GB to the database. So I wonder where the error comes from. Is it a bug in MySQL or in MySQL Connector/J? Or is there something wrong with my code?
What I am trying to achieve here is to upload large files (up to 2 GB) to the MySQL server using Java, without increasing the Java heap space.
There is another way to resolve this if you don't want to increase your JVM heap size:
First, your MySQL version should be newer than 5.0.
Second, the statement's result set type should be TYPE_FORWARD_ONLY and its concurrency should be CONCUR_READ_ONLY (the defaults).
Third, include ONE of these lines: 1) statement.setFetchSize(Integer.MIN_VALUE); or 2) ((com.mysql.jdbc.Statement) stmt).enableStreamingResults();
Now result rows will be fetched one by one.
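The steps above can be sketched as a small helper. Since the streaming setting only takes effect against a real MySQL connection, this sketch verifies the call with a stand-in `java.sql.Statement` built via a dynamic proxy, so it runs without a database; the `probeFetchSize` helper is ours, not part of any API.

```java
import java.lang.reflect.Proxy;
import java.sql.SQLException;
import java.sql.Statement;

public class StreamingConfigDemo {

    // Ask Connector/J to stream rows one at a time instead of buffering the
    // whole result set in memory. Requires a TYPE_FORWARD_ONLY,
    // CONCUR_READ_ONLY statement (the defaults).
    static void enableRowStreaming(Statement stmt) throws SQLException {
        stmt.setFetchSize(Integer.MIN_VALUE);
    }

    // Test harness only: records what fetch size was requested, using a
    // proxy Statement so no database is needed.
    static int probeFetchSize() throws SQLException {
        final int[] recorded = new int[1];
        Statement fake = (Statement) Proxy.newProxyInstance(
                Statement.class.getClassLoader(),
                new Class<?>[]{Statement.class},
                (proxy, method, args) -> {
                    if (method.getName().equals("setFetchSize")) {
                        recorded[0] = (Integer) args[0];
                    }
                    return null; // setFetchSize returns void
                });
        enableRowStreaming(fake);
        return recorded[0];
    }

    public static void main(String[] args) throws SQLException {
        System.out.println(probeFetchSize() == Integer.MIN_VALUE); // prints "true"
    }
}
```

With a real connection you would create the statement with `connection.createStatement()` (forward-only, read-only by default) and call `enableRowStreaming` before executing the query.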
It seems to be a MySQL JDBC problem. Of course, you might also consider GZip + piped I/O.
I also found a terrible workaround: doing the insert in parts, with
UPDATE FILES SET FILEDATA = CONCAT(FILEDATA, ?)
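That chunked approach can be sketched as follows. The JDBC calls appear as comments (the row would have to be inserted first, and the `WHERE ID = ?` clause is an assumption about how the row is addressed); the loop itself writes to an in-memory buffer so the sketch runs standalone.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

public class ChunkedUploadDemo {
    static final int CHUNK = 128 * 1024; // 128 KB per round trip

    // Reads the stream in fixed-size chunks. In the real upload each chunk
    // would be appended to the BLOB with:
    //   UPDATE FILES SET FILEDATA = CONCAT(FILEDATA, ?) WHERE ID = ?
    // via stmt.setBytes(1, Arrays.copyOf(buf, n)); stmt.executeUpdate();
    // Here we append to a local buffer so the logic can be run and checked.
    static byte[] uploadInChunks(InputStream in) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        byte[] buf = new byte[CHUNK];
        int n;
        while ((n = in.read(buf)) != -1) {
            sink.write(buf, 0, n); // stand-in for one CONCAT update
        }
        return sink.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[300_000]; // larger than two chunks
        for (int i = 0; i < data.length; i++) data[i] = (byte) i;
        byte[] copied = uploadInChunks(new ByteArrayInputStream(data));
        System.out.println(Arrays.equals(data, copied)); // prints "true"
    }
}
```

Only one chunk is ever in memory at a time, which is why this avoids the heap error — at the cost of one round trip (and one server-side copy of the growing BLOB) per chunk, which is what makes it "terrible".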
We may conclude that for large files, it is better to store them on disk.
Nevertheless:
final int SIZE = 1024*128;
InputStream in = new BufferedInputStream(new FileInputStream(file), SIZE);
stmt.setBinaryStream(4, in);
stmt.executeUpdate();
updateFileList();
stmt.close();
in.close(); //?
The default buffer size is 8 KB, I think; a larger buffer might show different memory behaviour, maybe shedding some light on the problem.
Closing the stream yourself should not hurt to try.
Just for the heck of it, try upping your JVM heap size.
To increase the Java heap size permanently, see: http://javahowto.blogspot.com/2006/06/6-common-errors-in-setting-java-heap.html
To increase the JVM heap size when running your Java code in Eclipse:
Right-click your Java file -> Run As -> Run Configurations -> Arguments -> VM arguments
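From the command line, the equivalent VM arguments look like this (the class name is the one from the question; the sizes are illustrative and should be tuned to your machine):

```shell
# -Xms sets the initial heap, -Xmx the maximum heap
java -Xms256m -Xmx1024m FileDatabaseTest
```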