How to upload a Direct Buffer obtained from JNI to S3 directly
I created a shared memory region (200MB in size) that is mapped into both a Java process and a C++ process running on the system. The C++ process writes 50MB of data into this shared memory. On the Java side, a JNI function that has mapped the same shared memory reads this data into a direct buffer, like this:
JNIEXPORT jobject JNICALL Java_service_SharedMemoryJNIService_getDirectByteBuffer
  (JNIEnv *env, jclass clazz, jlong buf_addr, jint buf_len) {
    // Wrap the shared-memory region in a DirectByteBuffer without copying it
    return env->NewDirectByteBuffer((void *)buf_addr, buf_len);
}
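For context, a minimal sketch of how the Java side might declare this native method; the package, class, and method names follow from the JNI symbol above, while the native library name is an assumption:

package service;

import java.nio.ByteBuffer;

public class SharedMemoryJNIService {
    static {
        System.loadLibrary("sharedmemoryjni"); // assumed library name
    }

    // Returns a DirectByteBuffer that views the shared-memory region (no copy)
    public static native ByteBuffer getDirectByteBuffer(long bufAddr, int bufLen);
}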
Now, on the Java side, I need to upload this 50MB of data to S3. Currently, I have to copy the direct buffer into a buffer on the JVM heap, like this:
public String uploadByteBuffer(String container, String objectKey, ByteBuffer bb) {
    BlobStoreContext context = getBlobStoreContext();
    BlobStore blobStore = context.getBlobStore();
    // Extra copy: drain the direct buffer into a byte[] on the JVM heap
    byte[] buf = new byte[bb.capacity()];
    bb.get(buf);
    ByteArrayPayload payload = new ByteArrayPayload(buf);
    Blob blob = blobStore.blobBuilder(objectKey)
            .payload(payload)
            .contentLength(bb.capacity())
            .build();
    blobStore.putBlob(container, blob);
    return objectKey;
}
I want to avoid this extra copy from shared memory to the JVM heap. Is there a way to upload the data contained in the direct buffer to S3 directly?

Thanks
BlobBuilder.payload can take a ByteSource, which you can wrap around the ByteBuffer:
import static com.google.common.base.Preconditions.checkNotNull;

import java.io.IOException;
import java.io.InputStream;
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

import com.google.common.io.ByteSource;

public class ByteBufferByteSource extends ByteSource {
    private final ByteBuffer buffer;

    public ByteBufferByteSource(ByteBuffer buffer) {
        this.buffer = checkNotNull(buffer);
    }

    @Override
    public InputStream openStream() {
        // Duplicate so each stream reads with its own position,
        // leaving the source buffer untouched
        return new ByteBufferInputStream(buffer.duplicate());
    }

    private static final class ByteBufferInputStream extends InputStream {
        private final ByteBuffer buffer;
        private boolean closed = false;

        ByteBufferInputStream(ByteBuffer buffer) {
            this.buffer = buffer;
        }

        @Override
        public synchronized int read() throws IOException {
            if (closed) {
                throw new IOException("Stream already closed");
            }
            try {
                // Mask to unsigned: read() must return 0-255, not a signed byte
                return buffer.get() & 0xff;
            } catch (BufferUnderflowException bue) {
                return -1;
            }
        }

        @Override
        public void close() throws IOException {
            super.close();
            closed = true;
        }
    }
}
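With this wrapper in place, the upload can stream straight from the direct buffer with no heap copy. A minimal sketch, reusing the getBlobStoreContext() helper from the question:

public String uploadByteBuffer(String container, String objectKey, ByteBuffer bb) {
    BlobStore blobStore = getBlobStoreContext().getBlobStore();
    // S3 needs the content length up front; remaining() covers position..limit
    long length = bb.remaining();
    Blob blob = blobStore.blobBuilder(objectKey)
            .payload(new ByteBufferByteSource(bb))
            .contentLength(length)
            .build();
    blobStore.putBlob(container, blob);
    return objectKey;
}

Because openStream() duplicates the buffer, the blob store can reopen the payload (for example, on a retry) and re-read the same range without anyone having to reset positions by hand.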
You will want to override read(byte[], int, int) for efficiency. I also opened this pull request against jclouds: https://github.com/apache/jclouds/pull/158 which you can improve on.
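A sketch of what that bulk override might look like inside ByteBufferInputStream (my own assumption about the shape, not taken from the pull request):

@Override
public synchronized int read(byte[] b, int off, int len) throws IOException {
    if (closed) {
        throw new IOException("Stream already closed");
    }
    if (len == 0) {
        return 0;
    }
    if (!buffer.hasRemaining()) {
        return -1;
    }
    // Bulk get copies up to len bytes in one call instead of one byte at a time
    int count = Math.min(len, buffer.remaining());
    buffer.get(b, off, count);
    return count;
}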