Buffer implementing io.WriterAt in Go
I am using the aws-sdk to download a file from an S3 bucket. The S3 download function requires an io.WriterAt, but bytes.Buffer does not implement it. Right now I am creating a file that implements io.WriterAt, but I would prefer something in memory.
For cases involving the AWS SDK, use aws.WriteAtBuffer to download an S3 object into memory.
requestInput := s3.GetObjectInput{
    Bucket: aws.String(bucket),
    Key:    aws.String(key),
}

buf := aws.NewWriteAtBuffer([]byte{})
downloader.Download(buf, &requestInput)

fmt.Printf("Downloaded %v bytes", len(buf.Bytes()))
This is not a direct answer to the original question, but it is the solution I actually used after landing here. It is a similar use case that I think may help others.
The AWS documentation defines the contract: if you set downloader.Concurrency to 1, you get guaranteed sequential writes.
downloader.Concurrency = 1
downloader.Download(FakeWriterAt{w}, &s3.GetObjectInput{
    Bucket: aws.String(bucket),
    Key:    aws.String(key),
})
This allows you to take an io.Writer and wrap it to satisfy io.WriterAt, discarding the offset you no longer need:
type FakeWriterAt struct {
    w io.Writer
}

func (fw FakeWriterAt) WriteAt(p []byte, offset int64) (n int, err error) {
    // ignore 'offset' because we forced sequential downloads
    return fw.w.Write(p)
}
I don't know of any way to do this in the standard library, but you can write your own buffer. It really wouldn't be that hard...
Edit: I couldn't stop thinking about this, so I ended up implementing the whole thing. Enjoy :)
package main

import (
    "errors"
    "fmt"
)

func main() {
    buff := NewWriteBuffer(0, 10)
    buff.WriteAt([]byte("abc"), 5)
    fmt.Printf("%#v\n", buff)
}

// WriteBuffer is a simple type that implements io.WriterAt on an in-memory buffer.
// The zero value of this type is an empty buffer ready to use.
type WriteBuffer struct {
    d []byte
    m int
}

// NewWriteBuffer creates and returns a new WriteBuffer with the given initial size and
// maximum. If maximum is <= 0 it is unlimited.
func NewWriteBuffer(size, max int) *WriteBuffer {
    if max < size && max >= 0 {
        max = size
    }
    return &WriteBuffer{make([]byte, size), max}
}

// SetMax sets the maximum capacity of the WriteBuffer. If the provided maximum is lower
// than the current capacity but greater than 0, it is set to the current capacity; if it
// is less than or equal to zero, the buffer is unlimited.
func (wb *WriteBuffer) SetMax(max int) {
    if max < len(wb.d) && max >= 0 {
        max = len(wb.d)
    }
    wb.m = max
}

// Bytes returns the WriteBuffer's underlying data. This value will remain valid so long
// as no other methods are called on the WriteBuffer.
func (wb *WriteBuffer) Bytes() []byte {
    return wb.d
}

// Shape returns the current WriteBuffer size and its maximum if one was provided.
func (wb *WriteBuffer) Shape() (int, int) {
    return len(wb.d), wb.m
}

func (wb *WriteBuffer) WriteAt(dat []byte, off int64) (int, error) {
    // Range/sanity checks.
    if off < 0 {
        return 0, errors.New("offset out of range (too small)")
    }
    if wb.m > 0 && int(off)+len(dat) > wb.m {
        return 0, errors.New("offset+data length out of range (too large)")
    }

    // Fast path: appending directly to the end.
    if int(off) == len(wb.d) {
        wb.d = append(wb.d, dat...)
        return len(dat), nil
    }

    // Slower path: the write extends past the current length, so grow the
    // buffer first; any gap between the old end and the offset stays zero-filled.
    if int(off)+len(dat) > len(wb.d) {
        nd := make([]byte, int(off)+len(dat))
        copy(nd, wb.d)
        wb.d = nd
    }

    // Once no extension is needed, just copy the bytes into place.
    copy(wb.d[int(off):], dat)
    return len(dat), nil
}
I was looking for a simple way to get an io.ReadCloser directly from the S3 object, without buffering the response and without reducing concurrency.
import "github.com/aws/aws-sdk-go/service/s3"

[...]

obj, err := c.s3.GetObject(&s3.GetObjectInput{
    Bucket: aws.String("my-bucket"),
    Key:    aws.String("path/to/the/object"),
})
if err != nil {
    return nil, err
}

// obj.Body is a ReadCloser
return obj.Body, nil
With aws-sdk-go-v2, the example provided by the codebase shows:

// Example:
// pre-allocate an in-memory buffer, where headObject is of type *s3.HeadObjectOutput
buf := make([]byte, int(headObject.ContentLength))

// wrap it with a manager.WriteAtBuffer
w := manager.NewWriteAtBuffer(buf)

// download the file into memory
numBytesDownloaded, err := downloader.Download(ctx, w, &s3.GetObjectInput{
    Bucket: aws.String(bucket),
    Key:    aws.String(item),
})
Then use w.Bytes() as the result.
Import "github.com/aws/aws-sdk-go-v2/feature/s3/manager" (the package name is manager) along with the other required packages.