AWS S3 presigned url upload returns 200 but the file is not in the bucket using NodeJS & node-fetch
These are the relevant dependencies in my package.json:
"dependencies": {
"@aws-sdk/client-s3": "^3.52.0",
"@aws-sdk/node-http-handler": "^3.52.0",
"@aws-sdk/s3-request-presigner": "^3.52.0",
"axios": "^0.26.0",
"body-parser": "^1.19.1",
"cors": "^2.8.5",
"express": "^4.17.2",
"express-fileupload": "^1.3.1",
"http-proxy-agent": "^5.0.0",
"lodash": "^4.17.21",
"morgan": "^1.10.0",
"node-fetch": "^2.6.7",
"proxy-agent": "^5.0.0"
}
This is my code:
const express = require('express');
const fileUpload = require('express-fileupload');
const cors = require('cors');
const bodyParser = require('body-parser');
const morgan = require('morgan');
const _ = require('lodash');
const path = require('path');
const app = express();
const fs = require('fs')
//s3 relevant imports
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
const fetch = require('node-fetch');
const HttpProxyAgent = require('http-proxy-agent');
let s3 = null; // port, root, and accessKey are defined elsewhere in the app
app.listen(port, async () => {
console.log(`App is listening on port ${port}. Static ${root}`)
try {
s3 = new S3Client({
region: 'ap-southeast-2',
credentials: {
accessKeyId: accessKey.data.access_key,
secretAccessKey: accessKey.data.secret_key
}
});
} catch (e) {
console.error(e)
}
});
app.get('/uploadS3Get', async (req, res) => {
try {
const presignedS3Url = await getSignedUrl(s3, new PutObjectCommand({
Bucket: 'xyz-unique-bucketname',
Key: 'test/carlos-test.txt',
}) );
const optionsForFetch = {
method: 'PUT',
body: fs.readFileSync('carlos-test.txt'),
agent: new HttpProxyAgent ('http://username:password@proxy.com:8080')
}
const respFromUpload = await fetch(presignedS3Url, optionsForFetch).catch( err => {
console.log("error catch from fetch")
console.log(err);
return null;
});
console.log("completed so this is the respFrom Upload")
console.log(respFromUpload)
console.log("sending the response back now")
res.status(200).send(respFromUpload);
} catch (e) {
console.log("general catch error")
console.log(e)
res.status(500).send(e);
}
})
I get 200 from node-fetch, and this is what gets printed out:
Response {
size: 0,
timeout: 0,
[Symbol(Body internals)]: {
body: PassThrough {
_readableState: [ReadableState],
_events: [Object: null prototype],
_eventsCount: 5,
_maxListeners: undefined,
_writableState: [WritableState],
allowHalfOpen: true,
[Symbol(kCapture)]: false,
[Symbol(kTransformState)]: [Object]
},
disturbed: false,
error: null
},
[Symbol(Response internals)]: {
url: 'https://xyz-unique-bucketname.s3.ap-southeast-2.amazonaws.com/test/carlos-test.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=omitted%2F20220223%2Fap-southeast-2%2Fs3%2Faws4_request&X-Amz-Date=20220223T022913Z&X-Amz-Expires=900&X-Amz-Signature=sigommitted&X-Amz-SignedHeaders=content-length%3Bhost&x-id=PutObject',
status: 200,
statusText: 'OK',
headers: Headers { [Symbol(map)]: [Object: null prototype] },
counter: 0
}
}
However, even though the node-fetch PUT using the signed url returns 200, the file itself is not in the bucket.
What is wrong with my code, and why does the AWS SDK mislead by returning a 200 response when the file is missing?
Suggestion: use their createPresignedPost mechanism rather than getSignedUrl / PutObjectCommand.
The server-side code is similar, but offers better features for content-type matching and for conditions like setting a max file size. See @aws-sdk/s3-presigned-post:
const { url, fields } = await createPresignedPost(client, postOpts);
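A minimal sketch of the options you might pass in as postOpts (the buildPostOptions helper name and the example bucket/size/content-type values are my own illustration, not from the original app):

```javascript
// Hypothetical helper: builds the postOpts object for createPresignedPost.
// The condition keys below ('content-length-range', 'eq' on $Content-Type)
// are the standard S3 POST policy conditions.
function buildPostOptions(bucket, key, contentType, maxBytes) {
  return {
    Bucket: bucket,
    Key: key,
    Conditions: [
      ['content-length-range', 0, maxBytes], // reject uploads larger than maxBytes
      ['eq', '$Content-Type', contentType],  // require an exact content-type match
    ],
    Fields: { 'Content-Type': contentType },
    Expires: 900, // seconds until the presigned POST expires
  };
}

// Usage (requires @aws-sdk/s3-presigned-post; not run here):
// const { createPresignedPost } = require('@aws-sdk/s3-presigned-post');
// const { url, fields } = await createPresignedPost(s3, buildPostOptions(
//   'xyz-unique-bucketname', 'test/carlos-test.txt', 'text/plain', 5 * 1024 * 1024));
```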
Here's my WIP typescript code so far for the client upload. The output of createPresignedPost gets passed in like uploadFileAsFormData({ destUrl: url, formFields: fields, targetFile, method: 'POST' }).
Disclaimer: I've made changes since last running it successfully and haven't yet added functional tests, so it might be buggy. (Am roughing it out as part of a larger feature and may not get back to this file for another week.)
import { createReadStream } from "fs";
import http from "http";
import { stat } from "fs/promises";
import FormData from "form-data";
const DefaultHighWaterMark = 1024 * 1024;
export interface UploadFileAsFormDataArgs {
/**
* File to upload.
* REMINDER: verify it is in an authorized directory before calling (/tmp or a mounted working-data volume)
*/
targetFile: string;
/**
* Upload destination URL. (Often a presigned url)
*
*/
destUrl: string;
/**
* Added as both a request header and as a form field (for the nitpicky unpleasant servers that require it.)
* mime type of file. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types
*
* NOTE: if sending to object store using presigned url, this is likely REQUIRED
*/
contentType?: string;
/**
* HTTP request method. Usually POST or PUT
*/
method?: string;
extraHeaders?: Record<string, string>;
skipResponseBody?: boolean;
formFields: Record<string, string>;
/**
* Option to explicitly set buffer size for the stream. Normally, the default is 64kb in newer nodejs versions, while browsers use
* closer to 1MB. The default here follows browser-like default: 1024 * 1024.
*/
highWaterMark?: number;
// todo: option for AbortController
// todo: implement progress reporting w/ callback option
}
/**
* Upload a file similar to a browser using FORM submit.
* @param args
*/
export async function uploadFileAsFormData(args: UploadFileAsFormDataArgs): Promise<UploadFileResult> {
const { formFields, highWaterMark, targetFile, destUrl, contentType } = args;
const form = new FormData();
Object.entries(formFields).forEach(([ field, value ]) => {
form.append(field, value);
});
if (contentType) {
form.append('Content-Type', contentType); // in case needed... these object store APIs are finicky and unhelpful about reporting errors (or success).
}
const readStream = createReadStream(targetFile, { highWaterMark: highWaterMark || DefaultHighWaterMark });
const { size } = await stat(targetFile);
const appendFileOpts: FormData.AppendOptions = {
knownLength: size,
};
if (contentType) {
appendFileOpts.contentType = contentType;
}
form.append('file', readStream, appendFileOpts);
const uploadStartedAt = Date.now();
const responseData: { body: string, statusCode?: number, statusMessage?: string, headers?: Record<string, any> } = await new Promise((resolve, reject) => {
form.submit(destUrl, (err: Error | null, resp: http.IncomingMessage) => {
if (err) {
reject(err);
} else {
const { statusCode, statusMessage, headers } = resp;
const buffers = [] as Buffer[];
resp.on('data', (b: Buffer) => {
buffers.push(b);
});
resp.on('end', () => {
resolve({
body: Buffer.concat(buffers).toString(),
statusCode,
statusMessage,
headers,
});
});
resp.on('error', (err) => {
reject(err);
});
resp.resume();
}
});
});
const { statusCode, statusMessage, body, headers } = responseData || {};
return {
sizeBytes: size,
statusCode,
statusMessage,
rawRespBody: body,
responseHeaders: headers,
uploadStartedAt,
uploadFinishedAt: Date.now(),
}
}
export interface UploadFileResult {
sizeBytes: number;
statusCode: number;
statusMessage?: string;
rawRespBody?: string;
responseHeaders?: Record<string, any>;
uploadStartedAt: number;
uploadFinishedAt?: number;
}
Previous answer:
S3 doesn't support chunked transfer encoding, so you should add a Content-Length header, along the lines of this:
const { stat } = require('fs/promises'); // move up with the others
const { size } = await stat('carlos-test.txt');
const optionsForFetch = {
method: 'PUT',
body: fs.readFileSync('carlos-test.txt'),
headers: { 'Content-Length': size },
agent: new HttpProxyAgent ('http://username:password@proxy.com:8080')
}
It's a bummer that S3 doesn't support streaming uploads with presigned urls. I'm shopping around now for object stores that do, but so far the pickings are slim. :/
I agree with you about that 200 response--it's misleading. Their response body includes an xml error though:
<Error><Code>NotImplemented</Code><Message>A header you provided implies functionality that is not implemented</Message><Header>Transfer-Encoding</Header>...
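Since the status code alone can't be trusted, one defensive check is to scan the response text for an embedded <Error> document before declaring success. A sketch (the parseS3Error helper name is mine, and the regex approach assumes S3's simple flat error XML):

```javascript
// Hypothetical helper: S3 can reply 200 yet embed an <Error> document in the
// body, so inspect the text before trusting the status code.
function parseS3Error(bodyText) {
  const code = /<Code>([^<]+)<\/Code>/.exec(bodyText);
  const message = /<Message>([^<]+)<\/Message>/.exec(bodyText);
  if (!code) return null; // no <Error> payload: treat as success
  return { code: code[1], message: message ? message[1] : '' };
}

// e.g. applied to the question's upload response:
// const text = await respFromUpload.text();
// const s3Err = parseS3Error(text);
// if (s3Err) console.error(`S3 error ${s3Err.code}: ${s3Err.message}`);
```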