

aws transcoder overwrite files on s3

I'm using the AWS PHP SDK to upload a file to S3 and then transcode it with Elastic Transcoder.

On the first pass everything works fine: the putObject command overwrites the old file (which always has the same name) on S3:

    $s3->putObject([
      'Bucket'     => Config::get('app.aws.S3.bucket'),
      'Key'        => $key,
      'SourceFile' => $path,
      'Metadata'   => [
        'title' => Input::get('title')
      ]
    ]);

However, when creating a second transcoding job, I get the error:

  The specified object could not be saved in the specified bucket because an object by that name already exists

The transcoder role has full S3 access. Is there a way around this, or will I have to delete the files using the SDK every time before they are transcoded?

My create job:

    $result = $transcoder->createJob([
      'PipelineId' => Config::get('app.aws.ElasticTranscoder.PipelineId'),
      'Input' => [
        'Key' => $key
      ],
      'Output' => [
        'Key' => 'videos/'.$user.'/'.$output_key,
        'ThumbnailPattern' => 'videos/'.$user.'/thumb-{count}',
        'Rotate' => '0',
        'PresetId' => Config::get('app.aws.ElasticTranscoder.PresetId')       
      ],
    ]);

The Amazon Elastic Transcoder documentation states that this is the expected behavior: http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/job-settings.html#job-settings-output-key

If your workflow requires you to overwrite the same key, then you should have the job write its output to a unique key and then issue an S3 CopyObject operation to overwrite the older file.
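As a minimal sketch of that copy-then-clean-up step with the AWS SDK for PHP, assuming $s3 is the same S3Client as in the question and that the job was configured to write to a unique key ($uniqueKey and $finalKey are illustrative names):

    $bucket    = Config::get('app.aws.S3.bucket');
    $uniqueKey = 'videos/'.$user.'/'.uniqid().'-'.$output_key;   // unique key the job wrote to
    $finalKey  = 'videos/'.$user.'/'.$output_key;                // stable key you want to keep

    // ... after the transcoder job targeting $uniqueKey has completed ...

    // Copy the fresh output over the stable key, then remove the unique copy
    $s3->copyObject([
        'Bucket'     => $bucket,
        'Key'        => $finalKey,
        'CopySource' => $bucket.'/'.$uniqueKey,   // "source-bucket/source-key"
    ]);

    $s3->deleteObject([
        'Bucket' => $bucket,
        'Key'    => $uniqueKey,
    ]);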

If you enable versioning on the S3 bucket, then Amazon Elastic Transcoder will happily overwrite the same key with the transcoded version.
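Enabling versioning is a one-time bucket configuration. A sketch using the SDK's putBucketVersioning call (assuming SDK for PHP v3 and the same $s3 client) could look like:

    // One-time setup: turn on versioning so the transcoder can write over existing keys
    $s3->putBucketVersioning([
        'Bucket'                  => Config::get('app.aws.S3.bucket'),
        'VersioningConfiguration' => ['Status' => 'Enabled'],
    ]);

Keep in mind that with versioning enabled the old object versions still exist (and are billed) until a lifecycle rule or an explicit delete removes them.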

I can think of two ways to implement it:

  1. Create two buckets: one for temporary storage (where the file is uploaded) and another where the transcoded file is placed. After transcoding, once the new file has been created, you can delete the temporary file.
  2. Use a single bucket and upload the file with a suffix/prefix. Create the transcoded file in the same bucket, removing the prefix/suffix you used for the temporary name.

In both cases, you can use a Lambda function with S3 notifications to automate deletion of the uploaded files.
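As a rough sketch of option 2 without Lambda (the 'uploads/' prefix and variable names beyond those in the question are illustrative, and the job-completion check is only hinted at), the upload, job, and cleanup could look like:

    $bucket  = Config::get('app.aws.S3.bucket');
    $tempKey = 'uploads/'.$user.'/'.$output_key;    // temporary input key
    $outKey  = 'videos/'.$user.'/'.$output_key;     // final transcoded key

    // 1. Upload the source under the temporary key
    $s3->putObject([
        'Bucket'     => $bucket,
        'Key'        => $tempKey,
        'SourceFile' => $path,
    ]);

    // 2. Without bucket versioning the transcoder will not overwrite an existing
    //    output key, so remove any previous output (or make $outKey unique per job)
    $s3->deleteObject(['Bucket' => $bucket, 'Key' => $outKey]);

    // 3. Transcode from the temporary key to the final key
    $transcoder->createJob([
        'PipelineId' => Config::get('app.aws.ElasticTranscoder.PipelineId'),
        'Input'      => ['Key' => $tempKey],
        'Output'     => [
            'Key'      => $outKey,
            'PresetId' => Config::get('app.aws.ElasticTranscoder.PresetId'),
        ],
    ]);

    // 4. Once the job completes (poll readJob or use the pipeline's SNS notification),
    //    delete $tempKey — or let an S3-notification-triggered Lambda do it for you.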
