
Ignoring content-type when doing PUT upload to S3 in browser?

Having (almost) successfully set up S3 uploads from the browser in my Django project, I have run into one last snag that I can't figure out: there doesn't seem to be any way to omit the content type when creating a signature to upload something to S3.

The reason it would be helpful to omit the content type is that in both Safari and Chrome, some files with uncommon extensions (even .zip won't work) give me a "The request signature we calculated does not match the signature you provided. Check your key and signing method" error. I believe this is because the browser cannot recognize the MIME type (at least whenever I print it out and get an error, it's blank).
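To see why a blank type breaks things: with S3's (v2) signing scheme, the Content-Type line is part of the string-to-sign, so the signature the server computes and the header the browser actually sends have to agree. A minimal sketch (hypothetical key and values, not the guide's code):

```python
import base64
import hmac
from hashlib import sha1

def s3_sign(secret_key, string_to_sign):
    # Signature v2: HMAC-SHA1 over the string-to-sign, base64-encoded
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), sha1).digest()
    return base64.b64encode(digest).decode().strip()

SECRET = 'example-secret-key'  # hypothetical key
# The third line of the string-to-sign is the content type:
signed_with_type = s3_sign(SECRET, "PUT\n\napplication/zip\n1400000000\nx-amz-acl:public-read\n/bucket/file.zip")
signed_blank = s3_sign(SECRET, "PUT\n\n\n1400000000\nx-amz-acl:public-read\n/bucket/file.zip")
# The two signatures differ, so if the server signs one content type and the
# browser sends another (or none), S3 rejects the PUT with SignatureDoesNotMatch.
```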

This is the guide that I followed: https://devcenter.heroku.com/articles/s3-upload-python , which worked well except when it can't determine the MIME type. Here's a copy of my slightly modified code as well:

    import base64
    import hmac
    import json
    import mimetypes
    import time
    import urllib
    from hashlib import sha1

    from django.http import HttpResponse

    AWS_ACCESS_KEY = 'X'
    AWS_SECRET_KEY = 'XX'
    S3_BUCKET = 'XX/X/X'

    object_name =  urllib.quote_plus(request.GET['s3_object_name'])
    print "object_name: ", object_name.lower()
    mime_type = request.GET['s3_object_type']
    # on some files this is blank, and those are the ones that give me 403 errors from S3
    print "mime Type: ", mime_type


    expires = int(time.time()+15)
    amz_headers = "x-amz-acl:public-read"
    # Generate the PUT request that JavaScript will use:
    put_request = "PUT\n\n%s\n%d\n%s\n/%s/%s" % (mime_type, expires, amz_headers, S3_BUCKET, object_name)
    # Generate the signature with which the request can be signed:
    signature = base64.encodestring(hmac.new(AWS_SECRET_KEY, put_request, sha1).digest())
    # Remove surrounding whitespace and quote special characters:
    signature = urllib.quote_plus(signature.strip())

    # Build the URL of the file in anticipation of its imminent upload:
    url = 'https://%s.s3.amazonaws.com/media/attachments/%s' % (S3_BUCKET, object_name)

    content = json.dumps({
        'signed_request': '%s?AWSAccessKeyId=%s&Expires=%d&Signature=%s' % (url, AWS_ACCESS_KEY, expires, signature),
        'url': url
    })
    print content

    # Return the signed request and the anticipated URL back to the browser in JSON format:
    return HttpResponse(content, mimetype='text/plain; charset=x-user-defined')

Basically this problem can be attributed to the fact that in the s3_upload.js that the guide provides, reading file.type comes out incorrect, so I modified this part of my code

    object_name =  urllib.quote_plus(request.GET['s3_object_name'])
    print "object_name: ", object_name.lower()
    mime_type = request.GET['s3_object_type']
    # on some files this is blank, and those are the ones that give me 403 errors from S3
    print "mime Type: ", mime_type

to

    mime_type = request.GET['s3_object_type']
    print "mime Type: ", mime_type
    mtype, encoding = mimetypes.guess_type(object_name)
    print "guessed mime type: ", mtype
    mime_type = mtype

and then changed content to

    content = json.dumps({
        'signed_request': '%s?AWSAccessKeyId=%s&Expires=%d&Signature=%s' % (url, AWS_ACCESS_KEY, expires, signature),
        'url': url,
        'mime_type': mime_type
    })

which passed it back to the JavaScript. From there I just modified the script to use my mime_type as the Content-Type header when doing the PUT, instead of what it had been doing (using file.type).
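One caveat with this server-side fallback: mimetypes.guess_type() returns (None, None) for extensions it doesn't know, which would put you right back at a blank content type. A small sketch of a guarded version (the 'application/octet-stream' fallback is my addition, not from the guide):

```python
import mimetypes

def guess_mime(object_name):
    # guess_type() returns (None, None) when it does not recognise the
    # extension; fall back to a generic binary type so the string-to-sign
    # and the eventual Content-Type header are never blank.
    mtype, _encoding = mimetypes.guess_type(object_name)
    return mtype or 'application/octet-stream'
```

For example, guess_mime('archive.zip') gives 'application/zip' from the standard type map, while a made-up extension falls back to 'application/octet-stream'.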
