
Cache Amazon S3 files in Service Worker for Offline usage

I'm using a service worker to allow offline access to my page. I am using workbox, but I think the issue is applicable to service workers in general.

The workflow is that the user clicks a button to download data for offline usage. This involves a number of files, which may or may not be stored on Amazon S3.

E.g., one could imagine that the code run when the user clicks the button is:

function cacheFilesForOfflineUse(files) {
    files.forEach(file => fetch(file));
}
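
As an aside, if the page needs to know when everything is cached, the same idea can be wrapped in Promise.all; the logging here is my own addition and not part of the original code:

// Sketch based on the snippet above: resolve once every file has been
// fetched (and therefore run through the caching route shown below).
function cacheFilesForOfflineUse(files) {
    return Promise.all(files.map(file => fetch(file)))
        .then(() => console.log('Files are available offline'))
        .catch(err => console.error('Failed to fetch at least one file', err));
}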

Then, in serviceWorker.js, something like:

workbox.routing.registerRoute(
    ({event}) => /* ... omitted ... */,
    new workbox.strategies.CacheFirst({cacheName: 'myFilesCache'})
);

is responsible for intercepting those fetches and storing the responses in the cache. This is, of course, simplified.
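
For illustration, the omitted match callback could be something along these lines; the /offline-data/ path prefix is invented for the example and is not from my actual setup:

// Hypothetical match callback for the route above: cache anything requested
// from under /offline-data/ on the same origin.
const matchOfflineFiles = ({url}) =>
    url.origin === self.location.origin &&
    url.pathname.startsWith('/offline-data/');

This would then be passed as the first argument to workbox.routing.registerRoute in place of the inline callback above.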

This mostly works, except for one specific case: if the file is behind a 302 redirect (which is the case when the file is stored on S3) and I try to download it (by setting document.location), I get this error in the console:

... a redirected response was used for a request whose redirect mode is not "follow"

And an error page is displayed.

There is a suggestion in Only in Chrome (Service Worker): '... a redirected response was used for a request whose redirect mode is not "follow"' to store a sanitized copy of the response when you get a redirected one, so I tried using that technique as a workbox plugin (cleanResponse is as in the linked post):

{
    cacheWillUpdate: async ({request, response, event}) => {
        if (response.redirected && response.ok) {
            // Sanitize redirects
            for (const key of response.headers.keys()) {
                console.log(key, response.headers.get(key));
            }
            return cleanResponse(response);
        }
        return response;
    },
}
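
For reference, the cleanResponse from the linked answer works roughly like this: it reads the body out of a clone and builds a fresh, non-redirected Response from it. A sketch (see the linked post for the exact code):

function cleanResponse(response) {
    const clonedResponse = response.clone();

    // Not all browsers expose Response.body as a stream, so fall back to
    // reading the whole body into memory as a blob.
    const bodyPromise = 'body' in clonedResponse
        ? Promise.resolve(clonedResponse.body)
        : clonedResponse.blob();

    return bodyPromise.then(body => new Response(body, {
        // Only the headers exposed on the original response get copied here,
        // which is exactly where my problem shows up.
        headers: clonedResponse.headers,
        status: clonedResponse.status,
        statusText: clonedResponse.statusText,
    }));
}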

The plugin almost works, but it has one big caveat: not all of the response headers are cloned. The only headers that get logged / copied are:

content-type: image/png
last-modified: Tue, 18 Jun 2019 12:57:10 GMT

Critically, this is missing the Content-Disposition: attachment header, which is required for the browser to treat the navigation as a download.

Is there any way around this, or have I run into some security limitation?

I figured out a workaround for my case. Maybe it will be helpful to someone else in the future.

The solution is to avoid using navigation to download the file. So I replaced:

function downloadFile(fileUrl) {
    document.location.assign(fileUrl);
}

with

function downloadFile(fileUrl, filename) {
    fetch(fileUrl).then(function (response) {
        response.blob().then(function (blob) {
            // Hand the blob to downloadjs, passing the filename explicitly
            // since Content-Disposition is no longer involved.
            download(blob, filename, response.headers.get('content-type'));
        });
    });
}

Where download is the function from this package: https://www.npmjs.com/package/downloadjs .
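
For completeness, a minimal sketch of the wiring around it; the import style and the example filename are placeholders rather than my exact code:

import download from 'downloadjs'; // exposes download(data, filename, mimeType)

// Hypothetical call site: since Content-Disposition no longer drives the
// download, the filename has to be passed in explicitly.
downloadFile('/offline-data/report.png', 'report.png');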

It appears to work okay. But if there is a cleaner answer I'd happily give away the check mark to that.
