I've written a download script with JavaScript and PHP. It works, but when I download a large file (for example a 1 GB zip file) the request takes far too long to complete. I think the problem is how I read the file. If that's the case, any idea how to make it faster?
Note: I need to send the headers to force the download for any file type (images, PDFs, and so on).
The JS is very simple:
function downloadFile(file) {
    document.location.href = "script.php?a=downloadFile&b=" + file;
}
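A hedged side note, not part of the original script: if the file name can contain spaces or special characters, it should be URL-encoded before being appended to the query string, for example with `encodeURIComponent`:

```javascript
// Sketch: build the download URL with the file name URL-encoded,
// so names containing spaces, '&' or '#' survive the round trip.
function downloadFile(file) {
    document.location.href =
        "script.php?a=downloadFile&b=" + encodeURIComponent(file);
}
```

Without the encoding, a name like `my file&more.zip` would break the `b` parameter at the `&`.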
The PHP is simple, too:
function downloadFile($sFile) {
    # Main function
    header('Content-Type: ' . mime_content_type($sFile));
    header('Content-Description: File Transfer');
    header('Content-Length: ' . filesize($sFile));
    header('Content-Disposition: attachment; filename="' . basename($sFile) . '"');
    readfile($sFile);
}

switch ($_GET['a']) {
    case 'downloadFile':
        downloadFile($_GET['b']);
        break;
}
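One thing worth flagging in this dispatcher: `$_GET['b']` goes straight into `readfile()`, so a request like `?b=../../etc/passwd` could read arbitrary files. A minimal sketch of one way to restrict it (the `downloads/` directory name is an assumption for illustration, not from the original script):

```php
<?php
// Sketch: only serve files from one fixed directory, stripping any
// directory components from the client-supplied name.
// The "downloads" base directory is a hypothetical example.
function safeDownloadPath($name, $baseDir = 'downloads')
{
    // basename() drops path parts such as "../" from the request
    $file = $baseDir . '/' . basename($name);
    return is_file($file) ? $file : null;
}
```

The caller would then pass the result to `downloadFile()` only when it is non-null.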
I guess buffering is the issue with large files. Try reading the file in small chunks (say, a megabyte) and call flush() after printing each chunk to push the output to the client.
EDIT: okay, here's a code example you can try:
function downloadFile($sFile) {
    # Main function
    if ($handle = fopen($sFile, "rb")) {
        header('Content-Type: ' . mime_content_type($sFile));
        header('Content-Description: File Transfer');
        header('Content-Length: ' . filesize($sFile));
        header('Content-Disposition: attachment; filename="' . basename($sFile) . '"');
        while (!feof($handle)) {
            print fread($handle, 1048576); // 1 MB chunks
            flush();
        }
        fclose($handle);
    } else {
        header('HTTP/1.1 404 Not Found');
        header('Content-Type: text/plain');
        print "Can't find the requested file";
    }
}
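One caveat about the loop above: flush() pushes PHP's write buffer to the web server, but if userland output buffering (ob_*) is active, each chunk may still be held in memory instead of reaching the client. A minimal sketch of clearing any such buffers before streaming (an extra precaution, not something the original code does):

```php
<?php
// Sketch: close every nested userland output buffer before streaming,
// so each fread() chunk is sent to the client immediately.
set_time_limit(0);            // large downloads can exceed max_execution_time
while (ob_get_level() > 0) {  // unwind nested ob_start() buffers, if any
    ob_end_clean();
}
// ...then send the headers and run the fread()/print/flush() loop...
```

Run this before the first header() call, since discarding a buffer after output has started can trigger "headers already sent" problems.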