File Chunking/Partitioning is now available in 3.2

The newest feature added to Fine Uploader 3.2 is the ability to split large files into smaller chunks and send them over multiple requests. This has many advantages, but primarily it enhances the auto/manual retry feature and sets the stage for an upcoming feature that will allow users to automatically resume an incomplete upload from a previous session. Chunking also allows you to work around the hard request-size limits enforced by most browsers. For example, Chrome and Firefox cap a single file upload request at about 4 GB; by uploading the file in chunks, you can sidestep this limit.

Background

File chunking is made possible by the File API and, more specifically, the slice function on the File object prototype, inherited from the Blob object prototype. This function allows us to cut a file into parts. Each part is a Blob and is then sent with the XHR request. Currently, file chunking is possible in Chrome, Firefox, Safari in iOS 6, Safari in OS X, and Internet Explorer 10. Note that chunking is disabled in all Android browsers, as the Blob.slice implementation is broken in Android.
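To illustrate the idea, here is a minimal sketch of how an uploader might compute chunk boundaries and slice a Blob accordingly. The function name and chunk shape are illustrative, not Fine Uploader internals:

```javascript
// Sketch: compute the [start, end) byte ranges for each chunk of a file.
// A File inherits slice() from Blob, so each range maps to one Blob part.
function computeChunks(fileSize, chunkSize) {
    var chunks = [];
    for (var start = 0; start < fileSize; start += chunkSize) {
        chunks.push({ start: start, end: Math.min(start + chunkSize, fileSize) });
    }
    return chunks;
}

// In a browser, each part would then be cut out and sent in its own request:
//   var part = file.slice(chunk.start, chunk.end);
```

Each resulting Blob is small enough to stay under the browser's per-request limit, and a failed part can be retried without resending the whole file.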

Callbacks

I have also provided a new callback: onUploadChunk. This is invoked before each chunk is sent. The file ID, name, and some metadata specific to the chunk are all passed into the callback. You can call the setParams function inside this callback (as in all other callbacks) to adjust the parameters sent along with this chunk. Please see the callbacks section of the readme for more details on this callback.
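As a sketch of how this might be wired up (the exact option layout and the shape of the chunk metadata argument should be checked against the readme; the metadata field names below are assumptions):

```javascript
var uploader = new qq.FineUploader({
    element: document.getElementById('myUploader'),
    request: {
        endpoint: '/my/endpoint'
    },
    chunking: {
        enabled: true
    },
    callbacks: {
        // Invoked before each chunk is sent.  The third argument carries
        // chunk-specific metadata; see the readme's callbacks section for
        // its exact shape.
        onUploadChunk: function(id, name, chunkData) {
            // Attach an extra parameter to the request for this chunk.
            uploader.setParams({ customParam: 'value' }, id);
        }
    }
});
```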

Server-side Examples

I have updated the Java example to handle file chunking. Andrew Valums has updated the PHP example to handle chunking as well. These changes will address chunking with multipart encoded requests. Hopefully, other server-side examples will be updated in the near future by contributors who are more familiar with the other server-side languages represented in the server directory.

Server-side handling

If you are a potential contributor who would like to modify an existing server-side example to handle chunked files, or simply a Fine Uploader user who would like to make use of this new feature in your server-side language of choice, I’ll do my best to explain, in a language-agnostic manner, how best to handle these requests.

Each chunk is sent in order and the response is checked to determine success. If the response from the server indicates failure, the uploader will attempt to retry sending the file starting with the last failed chunk (assuming either auto or manual retry has been enabled). Various parameters are sent by Fine Uploader along with each chunked request. Please see the chunking.paramNames option documentation in the readme for more details about these parameters.

Generally speaking, you can determine whether a request refers to a file chunk by checking for the presence of a “qqpartindex” parameter. Other parameters standard in every chunked file request are “qqpartbyteoffset”, “qqchunksize”, “qqtotalfilesize”, and “qqtotalparts”. Note that the “qqtotalfilesize” parameter is also sent with ALL multipart-encoded requests that are initiated by the XHR uploader. Also, the “qqfilename” parameter is only sent along with chunked file requests that are multipart encoded. This parameter is important in this context since the filename reported in the Content-Disposition header has a value of “blob”, or an empty string, when a Blob is included in the request. So, in order to determine the original filename when dealing with a multipart-encoded request, you must read the “qqfilename” parameter.
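As a language-agnostic sketch (written here in Node-flavored JavaScript), the check might look like the following. The function name is hypothetical, and `params` is assumed to be the already-parsed form fields of the request:

```javascript
// Sketch: classify an incoming upload request by its Fine Uploader
// parameters.  `params` holds the parsed form fields as strings.
function describeUpload(params) {
    if ('qqpartindex' in params) {
        return {
            chunked: true,
            partIndex: parseInt(params.qqpartindex, 10),
            byteOffset: parseInt(params.qqpartbyteoffset, 10),
            totalParts: parseInt(params.qqtotalparts, 10),
            totalFileSize: parseInt(params.qqtotalfilesize, 10),
            // For multipart-encoded chunked requests, the original filename
            // must be read from qqfilename, since the Content-Disposition
            // header reports "blob" (or an empty string) for a Blob.
            filename: params.qqfilename
        };
    }
    return { chunked: false };
}
```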

Each chunk will be sent in a separate request. After processing the request, you should save the chunk in a temporary location. To avoid collisions with other chunks, name the temporary files after the “qquuid” parameter value. Note that this is a version 4 UUID, which should be unique enough for most, if not all, applications. If you are especially worried about collisions between UUIDs, you can combine the UUID with the original file name (“qqfilename”) to decrease the chance of collisions further. Each temporary chunk file should also be named with the chunk index.

After all chunks have been received for a file, you should then combine all chunks to arrive at the original file. This is typically done as part of handling the final chunked request. If, for some reason, you have lost one of the previous chunks, or one of the previous chunks is no longer valid, you can return a “reset” JSON property with a value of “true” in your response. When Fine Uploader receives such a property in a chunked request response, it will fail the upload and then restart the upload with the first chunk again on the next attempt (assuming auto or manual retry has been enabled).
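A minimal in-memory sketch of the assembly step, again in Node-flavored JavaScript. Real code would write each chunk to a temporary file rather than hold it in memory, but the naming convention (UUID plus part index) and the index-order concatenation are the essential ideas; the function names are hypothetical:

```javascript
// Sketch: store each chunk under "<uuid>_<partIndex>" and, once every
// part has arrived, concatenate them in index order to rebuild the file.
var store = {};  // stands in for a temporary directory on disk

function saveChunk(uuid, partIndex, data) {
    store[uuid + '_' + partIndex] = data;
}

function tryAssemble(uuid, totalParts) {
    var parts = [];
    for (var i = 0; i < totalParts; i++) {
        var chunk = store[uuid + '_' + i];
        if (!chunk) {
            return null;  // a chunk is missing -- a real server could
                          // respond with {"reset": true} here to make
                          // Fine Uploader restart from the first chunk
        }
        parts.push(chunk);
    }
    return Buffer.concat(parts);  // the original file
}
```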

You may use the Java example in the server folder as a reference.

Simple Client-Side Example

Here is an example of a typical, suggested setup (client-side) if file chunking is a desired feature:

var uploader = new qq.FineUploader({
    element: document.getElementById('myUploader'),
    request: {
        endpoint: '/my/endpoint'
    },
    retry: {
        enableAuto: true
    },
    chunking: {
        enabled: true
    }
});

The next planned feature is to allow users to resume a previously stopped or failed upload.

-Ray Nicholus
