
Fine Uploader 3.2

Fine Uploader 3.2 has been released, and I have done my best to include a bunch of improvements, along with some new features. The two new major features are file chunking and auto-resume.

As always, please see the downloads page to download the library.

Known issues

  • #595 – The inputName parameter is included in both the query string AND the request payload of XHR requests if forceMultipart is true (default) and paramsInBody is false (default). This will be fixed in 3.3 and will NOT affect most users. If it does negatively impact you, an easy workaround is to set the paramsInBody property of the request option to true (see the snippet following this list). Note that in 3.3 the default value of paramsInBody will be changed to true anyway. Please read more about this option in the options documentation and in the server-side readme.
  • #584 – “Processing…” status message does not appear while waiting for response after sending last byte of last chunk to server. This only affects FineUploader mode. I plan to address this in 3.3. Not a major issue, but it deserves to be addressed. Please see the FAQ in the readme for more information about existing inconsistencies among browsers as far as this “Processing…” message is concerned.
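For anyone affected by #595 in the meantime, the workaround is a one-line addition to the request option, shown here alongside the same endpoint setting used in the examples later on this page:

var uploader = new qq.FineUploader({
    element: document.getElementById('myUploader'),
    request: {
        endpoint: '/my/endpoint',
        // Workaround for #595: send parameters in the request body.
        // This will become the default in 3.3 anyway.
        paramsInBody: true
    }
});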

Features & Enhancements

  • #377 – Add support for optional file chunking. I have explained how this works, in detail, in a blog post.
  • #530 – Allow users to resume failed/interrupted uploads from previous sessions. I have explained how this works, in detail, in a blog post.
  • #575 – Add a qqtotalfilesize parameter to FormData multipart encoded requests so server-side code can easily determine the expected file size.
  • #569 – Added an API function to retrieve File object given an ID.
  • #566 – Add an API function that returns the size of a file, given the file’s ID.
  • #546 – Version-stamp the CSS file contained in the released ZIP file.
  • #541 – Allow developers to easily override the logic used to display the file name in FineUploader mode by contributing a formatFileName function option during initialization.
  • #509 – Allow developers to change the endpoint at any time via a new setEndpoint function. The concept is the same as for the setParams function (see the sketch following this list).
  • #111 – Ensure allowedExtensions validation check handles complex extensions, such as tar.gz, correctly.
  • #63 – Allow developers to easily localize the size symbols (MB, GB, etc) via a new sizeSymbols option.
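To give a feel for a couple of these additions, here is a short sketch showing a custom formatFileName function at initialization and a later call to setEndpoint. The truncation logic and the alternate endpoint are purely illustrative; see the options documentation and the instance methods section of the readme for the exact signatures.

var uploader = new qq.FineUploader({
    element: document.getElementById('myUploader'),
    request: {
        endpoint: '/my/endpoint'
    },
    // #541: control how file names are rendered in FineUploader mode.
    formatFileName: function (fileName) {
        // Illustrative only: truncate very long names for display.
        return fileName.length > 30 ? fileName.slice(0, 27) + '...' : fileName;
    }
});

// #509: point the uploader at a different endpoint at any time,
// in the same spirit as setParams.
uploader.setEndpoint('/my/other/endpoint');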

Bugs Fixed

  • #568 – onValidate is called too many times in browsers that do not support the File API. This fix also resulted in a breaking change to simplify the new “custom validators” feature. Please see the breaking changes section below for more details.
  • #567 – FileData objects passed into the onValidate callback were sometimes File objects instead.
  • #565 – “Upload Failed” message remains during manual retry attempt.
  • #574 – onLeave message appears after canceling in-progress upload w/ autoUpload set to false.
  • #562 – Processing graphic remains and dropping is no longer possible after attempting to drop multiple files w/ multiple option set to false. Thanks to twig for reporting this.
  • #548 – Default implementation of showMessage causes Safari to hang on iOS 6. Thanks to turntreesolutions for reporting this.

Breaking Changes

I introduced a new callback: onValidateBatch that takes an array of FileData objects and is called once with all files selected. The onValidate callback will then always contain a FileData object and will be called for each of the files selected in the batch. This should simplify the new custom validation feature a bit. Please see the callback entries in the readme for more details.
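Here is a rough sketch of the new callback shapes. The FileData properties and the return-false-to-reject convention shown below are assumptions based on the custom validation feature, and registration of the handlers is left out; consult the readme's callbacks section for the authoritative signatures.

// Called once per batch with an array of FileData objects.
function onValidateBatch(fileDataArray) {
    // e.g. reject the entire batch if too many files were selected at once
    if (fileDataArray.length > 10) {
        return false;
    }
}

// Called once per file with a single FileData object (never a raw File).
function onValidate(fileData) {
    // e.g. inspect the file name before allowing the upload
    if (/\.exe$/i.test(fileData.name)) {
        return false;
    }
}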

Also, forceMultipart now defaults to “true”. This really should not be a breaking change, as your server-side code should already properly handle multipart encoded requests.

Note About The Codebase

I continue to refactor the codebase in order to make it more maintainable and more JSHint compliant. I recently switched from JSLint to JSHint, as I have found JSHint to be much more practical for real-world applications due to its flexibility.

Major Features Planned For 3.3?

  • Copy and paste image upload. See #487 for more details.
  • Provide an optional delete button next to each file item (in FineUploader mode) that will send a DELETE request to the server. I may provide an API function for FineUploaderBasic users as well. See #382 for more details.

Important note about 3.3

I plan to change the default value of the paramsInBody option to true. I suspect that most developers expect parameters of multipart encoded requests to be located in the request body/payload. This should, over time, reduce some of the confusion I have seen in the support forums regarding request parameters.

If you have a question or a suggestion, please use the support forums or the issue tracker. Questions or issue reports will not be addressed in the comments section below.

As always, please let me know (in the forums or the issue tracker) if you have any suggestions for improvement, or any killer features you’d like me to add.

-Ray Nicholus

Resume uploads from previous sessions in 3.2

Suppose you’re sitting in a coffee shop, slowly uploading a very large file. Your lunch break is over and you have to head back to the office, but your upload is nowhere near complete. In version 3.2 of Fine Uploader, you can simply close your browser, head back to the office, and re-select or drop the file back into the uploader. It will pick up where you left off. Or perhaps you are uploading another large file and your PC blue-screens in the middle of the upload. Once you get your browser back up and running, simply drop or select the file again and Fine Uploader will resume the upload. The uses for such a feature are many.

High-level summary

The ability to resume an upload is dependent upon the 2nd-newest feature of Fine Uploader 3.2: file chunking. Before each chunk is sent to the server via a POST request, Fine Uploader creates a persistent cookie with all of the information required to resume the upload. This is done to cover termination of the browser session before the chunk has been successfully received by the server. After Fine Uploader has confirmed that the chunk has been successfully processed, the cookie is either deleted (if there are no more chunks left for this file) or updated with the metadata for the next chunk.

File resume is supported in all browsers that currently support chunking: iOS 6, Chrome, Firefox, Safari for Mac, and Internet Explorer 10. Again, file chunking, and therefore resume, is not supported on Android due to a bug in Android’s implementation of Blob.prototype.slice().

Configuring

I have provided the ability to enable or disable the resume feature (it’s disabled by default). Also, the number of days a resume cookie can live is configurable, but defaults to 7 days. Finally, you may specify an ID property that will be used to further distinguish resume cookies stored by the uploader. You may find this useful, if, for instance, you would like to tie resumable files to a specific user.

Note that you must also enable the chunking feature if you want to use resume. The qQuery section of the code now includes some more general-purpose functions used by the internal resume feature; they allow you to easily create, get, and delete cookies.
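Here is a configuration sketch with all three of these knobs. The enabled and cookiesExpireIn property names come straight from this post; the id property name is an assumption for the "ID property" mentioned above, so double-check the options documentation.

var uploader = new qq.FineUploader({
    element: document.getElementById('myUploader'),
    request: {
        endpoint: '/my/endpoint'
    },
    chunking: {
        enabled: true          // chunking is required for resume
    },
    resume: {
        enabled: true,         // resume is disabled by default
        cookiesExpireIn: 7,    // days a resume cookie may live (7 is the default)
        id: 'user-1234'        // assumed name for the per-user "ID property" above
    }
});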

Callbacks

I’ve also provided a callback, onResume, with some useful parameters, that is invoked before a resume begins. The file ID, along with the name of the file and some data specific to the chunk to be sent are passed to the callback. If you want to abort the resume attempt client-side and simply start uploading from the first chunk, you can return false in your callback handler. See the callbacks section of the readme for more details.
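As a sketch, an onResume handler might look like the following. The parameter names and the chunkData.partIndex property are illustrative; the exact signature is in the readme.

// Invoked before a resume begins; return false to discard the stored state
// and start uploading from the first chunk instead.
function onResume(id, fileName, chunkData) {
    if (!window.confirm('Resume the interrupted upload of ' + fileName + '?')) {
        return false;
    }
    console.log('Resuming file ' + id + ' at chunk ' + chunkData.partIndex);
}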

API

I have also added a new method to the API: getResumableFilesData. This allows you to obtain a list of files that are resumable in the current session. You may find this useful if you want to display a message to the user after the uploader is initialized. Please see the instance methods section of the readme for more details on this function.
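For example, once the uploader is initialized you might do something like this (the exact shape of the returned entries is documented in the readme; only the array length is used here):

var resumable = uploader.getResumableFilesData();

if (resumable.length > 0) {
    alert('You have ' + resumable.length + ' interrupted upload(s). ' +
        'Re-select or re-drop the same file(s) to resume them.');
}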

Server-side support

On the server side, there is very little you have to do if you are already accounting for chunked uploads. You can determine when a resume has been ordered by looking for a “qqresume” parameter in the request with a value of true. This parameter will be sent with the first request of the resume attempt.

It is important that you keep chunks around on the server until either the entire file has been uploaded and all chunks have been merged, or until the number of days specified in the `cookiesExpireIn` property of the resume option has passed. If, for some reason, you receive a request that indicates a resume has been ordered, and one or more of the previously uploaded chunks is missing or invalid, you can return a valid JSON response containing a “reset” property with a value of “true”. This will let Fine Uploader know that it should start the file upload from the first chunk instead of the last failed chunk.
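To make the resume-specific checks concrete, here is a rough, framework-agnostic Node.js-style sketch. It assumes your framework has already parsed the multipart fields into a plain object, and that chunks are stored on disk under names built from the UUID and part index (as suggested in the chunking post below); the paths and the respond helper are illustrative only.

var fs = require('fs');
var path = require('path');

var CHUNK_DIR = '/tmp/chunks';   // illustrative location only

// Returns true if it is safe to process the incoming chunk normally, or
// responds with { reset: true } and returns false if prior chunks are missing.
function checkResume(fields, respond) {
    if (fields.qqresume === 'true') {
        var uuid = fields.qquuid;
        var partIndex = parseInt(fields.qqpartindex, 10);
        for (var i = 0; i < partIndex; i++) {
            if (!fs.existsSync(path.join(CHUNK_DIR, uuid + '_' + i))) {
                // A previously uploaded chunk is gone; tell Fine Uploader
                // to start the file over from the first chunk.
                respond({ success: false, reset: true });
                return false;
            }
        }
    }
    return true;
}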

Basic client-side example

It’s really quite simple to start using the new resume feature. Here’s the simplest example I could come up with:

var uploader = new qq.FineUploader({
    element: document.getElementById('myUploader'),
    request: {
        endpoint: '/my/endpoint'
    },
    chunking: {
        enabled: true
    },
    resume: {
        enabled: true
    }
});

Since file upload resume is a new feature, I’m interested to hear any ideas from users who would like to make this feature even more useful. As always, if you have any input or discover any bugs, feel free to file an issue.

-Ray Nicholus

File Chunking/Partitioning is now available in 3.2

The newest feature added to Fine Uploader 3.2 is the ability to split large files into smaller chunks and spread them out over multiple requests. This has many advantages, but primarily it enhances the auto/manual retry feature and sets the stage for an upcoming feature that will allow users to automatically resume an uncompleted upload from a previous session. Chunking also allows you to overcome the hardcoded request size limits enforced by most browsers. For example, Chrome and Firefox limit a file upload request size to about 4 GB. By uploading the file in chunks, you can overcome this limit.

Background

File chunking is made possible by the File API and, more specifically, the slice function on the File object prototype, inherited from the Blob object prototype. This function allows us to cut up a file into parts. Each part is a Blob and is then sent with the XHR request. Currently, file chunking is possible in Chrome, Firefox, Safari on iOS 6, Safari on OS X, and Internet Explorer 10. Note that chunking is disabled in all Android browsers, as the Blob.slice implementation is broken in Android.
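As a minimal illustration of the primitive itself (this is not Fine Uploader code, and the 'fileInput' element id is made up; some 2013-era browsers expose vendor-prefixed variants such as webkitSlice and mozSlice):

// Cut a 10 MB slice out of a File selected via an <input type="file">.
var file = document.getElementById('fileInput').files[0];
var chunkSize = 10 * 1024 * 1024;
var firstChunk = file.slice(0, chunkSize);   // returns a Blob

// The Blob can then be sent as (part of) the body of an XHR POST.
var xhr = new XMLHttpRequest();
xhr.open('POST', '/my/endpoint');
xhr.send(firstChunk);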

Callbacks

I have also provided a new callback: onUploadChunk. This is invoked before each chunk is sent. The file ID, name, and some metadata specific to the chunk are all passed into the callback. You can make use of the setParams function on this callback (and all other callbacks) to make any adjustments to the parameters sent along with this chunk. Please see the callbacks section of the readme for more details on this callback.
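For example, an onUploadChunk handler that attaches an extra parameter to each chunk's request might look like this, assuming an uploader instance like the one in the example at the end of this post. The chunkData.partIndex property and the optional file-id argument to setParams are assumptions here; see the readme for the exact chunk metadata and signature.

// Invoked before each chunk is sent.
function onUploadChunk(id, fileName, chunkData) {
    uploader.setParams({
        description: 'chunk ' + chunkData.partIndex + ' of ' + fileName
    }, id);
}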

Server-side Examples

I have updated the Java example to handle file chunking. Andrew Valums has updated the PHP example to handle chunking as well. These changes will address chunking with multipart encoded requests. Hopefully, other server-side examples will be updated in the near future by contributors who are more familiar with the other server-side languages represented in the server directory.

Server-side handling

If you are a potential contributor who would like to modify an existing server-side example to handle chunked files, or if you are simply a user of Fine Uploader who would like to make use of this new feature in your server-side language of choice, I’ll do my best to explain how to best handle these requests in a language-agnostic manner.

Each chunk is sent in order and the response is checked to determine success. If the response from the server indicates failure, the uploader will attempt to retry sending the file starting with the last failed chunk (assuming either auto or manual retry has been enabled). Various parameters are sent by Fine Uploader along with each chunked request. Please see the chunking.paramNames option documentation in the readme for more details about these parameters.

Generally speaking, you can determine if a request refers to a file chunk by checking for the presence of a “qqpartindex” parameter. Other parameters standard with every chunked file request are “qqpartbyteoffset”, “qqchunksize”, “qqtotalfilesize”, and “qqtotalparts”. Note that the “qqtotalfilesize” parameter is also sent with ALL multipart encoded requests that are initiated by the XHR uploader. Also, the “qqfilename” parameter is only sent along with chunked file requests that are multipart encoded. This parameter is important in this context because the filename reported in the Content-Disposition header has a value of “blob”, or an empty string, when a Blob is included in the request. So, to determine the original filename when dealing with a multipart encoded chunked request, you must read the “qqfilename” parameter.

Each chunk will be sent in a separate request. After processing the request, you should save the chunk in a temporary location. To avoid collisions with other chunks, you should name the chunks after the “qquuid” parameter value. Note that this is a version 4 UUID, but it should be unique enough for most, if not all, applications. If you are especially worried about collisions between UUIDs, you can combine the UUID with the original file name (“qqfilename”) to decrease the chance of collisions further. Each temporary chunk file should also be named with the chunk index. After all chunks have been received for a file, you should then combine all chunks to arrive at the original file. This is typically done as part of handling the final chunked request. If, for some reason, you have lost one of the previous chunks, or one of the previous chunks is no longer valid, you can return a “reset” JSON property with a value of “true” in your response. When Fine Uploader receives such a property in the chunked request response, it will fail the upload and then restart the upload with the first chunk again on the next attempt (assuming auto or manual retry has been enabled).

You may use the Java example in the server folder as a reference.
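For a language-agnostic illustration of the flow described above, here is a rough Node.js-style sketch. It assumes your framework has already parsed the multipart request into a fields object (the qq* parameters) and a temporary file path for the uploaded blob; the directory locations and the respond helper are illustrative only.

var fs = require('fs');
var path = require('path');

var CHUNK_DIR = '/tmp/chunks';     // illustrative location only
var UPLOAD_DIR = '/tmp/uploads';   // illustrative location only

function handleChunk(fields, tempPath, respond) {
    var uuid = fields.qquuid;
    var partIndex = parseInt(fields.qqpartindex, 10);
    var totalParts = parseInt(fields.qqtotalparts, 10);
    var fileName = fields.qqfilename;   // the Content-Disposition name is just "blob"

    // Store this chunk under a name derived from the UUID and part index
    // so chunks from different files (and different parts) never collide.
    fs.renameSync(tempPath, path.join(CHUNK_DIR, uuid + '_' + partIndex));

    if (partIndex === totalParts - 1) {
        // Last chunk: verify every part is present, then stitch them together.
        for (var i = 0; i < totalParts; i++) {
            if (!fs.existsSync(path.join(CHUNK_DIR, uuid + '_' + i))) {
                // A chunk is missing; ask Fine Uploader to start over from chunk 0.
                return respond({ success: false, reset: true });
            }
        }
        var finalPath = path.join(UPLOAD_DIR, uuid + '_' + fileName);
        for (var j = 0; j < totalParts; j++) {
            var part = path.join(CHUNK_DIR, uuid + '_' + j);
            fs.appendFileSync(finalPath, fs.readFileSync(part));
            fs.unlinkSync(part);
        }
    }

    respond({ success: true });
}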

Simple Client-Side Example

Here is an example of a typical, suggested setup (client-side) if file chunking is a desired feature:

var uploader = new qq.FineUploader({
    element: document.getElementById('myUploader'),
    request: {
        endpoint: '/my/endpoint'
    },
    retry: {
        enableAuto: true
    },
    chunking: {
        enabled: true
    }
});

The next planned feature is to allow users to resume a previously stopped or failed upload.

-Ray Nicholus