Fine Uploader S3: Upload Directly to Amazon S3 from your Browser

Table of Contents

  1. What is This and Why is This Important?
    1. Increased scalability
    2. Less server-side complexity
    3. Save bandwidth
  2. Browser Support
  3. Supported Features
  4. Step-by-Step Guide to Integrating Fine Uploader S3 Into Your Web Application
    1. Configuring your S3 buckets
      1. Editing your bucket’s CORS config
      2. Basic CORS config values
      3. Securing your bucket
        1. CORS restrictions
        2. IAM restrictions
    2. Client-side integration
      1. Simple upload support
      2. Supporting more advanced features
        1. “Successfully uploaded to S3” server notifications
        2. Dynamic key names
        3. Including user metadata
        4. Error message display
        5. File validation
        6. Auto and manual failed upload retries
        7. Chunking & auto-resume
        8. Delete files
        9. Upload via paste
    3. Server-side integration
      1. Signing policies
        1. Policy document format
        2. Verifying the auto-generated policy
        3. Responding to the signature request
      2. Supporting IE9 and older
      3. Signing chunked/REST/multipart API requests
      4. Delete file support
      5. “Successfully uploaded to S3” server notifications
  5. Cross-Domain (CORS) Environment Support
    1. Modern browsers
    2. Internet Explorer 9 and older
  6. Conclusion

TL;DR

There’s quite a bit of detail in this post, and I encourage you to read it all.  If you request support in the future and it is clear that you have not taken the time to read this post, you will likely be directed back to it.  If you really want to jump in headfirst and are already comfortable with all of the concepts surrounding this feature, have a look at the following links to get started:

Also, there is a live, fully functional demo of this feature on Fine Uploader’s home page that allows you to play with Fine Uploader S3 by uploading files to one of our S3 buckets. The demo even allows you to view the file after it has been uploaded, or delete it via Fine Uploader’s UI. Furthermore, some additional options have been enabled in the demo, such as various validation rules.

Finally, please don’t be overwhelmed by the length of this blog post. All of this information is here because we are determined to be complete and to document anything and everything you might want to be aware of when using Fine Uploader S3. As always, don’t hesitate to open a support request, file a bug, or submit a feature request. We are here to help you integrate Fine Uploader S3 into your project! See http://fineuploader.com/support for more details.

What is This and Why is This Important?

Starting with Fine Uploader 3.8, you have the option to upload files directly to your S3 buckets client-side. We’re calling this Fine Uploader S3.  Previously, you would have to send file bytes to your local server (and handle the associated request(s)) and then send them up to S3.  This feature cuts out the middleman (your local server) when dealing with file bytes.

Increased scalability

Since your server no longer has to directly handle uploaded files, this makes it easier to scale your web application.  S3 deals with large and small workloads quite well.  So, let Amazon worry about this!

Less server-side complexity

Handling multipart-encoded requests that Fine Uploader sends for each file is complicated enough.  Once you turn on chunking and auto-resume, things get a bit more complicated.  You have to keep track of the chunks.  You must make sure that you keep file chunks around on your server long enough to properly support the auto-resume feature.  You must be sure that you don’t accidentally run out of space server-side, etc, etc.  Or, you can let S3 handle all of this for you.

Save bandwidth

If you aren’t uploading files directly to S3 client-side, you must receive the files on your local server and then send the same exact bytes to your S3 bucket.  That seems a bit inefficient, doesn’t it?  Your files are destined for S3 anyway, why not just send them there directly?

Browser Support

Direct-to-S3 uploads via the browser are supported in ALL browsers that Fine Uploader already supports for “traditional” uploads.  Yes, this includes IE7.

Note that, if you do need to support IE7, you will also have to include Douglas Crockford’s json2.js in your document.  This is required as Fine Uploader must stringify the JSON policy document it generates when sending the policy to your server for signing.  IE7 does not have any native support for converting JavaScript objects into JSON, or vice-versa.  A non-trivial amount of code is required to do this correctly, which is why it is simply easier to rely on json2.js for this task.
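As a rough sketch of the feature check behind this requirement (the helper name `needsJson2Shim` is mine, not part of Fine Uploader):

```javascript
// Hypothetical helper: decide whether json2.js must be loaded.
// IE7 has no native JSON object at all, so both checks fail there.
function needsJson2Shim(globalObject) {
    return typeof globalObject.JSON === "undefined" ||
        typeof globalObject.JSON.stringify !== "function";
}

// In your page, you might then conditionally load the shim:
// if (needsJson2Shim(window)) { /* insert a <script> tag for json2.js */ }
```
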

Supported Features

All features offered in the “traditional” uploader are also offered by Fine Uploader’s S3 uploader.  This, of course, includes:

  • chunking
  • auto-resume
  • auto & manual retry
  • editing filenames before the upload
  • auto and manual upload mode
  • deleting uploaded files (via your local server)
  • drag & drop
  • upload via paste
  • upload images via mobile devices
  • cross-origin support

Step-by-Step: Integrating Fine Uploader S3 Into Your Web App

Allowing direct-to-S3 uploads with Fine Uploader is quite simple.  The process is as follows:

  1. Configure your S3 bucket(s).
  2. Write your “glue” code to create and configure a Fine Uploader S3 instance client-side.
  3. Include simple code on your server to sign requests and optionally handle other requests sent by Fine Uploader.

Configuring your S3 bucket(s)

If you want to jump right into this, and already know a bit about CORS and configuring your bucket, take a look at the section in the server-side documentation that provides a sample CORS configuration, along with some information on modifying the sample to suit your needs.  Otherwise, read on.

By default, Amazon allows cross-origin GET requests on your S3 bucket.  This is enforced via an XML document in the CORS configuration section of your bucket in S3’s administrator console.  In order to allow direct-to-S3 uploads from Fine Uploader, you will need to extend the default CORS configuration a bit.

Editing your bucket’s CORS configuration

Let’s use a test bucket I created in the Fine Uploader AWS account during development of this feature as an example.  Your buckets are listed under the “All Buckets” section of the S3 console.  Your page may look something like this (with your own buckets present instead of Fine Uploader’s development buckets):

S3 console view

Next, click on the bucket you wish to edit, and then click on the “Properties” button on the right side of the page:

S3 bucket “Properties” button

After you do this you will see a set of properties associated with this bucket on the right side of your page.  Expand the “Permissions” section:

S3 bucket permissions

Then, click on the “Add CORS Configuration” button, which will expose an overlay that houses your bucket’s CORS configuration:

S3 CORS configuration section

After you are done making changes to your configuration, be sure to click the “Save” button. After you do this, your changes will be live.

Basic CORS configuration values

Fine Uploader requires you to at least specify some very basic CORS rules for any S3 buckets that will receive files from the library. Since Fine Uploader utilizes ajax requests to upload files in many instances, cross-origin request restrictions are an issue. Fortunately, modern browsers provide support for the CORS spec, which describes how browsers may provide support for cross-domain ajax requests. S3 permits CORS (cross-origin) requests from these browsers, with the proper configuration. Without the proper configuration, these requests will be rejected by S3.

If you do not plan on utilizing the chunking feature in Fine Uploader, your S3 CORS configuration can be quite simple.  In this instance, you need nothing more than this:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>POST</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

If you turn on the chunking (and possibly the auto-resume) feature, you will need to, at least, include the following XML in the CORS configuration section of your S3 bucket:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <ExposeHeader>ETag</ExposeHeader>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

Tightening up ajax request restrictions on your S3 bucket

CORS Restrictions

The AllowedOrigin tag allows you to restrict which domains Amazon should allow requests from. The wildcard value for the AllowedOrigin tags in the earlier examples will allow ajax requests from any domain. You may want to consider replacing the AllowedOrigin wildcard value with something more restrictive. Any domains not specified here will be rejected if they attempt to make a client-side ajax request. If you know your Fine Uploader instance will be hosted only at http://foo.bar.com, you should replace the wildcard AllowedOrigin tag value with this:

<AllowedOrigin>http://foo.bar.com</AllowedOrigin>

The AllowedHeader tag allows you to restrict which headers are acceptable on incoming ajax requests. The wildcard value in the earlier examples tells Amazon to allow any headers on otherwise acceptable ajax requests. You may also want to consider replacing the AllowedHeader wildcard value with something more specific. If you do, you must, at a minimum, replace the wildcard tag with the following tags:

<AllowedHeader>content-type</AllowedHeader>
<AllowedHeader>origin</AllowedHeader>

If you enable chunking, you will need to add the following tags as well:

<AllowedHeader>x-amz-acl</AllowedHeader>
<AllowedHeader>x-amz-meta-qqfilename</AllowedHeader>
<AllowedHeader>x-amz-date</AllowedHeader>
<AllowedHeader>authorization</AllowedHeader>

If you intend to attach any user metadata to the files uploaded to S3 via the setParams API method or the params property of the request option AND you have the chunking feature enabled, you will need to include additional AllowedHeader tags. If the parameter names cannot all be known ahead of time, you will need to use a wildcard value for the AllowedHeader tag (as displayed in the earlier examples). However, if you do know these parameter names ahead of time, you can specify them in your CORS configuration file if you want to ensure Amazon blocks any ajax requests that include unexpected headers. Each parameter name passed to Fine Uploader will need to be included in an AllowedHeader entry, with “x-amz-meta-” prepended to the parameter name. For example, if you know your app will associate “foo” and “bar” parameters with some or all of your files, you will need to include the following entries in your bucket’s S3 CORS configuration:

<AllowedHeader>x-amz-meta-foo</AllowedHeader>
<AllowedHeader>x-amz-meta-bar</AllowedHeader>

IAM Restrictions

You should strongly consider provisioning a pair of keys with very restrictive permissions to be used specifically by Fine Uploader S3 client-side. This involves creating a new IAM group with restricted permissions. You must then create an IAM user, assign the user to the group you just created, and then pass the public key to Fine Uploader via the request.accessKey option while storing your secret key server-side for the purposes of signing requests. The only permission required by the Fine Uploader user is “s3:PutObject”.

Here’s a simple example, assuming our bucket name is “fineuploadertest”:

Step 1: Create an IAM group for client-side use only

You can create a new group by clicking on the “Create New Group” button in the IAM groups section of your AWS console.
create new IAM group

Then name the group (“uploads-client”, for example) and specify its permissions. Select “Custom Policy”, name your policy, and then paste in the following:

{
  "Version":"2012-10-17",
  "Statement":[{
     "Effect":"Allow",
     "Action":"s3:PutObject",
     "Resource":"arn:aws:s3:::fineuploadertest/*"
   }]
}

set permissions
Click “Continue” and then “Create Group”.

Step 2: Create a Fine Uploader S3 user

Now, you must associate a user with the group you just created. Start by clicking the “Create New Users” button in the users section of the IAM console.
create user

Specify a user name, click “Create” and then be sure to click “Download Credentials” on the last step of the wizard. You will need these later.
download credentials

Step 3: Associate the new user with the new group

Click on the user you created in the IAM user’s console, then click on the “Add User to Groups” button at the bottom of the page.
add users to groups

Select the new group you created, then click “Add to Groups”.

Step 4: Start using the Fine Uploader S3 user’s keys

Finally, pass the public key to Fine Uploader via the request.accessKey option while storing your secret key server side for the purposes of signing requests. Remember that you downloaded the keys/credentials back in Step 2.

Client-side integration

If you’re an existing Fine Uploader user, you’re certainly familiar with writing “glue code” (JavaScript) to create a Fine Uploader instance on your page, pass appropriate configuration options, register event handlers, and call API methods. This section does not assume any previous experience with Fine Uploader, but it is also useful for more experienced users.

Don’t worry, setting up Fine Uploader S3 client-side is a pretty simple task, regardless of the number of features you wish to use, even if you don’t want to use jQuery!

Simple upload support only

First, let’s go over setting up your client-side code for web applications with the most basic needs (just simple upload support). This will allow Fine Uploader to upload files directly to S3 in all supported browsers, without any of the bells and whistles associated with some of the more advanced features of the library.

This first set of examples assumes you are using the default UI created by Fine Uploader. The default UI is customizable, but you may want to create your own entirely unique UI via FineUploaderBasic-S3. I’ll provide notes for FineUploaderBasic-S3 users at the end of the examples.

For all examples, it is assumed you have an element with an ID of “fineUploader” present somewhere in your document. Fine Uploader will use this element as a container for any DOM elements it creates. If you are using jQuery, it will also attach an instance of the Fine Uploader S3 jQuery plug-in to that element. Note that if you intend to use multiple instances of Fine Uploader on a page, you will need to adjust the ID to ensure it is unique on your page.

For jQuery users:

$('#fineUploader').fineUploaderS3({
    request: {
        endpoint: "mybucket.s3.amazonaws.com",
        accessKey: "MY_AWS_PUBLIC_ACCESS_KEY"
    },
    signature: {
        endpoint: "/s3/signatureHandler"
    },
    iframeSupport: {
        localBlankPagePath: "success.html"
    }
});

You can read more about using the jQuery plug-in wrapper in the documentation.

Non-jQuery users (native javascript-only):

var uploader = new qq.s3.FineUploader({
    element: document.getElementById("fineUploader"),
    request: {
        endpoint: "mybucket.s3.amazonaws.com",
        accessKey: "MY_AWS_PUBLIC_ACCESS_KEY"
    },
    signature: {
        endpoint: "/s3/signatureHandler"
    },
    iframeSupport: {
        localBlankPagePath: "success.html"
    }
});

In the above examples, you will need to adjust your endpoint to match the URL of your S3 bucket. All endpoint formats recognized by Amazon are supported, such as “{bucketname}.s3.amazonaws.com” and “s3.amazonaws.com/{bucketname}”, as well as a custom domain that properly points to your S3 bucket. SSL is also supported, in which case your endpoint address must start with https://.

You will also need to include your specific AWS access key as a value for the accessKey property above. This is your public AWS key, NOT your secret key. Also, this should be the public key for the IAM user created specifically for Fine Uploader S3, and not your main account key. Your secret key should remain a secret, server-side. Your access key(s) can be found on the security credentials page of your AWS account. Once on that page, you can create new keys or access existing keys under the “Access Keys” section:
Access Keys console

The signature.endpoint must contain a path to your server where Fine Uploader can send policy documents and request header strings. This endpoint must sign these items using your AWS secret key and include the signature in the response. Signing is discussed more in the server-side integration section of this blog post.

Finally, the iframeSupport.localBlankPagePath value must point at a path on the same origin/domain as the one hosting your Fine Uploader instance. This endpoint needs to be nothing more than an empty HTML file. Fine Uploader S3 requires this if you plan on supporting IE9 or older. The reason for this is explained a bit more in the implementation details section at the end of this post.

Client-side setup for support of some optional features

The previous section provided a simple example and explanation for client-side setup of a simple instance of Fine Uploader without any optional features enabled. This section will describe the other extreme: an uploader instance with most optional features enabled. The beauty of Fine Uploader S3 is that your server-side code (covered later) only requires a few trivial additions in order to support all of these features, as Amazon takes care of most of the work for you.

For jQuery users:

$('#fineUploader').fineUploaderS3({
    request: {
        endpoint: "http://mybucket.s3.amazonaws.com",
        accessKey: "MY_AWS_PUBLIC_ACCESS_KEY",
        params: {category: "foobar"}
    },
    signature: {
        endpoint: "/s3/signatureHandler"
    },
    uploadSuccess: {
        endpoint: "/s3/uploadSuccessful"
    },
    iframeSupport: {
        localBlankPagePath: "success.html"
    },
    objectProperties: {
        key: function(fileId) {
            var keyRetrieval = new qq.Promise(),
                filename = $("#fineUploader").fineUploader("getName", fileId);

            $.post("createKey.html", {name: filename})
                .done(function(data) { keyRetrieval.success(data.key); })
                .fail(function() { keyRetrieval.failure(); });

            return keyRetrieval;
        }
    },
    validation: {
        allowedExtensions: ["gif", "jpeg", "jpg", "png"],
        acceptFiles: "image/gif, image/jpeg, image/png",
        sizeLimit: 5000000,
        itemLimit: 3
    },
    retry: {
        enableAuto: true
    },
    chunking: {
        enabled: true
    },
    resume: {
        enabled: true
    },
    deleteFile: {
        enabled: true,
        endpoint: "/fileHandler"
    },
    paste: {
        targetElement: $(document),
        promptForName: true
    }
});

Non-jQuery users (native javascript-only):

var uploader = new qq.s3.FineUploader({
    element: document.getElementById("fineUploader"),
    request: {
        endpoint: "http://mybucket.s3.amazonaws.com",
        accessKey: "MY_AWS_PUBLIC_ACCESS_KEY",
        params: {category: "foobar"}
    },
    signature: {
        endpoint: "/s3/signatureHandler"
    },
    uploadSuccess: {
        endpoint: "/s3/uploadSuccessful"
    },
    iframeSupport: {
        localBlankPagePath: "success.html"
    },
    objectProperties: {
        key: function(fileId) {
            var keyRetrieval = new qq.Promise(),
                filename = encodeURIComponent(uploader.getName(fileId)),
                xhr = new XMLHttpRequest();

            xhr.onreadystatechange = function() {
                if (xhr.readyState === 4) {
                    var status = xhr.status,
                        key = xhr.responseText;

                    if (status !== 200) {
                        keyRetrieval.failure();
                    }
                    else {
                        keyRetrieval.success(key);
                    }
                }
            };

            xhr.open("POST", "createKey.html?filename=" + filename);
            xhr.send();

            return keyRetrieval;
        }
    },
    validation: {
        allowedExtensions: ["gif", "jpeg", "jpg", "png"],
        acceptFiles: "image/gif, image/jpeg, image/png",
        sizeLimit: 5000000,
        itemLimit: 3
    },
    retry: {
        enableAuto: true
    },
    chunking: {
        enabled: true
    },
    resume: {
        enabled: true
    },
    deleteFile: {
        enabled: true,
        endpoint: "/fileHandler"
    },
    paste: {
        targetElement: document,
        promptForName: true
    }
});

Again, the above examples represent somewhat of an advanced setup. Also, as you can see, jQuery makes your life a bit easier, so use it if you can. Some of the properties of the request option were discussed in the previous section. Let’s step through the new features and options enabled in the above examples:

Fine Uploader S3 can notify your server directly when a file has been uploaded to S3 (uploadSuccess.endpoint)

If you specify this option, Fine Uploader S3 will send a POST request to your server that includes the relevant key name, UUID, bucket, and filename. This can be helpful if you need to perform some server-side tasks related to the file after it is safely stored in your S3 bucket. You can also perform additional checks on the file in S3 at this point, if you wish. Should any of your checks indicate a problem, you can alert Fine Uploader S3 via your server’s response, and the uploader will declare the upload a failure.

Specifying the object (file) key for S3 (objectProperties.key)

As you can see in the S3 options, you can ask Fine Uploader S3 to use the UUID it generates for the file as the object key (key: "uuid", which is the default), the filename (key: "filename"), or you can specify a function (as in the above examples) where you determine the key name for each file on-demand. Your function will be called once for each file, just before Fine Uploader attempts to upload it for the first time. Fine Uploader S3 will pass the file ID as a parameter when invoking your function.

Please understand that use of the filename as S3 object key is strongly discouraged, as the filename is not guaranteed to be unique. If a user uploads a “foo.jpeg” and another user uploads “foo.jpeg” to the same bucket, the last upload will overwrite the existing “foo.jpeg” in your bucket if the filename is the sole identifier of the object key. This is especially problematic if you are supporting iOS devices, as iOS uses the same name for all image files (image.jpg).

When your function is invoked, you can either return the key name immediately, based on some simple logic embedded in your client-side code, or you can ask your server via ajax to create a key name. In the latter case, you must return a qq.Promise. In fact, any non-blocking/async calls required to generate the key in this function require that your function observe the promise contract. Fine Uploader will delay further handling of that file (but not block the UI thread) until the promise is fulfilled via a call to the promise’s “success” or “failure” methods. You may want to utilize this approach if your server has to, for example, create or look up an item in the database in order to determine the object’s key name.

Associating user metadata with each object in S3 (request.params)

S3 allows you to store “user metadata” with each object in your bucket(s). This metadata can be retrieved via any one of the AWS SDKs. It is also made available as headers in the response to a simple GET request for the object. In the latter case, the user metadata names will be prefixed by Amazon with “x-amz-meta-”.

Fine Uploader S3 converts any parameters specified via the request.params option or via the setParams API method into “user metadata”. Note that the values of your parameters will be URL encoded by Fine Uploader S3 before they are associated with the object in your S3 bucket.
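To make the naming and encoding behavior concrete, here is a sketch of the transformation described above (the helper `toMetadataHeader` is hypothetical, not a Fine Uploader API):

```javascript
// Hypothetical illustration: a parameter name gains the "x-amz-meta-" prefix,
// and its value is URL encoded before being stored with the S3 object.
function toMetadataHeader(paramName, paramValue) {
    return {
        name: "x-amz-meta-" + paramName,
        value: encodeURIComponent(paramValue)
    };
}
```

For example, a “category” parameter with the value “foo bar” would be stored under “x-amz-meta-category” with the value “foo%20bar”.
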

Displaying error messages for your users (failedUploadTextDisplay)

When a file ultimately fails, Fine Uploader S3 will extract the failure reason from the S3 response or, failing that, provide a canned error message based on some error detection logic in the code, and display this message next to the failed item. By default, Fine Uploader simply displays “Upload Failed” next to the failed item. Setting the mode property of this option to “custom” instructs Fine Uploader S3 to attempt to display a more specific message based on the failure.

Validation rules

Fine Uploader S3 allows you to optionally put restrictions on files submitted by your users. In the above examples, we are

  • Restricting the allowable file extensions via the validation.allowedExtensions option.
  • Restricting the types of files that are selectable in the file chooser dialog (if the browser supports this) via the validation.acceptFiles option.
  • Limiting the maximum size of any selected files to 5 MB (if supported by the browser) via validation.sizeLimit. Note that Fine Uploader S3 will ALSO ask AWS (server-side) to enforce any size limits you have specified, but only for “simple” (non-chunked) uploads. There doesn’t appear to be a way to ask AWS to enforce this for chunked (multipart) uploads.
  • Preventing users from uploading more than a total of 3 files in this session via validation.itemLimit.

Support for auto & manual retry of failed uploads

If enabled, Fine Uploader S3 will automatically retry a failed upload a number of times before giving up.  After the automatic retries have been exhausted, Fine Uploader S3 will allow the user to manually request a retry via the default UI.  You can also programmatically issue retry requests via Fine Uploader S3’s API.

Support for file partitioning/chunking & auto-resume of interrupted/failed uploads

If supported by the browser (not IE9 and older), Fine Uploader S3 will optionally split large files into parts and send each part separately. This is a life-saver if a failure occurs midway through a large file (due to loss of connection, etc). In that scenario, you don’t have to start the entire file over. Fine Uploader S3 will retry starting with the failed chunk.

Building on this, we have the auto-resume feature. If enabled, Fine Uploader will let you pick up where you left off with a file in another session. Suppose you are in the middle of a large file upload, and either your computer/browser suddenly crashes, or you simply need to resume the upload at a later time. Fine Uploader S3 stores information about the file’s progress in your browser, and will read it back and resume the upload where you left off when you select or drop the file again in a future session.

The default chunk size for Fine Uploader S3 is 5 MiB. This is the minimum chunk size required by S3. If your file is smaller than this size, the upload will be a “simple” (non-chunked) upload. Also, be aware of this S3 restriction if you modify the default chunk size in Fine Uploader S3.
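To illustrate the 5 MiB rule, here is a small sketch (the helper `partCount` is illustrative only, not part of Fine Uploader):

```javascript
// S3 requires every part of a multipart upload, except the last,
// to be at least 5 MiB.
var S3_MIN_PART_SIZE = 5 * 1024 * 1024;

// Hypothetical helper: how many upload requests a file of a given size produces.
function partCount(fileSizeBytes, partSizeBytes) {
    if (partSizeBytes < S3_MIN_PART_SIZE) {
        throw new Error("S3 rejects multipart uploads with parts smaller than 5 MiB");
    }
    // Files no larger than one part are sent as a "simple" (non-chunked) upload.
    return Math.max(1, Math.ceil(fileSizeBytes / partSizeBytes));
}
```

So an 11 MiB file with the default chunk size is sent as three parts, while a 1 KiB file is sent as a single simple upload.
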

Deleting an uploaded file

If you want to allow your users to delete files already uploaded in the current session, you should enable this feature. If using the default UI, a “delete” button will appear next to each successfully uploaded file. You may, as always, utilize the Fine Uploader S3 API to delete a file as well. Note that minimal server-side code is required to handle this feature, as delete file requests are sent to your local server instead of Amazon S3. This is due to the fact that it is not possible to send delete requests directly to S3 via the browser in IE9 and older. See the server-side integration section for details on handling such requests.

Uploading images via paste

If you would like to allow users to upload images directly to S3 by simply pasting them onto your page, enable this feature and set the targetElement to any element on your page that should receive the paste event. Fine Uploader S3 will take care of the rest for you! You can also prompt the user to provide a name (if using the default UI) via a dialog whenever an image is pasted, via the promptForName property. Note that this feature is currently only available in Chrome.

Of course, there are many other features you may enable, and many other ways to configure Fine Uploader S3 client-side. The above examples only cover a portion of the available options. See the links at the start of the client-side integration section for more details.

Server-side integration

Fine Uploader S3 and Amazon handle the majority of the work for you. However, in order to support uploads directly to S3, you are required to, at the very least, sign requests (using your AWS secret key) sent by Fine Uploader. This must be done server-side in order to keep your secret key a secret.

The following functional server-side examples are available for you to use as a guide:

Note that the PHP example is used on fineuploader.com to support the live Fine Uploader S3 demo. Other server-side examples will be added over time. Read on for details on handling server-side tasks when using Fine Uploader S3.

Here are server-side tasks that you must perform. Some are optional, as noted:

So, as you can see, in its simplest form, only very minimal server-side code is required. Even with more advanced options, you’ll find that your server-side code can still be quite simple.

Signing policy documents

Signing policy documents server-side is the one mandatory task your server must perform, regardless of features enabled and browsers supported.  Fine Uploader generates policy documents for you, based on properties of the file and some of the options you have set for your uploader instance.  A policy document must be attached to each S3 upload request for “simple” (non-chunked) uploads.  The policy document must also be signed, and that signature is then attached to the request by Fine Uploader as well.  Your server is responsible for signing these policy documents.

Policy document format

Fine Uploader S3 will send a POST request to the endpoint specified in the signature.endpoint option. This POST request will contain an “application/json” payload: the policy document. The body of this POST request will look something like this (comments added for clarity):

{
     //always included
    "expiration": //the expiration date of the policy, set to 5 minutes from now, in ISO 8601 format,

    "conditions":
    [
        //always included
        {"acl": /*the "canned acl" value, specified in the objectProperties.acl option*/},

        //always included
        {"bucket": /*the name of the S3 bucket where this file will be sent*/},

        //not included in IE9 and older or Android 2.3.x and older
        {"Content-Type": /*MIME type of the associated file, as determined by simple extension checking client-side*/},

        //not included in IE9 and older or Android 2.3.x and older
        {"success_action_status": "200" /*expected HTTP status code for the response from S3 if the upload was successful*/},

        //ONLY included in IE9 and older or Android 2.3.x and older
        {"success_action_redirect": /*corresponds to an absolute path based on the iframeSupport.localBlankPagePath option*/},

        //always included
        {"key": /*key name for the associated file*/},

        //always included
        {"x-amz-meta-qqfilename": /*URL-encoded filename*/},

        //not included in IE9 and older, Android 2.3.x and older, or if no size validation options are set
        ["content-length-range", /*min file size ("0" if not specified)*/, /*max file size ("9007199254740992" if not specified)*/]
    ]
}

Note that the policy document will ALSO contain ANY parameters specified in your client-side code, prefixed with “x-amz-meta-”. For example, if you specify a parameter of “foo” with a value of “bar”, the following entry would also be present in the conditions array of the generated policy document:

{"x-amz-meta-foo": "bar"}

Note that parameter values are URL encoded by Fine Uploader.

Examining the policy document

You should programmatically examine policy documents, server-side, before signing them. It is possible that a malicious user could tamper with the generated policy document before it is sent off to your server by Fine Uploader. If any values of the policy document are not as expected, simply return a non-200 response status code (such as 500) AND the following in the body of your “application/json” response:

{
    "invalid": true
}

The above response will tell Fine Uploader S3 that the policy document may have been tampered with and it will NOT attempt to send the associated file to S3 until a proper signature has been received from your server. Fine Uploader may retry sending the signature request to your server (if retry is enabled).

Responding to a policy document signature request

Your server must return an “application/json” response with content that includes the base-64 encoded policy document AND the signed base-64 encoded policy document. So, your response payload will look something like this:

{
    "policy": /*base-64 encoded policy document*/,
    "signature": /*signed base-64 encoded policy document*/
}

Most server-side languages/frameworks make it easy to base-64 encode a string. For example, Java has a BASE64Encoder class. Amazon provides examples for PHP and Python as well in their developer documentation.

Signing the policy document is quite simple as well. Again, see the examples provided in Amazon’s developer documentation for more details.
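As a concrete illustration, here is a minimal sketch (in Python, with a placeholder secret key) of the encode-and-sign steps described above: base-64 encode the policy document, sign that encoded string with HMAC SHA1, and base-64 encode the resulting digest.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical placeholder -- use the (restricted) secret key you provisioned.
AWS_SECRET_KEY = "YOUR_AWS_SECRET_KEY"

def sign_policy(policy_document, secret_key):
    """Produce the response payload for a policy document signature request."""
    # Base-64 encode the JSON policy document...
    policy_b64 = base64.b64encode(json.dumps(policy_document).encode())
    # ...then sign the encoded policy with HMAC SHA1 and base-64 the digest.
    digest = hmac.new(secret_key.encode(), policy_b64, hashlib.sha1).digest()
    return {
        "policy": policy_b64.decode(),
        "signature": base64.b64encode(digest).decode(),
    }
```

Your endpoint would return this dictionary, serialized as JSON, with a 200 status (after verifying the policy document, as described above).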

Also note that you SHOULD provision a specific pair of keys for client-side use by Fine Uploader that is heavily restricted. See the “Securing your bucket” section in this blog post for more details.

Supporting IE9 and older

It’s trivial to support IE9 and older browsers (including Android 2.3.x and older). Simply provide an accessible empty HTML file/page. That’s it. Really! The path to this file must be specified in the iframeSupport.localBlankPagePath option. The path can be relative (Fine Uploader will determine the absolute path for you) but it MUST be on the same origin/domain as the one hosting your Fine Uploader S3 instance.

Why does Fine Uploader need you to provide an empty page on the same domain as the uploader instance? In browsers that do not support the File API (such as IE9 and older), Fine Uploader must dynamically create a form containing the file input and any associated parameters and submit it. The form targets an iframe to ensure the response does not modify/redirect the main window. The content of the response is loaded into the associated iframe, and Fine Uploader must examine that content to determine the status of the upload request. If the iframe is not on the same domain as the window hosting Fine Uploader, there is no way to access its contents (due to cross-origin restrictions).

To get around this, Fine Uploader S3 sends a “success_action_redirect” parameter with upload requests when older browsers are involved. The value of this parameter is the absolute path to the blank page you have provided. On success, S3 responds with a 303 status code and includes the URL of your blank page. This instructs the browser to redirect the iframe to your blank page, allowing Fine Uploader to access the iframe’s contents. While the contents are empty, the fact that Fine Uploader S3 can access them without a security exception means that the request likely succeeded. To be absolutely sure, Fine Uploader S3 examines some parameters in the iframe’s URL (such as the bucket and key) to ensure that the response refers to the correct file.

Chunking support

To support chunking, your server only needs to sign a string that represents the headers of the request to be sent to S3. Fine Uploader S3 will generate this string based on the request type and required header values, pass it to your server in an “application/json” POST request, and expect your server to sign it following the relevant examples in Amazon’s developer documentation. Note that this signature differs slightly from the policy document signature: in this case, you should NOT base-64 encode the string before signing it. Simply generate an HMAC SHA1 signature of the string using your AWS secret key and base-64 encode the result.

Fine Uploader S3 will send the following in the payload of the signature request:

{
    "headers": /*string to sign*/
}

The presence of the “headers” property in the JSON request alerts your server to the fact that this is a request to sign a REST/multipart request and not a policy document.

Your server only needs to return the following in the body of an “application/json” response:

{
    "signature": /*signed headers string*/
}
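As a sketch (in Python, assuming you have already parsed the JSON body and extracted the “headers” string), the signing step could look like this. Note the difference from the policy document case: the raw string is signed directly, without a base-64 step first.

```python
import base64
import hashlib
import hmac

def sign_headers_string(string_to_sign, secret_key):
    """Sign a REST/multipart "string to sign" for Fine Uploader S3.

    Unlike a policy document, the string is NOT base-64 encoded before
    signing: sign the raw string with HMAC SHA1 using your AWS secret
    key, then base-64 encode the digest."""
    digest = hmac.new(secret_key.encode(),
                      string_to_sign.encode(), hashlib.sha1).digest()
    return {"signature": base64.b64encode(digest).decode()}
```

In a combined signature endpoint, you would call this branch whenever the parsed JSON body contains a “headers” property, and the policy-signing branch otherwise.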

Fine Uploader S3 utilizes several S3 REST API calls (the multipart upload operations), all of which require signatures.

If you are curious about the format of the strings Fine Uploader will send to your server for signing, the general format is:

{METHOD}\n\n{Content-Type Value (optional)}\n\n{CUSTOM HEADERS, EACH ENDING WITH A NEWLINE}/{BUCKET}/{KEYNAME}?{REQUEST-SPECIFIC QUERY PARAMS}

This is explained a bit more in the AWS REST API documentation. You probably don’t need to worry about this though, as you SHOULD provision a specific pair of keys for client-side use by Fine Uploader that is heavily restricted. See the “Securing your bucket” section in this blog post for more details.

Delete file support

If you enable the deleteFile feature, Fine Uploader S3 will send all delete requests to your server. Your server is expected to communicate with S3 via an SDK to delete the associated file. Why doesn’t Fine Uploader S3 simply send the delete requests directly to S3? That is possible, but not in IE9 and older: those browsers cannot send DELETE requests to S3, since the request is cross-origin and IE9 and older only support POST and GET cross-origin requests (via XDomainRequest). Instead of implementing this REST API call client-side, only to also require your server to make the same call when IE9 and older is involved (most web apps likely have to support at least IE9), I opted to simply delegate to the server for all browsers.

It’s quite simple to delete a file on S3 in your server-side code if you utilize the appropriate SDK for your server-side language provided by Amazon. Fine Uploader will send, by default, a DELETE request to your local server. The last item in the URI path will be the UUID of the file. Fine Uploader S3 will also include the “key” and “bucket” as parameters in the query string for DELETE requests.

If you change the deleteFile method in the options to POST, Fine Uploader will send a POST request to the endpoint you have specified in your deleteFile.endpoint option. This request will be “application/x-www-form-urlencoded” and will include a “uuid” parameter (with the value of the UUID), along with “bucket” and “key” parameters in the payload of the request.
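A minimal sketch (in Python, with hypothetical names) of extracting these parameters from either request style; the actual S3 deletion via your SDK of choice is omitted:

```python
from urllib.parse import parse_qs, urlparse

def parse_delete_request(method, uri, form_params=None):
    """Extract uuid, key, and bucket from a Fine Uploader S3 delete request.

    DELETE: the uuid is the last segment of the URI path, with key and
            bucket supplied as query-string parameters.
    POST:   uuid, key, and bucket all arrive as form-encoded parameters.
    """
    if method == "DELETE":
        parsed = urlparse(uri)
        query = parse_qs(parsed.query)
        return {
            "uuid": parsed.path.rstrip("/").split("/")[-1],
            "key": query["key"][0],
            "bucket": query["bucket"][0],
        }
    return {name: form_params[name] for name in ("uuid", "key", "bucket")}
```

With the bucket and key in hand, a single SDK call (for example, a delete-object call in the AWS SDK for your language) removes the file from S3.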

Handling “successfully uploaded to S3” POST requests

If you specify a value for the success.endpoint client-side option, Fine Uploader S3 will send a POST request to your server after each file has been successfully uploaded to S3. This request will be “application/x-www-form-urlencoded” with the following parameters in the payload of the request: “key”, “uuid”, “name”, and “bucket”.

If you need to perform some specific task to verify the file server-side at this point, you can do so when handling this request and let Fine Uploader know if there is a problem with this file by returning a response with an appropriate (non-200) status code. Furthermore, you can include a message to be displayed (FineUploader/default-UI mode) and passed to your onError callback handler via an error property in the payload of your response. In this case, the response payload must be valid JSON.

You can also pass any data to your Fine Uploader “complete” event handler, client-side, by including it in a JSON response to this request. In fact, the S3 demo server-side code on FineUploader.com passes a signed URL to the `complete` handler, which allows you to view the file you’ve uploaded.
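A hypothetical handler sketch (in Python) illustrating the success/failure responses described above; `verify_file` stands in for whatever check your application performs:

```python
import json

def handle_upload_success(params, verify_file):
    """Sketch of a success.endpoint POST handler.

    params carries "key", "uuid", "name", and "bucket". verify_file is
    any server-side check you wish to run; here it is assumed to return
    an error message, or None if the file is acceptable."""
    problem = verify_file(params["bucket"], params["key"])
    if problem:
        # A non-200 status marks the upload as failed; the "error"
        # property supplies the message shown next to the file.
        return 500, json.dumps({"error": problem})
    # Any extra properties in a JSON response here are handed to the
    # client-side "complete" event handler.
    return 200, json.dumps({})
```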

CORS Support

Working in a cross-domain environment normally poses additional challenges for client-side code, and Fine Uploader insulates you from as much of this as possible. Fine Uploader S3 includes full support for cross-domain environments: rest assured that all features will work nicely even if you must negotiate a cross-origin environment. All browsers are supported as well, except IE7, which has no support for cross-domain ajax requests.

Modern browsers

CORS support in modern browsers (essentially all except IE9 and older) is fairly simple. Modern browsers support CORS ajax requests directly on the XMLHttpRequest object, which is used to initiate ajax requests for signatures, etc. To properly support a CORS environment in these browsers, you must set the expected property of the cors option to true. On your server, you must handle preflight (OPTIONS) requests by setting the appropriate Access-Control-Allow-Origin, Access-Control-Allow-Headers, and Access-Control-Allow-Methods headers on your response, and you must include an Access-Control-Allow-Origin header on all responses. The upload-to-s3 demo on fineuploader.com demonstrates handling cross-origin requests with provided client-side and server-side code. Have a look at the demo and the associated server-side code for more details. Also, Mozilla Developer Network has an excellent article on CORS, which is a must-read for anyone dealing with this sort of environment.

IE9 and older

IE9 and IE8 do have support for cross-origin ajax, but this support is very limited. Microsoft added proper CORS support to XMLHttpRequest in IE10. A great deal of time was spent tackling this cross-origin support in Fine Uploader and Fine Uploader S3 for IE8 and IE9, but there are some leaky abstractions that unfortunately cannot be avoided. Below, I detail the additional steps that must be taken when dealing with a cross-origin environment in IE9 and IE8.

Client-side configuration

In addition to enabling CORS, as detailed in the previous section, you also must explicitly enable cross-domain ajax support in IE9 and IE8 in Fine Uploader S3 by setting the allowCors property of the cors option to true.

Parsing POST request payloads

One limitation (of many) of XDomainRequest in IE9 and IE8 is the inability to set ANY request headers. This means that most server-side languages and frameworks will not be able to easily parse the contents of requests. For example, a POST request with URL-encoded parameters in the payload will not have a Content-Type set, preventing most server-side frameworks from automatically parsing the parameters. This will require you to write server-side code that parses the content of these requests based on the expected Content-Type. The PHP example in the Fine Uploader Server Github repository provides an example of how this can be easily accomplished in PHP.
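A rough sketch (in Python) of the fallback described above, assuming your framework exposes the raw request body and the (possibly absent) Content-Type header:

```python
from urllib.parse import parse_qs

def parse_form_params(content_type, raw_body):
    """XDomainRequest (IE8/IE9) cannot set any request headers, so a
    URL-encoded POST may arrive with no Content-Type at all. Fall back
    to form parsing whenever the header is missing."""
    if not content_type or content_type.startswith("application/x-www-form-urlencoded"):
        return {name: values[0] for name, values in parse_qs(raw_body).items()}
    raise ValueError("unexpected Content-Type: %r" % content_type)
```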

Delete files feature

In order to support the delete file feature (if you choose to enable this) you will need to set the method property of the deleteFile option to "POST". The default method is DELETE, but only POST and GET cross-origin requests are supported in IE9 and IE8. Fine Uploader will send an additional parameter of “_method” with a value of “DELETE” along with these requests. Your server side code should be able to pick out a DELETE request by looking for this “_method” parameter in the request payload. This convention has been discussed and popularized in O’Reilly’s RESTful Web Services.
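A small sketch (in Python) of the convention described above:

```python
def effective_method(http_method, params):
    """Treat a POST carrying _method=DELETE as a delete request, since
    IE8/IE9 cross-origin requests are limited to POST and GET."""
    if http_method == "POST" and params.get("_method", "").upper() == "DELETE":
        return "DELETE"
    return http_method
```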

Responding to success.endpoint POST requests

Fine Uploader S3 provides you the opportunity to optionally inspect the file in S3 (after the upload has completed) and declare the upload a failure if something is obviously wrong with the file. If the success.endpoint property of the request option is set, Fine Uploader S3 will send a POST request after the file has been stored in S3. This request will contain parameters for the bucket, key, filename, and UUID associated with the uploaded file. If the file is invalid for some reason, you can simply return a non-200 response and Fine Uploader S3 will declare the upload a failure.

Fine Uploader S3 also allows a custom error message, determined by your server, to be displayed next to the failed file. To do this, you must return a valid JSON response containing an “error” property whose value is the message to display. In a cross-origin environment in IE9 and IE8, if you want to display such a message, you MUST return a 200 response along with the error property. This is because XDomainRequest does not allow access to the response content if the response is determined to be a non-success response, such as one with a non-200 status. If you do return a non-200 response in IE9 or IE8, you will only see an “Upload Failed” message next to the failed file.
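A sketch (in Python) of the response rules described above; the `ie_cors` flag is a hypothetical way for your application to indicate an IE8/IE9 cross-origin request:

```python
import json

def success_response(error_message, ie_cors=False):
    """Build the (status, body) pair for a success.endpoint response.

    In an IE8/IE9 cross-origin environment, XDomainRequest hides the body
    of non-200 responses, so errors must still be returned with status 200
    if you want the "error" message to be displayed."""
    if error_message is None:
        return 200, json.dumps({})
    status = 200 if ie_cors else 500
    return status, json.dumps({"error": error_message})
```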

Conclusion

This is a fairly big feature and a lot of work went into its development and documentation. This is, quite possibly, the most complex feature ever implemented in Fine Uploader. The goal here is to make it as easy as possible for your S3-dependent web application to accept uploads from your users, and I hope we have achieved that. If we have missed something, or if you simply want to propose an enhancement to this feature, please open up a request in the project’s Github issue tracker. As always, requests related to technical support should be opened on Stackoverflow under the “fine-uploader” tag, where Fine Uploader team developers will monitor and answer your support questions.

Also, be sure to try out Fine Uploader S3 on the website!