Category Archives: S3

Fine Uploader 5.1 – S3 CDN support, canvas uploads, & performance enhancements

Update: January 8, 2015 – 5.1.3 Hotfix Release

  • Some files that fail validation may not be marked as rejected.  (#1345)

We are excited to announce the official release of Fine Uploader version 5.1

The two biggest features in v5.1 are the ability to upload to Amazon's Simple Storage Service (S3) via a CDN (such as Fastly), and the ability to upload to an S3-like storage service, that is, a non-Amazon storage service that follows the S3 API but does not share Amazon's bucket-naming conventions.

Also, by popular demand, uploading a <canvas> is dirt-simple.  Simply pass an existing <canvas> into Fine Uploader's addFiles API method, and Fine Uploader will take care of the rest for you!
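
A minimal sketch of what this looks like (the uploader instance and canvas ID below are placeholders, not taken from the release notes):

    var canvas = document.getElementById('drawingCanvas');

    // Fine Uploader converts the canvas to an image file and uploads it
    // like any other submitted file.
    uploader.addFiles(canvas);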

Other Features

Dynamically change the item limit

If you’d like to adjust the validation item limit set during initialization, you can easily make this adjustment at any time now via a new API method.

Dynamically change the upload success endpoint for S3/Azure

You can make this adjustment for all files or specific files via a new API method.
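
For example, assuming the new method is the setUploadSuccessEndpoint method described in the API documentation (the endpoints and file ID below are placeholders):

    // change the upload success endpoint for all files going forward...
    uploader.setUploadSuccessEndpoint('/s3/success/v2');

    // ...or for one specific file only
    uploader.setUploadSuccessEndpoint('/s3/success/special-case', fileId);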

File parameters / form data sent to S3/Azure upload success endpoint

Instead of querying S3 in your upload success server code, you can pull any parameters associated with the file directly off of the upload success POST request sent by Fine Uploader.

More generic Fine Uploader S3 error messages

Previously, Fine Uploader would explicitly mention S3 when reporting upload errors in the S3 module.  The explicit references to S3 have been removed to better support S3-like endpoints.

Bug Fixes

  • Large number of thumbnail previews may cause browser to crash. (#1279)
  • DnD module makes the drop zone a block level element on hover. (#1273)
  • Canceling an in-progress upload does not hide total progress bar. (#1303)

We also worked on a number of tasks to improve the quality and consistency of Fine Uploader’s codebase and build scripts.

Licensing Change

Starting with version 5.1, Fine Uploader is licensed exclusively under the Widen Commercial License.  If you are using Fine Uploader for commercial purposes or in a commercial product, you must purchase a license. If you are using Fine Uploader in a free open-source product, you may use Fine Uploader free of charge.  In either case, you may download Fine Uploader directly from our site.

Downloading Fine Uploader

Starting with version 5.1, anyone may download Fine Uploader directly from our site.  Please be aware of the licensing associated with Fine Uploader (described in the above section).  You may also access Fine Uploader on npm.  This can be as simple as running “npm install fine-uploader”, which will grab the latest stable version from the npm registry. We will likely add Fine Uploader to various Content Delivery Networks in the future as well.

Note that current versions of Fine Uploader are no longer available on the jQuery plug-in registry, as the jQuery team has stopped maintaining their registry.

Next…

As always, for up-to-date information about features and fixes planned for the next release, please see the milestone in the Github issue tracker. At the time of this writing, we are in the process of planning the 5.2 release.

As always, we have continued to add the features that you all demand the most. Thanks again for your continued support. The backing of commercial license holders, everyone who reports bugs, those who suggest great features, and all the people who have expressed their support for this library are what make it as great as it is.

Thanks for being a part of Fine Uploader!

Uploads without any server code

If you simply want to accept files from users, why should you have to write server-side code? Well, if you are using Fine Uploader S3 version 4.2 or later, you don't have to worry about server-side languages anymore! This blog post accompanies a live demo of this workflow at https://fineuploader-s3-client-demo.s3.amazonaws.com/index.html.

Summary

Starting with Fine Uploader S3 4.2, the “no server” upload workflow is fully supported. This means that you only need to host your JavaScript and HTML files. Fine Uploader S3, AWS, and your identity providers take care of the heavy-lifting for you.

The workflow is simple:

  1. Authenticate your users with the help of an identity provider, such as Google
  2. Use the temporary token from your ID provider to grab temporary access keys from AWS
  3. Pass the keys on to Fine Uploader S3
  4. Your users can now upload to your S3 bucket

Requirements

A client-side workflow such as this one means that you must ensure your users are utilizing a modern browser (not IE9 or older). Some of the SDKs used here (mostly the AWS JavaScript SDK) will not work on older browsers.

Other things you will need to make this happen:

  • Fine Uploader S3 4.2+
  • OAuth/login JavaScript SDK from Google, Facebook, or Amazon
  • A simple web server to host your static content, such as a public-read S3 bucket
  • An AWS account
  • AWS JavaScript SDK

Concepts

You should be familiar, at a high level, with OAuth 2.0, which is the standard used by Google, Amazon, Facebook, and other similar identity providers. An identity provider here will allow you to request temporary credentials from AWS on behalf of the authenticated user. These credentials are required by Fine Uploader S3 to upload files to your S3 bucket without any server-side intervention on your part. It is important to ensure that traffic between your web app and the identity provider is secured via SSL (HTTPS) when using an OAuth 2.0 ID provider.

You must also be familiar with:

  • HTML
  • JavaScript
  • Amazon’s Simple Storage Service (S3)
  • Fine Uploader S3

Setting up your identity providers

The live demo of this workflow allows you to choose between Google, Facebook, and Amazon as identity providers. You will need to register your application with each of these identity providers, record the assigned client ID, and specify the domain(s) of your web application in order to ensure that authentication against your registered application can only occur on your application’s domain(s).

Google

1.) Login to the Google Cloud Console.

2.) Create a new project.

3.) Enable the “Google+ API”

4.) Create a new OAuth Client ID in the “Credentials” section of the “APIs & auth” side menu. The application type will be “Web application”. You should include the domain(s) of your web app in the “Authorized Javascript origins” text box. You can leave the “Authorized redirect URI” field blank.

5.) Record the “Client ID for web application” value.

Facebook

1.) Visit the Facebook developers page.

2.) Invoke the “Apps” menu, and click “Create a new app”. Fill in the fields as appropriate.

3.) Record the App ID after creating the app.

4.) Click on the “Settings” option on the right, and then click the “Add Platform” bar.

5.) Click “Website”.

6.) Fill out the “Site URL” under the new “Website” section. This should be the domain of your web app. Finally, save your changes.

Amazon

1.) Sign in to Amazon's App Console with your AWS account. Click on the "Register new application" button.

2.) Fill out the relevant form fields. Note that all fields other than the logo URL are required.

3.) Record the App ID.

4.) Expand the “Web Settings” section, and add your app’s domain name(s) in the “Allowed JavaScript Origins” section. Note that this URL must be SSL/HTTPS.

Creating and securing your S3 bucket

If you haven't done so already, you will need to create an S3 bucket to receive your users' files. This is covered in some detail in the "Configuring your buckets" section of the initial Fine Uploader S3 blog post.

In addition to the AllowedHeader values mentioned in the CORS section of the bucket configuration documentation, you will also need to ensure that the x-amz-security-token header is allowed.
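
In the CORS XML, that amounts to one more entry along these lines:

    <AllowedHeader>x-amz-security-token</AllowedHeader>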

Setting up your IAM roles

You will need to create a separate IAM role for each identity provider. Each role will link an identity provider to specific client-side permissions needed by Fine Uploader S3 to send files to your S3 bucket.

1.) Navigate to the IAM roles section of the AWS console.

2.) Click the “Create New Role” button & fill in a name.

3.) Choose the “Role for Identity Provider Access” option on the next step. Click the “Select” button next to “Grant access to web identity providers”.

4.) Pick an identity provider. If you are using Facebook or Amazon, paste the client ID you recorded when you registered your app in the “Application Id” field. If you are using Google, paste the application ID you recorded in the “Audience” field instead. Click “Continue”. You will be brought to a “Verify Role Trust” step. Click “Continue” again.

5.) Expand the “Policy Generator” section, and click “Select”.

6.) You must now specify permissions for your S3 bucket. Fine Uploader S3 only needs the PutObject permission to upload files. However, if you intend to make all files publicly viewable (as we do in the live demo), you will also need to include the PutObjectAcl permission. The Amazon Resource Name includes the name of the S3 bucket that will receive files, in the format "arn:aws:s3:::YOUR_BUCKET_NAME/*". After you have filled all of this out, click "Add Statement", then "Continue".

7.) Click "Continue" on the "Set Permissions" step that appears next, then "Create Role" on the final step.

8.) Click on your new role in the roles list, then the “Summary” tab on the bottom pane, and record the Role ARN ID.

The code

There is a live demo with accompanying commented code as well. Note that the live demo is hosted in a public S3 bucket. This is more apparent when looking at the URL.

The live demo contains the following files:

index.html

Entry point for the live demo. It pulls in all other client-side files and resources. It also contains HTML for the Google, Facebook, and Amazon login buttons, as well as other elements required by the third-party JavaScript SDKs. A customized version of a Fine Uploader UI template is also present in the head element. Finally, there is some code to ensure the demo is only displayed if a modern browser is in use. We make use of conditional comments here to load and display the demo if a modern browser is in use, or to display a message explaining why the demo is not functional if IE9 or older is present.

3 JavaScript files to make use of the identity provider SDKs

We have one JavaScript file for each identity provider. These are used to track authentication requests and pass the token received from a successful auth request on to the AWS SDK. Also, they are used to notify the user when the bearer token has expired (asking them to re-authenticate by clicking on the login button). The files are amazon-auth.js, google-auth.js, and facebook-auth.js.

You must ensure the Role ARN and app IDs are filled in appropriately in these files. Note that the app ID for the Google ID provider is attached to the login button element in index.html as a data attribute. Also note that the Facebook and Amazon auth files include a providerId in the call they make to the assumeRoleWithWebIdentity method. Google's ID provider does not have such a property, though.

aws-sdk-glue.js

Used to obtain temporary credentials (keys) supplied to Fine Uploader. The bearer tokens obtained from the identity providers are used to generate these credentials.
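
A rough sketch of the idea behind that glue code (the role ARN is a placeholder, and the callback shape simply mirrors the credential fields Fine Uploader S3 expects; the real file in the demo differs):

    // Exchange an OAuth bearer token for temporary AWS credentials via STS,
    // then hand those credentials to Fine Uploader S3.
    var sts = new AWS.STS();

    function getS3Credentials(webIdentityToken, providerId, callback) {
        var params = {
            RoleArn: 'arn:aws:iam::123456789012:role/uploads-client',  // placeholder ARN
            RoleSessionName: 'web-uploads',
            WebIdentityToken: webIdentityToken
        };

        if (providerId) {
            // Facebook & Amazon require a ProviderId; Google does not
            params.ProviderId = providerId;
        }

        sts.assumeRoleWithWebIdentity(params, function(error, data) {
            if (error) {
                callback(error);
            }
            else {
                callback(null, {
                    accessKey: data.Credentials.AccessKeyId,
                    secretKey: data.Credentials.SecretAccessKey,
                    sessionToken: data.Credentials.SessionToken,
                    expiration: data.Credentials.Expiration
                });
            }
        });
    }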

Fine Uploader S3 built source files & resources

We also must include the Fine Uploader UI S3 jQuery JavaScript, CSS, and other resource files (images/placeholders). You can generate your copy at http://fineuploader.com/customize.

custom.css

Some CSS to enhance the style of this demo.

fineuploader-glue.js

Creates an instance of Fine Uploader UI S3 jQuery. The "complete" and "credentialsExpired" events are observed. The former adds a button next to the file item in the UI after a successful upload that links to the uploaded file in S3. The latter asks the AWS credentials code for new credentials before they expire.
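
A simplified sketch of that wiring (the bucket endpoint and the getS3Credentials helper are placeholders; see the demo source for the real implementation):

    $('#fine-uploader-s3').fineUploaderS3({
        request: {
            endpoint: 'https://your-upload-bucket.s3.amazonaws.com'  // placeholder bucket
        },
        // temporary keys obtained via the AWS SDK glue code
        credentials: initialCredentials
    })
    .on('complete', function(event, id, name, response) {
        // add a button/link next to the item that points at the uploaded object in S3
    })
    .on('credentialsExpired', function() {
        // hand back a promise that is fulfilled with a fresh set of credentials
        return getS3Credentials();
    });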

Additional reading

Fine Uploader 4.1: Pause Uploads, Image Validation, & Support for More Browsers

Update: December 27, 2013 – 4.1.1 Hotfix Release

This hotfix release addresses the following issues:

  • Fine Uploader S3: Uploads fail to complete in Safari 5. (#1080)

Here are some notable additions that are part of the 4.1 release:

Validate Image Dimensions

If you have a need to restrict uploaded image dimensions in your application, you can now enforce this client-side in all browsers other than IE9 and older, Safari 5.1 and older, and Android 2.3.x and older. You can restrict the height and/or width of submitted images via a new validation option property. Please see the validation feature documentation for more details.
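
A sketch of what this looks like in your uploader options (option names per the validation documentation; the limits are placeholders):

    var uploader = new qq.FineUploader({
        element: document.getElementById('fine-uploader'),
        request: {
            endpoint: '/uploads'
        },
        validation: {
            image: {
                maxWidth: 1920,
                maxHeight: 1080,
                minWidth: 100,
                minHeight: 100
            }
        }
    });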

Pause Uploads

This feature mainly complements the chunking and auto-resume features. For example, while it is not necessary to "pause" an in-progress upload to resume it later, it is probably much more intuitive for end users to do so. This feature also provides the ability to upload queued items immediately (or sooner) simply by pausing lower-priority in-progress uploads.

See the pause feature documentation for more details.
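
If you are driving pause/resume through the API rather than the default UI buttons, the relevant calls look something like this (the uploader instance and file ID are placeholders):

    uploader.pauseUpload(fileId);     // pause an in-progress, chunked upload
    uploader.continueUpload(fileId);  // pick it up again, starting with the last incomplete chunk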

Internet Explorer 11 Support

IE11 is now fully supported. In fact, all features available in IE10 are now available for IE11 users.

Opera Support

We have finally added support for Opera. Note that this only includes Opera 15 and newer. As of Opera 15, Chromium is used “under the hood”. As a result, all features supported in Chrome are also now supported when you are using Opera!

Mobile Chrome Support

We now support the Chrome browser on iOS6+ and Android 4+. In iOS, Chrome supports all of the same features as mobile Safari, except for progress reporting. On Android, mobile Chrome supports all of the features supported by the Android stock browser as well as progress reporting.

Other fixes & changes

Next…

As always, for up-to-date information about features and fixes planned for the next release (4.2), please see the milestone in the Github issue tracker. We will begin to plan the 4.2 release shortly after this release of 4.1. Note that the release cycle for 4.2 will likely be longer than normal due to the upcoming holidays.

Fine Uploader 3.8 Released

Update: August 30, 2013 – 3.8.2 Hotfix Release

This hotfix release addresses the following issues/workflows:

  • ActiveX automatic fallback for IE users with native XHR disabled (delete file feature and upload-to-S3). (#977)
  • AWS object key names with the following characters will fail when uploaded: ! ‘ ( ) * (#979).

Update: August 29, 2013 – 3.8.1 Hotfix Release

This hotfix release addresses the following defects:

  • IE9 and older direct to S3 uploading fails for keys containing spaces (#970)
  • Mismatched key or bucket names in the response to a completed multipart upload will not fail the upload (#972)
  • AWS encodes non-ASCII key names as question marks (#973)

Overview

Fine Uploader 3.8 is the first version to support uploading files directly to Amazon’s Simple Storage Service (S3).

Features & Enhancements

Bugs Fixed

  • Fix error logging when parsing iframe content (#929)
  • CORS ajax request errors are ignored in IE8 & IE9 (#943)

Amazon S3 Uploads

3.8 allows you to easily upload files directly to Amazon S3, bypassing your origin server.  All existing Fine Uploader features in all browsers are supported when dealing with S3.  Some server-side intervention is required (such as to sign requests before they are sent to S3), but this integration is quite simple.

For more information on this feature, and a tutorial on how to implement S3 support, read the associated blog post.

Documentation

We spent some time during this release making the documentation even better, and hope that you find it useful. If you encounter anything broken, or if you have any ideas on improvements for the documentation, feel free to create an issue on GitHub and we will take a look almost immediately.

What’s Up Next?

Features for the next release are yet-to-be-determined. Planning for the next release is underway, and we will update this post once our planning sessions are complete. If you have any opinions on features that you think would continue to make Fine Uploader the best, then please post a feature request on the issue tracker.

Support for Amazon S3 opens the door to the possibility of uploading directly to other cloud-storage providers such as Microsoft's Azure, …

We will continue testing Fine Uploader on a variety of different workflows on the latest and greatest browsers and devices. SauceLabs continues to act as our multi-browser testing platform, and writing functional tests for Fine Uploader is a priority for the next release.

Let us know if we can do anything to improve your uploading experience!

– Ray Nicholus, Mark Feltner, and the rest of the team @ Widen

Fine Uploader S3: Upload Directly to Amazon S3 from your Browser

Update: November 16, 2015 – Version 4 signatures are now supported in Fine Uploader S3 5.4.0

Table of Contents

  1. What is This and Why is This Important?
    1. Increased scalability
    2. Less server-side complexity
    3. Save bandwidth
  2. Browser Support
  3. Supported Features
  4. Step-by-Step Guide to Integrating Fine Uploader S3 Into Your Web Application
    1. Configuring your S3 buckets
      1. Editing your bucket’s CORS config
      2. Basic CORS config values
      3. Securing your bucket
        1. CORS restrictions
        2. IAM restrictions
    2. Client-side integration
      1. Simple upload support
      2. Supporting more advanced features
        1. “Successfully uploaded to S3” server notifications
        2. Dynamic key names
        3. Including user metadata
        4. Error message display
        5. File validation
        6. Auto and manual failed upload retries
        7. Chunking & auto-resume
        8. Delete files
        9. Upload via paste
    3. Server-side integration
      1. Signing policies
        1. Policy document format
        2. Verifying the auto-generated policy
        3. Responding to the signature request
      2. Supporting IE9 and older
      3. Signing chunked/REST/multipart API requests
      4. Delete file support
      5. “Successfully uploaded to S3” server-notifications
  5. Cross-Domain (CORS) Environment Support
    1. Modern browsers
    2. Internet Explorer 9 and older
  6. Conclusion

TL;DR

There's quite a bit of detail in this post, and I encourage you to read it all.  If you require support in the future and it is clear that you have not taken the time to read this post, you will likely be directed to this blog post.  If you really want to jump in headfirst and are already comfortable with all of the concepts surrounding this feature, have a look at the following links to get started:

Also, there is a live, fully functional demo of this feature on Fine Uploader's home page that allows you to play with Fine Uploader S3 by uploading files to one of our S3 buckets. The demo even allows you to view the file after it has been uploaded, or delete it via Fine Uploader's UI. Furthermore, some additional options have been enabled in the demo, such as various validation rules.

Also, please don’t be overwhelmed by the length of this blog post. All of this information is here as we are determined to be complete and document anything and everything you might want to be aware of when using Fine Uploader S3. As always, don’t hesitate to open a support request, file a bug, or a feature request. We are here to help you integrate Fine Uploader S3 into your project! See http://fineuploader.com/support for more details.

What is This and Why is This Important?

Starting with Fine Uploader 3.8, you have the option to upload files directly to your S3 buckets client-side. We’re calling this Fine Uploader S3.  Previously, you would have to send file bytes to your local server (and handle the associated request(s)) and then send them up to S3.  This feature cuts out the middleman (your local server) when dealing with file bytes.

Increased scalability

Since your server no longer has to directly handle uploaded files, this makes it easier to scale your web application.  S3 deals with large and small workloads quite well.  So, let Amazon worry about this!

Less server-side complexity

Handling multipart-encoded requests that Fine Uploader sends for each file is complicated enough.  Once you turn on chunking and auto-resume, things get a bit more complicated.  You have to keep track of the chunks.  You must make sure that you keep file chunks around on your server long enough to properly support the auto-resume feature.  You must be sure that you don’t accidentally run out of space server-side, etc, etc.  Or, you can let S3 handle all of this for you.

Save bandwidth

If you aren’t uploading files directly to S3 client-side, you must receive the files on your local server and then send the same exact bytes to your S3 bucket.  That seems a bit inefficient, doesn’t it?  Your files are destined for S3 anyway, why not just send them there directly?

Browser Support

Uploads directly to S3 via your browser in Fine Uploader is supported in ALL browsers that Fine Uploader already supports for “traditional” uploads.  Yes, this includes IE7.

Note that, if you do need to support IE7, you will also have to include Douglas Crockford’s json2.js in your document.  This is required as Fine Uploader must stringify the JSON policy document it generates when sending the policy to your server for signing.  IE7 does not have any native support for converting JavaScript objects into JSON, or vice-versa.  A non-trivial amount of code is required to do this correctly, which is why it is simply easier to rely on json2.js for this task.

Supported Features

All features offered in the “traditional” uploader are also offered by Fine Uploader’s S3 uploader.  This, of course, includes:

  • chunking
  • auto-resume
  • auto & manual retry
  • editing filenames before the upload
  • auto and manual upload mode
  • deleting uploaded files (via your local server)
  • drag & drop
  • upload via paste
  • upload images via mobile devices
  • cross-origin support

Step-by-Step: Integrating Fine Uploader S3 Into Your Web App

Allowing direct-to-S3 uploads with Fine Uploader is quite simple.  The process is as follows:

  1. Configure your S3 bucket(s).
  2. Write your “glue” code to create and configure a Fine Uploader S3 instance client-side.
  3. Include simple code on your server to sign requests and optionally handle other requests sent by Fine Uploader.

Configuring your S3 bucket(s)

If you want to jump right into this, and already know a bit about CORS and configuring your bucket, take a look at the section in the server-side documentation that provides a sample CORS configuration setup, with some information on modifying the sample to suit your needs.  Otherwise, read on.

By default, Amazon allows cross-origin GET requests on your S3 bucket.  This is enforced via an XML document in the CORS configuration section of your bucket in S3’s administrator console.  In order to allow direct-to-S3 uploads from Fine Uploader, you will need to extend the default CORS configuration a bit.

Editing your bucket’s CORS configuration

Let’s use a test bucket I created in the Fine Uploader AWS account during development of this feature as an example.  Start by clicking on your bucket under the “All Buckets” section on the right side of the S3 console.  Your page may look something like this (with your own buckets present instead of Fine Uploader’s development buckets):

Next, click on the bucket you wish to edit, and then click on the “Properties” button on the right side of the page:

After you do this you will see a set of properties associated with this bucket on the right side of your page.  Expand the “Permissions” section:

Then, click on the “Add CORS Configuration” button, which will expose an overlay that houses your bucket’s CORS configuration:

After you are done making changes to your configuration, be sure to click the “Save” button. After you do this, your changes will be live.

Basic CORS configuration values

Fine Uploader requires you to at least specify some very basic CORS rules for any S3 buckets that will receive files from the library. Since Fine Uploader utilizes ajax requests to upload files in many instances, cross-origin request restrictions are an issue. Fortunately, modern browsers provide support for the CORS spec, which describes how browsers may provide support for cross-domain ajax requests. S3 permits CORS (cross-origin) requests from these browsers, with the proper configuration. Without the proper configuration, these requests will be rejected by S3.

If you do not plan on utilizing the chunking feature in Fine Uploader, your S3 CORS configuration can be quite simple.  In this instance, you need nothing more than this:
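
Something like the following should be all that's needed (a sketch based on Amazon's CORS configuration format; tighten the wildcards as described in the next section):

    <?xml version="1.0" encoding="UTF-8"?>
    <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
        <CORSRule>
            <AllowedOrigin>*</AllowedOrigin>
            <AllowedMethod>POST</AllowedMethod>
            <MaxAgeSeconds>3000</MaxAgeSeconds>
            <AllowedHeader>*</AllowedHeader>
        </CORSRule>
    </CORSConfiguration>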

If you turn on the chunking (and possibly the auto-resume) feature, you will need to, at least, include the following XML in the CORS configuration section of your S3 bucket:
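
A chunking-friendly configuration looks roughly like this (a sketch; the PUT and DELETE methods cover the multipart upload requests, and exposing the ETag header lets the browser read each part's ETag):

    <?xml version="1.0" encoding="UTF-8"?>
    <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
        <CORSRule>
            <AllowedOrigin>*</AllowedOrigin>
            <AllowedMethod>POST</AllowedMethod>
            <AllowedMethod>PUT</AllowedMethod>
            <AllowedMethod>DELETE</AllowedMethod>
            <MaxAgeSeconds>3000</MaxAgeSeconds>
            <AllowedHeader>*</AllowedHeader>
            <ExposeHeader>ETag</ExposeHeader>
        </CORSRule>
    </CORSConfiguration>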

Tightening up ajax request restrictions on your S3 bucket

CORS Restrictions

The AllowedOrigin tag allows you to restrict which domains Amazon should allow requests from. The wildcard value for the AllowedOrigin tags in the earlier examples will allow ajax requests from any domain. You may want to consider replacing the AllowedOrigin wildcard value with something more restrictive. Any domains not specified here will be rejected if they attempt to make a client-side ajax request. If you know your Fine Uploader instance will be hosted only at http://foo.bar.com, you should replace the wildcard AllowedOrigin tag value with this:
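
For instance, an entry like:

    <AllowedOrigin>http://foo.bar.com</AllowedOrigin>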

The AllowedHeader tag allows you to restrict which headers are acceptable on incoming ajax requests. The wildcard value in the earlier examples tell Amazon to allow any headers from any otherwise acceptable ajax requests. You also may want to consider replacing the AllowedHeader wildcard values with something more specific. If you want to replace the wildcard with something more restrictive, you must, at a minimum, replace the wildcard tag with the following tags:

If you enable chunking, you will need to add the following tags as well:

If you intend to attach any user metadata to the files uploaded to S3 via the setParams API method or the params property of the request option AND you have the chunking feature enabled, you will need to include additional AllowedHeader tags. If the parameter names cannot all be known ahead of time, you will need to use a wildcard value for the AllowedHeader tag (as displayed in the earlier examples). However, if you do know these parameter names ahead of time, you can specify them in your CORS configuration file if you want to ensure Amazon blocks any ajax requests that include unexpected headers. Each parameter name passed to Fine Uploader will need to be included in an AllowedHeader entry, with "x-amz-meta-" prepended to the parameter name. For example, if you know your app will associate "foo" and "bar" parameters with some or all of your files, you will need to include the following entries in your bucket's S3 CORS configuration:
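
That is (a sketch of those two entries):

    <AllowedHeader>x-amz-meta-foo</AllowedHeader>
    <AllowedHeader>x-amz-meta-bar</AllowedHeader>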

IAM Restrictions

You should strongly consider provisioning a pair of keys with very restrictive permissions to be used specifically by Fine Uploader S3 client-side. This involves creating a new IAM group with restricted permissions. You must then create an IAM user, assign the user to the group you just created, and then pass the public key to Fine Uploader via the request.accessKey option while storing your secret key server side for the purposes of signing requests. The only permission required by the Fine Uploader user is “S3:PutObject”.

Here’s a simple example, assuming our bucket name is “fineuploadertest”:

Step 1: Create an IAM group for client-side use only

You can create a new group by clicking on the “Create New Group” button in the IAM groups section of your AWS console.

Then, name the group ("uploads-client", for example), and finally specify permissions. You can select "Custom Policy", name your policy, and then paste in the following:
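
A sketch of such a policy, using the "fineuploadertest" bucket name from above:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::fineuploadertest/*"
            }
        ]
    }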

Click “Continue” and then “Create Group”.

Step 2: Create a Fine Uploader S3 user

Now, you must associate a user with the group you just created. Start by clicking the “Create New Users” button in the users section of the IAM console.

Specify a user name, click “Create” and then be sure to click “Download Credentials” on the last step of the wizard. You will need these later.

Step 3: Associate the new user with the new group

Click on the user you created in the IAM user’s console, then click on the “Add User to Groups” button at the bottom of the page.

Select the new group you created, then click “Add to Groups”.

Step 4: Start using the Fine Uploader S3 user’s keys

Finally, pass the public key to Fine Uploader via the request.accessKey option while storing your secret key server side for the purposes of signing requests. Remember that you downloaded the keys/credentials back in Step 2.

Client-side integration

If you've used Fine Uploader in the past, you're certainly familiar with writing "glue code" (JavaScript) to create a Fine Uploader instance on your page, pass appropriate configuration options, register events, and call API methods. This section does not assume any previous experience with Fine Uploader, but it is also useful for more experienced users.

Don't worry, setting up Fine Uploader S3 client-side is a pretty simple task, regardless of the number of features you wish to use, even if you don't want to use jQuery!

Simple upload support only

First, let’s go over setting up your client-side code for web applications with the most basic needs (just simple upload support). This will allow Fine Uploader to upload files directly to S3 in all supported browsers, without any of the bells and whistles associated with some of the more advanced features of the library.

This first set of examples assumes you are using the default UI created by Fine Uploader. The default UI is customizable, but you may want to create your own entirely unique UI via FineUploaderBasic-S3. I'll provide notes for FineUploaderBasic-S3 users at the end of the examples.

For all examples, it is assumed you have an element with an ID of “fineUploader” present somewhere in your document. Fine Uploader will use this element as a container for any DOM elements it creates. If you are using jQuery, it will also attach an instance of the Fine Uploader S3 jQuery plug-in to that element. Note that if you intend to use multiple instances of Fine Uploader on a page, you will need to adjust the ID to ensure it is unique on your page.

For jQuery users:
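
A minimal sketch of the sort of setup described below (the bucket, access key, and paths are placeholders you must replace with your own values):

    $('#fineUploader').fineUploaderS3({
        request: {
            // your bucket's URL and your PUBLIC AWS access key (never the secret key)
            endpoint: 'http://fineuploadertest.s3.amazonaws.com',
            accessKey: 'YOUR_AWS_PUBLIC_ACCESS_KEY'
        },
        signature: {
            // server-side endpoint that signs policy documents & header strings
            endpoint: '/s3/signature'
        },
        iframeSupport: {
            // empty page on the same origin; only needed if you support IE9 and older
            localBlankPagePath: '/blank.html'
        }
    });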

You can read more about using the jQuery plug-in wrapper in the documentation.

Non-jQuery users (native javascript-only):
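
And a corresponding sketch without jQuery (same placeholder values):

    var uploader = new qq.s3.FineUploader({
        element: document.getElementById('fineUploader'),
        request: {
            endpoint: 'http://fineuploadertest.s3.amazonaws.com',
            accessKey: 'YOUR_AWS_PUBLIC_ACCESS_KEY'
        },
        signature: {
            endpoint: '/s3/signature'
        },
        iframeSupport: {
            localBlankPagePath: '/blank.html'
        }
    });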

In the above examples, you will need to adjust your endpoint to match the URL of your S3 bucket. All formats supported by Amazon are supported, such as “{bucketname}.s3.amazonaws.com”, “s3.amazonaws.com/{bucketname}”, as well as a custom domain that properly points to your S3 bucket. SSL is also supported, in which case your endpoint address must start with https://.

You will also need to include your specific AWS access key as a value for the accessKey property above. This is your public AWS key, NOT your secret key. Also, this should be the public key for the IAM user created specifically for Fine Uploader S3, and not your main account key. Your secret key should remain a secret, server-side. Your access key(s) can be found on the security credentials page of your AWS account. Once on that page, you can create new keys or access existing keys under the “Access Keys” section:

The signature.endpoint must contain a path to your server where Fine Uploader can send policy documents and request header strings. This endpoint must sign these items using your AWS secret key and include the signature in the response. Signing is discussed more in the server-side integration section of this blog post.

Finally, the iframeSupport.localBlankPagePath value must point at a path on the same origin/domain as the one hosting your Fine Uploader instance. This endpoint needs to be nothing more than an empty HTML file. Fine Uploader S3 requires this if you plan on supporting IE9 or older. The reason for this is explained a bit more in the implementation details section at the end of this post.

Client-side setup for support of some optional features

The previous section provided a simple example and explanation for client-side setup of a simple instance of Fine Uploader without any optional features enabled. This section will describe the other extreme: an uploader instance with most optional features enabled. The beauty of Fine Uploader S3 is that your server-side code (covered later) only requires a few trivial additions in order to support all of these features, as Amazon takes care of most of the work for you.

For jQuery users:
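
A sketch along the lines of what is described below (endpoints, extensions, and limits are placeholders, and option names follow this post's descriptions; double-check them against the documentation for your version):

    $('#fineUploader').fineUploaderS3({
        request: {
            endpoint: 'http://fineuploadertest.s3.amazonaws.com',
            accessKey: 'YOUR_AWS_PUBLIC_ACCESS_KEY',
            // arbitrary user metadata to store with each object
            params: {
                category: 'user-uploads'
            },
            // "successfully uploaded to S3" notification endpoint on your server
            success: {
                endpoint: '/s3/success'
            }
        },
        signature: {
            endpoint: '/s3/signature'
        },
        objectProperties: {
            // determine the S3 object key for each file, on-demand
            key: function(fileId) {
                // any logic (or a qq.Promise) may be used to build a unique key name
                return 'user-uploads/' + new Date().toISOString().slice(0, 10) + '/' + fileId;
            }
        },
        validation: {
            allowedExtensions: ['jpg', 'jpeg', 'png', 'gif'],
            acceptFiles: 'image/*',
            sizeLimit: 5 * 1024 * 1024,  // 5 MB
            itemLimit: 3
        },
        failedUploadTextDisplay: {
            mode: 'custom'
        },
        retry: {
            enableAuto: true
        },
        chunking: {
            enabled: true
        },
        resume: {
            enabled: true
        },
        deleteFile: {
            enabled: true,
            endpoint: '/s3/delete'
        },
        paste: {
            targetElement: $(document),
            promptForName: true
        },
        iframeSupport: {
            localBlankPagePath: '/blank.html'
        }
    });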

Non-jQuery users (native javascript-only):
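
Without jQuery, the same options object is handed to the constructor (sketch):

    var uploader = new qq.s3.FineUploader({
        element: document.getElementById('fineUploader')
        // ...plus the same request, signature, objectProperties, validation, retry,
        // chunking, resume, deleteFile, paste, and iframeSupport options shown above
    });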

Again, the above examples represent somewhat of an advanced setup. Also, as you can see, jQuery makes your life a bit easier, so use it if you can. Some of the properties of the request option were discussed in the previous section. Let’s step through the new features and options enabled in the above examples:

Fine Uploader S3 can notify your server directly when a file has been uploaded to S3 (success.endpoint)

If you specify this option, Fine Uploader S3 will send a POST request to your server that includes the relevant key name, UUID, bucket, and filename. This can be helpful if you need to perform some server-side tasks related to the file after it is safely stored in your S3 bucket. You can also perform additional checks on the file in S3 at this point, if you wish. Should any of your checks indicate a problem, you can alert Fine Uploader S3 via your server’s response, and the uploader will declare the upload a failure.

Specifying the object (file) key for S3 (objectProperties.key)

As you can see in the S3 options, you can ask Fine Uploader S3 to use the UUID it generates for the file as the object key (key: "uuid", which is the default), use the filename (key: "filename"), or specify a function (as in the above examples) where you determine the key name for each file on-demand. Your function will be called once for each file, just before Fine Uploader attempts to upload it for the first time. Fine Uploader S3 will pass the file ID as a parameter when invoking your function as well.

Please understand that use of the filename as S3 object key is strongly discouraged, as the filename is not guaranteed to be unique. If a user uploads a “foo.jpeg” and another user uploads “foo.jpeg” to the same bucket, the last upload will overwrite the existing “foo.jpeg” in your bucket if the filename is the sole identifier of the object key. This is especially problematic if you are supporting iOS devices, as iOS uses the same name for all image files (image.jpg).

When your function is invoked, you can either return the key name immediately based on some simple logic embedded in your client-side code, or you can ask your server via ajax to create a key name. In the latter case, you must return a qq.Promise. In fact, any non-blocking/async calls required to generate the key in this function require that your function observe the promise contract. Fine Uploader will delay further handling of that file (but not block the UI thread) until the promise is fulfilled via a call to the promise's "success" or "failure" methods. You may want to utilize this approach if your server has to, for example, create or look up an item in the database in order to determine the object's key name.
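
For example, a sketch of an asynchronous key function (slotting into the objectProperties option shown earlier; the /s3/keyname endpoint and its response shape are hypothetical):

    objectProperties: {
        key: function(fileId) {
            var promise = new qq.Promise();

            // ask the server to create or look up a key name for this file
            $.post('/s3/keyname', { fileId: fileId })
                .done(function(response) {
                    promise.success(response.keyName);
                })
                .fail(function() {
                    promise.failure('Could not determine a key name');
                });

            return promise;
        }
    }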

Associating user metadata with each object in S3 (request.params)

S3 allows you to store “user metadata” with each object in your bucket(s). This metadata can be retrieved via any one of the AWS SDKs. It is also made available as headers in the response to a simple GET request for the object. In the latter case, the user metadata names will be prefixed by Amazon with “x-amz-meta-“.

Fine Uploader S3 converts any parameters specified via the request.params option or via the setParams API method into “user metadata”. Note that the values of your parameters will be URL encoded by Fine Uploader S3 before they are associated with the object in your S3 bucket.

Displaying error messages for your users (failedUploadTextDisplay)

When a file ultimately fails, Fine Uploader S3 will extract the failure reason from the S3 response (if possible) or provide a canned error message based on some error detection logic in the code (if possible) and display this message next to the failed item. By default, Fine Uploader simply displays “Upload Failed” next to the failed item. The “custom” value for the mode property here instructs Fine Uploader S3 to attempt to display a more specific message based on the failure.

Validation rules

Fine Uploader S3 allows you to optionally put restrictions on files submitted by your users. In the above examples, we are

  • Restricting the allowable file extensions via the validation.allowedExtensions option.
  • Restricting the types of files that are selectable in the file chooser dialog (if the browser supports this) via the validation.acceptFiles option.
  • Limiting the maximum size of any selected files to 5 MB (if supported by the browser) via validation.sizeLimit. Note that Fine Uploader S3 will ALSO ask AWS (server-side) to enforce any size limits you have specified, but only for “simple” (non-chunked) uploads. There doesn’t appear to be a way to ask AWS to enforce this for chunked (multipart) uploads.
  • Preventing users from uploading more than a total of 3 files in this session via validation.itemLimit.

Support for auto & manual retry of failed uploads

Fine Uploader S3 also will automatically retry a failed upload a number of times before giving up (if enabled). After the automatic retries have expired, Fine Uploader S3 will allow the user to manually request a retry via the default UI. You can also programmatically issue retry requests via Fine Uploader S3’s API.

Support for file partitioning/chunking & auto-resume of interrupted/failed uploads

If supported by the browser (not IE9 and older), Fine Uploader S3 will optionally split large files into parts and send each part separately. This is a life-saver if a failure occurs midway through a large file (due to loss of connection, etc). In that scenario, you don’t have to start the entire file over. Fine Uploader S3 will retry starting with the failed chunk.

Building on this, we have the auto-resume feature. If enabled, Fine Uploader will let you pick up where you left off with a file in another session. Suppose you are in the middle of a large file upload, and either your computer/browser suddenly crashes, or you simply need to resume the upload at a later time. Fine Uploader S3 stores information about the file’s progress in your browser, and will read it back and resume the upload where you left off when you select or drop the file again in a future session.

The default chunk size for Fine Uploader S3 is 5 MiB. This is the minimum chunk size required by S3. If your file is smaller than this size, the upload will be a “simple” (non-chunked) upload. Also, be aware of this S3 restriction if you modify the default chunk size in Fine Uploader S3.

Deleting an uploaded file

If you want to allow your users to delete files already uploaded in the current session, you should enable this feature. If using the default UI, a "delete" button will appear next to each successfully uploaded file. You may, as always, utilize the Fine Uploader S3 API to delete a file as well. Note that minimal server-side code is required to handle this feature, as delete file requests are sent to your local server instead of Amazon S3. This is because it is not possible to send delete requests directly to S3 via the browser in IE9 and older. See the server-side integration section for details on handling such requests.

Uploading images via paste

If you would like to allow users to upload images directly to S3 by simply pasting them onto your page, enable this feature and set the targetElement to any element on your page that should receive the paste event. Fine Uploader S3 will take care of the rest for you! You can also prompt the user to provide a name (if using the default UI) via a dialog whenever an image is pasted, via the promptForName property. Note that this feature is currently only available in Chrome.

Of course, there are many other features you may enable, and many other ways to configure Fine Uploader S3 client-side. The above examples only cover a portion of the available options. See the links at the start of the client-side integration section for more details.

Server-side integration

Fine Uploader S3 and Amazon handle the majority of the work for you. However, in order to support uploads directly to S3, you are required to, at the very least, sign requests (using your AWS secret key) sent by Fine Uploader. This must be done server-side in order to keep your secret key a secret.

The following functional server-side examples are available for you to use as a guide:

Note that the PHP example is used on fineuploader.com to support the live Fine Uploader S3 demo. Other server-side examples will be added over time. Read on for details on handling server-side tasks when using Fine Uploader S3.

Here are server-side tasks that you must perform. Some are optional, as noted:

So, as you can see, in its simplest form, only very minimal server-side code is required. Even with more advanced options, you’ll find that your server-side code can still be quite simple.

Signing policy documents

Signing policy documents server-side is the one mandatory task your server must perform, regardless of features enabled and browsers supported. Fine Uploader generates policy documents for you, based on properties of the file and using some of the options you have set for your uploader instance. A policy document must be attached to each "simple" (non-chunked) S3 upload request. The policy document also must be signed, and that signature is then attached by Fine Uploader to the request as well. Your server is responsible for signing these requests.

Policy document format

Fine Uploader S3 will send a POST request to the endpoint specified in the signature.endpoint option. This POST request will contain an “application/json” payload: the policy document. The body of this POST request will look something like this (notes added to the documentation for clarity):
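
A sketch of a typical payload, with illustrative values (the exact set of conditions depends on your configuration):

    {
        "expiration": "2013-08-06T20:49:57.000Z",
        "conditions": [
            { "acl": "private" },
            { "bucket": "fineuploadertest" },
            { "Content-Type": "image/jpeg" },
            { "success_action_status": "200" },
            { "key": "4f2503cf-placeholder-uuid.jpg" },
            ["content-length-range", "0", "5242880"]
        ]
    }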

Note that the policy document will ALSO contain ANY parameters specified in your client-side code, prefixed with “x-amz-meta”. For example, if you specify a parameter of “foo” with a value of “bar”, the following entry would also be present in the conditions array of the generated policy document:
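
That is:

    { "x-amz-meta-foo": "bar" }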

Note that parameter values are URL encoded by Fine Uploader.

Examining the policy document

You should programmatically examine policy documents, server-side, before signing them. It is possible that a malicious user could tamper with the generated policy document before it is sent off to your server by Fine Uploader. If any values of the policy document are not as expected, simply return a non-200 response status code (such as 500) AND the following in the body of your “application/json” response:
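
That is, a body along these lines:

    { "invalid": true }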

The above response will tell Fine Uploader S3 that the policy document may have been tampered with and it will NOT attempt to send the associated file to S3 until a proper signature has been received by your server. Fine Uploader may retry sending the signature request to your server (if retry is enabled).

Responding to a policy document signature request

Your server must return an “application/json” response with content that includes the base-64 encoded policy document AND the signed base-64 encoded policy document. So, your response payload will look something like this:
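
For example:

    {
        "policy": "<base-64 encoded policy document>",
        "signature": "<signature of the base-64 encoded policy document>"
    }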

Most server-side languages/frameworks make it easy to base-64 encode a string. For example, Java has a BASE64Encoder class. Amazon provides examples for PHP and Python as well in their developer documentation.

Signing the policy document is quite simple as well. Again, see the examples provided in Amazon’s developer documentation for more details.
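
As an illustration only (this is not one of the official server-side examples), a Node.js-style sketch of the encode-and-sign step described above:

    var crypto = require('crypto');

    // Base-64 encode the policy document, then sign that base-64 string with
    // your AWS secret key (HMAC SHA1, with a base-64 encoded result).
    function signPolicy(policyDocument, awsSecretKey) {
        var base64Policy = Buffer.from(JSON.stringify(policyDocument)).toString('base64');
        var signature = crypto.createHmac('sha1', awsSecretKey)
                              .update(base64Policy)
                              .digest('base64');

        return { policy: base64Policy, signature: signature };
    }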

Also note that you SHOULD provision a specific pair of keys for client-side use by Fine Uploader that is heavily restricted. See the “Securing your bucket” section in this blog post for more details.

Supporting IE9 and older

It’s trivial to support IE9 and older browsers (including Android 2.3.x and older). Simply provide an accessible empty HTML file/page. That’s it. Really! The path to this file must be specified in the iframeSupport.localBlankPagePath option. The path can be relative (Fine Uploader will determine the absolute path for you) but it MUST be on the same origin/domain as the one hosting your Fine Uploader S3 instance.

Why does Fine Uploader need you to provide an empty page on the same domain as the uploader instance? Well, in browsers that do not support the File API (such as IE9 and older) Fine Uploader must dynamically create a form containing the file input and any associated parameters and submit it. The form targets an iframe to ensure the response does not modify/redirect the main window. The content of the response will be loaded into the associated iframe. Fine Uploader must examine the content of that iframe to determine the status of the upload request. If the iframe is not on the same domain as the window hosting Fine Uploader, there is no way to access the contents of this frame (due to cross-origin restrictions). To get around this, Fine Uploader S3 sends a “success_action_redirect” parameter with upload requests when older browsers are involved. The value of this parameter is the absolute path to the blank page you have provided. S3 responds with a 303 status code in the response and includes the URL of your blank page. This instructs the browser to redirect to your blank page, allowing Fine Uploader to access the contents of the iframe. While the contents are empty, the fact that Fine Uploader S3 can access the contents without a security exception means that the request likely succeeded. To be absolutely sure, Fine Uploader S3 examines some parameters in the iframe’s URL (such as the bucket and key) to ensure that the response refers to the correct file.

Chunking support

To support chunking, your server only needs to sign a string that represents the headers of the request to be sent to S3. Fine Uploader S3 will generate a string based on the request type and required header values, pass it to your server in an “application/json” POST request, and expect your server to sign it using a portion of the examples provided in Amazon’s developer documentation. Note that this signature differs slightly from the policy document signature. In this case, you should NOT base-64 encode the string before signing it. Simply generate an HMAC SHA1 signature of the string using your AWS secret key and base-64 encode the result.

Fine Uploader S3 will send the following in the payload of the signature request:
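
Roughly (the header string below is illustrative, following the format shown a bit later in this section):

    {
        "headers": "PUT\n\n\n\nx-amz-date:Wed, 07 Aug 2013 16:45:09 GMT\n/fineuploadertest/4f2503cf-placeholder-uuid.jpg?partNumber=1&uploadId=YOUR_UPLOAD_ID"
    }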

The presence of the “headers” property in the JSON request alerts your server to the fact that this is a request to sign a REST/multipart request and not a policy document.

Your server only needs to return the following in the body of an “application/json” response:
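
That is, roughly:

    { "signature": "<base-64 encoded HMAC SHA1 signature of the string>" }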

Fine Uploader S3 utilizes the following REST API calls, all of which require signatures:

If you are curious about the format of the strings Fine Uploader will send to your server for signing, the general format is:

{METHOD}\n\n{Content-Type Value (optional)}\n\n{CUSTOM HEADERS, EACH ENDING WITH A NEWLINE}/{BUCKET}/{KEYNAME}?{REQUEST-SPECIFIC QUERY PARAMS}

This is explained a bit more in the AWS REST API documentation. You probably don’t need to worry about this though, as you SHOULD provision a specific pair of keys for client-side use by Fine Uploader that is heavily restricted. See the “Securing your bucket” section in this blog post for more details.

Delete file support

If you enable the deleteFile feature, Fine Uploader S3 will send any delete requests directly to your server. Your server is expected to communicate with S3 via an SDK to delete the associated file. Why doesn’t Fine Uploader S3 simply send the delete file requests to S3 directly? Well, this is possible, but not in IE9 and older. Instead of going through the hassle of implementing this REST API call client-side, only to also require your server-side to make this call itself if IE9 and older is involved (most web apps likely have to support at least IE9), I opted to simply delegate to the server for all browsers. IE9 and older cannot send DELETE method requests to S3 since the request is cross-origin and IE9 and older only support POST and GET cross-origin requests (via XDomainRequest).

It’s quite simple to delete a file on S3 in your server-side code if you utilize the appropriate SDK for your server-side language provided by Amazon. Fine Uploader will send, by default, a DELETE request to your local server. The last item in the URI path will be the UUID of the file. Fine Uploader S3 will also include the “key” and “bucket” as parameters in the query string for DELETE requests.

If you change the deleteFile method in the options to POST, Fine Uploader will send a POST request to the endpoint you have specified in your deleteFile.endpoint option. This request will be “application/x-www-form-urlencoded” and will include a “uuid” parameter (with the value of the UUID), along with “bucket” and “key” parameters in the payload of the request.

Handling “successfully uploaded to S3” POST requests

If you specify a value for the success.endpoint client-side option, Fine Uploader S3 will send a POST request to your server after each file has been successfully uploaded to S3. This request will be “application/x-www-form-urlencoded” with the following parameters in the payload of the request: “key”, “uuid”, “name”, and “bucket”.

If you need to perform some specific task to verify the file server-side at this point, you can do so when handling this request and let Fine Uploader know if there is a problem with this file by returning a response with an appropriate (non-200) status code. Furthermore, you can include a message to be displayed (FineUploader/default-UI mode) and passed to your onError callback handler via an error property in the payload of your response. In this case, the response payload must be valid JSON.

You can also pass any data to your Fine Uploader "complete" event handler, client-side, by including it in a JSON response to this request. In fact, the S3 demo server-side code on FineUploader.com is passing a signed URL to the `complete` handler which allows you to view the file you've uploaded.

CORS Support

Working in a cross-domain environment normally poses additional challenges for client-side code. Fine Uploader insulates you from as much of this as possible. Fine Uploader S3 also includes full support for cross-domain environments. Rest assured that all features will work nicely in Fine Uploader S3 even if you must negotiate a cross-origin environment. All browsers are supported as well, except for IE7, as IE7 has no support for cross-domain ajax requests.

Modern browsers

CORS support in modern browsers (mostly all except IE9 and older) is fairly simple. Modern browsers support CORS ajax requests directly on the XMLHttpRequest object, which is used to initiate ajax requests for signatures, etc. To properly support a CORS environment on these browsers, you must set the expected property of the cors option to true. On your server, you must also handle preflight (OPTIONS) requests by setting the appropriate Access-Control-Allow-Origin, Access-Control-Allow-Headers, and Access-Control-Allow-Methods headers on your response. Finally, you must also include an Access-Control-Allow-Origin header on all responses. The upload-to-s3 demo on fineuploader.com demonstrates handling cross-origin requests with provided client-side and server-side code. Have a look at the demo and the associated server-side code for more details. Also, Mozilla Developer Network has an excellent article on CORS, which is a must-read for anyone dealing with this sort of environment.

IE9 and older

IE9 and IE8 do have support for cross-origin ajax, but this support is very limited. Microsoft added proper CORS support to XMLHttpRequest in IE10. A great deal of time was spent tackling this cross-origin support in Fine Uploader and Fine Uploader S3 for IE8 and IE9, but there are some leaky abstractions that unfortunately cannot be avoided. Below, I detail the additional steps that must be taken when dealing with a cross-origin environment in IE9 and IE8.

Client-side configuration

In addition to enabling CORS, as detailed in the previous section, you also must explicitly enable cross-domain ajax support in IE9 and IE8 in Fine Uploader S3 by setting the allowCors property of the cors option to true.

Parsing POST request payloads

One limitation (of many) of XDomainRequest in IE9 and IE8 is the inability to set ANY request headers. This means that most server-side languages and frameworks will not be able to easily parse the contents of requests. For example, a POST request with URL-encoded parameters in the payload will not have a Content-Type set, preventing most server-side frameworks from automatically parsing the parameters. This will require you to write server-side code that parses the content of these requests based on the expected Content-Type. The PHP example in the Fine Uploader Server Github repository provides an example of how this can be easily accomplished in PHP.

Delete files feature

In order to support the delete file feature (if you choose to enable this) you will need to set the method property of the deleteFile option to "POST". The default method is DELETE, but only POST and GET cross-origin requests are supported in IE9 and IE8. Fine Uploader will send an additional parameter of “_method” with a value of “DELETE” along with these requests. Your server side code should be able to pick out a DELETE request by looking for this “_method” parameter in the request payload. This convention has been discussed and popularized in O’Reilly’s RESTful Web Services.

Responding to success.endpoint POST requests

Fine Uploader S3 provides you the opportunity to optionally inspect the file in S3 (after the upload has completed) and declare the upload a failure if something is obviously wrong with the file. If the success.endpoint property of the request option is set, Fine Uploader S3 will send a POST request after the file has been stored in S3. This request will contain parameters for the bucket, key, filename, and UUID associated with the uploaded file. If the file is invalid for some reason, you can easily return a non-200 response and Fine Uploader S3 will declare the upload a failure. Fine Uploader S3 also provides you the opportunity to have a custom error message, determined by your server, displayed next to the failed file. To do this, you must return a valid JSON response containing an “error” property with a value set to the message to display next to the failed file. In a cross-origin environment in IE9 and IE8, if you want to display such a message, you MUST return a 200 response along with the error property. This is due to the fact that XDomainRequest does not allow access to the response content if the response is determined to be a non-success response, such as one with a non-200 response. If you do return a non-200 response in IE9 or IE8, you will only see an “Upload Failed” message next to the failed file.

Conclusion

This is a fairly big feature and a lot of work went into its development and documentation. This is, quite possibly, the most complex feature ever implemented in Fine Uploader. The goal here is to make it as easy as possible for your S3-dependent web application to accept uploads from your users, and I hope we have achieved that. If we have missed something, or if you simply want to propose an enhancement to this feature, please open up a request in the project's Github issue tracker. As always, requests related to technical support should be opened on Stackoverflow under the "fine-uploader" tag, where Fine Uploader team developers will monitor and answer your support questions.

Also, be sure to try out Fine Uploader S3 on the website!