SvelteKit S3 Compatible Storage: Presigned Uploads
In this post on SvelteKit S3 compatible storage, we will take a look at how you can add an upload feature to your Svelte app. We use presigned links, allowing you to share private files in a more controlled way. Rather than focus on a specific cloud storage provider's native API, we take an S3 compatible approach. Cloud storage providers like Backblaze, Supabase and Cloudflare R2 offer access via an API compatible with Amazon's S3 API. The advantage of using an S3 compatible API is flexibility: if you later decide to switch provider, you will be able to keep the bulk of your existing code.
We will build a single page app in SvelteKit which lets the visitor upload a file to your storage bucket. You might use this as a convenient way of uploading files for your projects to the cloud. Alternatively, it can provide a handy starting point for a more interactive app, letting users upload their own content. That might be for a photo sharing app, your own micro-blogging service, or an app letting clients preview and provide feedback on your amazing work. I hope you find this interesting; if so, let's get going!
Let's start by creating a new skeleton SvelteKit project. Type the following commands in the terminal:
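The original commands are not shown above; they likely look something like this (the project name is a placeholder, and your SvelteKit version's scaffolding command may differ):

```shell
# project name is a placeholder — call yours whatever you like
pnpm create svelte sveltekit-s3-compatible-storage
cd sveltekit-s3-compatible-storage
pnpm install
```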
We will be using the official AWS SDK for some operations on our S3 compatible cloud storage. As well as the npm packages for the SDK, we will need a few other packages, including some fonts for self-hosting. Let's install all of these now:
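The exact package list is not reproduced above; based on what the post uses, the installs probably look something like this (the font package is a placeholder, pick your own):

```shell
# AWS SDK packages for S3 operations and presigning, plus dotenv for env vars
pnpm add -D @aws-sdk/client-s3 @aws-sdk/s3-request-presigner dotenv
# self-hosted fonts — placeholder choice, swap in whichever you prefer
pnpm add -D @fontsource/montserrat
```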
Although most of the code we look at here should work with any S3 compatible storage provider, the mechanism for initial authentication will be slightly different for each provider. Even taking that into account, it should still make sense to use the provider's S3 compatible API for all other operations to benefit from the flexibility this offers. We focus on Backblaze for initial authentication. Check your own provider's docs for their mechanism.
To get S3 compatible storage parameters from the Backblaze API, you need to supply an Account ID and an Account Auth token with read and write access to the bucket we want to use. Let's add these to a .env file, together with the name of the bucket (if you already have one set up). Buckets offer a mechanism for organising objects (or files) in cloud storage. They play a role analogous to folders or directories on your computer's file system.
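The .env file itself is not shown above; it might look like this (the variable names are assumptions, so match them to whatever your endpoint code reads):

```
BACKBLAZE_ACCOUNT_ID="your-account-id"
BACKBLAZE_ACCOUNT_AUTH_TOKEN="your-app-key"
BUCKET_NAME="your-bucket-name"
```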
The last bit of setup before spinning up the dev server is to configure the dotenv environment variables package.
Use this command to start the dev server:
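Assuming pnpm (which the post uses later), that command is:

```shell
pnpm run dev
```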
By default it will run on TCP port 3000. If you already have something running there, see how you can change server ports in the article on getting started with SvelteKit.
We will generate presigned read and write URLs on the server side. Presigned URLs offer a way to grant temporary, limited access; links are valid for 15 minutes by default. Potential clients, app users and so on will be able to access just the files you want them to access. Also, because you are using presigned URLs, you can keep the access mode on your bucket set to private.
To upload a file we will use the write signed URL. We will also get a read signed URL, which we can use to download the file if we need to.
Let's create a SvelteKit server endpoint to listen for new presigned URL requests. Create a src/routes/api folder, adding a presigned-urls.json.js file with the following content:
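The endpoint code itself is not reproduced here, but as a rough sketch, the Backblaze authorisation step it relies on looks something like this (function and environment variable names are my own assumptions, not the original code):

```javascript
// Sketch only: exchange Backblaze credentials (loaded into process.env,
// e.g. by dotenv) for API parameters, including the s3ApiUrl we need.
async function authoriseAccount() {
  // Backblaze expects HTTP Basic auth: "keyId:applicationKey", Base64 encoded
  const credentials = Buffer.from(
    `${process.env.BACKBLAZE_ACCOUNT_ID}:${process.env.BACKBLAZE_ACCOUNT_AUTH_TOKEN}`,
  ).toString('base64');
  const response = await fetch(
    'https://api.backblazeb2.com/b2api/v2/b2_authorize_account',
    { headers: { Authorization: `Basic ${credentials}` } },
  );
  // the JSON response includes s3ApiUrl along with auth tokens
  return response.json();
}
```

In the real endpoint this would run inside the request handler, with the results feeding the S3 client setup we look at next.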
This code works for Backblaze's API but will be slightly different if you use another provider. The rest of the code we look at should work with any S3 compatible storage provider.
In line 9 we pull the credentials we stored earlier in the .env file. Moving on, we define some constants which are useful when uploading large files. Typically you will want to split large files into smaller chunks, and these numbers give you some guidelines on how big each of the chunks should be. We look at presigned multipart uploads in another article. Most important, though, is the s3ApiUrl, which we will need in the next step.
Next we use that S3 API URL to get the S3 region, and then use that to get the presigned URLs from the SDK. Add this code to the bottom of the same file:
In line 63 we generate a unique session id. That's the server-side setup. Next, let's look at the client.
We'll split the code into a couple of stages. First, let's add our script block with the code for interfacing with the endpoint that we just created, as well as the cloud provider. We get presigned URLs from the endpoint, then upload directly to the cloud provider from the client. Since all we need for upload is the presigned URL, there is no need to use a server endpoint for this step, which helps keep the code simpler.
Replace the content of
src/routes/index.svelte with the following:
The first part is mostly about setting up the user interface state. There is nothing unique to
this app there, so let's focus on the
handleSubmit function. There are two parts: the first, in which we get a signed URL from the endpoint we just created, and the second, in which we use the FileReader API to upload the file to the cloud.
The FileReader API lets us read in a file given the local path and output a binary string, DataURL
or an array buffer. You would use a DataURL if you wanted to Base64 encode an image (for example).
You could then set the
src of an
<img> element to a generated Base64 data uri string or upload the image to a Cloudflare worker for processing.
For our use case of uploading files to cloud storage, we instead go for the array buffer output.
The API is asynchronous, so we can just tell it what we want to do once the file is loaded and carry on living our life in the meantime! We create an instance of the API and, in onloadend, specify that we want to use fetch to upload our file to the cloud once it is loaded into an array buffer (from the local file system). In line 62 (after the onloadend block), we specify what we want to read. The file actually comes from a file input, which we will add in a moment.
The fetch request is inside the onloadend block. We make a PUT request, including the file type in a header. The body of the request is the result of the file read from the FileReader API. Because we are making a PUT request from the browser, and because the content type may not be text/plain, we will need some CORS configuration. We'll look at that before we finish.
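Pulling the pieces above together, the read-then-upload flow might be sketched like this (browser-only APIs; the function name and structure are illustrative, not the post's original code):

```javascript
// Sketch: read a File from a file input into an ArrayBuffer, then PUT
// it to the presigned write URL. Runs in the browser (FileReader, fetch).
function uploadToPresignedUrl(file, writeSignedUrl) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onloadend = async () => {
      try {
        const response = await fetch(writeSignedUrl, {
          method: 'PUT',
          headers: { 'Content-Type': file.type },
          body: reader.result, // the ArrayBuffer produced by the read
        });
        resolve(response);
      } catch (error) {
        reject(error);
      }
    };
    reader.onerror = () => reject(reader.error);
    reader.readAsArrayBuffer(file); // kick off the read; onloadend fires when done
  });
}
```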
How do we get the file name and type? When the user selects a file from the file input, the handleChange code runs. This gets the file, by updating the files variable, but does not read the file in (that happens in our FileReader API code). Next, when the user clicks the Upload button, triggering the handleSubmit call, we get the name and file content type.
Next we'll add the markup, including the file browse input which lets the user select a file to upload. After that we'll add some optional styling, look at CORS rules and finally test.
Paste this code at the bottom of the same file:
You can see the file input code in lines 123–128. We have set the input to allow the user to select multiple files (the multiple attribute in line 123). For simplicity, the logic we added previously only uploads the first file, though you can tweak it if you need multiple uploads in your application. In line 125 we set the input to accept only image files, with accept="image/*". This can be helpful for user experience, as typically in the file select user interface just image files will be highlighted. You can change this to accept just a certain image format, or different file types like PDF or video formats, whatever your application needs. See more on file type specifiers in the MDN docs.
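The original markup is not shown here, but a file input along the lines described (attribute choices taken from the text; the id and handler wiring are assumptions) looks something like:

```html
<!-- multiple: allow selecting several files; accept: highlight images only -->
<input
  id="file"
  type="file"
  multiple
  accept="image/*"
  bind:files
  on:change={handleChange}
/>
<button type="submit">Upload</button>
```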
Finally, before we check out CORS, here's some optional styling. This can be nice to add, as the default HTML file input looks a little brutalist!
src/routes/index.svelte
CORS rules are a browser security feature which limits what can be sent to a different origin. By a different origin, we mean, for example, sending data to example-b.com while you are on the example-a.com site. If a cross-origin request does not meet some basic criteria (a text/plain content type, for example), the browser will perform some extra checks. We send a PUT request from our code, so the browser will send a so-called preflight request ahead of the actual request. This just checks with the site we are sending the data to what it is expecting us to send, or rather, what it will accept.
To avoid CORS issues, we can set CORS rules with our storage provider. It is possible to set them on your bucket when you create it; check with your provider on the mechanism for this. With Backblaze, you can set CORS rules in JSON format using the b2 command line utility. Here is an example file:
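The example rules file is not included above; a Backblaze-style rules file might look like this (the origins, rule names and max ages are placeholders to adapt to your own setup):

```json
[
  {
    "corsRuleName": "development",
    "allowedOrigins": ["https://test.localhost.com:3030"],
    "allowedHeaders": ["*"],
    "allowedOperations": ["s3_put", "s3_get", "s3_head"],
    "exposeHeaders": [],
    "maxAgeSeconds": 300
  },
  {
    "corsRuleName": "production",
    "allowedOrigins": ["https://www.example.com"],
    "allowedHeaders": ["*"],
    "allowedOperations": ["s3_put", "s3_get", "s3_head"],
    "exposeHeaders": [],
    "maxAgeSeconds": 3600
  }
]
```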
We can set separate rules to let our dev and production requests work. In the allowed origin for dev, we set a dummy hostname instead of localhost, and on top we run in HTTPS mode. You may be able to get everything working without this setup, but try it if you have issues. With the b2 CLI utility installed, add the CORS configuration to Backblaze by running:
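The command is not shown above; it is along these lines, though the bucket name is a placeholder and flag syntax can vary between b2 CLI versions, so check the b2 update-bucket docs:

```shell
# assumes the rules above are saved as backblaze-bucket-cors-rules.json
b2 update-bucket --corsRules "$(cat backblaze-bucket-cors-rules.json)" your-bucket-name allPrivate
```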
You can see more on Backblaze CORS rules in their documentation.
To run the SvelteKit dev server in HTTPS mode, update your package.json dev script:
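For the SvelteKit version current when this post was written, that means adding the --https flag (the port shown matches the one used below); newer SvelteKit versions configure HTTPS through Vite instead, so check your version's docs:

```json
{
  "scripts": {
    "dev": "svelte-kit dev --https --port 3030"
  }
}
```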
Then restart the dev server with the usual pnpm run dev command. Learn more about this in the video on running a secure SvelteKit dev server.
To set a local hostname on macOS, add a line to your hosts file:
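The line maps the dummy hostname used in the CORS rules to the loopback address:

```
127.0.0.1 test.localhost.com
```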
Then, instead of accessing the site via localhost, in your browser use https://test.localhost.com:3030. This worked for me on macOS. The same will work on typical Linux and Unix systems, though the file you change will be /etc/hosts. If you are using dnscrypt-proxy or Unbound, you can make a similar change in the relevant config files. If you use Windows and know how to do this, please drop a comment below to help out other Windows users.
Try uploading a file using the new app. Also make sure the download link works.
In this post we learned:
- why you would use the S3 compatible API for cloud storage instead of your storage provider's native API,
- how to use the AWS SDK to generate a presigned upload URL,
- a way to structure a file upload feature in a SvelteKit app.
I do hope there is at least one thing in this article which you can use in your work or a side project. As an extension, you might want to pull a list of the bucket's contents and display all of its files. You could even add options to delete files. On top of that, you could calculate a hash of the file before upload and compare it to the hash generated by your storage provider; this gives you a way to verify file integrity. There's a world of different apps you can add an upload feature to; knock yourself out!
- AWS offer a cloud storage service called S3. The S3 API can be used to access storage on most other providers. Using this S3 API lets you store and retrieve your data from any of these providers in a standard way. Doing so gives you some flexibility, making it easier to change storage provider at a later date.
- A presigned URL offers a mechanism for granting temporary access to private files in your storage bucket on a per file basis. Using presigned URLs you can let your clients or site visitors download files from your private bucket.
- The easiest way is to use the HTML5 FileReader API. We saw how to do that from the browser on the client side of your SvelteKit app in this post. We also saw how to configure CORS for your bucket.
Have you found the post useful? Would you prefer to see posts on another topic instead? Get in touch with ideas for new posts. Also if you like my writing style, get in touch if I can write some posts for your company site on a consultancy basis. Read on to find ways to get in touch, further below. If you want to support posts similar to this one and can spare a few dollars, euros or pounds, please consider supporting me through Buy me a Coffee.
Finally, feel free to share the post on your social media accounts for all your followers who will find it useful. As well as leaving a comment below, you can get in touch via @askRodney on Twitter and askRodney on Telegram. Also, see further ways to get in touch with Rodney Lab. I post regularly on SvelteKit as well as other topics. Subscribe to the newsletter to keep up to date with our latest projects.