😕 Why S3 Compatible Storage? #
In this post on SvelteKit S3 compatible storage, we will take a look at how you can add an upload feature to your Svelte app. We use pre-signed links, allowing you to share private files in a more controlled way. Rather than focus on a specific cloud storage provider's native API, we take an S3 compatible approach. Cloud storage providers like Backblaze, Supabase and Cloudflare R2 offer access via an API compatible with Amazon's S3 API. The advantage of using an S3 compatible API is flexibility: if you later decide to switch provider, you will be able to keep the bulk of your existing code.
We will build a single page app in SvelteKit which lets the visitor upload a file to your storage bucket. You might use this as a convenient way of uploading files for your projects to the cloud. Alternatively, it can provide a handy starting point for a more interactive app, letting users upload their own content. That might be for a photo sharing app, your own microblogging service, or for an app letting clients preview and provide feedback on your amazing work. I hope this is something you find interesting, if it is, let's get going.
⚙️ Getting Started #
Let's start by creating a new skeleton SvelteKit project. Type the following commands in the terminal:
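The exact commands change between SvelteKit releases; at the time of writing, a skeleton project can be scaffolded like this (the project name here is just an example):

```shell
npm create svelte@latest sveltekit-s3-compatible-storage
cd sveltekit-s3-compatible-storage
npm install
```

Choose the skeleton project option when prompted.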
We will be using the official AWS SDK for some operations on our S3 compatible cloud storage. As well as the npm packages for the SDK, we will need a few other packages including some fonts for self-hosting. Let’s install all of these now:
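As a sketch, the install command might look like the following. The AWS SDK packages and cuid2 are used later in the post; the `@fontsource/roboto` package is a placeholder for whichever fonts you choose to self-host:

```shell
npm install --save-dev @aws-sdk/client-s3 @aws-sdk/s3-request-presigner \
  @paralleldrive/cuid2 @fontsource/roboto
```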
Initial Authentication #
Although most of the code we look at here should work with any S3 compatible storage provider, the mechanism for initial authentication will be slightly different for each provider. Even taking that into account, it should still make sense to use the provider's S3 compatible API for all other operations to benefit from the flexibility this offers. We focus on Backblaze for initial authentication. Check your own provider’s docs for their mechanism.
To get S3 compatible storage parameters from the Backblaze API, you need to supply an Account ID and an Account Auth token with read and write access to the bucket we want to use. Let's add these to a `.env` file together with the name of the bucket (if you already have one set up). Buckets offer a mechanism for organizing objects (or files) in cloud storage. They play a role analogous to folders or directories on your computer's file system.
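As an assumed layout (the variable names are placeholders; match them to whatever names your code reads), the `.env` file might look like this:

```shell
ACCOUNT_ID="your-backblaze-account-id"
ACCOUNT_AUTH_TOKEN="your-backblaze-app-key"
BUCKET_NAME="your-bucket-name"
```

Remember to keep this file out of version control by adding it to `.gitignore`.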
Start the dev Server #
Use this command to start the dev server:
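From the project folder, that is:

```shell
npm run dev
```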
By default, it will run on TCP port `5173`. If you already have something running there, see how you can change server ports in the article on getting started with SvelteKit.
🔗 Pre‑signed URLs #
We will generate pre-signed read and write URLs on the server side. Pre-signed URLs offer a way to grant temporary, limited access: links are valid for 15 minutes by default. Potential clients, app users and so on will be able to access just the files you want them to access. Also, because you are using pre-signed URLs, you can keep the access mode on your bucket set to private.
To upload a file, we will use the “write” pre-signed URL. We will also get a read signed URL. We can use that to download the file if we need to.
Let's create a SvelteKit server endpoint to listen for new pre-signed URL requests. Create a `src/routes/api/presigned-urls.json` folder, adding a `+server.js` file with the following content:
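As a sketch of what that endpoint might look like (the request body shape, `{ key }`, and the helper import path are assumptions; adapt them to your own code):

```javascript
// src/routes/api/presigned-urls.json/+server.js
import { json } from '@sveltejs/kit';
import { presignedUrls } from '$lib/utilities/storage';

export async function POST({ request }) {
	// the client sends the object key (file name) it wants signed URLs for
	const { key } = await request.json();
	const { readSignedUrl, writeSignedUrl } = await presignedUrls(key);
	return json({ readSignedUrl, writeSignedUrl });
}
```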
Utilities #
This endpoint will be called from our client `+page.svelte` file later. You will see it references a `presignedUrls` function, which we have not yet defined. Create a `src/lib/utilities` folder and in there make a `storage.js` file with this content:
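A sketch of the first part of that file follows, assuming the variable names from the `.env` file and Backblaze's `b2_authorize_account` endpoint (the function and field names here are illustrative):

```javascript
// src/lib/utilities/storage.js
import { ACCOUNT_AUTH_TOKEN, ACCOUNT_ID } from '$env/static/private';

export async function authorizeAccount() {
	// Basic Auth: Base64 encode "accountId:authToken"
	const credentials = Buffer.from(`${ACCOUNT_ID}:${ACCOUNT_AUTH_TOKEN}`).toString('base64');
	const response = await fetch('https://api.backblazeb2.com/b2api/v2/b2_authorize_account', {
		headers: { Authorization: `Basic ${credentials}` },
	});
	// the response includes part size guidance and the S3 compatible API URL
	const { absoluteMinimumPartSize, recommendedPartSize, s3ApiUrl } = await response.json();
	return { absoluteMinimumPartSize, recommendedPartSize, s3ApiUrl };
}
```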
This code works for Backblaze’s API, but will be slightly different if you use another provider. The rest of the code we look at should work with any S3 compatible storage provider.
In lines `1`–`5` we pull in the credentials we stored earlier in the `.env` file. Moving on, in lines `14`–`17` we see how you can generate a Basic Auth header in JavaScript. Finally, the Backblaze response returns a recommended and minimum part size. These are useful when uploading large files: typically, you will want to split large files into smaller chunks, and these numbers give you some guidelines on how big each of the chunks should be. We look at pre-signed multipart uploads in another article. Most important, though, is the `s3ApiUrl` (line `41`), which we will need to create a JavaScript S3 client.
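As a standalone illustration of that Basic Auth step (with placeholder credentials, runnable in Node):

```javascript
// Placeholder credentials, for illustration only
const accountId = '0123456789ab';
const accountAuthToken = 'K001-placeholder-key';

// HTTP Basic Auth: Base64 encode "id:token" and prefix with the scheme
const credentials = Buffer.from(`${accountId}:${accountAuthToken}`).toString('base64');
const basicAuthHeader = `Basic ${credentials}`;

console.log(basicAuthHeader);
```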
Creating Pre‑signed Links with S3 SDK #
Next, we use that S3 API URL to get the S3 region, and then use that to get the pre-signed URLs from the SDK. Add this code to the bottom of the `storage.js` file:
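A sketch of that second part, assuming the AWS SDK v3 packages installed earlier and an `authorizeAccount` helper earlier in the file which returns the `s3ApiUrl` (that helper name, the env variable names and the region parsing are assumptions):

```javascript
import { GetObjectCommand, PutObjectCommand, S3 } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { createId } from '@paralleldrive/cuid2';
import { ACCOUNT_AUTH_TOKEN, ACCOUNT_ID, BUCKET_NAME } from '$env/static/private';

// e.g. https://s3.eu-central-003.backblazeb2.com -> eu-central-003
function getRegion(s3ApiUrl) {
	return s3ApiUrl.split('.')[1];
}

export async function presignedUrls(key) {
	const { s3ApiUrl } = await authorizeAccount();
	const client = new S3({
		endpoint: s3ApiUrl,
		region: getRegion(s3ApiUrl),
		credentials: { accessKeyId: ACCOUNT_ID, secretAccessKey: ACCOUNT_AUTH_TOKEN },
	});

	// collision-resistant session id keeps uploads from different sessions apart
	const bucketParams = { Bucket: BUCKET_NAME, Key: `${createId()}/${key}` };

	// links expire after 15 minutes (900 seconds)
	const readSignedUrl = await getSignedUrl(client, new GetObjectCommand(bucketParams), {
		expiresIn: 15 * 60,
	});
	const writeSignedUrl = await getSignedUrl(client, new PutObjectCommand(bucketParams), {
		expiresIn: 15 * 60,
	});

	return { readSignedUrl, writeSignedUrl };
}
```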
In line `64` we use the `@paralleldrive/cuid2` package to help us generate a unique (collision-resistant) session id. That's the server-side setup. Next, let's look at the client.
🧑🏽 Client Home Page JavaScript #
We'll split the code into a couple of stages. First, let's add our script block with the code for interfacing with the endpoint that we just created, and also the cloud provider. We get pre-signed URLs from the endpoint, then upload directly to the cloud provider from the client. Since all we need for upload is the pre-signed URL, there is no need to use a server endpoint. This helps us keep the code simpler.
Replace the content of `src/routes/+page.svelte` with the following:
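A sketch of that script block follows (the endpoint path matches the one created above; the state and variable names are assumptions):

```svelte
<script>
	let files;
	let uploading = false;
	let downloadUrl = '';

	function handleChange(event) {
		// runs when the user selects a file; stores it without reading it yet
		files = event.target.files;
	}

	async function handleSubmit() {
		if (!files || files.length === 0) return;
		uploading = true;
		const { name, type } = files[0];

		// 1. get pre-signed read and write URLs from our endpoint
		const response = await fetch('/api/presigned-urls.json', {
			method: 'POST',
			headers: { 'Content-Type': 'application/json' },
			body: JSON.stringify({ key: name }),
		});
		const { readSignedUrl, writeSignedUrl } = await response.json();

		// 2. read the file into an array buffer, then PUT it straight to the bucket
		const reader = new FileReader();
		reader.onloadend = async () => {
			await fetch(writeSignedUrl, {
				method: 'PUT',
				headers: { 'Content-Type': type },
				body: reader.result,
			});
			downloadUrl = readSignedUrl;
			uploading = false;
		};
		reader.readAsArrayBuffer(files[0]);
	}
</script>
```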
The first part is mostly about setting up the user interface state. There is nothing unique to this app there, so let's focus on the `handleSubmit` function.
There are two parts: the first, in which we get a signed URL from the endpoint we just created, and the second, where we use the FileReader API to upload the file to the cloud.
FileReader API #
The FileReader API lets us read in a file, given the local path, and output a binary string, data URL or an array buffer. You would use a data URL if you wanted to Base64 encode an image (for example). You could then set the `src` of an `<img>` element to a generated Base64 data URI string, or upload the image to a Cloudflare worker for processing. For our use case, uploading files to cloud storage, we instead go for the `readAsArrayBuffer` option.
The API is asynchronous, so we can just tell it what we want to do once the file has been read and carry on living our life in the meantime! We create an instance of the API in line `51`. Using `onloadend`, we specify that we want to use fetch to upload our file to the cloud once it has been loaded into an array buffer (from the local file system). In line `63` (after the `onloadend` block), we specify what we want to read. The file actually comes from a file input, which we will add in a moment.
Fetch Request #
The fetch request is inside the `onloadend` block. We make a `PUT` request, including the file type in a header. The body of the request is the result of the file read from the FileReader API. Because we are making a PUT request from the browser, and also because the content type may not be `text/plain`, we will need some CORS configuration. We'll look at that before we finish.

How do we get the file name and type? When the user selects a file from the file input we just mentioned, the `handleChange` code in lines `22`–`25` runs. This gets the file, by updating the `files` variable, but does not read the file in (that happens in our FileReader API code). Next, when the user clicks the Upload button, triggering the `handleSubmit` function call, we get the name and file content type in line `35`.
🖥 Client Home Page Markup #
Next we'll add the markup, including the file browse input which lets the user select a file to upload. After that, we'll add some optional styling, look at CORS rules and finally test.
Paste this code at the bottom of the `+page.svelte` file:
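The markup might be sketched like this (it assumes the `handleSubmit` and `handleChange` handlers mentioned above; the state names are assumptions):

```svelte
<form on:submit|preventDefault={handleSubmit}>
	<label for="file">Choose a file to upload</label>
	<!-- multiple lets the user pick several files; our logic uploads only the first -->
	<input id="file" type="file" multiple accept="image/*" on:change={handleChange} />
	<button type="submit">Upload</button>
</form>
{#if downloadUrl}
	<p><a href={downloadUrl}>Download link</a></p>
{/if}
```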
You can see the file input code in lines `117`–`129`. We have set the input to allow the user to select multiple files (the `multiple` attribute in line `122`). For simplicity, the logic we added previously only uploads the first file, though you can tweak it if you need multiple uploads in your application. In line `124` we set the input to accept only image files with `accept="image/*"`. This can be helpful for user experience, as typically, in the file select user interface, just image files will be highlighted. You can change this to accept just a certain image format, or different file types like PDF or video formats, whatever your application needs. See more on file type specifiers in the MDN docs.
Finally, before we check out CORS, here's some optional styling for `src/routes/+page.svelte` and `src/lib/styles/global.css`. This can be nice to add, as the default HTML file input looks a little brutalist!
⛔ Cross‑origin Resource Sharing (CORS) #
CORS rules are a browser security feature which limits what can be sent to a different origin. By origin, we mean sending data to example-b.com when you are on the example-a.com site. If a cross-origin request does not meet some basic criteria (a `GET` request or a `POST` with `text/plain` content type, for example), the browser will perform some extra checks. We send a `PUT` request from our code, so the browser will send a so-called preflight request ahead of the actual request. This just checks with the site we are sending the data to what it is expecting us to send, or rather, what it will accept.
To avoid CORS issues, we can set CORS rules with our storage provider. It is possible to set them on your bucket when you create it. Check with your provider on the mechanism for this. With Backblaze, you can set CORS rules using the b2 command line utility in JSON format. Here is an example file:
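A hedged example of what those rules might look like (the origins are placeholders, and you should check the current Backblaze CORS rule fields before using this):

```json
[
	{
		"corsRuleName": "development",
		"allowedOrigins": ["https://test.localhost.com:5173"],
		"allowedHeaders": ["*"],
		"allowedOperations": ["s3_put"],
		"exposeHeaders": [],
		"maxAgeSeconds": 300
	},
	{
		"corsRuleName": "production",
		"allowedOrigins": ["https://www.example.com"],
		"allowedHeaders": ["*"],
		"allowedOperations": ["s3_put"],
		"exposeHeaders": [],
		"maxAgeSeconds": 3600
	}
]
```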
We can set separate rules to let our dev and production requests work. In the allowed origin for dev, we set a dummy hostname instead of localhost, and additionally, we run in HTTPS mode. You may be able to get everything working without this setup, but try it if you have issues. With the CLI utility installed, add the CORS configuration to Backblaze by running:
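The command might look something like this (the flag names can differ between b2 CLI versions, and the rules file name is just an example; check `b2 update-bucket --help` for your version):

```shell
b2 update-bucket --corsRules "$(cat backblaze-bucket-cors-rules.json)" your-bucket-name allPrivate
```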
You can see more on Backblaze CORS rules in their documentation.
Secure dev Server #
Vite recommends creating your own SSL certificates to run a local dev server in HTTPS mode. If you do not have these already, to test your code you might opt for installing the `@vitejs/plugin-basic-ssl` package, then updating `vite.config.js` to use it:
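A sketch of that config (the plugin order and the explicit `server.https` flag are assumptions; check the plugin's README for your Vite version):

```javascript
// vite.config.js
import { sveltekit } from '@sveltejs/kit/vite';
import basicSsl from '@vitejs/plugin-basic-ssl';

/** @type {import('vite').UserConfig} */
const config = {
	plugins: [basicSsl(), sveltekit()],
	server: {
		https: true,
	},
};

export default config;
```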
Learn more about this in the video on running a secure SvelteKit dev server.
To set a local hostname on macOS, add a line to `/private/etc/hosts`:
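For example, using the `test.localhost.com` hostname mentioned below, the line points the name at your local machine:

```
127.0.0.1 test.localhost.com
```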
Then, instead of accessing the site via `http://localhost:5173`, in your browser use `https://test.localhost.com:5173`. This worked for me on macOS. The same will work on typical Linux and Unix systems, though the file you change will be `/etc/hosts`. If you are using DNSCrypt Proxy or Unbound, you can make a similar change in the relevant config files. If you use Windows and know how to do this, please drop a comment below to help out other Windows users.
💯 SvelteKit S3 Compatible Storage: Test #
Try uploading a file using the new app. Also make sure the download link works.
🙌🏽 SvelteKit S3 Compatible Storage: What we Learned #
In this post we learned:
- why you would use the S3 compatible API for cloud storage instead of your storage provider's native API,
- how to use the AWS SDK to generate a pre-signed upload URL,
- a way to structure a file upload feature in a SvelteKit app.
I do hope there is at least one thing in this article which you can use in your work or a side project. As an extension, you might want to list the bucket contents and display all files in the folder. You could even add options to delete files. On top of that, you could calculate a hash of the file before upload and compare it to the hash generated by your storage provider, which gives you a way to verify file integrity. There's a world of different apps you can add an upload feature to; knock yourself out!
You can see the full code for this SvelteKit S3 compatible storage project on the Rodney Lab GitHub repo.
🏁 SvelteKit S3 Compatible Storage: Summary #
What is S3 compatible storage? #
- AWS offer a cloud storage service called S3. The S3 API can be used to access storage on most other providers. Using this S3 API lets you store and retrieve your data from any of these providers, in a uniform way. Doing so gives you some flexibility, making it easier to change storage provider at a later date.
Why use a pre-signed URL? #
- A pre-signed URL offers a mechanism for granting temporary access to private files in your storage bucket on a per-file basis. Using pre-signed URLs, you can let your clients or site visitors download files from your private bucket.
How can you upload files to cloud storage in SvelteKit? #
- The easiest way is to use the HTML5 FileReader API. We saw how to do that from the browser on the client side of your SvelteKit app in this post. We also saw how to configure CORS for your bucket.
🙏🏽 SvelteKit S3 Compatible Storage: Feedback #
Have you found the post useful? Would you prefer to see posts on another topic instead? Get in touch with ideas for new posts. Also, if you like my writing style, get in touch if I can write some posts for your company site on a consultancy basis. Read on to find ways to get in touch, further below. If you want to support posts similar to this one and can spare a few dollars, euros or pounds, please consider supporting me through Buy me a Coffee.
Finally, feel free to share the post on your social media accounts for all your followers who will find it useful. As well as leaving a comment below, you can get in touch via @askRodney on Twitter and askRodney on Telegram. Also, see further ways to get in touch with Rodney Lab. I post regularly on SvelteKit as well as other topics. Also, subscribe to the newsletter to keep up-to-date with our latest projects.