Multipart Uploads

For objects larger than 5 GB, you must use multipart upload. Multipart upload is also recommended for any file over 100 MB for improved reliability: if a part fails, you retry only that part instead of re-uploading the entire file.

How it works

  1. Initiate the upload and receive an upload ID.
  2. Upload parts individually (minimum 5 MB each, except the last part).
  3. Complete the upload by listing all parts. The service assembles the final object.

If something goes wrong, abort the upload to clean up incomplete parts.

Example: AWS CLI

The AWS CLI handles multipart uploads automatically for large files:

# Files over 8 MB are automatically multipart-uploaded
aws s3 cp large-file.zip s3://my-bucket/large-file.zip \
  --endpoint-url https://s3.fil.one

To control thresholds:

aws configure set default.s3.multipart_threshold 100MB
aws configure set default.s3.multipart_chunksize 50MB

Example: Python (boto3)

boto3 also handles multipart uploads automatically via upload_file:

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3", endpoint_url="https://s3.fil.one")

config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # use multipart above 100 MB
    multipart_chunksize=50 * 1024 * 1024,   # 50 MB parts
    max_concurrency=10,                     # parallel part uploads
)

s3.upload_file(
    "large-file.zip",
    "my-bucket",
    "large-file.zip",
    Config=config,
)

Limits

Parameter            Value
-------------------  -----------------------
Maximum object size  5 TB
Maximum parts        10,000
Minimum part size    5 MB (except last part)
Maximum part size    5 GB
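These limits interact: with at most 10,000 parts, your chosen part size caps the largest object you can upload. A quick arithmetic check (no assumptions beyond the table above):

```python
MB = 1024 * 1024
MAX_PARTS = 10_000
MIN_PART = 5 * MB

# With a fixed 50 MB part size, the largest object that fits in 10,000 parts:
largest = MAX_PARTS * 50 * MB
print(largest // (1024 * MB))  # 488 -> roughly 488 GB, well under the 5 TB cap

# Conversely, a 5 TB object needs parts of at least:
five_tb = 5 * 1024 * 1024 * MB
min_chunk = -(-five_tb // MAX_PARTS)  # ceiling division
print(min_chunk // MB)  # 524 -> parts of roughly 525 MB or more
```

So if you upload very large objects with a fixed chunk size, pick one large enough that the part count stays under 10,000.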

Cleaning up incomplete uploads

Incomplete multipart uploads consume storage. List and abort any stale uploads:

aws s3api list-multipart-uploads \
  --bucket my-bucket \
  --endpoint-url https://s3.fil.one

aws s3api abort-multipart-upload \
  --bucket my-bucket \
  --key large-file.zip \
  --upload-id YOUR_UPLOAD_ID \
  --endpoint-url https://s3.fil.one