# S3-Compatible Access
Oceanum Storage provides an S3-compatible API, allowing you to use standard AWS S3 tools and libraries to interact with your storage.
## Endpoint Configuration
| Setting | Value |
|---|---|
| Endpoint URL | https://storage.oceanum.io |
| Region | auto |
## Authentication
You'll need your Oceanum.io access credentials to authenticate:
- Log in to Oceanum.io Platform
- Navigate to your account settings
- Generate or retrieve your access key and secret key
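As an alternative to per-tool configuration, most S3 clients (the AWS CLI, boto3, s3cmd) also read credentials from the standard AWS environment variables. A minimal sketch, with placeholder values:

```shell
# Placeholder credentials - substitute the keys from your account settings
export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_KEY

# Any tool that reads the standard AWS variables can now authenticate, e.g.:
# aws s3 --endpoint-url=https://storage.oceanum.io ls
```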
## Using the AWS CLI

### Installation

Install the AWS CLI if you haven't already:

```shell
pip install awscli
```

### Configuration

Configure a profile for Oceanum Storage:

```shell
aws configure --profile oceanum
```
Enter your credentials when prompted:

- AWS Access Key ID: your Oceanum access key
- AWS Secret Access Key: your Oceanum secret key
- Default region name: `auto`
- Default output format: `json`
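Equivalently, you can write the profile files by hand; `aws configure` produces entries like these (the profile name matches the command above):

`~/.aws/credentials`:

```ini
[oceanum]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY
```

`~/.aws/config`:

```ini
[profile oceanum]
region = auto
output = json
```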
### Basic Commands

List buckets:

```shell
aws s3 --endpoint-url=https://storage.oceanum.io ls
```

List the contents of a bucket:

```shell
aws s3 --endpoint-url=https://storage.oceanum.io ls s3://my-org-bucket/
```

Upload a file:

```shell
aws s3 --endpoint-url=https://storage.oceanum.io cp myfile.nc s3://my-org-bucket/data/
```

Download a file:

```shell
aws s3 --endpoint-url=https://storage.oceanum.io cp s3://my-org-bucket/data/myfile.nc ./
```

Sync a directory:

```shell
aws s3 --endpoint-url=https://storage.oceanum.io sync ./local-datadir s3://my-org-bucket/sensors
```

Here's a real-world example syncing sensor data to an organization bucket:

```shell
aws s3 --endpoint-url=https://storage.oceanum.io sync ./root_datadir s3://oceanum-eda-org-port-taranaki/sensors
```
## Using boto3 (Python)

```python
import boto3

# Create a client pointed at the Oceanum Storage endpoint
s3 = boto3.client(
    's3',
    endpoint_url='https://storage.oceanum.io',
    region_name='auto',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
)

# List buckets
response = s3.list_buckets()
for bucket in response['Buckets']:
    print(bucket['Name'])

# Upload a file
s3.upload_file('local_file.nc', 'my-bucket', 'data/remote_file.nc')

# Download a file
s3.download_file('my-bucket', 'data/remote_file.nc', 'downloaded_file.nc')

# List objects in a bucket
response = s3.list_objects_v2(Bucket='my-bucket', Prefix='data/')
for obj in response.get('Contents', []):
    print(obj['Key'])
```
## Using s3cmd

### Installation

```shell
pip install s3cmd
```

### Configuration

Create `~/.s3cfg`:

```ini
[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
host_base = storage.oceanum.io
host_bucket = %(bucket)s.storage.oceanum.io
use_https = True
```

### Commands

```shell
# List buckets
s3cmd ls

# Upload a file
s3cmd put myfile.nc s3://my-bucket/data/

# Download a file
s3cmd get s3://my-bucket/data/myfile.nc
```
## Supported S3 Operations
Oceanum Storage supports the following S3 operations:
| Operation | Supported |
|---|---|
| ListBuckets | Yes |
| CreateBucket | Yes |
| DeleteBucket | Yes |
| ListObjects | Yes |
| GetObject | Yes |
| PutObject | Yes |
| DeleteObject | Yes |
| CopyObject | Yes |
| HeadObject | Yes |
| Multipart Upload | Yes |
| Presigned URLs | Yes |
## Multipart Uploads

For large files (over 100 MB), multipart uploads are recommended for reliability. Most S3 tools handle this automatically, but you can configure the threshold:

```shell
# AWS CLI - set the multipart threshold to 100 MB
aws configure set s3.multipart_threshold 100MB --profile oceanum
```
## Presigned URLs

Generate temporary URLs to share files without exposing credentials, using the `s3` client from the boto3 section:

```python
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'data/file.nc'},
    ExpiresIn=3600,  # URL valid for 1 hour
)
print(url)
```