
With S3 buckets, accessing data is easier than ever. For those who want to use Python (encouraged), this page walks through the basics using the boto3 library.

Make sure you have these ready:

  1. Python 3
  2. Project ID of the bucket you want to mount
  3. Access key and secret access key for the bucket

The code segments below show how to connect to the bucket, list its objects, and upload/download files from it.

Before running any Python code, install the boto3 library:

python3 -m pip install boto3


Start by declaring some initial values so boto3 knows where your bucket is located. Feel free to copy-paste this segment and fill it in with your own values.

import io  #used later for uploading straight from memory
import boto3


#Initializing some values 
project_id = '123' #Fill this in 
bucketname = 'MyFancyBucket123'  #Fill this in  
access_key = '123asdf'  #Fill this in  
secret_access_key = '123asdf111'  #Fill this in  
endpoint = 'https://my-s3-endpoint.com'  #Fill this in   

Let's start by initializing the S3 client with our access keys and endpoint:

#Initialize the S3 client
s3 = boto3.client('s3', endpoint_url=endpoint,
        aws_access_key_id = access_key,
        aws_secret_access_key = secret_access_key)

As a first step, and to confirm we have successfully connected, let's list the objects inside our bucket (up to 1,000 objects, which is the per-request limit).

#List the objects in our bucket
response = s3.list_objects(Bucket=bucketname)
for item in response.get('Contents', []):  #'Contents' is absent if the bucket is empty
    print(item['Key'])

If you want to list more than 1,000 objects in a bucket, you can use a paginator:

#List objects with a paginator (not constrained to 1,000 objects)
paginator = s3.get_paginator('list_objects_v2')
pages = paginator.paginate(Bucket=bucketname)

#Let's store the names of our objects inside a list
objects = []
for page in pages:
    for obj in page.get('Contents', []):  #'Contents' is absent for empty pages
        objects.append(obj['Key'])

print('Number of objects: ', len(objects))

where each obj is a dict that looks like this:

{'Key': 'MyFile.txt', 'LastModified': datetime.datetime(2021, 11, 11, 0, 39, 23, 320000, tzinfo=tzlocal()), 'ETag': '"2e22f62675cea3445f7e24818a4f6ba0d6-1"', 'Size': 1013, 'StorageClass': 'STANDARD'}
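
If you only need the objects under a certain prefix (a "subfolder"), you can pass a Prefix argument to paginate. Here's a minimal sketch, where 'folder1/' is just a hypothetical prefix:

#List only the objects whose keys start with a given prefix
pages = paginator.paginate(Bucket=bucketname, Prefix='folder1/')  #Fill in your own prefix
for page in pages:
    for obj in page.get('Contents', []):
        print(obj['Key'])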

Now let's try to read a file from the bucket into Python's memory, so we can work with it inside Python without ever saving the file to our local computer:

#Read a file into Python's memory and open it as a string
filename = 'folder1/folder2/myfile.txt'  #Fill this in (object keys usually don't start with '/')
obj = s3.get_object(Bucket=bucketname, Key=filename)
myObject = obj['Body'].read().decode('utf-8') 
print(myObject)
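
Reading the whole body at once is fine for small files; for bigger ones you can stream the body in chunks instead, so the whole file never sits in memory at once. A sketch (the 1 MiB chunk size is an arbitrary choice):

#Stream an object in chunks instead of reading it all at once
obj = s3.get_object(Bucket=bucketname, Key=filename)
total = 0
for chunk in obj['Body'].iter_chunks(chunk_size=1024 * 1024):  #1 MiB per chunk
    total += len(chunk)  #replace this with your own per-chunk processing
print('Bytes read:', total)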

If you want to download the file instead of reading it into memory, here's how you'd do that:

#Downloading a file from the bucket
with open('myfile', 'wb') as f:  #local filename to write to; fill this in
    s3.download_fileobj(bucketname, 'myfile', f)  #second argument is the object's key
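
If you don't need the file object yourself, the download_file convenience method handles the opening and writing for you:

#Same download in a single call: (bucket, key, local path)
s3.download_file(bucketname, 'myfile', 'myfile')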

And similarly you can upload files to the bucket (given that you have write access):

#Uploading a file to the bucket (make sure you have write access)
s3.upload_file('myfile', bucketname, 'myfile')  #(local path, bucket, key); fill these in
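
You can also upload straight from memory without touching the local disk, which is what the io import from earlier is for. A minimal sketch (the key name is just an example):

#Upload an in-memory bytes buffer as an object
data = io.BytesIO(b'Hello from memory!')
s3.upload_fileobj(data, bucketname, 'myfile-from-memory.txt')  #(file object, bucket, key)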

And lastly, creating a bucket (this could take some time):

s3.create_bucket(Bucket='my-new-bucket')  #note: bucket names must be lowercase
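
If you want to check first whether a bucket already exists (and that your keys can reach it), head_bucket is a cheap probe; it raises a ClientError when the bucket is missing or inaccessible:

#Check that a bucket exists and is accessible
from botocore.exceptions import ClientError
try:
    s3.head_bucket(Bucket=bucketname)
    print('Bucket exists and we can access it')
except ClientError as e:
    print('Cannot access bucket:', e)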

If you're interested in streaming netCDF files directly from S3 buckets, give this example a look (it additionally uses the xarray and netCDF4 packages, which you can install with python3 -m pip install xarray netCDF4):

import tempfile

import boto3
from boto3.s3.transfer import TransferConfig
import xarray as xr  #the 'netcdf4' engine below also requires the netCDF4 package

def load_s3_file(bucketname, filename):
    access_key = 'FILL ME IN'
    secret_access_key = 'FILL ME IN'
    endpoint = 'https://s3.waw3-1.cloudferro.com'
    s3 = boto3.client('s3', endpoint_url=endpoint,
            aws_access_key_id = access_key,
            aws_secret_access_key = secret_access_key)
    #Download the object into a named temporary file, then open it with xarray
    tmp = tempfile.NamedTemporaryFile(suffix='.nc')
    tc = TransferConfig(io_chunksize=2621440)
    with open(tmp.name, 'wb') as f:
        s3.download_fileobj(bucketname, filename, f, Config=tc)
    #Open after the download has been flushed to disk, and load the data
    #into memory so it survives the temporary file's deletion
    dataSet = xr.open_dataset(tmp.name, engine='netcdf4').load()
    return dataSet

ds = load_s3_file('mybucket', 'myfile.nc')

And an alternative, shorter version using the smart_open and h5py packages (netCDF4 files are HDF5 files under the hood, so h5py can open them):

import smart_open

#Note: here endpoint must be the bare host (e.g. 's3.waw3-1.cloudferro.com',
#without the 'https://' scheme), and obj_name is the object's key
bucketpath = f"s3://{access_key}:{secret_access_key}@{endpoint}@{bucketname}/{obj_name}"
smart_f = smart_open.open(bucketpath, 'rb')

import h5py
h = h5py.File(smart_f, 'r')
print(h.keys())
h.close()
smart_f.close()


If you're interested in more, I recommend taking a look at this article, which gives you a more detailed view of boto3's functionality (although it focuses on Amazon Web Services specifically, the Python code carries over):

https://dashbird.io/blog/boto3-aws-python/

Check out a full code example at the official boto3 website: 

https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-examples.html

You can also see a differently styled tutorial at:

https://towardsdatascience.com/introduction-to-pythons-boto3-c5ac2a86bb63


