Connect to Backblaze B2
This first video provides an alternative way of exploring the sample application and reviewing the topics that follow immediately below:
GitHub
As highlighted above, the source code used in these exercises is open source and shared on GitHub. Download it here: https://github.com/backblaze-b2-samples/b2-python-s3-sample
Sample Code Structure (Including .Env File)
This sample application code can be downloaded and executed as is.
Among the GitHub files are two critical files:
The script file (sample.py)
A .env file, containing credentials to access a public Backblaze B2 Cloud Storage bucket.
Typically, GitHub repositories do not contain .env files, but these credentials are limited to read-only access of public data, so they are safe to share.
If you are new to developing with Python: the video walk-through of the code uses the PyCharm Community Edition IDE. PyCharm is not open source, but the Community Edition is free to use and can be downloaded here:
https://www.jetbrains.com/pycharm/download/
Hosted Media Application
The Backblaze B2 bucket that we will be using is configured with PUBLIC access. This means that every object in the Backblaze B2 sample data bucket can be downloaded via regular, unsigned HTTP requests, the same kind of requests web browsers use to fetch the files they display.
The bucket includes a browser photo viewer application that you can view here:
https://s3.us-west-002.backblazeb2.com/developer-b2-quick-start/album/photos.html
Note that this Backblaze B2 bucket contains five .jpg photo files, visible at the link above, plus all of the associated HTML, CSS, and JavaScript files for the media viewer application. The JavaScript implementation uses the open-source Swiper project. You can find the Swiper source at: https://github.com/nolimits4web/swiper.
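Because the bucket is PUBLIC, each object is reachable at a plain path-style S3 URL. The sketch below shows how such an unsigned download URL is composed; the endpoint and bucket name are taken from the photo-viewer URL above, and the second object key is only an illustrative guess at where a photo might live.

```python
# Sketch: building unsigned download URLs for objects in a PUBLIC
# Backblaze B2 bucket. Endpoint and bucket name come from the
# photo-viewer URL shown above.
ENDPOINT = 'https://s3.us-west-002.backblazeb2.com'
BUCKET = 'developer-b2-quick-start'

def public_object_url(endpoint, bucket, key):
    # Path-style S3 URL: <endpoint>/<bucket>/<key>
    return f'{endpoint}/{bucket}/{key}'

# The viewer page itself, plus a hypothetical photo key for illustration:
for key in ['album/photos.html', 'album/beach.jpg']:
    print(public_object_url(ENDPOINT, BUCKET, key))
```

Any HTTP client (a browser, curl, or Python's urllib) can fetch these URLs without credentials, precisely because the bucket allows public reads.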
You’ll find it useful to open sample.py in a code editor so you can refer to it as you read the following explanation.
Function get_b2_resource()
In the sample application, the critical logic showing how to get a connection to Backblaze B2 is in the function named get_b2_resource(). The following lines of code in the sample application define get_b2_resource():
# Return a boto3 resource object for B2 service
def get_b2_resource(endpoint, key_id, application_key):
    b2 = boto3.resource(service_name='s3',
                        endpoint_url=endpoint,                 # Backblaze endpoint
                        aws_access_key_id=key_id,              # Backblaze keyID
                        aws_secret_access_key=application_key, # Backblaze applicationKey
                        config=Config(signature_version='s3v4'))
    return b2
As you can see in the code, the sample application uses the open-source boto3 library. On execution, the function returns a boto3 service resource, which the code stores in a variable named b2. Through this b2 variable you can then call the Backblaze B2 service's actions, sub-resources, and collections. For the full set of calls supported by the Backblaze B2 service, see the documentation here.
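As a minimal sketch (not part of sample.py), here is one way the returned b2 resource could be used. The helper below assumes only boto3's standard ServiceResource interface, in which b2.buckets.all() is a lazy collection of the account's buckets:

```python
# Sketch (not part of sample.py): once get_b2_resource() has returned
# the boto3 ServiceResource, its collections can be used directly,
# e.g. to enumerate bucket names in the account.
def bucket_names(b2):
    # b2 is the ServiceResource returned by get_b2_resource();
    # b2.buckets.all() lazily pages through all buckets.
    return [bucket.name for bucket in b2.buckets.all()]
```

Note that listing buckets requires a key with sufficient capabilities; the shared read-only keys in this tutorial are scoped to the sample buckets.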
The sample application includes the following among the required import statements at the top of the file:
import boto3
For details on downloading and installing the current version of boto3, see: https://pypi.org/project/boto3/.
Variations of this logic can be deployed wherever you run your code: on local machines, in browsers, or on servers. Regardless, you will be storing and interacting with files held in the cloud on the Backblaze B2 service.
The get_b2_resource() function passes five parameters to boto3.resource():
service_name
endpoint_url
aws_access_key_id
aws_secret_access_key
config
The first named parameter, service_name, selects boto3’s ‘s3’ service.
Note: Although the value being passed is the literal string 's3', all API calls will go to the Backblaze B2 service via the S3-Compatible API.
API request routing and authentication are governed by the remaining four parameters and by settings on the Backblaze B2 bucket:
endpoint_url=endpoint
aws_access_key_id=keyID
aws_secret_access_key=applicationKey
config
The names of all parameters are hardcoded in the boto3 library; the middle two start with the prefix aws_, but the values being passed apply to the Backblaze B2 service. Overriding the default Amazon S3 endpoint URL allows us to use boto3 to connect to Backblaze B2. In the sample code, the values passed via three of these parameters are retrieved from the .env file.
Constants and .env File
You will find the following lines of code in the sample application's .env file:
# B2 API endpoint for buckets with sample data
ENDPOINT='https://s3.us-west-002.backblazeb2.com'
# Following 2 application key pairs provide read-only access
# to buckets with sample data
# Bucket with sample data and PUBLIC access
KEY_ID_RO='0027464dd94917b0000000001'
APPLICATION_KEY_RO='K002WU+TkHXkksxIqI6IDa/X7dsN9Cw'
# Bucket with sample data and PRIVATE access
KEY_ID_PRIVATE_RO='0027464dd94917b0000000002'
APPLICATION_KEY_PRIVATE_RO='K002ckrkS/KpaRA9IFzC3xyIn79ALw4'
# Variables below for functions that require write access!
# You must set these values using your own Backblaze account
# 1. Retrieve B2 API Endpoint for region containing your Bucket
# 2. Create Key Pair in Backblaze UI
# Direct Link here to "App Keys" page https://secure.backblaze.com/app_keys.htm
# In Backblaze UI, select "App Keys" on left-side nav (3rd up from bottom)
# Then select "Add a New Application Key" then "Read and Write" re "Type of Access"
# In Backblaze UI, values are labeled as keyID and applicationKey respectively
ENDPOINT_URL_YOUR_BUCKET='<ENTER YOUR B2 API ENDPOINT HERE!>'
KEY_ID_YOUR_ACCOUNT='<ENTER YOUR keyID HERE!>'
APPLICATION_KEY_YOUR_ACCOUNT='<ENTER YOUR applicationKey HERE!>'
These constants are stored separately in the .env file because it is best practice never to check key values and other secrets into a repository. We are making an exception in this case so that you can have a very fast development startup experience. In your own applications, add .env to .gitignore to avoid checking in the .env file. Note that in .env the names of both constants for the key values end with _RO for “read only,” indicating that these keys grant read-only access. Please also note the comments in the .env file that identify how the key values are labeled in the Backblaze web console: keyID and applicationKey. When you use this code against buckets in your own account, you will generate your own keys via the “App Keys” section under “Account” in the Backblaze web console.
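The sample most likely loads these values with a helper library such as python-dotenv; the exact loader is not shown here. As an illustration only, a minimal stdlib parser for simple KEY='value' lines might look like this:

```python
# Minimal .env parser sketch (stdlib only, for illustration).
# The real sample likely uses a library such as python-dotenv;
# this just shows how KEY='value' lines map to a dict of settings.
def parse_env(text):
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blank lines and comments
        key, _, value = line.partition('=')
        values[key.strip()] = value.strip().strip("'").strip('"')
    return values

settings = parse_env("""
# B2 API endpoint for buckets with sample data
ENDPOINT='https://s3.us-west-002.backblazeb2.com'
KEY_ID_RO='0027464dd94917b0000000001'
""")
print(settings['ENDPOINT'])
```

A real loader also handles quoting edge cases, export prefixes, and variable expansion, which is why a maintained library is the better choice in practice.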
When get_b2_resource() is called, it invokes the boto3.resource() method, which returns a reference to the Backblaze B2 ServiceResource object. With the Backblaze B2 ServiceResource object, your code can now call the additional actions and sub-resources defined on it in boto3. For details on actions and sub-resources, see the Boto3 Docs. For the list of S3 calls supported by Backblaze B2, see the docs for the S3-Compatible API.
This script is designed to execute successfully without any parameters. In the sample application code, the main() function calls get_b2_resource(), then, by default, calls list_object_keys().
When executed, the script prints the names, or “keys,” of the objects in the specified bucket. By default, the bucket referenced is set by the constant named PUBLIC_BUCKET_NAME. See “Hosted Media Application” above for details of the sample bucket.
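The exact implementation of list_object_keys() in sample.py may differ, but given the ServiceResource interface described above, a function like it could be sketched as:

```python
# Sketch of what a function like list_object_keys() might look like
# (the exact implementation in sample.py may differ).
def list_object_keys(b2, bucket_name):
    # b2 is the ServiceResource returned by get_b2_resource();
    # bucket.objects.all() lazily pages through every object.
    bucket = b2.Bucket(bucket_name)
    return [obj.key for obj in bucket.objects.all()]
```

Iterating the objects collection is what produces the list of keys that the script prints.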
Among the keys displayed, you will see the following five photos:
beach.jpg
bobcat.jpg
coconuts.jpg
lake.jpg
sunset.jpg