Programmer Thoughts

By John Dickinson

Quickly uploading data to Cloud Files

December 19, 2009

Cloud Files is a great way to store information, whether to take advantage of the CDN or to offload the infrastructure requirements of storing large amounts of data. However Cloud Files is used, one must still upload the data to the service before being able to use it.

Uploading the data is not problematic if it can be done in small chunks or spread out over time (images on a blog, for example). The Cloud Files language APIs offer a good way to upload data in these cases. Unfortunately, the language bindings can be terribly slow for uploading large numbers of files. While they do make some optimizations (like reusing connections when available), the code is written to be very generic. For example, the bindings make HEAD requests to ensure all proper data is set before allowing you to upload an object. Additionally, at least in the Python language bindings, HEAD requests are issued when an instance of an object is created. While this is good in a general sense, these HEAD requests become superfluous when doing a large batch upload. One can achieve much better results by using the Cloud Files ReST API directly.

As an example, let’s look at the following code which uses the Python API:

#!/usr/bin/env python

import os
import cloudfiles

username = 'xxxx'
apikey = 'xxxx'

conn = cloudfiles.get_connection(username, apikey)

# upload every .dat file in test_data/ through the language bindings
container = conn.create_container('api_speed_test3')
data_list = ('test_data/%s' % x for x in os.listdir('test_data')
             if x.endswith('.dat'))
for filename in data_list:
    try:
        obj = container.create_object(filename)
        obj.load_from_filename(filename)
    except cloudfiles.errors.ResponseError, err:
        print err
print len(container.list_objects())

In my tests, using the above code takes about 5.5 minutes to upload 1000 16KB files to Cloud Files.

I implemented the same functionality using the ReST API directly:

#!/usr/bin/python

import os
import httplib

username = 'xxxx'
apikey = 'xxxx'

# auth: a single GET to the auth service returns the token and storage URL
conn = httplib.HTTPSConnection('auth.api.rackspacecloud.com')
headers = {'x-auth-user': username, 'x-auth-key': apikey}
conn.request('GET', '/auth', headers=headers)
resp = conn.getresponse()
auth_token = resp.getheader('x-auth-token')
url = resp.getheader('x-storage-url')
conn.close()

# send data: create the container, then PUT each file over one reused connection
send_headers = {'X-Auth-Token': auth_token, 'Content-Type': 'text/plain'}
container_path = '/' + '/'.join(url.split('/')[3:]) + '/api_speed_test2'
conn = httplib.HTTPSConnection(url.split('/')[2])
conn.request('PUT', container_path, headers=send_headers)
conn.getresponse().read()
data_list = ('test_data/%s' % x for x in os.listdir('test_data')
             if x.endswith('.dat'))
for filename in data_list:
    f = open(filename)
    conn.request('PUT', container_path + '/' + filename, body=f,
                 headers=send_headers)
    f.close()
    resp = conn.getresponse()
    resp.read()
    if resp.status >= 300:
        print resp.status, resp.reason, container_path + '/' + filename
conn.close()

Although this version is slightly longer, the majority of the extra code is for the auth step. In my tests, uploading the same 1000 16KB files took about 4.5 minutes instead of 5.5 (roughly 3.7 uploads per second versus 3.0). A full minute's improvement for only 1000 objects is a very good result, and because the savings come from eliminating per-object overhead like the extra HEAD requests, I would expect the difference to grow as the number of files increases.

All of the code above (plus code to generate the test data) can be found in my github account.
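
If you want to reproduce the timing, the test data is just a directory of 1000 16KB files. A minimal sketch of a generator (not the exact script from the repository) might look like this:

#!/usr/bin/python

# rough sketch only: write 1000 files of 16KB of random data into test_data/
import os

if not os.path.exists('test_data'):
    os.mkdir('test_data')
for i in range(1000):
    f = open('test_data/%04d.dat' % i, 'wb')
    f.write(os.urandom(16 * 1024))
    f.close()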

By using the ReST API directly, I can make certain assumptions about my data that are not possible in the generic language bindings. I do not need the HEAD requests because I know I have just created the container and have not yet uploaded the files, and I am explicitly setting all the data for each object upload. Further improvements would include some error handling and parallelization (sketched below).
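
As a rough sketch of the parallelization, each worker thread can hold its own connection and pull filenames from a shared queue. This is only an illustration, not the code from the repository: the worker count of 10 and the container name api_speed_test4 are arbitrary choices, and it assumes the same auth flow and test_data layout as above.

#!/usr/bin/python

import os
import httplib
import threading
import Queue

username = 'xxxx'
apikey = 'xxxx'
num_workers = 10  # arbitrary; tune to taste

# auth (same as above)
conn = httplib.HTTPSConnection('auth.api.rackspacecloud.com')
conn.request('GET', '/auth', headers={'x-auth-user': username,
                                      'x-auth-key': apikey})
resp = conn.getresponse()
auth_token = resp.getheader('x-auth-token')
url = resp.getheader('x-storage-url')
conn.close()

send_headers = {'X-Auth-Token': auth_token, 'Content-Type': 'text/plain'}
storage_host = url.split('/')[2]
container_path = '/' + '/'.join(url.split('/')[3:]) + '/api_speed_test4'

# create the container once, before the workers start
conn = httplib.HTTPSConnection(storage_host)
conn.request('PUT', container_path, headers=send_headers)
conn.getresponse().read()
conn.close()

# queue of filenames to upload
work = Queue.Queue()
for x in os.listdir('test_data'):
    if x.endswith('.dat'):
        work.put('test_data/%s' % x)

def uploader():
    # each worker keeps its own connection so PUTs run concurrently
    conn = httplib.HTTPSConnection(storage_host)
    while True:
        try:
            filename = work.get_nowait()
        except Queue.Empty:
            break
        f = open(filename)
        conn.request('PUT', container_path + '/' + filename, body=f,
                     headers=send_headers)
        f.close()
        resp = conn.getresponse()
        resp.read()
        if resp.status >= 300:
            print resp.status, resp.reason, filename
    conn.close()

threads = [threading.Thread(target=uploader) for i in range(num_workers)]
for t in threads:
    t.start()
for t in threads:
    t.join()

With many small files, most of the win comes from keeping several requests in flight at once instead of waiting out each round trip in turn.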

This work is licensed under a Creative Commons Attribution 3.0 Unported License.

The thoughts expressed here are my own and do not necessarily represent those of my employer.