Programmer Thoughts

By John Dickinson

Cloud Files Object Copy

November 19, 2009

Update: This post is outdated and the referenced github branches no longer exist. The functionality described herein is now supported server-side in the latest version of Cloud Files (see my newer post for more details).

Cloud Files does not currently support object copying. However, a simple workaround is to download the object and re-upload it under the new name. Doing this by hand is inconvenient, and it is easy to miss details like keeping the object's metadata intact. I have added a copy feature to my fork of the python-cloudfiles API that takes care of these details. This is a convenience function only and is not officially supported by Rackspace. Keep in mind that the copy uses billable bandwidth (unless the servicenet flag is set in the API). One option for copying large files is to spin up a small Cloud Server, run the copy over servicenet with the API, and spin the server back down. At $0.015 per hour, one could run a 256MB instance for 100 hours before equalling the transfer cost of copying one 5GB file (the Cloud Files maximum object size) over the billed network.
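The break-even figure above is easy to check. Note that the per-GB bandwidth rate is not quoted in this post; the $0.30/GB used below is an assumption implied by the 100-hour figure, not a published price:

```python
# Back-of-the-envelope check of the break-even claim above.
server_rate = 0.015    # $/hour for a 256MB Cloud Server (from the post)
bandwidth_rate = 0.30  # $/GB billed transfer (assumption, implied by the 100-hour figure)
file_size_gb = 5.0     # Cloud Files maximum object size (from the post)

# Cost to copy the object once over the billed network,
# and how many server hours that same money buys.
transfer_cost = file_size_gb * bandwidth_rate
break_even_hours = transfer_cost / server_rate
print(break_even_hours)
```

So a $1.50 one-time transfer cost buys roughly 100 hours of a small server, which is why the servicenet route can be the cheaper option for large objects.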

My python-cloudfiles fork on github: python-cloudfiles.

Example script that copies the last file in a container to another container:

import cloudfiles

# Connect and look up the source and destination containers
conn = cloudfiles.get_connection(username='myname', api_key='mykey')
container_name = 'example_container'
another_container = 'example_container2'
c = conn.get_container(container_name)

# Copy the last object in the source container
l = c.list_objects()
o = c.get_object(l[-1])
new_path = '%s/%s' % (another_container, o.name)
o.copy_to(new_path)
print 'copied', l[-1], 'to', new_path

# Verify that the copy landed in the destination container
new_list = conn.get_container(another_container).list_objects()
print new_list
assert o.name in new_list

This work is licensed under a Creative Commons Attribution 3.0 Unported License.

The thoughts expressed here are my own and do not necessarily represent those of my employer.