[Openstack] S3 API with Swift

John van Ommen john.vanommen at gmail.com
Fri Aug 19 16:48:38 UTC 2016


Has anyone been able to successfully integrate the S3 API with Swift?

I'm working on this in my lab, and finding a number of issues:

1) I've found that when I enable the S3 API (see the pipeline sketch
right after this list for what I mean by "enable"), my swift proxy
doesn't bind to its port. I'm guessing it's failing to start for some
reason, but the logs don't say why.
2) I've found that when I disable the S3 API, my swift proxy works
fine. So I don't think the problem is Swift itself; I think the problem
is the S3 middleware.
3) The documentation is very limited. There is a sample configuration
file, but little explanation of what the settings do or why.
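
For context, this is roughly the pipeline ordering I understand the swift3
docs to call for, with swift3 and s3token sitting ahead of authtoken and
keystoneauth. It's only a sketch of what I mean by "enabling S3", trimmed to
the middleware that matters here, not my exact line:

[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache swift3 s3token authtoken keystoneauth proxy-logging proxy-server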

It looks like the folks at SwiftStack got this working, as it's
featured in their product. IBM also has a page on this, but IBM's
configuration doesn't match what's documented at OpenStack.org.

I've tried about two dozen combinations, and every time the S3 middleware is
enabled, the proxy won't listen on its port. Any ideas?

Here's my config. Note that the stanzas related to S3 are commented out below,
because I couldn't get Swift to run with them enabled.

[DEFAULT]
bind_port = 8080
bind_ip = 172.16.15.10
swift_dir = /etc/swift
user = swift

[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk crossdomain tempurl formpost ratelimit authtoken keystoneauth staticweb container-quotas account-quotas slo dlo versioned_writes proxy-logging name_check proxy-server

[app:proxy-server]
use = egg:swift#proxy
account_autocreate = true
sorting_method = timing

[filter:catch_errors]
use = egg:swift#catch_errors

[filter:gatekeeper]
use = egg:swift#gatekeeper

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:proxy-logging]
use = egg:swift#proxy_logging

[filter:cache]
use = egg:swift#memcache

[filter:container_sync]
use = egg:swift#container_sync

[filter:bulk]
use = egg:swift#bulk

[filter:crossdomain]
use = egg:swift#crossdomain

[filter:tempurl]
use = egg:swift#tempurl

[filter:formpost]
use = egg:swift#formpost

[filter:ratelimit]
use = egg:swift#ratelimit

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
identity_uri = https://helion-cp1-vip-KEY-API-mgmt:5000
admin_tenant_name = services
admin_user = swift
admin_password = somepassword
auth_uri = https://helion-cp1-vip-KEY-API-mgmt:5000
cache = swift.cache
include_service_catalog = False
delay_auth_decision = true
#memcache_security_strategy = ENCRYPT
#memcache_secret_key = somekey

# Note to reviewer: I'm including all possible filters but not all are
# included in the pipeline (because they are not required)
[filter:versioned_writes]
use = egg:swift#versioned_writes
allow_versioned_writes = true

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator, _member_, Member

[filter:staticweb]
use = egg:swift#staticweb

[filter:container-quotas]
use = egg:swift#container_quotas

[filter:account-quotas]
use = egg:swift#account_quotas

[filter:slo]
use = egg:swift#slo

# AWS S3 document says that each part must be at least 5 MB in a multipart
# upload, except the last part.
#min_segment_size = 5242880

[filter:dlo]
use = egg:swift#dlo

[filter:domain_remap]
use = egg:swift#domain_remap

[filter:cname_lookup]
use = egg:swift#cname_lookup

[filter:name_check]
use = egg:swift#name_check
forbidden_chars = "<>
maximum_length = 255

[filter:list-endpoints]
use = egg:swift#list_endpoints

[filter:xprofile]
use = egg:swift#xprofile

#[filter:swift3]
#use = egg:swift3#swift3

# Swift has no concept of S3's resource owner; resources (i.e. containers and
# objects) created via the Swift API have no owner information.  This option
# specifies how the swift3 middleware handles them with the S3 API.  If this
# option is 'false', such resources are invisible and no user can access them
# via the S3 API.  If set to 'true', resources without an owner belong to
# everyone, and anyone can access them via the S3 API.  If you care about S3
# compatibility, set 'false' here.  This option makes sense only when the
# s3_acl option is set to 'true' and your Swift cluster has resources that
# were created via the Swift API.
# allow_no_owner = false
#
# Set a region name for your Swift cluster.  Note that Swift3 does not
# actually place newly created buckets in this region; the value is only used
# for the GET Bucket location API and for v4 signature calculation.
# location = US
#
# Set whether to enforce DNS-compliant bucket names. Note that S3 enforces
# these conventions in all regions except the US Standard region.
# dns_compliant_bucket_names = True
#
# Set the default maximum number of objects returned in the GET Bucket
# response.
# max_bucket_listing = 1000
#
# Set the maximum number of parts returned in the List Parts operation.
# (default: 1000; the S3 specification says 1000)
# If you set this larger than 10000, also increase container_listing_limit
# in swift.conf.
# max_parts_listing = 1000
#
# Set the maximum number of objects we can delete with the Multi-Object Delete
# operation.
# max_multi_delete_objects = 1000
#
# If set to 'true', Swift3 uses its own metadata for ACL
# (e.g. X-Container-Sysmeta-Swift3-Acl) to achieve the best S3 compatibility.
# If set to 'false', Swift3 tries to use Swift ACL (e.g. X-Container-Read)
# instead of S3 ACLs as far as possible.  If you want to keep backward
# compatibility with Swift3 1.7 or earlier, set 'false' here.
# If you switch this from 'true' back to 'false' after containers or objects
# have been created, all users will be able to access those containers and
# objects.
# Note that s3_acl does not keep ACLs consistent between the S3 API and the
# Swift API (e.g. with s3_acl set to 'true', an ACL set via the S3 API is not
# visible through the Swift API at all and is not applied to Swift API
# requests, even for buckets that are otherwise supported).
# Note that s3_acl currently supports only keystone and tempauth.
# DON'T USE THIS in production without thorough testing for your use cases;
# this feature is still under development and may behave in unexpected ways.
# s3_acl = false
#
# Specify a host name of your Swift cluster.  This enables virtual-hosted style
# requests.
# storage_domain =
#
# Enable the pipeline order check for SLO, s3token, authtoken and keystoneauth,
# according to the standard swift3/Swift construction using either tempauth or
# keystoneauth.  If the order is incorrect, an exception is raised and the
# proxy stops.
# Turn auth_pipeline_check off only when you want to bypass these
# authentication middlewares in order to use some other third-party (or your
# own proprietary) authentication middleware.
# auth_pipeline_check = True
#
# Enable multi-part uploads. (default: true)
# This is required to store files larger than Swift's max_file_size
# (5 GiB by default).
# Note that this has performance implications when deleting objects, as we now
# have to check whether there are also segments to delete.
# allow_multipart_uploads = True
#
# Set the maximum number of parts for the Upload Part operation.
# (default: 1000; the S3 specification allows 10000)
# If you raise this above the default to match the S3 specification, also
# raise max_manifest_segments for the slo middleware.
# max_upload_part_num = 1000
#
# Return only the buckets owned by the user making the GET Service request.
# (default: false)
# To enable this behaviour, set both this option and s3_acl to true.  It can
# cause significant performance degradation, so only turn it on if your
# service absolutely needs it.
# If you set this to false, Swift3 returns all buckets.
# check_bucket_owner = false
#
# By default, Swift reports only S3-style access logs
# (e.g. PUT /bucket/object).  If force_swift_request_proxy_log is set to
# 'true', Swift will also output Swift-style logs
# (e.g. PUT /v1/account/container/object) in addition to the S3-style logs.
# Note that requests will then be logged twice (Swift3 does not deduplicate
# them), and the Swift-style logs will also include the various subrequests
# made to achieve S3 compatibility.
# force_swift_request_proxy_log = false

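# (Sketch, tying the multipart knobs together: if allow_multipart_uploads is
# left on, the related limits span two filters.  Assuming the slo middleware
# stays in the pipeline as above, the pieces would look roughly like this:)
#
# [filter:swift3]
# use = egg:swift3#swift3
# allow_multipart_uploads = True
# max_upload_part_num = 1000
#
# [filter:slo]
# use = egg:swift#slo
# max_manifest_segments = 1000
# min_segment_size = 5242880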

#[filter:s3token]
#use = egg:swift3#s3token

# Prefix that will be prepended to the tenant to form the account
#reseller_prefix = AUTH_

# Keystone server details
#auth_uri = https://172.16.15.13:35357/
#auth_uri = https://helion-cp1-vip-KEY-API-mgmt:5000

# SSL-related options
#insecure = False
#certfile =
#keyfile =

## Do NOT put anything after this line ##
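
In case it helps, here's a sketch of what I understand the enabled form of the
two S3 stanzas should look like, with s3token pointed at the same Keystone
endpoint that authtoken uses. This is exactly the part that kills the proxy
whenever I uncomment it, so treat it as a sketch rather than a working example:

[filter:swift3]
use = egg:swift3#swift3

[filter:s3token]
use = egg:swift3#s3token
reseller_prefix = AUTH_
auth_uri = https://helion-cp1-vip-KEY-API-mgmt:5000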



