Who the $?#% is this guy?
Ceph is based on a distributed, autonomic, redundant native object store named RADOS.
Reliable Autonomic Distributed Object Store
RADOS provides a flat object namespace.
Each object has a name, any number of attributes, and a payload of (almost) arbitrary size.
Objects are assigned to Placement Groups (PGs).
Each PG has an ordered list of Object Storage Devices (OSDs) where its contents are stored in a redundant fashion.
Object placement is entirely algorithmic.
There is no central lookup or distributed hashtable.
CRUSH: Controlled Replication Under Scalable Hashing
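Here is a minimal sketch of that model using the python-rados binding of librados; the pool name, object name, and attribute are made up for illustration. Note what is missing: the client never asks where the object lives.

```python
import rados

# Connect to the cluster described in ceph.conf (path and pool name are assumptions).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('data')   # I/O context for the 'data' pool

# Flat namespace: an object is just a name, a payload, and attributes.
ioctx.write_full('greeting', b'hello, RADOS')   # payload
ioctx.set_xattr('greeting', 'lang', b'en')      # an attribute

# No central lookup: the object name is hashed to a PG,
# and CRUSH maps that PG to its OSDs.
print(ioctx.read('greeting'))

ioctx.close()
cluster.shutdown()
```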
Built for big data.
Gigantic data, really.
Lots of client APIs.
Many of these are fully integrated with OpenStack.
RADOS Block Device (RBD) is a thin‑provisioned block device interface that stripes data across multiple RADOS objects.
It supports cheap, read‑only redirect‑on‑write snapshots.
RBD also supports efficient cloning.
This makes it very well suited for maintaining template-based virtual machines.
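A rough sketch of that template workflow with the python-rbd binding; the pool, image names, and size are invented for the example, and depending on the Ceph release the create() keyword arguments may differ slightly. Cloning requires a format-2 parent image with layering enabled.

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')   # assumed pool name

r = rbd.RBD()
# Thin-provisioned 10 GiB template image (format 2, layering enabled).
r.create(ioctx, 'template', 10 * 1024**3,
         old_format=False, features=rbd.RBD_FEATURE_LAYERING)

# Cheap, read-only snapshot of the template; protect it so clones can reference it.
with rbd.Image(ioctx, 'template') as img:
    img.create_snap('gold')
    img.protect_snap('gold')

# Redirect-on-write clone: the new VM disk shares the template's data
# until blocks are overwritten.
r.clone(ioctx, 'template', 'gold', ioctx, 'vm-0001',
        features=rbd.RBD_FEATURE_LAYERING)

ioctx.close()
cluster.shutdown()
```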
RBD comes in two flavors.
rbd is a kernel-level block device driver, merged upstream in Linux 2.6.37.
qemu-rbd is a userspace storage driver for Qemu and KVM, built on the librados C API.
And you can build images for any purpose you like.
Fully integrated with Glance for image storage.
Fully integrated with Cinder for persistent VM block storage.
Fully integrated with Nova for boot-from-volume.
Ceph provides RESTful HTTP(S) access to the object store.
It does so through a FastCGI application, radosgw.
radosgw uses the libradospp C++ API.
radosgw runs in any web server that supports FastCGI.
The canonical deployment approach is with Apache and mod_fastcgi.
radosgw currently understands the Amazon S3 and OpenStack Swift APIs.
radosgw supports native load balancing and scaleout.
Now also supports Keystone.
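To give a feel for the S3 API mentioned above, here is a minimal sketch using the boto library against a radosgw endpoint; the hostname and credentials are placeholders for values you would create with radosgw-admin.

```python
import boto
import boto.s3.connection

# Placeholder endpoint and keys for an existing radosgw user.
conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='radosgw.example.com',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('my-bucket')
key = bucket.new_key('hello.txt')
key.set_contents_from_string('Hello from radosgw!')

for obj in bucket.list():
    print(obj.name, obj.size)
```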
What's new?
Incremental snapshots
Improved RHEL support
Integration with cinder-backup
Thanks to:
Sage Weil (@liewegas) & crew for Ceph
Bartek Szopka (@bartaz) for impress.js
Inktank (@inktank) for the Ceph logo
Want this talk?