Enter the cuttlefish!

Posted on Tue 07 May 2013 in blog • 2 min read

Today, the developers released Ceph 0.61, codenamed cuttlefish. There are some interesting features in this new release, so let's take a look.

One thing that will undoubtedly make Ceph a lot more palatable to RHEL/CentOS users is the availability of Ceph in EPEL. This was originally announced in late March, but 0.61 is the first supported release that comes with Red Hat-compatible RPMs. Note that at the time of writing, EPEL is still on the 0.56 bobtail release, but cuttlefish support is expected to follow shortly. In the interim, cuttlefish packages are available outside EPEL, from the ceph.com yum repo.
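If you want to go that route, the repo configuration looks roughly like this. This is only a sketch: the exact baseurl and GPG key location are my assumptions here, so check ceph.com for the authoritative instructions before you copy anything.

    # /etc/yum.repos.d/ceph.repo -- illustrative sketch, verify paths on ceph.com
    [ceph]
    name=Ceph Cuttlefish packages for EL6
    baseurl=http://ceph.com/rpm-cuttlefish/el6/$basearch
    enabled=1
    gpgcheck=1
    gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

    # then, as root:
    yum install ceph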

This allows you to run a Ceph cluster on RHEL/CentOS. It does, however, come with a few limitations:

  • You can’t use RBD from a kvm/libvirt box that is running RHEL. RHEL does not ship with librados support enabled in its qemu-kvm builds, and removing this limitation would require third parties to provide their own libvirt/kvm builds. As of today, though, no RBD-enabled libvirt/kvm build lives in CentOS Plus.
  • You can’t use the kernel rbd or ceph modules from a client that is running RHEL. RBD and Ceph filesystem support is absent from RHEL kernels.

I’m curious to see if and when that will change, given Red Hat’s focus on GlusterFS as their preferred distributed storage solution.

Another neat little new feature is the ability to set quotas on pools, which is something that we’ve frequently had customers ask for in our consulting practice.
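In case you're wondering what that looks like on the command line, here's a quick sketch. The pool name and values are made up; consult the ceph man page for the exact syntax.

    # cap the number of objects and the total bytes in the "volumes" pool
    ceph osd pool set-quota volumes max_objects 100000
    ceph osd pool set-quota volumes max_bytes $((100 * 1024 * 1024 * 1024))

    # setting a quota back to 0 removes it
    ceph osd pool set-quota volumes max_objects 0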

Then there are incremental snapshots for RBD, another really handy feature for RBD management in cloud solutions like OpenStack.
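The rough idea, in a sketch: the image, snapshot, and file names below are invented, so double-check the details against the rbd man page.

    # initial snapshot of an image
    rbd snap create rbd/vm-disk@monday

    # a day later: new snapshot, then export only the blocks that changed in between
    rbd snap create rbd/vm-disk@tuesday
    rbd export-diff --from-snap monday rbd/vm-disk@tuesday vm-disk-mon-to-tue.diff

    # replay that diff onto a copy of the image, for example on a backup cluster
    rbd import-diff vm-disk-mon-to-tue.diff rbd/vm-disk-copy

That makes periodic, rsync-style backups of RBD images a lot less painful than shipping full exports around.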

There’s more, and you may head over to the press release and the Inktank blog for the details. And then you might want to mark your calendars for one of the following events:

All these events are expected to sell out beforehand, and they are only a couple of weeks away. So make sure you grab your seat, and we’ll see you there!


This article originally appeared on my blog on the hastexo.com website (now defunct).