
Ceph: Increasing placement groups in production

by Achim Ledermüller | Oct 25, 2017 | Ceph, Storage and Backup

If you have been running a Ceph cluster for several years, you will reach the point where the number of placement groups (PGs) no longer fits the size of your cluster. Maybe you increased the number of OSDs several times, or maybe you misjudged the growth of a pool years ago.
Too few PGs per OSD can lead to drawbacks regarding performance, recovery time and uneven data distribution. The latter can be examined with the command ceph osd df. It shows the usage, weight, variance and number of PGs for each OSD, as well as the min/max variance and the standard deviation of your OSD usage.
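For example, you can inspect the distribution and the current PG count of a pool before deciding on a new value (the pool name volumes is just a placeholder here):

$ ceph osd df                        # per-OSD usage, weight, variance and PG count
$ ceph osd pool get volumes pg_num   # current number of PGs of this pool
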
Increasing the number of PGs can lead to a more even data distribution,
but there are several warnings in blogs and mailing lists, especially for production environments. Doubling the PGs (e.g. from 1024 to 2048) may bring down your cluster for some minutes, because creating, activating and peering the new PGs can strongly affect your clients' traffic.
Given these warnings, we decided to increase the PGs in steps of 128. After each step we waited until all PGs had peered successfully, which took only a few seconds and did not affect the client traffic at all.
Increasing the number of PGs is done with two simple commands:

$ ceph osd pool set <pool> pg_num <int>
$ ceph osd pool set <pool> pgp_num <int>

Increasing pg_num creates new PGs, but the data rebalancing and backfilling will only start once pgp_num (the number of placement groups for placement) has been increased as well. pg_num and pgp_num should always have the same value. Increasing your PGs usually comes with a huge amount of backfilling, which should not be a problem for a well-configured cluster.
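
A minimal sketch of this stepwise approach could look like the following. The pool name volumes, the target of 2048 PGs and the step size of 128 are placeholders, and the loop simply waits as long as ceph pg stat still reports creating, peering or activating PGs:

#!/bin/bash
# Sketch: raise pg_num/pgp_num of a pool in small steps,
# waiting for all PGs to finish peering between steps.
POOL=volumes     # placeholder pool name
TARGET=2048      # placeholder target pg_num
STEP=128

CURRENT=$(ceph osd pool get "$POOL" pg_num | awk '{print $2}')

while [ "$CURRENT" -lt "$TARGET" ]; do
    NEXT=$((CURRENT + STEP))
    [ "$NEXT" -gt "$TARGET" ] && NEXT=$TARGET

    ceph osd pool set "$POOL" pg_num "$NEXT"
    ceph osd pool set "$POOL" pgp_num "$NEXT"

    # wait until no PG is creating, peering or activating anymore
    while ceph pg stat | grep -Eq 'creating|peering|activating'; do
        sleep 5
    done

    CURRENT="$NEXT"
done

The backfilling triggered by each step can be watched with ceph -s or ceph pg stat.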

Achim Ledermüller
Senior Manager Cloud

A Regensburg native in exile, he joined NETWAYS in 2012 after completing his business informatics degree there. In the Managed Services department he is responsible for operating and further developing our cloud platform.
