Getting pool and OSD information out of Ceph

Whenever I’m doing things with Ceph, I seem to forget the exact commands to get details and stats about Ceph pools and OSDs. There is the ceph command with its huge number of subcommands, but you can also use the rados and rbd commands.
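As a quick reminder, this is how each of those tools lists what exists in the cluster (the rbd example assumes a pool named rbd containing images, as in the output further down):

$ ceph osd lspools
$ rados lspools
$ rbd ls rbd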

Pool properties
First of all, there is a command that shows all pool properties at once. This is especially useful when you have tiered storage in Ceph:

$ ceph osd pool ls detail

pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 190129 lfor 190129 flags hashpspool tiers 29 read_tier 29 write_tier 29 min_write_recency_for_promote 1 stripe_width 0
 removed_snaps [1~3]

pool 29 'rbd_ssd' replicated size 3 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 350 pgp_num 350 last_change 190129 flags hashpspool,incomplete_clones tier_of 0 cache_mode writeback target_bytes 429496729600 hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 600s x4 decay_rate 0 search_last_n 0 min_read_recency_for_promote 2 min_write_recency_for_promote 2 stripe_width 0
 removed_snaps [1~3]

As far as I know, this is the only command that lists all properties of a pool at once; otherwise you have to query them one by one with ceph osd pool get.
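For comparison, querying properties one by one looks like this (using the rbd pool from the output above; the values match its pool detail line):

$ ceph osd pool get rbd size
size: 3
$ ceph osd pool get rbd pg_num
pg_num: 64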

Disk usage
Ceph has a plethora of commands to report data and disk usage:

First of all you can use ceph df detail. It gives detailed information about the disk usage per pool, and it takes the crush rules into account when calculating the available space. In the example below, two crush rules with different roots have been defined, which allows us to place pools on SSD storage and on SAS disks.

$ ceph df detail

(screenshot: ceph df detail output)
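If you want to use these numbers in scripts: like most ceph subcommands, ceph df detail can also emit machine-readable output, which should work on any reasonably recent release:

$ ceph df detail --format json-pretty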

Next, you can get a listing of the disk usage per OSD. The documentation mostly mentions ceph osd tree, which lists all OSDs and where they are located in the crush tree. With ceph osd df you get the disk usage of each OSD and the data distribution.

$ ceph osd df
(screenshot: ceph osd df output)
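On newer releases (Luminous and later, if I recall correctly) there is also ceph osd df tree, which combines both views: the usage columns of ceph osd df laid out along the crush tree of ceph osd tree:

$ ceph osd df tree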

To be complete, a final command is rados df. Its output is similar to that of ceph df detail. I only recently found out about ceph df detail; I had been using rados df for years already.

$ rados df
(screenshot: rados df output)
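If I remember correctly, rados also accepts a pool name via -p, which restricts the output to a single pool (again using the rbd pool from the examples above):

$ rados -p rbd df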
