16. Manually add an extra Ceph mon/mgr/osd node
16.1. Configuration on the original node
On the original node, edit the configuration file /etc/ceph/ceph.conf and add the new monitor host's name and IP address, for example node10 (192.168.5.10). On your system, the node name and IP address may differ.
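As an illustration, the monitor-related lines in /etc/ceph/ceph.conf might look like the following after the change (the host names and addresses here are hypothetical; use your own, and note that 192.168.5.6 for node06 is assumed, not taken from your cluster):

mon_initial_members = node06, node10
mon_host = 192.168.5.6, 192.168.5.10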
Modify the monitor map by including the new monitor:
monmaptool --add node10 192.168.5.10 --fsid f3df55d7-5766-4763-8e67-e588ddd22dcc /tmp/monmap
On your system, the node name, IP address, and fsid may be different.
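The monmaptool command above assumes the current monitor map has already been saved to /tmp/monmap. If it has not, you can extract it on the original node first:

ceph mon getmap -o /tmp/monmap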
16.2. Configuration on the new node
Image the new node with a fresh Debian installation using FOG.
Add Ceph apt repository:
apt install gnupg
apt install software-properties-common
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
apt-add-repository 'deb https://download.ceph.com/debian-quincy/ bullseye main'
apt update
Install ceph packages:
apt install ceph ceph-common
Copy the configuration and monmap files from the original node:
cd /etc/ceph
scp hostadm@node06:/etc/ceph/ceph.conf .
cd /tmp
scp hostadm@node06:/tmp/monmap .
Create the monitor directory:
mkdir /var/lib/ceph/mon/ceph-node10
chown ceph:ceph /var/lib/ceph/mon/ceph-node10
Create the monitor map:
sudo -u ceph ceph-mon --mkfs -i node10 --monmap /tmp/monmap
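If the cluster runs with cephx authentication enabled (the default), ceph-mon --mkfs also needs the monitor keyring. Assuming you have copied it from the original node to /tmp/ceph.mon.keyring (a hypothetical path), the command becomes:

sudo -u ceph ceph-mon --mkfs -i node10 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring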
Start the monitor daemon:
systemctl start ceph-mon@node10
Enable msgr2 communications module:
sudo ceph mon enable-msgr2
Start the manager and metadata daemons:
systemctl start ceph-mgr@node10
systemctl start ceph-mds@node10
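To have the daemons come back automatically after a reboot, you can also enable them (standard systemd usage; systemctl enable accepts several units at once):

systemctl enable ceph-mon@node10 ceph-mgr@node10 ceph-mds@node10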
Check the Ceph cluster status:
ceph -s
Also check the dashboard.
Check what drives you can add as OSD:
ceph-volume inventory
You may need to clean up SSD drives left over from previous LVM/OSD use. Follow the procedure in tutorial #13.3.
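One way to do the cleanup is with ceph-volume's zap subcommand, which wipes LVM metadata and data from a device. This is destructive, so double-check the device name first (/dev/sdc here is just an example):

ceph-volume lvm zap /dev/sdc --destroy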
Add the available drives as OSD, for example /dev/sdc:
ceph-volume lvm batch --bluestore /dev/sdc
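Note that ceph-volume lvm batch accepts several devices in one invocation. For example, to create bluestore OSDs on two drives at once (hypothetical device names):

ceph-volume lvm batch --bluestore /dev/sdc /dev/sdd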
See how they show up in the Ceph cluster:
ceph osd df