13. Ceph one-node manual installation
For simplicity, we won’t be using cephx and keyrings for service communications.
Steps:
Clean Debian 11 installation via FOG is needed.
Setup apt repository for Ceph Quincy
Install packages ceph and ceph-common
Clean up SSD drives from previous LVM/OSD use
Configuration file setup, /etc/ceph/ceph.conf
Monitor service setup
Manager service setup
Dashboard setup
Object Storage Device (OSD) setup
Storage Pool setup
File system, CephFS setup
Mounting CephFS
13.1. Setup apt repository
apt install gnupg
apt install software-properties-common
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
apt-add-repository 'deb https://download.ceph.com/debian-quincy/ bullseye main'
apt update
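You can verify that the ceph packages will be installed from the Quincy repository:
apt-cache policy ceph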
13.2. Install packages ceph and ceph-common
apt install ceph ceph-common
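Check the installed release:
ceph --version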
13.3. Clean up SSD drives from previous LVM/OSD use
Run the command
lsblk
Identify your system drive, the one that contains partitions ‘/’ and ‘/boot/efi’. Don’t touch it.
Clean the other three drives.
Run the command
lvdisplay
Get the VG names, then remove the logical volumes in each volume group, for example:
lvremove ceph-bce8d134-df89-4bdb-a800-80186c3da10e
lvremove ceph-ab92e1c3-e2df-4b0c-9c6e-0854d8c1662a
lvremove ceph-d4b5a4be-b29b-4097-abf3-8c83bc476baa
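lvs should no longer list any Ceph logical volumes:
lvs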
Zap the drives:
ceph-volume lvm zap /dev/sdb
ceph-volume lvm zap /dev/sdc
ceph-volume lvm zap /dev/sdd
Check if the volumes are gone:
ceph-volume lvm list
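lsblk should now show the three data drives without any LVM children:
lsblk /dev/sdb /dev/sdc /dev/sdd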
13.4. Configuration file setup, /etc/ceph/ceph.conf
We follow most of the steps from the Ceph documentation: Manual Deployment (https://docs.ceph.com/en/quincy/install/manual-deployment/).
Generate fsid:
uuidgen
Use the fsid above along with the correct IP address and the hostname for your node in the Ceph configuration file:
/etc/ceph/ceph.conf
[global]
fsid = f3df55d7-5766-4763-8e67-e588ddd22dcc
mon_initial_members = node03
mon_host = 192.168.5.3
public_network = 192.168.5.0/24
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
osd_pool_default_size = 3
osd_pool_default_min_size = 2
osd_pool_default_pg_num = 333
osd_crush_chooseleaf_type = 0
auth_allow_insecure_global_id_reclaim = false
[mon]
mon_allow_pool_delete = true
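A quick sanity check that the file parses and the fsid is readable:
ceph-conf --lookup fsid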
13.5. Monitor service setup
Create directories for the monitor and manager services:
mkdir /var/lib/ceph/mon/ceph-node03
mkdir /var/lib/ceph/mgr/ceph-node03
chown -R ceph:ceph /var/lib/ceph/mon
chown -R ceph:ceph /var/lib/ceph/mgr
Create the monitor map and then the monitor database in /var/lib/ceph/mon. Use your own IP address, node name, and the fsid from /etc/ceph/ceph.conf:
monmaptool --create --add node03 192.168.5.3 --fsid f3df55d7-5766-4763-8e67-e588ddd22dcc /tmp/monmap
sudo -u ceph ceph-mon --mkfs -i node03 --monmap /tmp/monmap
Start the monitor service:
systemctl start ceph-mon@node03
Check if the monitor is running:
systemctl status ceph-mon@node03
Enable messenger v2 protocol for inter-service communication:
ceph mon enable-msgr2
Check the Ceph status:
ceph -s
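If the monitor does not start, the service journal usually shows why:
journalctl -u ceph-mon@node03 -e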
13.6. Manager service setup
Start the manager service:
systemctl start ceph-mgr@node03
Check if the manager is running:
systemctl status ceph-mgr@node03
Check the Ceph status:
ceph status
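The active manager can also be confirmed with:
ceph mgr stat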
13.7. Dashboard setup
Check the manager modules:
ceph mgr services
ceph mgr module ls
ceph mgr module enable dashboard
ceph mgr module ls
Create a self-signed certificate for the Dashboard:
ceph dashboard create-self-signed-cert
Create the dashboard admin user with a password:
echo password > /tmp/password
ceph dashboard ac-user-create admin -i /tmp/password administrator
Restart the dashboard module so the new certificate takes effect:
ceph mgr module disable dashboard
ceph mgr module enable dashboard
Log in to the dashboard in the browser: https://node03:8443/
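If the page does not load, confirm the URL the dashboard is serving:
ceph mgr services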
13.8. Create OSDs
Assuming your storage drives are /dev/sdb, /dev/sdc, /dev/sdd:
ceph-volume lvm batch --bluestore /dev/sdb
ceph-volume lvm batch --bluestore /dev/sdc
ceph-volume lvm batch --bluestore /dev/sdd
Check the OSD inventory:
ceph-volume inventory
ceph osd ls
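You can also view the OSDs and their placement in the CRUSH tree:
ceph osd tree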
13.9. Storage Pool setup
Create data and metadata pools with a placement group number (pg_num) of 64:
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
Check the pool list:
ceph osd pool ls
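To see the new pools along with the available capacity:
ceph df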
13.10. File system, CephFS setup
Create a file system on the pools:
ceph fs new cephfs cephfs_metadata cephfs_data
Start the MDS service:
systemctl start ceph-mds@node03
Check if the MDS service is running:
systemctl status ceph-mds@node03
Check the file system list:
ceph fs ls
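Check the MDS state for the file system:
ceph mds stat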
13.11. Mounting CephFS on the node
Create a client user key:
ceph fs authorize cephfs client.user / rw | sudo tee /etc/ceph/ceph.client.user.keyring
Create the mounting point:
mkdir /mnt/cephfs
Mount CephFS onto the mounting point. Use the key from /etc/ceph/ceph.client.user.keyring; yours will be different.
mount -t ceph :/ /mnt/cephfs -o name=user,secret=AQDm26ZkvhASNRAA+BySJ/aTZsGNfq0tPl+FeA==
Check if the file system is mounted:
df -h
Save the secret in the file /etc/ceph/ceph.client.user.secfile.
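One way to do this is to extract the bare key from the cluster; the secfile must contain only the key itself:
ceph auth get-key client.user | sudo tee /etc/ceph/ceph.client.user.secfile
chmod 600 /etc/ceph/ceph.client.user.secfile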
Unmount the file system, then mount it again, this time referencing /etc/ceph/ceph.client.user.secfile:
umount /mnt/cephfs
mount -t ceph :/ /mnt/cephfs -o name=user,secretfile=/etc/ceph/ceph.client.user.secfile
13.12. Mounting CephFS on the LXC container
Copy the ceph.conf file from the Ceph node:
scp hostadm@node03:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
Note that the node name above will be different in your case.
Copy the secfile from the Ceph node:
scp hostadm@node03:/etc/ceph/ceph.client.user.secfile /etc/ceph/ceph.client.user.secfile
Create the mounting point and mount the Ceph file system from the node:
mkdir /mnt/cephfs
mount -t ceph node03:/ /mnt/cephfs -o name=user,secretfile=/etc/ceph/ceph.client.user.secfile
Check if the file system is mounted:
df -h
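If the container should mount CephFS automatically at boot, an /etc/fstab entry along these lines can be used (assuming the same node name, user, and secret file as above):
node03:/ /mnt/cephfs ceph name=user,secretfile=/etc/ceph/ceph.client.user.secfile,_netdev,noatime 0 0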
13.13. Add Ceph services to startup
On the node, add the crash, monitor, manager, metadata, and all the OSD services to startup:
systemctl enable ceph-crash
systemctl enable ceph-mon@node03
systemctl enable ceph-mgr@node03
systemctl enable ceph-mds@node03
systemctl enable ceph-osd@0
systemctl enable ceph-osd@1
systemctl enable ceph-osd@2
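You can double-check which Ceph units are enabled before rebooting:
systemctl list-unit-files 'ceph*' | grep enabled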
Reboot the system and verify that the Ceph status is OK:
ceph -s