Ceph installation on one node.
12. Ceph installation on one node.
The latest version of Ceph is Quincy.
Your node will be the Ceph cluster. Your LXC container will be a client that mounts the Ceph file system.
12.1. Clean Debian 12 installation with FOG.
Reimage your node with FOG to get a clean Debian 12 install.
Schedule a Deploy task for your node.
Login to the FOG portal: https://192.168.5.250//fog/
Username: fog
Password: password
Navigate to “Hosts”
List all hosts.
In the Task column, click on the green “Deploy” icon.
Click on Task to schedule a new task.
Configure your node for Network boot.
Login to the ADMIN page of your node, https://192.168.5.20x where ‘x’ is your node number.
Click on Remote Control.
Scroll down to iKVM.
Login to the console of your node.
Reboot with the Ctrl-Alt-Del key combination on the virtual keyboard.
When the system boots up, press F12 on the virtual keyboard.
The node should boot into the FOG installer, which will start the installation.
GRUB update.
The FOG installer shouldn’t touch your boot loader, so your node should be able to boot into the new Debian 12 installation fine.
Hostname update.
On the node, edit the file /etc/hostname and set the correct node name:
nano /etc/hostname
12.2. Preinstallation steps.
Try to SSH to the node.
You’ll need to remove the old SSH host key from your known_hosts file, then accept the new key.
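For example, assuming your node is node03 (substitute your own node name), the stale host key can be removed with:
ssh-keygen -R node03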
Login to the node.
The 3 non-system drives were used for the storage exercises. You need to remove the partition tables on them.
Find out what is your system drive:
df -h
If, for example, it is /dev/sdd, wipe out the partition tables on /dev/sda, /dev/sdb, /dev/sdc:
wipefs -a /dev/sda
wipefs -a /dev/sdb
wipefs -a /dev/sdc
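To confirm the partition tables are gone, you can list the block devices again; the three wiped drives should show no partitions:
lsblk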
Create user and group ceph with uid = gid = 167 to make them consistent with the containers:
groupadd -g 167 ceph
useradd -m -d /var/lib/ceph -g 167 -u 167 -s /usr/sbin/nologin ceph
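As a quick check, assuming the commands above succeeded, the new account should show uid and gid 167:
id ceph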
If the drives still show LVM labels on them, clear the LVM metadata with the commands below:
ceph-volume lvm zap /dev/sda
ceph-volume lvm zap /dev/sdb
ceph-volume lvm zap /dev/sdc
Install cephadm.
Install curl
sudo apt install curl
Reconfigure the timezone to US/Eastern:
sudo dpkg-reconfigure tzdata
Download cephadm for Quincy:
CEPH_RELEASE=17.2.6
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
Make the installation script executable:
chmod a+x cephadm
Become root and install cephadm on the system:
sudo -s
./cephadm install
Check where the cephadm binary got installed:
which cephadm
If you see /usr/sbin/cephadm, it means cephadm is installed on the system.
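As an additional check, you can print the cephadm version (this may pull the Ceph container image the first time, so it can take a moment):
cephadm version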
12.3. Bootstrap Ceph cluster.
We are going to use the IP address of eno1 interface for the monitoring service. Get the IP address of eno1 on your node:
ip add show eno1
Run the ceph bootstrap command. Substitute the IP from above for <mon-ip> below:
sudo cephadm bootstrap --mon-ip <mon-ip> --dashboard-password-noupdate --initial-dashboard-user admin --initial-dashboard-password password
Install Command Line Interface (CLI):
sudo cephadm install ceph-common
Check if the Ceph services are running:
ceph status
same as
ceph -s
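You can also list the containerized daemons that cephadm has deployed so far (mon, mgr, and so on):
ceph orch ps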
12.4. Ceph configurations.
Most of the Ceph processes run within containers and have their own copy of the /etc/ceph/ceph.conf file. Therefore, any configuration change has to be propagated to the files in every container.
Essentially, there are three recommended ways to update the Ceph configuration:
A) Using the ceph config command.
B) Getting the CRUSH map dump, decoding it, modifying the text configuration file, encoding the updated file, and then pushing the new CRUSH map back.
C) Using the GUI portal, which has limited functionality.
12.5. One-node cluster specifics.
Data replication should be done between the OSD devices only since there are no other nodes.
Check the configuration settings:
ceph config dump
ceph config get osd
ceph config get osd osd_crush_chooseleaf_type
Set osd_crush_chooseleaf_type = 0. This will prevent the monitor from attempting to store the data replicas on different nodes:
ceph config set osd osd_crush_chooseleaf_type 0
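Verify that the change took effect; the command below should now return 0:
ceph config get osd osd_crush_chooseleaf_type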
12.6. Ceph configuration via crush map dump. You need to do it only if you are getting HEALTH_WARN.
Check the crush dump rules for OSD:
ceph osd crush rule dump
Get the crush rules into a file comp_crush_map.cm:
ceph osd getcrushmap -o comp_crush_map.cm
We need to install the package ceph to get the crushtool command:
cephadm install ceph
Decode file comp_crush_map.cm:
crushtool -d comp_crush_map.cm -o crush_map.cm
Edit file crush_map.cm:
nano crush_map.cm
In line 57, replace the failure domain type host with type osd.
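For reference, assuming the default replicated rule, the edited line changes roughly as follows:
step chooseleaf firstn 0 type host
becomes
step chooseleaf firstn 0 type osd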
Encode crush_map.cm:
crushtool -c crush_map.cm -o new_crush_map.cm
Push the configuration change:
ceph osd setcrushmap -i new_crush_map.cm
Run
ceph -s
and check whether it reports health: HEALTH_OK now. Otherwise, run
ceph health detail
12.7. Create OSD.
Clear the LVM labels on the drives with the commands below:
ceph-volume lvm zap /dev/sda
ceph-volume lvm zap /dev/sdb
ceph-volume lvm zap /dev/sdc
Create OSD from the available SSD drives on the node:
ceph orch apply osd --all-available-devices
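Give the orchestrator a minute to create the OSDs, then check that all three drives were picked up:
ceph orch device ls
ceph osd tree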
Check the default placement group settings:
ceph config get osd osd_pool_default_pg_num
ceph config get osd osd_pool_default_pgp_num
Set osd_pool_default_pgp_num equal to osd_pool_default_pg_num (here 32):
ceph config set osd osd_pool_default_pgp_num 32
12.8. Dashboard.
Login to the dashboard with user admin and password password. The dashboard URL is printed at the end of the cephadm bootstrap output; by default it is served over HTTPS on port 8443 of the monitor IP.
Check the cluster status. It should be in state HEALTH_OK.
If it shows the HEALTH_WARN state, then most likely you need to change the CRUSH failure domain from the node level to osd, using the procedure in section 12.6.
12.9. Ceph file system setup for sharing.
Create volume cephfs:
ceph fs volume create cephfs
Verify it has been created:
ceph fs volume ls
Set up a key for user access to the volume:
ceph fs authorize cephfs client.user / rw | sudo tee /etc/ceph/ceph.client.user.keyring
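You can verify the new client and its capabilities, which should show read/write access to cephfs:
ceph auth get client.user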
Create mounting point and mount Ceph file system:
mkdir /mnt/cephfs
mount -t ceph :/ /mnt/cephfs -o name=user,secret=AQC76phkkEy1JxAA4hvQkHJ8MRu39xJEG+X1QQ==
The key above is from the file /etc/ceph/ceph.client.user.keyring; it will be different in your case.
Save the secret in file /etc/ceph/ceph.client.user.secfile.
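One way to populate the secfile, assuming the keyring contains the usual key = ... line, is to extract just the key and restrict the file permissions:
awk '/key = / {print $3}' /etc/ceph/ceph.client.user.keyring > /etc/ceph/ceph.client.user.secfile
chmod 600 /etc/ceph/ceph.client.user.secfile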
Unmount the file system, then mount it again referencing /etc/ceph/ceph.client.user.secfile:
umount /mnt/cephfs
mount -t ceph :/ /mnt/cephfs -o name=user,secretfile=/etc/ceph/ceph.client.user.secfile
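Check that the file system is mounted:
df -h /mnt/cephfs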
12.10. Ceph client setup.
The Ceph client should be set up on the LXC container.
Stop and disable autofs on the LXC container.
systemctl stop autofs
systemctl disable autofs
Configure the timezone for US/Eastern:
sudo dpkg-reconfigure tzdata
Install package for adding 3rd party repositories:
sudo apt install software-properties-common
Add Ceph apt repository:
sudo -s
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
apt-add-repository 'deb https://download.ceph.com/debian-quincy/ bullseye main'
apt update
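Before installing, you can confirm that the package will be pulled from the Ceph repository rather than the distribution archive:
apt-cache policy ceph-common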
Install package ceph-common:
apt install ceph-common
Copy the ceph.conf file from the Ceph node:
scp hostadm@node03:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
Note that the node name above will be different in your case.
Copy the secfile from the Ceph node:
scp hostadm@node03:/etc/ceph/ceph.client.user.secfile /etc/ceph/ceph.client.user.secfile
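Since the secfile contains the raw secret, it is a good idea to restrict its permissions on the container as well:
chmod 600 /etc/ceph/ceph.client.user.secfile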
Create the mounting point and mount Ceph file system from the node:
mkdir /mnt/cephfs
mount -t ceph node03:/ /mnt/cephfs -o name=user,secretfile=/etc/ceph/ceph.client.user.secfile
Check if the file system is mounted:
df -h
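If you want the container to remount the Ceph file system automatically after a reboot, an /etc/fstab entry along these lines should work (substitute your node name):
node03:/  /mnt/cephfs  ceph  name=user,secretfile=/etc/ceph/ceph.client.user.secfile,_netdev  0  0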