Integrating OpenStack and Ceph Manually

In the previous posts we covered OpenStack and Ceph. This time we will try to integrate the two, so that OpenStack stores data such as instances, images, and volumes in Ceph.

What do we gain by storing data in Ceph? High availability, for one: Ceph replicates our data as many times as we want (3 replicas by default), and we can recover when data stored in Ceph gets corrupted.
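
For example, once the pools below exist, you can check (or change) how many replicas a pool keeps. This is just an optional sketch, using the volumes pool we create later:
# alfian-controller
ceph osd pool get volumes size
ceph osd pool set volumes size 3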

We will do the integration manually, without an automation tool such as kolla-ansible. Why? So that we know exactly which settings are needed to wire OpenStack and Ceph together by hand. Note that in this post I configure the containers directly, so the changes are effectively temporary and will be lost if you reconfigure with kolla-ansible.

The environment continues from the previous posts: 3 nodes (alfian-controller, alfian-compute1, alfian-compute2) on which OpenStack has already been deployed with kolla-ansible and Ceph with cephadm.

Let's get straight into the integration steps. Let's go…

Integrating OpenStack and Ceph

  1. Create the pools
# alfian-controller
ceph osd pool create volumes
ceph osd pool create images
ceph osd pool create backups
ceph osd pool create vms
  2. Initialize the pools
# alfian-controller
rbd pool init volumes
rbd pool init images
rbd pool init backups
rbd pool init vms
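To make sure the pools exist and are tagged for RBD use, you can list them; this is an optional verification step:
# alfian-controller
ceph osd pool ls detail
ceph osd pool application get volumes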
  3. Copy the Ceph configuration to all nodes
# alfian-controller
ssh alfian-compute1 sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
ssh alfian-compute2 sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
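A quick way to confirm that every node now has the same ceph.conf is to compare checksums (optional check):
# alfian-controller
md5sum /etc/ceph/ceph.conf
ssh alfian-compute1 md5sum /etc/ceph/ceph.conf
ssh alfian-compute2 md5sum /etc/ceph/ceph.conf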
  4. Copy the Ceph configuration into the containers
# alfian-controller
sudo docker cp /etc/ceph/ceph.conf glance_api:/etc/ceph/ceph.conf
sudo docker cp /etc/ceph/ceph.conf cinder_api:/etc/ceph/ceph.conf
sudo docker cp /etc/ceph/ceph.conf cinder_volume:/etc/ceph/ceph.conf
sudo docker cp /etc/ceph/ceph.conf cinder_scheduler:/etc/ceph/ceph.conf
sudo docker cp /etc/ceph/ceph.conf cinder_backup:/etc/ceph/ceph.conf
# alfian-compute1
sudo docker cp /etc/ceph/ceph.conf cinder_volume:/etc/ceph/ceph.conf
sudo docker cp /etc/ceph/ceph.conf cinder_backup:/etc/ceph/ceph.conf
# alfian-compute2
sudo docker cp /etc/ceph/ceph.conf cinder_volume:/etc/ceph/ceph.conf
sudo docker cp /etc/ceph/ceph.conf cinder_backup:/etc/ceph/ceph.conf
  5. Install ceph-common on the compute nodes
# alfian-compute1 & alfian-compute2
sudo apt-get install ceph-common
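To confirm the Ceph client tools are in place on both compute nodes, you can check the installed version:
# alfian-compute1 & alfian-compute2
ceph --version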
  6. Create the keyrings
sudo ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'profile rbd pool=images'
sudo ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images' mgr 'profile rbd pool=volumes, profile rbd pool=vms'
sudo ceph auth get-or-create client.cinder-backup mon 'profile rbd' osd 'profile rbd pool=backups' mgr 'profile rbd pool=backups'
  7. Add the keyrings to all nodes
# alfian-controller
sudo ceph auth get-or-create client.glance | sudo tee /etc/ceph/ceph.client.glance.keyring
sudo ceph auth get-or-create client.cinder | sudo tee /etc/ceph/ceph.client.cinder.keyring
sudo ceph auth get-or-create client.cinder-backup | sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
sudo ceph auth get-or-create client.cinder | ssh alfian-compute1 sudo tee /etc/ceph/ceph.client.cinder.keyring
sudo ceph auth get-or-create client.cinder-backup | ssh alfian-compute1 sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
sudo ceph auth get-or-create client.cinder | ssh alfian-compute2 sudo tee /etc/ceph/ceph.client.cinder.keyring
sudo ceph auth get-or-create client.cinder-backup | ssh alfian-compute2 sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
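To verify that the cinder keyring actually authenticates against the cluster, you can list the (still empty) volumes pool as that client; this is an optional check:
# alfian-compute1
sudo rbd --id cinder -p volumes ls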
  8. Copy the keyrings into the containers
# alfian-controller
sudo docker cp /etc/ceph/ceph.client.glance.keyring glance_api:/etc/ceph/ceph.client.glance.keyring
sudo docker cp /etc/ceph/ceph.client.cinder.keyring cinder_api:/etc/ceph/ceph.client.cinder.keyring
sudo docker cp /etc/ceph/ceph.client.cinder.keyring cinder_volume:/etc/ceph/ceph.client.cinder.keyring
sudo docker cp /etc/ceph/ceph.client.cinder.keyring cinder_scheduler:/etc/ceph/ceph.client.cinder.keyring
sudo docker cp /etc/ceph/ceph.client.cinder-backup.keyring cinder_backup:/etc/ceph/ceph.client.cinder-backup.keyring
# alfian-compute1
sudo docker cp /etc/ceph/ceph.client.cinder.keyring cinder_volume:/etc/ceph/ceph.client.cinder.keyring
sudo docker cp /etc/ceph/ceph.client.cinder-backup.keyring cinder_backup:/etc/ceph/ceph.client.cinder-backup.keyring
sudo docker cp /etc/ceph/ceph.client.cinder.keyring nova_libvirt:/etc/ceph/ceph.client.cinder.keyring
sudo docker cp /etc/ceph/ceph.client.cinder.keyring nova_compute:/etc/ceph/ceph.client.cinder.keyring
# alfian-compute2
sudo docker cp /etc/ceph/ceph.client.cinder.keyring cinder_volume:/etc/ceph/ceph.client.cinder.keyring
sudo docker cp /etc/ceph/ceph.client.cinder-backup.keyring cinder_backup:/etc/ceph/ceph.client.cinder-backup.keyring
sudo docker cp /etc/ceph/ceph.client.cinder.keyring nova_libvirt:/etc/ceph/ceph.client.cinder.keyring
sudo docker cp /etc/ceph/ceph.client.cinder.keyring nova_compute:/etc/ceph/ceph.client.cinder.keyring
  9. Create a secret key for nova-compute
# alfian-controller
sudo ceph auth get-key client.cinder | ssh alfian-compute1 sudo tee /etc/ceph/client.cinder.key
sudo ceph auth get-key client.cinder | ssh alfian-compute2 sudo tee /etc/ceph/client.cinder.key
# alfian-compute1 & alfian-compute2
uuidgen
sudo vi /etc/ceph/secret.xml
...
<secret ephemeral='no' private='no'>
 <uuid>b4d1424d-2fae-492e-bea7-942c7b348ad7</uuid>
 <usage type='ceph'>
 <name>client.cinder secret</name>
 </usage>
</secret>
...
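The UUID only needs to be generated once: the same value has to appear in secret.xml on both compute nodes and, later on, as rbd_secret_uuid in the Cinder and Nova configuration. A quick sanity check is to compare what each node ended up with:
# alfian-compute1 & alfian-compute2
grep uuid /etc/ceph/secret.xml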
  10. Copy the secret key into the nova containers
# alfian-compute1 & alfian-compute2
sudo docker cp /etc/ceph/client.cinder.key nova_compute:/etc/ceph/client.cinder.key
sudo docker cp /etc/ceph/client.cinder.key nova_libvirt:/etc/ceph/client.cinder.key
  11. Add the secret to libvirt
# alfian-compute1 & alfian-compute2
sudo docker cp /etc/ceph/secret.xml nova_libvirt:/etc/ceph/secret.xml
sudo docker exec -it nova_libvirt bash
# container nova_libvirt
cd /etc/ceph
virsh secret-define --file secret.xml
virsh secret-set-value --secret b4d1424d-2fae-492e-bea7-942c7b348ad7 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
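You can confirm that libvirt now knows about the secret (still inside the nova_libvirt container):
# container nova_libvirt
virsh secret-list
virsh secret-get-value --secret b4d1424d-2fae-492e-bea7-942c7b348ad7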
  12. Configure Glance
# alfian-controller
sudo vi /etc/kolla/glance-api/glance-api.conf
...
[DEFAULT]
enabled_backends = rbd:rbd
show_image_direct_url = True

[glance_store]
default_backend = rbd
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8

[rbd]
rbd_store_user = glance
rbd_store_pool = images
rbd_store_chunk_size = 8
  13. Configure Cinder
# alfian-controller
sudo vi /etc/kolla/cinder-api/cinder.conf
...
[DEFAULT]
enabled_backends = ceph
glance_api_version = 2

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_secret_uuid = b4d1424d-2fae-492e-bea7-942c7b348ad7
...
# alfian-controller
sudo vi /etc/kolla/cinder-scheduler/cinder.conf
...
[DEFAULT]
enabled_backends = ceph
glance_api_version = 2

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_secret_uuid = b4d1424d-2fae-492e-bea7-942c7b348ad7
...
# all nodes
sudo vi /etc/kolla/cinder-volume/cinder.conf
...
[DEFAULT]
enabled_backends = ceph
glance_api_version = 2

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_secret_uuid = b4d1424d-2fae-492e-bea7-942c7b348ad7
...
# all nodes
sudo vi /etc/kolla/cinder-backup/cinder.conf
...
[DEFAULT]
enabled_backends = ceph
glance_api_version = 2

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = backups
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_secret_uuid = b4d1424d-2fae-492e-bea7-942c7b348ad7
...
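One caveat: the cinder-backup service itself is driven by the backup_* options rather than by the [ceph] volume backend section above. If you want backups to land in the backups pool using the client.cinder-backup keyring, the upstream Ceph/Cinder documentation adds settings along these lines to cinder-backup's cinder.conf (a sketch; verify the option names against your OpenStack release):
# all nodes
sudo vi /etc/kolla/cinder-backup/cinder.conf
...
[DEFAULT]
backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_pool = backups
...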
  14. Edit the Ceph configuration on the compute nodes
# compute node
sudo vi /etc/ceph/ceph.conf
...
[client]
 rbd cache = true
 rbd cache writethrough until flush = true
 admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
 log file = /var/log/qemu/qemu-guest-$pid.log
 rbd concurrent management ops = 20
...
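The admin socket and log file paths above point to directories that usually do not exist yet. The Ceph documentation creates them up front; since kolla runs QEMU inside the nova_libvirt container, you may need to create them there as well (adjust ownership to whatever user runs QEMU in your setup):
# compute nodes
sudo mkdir -p /var/run/ceph/guests/ /var/log/qemu/
sudo docker exec nova_libvirt mkdir -p /var/run/ceph/guests/ /var/log/qemu/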
  15. Copy the updated Ceph configuration into the nova containers
# compute nodes
sudo docker cp /etc/ceph/ceph.conf nova_compute:/etc/ceph/ceph.conf
sudo docker cp /etc/ceph/ceph.conf nova_libvirt:/etc/ceph/ceph.conf
  16. Configure Nova
# compute nodes
sudo vi /etc/kolla/nova-compute/nova.conf
...
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = b4d1424d-2fae-492e-bea7-942c7b348ad7
disk_cachemodes="network=writeback"
inject_password = false
inject_key = false
inject_partition = -2
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
...
  17. Restart the services
# alfian-controller
sudo docker restart glance_api
sudo docker restart cinder_api
sudo docker restart cinder_scheduler
sudo docker restart cinder_volume
sudo docker restart cinder_backup
# alfian-compute1
sudo docker restart cinder_volume
sudo docker restart cinder_backup
sudo docker restart nova_compute
sudo docker restart nova_libvirt
sudo docker restart nova_ssh
# alfian-compute2
sudo docker restart cinder_volume
sudo docker restart cinder_backup
sudo docker restart nova_compute
sudo docker restart nova_libvirt
sudo docker restart nova_ssh
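After the restarts, it is worth checking that the Cinder services registered the ceph backend and that none of the containers are stuck restarting (optional check):
# alfian-controller
openstack volume service list
sudo docker ps | grep -E 'glance|cinder|nova'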

Operational Test

  1. Create an image

$ wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
$ openstack image create  --disk-format qcow2 --file jammy-server-cloudimg-amd64.img ubuntu-jammy
$ openstack image list
+--------------------------------------+--------------+--------+
| ID                                   | Name         | Status |
+--------------------------------------+--------------+--------+
| 772b4046-edf8-4b7f-9b54-c2c0ced1da9c | ubuntu-jammy | active |
+--------------------------------------+--------------+--------+
$ sudo rbd -p images ls
772b4046-edf8-4b7f-9b54-c2c0ced1da9c
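Note that Ceph RBD works best with raw images: copy-on-write cloning from the images pool only works when the image is stored in raw format, so the qcow2 upload above still boots but loses that benefit. If you want raw, convert first (qemu-img is assumed to be available, and ubuntu-jammy-raw is just an example name):
$ qemu-img convert -f qcow2 -O raw jammy-server-cloudimg-amd64.img jammy-server-cloudimg-amd64.raw
$ openstack image create --disk-format raw --file jammy-server-cloudimg-amd64.raw ubuntu-jammy-raw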
  2. Create instances
$ openstack server create --image ubuntu-jammy --flavor m1.medium --nic port-id=port --security-group security-group --key-name keypair node1
$ openstack server create --image ubuntu-jammy --flavor m1.medium --nic port-id=port2 --security-group security-group --key-name keypair node2
$ openstack server list
+--------------------------------------+-------+--------+--------------------------------+--------------+-----------+
| ID                                   | Name  | Status | Networks                       | Image        | Flavor    |
+--------------------------------------+-------+--------+--------------------------------+--------------+-----------+
| 29e58e7e-0c62-4ef0-bd64-2a564ceb61e0 | node2 | ACTIVE | internal-network=192.168.7.110 | ubuntu-jammy | m1.medium |
| 4f8e7401-aaa5-4703-8dc3-dda8ba003631 | node1 | ACTIVE | internal-network=192.168.7.100 | ubuntu-jammy | m1.medium |
+--------------------------------------+-------+--------+--------------------------------+--------------+-----------+
$ sudo rbd -p vms ls
29e58e7e-0c62-4ef0-bd64-2a564ceb61e0_disk
4f8e7401-aaa5-4703-8dc3-dda8ba003631_disk
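
Volumes were not exercised above, so as a final check you can create a Cinder volume and confirm that the matching RBD image shows up in the volumes pool (the volume name test-volume is just an example):
$ openstack volume create --size 1 test-volume
$ openstack volume list
$ sudo rbd -p volumes ls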
