
Add SSD Journal to osd via GUI

Hello.

I have a cluster with 5 HDDs per node, used only for S3 object storage.
I have installed one datacenter NVMe (with PLP) in each node, so I can move the journal to an NVMe partition.
Can I modify the OSDs without destroying/recreating (and waiting for the rebuild of) each OSD?

Thanks, Fabrizio

For existing running OSDs, you can do this with the script

/opt/petasan/scripts/util/migrate_db.sh

Look at the comments within the script for usage.
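
For background only (this is my assumption, not a description of the script's internals): the script should wrap the upstream Ceph procedure for attaching and migrating a block.db device, which uses ceph-bluestore-tool. A minimal sketch of that manual equivalent, with osd ID 6 and an NVMe partition as placeholder examples:

# Read the usage notes in the script's header comments first:
head -n 40 /opt/petasan/scripts/util/migrate_db.sh

# Rough upstream-Ceph equivalent (a sketch only; the device name is a
# placeholder, and the PetaSAN script likely handles extra housekeeping
# such as permissions and symlinks for you):
systemctl stop ceph-osd@6

# Attach the new (empty) DB device:
ceph-bluestore-tool bluefs-bdev-new-db \
    --path /var/lib/ceph/osd/ceph-6 \
    --dev-target /dev/nvme0n1p1

# Move the existing RocksDB data from the main device onto it:
ceph-bluestore-tool bluefs-bdev-migrate \
    --path /var/lib/ceph/osd/ceph-6 \
    --devs-source /var/lib/ceph/osd/ceph-6/block \
    --dev-target /var/lib/ceph/osd/ceph-6/block.db

systemctl start ceph-osd@6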

Quote from admin on December 8, 2025, 1:42 pm

For existing running OSDs, you can do this with the script

/opt/petasan/scripts/util/migrate_db.sh

Look at the comments within the script for usage.

Thank you; I've done it. Now in the OSD list I can see the linked device, but the OSD doesn't start anymore.
This is from ceph-osdX.log:

-25> 2025-12-10T18:39:54.740+0100 7f264ec1d7c0 1 bdev(0x55b5ab104e00 /var/lib/ceph/osd/ceph-6/block) open size 6000601989120 (0x5751fc00000, 5.5 TiB) block_size 4096 (4 KiB) rotational device, discard not supported
-24> 2025-12-10T18:39:54.740+0100 7f264ec1d7c0 1 bluestore(/var/lib/ceph/osd/ceph-6) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
-23> 2025-12-10T18:39:54.740+0100 7f264ec1d7c0 5 asok(0x55b5ab12c000) register_command bluestore bluefs device info hook 0x55b5abe8d930
-22> 2025-12-10T18:39:54.740+0100 7f264ec1d7c0 5 asok(0x55b5ab12c000) register_command bluefs stats hook 0x55b5abe8d930
-21> 2025-12-10T18:39:54.740+0100 7f264ec1d7c0 5 asok(0x55b5ab12c000) register_command bluefs files list hook 0x55b5abe8d930
-20> 2025-12-10T18:39:54.740+0100 7f264ec1d7c0 5 asok(0x55b5ab12c000) register_command bluefs debug_inject_read_zeros hook 0x55b5abe8d930
-19> 2025-12-10T18:39:54.740+0100 7f264ec1d7c0 1 bdev(0x55b5ab105500 /var/lib/ceph/osd/ceph-6/block.db) open path /var/lib/ceph/osd/ceph-6/block.db
-18> 2025-12-10T18:39:54.740+0100 7f264ec1d7c0 1 bdev(0x55b5ab105500 /var/lib/ceph/osd/ceph-6/block.db) open size 140000624640 (0x2098b00000, 130 GiB) block_size 4096 (4 KiB) non-rotational device, discard supported
-17> 2025-12-10T18:39:54.740+0100 7f264ec1d7c0 1 bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-6/block.db size 130 GiB
-16> 2025-12-10T18:39:54.740+0100 7f264ec1d7c0 1 bdev(0x55b5ab105180 /var/lib/ceph/osd/ceph-6/block) open path /var/lib/ceph/osd/ceph-6/block
-15> 2025-12-10T18:39:54.740+0100 7f264ec1d7c0 1 bdev(0x55b5ab105180 /var/lib/ceph/osd/ceph-6/block) open size 6000601989120 (0x5751fc00000, 5.5 TiB) block_size 4096 (4 KiB) rotational device, discard not supported
-14> 2025-12-10T18:39:54.740+0100 7f264ec1d7c0 1 bluefs add_block_device bdev 2 path /var/lib/ceph/osd/ceph-6/block size 5.5 TiB
-13> 2025-12-10T18:39:54.744+0100 7f264ec1d7c0 1 set rocksdb option writable_file_max_buffer_size = 0
-12> 2025-12-10T18:39:54.744+0100 7f264ec1d7c0 1 set rocksdb option max_bytes_for_level_base = 1073741824
-11> 2025-12-10T18:39:54.744+0100 7f264ec1d7c0 1 set rocksdb option level0_file_num_compaction_trigger = 8
-10> 2025-12-10T18:39:54.744+0100 7f264ec1d7c0 1 set rocksdb option max_background_jobs = 4
-9> 2025-12-10T18:39:54.744+0100 7f264ec1d7c0 1 set rocksdb option write_buffer_size = 16777216
-8> 2025-12-10T18:39:54.744+0100 7f264ec1d7c0 1 set rocksdb option compaction_style = kCompactionStyleLevel
-7> 2025-12-10T18:39:54.744+0100 7f264ec1d7c0 1 set rocksdb option compaction_readahead_size = 2MB
-6> 2025-12-10T18:39:54.744+0100 7f264ec1d7c0 1 set rocksdb option max_bytes_for_level_multiplier = 8
-5> 2025-12-10T18:39:54.744+0100 7f264ec1d7c0 1 set rocksdb option max_write_buffer_number = 64
-4> 2025-12-10T18:39:54.744+0100 7f264ec1d7c0 1 set rocksdb option max_total_wal_size = 1073741824
-3> 2025-12-10T18:39:54.744+0100 7f264ec1d7c0 1 set rocksdb option min_write_buffer_number_to_merge = 6
-2> 2025-12-10T18:39:54.744+0100 7f264ec1d7c0 1 set rocksdb option compression = kLZ4Compression
-1> 2025-12-10T18:39:54.744+0100 7f264ec1d7c0 1 bluefs mount
0> 2025-12-10T18:39:54.748+0100 7f264ec1d7c0 -1 *** Caught signal (Aborted) **
in thread 7f264ec1d7c0 thread_name:ceph-osd


Not sure; you could try the script in a test environment first to see how it works.

For the failed OSD, you can increase the log level to see if you can get more detailed error output; otherwise you may unfortunately need to delete and recreate it.
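
A minimal sketch of both suggestions, assuming osd.6 from the log above (adjust the ID, device, and paths to your setup):

# Raise BlueFS/BlueStore verbosity, retry the start, and watch the log:
ceph config set osd.6 debug_bluefs 20/20
ceph config set osd.6 debug_bluestore 20/20
systemctl restart ceph-osd@6
tail -f /var/log/ceph/ceph-osd.6.log

# It may also help to check what BlueStore sees on disk, e.g. whether
# block.db actually points at the intended NVMe partition:
ls -l /var/lib/ceph/osd/ceph-6/block.db
ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-6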