PetaSAN 4.0.0 Released!

admin
2,973 Posts
March 28, 2025, 5:11 pm
Happy to announce our newest release, version 4.0.0!
New Features:
- Ceph Reef 18.2.4.
- Ubuntu 22.04.
- mClock QoS.
- Updated NFS Container.
- Gradual filling for new OSDs.
- Chart new filter for top 5 disks.
- Start/Stop OSDs from UI.
- Display more hardware info on physical drives.
- General improvements.
For online upgrade, see the latest Online Upgrade Guide.
Last edited on March 28, 2025, 5:12 pm by admin · #1

f.cuseo
86 Posts
March 29, 2025, 11:58 am
Wonderful, thanks for your work.
I'm upgrading a test (virtual machine) cluster to verify whether the problem with multipart upload is fixed.

f.cuseo
86 Posts
March 29, 2025, 6:45 pm
With 4.0, same multipart upload problem as 3.3.0 🙁 It doesn't work.
With 3.2.1 it works like a charm...

yangsm
34 Posts
March 30, 2025, 3:16 am
On Quincy, the S3 CompleteMultipartUploadResult has an empty ETag element. Can you compile a 17.2.8 version for testing? Thank you.

f.cuseo
86 Posts
March 30, 2025, 8:26 am
Quote from yangsm on March 30, 2025, 3:16 am
On Quincy, the S3 CompleteMultipartUploadResult has an empty ETag element. Can you compile a 17.2.8 version for testing? Thank you.
But Reef 18.2.4 is already patched... and it is the version used with PetaSAN 4.0.

admin
2,973 Posts
March 30, 2025, 7:12 pm
What client-side tool do you use? We will try to test this ourselves, probably within the coming month.

f.cuseo
86 Posts
March 31, 2025, 8:26 am
Quote from admin on March 30, 2025, 7:12 pm
What client-side tool do you use? We will try to test this ourselves, probably within the coming month.
Hello.
I am using rclone, several versions.
Now I will try with different clients.
Thanks, Fabrizio
rclone version
rclone v1.53.3-DEV
- os/arch: linux/amd64
- go version: go1.18.1
------------
rclone2 version
rclone v1.64.2
- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 5.15.0-88-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.21.3
- go/linking: static
- go/tags: none
---------------
./rclone version
rclone v1.69.1
- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 5.15.0-88-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.24.0
- go/linking: static
- go/tags: none
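For anyone who wants to reproduce the empty-ETag symptom independently of rclone, here is a minimal sketch using boto3 against an S3-compatible endpoint. The endpoint URL, credentials, bucket, and key below are placeholders, not PetaSAN defaults; it simply completes a small multipart upload and prints the ETag returned in CompleteMultipartUploadResult, which should be non-empty on a patched RGW.
```python
# Minimal multipart-upload check with boto3 (placeholder endpoint/credentials).
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.local:8080",  # placeholder RGW endpoint
    aws_access_key_id="ACCESS_KEY",                # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

bucket, key = "test-bucket", "multipart-test.bin"
part_size = 5 * 1024 * 1024  # 5 MiB, the minimum size for non-final parts

mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
parts = []
for num in (1, 2):
    resp = s3.upload_part(
        Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
        PartNumber=num, Body=b"x" * part_size,
    )
    parts.append({"PartNumber": num, "ETag": resp["ETag"]})

result = s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
    MultipartUpload={"Parts": parts},
)
# An empty string here corresponds to the empty ETag element reported above.
print("Completed object ETag:", repr(result.get("ETag")))
```
With rclone the equivalent check is simply whether uploads larger than the configured chunk size complete without errors.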

f.cuseo
86 Posts
March 31, 2025, 8:52 am
Quote from admin on March 30, 2025, 7:12 pm
What client-side tool do you use? We will try to test this ourselves, probably within the coming month.
Confirmed working with S3 Browser 12.2.9 (Windows).

wid
51 Posts
April 4, 2025, 3:45 pm
Oh, it's a miracle!
The persistent problem with:
Health: warning (1 clients failing to respond to cache pressure)
disappeared for good after the update.
Thank you very, very much for your work.

wid
51 Posts
April 7, 2025, 1:32 pm
After upgrading to PetaSAN 4.0.0 (Ceph Reef 18.2), I started seeing this warning in ceph health detail:
HEALTH_WARN 1 OSD(s) experiencing BlueFS spillover
BLUEFS_SPILLOVER: osd.X spilled over XX GiB metadata from 'db' device to slow device
This happens on large OSDs (14–18 TB), even if the block.db partition is 60 GB and not full.
It seems Ceph moved metadata from NVMe to the main HDD, which can reduce performance.
This issue is explained here:
https://hexide.com/ceph-osd-spilled-over/
Right now, PetaSAN automatically creates 60 GB partitions on NVMe for all OSDs.
But for bigger disks this is too small; we may need 80–100 GB for block.db to avoid spillover.
Suggestion: please consider adding an option in the GUI to choose the block.db size, or allow changing it during disk setup.
Is this planned?
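Until there is a GUI option for the block.db size, one way to see how much metadata has actually spilled per OSD is to read the bluefs counters from the OSD's perf dump. Below is a minimal sketch: it assumes the ceph CLI and the OSD's admin socket are available on the node where it runs, the OSD id is a placeholder, and the exact counter names under the "bluefs" section may differ slightly between Ceph releases.
```python
# Sketch: report BlueFS db/slow usage for one OSD from its perf counters.
# Run on the node hosting the OSD ('ceph daemon' uses the local admin socket).
import json
import subprocess

osd_id = 12  # placeholder OSD id

out = subprocess.check_output(["ceph", "daemon", f"osd.{osd_id}", "perf", "dump"])
bluefs = json.loads(out).get("bluefs", {})  # counter names assumed; may vary by release

gib = 1024 ** 3
db_used = bluefs.get("db_used_bytes", 0) / gib
db_total = bluefs.get("db_total_bytes", 0) / gib
slow_used = bluefs.get("slow_used_bytes", 0) / gib

print(f"osd.{osd_id} db:   {db_used:.1f} GiB used of {db_total:.1f} GiB")
print(f"osd.{osd_id} slow: {slow_used:.1f} GiB of metadata on the slow device (spillover if > 0)")
```
Running this across the affected OSDs would show whether the 60 GB block.db partitions are the ones spilling, which would support sizing them larger for the 14–18 TB drives.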