Storage option available in Ceph

the.only.chaos.lucifer
31 Posts
January 18, 2024, 7:00 pm
Thanks, good to know PetaSAN supports both async and sync like the MinIO video shows, which is cool. That video clarifies a lot of things and your explanation has been extremely helpful. Now I need to find a good PetaSAN setup guide for a Ceph beginner (lol) so I can test both asynchronous and synchronous replication, since I am still not sure which case is the right one for me.
Last edited on January 18, 2024, 7:34 pm by the.only.chaos.lucifer · #11

the.only.chaos.lucifer
31 Posts
January 21, 2024, 8:47 am
@Admin It turns out that async vs. sync doesn't make much of a difference for me, so I will go with the method outlined in the document your team posted. I think I am slowly getting closer to my solution, thanks! I do have a question about the setup, though.
I was going through the S3 document and was wondering whether the multisite setup has to be on the same subnet or can be on different subnets. I notice that in the documentation both sites are on the same subnet, but regional sites connected through a VPN usually have different subnets.
Site 1 node has 3 interfaces:
- Management: subnet 10.0.1.0, mask 255.255.255.0
- Backend: subnet 10.0.2.0, mask 255.255.255.0
- S3: subnet 10.0.3.0, mask 255.255.255.0
Site 2:
- Management: subnet 10.1.1.0, mask 255.255.255.0
- Backend: subnet 10.1.2.0, mask 255.255.255.0
- S3 Public: subnet 10.1.3.0, mask 255.255.255.0
The only reason I am asking is that the VPN setup puts the sites on different subnets. If multisite has to be on the same subnet I will set it up that way, but if it doesn't, that would be ideal. I did notice that when the nodes of a single cluster are on different subnets, the backend interfaces can't talk to each other and fail to connect, but I'm not sure whether the same restriction applies to the multisite setup.
Last edited on January 21, 2024, 8:55 am by the.only.chaos.lucifer · #12

admin
2,973 Posts
January 21, 2024, 9:50 am
S3 multisite replication uses HTTP URLs, so the sites do not need to be on the same subnet.
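As a quick illustration of this point (the hosts and port below are placeholders, not values from this thread): because the multisite zone endpoints are ordinary HTTP URLs, all that matters is that each site can route to the other site's RGW endpoint, for example over the VPN. A minimal reachability check might look like:

import requests

# Sketch: verify that each site's RGW endpoint is reachable over plain HTTP
# from the other side of the VPN. The URLs/ports are placeholders; they are
# the same kind of URLs that get configured as the multisite zone endpoints.
endpoints = {
    "site1": "http://10.0.3.10:8080",  # placeholder host on site 1's S3 subnet
    "site2": "http://10.1.3.10:8080",  # placeholder host on site 2's S3 subnet
}

for site, url in endpoints.items():
    try:
        resp = requests.get(url, timeout=5)
        # RGW answers the root path with an XML response (bucket listing or
        # access-denied), so any HTTP status here proves the route works.
        print(f"{site}: reachable (HTTP {resp.status_code})")
    except requests.RequestException as err:
        print(f"{site}: not reachable ({err})")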
Last edited on January 21, 2024, 9:51 am by admin · #13

the.only.chaos.lucifer
31 Posts
February 7, 2024, 8:40 am
@admin thanks for the help, I was able to get S3 multisite working. It seems S3 only syncs the "zone00001.rgw.XXX" and "zone00002.rgw.XXX" pools, in an active-active asynchronous fashion, based on what you told me earlier. I will play around with it now.
Below are the "zone00001.rgw.XXX" pools:
Name | Type | Usage | PGs Autoscale | PGs | Size | Min Size | Rule Name | Used Space | Available Space | Active OSDs | Status
.rgw.root | replicated | radosgw | on | 64 | 3 | 2 | replicated_rule | 240.0 KB | 237.33 GB | 3 | Active
zone00001.rgw.buckets.data | replicated | radosgw | on | 64 | 3 | 2 | replicated_rule | 0 Bytes | 237.33 GB | 3 | Active
zone00001.rgw.buckets.index | replicated | radosgw | on | 16 | 3 | 2 | replicated_rule | 0 Bytes | 237.33 GB | 3 | Active
zone00001.rgw.control | replicated | radosgw | on | 16 | 3 | 1 | replicated_rule | 0 Bytes | 237.33 GB | 3 | Active
zone00001.rgw.log | replicated | radosgw | on | 16 | 3 | 1 | replicated_rule | 4.2 MB | 237.33 GB | 3 | Active
zone00001.rgw.meta | replicated | radosgw | on | 16 | 3 | 1 | replicated_rule | 24.0 KB | 237.33 GB | 3 | Active
zone00001.rgw.otp | replicated | radosgw | off | 32 | 3 | 2 | replicated_rule | 0 Bytes | 237.33 GB | 3 | Active
"zone00002.rgw.XXX" is identical as above with all the XXX replaced respectively.
Question: I was wondering if there is a multisite version for cephfs or is it not availabile yet? Multisite cephfs seem to be a feature in ceph as well but petasan as far as i can tell doesn't support it in the gui so is it all cmd line?
Thanks!!!
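For reference, a minimal sketch of the kind of active-active test described above, assuming boto3 and placeholder endpoints, credentials, port and bucket name (none of these values come from this thread): write an object through site 1 and poll site 2 until the asynchronous sync delivers it.

import time

import boto3
from botocore.exceptions import ClientError

# Placeholder credentials and endpoints; replication is asynchronous, so the
# object may take a little while to appear at the second site.
CREDS = dict(aws_access_key_id="ACCESS_KEY",
             aws_secret_access_key="SECRET_KEY")

site1 = boto3.client("s3", endpoint_url="http://10.0.3.10:8080", **CREDS)
site2 = boto3.client("s3", endpoint_url="http://10.1.3.10:8080", **CREDS)

# Create a bucket and write an object through site 1.
site1.create_bucket(Bucket="repl-test")
site1.put_object(Bucket="repl-test", Key="hello.txt", Body=b"written at site 1")

# Poll site 2 until the object has been replicated.
for _ in range(30):
    try:
        body = site2.get_object(Bucket="repl-test", Key="hello.txt")["Body"].read()
        print("replicated to site 2:", body)
        break
    except ClientError:
        time.sleep(2)
else:
    print("object not replicated within the polling window")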
Last edited on February 7, 2024, 8:50 am by the.only.chaos.lucifer · #14

the.only.chaos.lucifer
31 Posts
February 7, 2024, 8:59 am
I found this; not sure if it is a related example, since I'm not sure whether mirroring is the same thing as asynchronous replication of the data:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html/file_system_guide/ceph-file-system-mirrors
From what I gather online:
"Ceph File System (CephFS) mirroring allows for active-active replication, which means that you can write to both locations of the mirrored clusters. This feature enables simultaneous read and write access to the CephFS filesystems in multiple clusters, providing data availability and allowing for load balancing and improved performance."