
Storage options available in Ceph

Thanks, good to know PetaSAN supports both async and sync like the MinIO video shows, which is cool. That video clarifies a lot of things and your explanation has been extremely helpful. Now I need to find a good PetaSAN setup guide for a Ceph beginner (lol) so I can test both asynchronous and synchronous replication, since I am still not sure which one fits my case.

@Admin Turns out that async vs. sync doesn't make much of a difference for me, so I will go with the method outlined in the document your team posted. I think I am slowly getting closer to my solution, thanks! I do have a question about the setup, though.

I was going through the S3 document and was wondering whether the multisite setup has to be on the same subnet or whether it can be on different subnets. I notice the documentation uses the same subnet, but most regional sites connected through a VPN have different subnets.

Site 1 node has 3 interfaces:

  • Management: subnet 10.0.1.0, mask 255.255.255.0
  • Backend: subnet 10.0.2.0, mask 255.255.255.0
  • S3: subnet 10.0.3.0, mask 255.255.255.0

Site 2:

  • Management: subnet 10.1.1.0, mask 255.255.255.0
  • Backend: subnet 10.1.2.0, mask 255.255.255.0
  • S3 Public: subnet 10.1.3.0, mask 255.255.255.0

The only reason I am asking is the difference in the VPN setup. If it has to be on the same subnet I will set it up that way, but if it doesn't, that would be ideal. I did notice that if the cluster nodes are on different subnets the backend interfaces can't talk to each other and fail to connect, but I am not sure whether that restriction also applies to the multisite setup.

S3 multisite replication uses HTTP URLs; the sites do not need to be on the same subnet.
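As a quick sanity check, each site only needs to reach the other zone's RGW endpoint over plain HTTP across the VPN. Here is a minimal sketch, assuming the gateways listen on the S3 subnets shown above (the exact IPs and port are placeholders):

```python
# Reachability check between the two sites' RGW endpoints (stdlib only).
import urllib.request
import urllib.error

ENDPOINTS = [
    "http://10.0.3.10:80",  # assumed site 1 (zone00001) S3 endpoint
    "http://10.1.3.10:80",  # assumed site 2 (zone00002) S3 endpoint
]

for url in ENDPOINTS:
    try:
        # RGW normally answers an anonymous GET on "/" with an XML listing.
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"{url} reachable over HTTP (status {resp.status})")
    except urllib.error.HTTPError as exc:
        # Any HTTP status still proves the gateway is reachable through the VPN.
        print(f"{url} reachable over HTTP (status {exc.code})")
    except OSError as exc:
        print(f"{url} NOT reachable: {exc}")
```

Any HTTP response (even a 403) is enough to show the gateway is reachable; only IP routing between the S3 subnets is required, not a shared broadcast domain.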

@admin thanks for the help, I was able to get S3 multisite working. It seems S3 only syncs the "zone00001.rgw.XXX" and "zone00002.rgw.XXX" pools in active-active async mode, based on what you told me earlier. Will play around with it now.

Below is "zone00001.rgw.XXX":

| Name | Type | Usage | PG Autoscale | PGs | Size | Min Size | Rule Name | Used Space | Available Space | Active OSDs | Status |
|------|------|-------|--------------|-----|------|----------|-----------|------------|-----------------|-------------|--------|
| .rgw.root | replicated | radosgw | on | 64 | 3 | 2 | replicated_rule | 240.0 KB | 237.33 GB | 3 | Active |
| zone00001.rgw.buckets.data | replicated | radosgw | on | 64 | 3 | 2 | replicated_rule | 0 Bytes | 237.33 GB | 3 | Active |
| zone00001.rgw.buckets.index | replicated | radosgw | on | 16 | 3 | 2 | replicated_rule | 0 Bytes | 237.33 GB | 3 | Active |
| zone00001.rgw.control | replicated | radosgw | on | 16 | 3 | 1 | replicated_rule | 0 Bytes | 237.33 GB | 3 | Active |
| zone00001.rgw.log | replicated | radosgw | on | 16 | 3 | 1 | replicated_rule | 4.2 MB | 237.33 GB | 3 | Active |
| zone00001.rgw.meta | replicated | radosgw | on | 16 | 3 | 1 | replicated_rule | 24.0 KB | 237.33 GB | 3 | Active |
| zone00001.rgw.otp | replicated | radosgw | off | 32 | 3 | 2 | replicated_rule | 0 Bytes | 237.33 GB | 3 | Active |

The "zone00002.rgw.XXX" pools are identical to the above, with the corresponding zone00002 names.

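To exercise the active-active async sync between the two zones, a minimal sketch is to write an object through zone00001's gateway and poll for it through zone00002's. The endpoints, credentials and bucket name below are placeholders, boto3 is assumed to be installed, and the bucket is assumed to already exist (bucket metadata syncs between zones shortly after creation):

```python
import time
import boto3

def zone_client(endpoint):
    # Credentials are placeholders; use the S3 user created for the realm.
    # The region is largely ignored by RGW but keeps boto3 happy.
    return boto3.client(
        "s3",
        endpoint_url=endpoint,
        region_name="us-east-1",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

site1 = zone_client("http://10.0.3.10")  # assumed zone00001 endpoint
site2 = zone_client("http://10.1.3.10")  # assumed zone00002 endpoint

# Write through site 1...
site1.put_object(Bucket="test-bucket", Key="sync-check.txt",
                 Body=b"hello from site 1")

# ...and poll site 2. Replication is asynchronous, so allow some delay.
for attempt in range(30):
    try:
        obj = site2.get_object(Bucket="test-bucket", Key="sync-check.txt")
        print(f"Replicated after ~{attempt * 2}s: {obj['Body'].read()}")
        break
    except site2.exceptions.NoSuchKey:
        time.sleep(2)
else:
    print("Object not replicated yet.")
```

Because the data sync is asynchronous, the object may take a moment to appear on the second site; `radosgw-admin sync status` on either site reports the sync state.
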
Question: I was wondering if there is a multisite equivalent for CephFS, or is it not available yet? CephFS mirroring seems to be a feature in Ceph as well, but as far as I can tell PetaSAN doesn't support it in the GUI, so is it all command line?

Thanks!!!

I found this; not sure if it is a related example, since it covers mirroring and I am not sure whether that is the same as async replication of data:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html/file_system_guide/ceph-file-system-mirrors

From what I gather online:
"Ceph File System (CephFS) mirroring allows for active-active replication, which means that you can write to both locations of the mirrored clusters. This feature enables simultaneous read and write access to the CephFS filesystems in multiple clusters, providing data availability and allowing for load balancing and improved performance."
