
Reclaiming Free Space on PetaSAN with Hyper-V Cluster

Hi everyone,

Firstly, I'd like to reference a previous discussion on this forum regarding a similar issue with VMware: PetaSAN and Free Space Issue on VMware.

I find myself facing a related issue, but in a Hyper-V context. In our environment, we have a Hyper-V cluster connected to PetaSAN. When we free up or delete data from the iSCSI drives mapped on our Hyper-V servers, the total available space on PetaSAN continues to shrink. Specifically, these PetaSAN iSCSI drives are formatted using CSVFS.

While I understand from the previous discussion that PetaSAN operates at the block level and doesn't recognize file deletions in the way an OS-level file system does, I'm searching for a solution similar to Linux's "fstrim" that would help in reclaiming this "unused" space.

We did attempt to use "SDelete," but it worsened the situation as it writes a large temporary file to zero out the free space.

Has anyone faced this specific scenario with Hyper-V and PetaSAN? Any suggestions on best practices or tools to reclaim this space effectively?

Your insights would be highly appreciated. Thank you in advance!

You can enable trim/unmap support by editing the iSCSI disk settings. To make the setting take effect, you can stop/start the disk or, better, from the path assignment page, move the paths one at a time to force a path reconnection while the disk is running.

VMware space reclamation is confirmed working, as described in:

https://www.petasan.org/forums/?view=thread&id=389

Note that enabling trim in the UI will set the emulate_tpu flag, so you no longer have to set it manually.

Note that the default value is disabled; this is the default of the Linux LIO kernel target. In some cases trimming is not desirable: for example, with trim enabled, NTFS formatting can take much longer to complete because it constantly sends a large number of needless trim commands, which take time for Ceph to process.

Hello Admin,

Thank you for the timely feedback.

I've reviewed our settings, and trim seems to be enabled in our UI (I wanted to add a screenshot for better clarity, but it seems the photo gallery tool on the forum isn't working.)

We're currently on PetaSAN version 3.1. Considering we've been using PetaSAN since 2020 and have updated it regularly, configurations from older versions could possibly still be active.

For clarity, here's the content of our /opt/petasan/config/tuning/current/lio_tunings:


{
  "storage_objects": [
    {
      "attributes": {
        "block_size": 512,
        "emulate_3pc": 1,
        "emulate_caw": 1,
        "queue_depth": 256
      }
    }
  ],
  "targets": [
    {
      "tpgs": [
        {
          "attributes": {
            "default_cmdsn_depth": 512
          },
          "parameters": {
            "DefaultTime2Retain": "20",
            "DefaultTime2Wait": "2",
            "FirstBurstLength": "1048576",
            "ImmediateData": "Yes",
            "InitialR2T": "No",
            "MaxBurstLength": "1048576",
            "MaxOutstandingR2T": "16",
            "MaxRecvDataSegmentLength": "1048576",
            "MaxXmitDataSegmentLength": "1048576"
          }
        }
      ]
    }
  ]
}


From this configuration, I don't see the "emulate_tpu" flag set. Can you please confirm if this is causing the issue? Additionally, is there a way to verify whether the LIO module is actively considering this parameter? A specific command or logfile would be helpful.

Any insights or recommendations based on this configuration would be greatly appreciated.

Thank you for your continuous support.

The tuning file is for static params; the UI will add dynamic params like emulate_tpu on top of it, so you will not see the flag in the file itself.
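For illustration only, with trim enabled in the UI the effective attribute set applied to a disk's backstore would look something like the following (a sketch combining the static file with the dynamically added flag, not the literal contents of lio_tunings):

{
  "attributes": {
    "block_size": 512,
    "emulate_3pc": 1,
    "emulate_caw": 1,
    "emulate_tpu": 1,
    "queue_depth": 256
  }
}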

You can verify the setting on the image via targetcli-fb, for example:
targetcli-fb /backstores/rbd/image-00001 get attribute emulate_tpu

You also need to perform the trim from the Windows side (for example, Optimize-Volume -DriveLetter <X> -ReTrim in PowerShell):
https://www.bdrsuite.com/blog/reclaiming-space-using-trim-unmap/

As posted earlier, the space consumed was actually written at the block layer. When you delete a large file at the filesystem level, the filesystem only unlinks it in its inode metadata; the block layer is not aware of this. The filesystem will, however, reuse that space for future writes, so in most cases you should leave things as is.
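The distinction above can be illustrated with a toy Python model (a conceptual sketch only, not PetaSAN, LIO, or NTFS internals): the block layer only forgets about written blocks when it receives an explicit unmap; a plain file delete changes only the filesystem's own metadata.

```python
# Toy model: why deleting a file does not free block-layer space
# unless the filesystem also sends TRIM/unmap for the released blocks.

class BlockDevice:
    """Stands in for the SAN-side view of an iSCSI LUN."""
    def __init__(self):
        self.written = set()          # blocks the block layer considers "used"

    def write(self, block):
        self.written.add(block)

    def unmap(self, block):           # what a TRIM/unmap command does
        self.written.discard(block)

    def used(self):
        return len(self.written)


class Filesystem:
    """Stands in for the OS-level filesystem on top of the LUN."""
    def __init__(self, dev):
        self.dev = dev
        self.files = {}               # name -> list of blocks (inode metadata)

    def create(self, name, blocks):
        blocks = list(blocks)
        for b in blocks:
            self.dev.write(b)
        self.files[name] = blocks

    def delete(self, name, trim=False):
        blocks = self.files.pop(name)  # only the metadata is updated...
        if trim:                       # ...unless the fs also issues unmap
            for b in blocks:
                self.dev.unmap(b)


dev = BlockDevice()
fs = Filesystem(dev)

fs.create("big.vhdx", range(50))
fs.delete("big.vhdx")                 # no trim: block layer still sees 50 used
print(dev.used())                     # 50

fs.create("big2.vhdx", range(50))
fs.delete("big2.vhdx", trim=True)     # with trim: the space is reclaimed
print(dev.used())                     # 0
```

The filesystem will happily reuse the "stale" blocks for new files, which is why this usually only matters when you monitor free space on the SAN side.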