1 client failing to respond to cache pressure.

Vishnu
9 Posts

admin
2,973 Posts
July 4, 2023, 8:43 am
Try increasing the MDS memory limit, for example:
ceph config set mds mds_cache_memory_limit 4GB

rahulraj
18 Posts
July 5, 2023, 2:58 am
How do I find my current MDS cache limit?

admin
2,973 Posts
July 5, 2023, 8:36 am
ceph config get mds mds_cache_memory_limit
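For reference, `ceph config get` reports memory limits as a byte count, so a limit set with a `4GB` suffix (which Ceph typically interprets as a binary gigabyte, 2^30 bytes) should come back as 4294967296. A quick sanity check of that conversion:

```python
def gb_to_bytes(size_gb: int) -> int:
    """Convert binary gigabytes (1 GB = 2**30 bytes) to a byte count."""
    return size_gb * 1024 ** 3

# The byte value you would expect ceph config get to report for a 4GB limit
print(gb_to_bytes(4))  # 4294967296
```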

Vishnu
9 Posts
July 6, 2023, 3:53 am
Currently I have a new issue: "lease expired failed on NFSv4 server ip with error 121". What does this mean, and how can I overcome it?
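For context on the question above: on Linux, numeric error 121 is the errno EREMOTEIO ("Remote I/O error"), which is how the kernel NFS client reports I/O failures from the remote server. The mapping can be confirmed with Python's standard errno module:

```python
import errno
import os

# Map the numeric error from the NFS message to its symbolic errno name.
code = 121
print(errno.errorcode[code], "-", os.strerror(code))  # EREMOTEIO - Remote I/O error
```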

wid
51 Posts
July 28, 2023, 10:17 am
Hi,
I have a very weird situation. It began just after updating the cluster to 3.0.0 and is still present in 3.2.0.
My HEALTH_WARN "1 clients failing to respond to cache pressure" is present almost all the time.
I tried raising the cache limit from 4 GB to 6 GB and then 8 GB, but it does not help.
The result below says the cache is only using 0.15 GB and 0.08 GB:
ceph tell mds.* cache status
2023-07-28T12:15:19.103+0200 7fd4c17fa700 0 client.91857905 ms_handle_reset on v2:172.30.0.42:6800/623421266
2023-07-28T12:15:19.135+0200 7fd4c17fa700 0 client.91823569 ms_handle_reset on v2:172.30.0.42:6800/623421266
Error ENOSYS:
2023-07-28T12:15:19.139+0200 7fd4c17fa700 0 client.91823575 ms_handle_reset on v2:172.30.0.41:6800/1186658423
2023-07-28T12:15:19.167+0200 7fd4c17fa700 0 client.91823581 ms_handle_reset on v2:172.30.0.41:6800/1186658423
mds.ceph01: {
    "pool": {
        "items": 5962916,
        "bytes": 151031037
    }
}
2023-07-28T12:15:19.175+0200 7fd4c17fa700 0 client.91823587 ms_handle_reset on v2:172.30.0.43:6896/2544779998
2023-07-28T12:15:19.203+0200 7fd4c17fa700 0 client.91823593 ms_handle_reset on v2:172.30.0.43:6896/2544779998
mds.ceph03: {
    "pool": {
        "items": 2904338,
        "bytes": 89250706
    }
}
What else can resolve this issue?
Last edited on July 28, 2023, 10:34 am by wid · #6
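One way to dig further into a persistent cache-pressure warning (a sketch; `mds.ceph01` is an example daemon name taken from the output above, adjust to your cluster) is to identify which client session the warning refers to and inspect how many capabilities it holds:

```shell
# Show the full health message, including the client id behind the warning
ceph health detail

# List client sessions on a specific MDS; look for a session whose
# "num_caps" is large and never shrinks when the MDS asks for recall
ceph tell mds.ceph01 session ls
```

A client that holds many caps but ignores recall requests (often an idle or stale kernel mount) is a common cause; remounting or evicting that client session may clear the warning.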

wid
51 Posts
April 4, 2025, 4:16 pm
This problem went away after upgrading to 4.x (for me).