
errors in log: nl-cache: inode is not of type dir [Invalid argument] #2521

Open
jirireischig opened this issue Jun 11, 2021 · 2 comments · May be fixed by #4372

Comments

@jirireischig

Description of problem:
Sometimes there are errors in the "mount-dir".log file:

[2021-06-11 08:37:30.807011 +0000] E [nl-cache-helper.c:406:nlc_set_dir_state] (-->/usr/lib64/glusterfs/9.2/xlator/performance/nl-cache.so(+0x539f) [0x7efbf56ce39f] -->/usr/lib64/glusterfs/9.2/xlator/performance/nl-cache.so(+0x3fc9) [0x7efbf56ccfc9] -->/usr/lib64/glusterfs/9.2/xlator/performance/nl-cache.so(+0x9e34) [0x7efbf56d2e34] ) 0-www-cloud-nl-cache: inode is not of type dir [Invalid argument]
[2021-06-11 08:37:31.068687 +0000] E [nl-cache-helper.c:406:nlc_set_dir_state] (-->/usr/lib64/glusterfs/9.2/xlator/performance/nl-cache.so(+0x539f) [0x7efbf56ce39f] -->/usr/lib64/glusterfs/9.2/xlator/performance/nl-cache.so(+0x3fc9) [0x7efbf56ccfc9] -->/usr/lib64/glusterfs/9.2/xlator/performance/nl-cache.so(+0x9e34) [0x7efbf56d2e34] ) 0-www-cloud-nl-cache: inode is not of type dir [Invalid argument]
[2021-06-11 08:37:31.238255 +0000] E [nl-cache-helper.c:406:nlc_set_dir_state] (-->/usr/lib64/glusterfs/9.2/xlator/performance/nl-cache.so(+0x539f) [0x7efbf56ce39f] -->/usr/lib64/glusterfs/9.2/xlator/performance/nl-cache.so(+0x3fc9) [0x7efbf56ccfc9] -->/usr/lib64/glusterfs/9.2/xlator/performance/nl-cache.so(+0x9e34) [0x7efbf56d2e34] ) 0-www-cloud-nl-cache: inode is not of type dir [Invalid argument]
[2021-06-11 08:45:50.980950 +0000] E [nl-cache-helper.c:406:nlc_set_dir_state] (-->/usr/lib64/glusterfs/9.2/xlator/performance/nl-cache.so(+0x539f) [0x7efbf56ce39f] -->/usr/lib64/glusterfs/9.2/xlator/performance/nl-cache.so(+0x3fc9) [0x7efbf56ccfc9] -->/usr/lib64/glusterfs/9.2/xlator/performance/nl-cache.so(+0x9e34) [0x7efbf56d2e34] ) 0-www-cloud-nl-cache: inode is not of type dir [Invalid argument]

The exact command to reproduce the issue:

I don't know.

The full output of the command that failed:

Expected results:

Mandatory info:
- The output of the gluster volume info command:

Type: Replicate
Volume ID: b529713a-070d-425a-b27d-047a1435acb7
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 1-internal:/data/brick-cloud
Brick2: 2-internal:/data/brick-cloud
Brick3: 3-internal:/data/brick-cloud (arbiter)
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.stat-prefetch: on
performance.cache-invalidation: on
performance.md-cache-timeout: 600
network.inode-lru-limit: 1048576
performance.cache-size: 1GB
performance.write-behind-window-size: 20MB
client.event-threads: 4
server.event-threads: 4
performance.parallel-readdir: on
performance.readdir-ahead: on
performance.nl-cache: on
performance.nl-cache-positive-entry: on
performance.qr-cache-timeout: 600
performance.cache-max-file-size: 50MB
diagnostics.brick-log-level: WARNING
diagnostics.client-log-level: WARNING

- The output of the gluster volume status command:

Status of volume: www-cloud
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 1-internal:/data/brick-cloud          49153     0          Y       4071
Brick 2-internal:/data/brick-cloud          49153     0          Y       4191
Brick 3-internal:/data/brick-cloud          N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       4292
Self-heal Daemon on 2-internal              N/A       N/A        Y       1069982
Self-heal Daemon on 3-internal              N/A       N/A        Y       4115

Task Status of Volume www-cloud
------------------------------------------------------------------------------
There are no active volume tasks

- The output of the gluster volume heal command:

# gluster volume heal www-cloud info
Brick 1-internal:/data/brick-cloud
Status: Connected
Number of entries: 0

Brick 2-internal:/data/brick-cloud
<gfid:238359ee-3502-4446-b13b-d3628e29103b> 
<gfid:10190a62-7155-4881-8b4b-7146bec8e5b5> 
<gfid:b4dd910e-7f29-44a3-af9c-611be8afdda9> 
<gfid:d6b15da1-0698-4f5d-b5b6-3a25152a4b9d> 
<gfid:cd7f53ac-bb87-4560-8750-c1b42c681164> 
<gfid:e154e545-08a5-450a-9398-808e4fa81720> 
<gfid:2434d486-69da-46fe-949d-3a8357239738> 
<gfid:f31e7d6d-1fd9-4a47-903d-f7996169f9ae> 
<gfid:68780cd4-e3fa-4bb7-bf2e-3b23e4cb08ec> 
Status: Connected
Number of entries: 9

Brick 3-internal:/data/brick-cloud
file1
file2
file3
file4
file5
file6
file7
file8
file9
Status: Connected
Number of entries: 9

- Provide logs present on the following locations of client and server nodes:
/var/log/glusterfs/

- Is there any crash? Provide the backtrace and coredump

Additional info:

- The operating system / glusterfs version:

Note: Please hide any confidential data which you don't want to share in public, such as IP addresses, file names, hostnames, or any other configuration.

@stale

stale bot commented Jan 9, 2022

Thank you for your contributions.
We noticed that this issue has not had any activity in the last ~6 months, so we are marking it as stale.
It will be closed in 2 weeks if no one responds with a comment here.

@stale stale bot added the wontfix (Managed by stale[bot]) label Jan 9, 2022
@stale

stale bot commented Jan 27, 2022

Closing this issue as there has been no update since my last update on the issue. If this issue is still valid, feel free to reopen it.

@stale stale bot closed this as completed Jan 27, 2022
@mohit84 mohit84 reopened this Jun 1, 2024
@stale stale bot removed the wontfix (Managed by stale[bot]) label Jun 1, 2024
mohit84 added a commit to mohit84/glusterfs that referenced this issue Jun 1, 2024
During mkdir_cbk processing, nl-cache validates the inode type based on loc->inode, even though loc->inode is populated by the first xlator (fuse/gfapi). That loc->inode does not have its inode type set yet, so the check always throws an error.

Solution: Check the inode type based on the inode attributes returned by the server xlator as an argument to the callback function.

Fixes: gluster#2521
Change-Id: Ibc4675ebe095d14310cdb2348c2f55a73f972046
Signed-off-by: Mohit Agrawal <[email protected]>
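The linked PR contains the actual patch; the snippet below is only a minimal sketch of the idea. The callback shape follows the standard GlusterFS fop_mkdir_cbk signature, and IA_ISDIR()/struct iatt are real GlusterFS definitions, but the nlc_set_dir_state() argument list and the NLC_PE_FULL state flag are assumptions for illustration, not the exact code from the PR:

```c
/* Sketch only: decide "is this a directory?" from the iatt returned by the
 * server xlator, not from loc->inode (which fuse/gfapi hands in with an
 * unset ia_type, causing the "inode is not of type dir" error). */
int32_t
nlc_mkdir_cbk(call_frame_t *frame, void *cookie, xlator_t *this,
              int32_t op_ret, int32_t op_errno, inode_t *inode,
              struct iatt *buf, struct iatt *preparent,
              struct iatt *postparent, dict_t *xdata)
{
    if (op_ret == 0) {
        /* Before the fix: IA_ISDIR(loc->inode->ia_type) was effectively
         * checked, and ia_type was still IA_INVAL on the fresh inode.
         * After the fix: trust the attributes the server xlator returned. */
        if (IA_ISDIR(buf->ia_type))
            nlc_set_dir_state(this, inode, NLC_PE_FULL); /* args/flag assumed */
    }

    STACK_UNWIND_STRICT(mkdir, frame, op_ret, op_errno, inode, buf,
                        preparent, postparent, xdata);
    return 0;
}
```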
@mohit84 mohit84 linked a pull request Jun 1, 2024 that will close this issue