generic/260: fix unary operator error when running tests #4

Open
wants to merge 53 commits into base: local

Conversation

cz0807bin

When I ran test/generic/260 with the check script on an ext4 filesystem, the errors shown below appeared in the terminal. After I added brackets and ran the test again, the errors disappeared; a minimal sketch of this failure mode follows the output.

QA output created by 260
[+] Start beyond the end of fs (should fail)
fstrim: SCRATCH_MNT: FITRIM ioctl failed: Invalid argument
[+] Start beyond the end of fs with len set (should fail)
fstrim: SCRATCH_MNT: FITRIM ioctl failed: Invalid argument
[+] Start = 2^64-1 (should fail)
fstrim: SCRATCH_MNT: FITRIM ioctl failed: Invalid argument
[+] Start = 2^64-1 and len is set (should fail)
fstrim: SCRATCH_MNT: FITRIM ioctl failed: Invalid argument
[+] Default length (should succeed)
[+] Default length with start set (should succeed)
[+] Length beyond the end of fs (should succeed)
[+] Length beyond the end of fs with start set (should succeed)
/var/lib/xfstests/tests/generic/260: line 89: [: -gt: unary operator expected
/var/lib/xfstests/tests/generic/260: line 102: [: -le: unary operator expected
/var/lib/xfstests/tests/generic/260: line 174: [: -le: unary operator expected
Test done
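
The "[: -gt: unary operator expected" class of error usually comes from an
unquoted variable expanding to nothing inside a single-bracket test.  The
sketch below only illustrates that failure mode and two common ways to
bracket or quote around it; the variable name is made up and this is not
the actual generic/260 code.

# Illustration only; 'bytes' is a hypothetical variable that ended up empty.
bytes=""

# Unquoted, this expands to [ -gt 0 ], so the test builtin sees -gt with no
# left-hand operand and prints "[: -gt: unary operator expected".
[ $bytes -gt 0 ] && echo "trimmed something"

# Fix 1: bash's double-bracket test treats the empty expansion as 0 in an
# arithmetic comparison, so it simply evaluates to false.
[[ $bytes -gt 0 ]] && echo "trimmed something"

# Fix 2: quote the expansion and give it a default value.
[ "${bytes:-0}" -gt 0 ] && echo "trimmed something"
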

Darrick J. Wong and others added 30 commits March 12, 2024 11:39
There's a bunch of tests that fail the formatting step when the test run
is configured to use XFS with a 64k blocksize.  This happens because XFS
doesn't really support that combination due to minimum log size
constraints. Fix the test to format larger devices in that case.

Signed-off-by: "Darrick J. Wong" <[email protected]>
Co-developed-by: Pankaj Raghav <[email protected]>
Signed-off-by: Pankaj Raghav <[email protected]>
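
A hedged sketch of the kind of change described above: give the scratch
device a bigger size when the configuration asks for XFS with a 64k block
size.  The detection via MKFS_OPTIONS, the grep pattern, and the 2 GiB
figure are illustrative assumptions, not the actual patch.

# Sketch only: detection method and sizes are assumptions.
fssize=$((512 * 1024 * 1024))                # the small size used before

# XFS with a 64k block size needs a larger device to satisfy its minimum
# log size.
if [ "$FSTYP" = "xfs" ] && echo "$MKFS_OPTIONS" | grep -q "size=65536"; then
	fssize=$((2 * 1024 * 1024 * 1024))   # bump to 2 GiB
fi

_scratch_mkfs_sized $fssize >> $seqres.full 2>&1
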
This patch adds fcntl corner cases that were being used to confirm issues
on a GFS2 filesystem.  The GFS2 filesystem has its own ->lock()
implementation, and in those corner cases issues were being found and
fixed.

Signed-off-by: Alexander Aring <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
Signed-off-by: Zorro Lang <[email protected]>
There are a few patches in flight to make this test pass sanely, and I
imagine different discussions will go on about the actual fix.  Disable
this for now since it has never passed for us.

Signed-off-by: Josef Bacik <[email protected]>
The fixes for this haven't been merged yet, so don't run it.

Signed-off-by: Josef Bacik <[email protected]>
There are some btrfs tests that do _scratch_pool_mkfs in a loop.
Sometimes this fails with EBUSY.  Tracing revealed that udevd will
sometimes write to /sys/block/device/uevent to make sure an event
triggers when rules get written.  However these events will not get sent
to user space until after an O_EXCL open has been closed.  The general
flow is something like

mkfs.btrfs /dev/sda /dev/sdb /dev/sdc /dev/sdd
mount /dev/sda /mnt/test
<things>
umount /mnt/test

in a loop.  The problem is udevd will add uevents for the devices and
they won't get delivered until after the umount.  If we're doing the
above sequence in a loop the next mkfs.btrfs will fail because udev is
touching the devices to consume the KOBJ_CHANGE event.

Fix this by doing a udev settle before _scratch_pool_mkfs.

Signed-off-by: Josef Bacik <[email protected]>
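
A minimal sketch of the described fix, assuming the settle is done with
udevadm settle; the loop bound, error handling, and surrounding code are
illustrative, not the actual test code.

# Sketch only; the real tests structure their mkfs loops differently.
for i in $(seq 1 10); do
	# Let udevd finish consuming queued KOBJ_CHANGE events so it is not
	# still touching the devices when mkfs opens them with O_EXCL.
	udevadm settle

	_scratch_pool_mkfs >> $seqres.full 2>&1 || _fail "mkfs failed"
	_scratch_mount
	# ... exercise the filesystem ...
	_scratch_unmount
done
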
While wiring up fstests into GitHub Actions I noticed that it would hang
on one of our tests that uses _ddt.  A long investigation uncovered that
od was still reading /dev/urandom and spewing to stdout despite the fact
that it had been disconnected.  This is a bug in od, however it's going
to be a while until the fix makes it everywhere, so for now change the
_ddt helper to take an argument for od to limit the amount it reads.

Signed-off-by: Josef Bacik <[email protected]>
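
For context, od(1) accepts -N/--read-bytes to stop after a fixed amount of
input, which is presumably the knob the new _ddt argument maps to.  The
sketch below is illustrative; the real _ddt plumbing, the target path, and
the 64 MiB limit are assumptions.

# Sketch only: cap how much od reads from /dev/urandom so it cannot keep
# spewing after its consumer goes away.
od -N $((64 * 1024 * 1024)) /dev/urandom | dd of="$SCRATCH_MNT/ddt.file" bs=1M
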
A long time ago we changed the short options to long options in
btrfs-corrupt-block, so adjust the helper to use the correct options so
the verity tests pass properly.

Signed-off-by: Josef Bacik <[email protected]>
Sometimes it's useful to see how long the test runs.  The time is
calculated the same way as in the normal case but the time is not stored
in the $tmp.time file.

Signed-off-by: David Sterba <[email protected]>
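
A hedged sketch of wall-clock timing around a test run; the use of
date +%s and the output format are illustrative, not the actual check
script change.

# Sketch only.
start=$(date +%s)
# ... run the test ...
stop=$(date +%s)
echo "generic/260 took $((stop - start))s"   # printed, not saved to $tmp.time
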
There's a bug with aio where the delayed fput for a task that is killed
gets async'ed off, so there's a chance we could get EBUSY on the
unmount.  Until this is fixed upstream just loop on the unmount, as this
test catches important failures.

Signed-off-by: Josef Bacik <[email protected]>
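
A minimal sketch of looping on the unmount, assuming a bounded retry with
a short sleep; the iteration count and delay are arbitrary.

# Sketch only.
for i in $(seq 1 10); do
	_scratch_unmount && break
	# The async delayed fput may still hold a reference; give it a moment.
	sleep 1
done
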
A new disk format option will make the no-holes option a requirement, so
add a helper to make sure that we aren't creating a fs with
BLOCK_GROUP_TREE by default, and skip the tests that require turning off
no-holes.

Signed-off-by: Josef Bacik <[email protected]>
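
A hypothetical sketch of such a helper; the name, the dump-super based
detection, and the _notrun message are assumptions, not the actual helper.

# Hypothetical helper; the real name and detection may differ.
_require_btrfs_no_block_group_tree()
{
	_scratch_mkfs > /dev/null 2>&1
	if $BTRFS_UTIL_PROG inspect-internal dump-super $SCRATCH_DEV | \
			grep -q BLOCK_GROUP_TREE; then
		_notrun "block-group-tree is on by default, cannot disable no-holes"
	fi
}
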
For now exclude 290, as it doesn't appear to pass anywhere.

Signed-off-by: Josef Bacik <[email protected]>
btrfs/287 requires specific layouts, so it needs to be reworked.  Not sure
what's wrong with btrfs/291.

Signed-off-by: Josef Bacik <[email protected]>
These are very rarely flaky; we need to look into why, as they're just
counting extents, but for now exclude them.

Signed-off-by: Josef Bacik <[email protected]>
This is particularly flaky, so disable it until we can figure out why.

Signed-off-by: Josef Bacik <[email protected]>
generic/619 takes too long; it's likely a problem with ENOSPC flushing
that makes it slow down too much on ARM, and that should be investigated,
but for now disable it for the CI.

Signed-off-by: Josef Bacik <[email protected]>
This fails occasionally with an ENOSPC abort; we should investigate this
one.

Signed-off-by: Josef Bacik <[email protected]>
The test works with assumptions that do not apply to btrfs and it always
fails. A specific test is needed to take the physical/logical addressing
into account.

Signed-off-by: David Sterba <[email protected]>
Apparently it's one test per line.

Signed-off-by: Josef Bacik <[email protected]>
Add new tests and comments.

Signed-off-by: Josef Bacik <[email protected]>
Sometimes this fails because the file is too fragmented, usually with
compression or holes turned on, so it likely just needs to be adjusted to
account for those differences.

Signed-off-by: Josef Bacik <[email protected]>
Running this in a loop we can see failures where the qgroup numbers
don't match what fsck thinks they should be.  This doesn't appear to
happen on x86, so it's subpage blocksize related, or perhaps simply
timing related.

Signed-off-by: Josef Bacik <[email protected]>
btrfs/220 needs to be updated, but the rest seem like legitimate
failures that need to be investigated.

Signed-off-by: Josef Bacik <[email protected]>
This appears to be flaky, so exclude it for now.

Signed-off-by: Josef Bacik <[email protected]>
Btrfs has had the ability for almost a decade to allow ro and rw
mounting of subvols.  Specifically, this behavior:

mount -o subvol=foo,ro /some/dir
mount -o subvol=bar,rw /some/other/dir

This seems simple, but because of the limitations of how we did mounting
in ye olde days, we would mark both the super block and the mount as RO
if we mounted RO first.  In the case above, mounting /some/dir would
instantiate both the super block and the mount point as read only.  So
the second mount command would, under the covers, convert the super
block to RW and then allow the mount to continue.

The results were still consistent, /some/dir was still read only because
the mount was marked read only, but /some/other/dir could be written to.

This is a test to make sure we maintain this behavior, as I almost
regressed this behavior while converting us to the new mount API.

Signed-off-by: Josef Bacik <[email protected]>
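
An illustrative reproduction of the behavior being preserved; the device,
subvolume names, and mount points are made up, and this is not the new
test itself.

# Sketch only.
mkfs.btrfs -f /dev/sdb
mkdir -p /mnt/base /some/dir /some/other/dir
mount /dev/sdb /mnt/base
btrfs subvolume create /mnt/base/foo
btrfs subvolume create /mnt/base/bar
umount /mnt/base

mount -o subvol=foo,ro /dev/sdb /some/dir
mount -o subvol=bar,rw /dev/sdb /some/other/dir

touch /some/dir/file          # must fail: this mount is read-only
touch /some/other/dir/file    # must succeed: this mount is read-write
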
Our CI has been failing on this test for compression since 0fc226e
("fstests: generic/352 should accomodate other pwrite behaviors").  This
is because we changed the size of the initial write down to 4k, and we
write a repeatable pattern.  With compression on btrfs this results in
an inline extent, and when you reflink an inline extent this just turns
it into full on copies instead of a reflink.

As this isn't a bug with compression, just a test that isn't well aligned
with how compression interacts with the allocation of space, simply
exclude this test from running when compression is enabled.

Signed-off-by: Josef Bacik <[email protected]>
This test creates a small file and then a giant file and then tries to
create a bunch of small files in a loop to exercise ENOSPC.  The problem
is that with compression the giant file isn't actually giant, so it can
make this test take forever.  Simply disable it for compression.

Signed-off-by: Josef Bacik <[email protected]>
This is meant to test ENOSPC, but we're dd'ing /dev/zero, which won't
fill up anything with compression on.

Additionally we're killing dd and then immediately trying to unmount.
With compression we could have references to the inode being held by the
async compression workers, so sometimes this will fail with EBUSY on the
unmount.

Make it easier on us and just skip this if we have compression enabled.

Signed-off-by: Josef Bacik <[email protected]>
Exclude some tests so we have a clean run of the CI.  All need to be
fixed, or analyzed to determine whether they are real problems or not.

Signed-off-by: David Sterba <[email protected]>
The path /tmp.repair would be on the system root, which might not be
writable; the temporary files are available at $tmp.

Signed-off-by: David Sterba <[email protected]>
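
A one-line sketch of the change's shape; the actual command whose output
is being redirected differs per test, and btrfs check --repair here is
only an example.

# Sketch only: write repair output under the per-test $tmp prefix instead
# of /tmp.repair on the (possibly read-only) system root.
$BTRFS_UTIL_PROG check --repair $SCRATCH_DEV > $tmp.repair 2>&1
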