
Commit 6e336c7
force segyio header buffer to big endian
tasansal authored Sep 13, 2023
1 parent a0d388a commit 6e336c7
Showing 2 changed files with 3 additions and 3 deletions.
5 changes: 3 additions & 2 deletions src/mdio/segy/_workers.py
@@ -42,7 +42,6 @@ def header_scan_worker(
         byte_types: Tuple consisting of the data types for the index attributes.
         trace_range: Tuple consisting of the trace ranges to read
         index_names: Tuple of the names for the index attributes
-        segy_endian: Endianness of the input SEG-Y. Rev.2 allows little endian

     Returns:
         dictionary with headers: keys are the index names, values are numpy
@@ -79,7 +78,9 @@ def header_scan_worker(
     # First we create a struct to unpack the 240-byte trace headers.
     # The struct only knows about dimension keys, and their byte offsets.
     # Pads the rest of the data with voids.
-    endian = ByteOrder[segy_endian.upper()]
+    # NOTE: segyio buffer is always big endian. In the future if we use
+    # a different parser, we need to expose this as a parameter.
+    endian = ByteOrder.BIG

     # Handle byte offsets
     offsets = [0 if byte_loc is None else byte_loc - 1 for byte_loc in byte_locs]
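The hunk above hardcodes big endian because segyio hands back raw header buffers in big-endian byte order regardless of the file's declared endianness, which is why the `segy_endian` parameter is dropped. Below is a minimal sketch of the struct-unpacking idea the comments describe, using a plain numpy structured dtype instead of the project's `ByteOrder` helper; the byte locations (189/193) and `>i4` field types are illustrative assumptions, not necessarily MDIO's defaults.

```python
# A hedged sketch, not the project's code: build a 240-byte structured dtype
# that names only the wanted header keys, forces big-endian integers (to
# match segyio's buffer), and lets numpy pad the remaining bytes as void.
import numpy as np

index_names = ("inline", "crossline")
byte_locs = (189, 193)  # illustrative 1-based SEG-Y header byte positions

# SEG-Y byte positions are 1-based; numpy field offsets are 0-based.
offsets = [loc - 1 for loc in byte_locs]

header_dtype = np.dtype(
    {
        "names": list(index_names),
        "formats": [">i4"] * len(index_names),  # ">" = big endian
        "offsets": offsets,
        "itemsize": 240,  # full trace header; the gaps become void padding
    }
)

raw = bytes(240)  # stand-in for one raw trace header buffer from segyio
header = np.frombuffer(raw, dtype=header_dtype)
print(header["inline"], header["crossline"])  # -> [0] [0]
```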
1 change: 0 additions & 1 deletion src/mdio/segy/parsers.py
@@ -123,7 +123,6 @@ def parse_trace_headers(
             repeat(byte_locs),
             repeat(byte_types),
             repeat(index_names),
-            repeat(segy_endian),
             chunksize=2,  # Not array chunks. This is for `multiprocessing`
         )

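For context on the parsers.py side: the call broadcasts every constant argument to the worker with `itertools.repeat` while iterating over per-process trace ranges, and `chunksize` batches tasks per worker process. A minimal sketch of that fan-out pattern follows; the worker body and example values are placeholders, not MDIO's implementation.

```python
# A hedged sketch of the fan-out in parse_trace_headers: constant arguments
# are broadcast with itertools.repeat while the trace ranges vary per task.
from concurrent.futures import ProcessPoolExecutor
from itertools import repeat


def header_scan_worker(trace_range, byte_locs, byte_types, index_names):
    start, stop = trace_range
    # The real worker reads and unpacks headers; return a stub instead.
    return {name: list(range(start, stop)) for name in index_names}


if __name__ == "__main__":
    trace_ranges = [(0, 100), (100, 200), (200, 300)]
    with ProcessPoolExecutor() as executor:
        results = list(
            executor.map(
                header_scan_worker,
                trace_ranges,
                repeat((189, 193)),
                repeat(("int32", "int32")),
                repeat(("inline", "crossline")),
                chunksize=2,  # batches tasks per worker, not array chunks
            )
        )
    print(results[0]["inline"][:3])  # -> [0, 1, 2]
```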
