Sharding Prototype I: implementation as translating Store #876
Closed

Changes from 4 of 11 commits. Commit history:
- f8cabaa  initial sharding prototype (jstriebel)
- 8290f1e  add small script to test chunking (jstriebel)
- a44b2e5  Update util.py (jstriebel)
- 97a9368  implement feedback (jstriebel)
- 7e2768a  make shard_format configurable, add bitmask for uncompressed chunks (jstriebel)
- 8071268  add chunking_test.py output to itself (jstriebel)
- b4fd3e2  implement indexed sharded format (jstriebel)
- d0434a6  index: use little endian, note empty chunks with pair of max uint64 (jstriebel)
- 19be805  Merge branch 'master' into sharding (normanrz)
- 4859e31  fix linting & typing (jstriebel)
- 2d1fea0  Merge branch 'master' into sharding (jstriebel)
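Commit d0434a6 switches the per-shard chunk index to little-endian encoding and marks empty chunks with a pair of maximum uint64 values. The exact on-disk layout is not part of the diff shown below; the following is only a rough, hypothetical sketch of an index built that way (the helper names encode_index and decode_index are made up for illustration):

import numpy as np

MAX_UINT_64 = 2**64 - 1  # a pair of these marks an empty (missing) chunk

def encode_index(chunk_slices, chunks_per_shard):
    # chunk_slices: mapping of chunk number within the shard -> (offset, length)
    index = np.full((chunks_per_shard, 2), MAX_UINT_64, dtype="<u8")  # little-endian uint64
    for chunk_nr, (offset, length) in chunk_slices.items():
        index[chunk_nr] = (offset, length)
    return index.tobytes()

def decode_index(buffer, chunks_per_shard):
    # Entries equal to (MAX_UINT_64, MAX_UINT_64) denote empty chunks.
    return np.frombuffer(buffer, dtype="<u8").reshape(chunks_per_shard, 2)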
New file (chunking_test.py):

@@ -0,0 +1,24 @@
import json
import os

import zarr

store = zarr.DirectoryStore("data/chunking_test.zarr")
z = zarr.zeros((20, 3), chunks=(3, 3), shards=(2, 2), store=store, overwrite=True, compressor=None)
z[...] = 42
z[15, 1] = 389
z[19, 2] = 1
z[0, 1] = -4.2

print("ONDISK", sorted(os.listdir("data/chunking_test.zarr")))
assert json.loads(store[".zarray"].decode())["shards"] == [2, 2]

print("STORE", list(store))
print("CHUNKSTORE (SHARDED)", list(z.chunk_store))

z_reopened = zarr.open("data/chunking_test.zarr")
assert z_reopened.shards == (2, 2)
assert z_reopened[15, 1] == 389
assert z_reopened[19, 2] == 1
assert z_reopened[0, 1] == -4.2
assert z_reopened[0, 0] == 42
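As a side note (not part of the diff), the geometry of the array above works out as follows: chunks of (3, 3) on a (20, 3) array give a 7 x 1 chunk grid, and shards of (2, 2) chunks group those into a 4 x 1 shard grid, so after the full-array write the wrapped store should hold four shard objects plus metadata keys such as .zarray. A quick sketch of that arithmetic:

import math

shape, chunks, shards = (20, 3), (3, 3), (2, 2)
chunk_grid = tuple(math.ceil(s / c) for s, c in zip(shape, chunks))         # (7, 1)
shard_grid = tuple(math.ceil(n / sh) for n, sh in zip(chunk_grid, shards))  # (4, 1)
print(chunk_grid, shard_grid)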
New file (the ShardedStore implementation):

@@ -0,0 +1,109 @@
from functools import reduce
from itertools import product
from typing import Any, Iterable, Iterator, Optional, Sequence, Tuple

import numpy as np

from zarr._storage.store import BaseStore, Store
from zarr.storage import StoreLike, array_meta_key, attrs_key, group_meta_key


def _cum_prod(x: Sequence[int]) -> Iterable[int]:
    # Yields the cumulative products 1, x[0], x[0]*x[1], ... (C-order strides).
    prod = 1
    yield prod
    for i in x[:-1]:
        prod *= i
        yield prod


class ShardedStore(Store):
    """This class should not be used directly,
    but is added to an Array as a wrapper when needed automatically."""

    def __init__(
        self,
        store: StoreLike,
        shards: Tuple[int, ...],
        dimension_separator: str,
        are_chunks_compressed: bool,
        dtype: np.dtype,
        fill_value: Any,
        chunk_size: int,
    ) -> None:
        self._store: BaseStore = BaseStore._ensure_store(store)
        self._shards = shards
        # This defines C/F-order
        self._shard_strides = tuple(_cum_prod(shards))
        self._num_chunks_per_shard = reduce(lambda x, y: x * y, shards, 1)
        self._dimension_separator = dimension_separator
        # TODO: add jumptable for compressed data
        chunk_has_constant_size = not are_chunks_compressed and dtype != object
        assert chunk_has_constant_size, "Currently only uncompressed, fixed-length data can be used."
        self._chunk_has_constant_size = chunk_has_constant_size
        if chunk_has_constant_size:
            binary_fill_value = np.full(1, fill_value=fill_value or 0, dtype=dtype).tobytes()
            self._fill_chunk = binary_fill_value * chunk_size
        else:
            self._fill_chunk = None

        # TODO: add warnings for ineffective reads/writes:
        # * warn if partial reads are not available
        # * optionally warn on unaligned writes if no partial writes are available

    def __key_to_sharded__(self, key: str) -> Tuple[str, int]:
        # TODO: allow to be in a group (aka only use last parts for dimensions)
        subkeys = map(int, key.split(self._dimension_separator))

        shard_tuple, index_tuple = zip(
            *((subkey // shard_i, subkey % shard_i) for subkey, shard_i in zip(subkeys, self._shards))
        )
        shard_key = self._dimension_separator.join(map(str, shard_tuple))
        index = sum(i * j for i, j in zip(index_tuple, self._shard_strides))
        return shard_key, index

    def __get_chunk_slice__(self, shard_key: str, shard_index: int) -> slice:
        # TODO: here we would use the jumptable for compression, which uses shard_key
        start = shard_index * len(self._fill_chunk)
        return slice(start, start + len(self._fill_chunk))

    def __getitem__(self, key: str) -> bytes:
        shard_key, shard_index = self.__key_to_sharded__(key)
        chunk_slice = self.__get_chunk_slice__(shard_key, shard_index)
        # TODO use partial reads if available
        full_shard_value = self._store[shard_key]
        return full_shard_value[chunk_slice]

    def __setitem__(self, key: str, value: bytes) -> None:
        shard_key, shard_index = self.__key_to_sharded__(key)
        if shard_key in self._store:
            full_shard_value = bytearray(self._store[shard_key])
        else:
            # Start from a shard filled with the fill value for every chunk.
            full_shard_value = bytearray(self._fill_chunk * self._num_chunks_per_shard)
        chunk_slice = self.__get_chunk_slice__(shard_key, shard_index)
        # TODO use partial writes if available
        full_shard_value[chunk_slice] = value
        self._store[shard_key] = full_shard_value

    def __delitem__(self, key) -> None:
        # TODO not implemented yet
        # For uncompressed chunks, deleting the "last" chunk might need to be detected.
        raise NotImplementedError("Deletion is not yet implemented")

    def __iter__(self) -> Iterator[str]:
        for shard_key in self._store.__iter__():
            if any(shard_key.endswith(i) for i in (array_meta_key, group_meta_key, attrs_key)):
                # Special keys such as ".zarray" are passed on as-is
                yield shard_key
            else:
                # For each shard key in the wrapped store, all corresponding chunk keys are yielded.
                # TODO: For compressed chunks we might yield only the actually contained chunks by reading the jumptables.
                # TODO: allow to be in a group (aka only use last parts for dimensions)
                subkeys = tuple(map(int, shard_key.split(self._dimension_separator)))
                for offset in product(*(range(i) for i in self._shards)):
                    original_key = (
                        subkeys_i * shards_i + offset_i
                        for subkeys_i, offset_i, shards_i in zip(subkeys, offset, self._shards)
                    )
                    yield self._dimension_separator.join(map(str, original_key))

    def __len__(self) -> int:
        return sum(1 for _ in self.keys())

    # TODO: For efficient reads and writes, we need to implement
    # getitems, setitems & delitems
    # and combine writes/reads/deletions to the same shard.
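To make the key translation above concrete, here is a small standalone walk-through (not part of the diff) of what __key_to_sharded__ computes for shards=(2, 2) and a "." dimension separator:

shards = (2, 2)
strides = (1, 2)  # what _cum_prod(shards) yields for C-order
key = "5.1"       # chunk (5, 1) in the original chunk grid

subkeys = tuple(map(int, key.split(".")))
shard_tuple = tuple(s // sh for s, sh in zip(subkeys, shards))  # (2, 0)
index_tuple = tuple(s % sh for s, sh in zip(subkeys, shards))   # (1, 1)
shard_key = ".".join(map(str, shard_tuple))                     # "2.0"
index = sum(i * j for i, j in zip(index_tuple, strides))        # 1*1 + 1*2 = 3
print(shard_key, index)  # chunk "5.1" is stored at chunk position 3 within shard "2.0"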
Review comment:

Pondering the feedback on #877, I wonder if, rather than the relatively hard-coded shards=..., an implementation of some interface here might be able to give us the flexibility for different backends, i.e. basically passing in chunk_store directly. But that would raise the question of how to serialize the state into .zarray for other processes.
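For what it's worth, such an interface might look roughly like the sketch below. This is entirely hypothetical: the name ShardingBackend and its methods are not part of this PR, and the open question from the comment (how to serialize the backend's state into .zarray) is only hinted at by the to_metadata method.

from abc import ABC, abstractmethod


class ShardingBackend(ABC):
    """Hypothetical pluggable translation layer between chunk keys and the
    underlying store, instead of a hard-coded shards=... tuple."""

    @abstractmethod
    def get_chunk(self, key: str) -> bytes:
        ...

    @abstractmethod
    def set_chunk(self, key: str, value: bytes) -> None:
        ...

    @abstractmethod
    def to_metadata(self) -> dict:
        # State that would need to be written into .zarray so that other
        # processes can reconstruct the same backend.
        ...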