
Releases: Yura52/delu

v0.0.25

10 Aug 20:55

Performance

  • Significantly improved the efficiency of delu.nn.NLinear when the batch size is greater than 1; the larger the input dimensions, the larger the speedup. Because the computation algorithm was updated, the results can differ slightly from previous versions (the underlying math is exactly the same).

v0.0.23

07 Nov 22:51

This is a minor release.

  • Various improvements in the documentation.
  • delu.nn.named_sequential is deprecated.
  • delu.utils.data.Enumerate is deprecated.
  • delu.utils.data.IndexDataset is deprecated.

v0.0.22

30 Oct 19:32

This release improves the documentation website (both style and content).

v0.0.21

10 Oct 16:04

This is a relatively large release covering the changes since v0.0.18.

Breaking changes

  • delu.iter_batches: now, shuffle is a keyword-only argument
  • delu.nn.Lambda
    • now, this module accepts only functions from the torch module or methods of torch.Tensor
    • now, the passed callable is not accessible as a public attribute
  • delu.random.seed: the algorithm computing the library- and device-specific seeds has changed, so the results may differ from previous versions
  • In the following functions, the first arguments are now positional-only:
    • delu.to
    • delu.cat
    • delu.iter_batches
    • delu.Timer.format
    • delu.data.Enumerate
    • delu.nn.Lambda
    • delu.random.seed
    • delu.random.set_state
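
As a reminder of what these calling conventions mean, here is a minimal sketch (the function name and body are illustrative, not delu's code): `data` before `/` is positional-only, and `shuffle` after `*` is keyword-only, mirroring the new delu.iter_batches signature.

```python
# Hypothetical sketch of the new calling conventions, not delu's implementation.
# `data` is positional-only (the `/` marker), `shuffle` is keyword-only (`*`).
def iter_batches_sketch(data, /, batch_size, *, shuffle=False):
    return {"n_items": len(data), "batch_size": batch_size, "shuffle": shuffle}

# OK: data passed positionally, shuffle passed by keyword.
ok = iter_batches_sketch([1, 2, 3, 4], 2, shuffle=True)

# TypeError: shuffle can no longer be passed positionally.
try:
    iter_batches_sketch([1, 2, 3, 4], 2, True)
    shuffle_positional_allowed = True
except TypeError:
    shuffle_positional_allowed = False
```

Calls that previously passed these arguments by the other convention will raise TypeError after upgrading.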

New features

  • Added delu.tools -- a new home for EarlyStopping, Timer and other general tools.

  • Added delu.nn.NLinear -- a module representing N linear layers applied to N different inputs:
    (*B, *N, D1) -> (*B, *N, D2), where *B are the batch dimensions and *N are the dimensions indexing the N layers.
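
The shape contract can be illustrated with a small helper (a hypothetical sketch that only validates and derives shapes; the real module performs the actual linear transformations):

```python
# Hypothetical helper illustrating the NLinear shape contract: an input of
# shape (*B, *N, D1) maps to (*B, *N, D2), with one independent linear layer
# per position in *N. This sketch only checks and derives the shapes.
def nlinear_output_shape(input_shape, n_dims, d_in, d_out):
    tail = (*n_dims, d_in)
    assert input_shape[-len(tail):] == tail, "trailing dims must be (*N, D1)"
    batch_dims = input_shape[:-len(tail)]
    return (*batch_dims, *n_dims, d_out)

# E.g. a batch of 32 items, 8 positions each with their own layer, 16 -> 64:
shape = nlinear_output_shape((32, 8, 16), n_dims=(8,), d_in=16, d_out=64)
```

Note that *N itself may span several dimensions, e.g. (4, 2, 3, 5) with n_dims=(2, 3) maps to (4, 2, 3, d_out).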

  • Added delu.nn.named_sequential -- a shortcut for creating torch.nn.Sequential with named modules without OrderedDict:

    sequential = delu.nn.named_sequential(
        ('linear1', nn.Linear(10, 20)),
        ('activation', nn.ReLU()),
        ('linear2', nn.Linear(20, 1))
    )
    
  • delu.nn.Lambda: now, the constructor accepts keyword arguments for the callable:

    m = delu.nn.Lambda(torch.squeeze, dim=1)
    
  • delu.random.seed

    • the algorithm computing random seeds for all libraries was improved
    • now, None is allowed as base_seed; in this case, an unpredictable seed generated by the OS is used and returned:
    truly_random_seed = delu.random.seed(None)
    
  • delu.random.set_state: now, omitting the 'torch.cuda' key is allowed to avoid setting the states of CUDA RNGs

Deprecations & Renamings

  • delu.data was renamed to delu.utils.data. The old name is now a deprecated alias.
  • delu.Timer and delu.EarlyStopping were moved to the new delu.tools submodule. The old names are now deprecated aliases.

Dependencies

  • Now, torch >=1.8,<3

Documentation

  • Updated logo
  • Simplified structure
  • Removed the only (and not particularly representative) end-to-end example

Project

  • Migrate from sphinx doctest to xdoctest

v0.0.18

02 Sep 12:49

Add support for PyTorch 2

v0.0.17

25 May 21:32

Breaking changes

  • delu.cat: now, the input must be a list (before, any iterable was allowed)
  • delu.Timer: now, print(timer), str(timer), f'{timer}' etc. return the full-precision representation (without rounding to seconds)
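
The new printing behavior can be sketched as follows (a hypothetical illustration, not delu's implementation):

```python
import time

# Hypothetical sketch: str(timer) now returns the full-precision elapsed
# time instead of a value rounded to whole seconds.
class TimerSketch:
    def __init__(self):
        self._start = time.perf_counter()

    def elapsed(self):
        return time.perf_counter() - self._start

    def __str__(self):
        # Full precision: no rounding to seconds.
        return str(self.elapsed())

timer = TimerSketch()
message = f'Elapsed: {timer}'  # full-precision representation
```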

New features

  • delu.EarlyStopping: a simpler replacement for delu.ProgressTracker that, unlike its predecessor, does not track the best score. The usage is very similar; see the documentation.
  • delu.cat: now, supports nested collections (e.g. the input can be a list of tuple[Tensor, dict[str, tuple[Tensor, Tensor]]])
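
The idea of concatenating same-structure nested collections can be sketched in pure Python, with plain lists standing in for tensors (delu.cat itself operates on torch.Tensor leaves; this is an illustrative toy, not delu's code):

```python
# Hypothetical sketch of concatenating a list of nested collections that all
# share the same structure. Plain lists stand in for tensors: the leaf case
# "concatenates" by flattening, the way torch.cat joins tensors along dim 0.
def cat_sketch(parts):
    first = parts[0]
    if isinstance(first, dict):
        return {k: cat_sketch([p[k] for p in parts]) for k in first}
    if isinstance(first, tuple):
        return tuple(cat_sketch([p[i] for p in parts]) for i in range(len(first)))
    # Leaf: concatenate the list-valued "tensors".
    return [x for p in parts for x in p]

# Two "batches" with the structure tuple[list, dict[str, list]]:
batches = [([1, 2], {"y": [10]}), ([3], {"y": [20, 30]})]
merged = cat_sketch(batches)
```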

Deprecations

  • delu.ProgressTracker: instead, use delu.EarlyStopping
  • delu.data.FnDataset: no alternatives are provided

v0.0.15

11 Mar 21:38

Breaking changes

  • delu.iter_batches is now powered by torch.arange/randperm; the interface was changed accordingly
  • delu.Timer: the methods add and sub are removed

New features

  • delu.to: like torch.Tensor.to, but for (nested) collections of tensors
  • delu.cat: like torch.cat, but for collections of tensors
  • delu.iter_batches is now faster and has a better interface
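
The idea behind applying torch.Tensor.to across a nested collection can be sketched in pure Python, with an arbitrary function standing in for the .to call (an illustrative toy under that assumption, not delu's implementation):

```python
# Hypothetical sketch of the idea behind delu.to: apply a transformation to
# every leaf of a nested collection while preserving the structure. Here a
# plain function stands in for torch.Tensor.to.
def to_sketch(obj, fn):
    if isinstance(obj, dict):
        return {k: to_sketch(v, fn) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(to_sketch(v, fn) for v in obj)
    return fn(obj)  # leaf

data = {"x": [1, 2], "y": (3, {"z": 4})}
doubled = to_sketch(data, lambda v: v * 2)
```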

Deprecations

  • delu.concat is deprecated in favor of delu.cat
  • delu.hardware.free_memory is now a deprecated alias to delu.cuda.free_memory
  • delu.data.Stream is deprecated
  • delu.data.collate is deprecated
    • instead, use torch.utils.data.dataloader.default_collate
  • delu.data.make_index_dataloader is deprecated
    • instead, use delu.data.IndexDataset + torch.utils.data.DataLoader
  • delu.evaluation is deprecated
    • instead, use torch.nn.Module.eval + torch.inference_mode
  • delu.hardware.get_gpus_info is deprecated
    • instead, use the corresponding functions from torch.cuda
  • delu.improve_reproducibility is deprecated
    • instead, use delu.random.seed and manually apply the settings mentioned here

Documentation

  • many improved explanations and examples

Dependencies

  • require python>=3.8
  • remove tqdm and pynvml from dependencies

Project

  • switch from flake8 to ruff
  • move tool settings from setup.cfg to pyproject.toml for coverage, isort, mypy
  • freeze versions in requirements_dev.txt

v0.0.13

29 Jul 22:16

THE PROJECT WAS RENAMED FROM "Zero" TO "DeLU"

The changes since v0.0.8:

Deprecations

  • delu.data.IndexLoader is deprecated in favor of the new delu.data.make_index_dataloader

New features

  • delu.data.make_index_dataloader is a better replacement for the deprecated delu.data.IndexLoader

Documentation

  • change theme to Furo

Project

  • move most tool settings to pyproject.toml

v0.0.8

06 Nov 22:07

API changes:

  • zero.random.seed
    • the argument must be less than 2 ** 32 - 10000
    • the seed argument was renamed to base_seed

New features:

  • zero.random.seed:
    • sets better seeds based on the given argument
    • the new argument one_cuda_seed (False by default) allows choosing whether the same seed is set for all CUDA devices
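
A toy sketch of the general idea (NOT the actual algorithm, which is internal to the library and was later changed again in v0.0.21): distinct per-library seeds are derived from one base_seed, and the upper bound on the argument presumably ensures the derived seeds stay within the valid 32-bit range.

```python
# Hypothetical sketch only: derive distinct per-library seeds (e.g. for
# random, numpy, torch) from one base_seed by adding small offsets. The
# bound below mirrors the documented constraint base_seed < 2 ** 32 - 10000.
def derive_seeds(base_seed, n_libraries=3):
    assert 0 <= base_seed < 2 ** 32 - 10000
    return [base_seed + i for i in range(n_libraries)]

seeds = derive_seeds(42)
```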

Documentation

  • style improvements

Dependencies

  • numpy>=1.18

v0.0.7

31 Oct 23:39

This release includes several nice updates to the website:

  • added a "copy button" to code blocks
  • highlighted Python signatures
  • made section link buttons visible and usable