Added more API Documentations with Sphinx #233

Open · wants to merge 23 commits into base: main
2 changes: 1 addition & 1 deletion .gitignore
@@ -22,5 +22,5 @@ dist/
*.npz
*.pkl

# vscode
venv
.vscode
(Four files could not be displayed in the diff view.)
9 changes: 9 additions & 0 deletions docs/source/build_your_own_model.rst
@@ -0,0 +1,9 @@
Build Your Own Model
====================



TimeSeriesModel
---------------

.. automodule:: torchts.nn.model
4 changes: 4 additions & 0 deletions docs/source/contributing.rst
@@ -0,0 +1,4 @@
Contributing to TorchTS
=======================

Start Contributing
85 changes: 85 additions & 0 deletions docs/source/getting_started.rst
@@ -0,0 +1,85 @@
Getting Started
===============

Make sure you have installed `torchTS`.

In the following example, we will use the `torchTS` package to train a simple LSTM model on a time-series dataset. We will also enable uncertainty quantification so that we can get prediction intervals.

1. First, import the necessary packages.

.. code-block:: python

    import torch
    import numpy as np
    import matplotlib.pyplot as plt

    from torchts.nn.models.lstm import LSTM
    from torchts.nn.loss import quantile_loss

2. Next, let's randomly generate a time-series dataset.

.. code-block:: python

    # generate linear time series data with some noise
    n = 200
    x_max = 10
    slope = 2
    scale = 2

    x = torch.from_numpy(np.linspace(-x_max, x_max, n).reshape(-1, 1).astype(np.float32))
    y = slope * x + np.random.normal(0, scale, n).reshape(-1, 1).astype(np.float32)

    plt.plot(x, y)
    plt.show()

We will get the following plot:

.. image:: ./_static/images/getting_started__dataset_plot.png
    :scale: 100%

3. Then, we can select and train our model. In this example, we will use an LSTM model.

.. code-block:: python

    # model configs
    inputDim = 1
    outputDim = 1
    optimizer_args = {"lr": 0.01}
    # confidence level = 0.025
    quantile = 0.025
    batch_size = 10

    model = LSTM(
        inputDim,
        outputDim,
        torch.optim.Adam,
        criterion=quantile_loss,
        criterion_args={"quantile": quantile},
        optimizer_args=optimizer_args,
    )
    model.fit(x, y, max_epochs=100, batch_size=batch_size)

4. After the model is trained, we can use it to predict future values. More importantly, since we enabled an uncertainty quantification method, we can also get a prediction interval!

.. code-block:: python

    y_preds = model.predict(x)

5. Let's plot the prediction results.

.. code-block:: python

    plt.plot(x, y, label="y_true")
    plt.plot(x, y_preds, label=["lower", "upper"])
    plt.legend()
    plt.show()

.. image:: ./_static/images/getting_started__pred_results_1.png
    :scale: 100%

Example prediction results for other datasets:

.. image:: ./_static/images/getting_started__sample_dataset.png
    :scale: 100%

.. image:: ./_static/images/getting_started__sample_results.png
    :scale: 100%
25 changes: 21 additions & 4 deletions docs/source/index.rst
@@ -6,13 +6,30 @@
Welcome to TorchTS's documentation!
===================================



.. toctree::
    :maxdepth: 1
    :caption: Getting Started:

    installation
    getting_started


.. toctree::
    :maxdepth: 1
    :caption: TorchTS Documentation:

    torchts.nn/index
    torchts.nn.loss
    torchts.utils.data

.. toctree::
    :maxdepth: 1
    :caption: More Advanced:

    build_your_own_model
    contributing


Indices and tables
73 changes: 73 additions & 0 deletions docs/source/installation.rst
@@ -0,0 +1,73 @@
Installing torchTS
===================

Dependencies
^^^^^^^^^^^^
* Python 3.7+
* `PyTorch <https://pytorch.org/>`_
* `PyTorch Lightning <https://www.pytorchlightning.ai/>`_
* `SciPy <https://scipy.org/>`_

Installing the Latest Release
------------------------------

PyTorch Configuration
^^^^^^^^^^^^^^^^^^^^^
- Since torchTS is built upon PyTorch, you may want to customize your PyTorch configuration for your specific needs by following the `PyTorch installation instructions <https://pytorch.org/get-started/locally/>`_.

**Important note for macOS users:**

- If you need CUDA on MacOS, you will need to build PyTorch from source. Please consult the PyTorch installation instructions above.

Typical Installation
^^^^^^^^^^^^^^^^^^^^

- To install torchTS through PyPI, execute this command::

    pip install torchts

Conda Installation
^^^^^^^^^^^^^^^^^^

- If you would like to install torchTS through conda, it is available through this command::

    conda install -c conda-forge torchts

Installing torchTS for Local Development
----------------------------------------

- In order to develop torchTS, it is important to ensure you have the most up-to-date dependencies. `Poetry <https://python-poetry.org/>`_ is used by torchTS to help manage these dependencies in an elegant manner.

Clone repository
^^^^^^^^^^^^^^^^^^
- Begin by cloning the GitHub Repository::

    # Clone the latest version of torchTS from GitHub and navigate to the root directory
    git clone https://github.com/Rose-STL-Lab/torchTS.git
    cd torchTS


Use Poetry to Install Dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- Once you are in the root directory for torchTS, you can use the following command to install its most up-to-date dependencies.
- If you are unfamiliar with Poetry, follow the guides on `installation <https://python-poetry.org/docs/>`_ and `basic usage <https://python-poetry.org/docs/basic-usage/>`_ from the Poetry project’s documentation::

    # install torchTS' dependencies through poetry
    poetry install

Running a simple notebook with your local environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- Poetry essentially sets up a virtual environment that automatically configures itself with the dependencies needed to work with torchTS.
- Once you’ve installed the dependencies for torchTS through Poetry, we can run a Jupyter Notebook with a kernel built upon torchTS’ environment using this command::

    # Run this from the root directory of torchTS
    poetry run jupyter notebook

- Similarly, we can run Python scripts through this environment using this command::

    # Run any Python script through the Poetry environment
    poetry run python [PYTHON FILE]

- Poetry is a very capable package management tool and we recommend you explore its functionality further with `their documentation <https://python-poetry.org/docs/>`_ to get the most out of it.
7 changes: 0 additions & 7 deletions docs/source/modules.rst

This file was deleted.

28 changes: 28 additions & 0 deletions docs/source/torchts.nn.loss.rst
@@ -0,0 +1,28 @@
torchts.nn.loss
===============

Quantile Loss
-------------

Quantile regression uses the one-sided quantile loss to predict specific percentiles of the dependent variable.
The quantile regression model uses the pinball loss function written as:

.. math::
    L_{Quantile}\big(y,f(x),\theta,p\big) = \min_\theta\{\mathbb{E}_{(x,y)\sim D}[(y - f(x))(p - \mathbb{1}\{y < f(x)\})]\}

where :math:`p` is the fixed quantile level and :math:`\theta` parameterizes the model :math:`f`. When the pinball loss is minimized, the result is the optimal estimate of the :math:`p`-th quantile.

.. autofunction:: torchts.nn.loss.quantile_loss
    :noindex:

Mean Interval Score Loss
------------------------

.. autofunction:: torchts.nn.loss.mis_loss
    :noindex:

Masked Mean Absolute Error Loss
--------------------------------

.. autofunction:: torchts.nn.loss.masked_mae_loss
    :noindex:
14 changes: 14 additions & 0 deletions docs/source/torchts.nn/dcrnn.rst
@@ -0,0 +1,14 @@
DCRNN
=====

In spatiotemporal forecasting, assume we have multiple time series generated from a fixed space :math:`x(s,t)`.
`Diffusion Convolutional LSTM <https://openreview.net/pdf?id=SJiHXGWAZ>`_ models the time series on an irregular grid (graph) as a diffusion process.
Diffusion Convolutional LSTM replaces the matrix multiplication in a regular LSTM with diffusion convolution. It determines the future state of a certain cell in the graph by the inputs and past states of its local neighbors:

.. math::
    \begin{bmatrix} i_t \\ f_t \\ o_t \end{bmatrix} = \sigma\big(W^{x} \star_g x_t + W^h \star_g h_{t-1} + W^c \circ c_{t-1} + b\big)

where :math:`W \star_g x = \sum_{i=1}^k \big(D^{-1}A\big)^i \cdot W \cdot x` is the diffusion convolution.

.. automodule:: torchts.nn.models.dcrnn
    :members:
8 changes: 8 additions & 0 deletions docs/source/torchts.nn/index.rst
@@ -0,0 +1,8 @@
torchts.nn
===============

.. toctree::
    :caption: Models:

    seq2seq
    dcrnn
17 changes: 17 additions & 0 deletions docs/source/torchts.nn/seq2seq.rst
@@ -0,0 +1,17 @@
Seq2seq
=======

The `sequence to sequence model <https://proceedings.neurips.cc/paper/2014/file/a14ac55a4f27472c5d894ec1c3c743d2-Paper.pdf>`_ originates from language translation.
Our implementation adapts the model for multi-step time series forecasting. Specifically, given the input series :math:`x_1,
\ldots, x_{t}`, the model maps the input series to the output series:

.. math::
    x_{t-p}, x_{t-p+1}, \ldots, x_{t-1} \longrightarrow x_t, x_{t+1}, \ldots, x_{t+h-1}

where :math:`p` is the input history length and :math:`h` is the forecasting horizon.
Sequence to sequence (Seq2Seq) models consist of an encoder and a decoder. The final state of the encoder is fed as the initial state of the decoder.
We can use various models for both the encoder and the decoder. This implementation uses a Long Short-Term Memory (LSTM) network.
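For intuition, the input/output mapping above corresponds to slicing a series into windows of history length :math:`p` and horizon :math:`h` (a hypothetical preprocessing helper, not part of torchTS):

```python
import numpy as np

def make_windows(series, p, h):
    """Slice a 1-D series into input windows of length p and target windows of length h."""
    X, Y = [], []
    for t in range(p, len(series) - h + 1):
        X.append(series[t - p:t])    # history  x_{t-p}, ..., x_{t-1}
        Y.append(series[t:t + h])    # horizon  x_t, ..., x_{t+h-1}
    return np.array(X), np.array(Y)

series = np.arange(10.0)
X, Y = make_windows(series, p=3, h=2)
print(X.shape, Y.shape)  # (6, 3) (6, 2)
```

The encoder consumes each row of `X` and its final state initializes the decoder, which emits the corresponding row of `Y` step by step.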


.. automodule:: torchts.nn.models.seq2seq
    :members:
18 changes: 0 additions & 18 deletions docs/source/torchts.rst

This file was deleted.

5 changes: 5 additions & 0 deletions docs/source/torchts.utils.data.rst
@@ -0,0 +1,5 @@
torchts.utils.data
==================

.. automodule:: torchts.utils.data
    :members:
1 change: 1 addition & 0 deletions scripts/build_docs.sh
@@ -12,5 +12,6 @@ echo "Moving Sphinx documentation to Docusaurus"
echo "-----------------------------------------"

SPHINX_HTML_DIR="website/static/api/"

cp -R "./docs/build/html/" "./${SPHINX_HTML_DIR}"
echo "Successfully moved Sphinx docs to ${SPHINX_HTML_DIR}"