Commit 596fe84
docs: fix doc code highlights

This commit fixes several incorrectly emphasized code lines in the
documentation.

rickstaa committed Feb 8, 2024
1 parent 5d39316 commit 596fe84

Showing 4 changed files with 12 additions and 12 deletions.
16 changes: 8 additions & 8 deletions docs/source/usage/hyperparameter_tuning.rst
@@ -48,14 +48,14 @@ Consider the example in ``stable_learning_control/examples/pytorch/sac_ray_hyper
:language: python
:linenos:
:lines: 32-
- :emphasize-lines: 12, 15-30, 38-49, 56, 59-66, 70-95, 98

- In this example, a boolean on line ``12`` can enable Weights & Biases logging. On lines ``15-30``, we first create a small wrapper
- function that ensures that the Ray Tuner serves the hyperparameters in the SLC algorithm's format. Following lines ``38-49`` setup
- a Weights & Biases callback if the ``USE_WANDB`` constant is set to ``True``. On line ``56``, we then set the starting point for
- several hyperparameters used in the hyperparameter search. Next, we define the hyperparameter search space on lines ``59-66``
- while we initialise the Ray Tuner instance on lines ``70-95``. Lastly, we start the hyperparameter search by calling the
- Tuners ``fit`` method on line ``98``.
+ :emphasize-lines: 12, 15-29, 38-48, 55, 58-65, 69-94, 97

+ In this example, a boolean on line ``12`` can enable Weights & Biases logging. On lines ``15-29``, we first create a small wrapper
+ function that ensures that the Ray Tuner serves the hyperparameters in the SLC algorithm's format. Lines ``38-48`` then set up
+ a Weights & Biases callback if the ``USE_WANDB`` constant is set to ``True``. On line ``55``, we set the starting point for
+ several hyperparameters used in the hyperparameter search. Next, we define the hyperparameter search space on lines ``58-65``,
+ while we initialise the Ray Tuner instance on lines ``69-94``. Lastly, we start the hyperparameter search by calling the
+ Tuner's ``fit`` method on line ``97``.

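The workflow the paragraph above walks through, a wrapper that adapts the tuner's flat config to the algorithm's signature, a search space, and a ``fit`` call, can be sketched with the standard library alone. Everything below is a hypothetical stand-in (``train``, ``fit``, the toy objective and search space are invented for illustration), not Ray Tune's or SLC's actual API:

```python
import itertools

# Hypothetical stand-in for an SLC training run; the real script calls a
# stable_learning_control algorithm and reports its return to Ray Tune.
def train(lr, gamma):
    # Fake objective that peaks at lr=1e-3, gamma=0.99.
    return -abs(lr - 1e-3) - abs(gamma - 0.99)

def train_wrapper(config):
    """Adapt the tuner's flat config dict to the algorithm's keyword
    signature, mirroring the wrapper the paragraph describes."""
    return train(lr=config["lr"], gamma=config["gamma"])

# Toy search space, standing in for the real hyperparameter ranges.
SEARCH_SPACE = {"lr": [1e-4, 1e-3, 1e-2], "gamma": [0.95, 0.99]}

def fit(space, objective):
    """Exhaustive stand-in for Tuner.fit(): evaluate every combination
    and keep the best-scoring config."""
    best_cfg, best_score = None, float("-inf")
    for values in itertools.product(*space.values()):
        cfg = dict(zip(space.keys(), values))
        score = objective(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg

best = fit(SEARCH_SPACE, train_wrapper)
```

The wrapper is the key pattern: tuning frameworks hand hyperparameters to the trainable as a single dict, so a thin adapter is needed whenever the algorithm expects individual keyword arguments.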
When running the script, the Ray Tuner searches for the best hyperparameter combination. While doing so, it prints
the results to ``stdout``, a TensorBoard log file, and the Weights & Biases portal. You can check the TensorBoard logs using the
2 changes: 1 addition & 1 deletion docs/source/usage/running.rst
@@ -533,7 +533,7 @@ Consider the example in ``stable_learning_control/examples/pytorch/sac_exp_grid_
.. literalinclude:: /../../examples/pytorch/sac_exp_grid_search.py
:language: python
:linenos:
- :lines: 16-
+ :lines: 17-
:emphasize-lines: 22-28, 31

After making the ExperimentGrid object, parameters are added to it with
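The truncated passage above describes building an ExperimentGrid object and adding parameters to it. The idea can be sketched in a few lines of plain Python; the class below is a hypothetical stand-in for illustration, not the SLC ``ExperimentGrid`` API:

```python
import itertools

class ExperimentGridSketch:
    """Minimal sketch of an experiment grid: parameters are added with
    add(), and run() calls a thunk once per parameter combination.
    Hypothetical stand-in, not the SLC ExperimentGrid class."""

    def __init__(self):
        self._params = {}

    def add(self, key, vals):
        # Accept a single value or a list of values, as grid helpers
        # commonly do; a scalar becomes a one-element axis.
        self._params[key] = vals if isinstance(vals, list) else [vals]

    def variants(self):
        """Cross-product of all added parameter axes."""
        keys = list(self._params)
        return [dict(zip(keys, combo))
                for combo in itertools.product(*self._params.values())]

    def run(self, thunk):
        """Invoke the experiment thunk once per variant."""
        return [thunk(**variant) for variant in self.variants()]
```

For example, adding ``seed`` with two values and ``epochs`` with one yields two variants, and ``run`` launches the thunk for each.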
4 changes: 2 additions & 2 deletions docs/source/usage/saving_and_loading.rst
@@ -245,7 +245,7 @@ In this example, observe that

* On line 6, we import the algorithm we want to load.
* On line 12-14, we use the :meth:`~stable_learning_control.utils.log_utils.logx.EpochLogger.load_config` method to restore the hyperparameters that were used during the experiment. This saves us time in setting up the correct hyperparameters.
- * on line 15, we use the :meth:`~stable_learning_control.utils.log_utils.logx.EpochLogger.load_config` method to restore the environment used during the experiment. This saves us time in setting up the environment.
+ * on line 15, we use the :meth:`~stable_learning_control.utils.log_utils.logx.EpochLogger.load_env` method to restore the environment used during the experiment. This saves us time in setting up the environment.
* on line 17, we import the model weights.
* on line 18-19, we load the saved weights onto the algorithm.

@@ -297,7 +297,7 @@ In this example, observe that
* On line 12-14, we use the :meth:`~stable_learning_control.utils.log_utils.logx.EpochLogger.load_config` method
to restore the hyperparameters that were used during the experiment. This saves us time in setting up the correct
hyperparameters.
- * on line 15, we use the :meth:`~stable_learning_control.utils.log_utils.logx.EpochLogger.load_config` method to
+ * on line 15, we use the :meth:`~stable_learning_control.utils.log_utils.logx.EpochLogger.load_env` method to
restore the environment used during the experiment. This saves us time in setting up the environment.
* on line 17, we import the model weights.
* on line 18-19, we load the saved weights onto the algorithm.
2 changes: 1 addition & 1 deletion docs/source/utils/loggers.rst
@@ -232,7 +232,7 @@ In this example, observe that
* On line 107, :meth:`~stable_learning_control.utils.log_utils.logx.EpochLogger.setup_pytorch_saver` is used to prepare the logger to save the key elements of the CNN model.
* On line 141, diagnostics are saved to the logger's internal state via :meth:`~stable_learning_control.utils.log_utils.logx.EpochLogger.store`.
* On line 147, the CNN model is saved once per epoch via :meth:`~stable_learning_control.utils.log_utils.logx.EpochLogger.save_state`.
- * On lines 60-65, :meth:`~stable_learning_control.utils.log_utils.logx.EpochLogger.log_tabular` and :meth:`~stable_learning_control.utils.log_utils.logx.EpochLogger.dump_tabular` are
+ * On lines 150-157, :meth:`~stable_learning_control.utils.log_utils.logx.EpochLogger.log_tabular` and :meth:`~stable_learning_control.utils.log_utils.logx.EpochLogger.dump_tabular` are
used to write the epoch diagnostics to file. Note that the keys passed into :meth:`~stable_learning_control.utils.log_utils.logx.EpochLogger.log_tabular` are the same as the
keys passed into :meth:`~stable_learning_control.utils.log_utils.logx.EpochLogger.store`. We use the ``tb_write=True`` option to enable TensorBoard logging.
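The store/log_tabular/dump_tabular pattern the bullets describe can be sketched in a few lines; the class below is a hypothetical stand-in for illustration, not the SLC ``EpochLogger`` API (it omits file output, TensorBoard, and the ``tb_write`` option):

```python
class EpochLoggerSketch:
    """Tiny sketch of the epoch-logging pattern: store() accumulates
    per-step diagnostics under a key, log_tabular() reduces them to one
    value for the epoch, and dump_tabular() emits the row and resets.
    Hypothetical stand-in, not the SLC EpochLogger class."""

    def __init__(self):
        self._buffer = {}  # key -> list of values seen this epoch
        self._row = {}     # key -> reduced value for the current row

    def store(self, **diagnostics):
        for key, value in diagnostics.items():
            self._buffer.setdefault(key, []).append(value)

    def log_tabular(self, key):
        # Reduce with a mean, matching the same-keys contract the text
        # notes: every key logged must first have been stored.
        values = self._buffer.pop(key)
        self._row[key] = sum(values) / len(values)

    def dump_tabular(self):
        row, self._row = self._row, {}
        return row
```

This mirrors the contract highlighted above: the keys passed to ``log_tabular`` must match the keys previously passed to ``store``, since each epoch row is reduced from the stored per-step values.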

