diff --git a/mask_detection_training/infer/files/lightweight_screen_torch_inference.py b/mask_detection_training/infer/files/pytorch_inference.py
similarity index 100%
rename from mask_detection_training/infer/files/lightweight_screen_torch_inference.py
rename to mask_detection_training/infer/files/pytorch_inference.py
diff --git a/neural_networks_hero/aboutworkshop/about.md b/neural_networks_hero/aboutworkshop/about.md
new file mode 100644
index 0000000..15d79e7
--- /dev/null
+++ b/neural_networks_hero/aboutworkshop/about.md
@@ -0,0 +1,72 @@
+# About This Workshop
+
+Estimated Time: 5 minutes
+
+## Introduction
+
+League of Legends is one of the most played video games in the world. In this workshop, we'll leverage the power of AI with League of Legends in a unique and innovative way. We'll dive deep into the data we can extract through the game's API, how to structure this data, and how to use it to train our own Machine Learning model to generate real-time predictions about any match.
+
+Are you interested in learning machine learning (ML)? How about doing so in the context of the exciting world of gaming?! Get your ML skills bootstrapped here!
+
+Here's a short 3-minute introductory video about League of Legends:
+
+[League of Legends Introduction](youtube:OfYU4gbk13w)
+
+And here's a video to get you hyped about League and help you understand advanced strategies from one of the best teams in the world:
+
+[LCS Finals - 2023](youtube:0b0TXaJMUMU)
+
+## About Product/Technology
+
+OCI Data Science is a fully managed and serverless platform for data science teams to build, train, and manage machine learning models using Oracle Cloud Infrastructure.
+
+The Data Science Service:
+
+- Provides data scientists with a collaborative, project-driven workspace.
+- Enables self-service, serverless access to infrastructure for data science workloads.
+- Helps data scientists concentrate on methodology and domain expertise to deliver models to production.
+
+## Objectives
+
+In this lab, you will complete the following steps:
+
+- Data Collection - Downloading datasets
+- Data Preparation - Preparing datasets
+- Data Load - Searching for patterns in datasets
+- Implementing ML Models - Developing different ML models, evaluating the best one, and tuning it
+- Data Integration - Connecting to real-time data sources
+
+## OCI Elements
+
+This solution is designed to work with several OCI services, allowing you to get up and running quickly. You can read more about the services used in the lab here:
+
+- [OCI Data Science](https://www.oracle.com/artificial-intelligence/)
+- [OCI Cloud Shell](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/cloudshellintro.htm)
+- [OCI Compute](https://www.oracle.com/cloud/compute/)
+- [OCI Autonomous JSON Database](https://www.oracle.com/autonomous-database/autonomous-json-database/)
+
+## Data Sets
+
+We'll access datasets uploaded to OCI Object Storage. You will get detailed instructions on how to download the full dataset in the Infrastructure lab.
+
+You don't need to download any datasets yourself; everything will be **automated**.
+
+I have also published my datasets on Kaggle, should you be interested in looking at them in more detail, or even using them yourself:
+
+- [Old Dataset](https://www.kaggle.com/jasperan/league-of-legends-1v1-matchups-results)
+- [New Dataset](https://www.kaggle.com/datasets/jasperan/league-of-legends-optimizer-dataset)
+
+Make sure to give them a thumbs up if you're enjoying the content and find them interesting.
+
+## Useful Sources
+
+- Data Integration with leagueoflegends-optimizer: [YouTube video](https://www.youtube.com/watch?v=SlG0q4oWGsk)
+
+You may now [proceed to the next lab](#next).
+
+## Acknowledgements
+
+- **Author** - Nacho Martinez, Data Science Advocate @ DevRel
+- **Editor** - Erin Dawson, DevRel Communications Manager
+- **Contributors** - Victor Martin, Product Strategy Director
+- **Last Updated By/Date** - May 29th, 2023
diff --git a/neural_networks_hero/augment_train/augment_train.md b/neural_networks_hero/augment_train/augment_train.md
new file mode 100644
index 0000000..91cd20e
--- /dev/null
+++ b/neural_networks_hero/augment_train/augment_train.md
@@ -0,0 +1,203 @@
+# Lab 4: Augment Dataset & Train Model
+
+Estimated Time: 40 minutes
+
+## Introduction
+
+In this section, we're going to learn about the benefits of augmenting datasets, the different ways in which this can be achieved, and how to properly train a model using on-demand infrastructure (with Oracle Cloud Infrastructure).
+
+### Prerequisites
+
+* It's highly recommended to have completed [the first workshop](../../workshops/mask_detection_labeling/index.html) before starting this one, as we'll use some files and datasets that come from our work in the first workshop.
+
+* An [Oracle Free Tier, Paid or LiveLabs Cloud Account](https://signup.cloud.oracle.com/?language=en&sourceType=:ow:de:ce::::RC_WWMK220210P00063:LoL_handsonLab_introduction&intcmp=:ow:de:ce::::RC_WWMK220210P00063:LoL_handsonLab_introduction)
+* Active Oracle Cloud Account with available credits to use for Data Science service.
+
+### Objectives
+
+In this lab, you will complete the following steps:
+
+✓ Learn about Data Augmentation
+
+✓ Learn about when data augmentation is necessary, and when it isn't
+
+✓ Learn how to train a Computer Vision model
+
+## Task 1: Hyperparameters & Checkpoints
+
+The most important part of training a model is choosing the right **hyperparameters**. In this section, I'll explain the parameters I usually use, and why they are recommended for this specific problem.
+
+Then, once we have the hyperparameters set, we just need to launch the training process.
+
+### Training Parameters
+
+We're ready to make a couple of extra decisions regarding which parameters we'll use during training.
+
+It's important to choose the right parameters, as doing otherwise can produce terrible models. So, let's dive deep into what's important about training parameters. Official documentation can be found [here](https://docs.ultralytics.com/config/).
+
+* `--device`: specifies which CUDA device (or by default, CPU) we want to use. Since we're working with an OCI CPU Instance, let's set this to "cpu", which will perform training with the machine's CPU.
+* `--epochs`: the total number of epochs we want to train the model for. I set this to 3000 epochs, although my model converged very precisely long before the 3000th epoch was done; if the model stops finding improvements during training, training can end earlier (see the note below).
+    > **Note**: YOLOv5 (and lots of Neural Networks) implement a mechanism called **early stopping/patience**, which will stop training before the specified number of epochs if it can't find a way to improve the mAPs (Mean Average Precision) for any class.
+
+* `--batch`: the batch size. I set this to either 16 or 32 images per batch. Setting a lower value (considering that my dataset already has 10,000 images) is usually a *bad practice* and can cause instability.
+* `lr0` (set in the hyperparameter file rather than on the command line): the initial learning rate, which I set to 0.01 by default.
+* `--img` (image size): this parameter was probably the one that gave me the most trouble. I initially thought that all images -- if the model was trained with a specific image size -- must always match that size; however, you don't need to worry about this, thanks to image subsampling and other techniques that are implemented to avoid this issue. This value should be the maximum of the height and width of the pictures, averaged across the dataset.
+* `--save-period`: specifies how often the model should save a copy of its state. For example, if I set this to 25, it will create a YOLOv5 checkpoint I can use every 25 trained epochs.
+* `--hyp`: specifies a custom YAML file that will contain the set of hyperparameters for our model. We will talk more specifically about this option in the next task.
+
+> **Note**: if I have 1,000 images with an average width of 1920 and height of 1080, I'll probably create a model with image size = 640, and subsample my images. If I have issues with detections, perhaps I'll create a model with a higher image size value, but training time will ramp up, and inference will also require more computing power. A sample invocation combining these flags follows below.
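+Putting these flags together, a training invocation could look like the following sketch. The dataset path, checkpoint, and run name here are placeholders rather than files from this lab (the real commands we'll run appear in Task 3):
+
+```bash
+# Illustrative sketch only: paths and names below are placeholders.
+~/anaconda3/bin/python train.py \
+    --img 640 --batch 16 --epochs 3000 \
+    --data path/to/data.yaml --weights yolov5s.pt --name my_run \
+    --save-period 25 --device cpu
+```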
+### YOLO Checkpoints - Which one should you choose?
+
+The second and last decision we need to make is which YOLOv5 checkpoint we're going to start from. It's **highly recommended** that you start training from one of the possible checkpoints:
+
+![yolov5 checkpoints](./images/yolov5_performance.jpg)
+
+> **Note**: you can also start training 100% from scratch, without any checkpoints. You should only do this if what you're trying to detect has never been reproduced before, e.g. astrophotography. The upside of using a checkpoint is that YOLOv5 has already been trained up to a point, with real-world data. So, anything that resembles the real world can easily be trained from a checkpoint, which will help you reduce training time (and therefore expense).
+
+The higher the average precision of a checkpoint, the more parameters it (typically) contains. Here's a detailed comparison with all available pre-trained checkpoints:
+
+| Model | size<br>(pixels) | Mean Average Precision<sup>val</sup><br>50-95 | Mean Average Precision<sup>val</sup><br>50 | Speed<br>CPU b1<br>(ms) | Speed<br>V100 b1<br>(ms) | Speed<br>V100 b32<br>(ms) | Number of parameters<br>(M) | FLOPs<br>@640 (B) |
+| ----- | ------------ | ------------------------------ | --------------------------- | --------------- | ---------------- | ----------------- | ----------------------- | ------------- |
+| [YOLOv5n](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5n.pt) | 640 | 28.0 | 45.7 | **45** | **6.3** | **0.6** | **1.9** | **4.5** |
+| [YOLOv5s](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5s.pt) | 640 | 37.4 | 56.8 | 98 | 6.4 | 0.9 | 7.2 | 16.5 |
+| [YOLOv5m](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5m.pt) | 640 | 45.4 | 64.1 | 224 | 8.2 | 1.7 | 21.2 | 49.0 |
+| [YOLOv5l](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5l.pt) | 640 | 49.0 | 67.3 | 430 | 10.1 | 2.7 | 46.5 | 109.1 |
+| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5x.pt) | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 |
+| | | | | | | | | |
+| [YOLOv5n6](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5n6.pt) | 1280 | 36.0 | 54.4 | 153 | 8.1 | 2.1 | 3.2 | 4.6 |
+| [YOLOv5s6](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5s6.pt) | 1280 | 44.8 | 63.7 | 385 | 8.2 | 3.6 | 12.6 | 16.8 |
+| [YOLOv5m6](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5m6.pt) | 1280 | 51.3 | 69.3 | 887 | 11.1 | 6.8 | 35.7 | 50.0 |
+| [YOLOv5l6](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5l6.pt) | 1280 | 53.7 | 71.3 | 1784 | 15.8 | 10.5 | 76.8 | 111.4 |
+| [YOLOv5x6](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5x6.pt)<br>[TTA](https://github.com/ultralytics/yolov5/issues/303) | 1280<br>1536 | 55.0<br>**55.8** | 72.7<br>**72.7** | 3136<br>- | 26.2<br>- | 19.4<br>- | 140.7<br>- | 209.8<br>- |
+
+> **Note**: all checkpoints have been trained for 300 epochs with the default settings (find all of them [in the official docs](https://docs.ultralytics.com/config/)). The nano and small versions use [these hyperparameters](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-low.yaml), all others use [these](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-high.yaml).
+
+YOLOv8 also has checkpoints following the above naming convention, so if you're using YOLOv8 instead of YOLOv5, you will still need to decide which checkpoint is best for your problem.
+
+Also, note that if we want to create a model with an *`image size>640`*, we should select one of the YOLOv5 checkpoints whose name ends with a `6`.
+
+So, for this model, since I will use 640 pixels, we will just create a first version using **YOLOv5s**, and another one with **YOLOv5x**. You only really need to train one, but if you have extra time, it will be interesting to see the differences between two (or more) models trained against the same dataset.
+
+## Task 2: Augment Dataset
+
+In this part, we're going to augment our dataset.
+
+Image augmentation is a process through which you create new images based on existing images in your project training set. It's an effective way to boost model performance. By creating augmented images and adding them to your dataset, you can help your model learn to better identify classes, particularly in conditions that may not be well represented in your dataset.
+
+To decide which augmentations to apply and how they should be configured, we should ask ourselves the following:
+
+*What types of augmentations will generate data that is beneficial for our use case?*
+
+For example, aerial images might be taken in the early morning when the sun is rising, during the day when the sky is clear, during a cloudy day, and in the early evening. During these times, there will be different levels of brightness in the sky, and therefore in the images. Thus, modifying the brightness of images can be considered a **great** augmentation for this example.
+
+If we see a decrease in performance from our model with this augmentation, we can always roll the augmentation back by reverting to an earlier version of our dataset.
+
+Now that we have some knowledge of the set of checkpoints and training parameters we can specify, I'm going to focus on a parameter that is **specifically created** for data augmentation: *`--hyp`*.
+
+This option allows us to specify a custom YAML file that will hold the values for all hyperparameters of our Computer Vision model.
+
+In our YOLOv5 repository, we go to the default YAML path:
+
+```bash
+
+cd /home/$USER/yolov5/data/hyps/
+
+```
+
+Now, we can copy one of these files and start modifying the hyperparameters at our convenience. For this specific problem, I'm not going to use all customizations, since we already augmented our dataset quite a lot in the previous workshop. Instead, I will explain the augmentations that are usually used for a problem of this type.
+
+Here are all available augmentations:
+
+![augmentation types](./images/initial_parameters.png)
+
+The most notable ones are:
+
+* *`lr0`*: initial learning rate. If you want to use the SGD optimizer, set this option to `0.01`. If you want to use ADAM, set it to `0.001`.
+* *`hsv_h`*, *`hsv_s`*, *`hsv_v`*: let us control HSV modifications to the image.
+  We can change the **H**ue, **S**aturation, or **V**alue of the image; you can effectively change the brightness of a picture by modifying the *`hsv_v`* parameter, which carries the image's intensity information.
+* *`degrees`*: rotates the image and lets the model learn how to detect objects from different camera orientations.
+* *`translate`*: translating the image will displace it to the right or to the left.
+* *`scale`*: resizes selected images (by a percentage gain or loss).
+* *`shear`*: creates new images from a new viewing perspective by randomly distorting the image across its horizontal or vertical axis. The changing axis is horizontal, but it works like opening a door in real life. RoboFlow also supports vertical shear.
+* *`flipud`*, *`fliplr`*: these simply take an image and flip it either "upside down" or "left to right", generating mirrored copies of the image. This will teach the model how to detect objects from different camera angles. Also notice that *`flipud`* works in very limited scenarios: mostly with satellite imagery; *`fliplr`* is better suited for ground-level pictures of any sort (which covers the vast majority of Computer Vision models nowadays).
+* *`mosaic`*: takes four images from the dataset and creates a mosaic. This is particularly useful when we want to teach the model to detect smaller-than-usual objects, as each detection from the mosaic will be "harder" for the model: each object we want to predict will be represented by fewer pixels.
+* *`mixup`*: I have found this augmentation method particularly useful when training **classification** models. It mixes two images, one with more transparency and one with less, and lets the model learn the differences between two *problematic* classes.
+
+Once we create a separate YAML file for our custom augmentation, we can use it in training by setting the *`--hyp`* option. We'll see how to do that right below.
+
+RoboFlow also supports more augmentations. Here's a figure with their available augmentations:
+
+![augmentations offered by RoboFlow](./images/roboflow_augmentations.png)
+
+If you're particularly interested in performing additional advanced types of augmentations, check out [this video from Jacob Solawetz](https://www.youtube.com/watch?v=r-QBawf9Eoc) illustrating even more ways you can use augmentation, like object occlusion, to improve your dataset.
+
+## Task 3: Train Model
+
+Now that we have our hyperparameters and checkpoint chosen, we just need to run the following commands. To execute training, we first navigate to YOLOv5's cloned repository path:
+
+```bash
+
+cd /home/$USER/yolov5
+
+```
+
+And then, start training:
+
+```bash
+
+~/anaconda3/bin/python train.py --img 640 --data <dataset.yaml> --weights <checkpoint.pt> --name <run_name> --save-period 25 --device cpu --batch 16 --epochs 3000
+
+```
+
+> **Note**: if you don't specify a custom *`--hyp`* file, augmentation will still happen in the background, but it won't be customizable. Refer to the YOLO checkpoint section above to see which default YAML file is used by each checkpoint. However, if you want to specify custom augmentations, make sure to add this option to the command above.
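+For reference, a custom hyperparameter file might look like the sketch below. The keys are the ones discussed above; the values are illustrative examples, not tuned recommendations (in practice, you'd copy `hyp.scratch-low.yaml` and edit it, since YOLOv5 expects the full set of keys to be present):
+
+```yaml
+# hyp.custom.yaml -- illustrative values only
+lr0: 0.01       # initial learning rate (0.001 if using ADAM)
+hsv_h: 0.015    # hue augmentation (fraction)
+hsv_s: 0.7      # saturation augmentation (fraction)
+hsv_v: 0.4      # value/brightness augmentation (fraction)
+degrees: 10.0   # rotation (+/- degrees)
+translate: 0.1  # translation (+/- fraction)
+scale: 0.5      # scale (+/- gain)
+shear: 2.0      # shear (+/- degrees)
+flipud: 0.0     # probability of vertical flip (mostly useful for satellite imagery)
+fliplr: 0.5     # probability of horizontal flip
+mosaic: 1.0     # probability of mosaic
+mixup: 0.0      # probability of mixup
+```
+
+With that file saved, you would add something like *`--hyp data/hyps/hyp.custom.yaml`* to the training command. Here are the two concrete commands for the checkpoints we chose earlier: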
+```bash
+
+# for yolov5s
+~/anaconda3/bin/python train.py --img 640 --data ./datasets/y5_mask_model_v1/data.yaml --weights yolov5s.pt --name markdown --save-period 25 --device cpu --batch 16 --epochs 3000
+
+# for yolov5x
+~/anaconda3/bin/python train.py --img 640 --data ./datasets/y5_mask_model_v1/data.yaml --weights yolov5x.pt --name y5_mask_detection --save-period 25 --device cpu --batch 16 --epochs 3000
+
+```
+
+And the model will start training. Depending on the size of the dataset, each epoch will take more or less time. In my case, with 10,000 images, each epoch took about 2 minutes to train and 20 seconds to validate (these timings depend heavily on whether you train on CPU or GPU; see the note on training hardware at the end of this task).
+
+![Training GIF](./images/training.gif)
+
+For each epoch, we get broken-down information about epoch training time and the model's mAP, so we can see how our model progresses over time.
+
+## Task 4: Check Results
+
+After the training is done, we can have a look at the results. Visualizations are provided automatically, and they are pretty similar to what we discovered in the previous workshop using RoboFlow Train.
+
+Some images, visualizations, and statistics about training are saved in the destination folder. With these visualizations, we can improve our understanding of our data, mean average precisions, and many other things which will help us improve the model in the next iteration.
+
+For example, we can see how well each class in our dataset is represented:
+
+![Number of instances per class](./images/num_instances.jpg)
+
+> **Note**: this means that both the `incorrect` and `no mask` classes are underrepresented compared to the `mask` class. An idea for the future is to increase the number of examples for both of these underrepresented classes.
+
+The confusion matrix tells us how many predictions from images in the validation set were correct, and how many weren't:
+
+![confusion matrix](./images/confusion_matrix.jpg)
+
+As we previously specified, our model autosaves its training progress every 25 epochs with the *`--save-period`* option. This will cause the resulting directory to be about 1 GB in size.
+
+In the end, we only care about the best-performing model out of all the checkpoints, so let us keep *`best.pt`* (the model with the highest mAP of all checkpoints) and delete all the others.
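+As a reference, the cleanup could look like the sketch below. The run path and the `epochNN.pt` naming are assumptions based on YOLOv5's default output layout; check your own `runs/train/<run_name>/weights/` folder before deleting anything:
+
+```bash
+# Illustrative sketch: keep best.pt (and last.pt), drop the periodic checkpoints.
+cd /home/$USER/yolov5/runs/train/y5_mask_detection/weights
+ls               # best.pt, last.pt, epoch25.pt, epoch50.pt, ...
+rm -f epoch*.pt
+```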
+The model took **168** epochs to finish (early stopping kicked in, having found the best model at the 68th epoch), with an average of **10 minutes** per epoch.
+
+Remember that training time can be significantly reduced if you try this with a GPU. You can rent an OCI GPU at a fraction of the price you'd pay for comparable GPUs from other cloud vendors. For example, I originally trained this model with 2 OCI Compute NVIDIA V100s *just for **$2.50/hr***, and training time went from ~30 hours to about 6 hours.
+
+This is a list of the mAPs, broken down by class:
+
+![results](./images/results.jpg)
+
+The model has a notable mAP of **70%**. This is awesome, but it can always be improved with a bigger dataset and by fine-tuning our augmentation and training hyperparameters. Keep in mind that real-world problems, like this one, will never achieve 100% accuracy due to the nature of the problem.
+
+## Acknowledgements
+
+* **Author** - Nacho Martinez, Data Science Advocate @ Oracle DevRel
+* **Last Updated By/Date** - July 17th, 2023
diff --git a/neural_networks_hero/augment_train/images/confusion_matrix.jpg b/neural_networks_hero/augment_train/images/confusion_matrix.jpg
new file mode 100644
index 0000000..747bb87
Binary files /dev/null and b/neural_networks_hero/augment_train/images/confusion_matrix.jpg differ
diff --git a/neural_networks_hero/augment_train/images/initial_parameters.png b/neural_networks_hero/augment_train/images/initial_parameters.png
new file mode 100644
index 0000000..d2fe9df
Binary files /dev/null and b/neural_networks_hero/augment_train/images/initial_parameters.png differ
diff --git a/neural_networks_hero/augment_train/images/num_instances.jpg b/neural_networks_hero/augment_train/images/num_instances.jpg
new file mode 100644
index 0000000..dd6acdc
Binary files /dev/null and b/neural_networks_hero/augment_train/images/num_instances.jpg differ
diff --git a/neural_networks_hero/augment_train/images/results.jpg b/neural_networks_hero/augment_train/images/results.jpg
new file mode 100644
index 0000000..76ada05
Binary files /dev/null and b/neural_networks_hero/augment_train/images/results.jpg differ
diff --git a/neural_networks_hero/augment_train/images/roboflow_augmentations.png b/neural_networks_hero/augment_train/images/roboflow_augmentations.png
new file mode 100644
index 0000000..dc3acf3
Binary files /dev/null and b/neural_networks_hero/augment_train/images/roboflow_augmentations.png differ
diff --git a/neural_networks_hero/augment_train/images/training.gif b/neural_networks_hero/augment_train/images/training.gif
new file mode 100644
index 0000000..974a204
Binary files /dev/null and b/neural_networks_hero/augment_train/images/training.gif differ
diff --git a/neural_networks_hero/augment_train/images/yolov5_example.jpg b/neural_networks_hero/augment_train/images/yolov5_example.jpg
new file mode 100644
index 0000000..f188b06
Binary files /dev/null and b/neural_networks_hero/augment_train/images/yolov5_example.jpg differ
diff --git a/neural_networks_hero/augment_train/images/yolov5_performance.jpg b/neural_networks_hero/augment_train/images/yolov5_performance.jpg
new file mode 100644
index 0000000..7c64b6f
Binary files /dev/null and b/neural_networks_hero/augment_train/images/yolov5_performance.jpg differ
diff --git a/neural_networks_hero/augment_train/images/yolov5_performance.png b/neural_networks_hero/augment_train/images/yolov5_performance.png
new file mode 100644
index 0000000..4c93956
Binary files /dev/null and b/neural_networks_hero/augment_train/images/yolov5_performance.png differ
diff --git a/neural_networks_hero/creatingmodel/creatingmodel.md b/neural_networks_hero/creatingmodel/creatingmodel.md
new file mode 100644
index 0000000..10eec47
--- /dev/null
+++ b/neural_networks_hero/creatingmodel/creatingmodel.md
@@ -0,0 +1,250 @@
+# Creating the Model
+
+## Introduction
+
+In this lab, we'll create our Machine Learning models. Our first model will attempt to use as many variables as possible, while taking advantage of the power of AutoML (remember: work smart, not hard!). It'll be useful to us as it introduces the most basic and fundamental ML concepts.
+
+Estimated Time: 45 minutes
+
+### Prerequisites
+
+* An Oracle Free Tier, Paid or LiveLabs Cloud Account
+* Active Oracle Cloud Account with available credits to use for Data Science service.
+* [Previously created](https://github.com/oracle-devrel/leagueoflegends-optimizer/blob/livelabs/hols/dataextraction/infra/infra.md) OCI Data Science Environment
+
+## Task 1: Set up OCI Data Science Environment
+
+[Having previously created our OCI Data Science environment](https://github.com/oracle-devrel/leagueoflegends-optimizer/blob/livelabs/hols/dataextraction/infra/infra.md), we need to install the necessary Python dependencies to execute our code. For that, we'll access our environment.
+
+1. We open the notebook that was provisioned:
+
+    ![selecting data science](./images/select_data_science.jpg)
+
+    > **Note**: You may also find the Data Science section by searching in the top left bar, or in the Analytics & AI tab, if it doesn't appear in "Recently visited" for you:
+
+    ![analytics tab](images/analyticstab.png)
+
+    Now, we have access to a [list of our Data Science projects launched within OCI](https://cloud.oracle.com/data-science/projects). We access our project, and inside our project we'll find the notebook.
+
+    > **Note**: The name of the notebook may be different than shown here in the screenshot.
+
+    ![opening notebook](./images/open-notebook.png)
+
+    ![opening notebook](./images/open-notebook2.png)
+
+    You should now see the Jupyter environment:
+
+    ![jupyter environment](./images/notebook.png)
+
+2. We now need to load our notebook and datasets into our environment. For that, we open a new terminal inside our environment:
+
+    ![new terminal](./images/new_terminal.png)
+
+    Then, we execute the following command, which will download all necessary datasets:
+
+    ```bash
+
+    wget https://objectstorage.eu-frankfurt-1.oraclecloud.com/p/FcwFW-_ycli9z8O_3Jf8gHbc1Fr8HkG9-vnL4I7A07mENI60L8WIMGtG5cc8Qmuu/n/axywji1aljc2/b/league-hol-ocw-datasets/o/league_ocw_2023.zip && unzip league_ocw_2023.zip -d /home/datascience/.
+
+    ```
+
+    This process should take about a minute.
+
+    ![unzipping](./images/unzip_result.png)
+
+    Now, download the repository (if you haven't already):
+
+    ```bash
+
+    git clone --branch livelabs https://github.com/oracle-devrel/leagueoflegends-optimizer.git
+
+    ```
+
+    After this, we will open the notebook called _`models_2023.ipynb`_, located in _`leagueoflegends-optimizer/notebooks`_, by double-clicking it.
+
+3. Now, with our Python dependencies installed and our repository and notebook ready, we can run the notebook from the first cell. Make sure to select the correct Kernel (the one that you have configured and that has all Python dependencies installed within it) from the Kernel dropdown menu:
+
+![selecting kernel](./images/select_kernel.PNG)
+
+## Task 2: The Data Structure
+
+From our dataset, we can observe an example of the data structure we're going to use to build our model:
+
+![example data structure](./images/structure_2023.webp)
+
+It is important to remember that structuring and manipulating data takes about 80 to 90% of the time in the data science process, according to expert sources (image courtesy of [“2020 State of Data Science: Moving From Hype Toward Maturity.”](https://www.anaconda.com/state-of-data-science-2020)), and we shouldn't be discouraged when spending most of our time processing and manipulating data structures.
+The ML algorithm is the easy part, if you've correctly identified the right data structure and adapted it to the structure ML algorithms and pipelines expect.
+
+![Breakdown of effort to train model](../../../images/lab1-anaconda_1.png?raw=true)
+
+## Task 3: Load Data / Generate Dataset
+
+First, we load the dataset and train-test split it.
+
+To perform ML properly, we need to take the dataset we're going to work with, and split it into two:
+
+* A **training** dataset, from which our ML model will learn to make predictions.
+* A **testing** dataset, against which our ML model will validate the predictions it makes, to check how accurate it was compared to the truth.
+
+In ML, it's very typical to find proportions around 80% train / 20% test, as this provides enough data for the model to be trained, and enough data to check the accuracy of the model, without having too much / too little data in either of the datasets.
+
+In our case, we divide the whole dataset into two separate files: one containing training data (85% of the original dataset) and one containing testing data (15%).
+
+![reading dataset](images/read_dataset.png)
+
+Then, we begin with simple data exploration of our initial dataset. Histograms are particularly useful to find the distribution of one (or many!) variables in our dataset and see if they follow any known statistical distribution.
+
+![histogram example](images/histogram_example.PNG)
+
+It's also good to look at our new variables `f1...f3` and their minimum, average, and maximum values:
+
+![describe](images/minmax_f.PNG)
+
+This will also help us determine what to return to the user when they're playing a game in the end: the closer they are to the maximum, the better they will have performed, and we need to adjust our feedback accordingly.
+
+We're also interested in other variables' histograms, especially the ones around people getting multiple kills in a row, number of wards, big jungle objectives, match durations... Basically, statistics that I find personally interesting after finishing a League match myself.
+
+![all_histograms](images/histograms.webp)
+
+After getting a rough idea of what our dataset and some of our variables contain, it's time to tell the ML model which variables we want as inputs and which ones as outputs.
+
+For this example (in the notebook, we create several models), we'll first drop those columns we don't want to use as inputs or outputs. In this first model, we don't want to use any `f1...f5` variables in our dataset, as we're going to create a model with League's original data to begin with:
+
+![dropping columns](images/dropping_columns.PNG)
+
+After we create our `TabularDataset()` object (which extends a pandas dataframe and therefore has most of pandas' functions available), we're ready to start training.
+
+## Task 4: Model Training
+
+Now that we've seen the shape of our dataset and we have the variable we want to predict (in this case, `calculated_player_performance`), we train as many models as possible for 10 minutes. We can instantiate a `TabularPredictor()`, which takes most of the difficulty of writing this kind of code by hand out of the equation:
+
+![training simplified](images/training_simplified.PNG)
+
+We need to specify that this problem is a regression problem (we're predicting continuous numerical values, not discrete ones), and we specify which variable we're trying to predict through the `label` parameter.
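+As a reference, the whole setup can be expressed in a few lines. This is a minimal sketch assuming AutoGluon's tabular API; the CSV file name is illustrative, and the preset name may vary with your AutoGluon version:
+
+```python
+# Minimal sketch (assumes AutoGluon is installed; the file name is a placeholder).
+from autogluon.tabular import TabularDataset, TabularPredictor
+
+train_data = TabularDataset('league_train.csv')  # the 85% training split
+
+predictor = TabularPredictor(
+    label='calculated_player_performance',  # the variable we want to predict
+    problem_type='regression',              # continuous numerical target
+).fit(
+    train_data,
+    presets='medium_quality',  # preset name depends on the AutoGluon version
+    time_limit=600,            # train as many models as possible in ~10 minutes
+)
+```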
+A preset is a pre-configuration that constrains the number of iterations, the models, and the time dedicated to training each model, in order to achieve a certain "quality" defined by that preset.
+> **Note**: find all available presets [here](https://auto.gluon.ai/0.5.2/tutorials/tabular_prediction/tabular-quickstart.html#presets).
+
+After our training is done (about 10 minutes), we can display some results:
+
+First, we display a leaderboard of the best trained models, ordered by RMSE (lowest, i.e. best, first). If you're not familiar with this concept, don't worry, we'll revisit all metrics right below, in our Model Testing task. This will help us see which models perform better against the target variable that we specified before:
+
+![leaderboard](images/leaderboard.PNG)
+
+Note that our Level 2 Weighted Ensemble has the lowest RMSE of all: we'll probably want to use this model.
+
+![example of an ensemble model in computer vision](./images/example_ensemble.png)
+> **Note**: this is an example of a weighted ensemble model, in which decisions are taken using a technique called **bagging**: every model makes a prediction, and the best models weigh more in the final decision.
+
+## Task 5: Model Testing
+
+After training is done, we need to check whether the training we did was actually useful or a waste of time. To achieve this, we make use of some metrics, which depend on the type of problem we're dealing with.
+
+For example, in a binary classification problem (where we're trying to predict if something is either 0 or 1), I typically use **accuracy, precision, recall and f1-score** as standard evaluation metrics:
+
+![classification metrics](./images/classification_metrics.PNG)
+> **Note**: an example of how each one of these 4 metrics is calculated just by looking at the Confusion Matrix.
+
+However, as we're dealing with a regression problem, the most popular metrics are the MSE, MAE, RMSE, R-Squared, and variants of these coefficients.
+
+The MSE, MAE, RMSE, and R-Squared metrics are mainly used to evaluate prediction error rates and model performance in regression analysis:
+
+* MAE (Mean Absolute Error) represents the difference between the original and predicted values, obtained by averaging the absolute differences over the dataset.
+* MSE (Mean Squared Error) represents the difference between the original and predicted values, obtained by averaging the squared differences over the dataset.
+* RMSE (Root Mean Squared Error) is the square root of the MSE.
+* R-squared (coefficient of determination) represents how well the predicted values fit the original ones. It ranges from 0 to 1, interpretable as a percentage: the higher the value, the better the model.
+* The Pearson correlation coefficient is a descriptive statistic, meaning that it summarizes the characteristics of a dataset. Specifically, it describes the strength and direction of the linear relationship between two quantitative variables.
+
+Note that, in our code, we need to use our **testing dataset** to validate these metrics (so we have held-out data to check against).
+
+![metrics 1](./images/metrics_1.PNG)
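+For illustration, these metrics could be computed by hand as in the sketch below, reusing the objects from the earlier snippet and scikit-learn as a helper (AutoGluon also offers a one-line `predictor.evaluate(test_data)`):
+
+```python
+# Sketch only: 'league_test.csv' and the label column mirror the earlier example.
+import numpy as np
+from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
+
+test_data = TabularDataset('league_test.csv')    # the 15% testing split
+y_true = test_data['calculated_player_performance']
+y_pred = predictor.predict(test_data)
+
+mae = mean_absolute_error(y_true, y_pred)        # average absolute error
+mse = mean_squared_error(y_true, y_pred)         # average squared error
+rmse = np.sqrt(mse)                              # square root of the MSE
+r2 = r2_score(y_true, y_pred)                    # coefficient of determination
+print(f'MAE={mae:.3f} MSE={mse:.3f} RMSE={rmse:.3f} R2={r2:.3f}')
+```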
+We're also able to extract the feature importance of our model. This is an awesome calculation provided to us automatically by the AutoML library. Feature importance is an index that measures the contribution of each feature to our ML model's predictions.
+> **Note**: it's important to note that feature importance depends on the model and dataset used, and different algorithms may assign different importance values to the same set of features.
+
+For more advanced Machine Learning practitioners, there's a caveat I need to make here about certain types of regularization (like *L1*/*Lasso* regularization), a technique that's often used to prevent overfitting and improve the generalized performance of a model: it can force some coefficients to become zero, rendering those coefficients' variables useless in a model.
+
+![metrics 2](./images/metrics_2.PNG)
+> **Note**: if I have two variables with importances N and M, the first variable will have an importance N/M times higher than the second variable, and vice versa.
+
+This means that our model takes `deaths, assists, kills` as the three most important variables, and the fourth most important variable is the game duration.
+
+After invoking the feature importance function, we obtain the results as a dataframe (or an exported CSV file), which we can use to decide which variables to keep when iterating on our models.
+
+## Task 6: Creating Extra Models
+
+The rest of the notebook is similar to the process we've followed until now, with a few changes, which shall be mentioned here for clarity.
+
+### Win Prediction Model (2nd Model)
+
+The second model we create is a winning predictor (a binary classifier that tries to predict whether the player won or lost, based on all input variables). We specify that this is a binary classification problem this way:
+
+![binary classification](./images/binary_classification.PNG)
+
+The results from this model are very promising, reaching up to 99.37% accuracy:
+
+![2nd importances](./images/2nd_importances.PNG)
+> **Note**: if you're planning to run inference (deploy your model and make predictions) on a low-end computer, you might be better off with the Light Gradient Boosted Model, as its prediction times are about 100 times faster than our L2 Weighted Ensemble's.
+
+### Live Client API Compatible Model
+
+Now that we have one model for each of those problems, we attempt to create a model only using our `f1...f3` variables, as discussed during the workshop. We call this model the _Live Client API Compatible Model_, as it utilizes as much temporal data as possible from the API's return object.
+
+These variables were calculated like this:
+
+* (Kills + assists) / gameTime ==> kills + assists ratio ==> `f2`
+* Deaths / gameTime ==> death ratio ==> `f1`
+* xp / gameTime ==> xp per minute ==> `f3`
+
+In our dataset, we also had two other variables that I was hoping to calculate with Live Client API data as well, but these couldn't be accurately calculated:
+
+* `f4`, which represented the total amount of damage per minute, isn't present in any field of the Live Client API.
+* `f5`, which represented the total amount of gold per minute, isn't either. You can only extract the **current** amount of gold, which doesn't add any real value to the model.
+
+So, the idea now is to create a model that, given f1, f2, f3, and the champion name, is **able to predict any player's performance**.
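+For clarity, deriving these ratio features is a simple per-row computation. In this sketch, the raw column names (`kills`, `assists`, `deaths`, `xp`, `game_length`) are assumptions about the dataset's schema, used purely for illustration:
+
+```python
+# Sketch: deriving f1-f3 from raw match columns (column names are assumed).
+df['f1'] = df['deaths'] / df['game_length']                    # death ratio
+df['f2'] = (df['kills'] + df['assists']) / df['game_length']   # kills + assists ratio
+df['f3'] = df['xp'] / df['game_length']                        # xp per minute
+```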
+![3rd data manipulation](./images/3rd_manipulation.PNG)
+
+![3rd fit](./images/3rd_fit.PNG)
+
+![3rd leaderboard](./images/3rd_leaderboard.PNG)
+> **Note**: the RMSE in this third experiment, compared to the first model (both models predict the same target variable, `calculated_player_performance`), is higher, which I expected, since we're using only 4 input variables for this model instead of 100+. However, as our leaderboard indicates, all these models are able to properly **infer** a player's performance, even if the RMSE is a bit higher.
+
+![3rd importances](./images/3rd_importances.PNG)
+
+Just as an interesting observation, our model's importance is mostly based around `f1` and `f2`, with `f3` about 5 to 8 times less important than the other two.
+
+## Task 7: Downloading Models
+
+If you want to use these models on your computer while you play League, you will need to *zip* all generated models into a file and download it to your computer.
+
+In a terminal, you can run the following command to bundle all directories into one file:
+
+```bash
+
+zip -r all_models.zip /home/datascience/leagueoflegends-optimizer/notebooks/live_model_1/ /home/datascience/leagueoflegends-optimizer/notebooks/player_performance_models /home/datascience/leagueoflegends-optimizer/notebooks/winner_models/
+
+```
+
+Then go to the file explorer and download the selected file:
+
+![download all models](./images/download_models.PNG)
+
+Now, we have successfully completed the Data Science process! We have:
+
+* Loaded and explored the dataset
+* Created some useful visualizations through histograms and `.describe()`
+* Created some TabularPredictors to help us autotrain models
+* Evaluated each model's performance against a **test** dataset
+* Saved these models for future deployment
+
+You may now [proceed to the next lab](#next).
+
+## Acknowledgements
+
+* **Author** - Nacho Martinez, Data Science Advocate @ DevRel
+* **Contributors** - Victor Martin, Product Strategy Director
+* **Last Updated By/Date** - May 28th, 2023
diff --git a/neural_networks_hero/creatingmodel/images/2nd_importances.PNG b/neural_networks_hero/creatingmodel/images/2nd_importances.PNG
new file mode 100644
index 0000000..e13920c
Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/2nd_importances.PNG differ
diff --git a/neural_networks_hero/creatingmodel/images/3rd_fit.PNG b/neural_networks_hero/creatingmodel/images/3rd_fit.PNG
new file mode 100644
index 0000000..09d21d0
Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/3rd_fit.PNG differ
diff --git a/neural_networks_hero/creatingmodel/images/3rd_importances.PNG b/neural_networks_hero/creatingmodel/images/3rd_importances.PNG
new file mode 100644
index 0000000..f5ea110
Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/3rd_importances.PNG differ
diff --git a/neural_networks_hero/creatingmodel/images/3rd_leaderboard.PNG b/neural_networks_hero/creatingmodel/images/3rd_leaderboard.PNG
new file mode 100644
index 0000000..18ce77b
Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/3rd_leaderboard.PNG differ
diff --git a/neural_networks_hero/creatingmodel/images/3rd_manipulation.PNG b/neural_networks_hero/creatingmodel/images/3rd_manipulation.PNG
new file mode 100644
index 0000000..d792fc5
Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/3rd_manipulation.PNG differ
diff --git a/neural_networks_hero/creatingmodel/images/analyticstab.png b/neural_networks_hero/creatingmodel/images/analyticstab.png
new file mode 100644
index 0000000..9dae9ba
Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/analyticstab.png differ
diff --git a/neural_networks_hero/creatingmodel/images/binary_classification.PNG b/neural_networks_hero/creatingmodel/images/binary_classification.PNG
new file mode 100644
index 0000000..db876dc
Binary files /dev/null and
b/neural_networks_hero/creatingmodel/images/binary_classification.PNG differ diff --git a/neural_networks_hero/creatingmodel/images/classification_metrics.png b/neural_networks_hero/creatingmodel/images/classification_metrics.png new file mode 100644 index 0000000..1f5d0f6 Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/classification_metrics.png differ diff --git a/neural_networks_hero/creatingmodel/images/download_models.PNG b/neural_networks_hero/creatingmodel/images/download_models.PNG new file mode 100644 index 0000000..46392e1 Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/download_models.PNG differ diff --git a/neural_networks_hero/creatingmodel/images/downloadkaggle.jpg b/neural_networks_hero/creatingmodel/images/downloadkaggle.jpg new file mode 100644 index 0000000..25de0de Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/downloadkaggle.jpg differ diff --git a/neural_networks_hero/creatingmodel/images/dropping_columns.PNG b/neural_networks_hero/creatingmodel/images/dropping_columns.PNG new file mode 100644 index 0000000..a26454e Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/dropping_columns.PNG differ diff --git a/neural_networks_hero/creatingmodel/images/example_data_structure.png b/neural_networks_hero/creatingmodel/images/example_data_structure.png new file mode 100644 index 0000000..ed1f3d1 Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/example_data_structure.png differ diff --git a/neural_networks_hero/creatingmodel/images/example_ensemble.png b/neural_networks_hero/creatingmodel/images/example_ensemble.png new file mode 100644 index 0000000..9a99543 Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/example_ensemble.png differ diff --git a/neural_networks_hero/creatingmodel/images/histogram_example.PNG b/neural_networks_hero/creatingmodel/images/histogram_example.PNG new file mode 100644 index 0000000..5c71dd0 Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/histogram_example.PNG differ diff --git a/neural_networks_hero/creatingmodel/images/histograms.webp b/neural_networks_hero/creatingmodel/images/histograms.webp new file mode 100644 index 0000000..81bf743 Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/histograms.webp differ diff --git a/neural_networks_hero/creatingmodel/images/leaderboard.PNG b/neural_networks_hero/creatingmodel/images/leaderboard.PNG new file mode 100644 index 0000000..7fcf6b2 Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/leaderboard.PNG differ diff --git a/neural_networks_hero/creatingmodel/images/metrics_1.PNG b/neural_networks_hero/creatingmodel/images/metrics_1.PNG new file mode 100644 index 0000000..1fc9dcd Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/metrics_1.PNG differ diff --git a/neural_networks_hero/creatingmodel/images/metrics_2.PNG b/neural_networks_hero/creatingmodel/images/metrics_2.PNG new file mode 100644 index 0000000..10e77c8 Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/metrics_2.PNG differ diff --git a/neural_networks_hero/creatingmodel/images/minmax_f.PNG b/neural_networks_hero/creatingmodel/images/minmax_f.PNG new file mode 100644 index 0000000..46b353d Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/minmax_f.PNG differ diff --git a/neural_networks_hero/creatingmodel/images/new_terminal.png b/neural_networks_hero/creatingmodel/images/new_terminal.png 
new file mode 100644 index 0000000..c06f927 Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/new_terminal.png differ diff --git a/neural_networks_hero/creatingmodel/images/notebook.png b/neural_networks_hero/creatingmodel/images/notebook.png new file mode 100644 index 0000000..d4845b2 Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/notebook.png differ diff --git a/neural_networks_hero/creatingmodel/images/open-notebook.png b/neural_networks_hero/creatingmodel/images/open-notebook.png new file mode 100644 index 0000000..1af9c33 Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/open-notebook.png differ diff --git a/neural_networks_hero/creatingmodel/images/open-notebook2.png b/neural_networks_hero/creatingmodel/images/open-notebook2.png new file mode 100644 index 0000000..5f8e55d Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/open-notebook2.png differ diff --git a/neural_networks_hero/creatingmodel/images/raw.jpg b/neural_networks_hero/creatingmodel/images/raw.jpg new file mode 100644 index 0000000..9081823 Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/raw.jpg differ diff --git a/neural_networks_hero/creatingmodel/images/read_dataset.png b/neural_networks_hero/creatingmodel/images/read_dataset.png new file mode 100644 index 0000000..473ea18 Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/read_dataset.png differ diff --git a/neural_networks_hero/creatingmodel/images/running.jpg b/neural_networks_hero/creatingmodel/images/running.jpg new file mode 100644 index 0000000..d9ec21d Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/running.jpg differ diff --git a/neural_networks_hero/creatingmodel/images/savepage.jpg b/neural_networks_hero/creatingmodel/images/savepage.jpg new file mode 100644 index 0000000..2a88d14 Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/savepage.jpg differ diff --git a/neural_networks_hero/creatingmodel/images/select_data_science.jpg b/neural_networks_hero/creatingmodel/images/select_data_science.jpg new file mode 100644 index 0000000..e93468d Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/select_data_science.jpg differ diff --git a/neural_networks_hero/creatingmodel/images/select_kernel.PNG b/neural_networks_hero/creatingmodel/images/select_kernel.PNG new file mode 100644 index 0000000..da73b0c Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/select_kernel.PNG differ diff --git a/neural_networks_hero/creatingmodel/images/structure_2023.webp b/neural_networks_hero/creatingmodel/images/structure_2023.webp new file mode 100644 index 0000000..bd2a508 Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/structure_2023.webp differ diff --git a/neural_networks_hero/creatingmodel/images/training_simplified.PNG b/neural_networks_hero/creatingmodel/images/training_simplified.PNG new file mode 100644 index 0000000..b88d146 Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/training_simplified.PNG differ diff --git a/neural_networks_hero/creatingmodel/images/unzip.png b/neural_networks_hero/creatingmodel/images/unzip.png new file mode 100644 index 0000000..840f945 Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/unzip.png differ diff --git a/neural_networks_hero/creatingmodel/images/unzip_result.png b/neural_networks_hero/creatingmodel/images/unzip_result.png new file mode 100644 index 
0000000..2c7fade
Binary files /dev/null and b/neural_networks_hero/creatingmodel/images/unzip_result.png differ
diff --git a/neural_networks_hero/end/end.md b/neural_networks_hero/end/end.md
new file mode 100644
index 0000000..83b98bd
--- /dev/null
+++ b/neural_networks_hero/end/end.md
@@ -0,0 +1,87 @@
+# The End
+
+Follow us if you want content like this, AI news, coding tips... in your social media timelines!
+
+![victor](./images/victor_1.PNG)
+
+![nacho](./images/me_1.PNG)
+
+Hey.
+
+If you're interested in my content, check out the following links. I'm a Data Science Advocate with 4 years of experience, and I love teaching people about Machine Learning (ML) in unique ways that help them learn better.
+
+Follow me if you're interested in ML content. I promise, everything I do goes open source.
+
+## 🏆 My Stats
+
+![Streak](https://github-readme-streak-stats.herokuapp.com/?user=jasperan&theme=tokyonight)
+
+[![Trophies](https://github-profile-trophy.vercel.app/?username=jasperan&theme=onedark)](https://github.com/jasperan)
+
+## ☕ Get In Touch
+
+- [StackOverflow](https://stackoverflow.com/users/9151930/jasper?tab=profile)
+- [My Website](https://jasperan.com)
+- [LinkedIn](https://www.linkedin.com/in/jasperan/)
+
+### My Gaming ML Content
+
+- [League of Legends Machine Learning with OCI - Data Extraction](https://oracle-devrel.github.io/leagueoflegends-optimizer/hols/workshops/dataextraction/index.html) - About Data Extraction & Engineering!
+- [League of Legends Machine Learning with OCI - Model Building with scikit-learn and AutoGluon](https://oracle-devrel.github.io/leagueoflegends-optimizer/hols/workshops/mlwithoci/index.html) - Illustrates the whole AI process once we have data available
+- [League of Legends Machine Learning with OCI - Introduction to Neural Networks](https://oracle-devrel.github.io/leagueoflegends-optimizer/hols/workshops/nn/index.html) - A very basic introduction to all Neural Network concepts, like learning rate, backpropagation, and more.
+ +#### Topic-specific Articles + +- [League of Legends Optimizer using Oracle Cloud Infrastructure: Data Extraction & Processing](https://github.com/oracle-devrel/leagueoflegends-optimizer/blob/main/articles/article1.md) +- [League of Legends Optimizer using Oracle Cloud Infrastructure: Data Extraction & Processing 2](https://github.com/oracle-devrel/leagueoflegends-optimizer/blob/main/articles/article2.md) +- [League of Legends Optimizer using Oracle Cloud Infrastructure: Building an Adversarial League of Legends AI Model](https://github.com/oracle-devrel/leagueoflegends-optimizer/blob/main/articles/article3.md) +- [League of Legends Optimizer using Oracle Cloud Infrastructure: Real-Time predictions](https://github.com/oracle-devrel/leagueoflegends-optimizer/blob/main/articles/article4.md) +- [League of Legends Optimizer using Oracle Cloud Infrastructure: Real-Time predictions 2](https://github.com/oracle-devrel/leagueoflegends-optimizer/blob/main/articles/article5.md) + +### My RedBull Content + +- [Oracle RedBull Pit Strategy Hands-on Lab](https://oracle-devrel.github.io/redbull-pit-strategy/hols/workshops/pitstrategy/index.html) +- [Oracle x RedBull AI conference](https://github.com/oracle-devrel/redbull-analytics-hol) +- [Connecting F1 2021 Telemetry with Oracle JET](https://medium.com/oracledevs/connecting-f1-2021-telemetry-with-oracle-jet-a73714768c34) + +### My Computer Vision (Health ML) Content + +- [Creating a CMask Detection Model on OCI with YOLOv5: Data Labeling with RoboFlow](https://medium.com/oracledevs/creating-a-cmask-detection-model-on-oci-with-yolov5-data-labeling-with-roboflow-5cff89cf9b0b) + +- [Creating a Mask Model on OCI with YOLOv5: Training and Real-Time Inference](https://medium.com/oracledevs/creating-a-mask-model-on-oci-with-yolov5-training-and-real-time-inference-3534c7f9eb21) + +- [YOLOv5 and OCI: Implementing Custom PyTorch Code From Scratch](https://medium.com/oracledevs/yolov5-and-oci-implementing-custom-pytorch-code-from-scratch-7c6b82b0b6b1) + +### My General ML Content + +- [Benchmarking TensorFlow on OCI](https://medium.com/oracledevs/benchmarking-tensorflow-on-oci-70c781287b7d) +- [Benchmarking PyTorch on OCI and EfficientNet Models](https://medium.com/oracledevs/benchmarking-pytorch-on-oci-and-efficientnet-models-1d729b45d503) +- [Working with Data in TensorFlow](https://medium.com/oracledevs/working-with-data-in-tensorflow-a0656f616f4f) +- [Working with Data in PyTorch](https://medium.com/oracledevs/working-with-data-in-pytorch-fa2641e37d17) +- [Getting Started with PyTorch on OCI](https://medium.com/oracledevs/getting-started-with-pytorch-on-oci-dbaa5e7a40ef) + +### Articles in which I'm Featured + +- [Team up with Red Bull Racing Honda and Oracle for Hands-on Lab teaching machine learning with racing data](https://medium.com/oracledevs/team-up-with-red-bull-racing-honda-and-oracle-for-hands-on-lab-teaching-machine-learning-with-70eafcf78383) + +### My Public Appearances + +![Oracle CloudWorld 2022](https://user-images.githubusercontent.com/20752424/214705966-dd90d511-713b-4322-b620-bd2946857f02.jpg) + > Me @ Oracle CloudWorld 2022, Las Vegas + +### Current public repositories I am working on + +- [League of Legends Optimizer](https://github.com/oracle-devrel/leagueoflegends-optimizer) +- [WhatsApp OSINT Tool](https://github.com/jasperan/whatsapp-osint) + +## YouTube Videos + +- [League of Legends Content Series Interview: Using ML with League of Legends (LoL)](https://youtu.be/zz3xaLI0uq8) +- [AlmaLinux Pi Day 
2021](https://youtu.be/kGfwYqXxBfY)
+- [Machine Learning on OCI for League of Legends - Data Extraction](https://youtu.be/ad0RkqB07vI)
+- [Machine Learning on OCI for League of Legends - Model Building with scikit-learn and AutoGluon](https://youtu.be/5iIvkgcMvhM)
+- [Machine Learning on OCI for League of Legends: Introduction to Neural Networks](https://youtu.be/Uuo3ZSexNU8)
+
+⭐️ From [jasperan](https://github.com/jasperan)
diff --git a/neural_networks_hero/end/images/me_1.PNG b/neural_networks_hero/end/images/me_1.PNG
new file mode 100644
index 0000000..37dd3f2
Binary files /dev/null and b/neural_networks_hero/end/images/me_1.PNG differ
diff --git a/neural_networks_hero/end/images/victor_1.PNG b/neural_networks_hero/end/images/victor_1.PNG
new file mode 100644
index 0000000..b631da7
Binary files /dev/null and b/neural_networks_hero/end/images/victor_1.PNG differ
diff --git a/neural_networks_hero/infer/files/pytorch_inference.py b/neural_networks_hero/infer/files/pytorch_inference.py
new file mode 100644
index 0000000..baa0b8e
--- /dev/null
+++ b/neural_networks_hero/infer/files/pytorch_inference.py
@@ -0,0 +1,103 @@
+import torch
+from PIL import ImageGrab
+import argparse
+import time
+import cv2
+import numpy as np
+
+# parse arguments for different execution modes.
+parser = argparse.ArgumentParser()
+parser.add_argument('-m', '--model', help='Model path',
+                    type=str,
+                    required=True)
+parser.add_argument('-d', '--detect', help='Detection mode (screenshot)',
+                    choices=['screenshot'],
+                    default='screenshot',
+                    type=str,
+                    required=False
+)
+
+args = parser.parse_args()
+
+
+# Model: load our custom-trained weights through PyTorch Hub.
+model = torch.hub.load('ultralytics/yolov5',
+                       'custom',
+                       path=args.model,
+                       force_reload=False)
+
+
+def draw_over_image(img, df):
+    # Draw a color-coded bounding box and label for every detection row.
+    yellow = (128, 128, 0)
+    green = (0, 255, 0)
+    red = (255, 0, 0)
+    white = (255, 255, 255)
+    for idx, row in df.iterrows():
+        if row['name'] == 'mask':
+            draw_color = green
+        elif row['name'] == 'incorrect':
+            draw_color = yellow
+        else:
+            draw_color = red
+        img = cv2.rectangle(img=img, pt1=(int(row['xmin']), int(row['ymin'])),
+                            pt2=(int(row['xmax']), int(row['ymax'])),
+                            color=draw_color,
+                            thickness=5
+        )
+
+        cv2.putText(img, row['name'], (int(row['xmin'])-10, int(row['ymin'])-10), fontFace=cv2.FONT_HERSHEY_SIMPLEX, fontScale=1, color=draw_color, thickness=2
+        )
+
+    count = len(df[df['name'] == 'mask'])  # counting the 'mask' (correctly worn) class, for example.
+ if count > 0: + print('# Detections: {}'.format(count)) + CURRENT_DETECTIONS = count + else: + CURRENT_DETECTIONS = 0 + + cv2.putText(img, 'Total Masks: {}'.format(CURRENT_DETECTIONS).upper(), (150, 150), + cv2.FONT_HERSHEY_PLAIN, 2, + white + ) + + return img + +# Main loop; infers sequentially until you press "q" +while True: + + # Image + im = ImageGrab.grab() # take a screenshot of the whole screen + + img = np.array(im) + img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR) # swap the RGB screenshot into OpenCV's BGR channel order + + #img = cv2.resize(img, (1280, 1024)) + + # Capture start time to calculate FPS (before running inference) + start = time.time() + + # Inference + results = model(img) + + print(results.pandas().xyxy[0]) + + #results.show() + + + + cv2.imshow('Image', draw_over_image(img, results.pandas().xyxy[0])) + key = cv2.waitKey(30) + if key == ord('q'): + cv2.destroyAllWindows() + break + + # Print frames per second + print('{} fps'.format(1/(time.time()-start))) \ No newline at end of file diff --git a/neural_networks_hero/infer/files/requirements.txt b/neural_networks_hero/infer/files/requirements.txt new file mode 100644 index 0000000..c209f0a --- /dev/null +++ b/neural_networks_hero/infer/files/requirements.txt @@ -0,0 +1,4 @@ +numpy==1.24.1 +opencv_python==4.7.0.68 +Pillow==9.4.0 +torch==1.11.0+cu113 diff --git a/neural_networks_hero/infer/infer.md b/neural_networks_hero/infer/infer.md new file mode 100644 index 0000000..ef7e4b7 --- /dev/null +++ b/neural_networks_hero/infer/infer.md @@ -0,0 +1,118 @@ +# Lab 5: Inference (Real-Time Predictions) + +Estimated Time: 10 minutes + +## Introduction + +You may be asking yourself: how do I perform inference, i.e., actually use my model? + +Note that, since training and augmentation are done, we can spin down our OCI instance, unless we want to perform real-time inference on it. + +In my opinion, it's better to run inference on your **local computer**, as this is the only way to achieve truly real-time predictions. + +We have now arrived at the last lab of this workshop, which will teach you how to use the model in real time. + +There are two notable ways to use the model: + +- Using the _integrated_ YOLOv5 predictor and processor (beginner level) +- Using your own custom Python code (intermediate-advanced level) + +We'll quickly go over both methods so you can decide which one fits your use case best. + +### Prerequisites + +- It's highly recommended to have completed [the first workshop](../../workshops/mask_detection_labeling/index.html) before starting to do this one, as we'll use some files and datasets that come from our work in the first workshop. +- An [Oracle Free Tier, Paid or LiveLabs Cloud Account](https://signup.cloud.oracle.com/?language=en&sourceType=:ow:de:ce::::RC_WWMK220210P00063:LoL_handsonLab_introduction&intcmp=:ow:de:ce::::RC_WWMK220210P00063:LoL_handsonLab_introduction) +- Active Oracle Cloud Account with available credits to use for Data Science service. + +### Objectives + +In this lab, you will complete the following steps: + +✓ Perform the easiest form of inference with YOLOv5 + +✓ Perform a more advanced form of inference, with custom Python code + +## Task 1: Inference with Integrated YOLOv5 (Beginner) + +This inference method is the easiest one, as it's already implemented by YOLO, and we just have to invoke it. I highly recommend running inference on your own local computer. + +First, we go to our YOLOv5 directory: + +```bash + +cd /home/$USER/yolov5/ + +``` + +Next, we'll use YOLOv5's bundled predictor script, _`detect.py`_. 
Here's the general form of the command (the angle brackets are placeholders for your own values): + +```bash + +~/anaconda3/bin/python detect.py --weights=<weights_path> --img <img_size> --conf <conf_threshold> --source=<source> + +``` + +Each parameter represents the following: +- _`--img`_: dimensions of the images we'll pass to the model. If the model was trained with a given image size, it usually makes sense to specify a similar dimension here. +- _`--weights`_: path to the final `best.pt` file (returned after model training). +- _`--conf`_: minimum confidence threshold; detections scoring below it are discarded. +- _`--source`_: this option is great because it allows us to specify any type of source. We can give it things like: + * YouTube video URL + * Directory (it will perform inference on every file inside the directory) + * Individual video, in which case it will perform inference frame-by-frame and merge the result into a final video file. + * Individual image + +For example, we can execute: + +```bash + +~/anaconda3/bin/python detect.py --weights="./models/mask_model/weights/best.pt" --img 640 --conf 0.4 --source="./videos/my_video.mp4" + +``` + +## Task 2: Custom Inference with Python (Advanced) + +For this method, we're going to use **PyTorch** as the supporting framework. We need PyTorch to load the model, run it, and return results that make sense. + +To follow this short tutorial, you will need two files: + +- The [requirements.txt](./files/requirements.txt) file, containing all the dependencies you need to install before running the main Python file +- [The code](./files/pytorch_inference.py) + +If you run this Python code, you will be able to run your own custom model. PyTorch Hub allows us to load our PyTorch-compatible, YOLO-trained model. Then, we make predictions, draw bounding boxes, and print results in real time. + +You can always modify this code to your convenience, to implement things like: + +- Saving results to a file +- Streaming results over RTMP or HTTP + +Or even expand the functionality with things like counting objects, combining several Computer Vision models to achieve something more complex, integrating with databases... you name it. If you're interested in expanding the original functionality, refer to the following article, which illustrates how to do some intermediate-level things: + +- [YOLOv5 and OCI: Implementing Custom PyTorch Code From Scratch](https://medium.com/oracledevs/yolov5-and-oci-implementing-custom-pytorch-code-from-scratch-7c6b82b0b6b1) + +## Task 3: Conclusions + +We have arrived at the end of this workshop. + +In my case, I ran this example video through our newly-trained model, and it produced the following results: + +[Watch the video](youtube:LPRrbPiZ2X8) + +By this point, you should be able to: + +✓ Use OCI to train your own Computer Vision models. + +✓ Use **Automatic Data Augmentation** (with YOLO) to improve your datasets. + +✓ Train a model on custom data, and use it in real time (inference). + +I hope this helped you take your Computer Vision, PyTorch, and YOLO skills to the next level. + +If you’re curious about the goings-on of Oracle Developers in their natural habitat like me, come join us [on our public Slack channel!](https://bit.ly/odevrel_slack) We don’t mind being your fish bowl 🐠. + +Stay tuned... 
+ +## Acknowledgements + +- **Author** - Nacho Martinez, Data Science Advocate @ Oracle DevRel +- **Last Updated By/Date** - June 1st, 2023 diff --git a/neural_networks_hero/infra/images/cloud-code-editor.png b/neural_networks_hero/infra/images/cloud-code-editor.png new file mode 100644 index 0000000..735cb9e Binary files /dev/null and b/neural_networks_hero/infra/images/cloud-code-editor.png differ diff --git a/neural_networks_hero/infra/images/cloud-shell-button.png b/neural_networks_hero/infra/images/cloud-shell-button.png new file mode 100644 index 0000000..0f29b48 Binary files /dev/null and b/neural_networks_hero/infra/images/cloud-shell-button.png differ diff --git a/neural_networks_hero/infra/images/code-editor-open-menu.png b/neural_networks_hero/infra/images/code-editor-open-menu.png new file mode 100644 index 0000000..26ab41d Binary files /dev/null and b/neural_networks_hero/infra/images/code-editor-open-menu.png differ diff --git a/neural_networks_hero/infra/images/code-editor-open-popup.png b/neural_networks_hero/infra/images/code-editor-open-popup.png new file mode 100644 index 0000000..0015d51 Binary files /dev/null and b/neural_networks_hero/infra/images/code-editor-open-popup.png differ diff --git a/neural_networks_hero/infra/images/code-editor-open-tfvars.png b/neural_networks_hero/infra/images/code-editor-open-tfvars.png new file mode 100644 index 0000000..943876b Binary files /dev/null and b/neural_networks_hero/infra/images/code-editor-open-tfvars.png differ diff --git a/neural_networks_hero/infra/images/code-editor-path.png b/neural_networks_hero/infra/images/code-editor-path.png new file mode 100644 index 0000000..16e9bc6 Binary files /dev/null and b/neural_networks_hero/infra/images/code-editor-path.png differ diff --git a/neural_networks_hero/infra/images/code-editor-save.png b/neural_networks_hero/infra/images/code-editor-save.png new file mode 100644 index 0000000..139ec69 Binary files /dev/null and b/neural_networks_hero/infra/images/code-editor-save.png differ diff --git a/neural_networks_hero/infra/images/file_explorer.png b/neural_networks_hero/infra/images/file_explorer.png new file mode 100644 index 0000000..05fff2f Binary files /dev/null and b/neural_networks_hero/infra/images/file_explorer.png differ diff --git a/neural_networks_hero/infra/images/file_explorer_2.png b/neural_networks_hero/infra/images/file_explorer_2.png new file mode 100644 index 0000000..d5fb2f9 Binary files /dev/null and b/neural_networks_hero/infra/images/file_explorer_2.png differ diff --git a/neural_networks_hero/infra/images/git-clone.png b/neural_networks_hero/infra/images/git-clone.png new file mode 100644 index 0000000..cd23a3b Binary files /dev/null and b/neural_networks_hero/infra/images/git-clone.png differ diff --git a/neural_networks_hero/infra/images/lol_infra.drawio b/neural_networks_hero/infra/images/lol_infra.drawio new file mode 100644 index 0000000..6deedca --- /dev/null +++ b/neural_networks_hero/infra/images/lol_infra.drawio @@ -0,0 +1 @@ 
+5Vxdc+I4s/41uTlVk/IngUuCIXEKmyGYEHPzFhiPsTE4LzYx9q8/T7dsCCGZZHczO7vnbG0KLEutVn+pn5aYC7Wz3t9sZ09LK1n48YUiLfYXqnGhKLIqtfBBLYVoabWuREOwDRdVp2PDKCz9qlGqWnfhwk9POmZJEmfh02mjl2w2vpedtM222yQ/7fYjiU9nfZoF/lnDyJvF562TcJEtq1a50Tq+uPXDYFlN3VSq9a1ndedqJelytkjyF01q90LtbJMkE9/W+44fk/BquYhxvXfeHhjb+pvsMwNun66Sb7ZZZuvlw7fnkfLfm035TRdUnmfxrlrwYDvzsHhF6sTJblGxnhW1PLbJbrPwiaR8oV7nyzDzR08zj97msAC0LbN1XL0+LFnCQzyb+/H3JA2zMNmgzQPf/hYvnv1tFkLi/VcdsoTIzeIweLN7u3oxT7IsWePFj2STVQak0YTV0tDd378rM/mgCZiwn6z9bFugSzVAq3RX1GqvnvMXpiA1q8blCzPQrqrGWWV/wYH2UUX4UmnpD2hMVs504i9gstVjss2WSZBsZnH32Hp91BoJ5tinn5CMWVeRn2VFJb7ZLktONenvw+yRhl/q1ZP74o2xryjzQ1E/bLDeF4Po8TCKHo7D+Olk3Hd/G0JgpHFufKFcWSPTymbbrE0OTrYRz9I09OrmXhgfLDDbJquD56oHqyCR/dwmIOFkt/X8n6hCq+LRbBv42U/6Kc23jWzrx7MsfD5l5OsNpnHu4x2THDzZpAm5+hvmxL54agKfd8Stn4blbM70SHdPSbjJeFn69YVuvNJm48/Hhnc9vNoNKh6OMfilln/iXe/GA+lSVnT5JCZURvBpXVa0v5NMjoQxceOE7Df5qnFKJPnxI4WZvTaHA5N/3kK0MwP5wrD/A87YSeJky4TUhe43F9rBM1+8aSpztdH4mrjdOI3biiqfxW3lrbCt6L8sbJ/JuNpfpdGSh585Ycjm20nT+13MGdDl/wjZPlGXcM2Jy7sOsg4XCw797FrXM28VsA5fCPwH//eGIx/GztInkVP9CPek/GuetV23SnULvi9m2exCbYtHpZc+BxfK9R4moXS+39rKtLjW5pP9ziulcHZ7L3lG8txXF+qi0FWr0J+9tfdsRe3c6rTKxdoLzdtlNr/Ry8HGbZnrpbS4vS4HYfN5sX6IFjfd3WzSfJ6ve7t5YQYLJV4tboKWGVmqPdI022kXA2Oo2MYwHxgB6JvB7Obhaaospe8js7Qdazcw2rJdeqptuKpdurpNdNZxvJDunn1DCq2OVtjOWB84K9mOxnsrGsqW01VNo72zDVOxDPBqmBrm0exoqFmGF6Av3g1L9FcGjrm3ykCxSxPt1s6OunsxxsWnBVpu+D10I/+mewX5SP5kH5u397p3M26Zm/vYvx0+E2dmpK1Ndbkc5Kudpw5zX5k+zW/yhqn0Yle5W7tO1roLbc110rv+xpasx2w9m+zTQWivp2v36sdIzNEZ6eXCuLuaruN0biQR1qDZE1caqNJL+uWBfukqg8n92o4O9Jt9VfBZ891x9rn7eJ+YN3bqPtrl99Ed5miH5s2wZa4krR+NJbvTVq2RplvRMuo7Q8i9K0OG2QDyhW4yq9Agw9XhGZ+lFS0mdqHpA8M2+pEp20Vb6kcuxg13kHPRj4aFHbZVfOrQt2RF02RgQKfOuHDL4c4u2zRHMGAdmgHmKO1ydXjG594u7xXb6UJfK9kttNyG/jCnDBqZFWoS5tyDb80qh9lgpOUW0UF/zCP5ofn8cxl0axkUmAu03cB2vJ3lPCh92Ahso5gb1r4fBZoVSnuWgdPNrJLoB0Xf6ZZYr27Fro71K9Ykl8iuYTvgIcD4QFpEsLUybswdU0af0h61aYxqPbqZ5cBGnYcE/kTykhdRN4cu5I/5Nmu+Jdh/bhVtDbzJVnQ/sYk3x9Rmhglapm4/uiQfyGvaMEPkN9e1FX6/vV7CGwNX2S891YI1wyrhXVN4lePQisgjD5QVwW23HBkWKPcmpElo6UTK30VUwCynuYPyFq7YinB+yFdi/0f2NbtK8xUaaJ6jgSvt8ko931euflVyp3y8dX92xzikb+/sGAv/x2wXZ2/sFxVQ+/xmscwygu9tWrvS2z3FyWxxmYcrZPyLcHaZbLF59Oj5iZ7x3UvWa2Sr+JYtd+s5PiX6Q+LSc/ztdvYj2a7/0wemueSNp6fLytP+2/mry6dNcKH2FEluypLcwKeutb7GOGTpVc7RUi/1c7RIrY038GKj7vz1JqL+TsBYf3dPwONHgPElXHyBHt8BjK8jwj8BGzY+iQ2vfic0PEeGnWT9tMvOMeFng4gIBe8iw/OQ8HbU+JUJ5TKdTfQtNrxkcXufc1KJcf2NV/bXrWJaNPdI5PS+KvqZ4bUyfbwrZ5PWDskjNm3a3OMVxk+ckRlwsnh7raGfjve5dxtgrunT9HHRmauUlLYDq9PeIwHcDUb0SZvsw86d3KXTkRy6E3s7Ve+eFxOdaO6mjx71z02jq1DixJ8h5rmR08HGztE/Rr/cWz+U4EuZjszj5tv0bnrSrHPNKSC4iZBSFPfG3TW4I+q2jZTA7uQ6tncdGy5SBqRaEdKqSYCUSlMfohU24bhhdjkFUPtGgD5xgvRnZ8supXL7EdJqGscp12MSOE63QOqk9A0em/QdU3mXbqdN/XP0196iPesgHeL+/I740JEo6BVt046QRiHxtZHCIRXheQZIIAQfHtJEM2W+JkFFx4PGeqaNtBDjKOnQ/fC6loNMfNRyoPSr5tfsXE8oZRvcEl1ar5faNM9NTdfSmC7BAdmVISuN6GItSEvdlMfeIL2NXHpf8toqfo90azmMQX9Zy6Gk9PQoB373Qg5j0F/WcmC64DWp5uH18PpovWHOfM2MbiHWF+yscaBCdsULunskW7tBJxef40CHrnbWOtjbJMeeK9qFnsFrVyI+iCalpA8R2qI703KWE8sZ1joDXAKNUQ4IZO4sKSixFp1kOsDa7ZtAh4xLjKX2kvk1yDvQP7J2lhKoBAMWSI3RD3Y6hA5j0DdB3xb0jTHzTMmjtbYoZYf9tTObbRZzjiSJdYZnt2yn4j2154f2PuAE0vzABY92mOfiWYKsuuI94ApSV57DJt4gB6vApxGQ7Yi5w/yUl00S9ssKspX5s6dON98DCqH0/6vCyAd57KEGdlZ4+4JsRVNPsxVVUs5zFfWNCsmh8esrJPrvzFO+orAtfZCn/GsK21efTF5+c2H76ix9GQ37aDD8Zz9OniBiRZr48zfN6icF7hpAvlsW++v17QqTfli8e9fT/3yBW/9pXPgmXcq6pJ0imYsvqXDrJ0T15imBX1fdriv0/3z880/EMZ8NBb81EpwHgjYUsknWyS5F+91oYF/Q+Yoqk1gNIIv5LP13gJz3wMf14vH+eb5uZXP1oXyUetZDR4CE+caqKubtRr9oAdp4u0Vp7ebq3aZfmrlltDktASSoANSTNr+Jd7OfASkAosGoJfoV/zbQg8SUqo6pHY13NiWLHUoGu1R/5bozEkMkfV6GVVKCWNgK90GCiI
SukNDPSvEsA9jQp4rEFPQWEZJTrU+1fUr+RpQwUp01SDlRHEl70JaR9EY8n0NJeFWbpXcjTQK9Pf4w5i4FHZqznHckPK8oodXdgr5jDNN2maaY00QSO8x4nkmeISmlZB40IO0OJbDTqE/zMK1xJuai+UEXMsCaNDEngyD06QbmjZif1zySWF6UnFe1aKwJslNy6gMQYhO/JCviXczFPHQzrk9DHizPyCT6BWlUrAXyE7ywzCi57xM4YfkvFPApH9ZMekE/8IJni2Sg8DyjdjAsiacxgB7x1ab5lIHQAQEk8H4PPUnMG8AKzaeLNQREj8AqzanyPFFs2EjgCazYvLYxQNoqpVo/kn55wG3dvVt2Zepvkx3w2HEK2vs+g0eAzpFWsq7pDGgkCZlFAQEGnekRuARYAGjSiX+b+DIIRI5JtjmDRgALAkG2seI1oa8iwKpJ8lVoPoAZ0qfeF2uBjVV/txbJPiXw2BdATfMI8IEeyUb8tYNx6RYMgErWn0Igs2+44LdHwEoWQJ6fYbe5SjoloMd6IvshMEhgCLrzCgap4IUAUP2d+sUljxE2l4ox4v24HFcAWYBkAC9eVw0CwZNK/gJZkq+RrROYh+ztVNhe/Uz267J/2kpAelHgOwr7qrMwmRaD1qFEMrYhL3tCdol5IVcAbKzxzrIJ9Ak9ywI4e6wzPFNhgPxEsR0A7BKx4cHldsEr2x7mc+E7QUGnGQC0EgC2ZMd0xgdAvuGiBBVRJALtDMAfXQBajwocUg3AGfSG9ZzU3ntzzoFTgWSivT6ZUyfbx/ccPqVUhQiW/QDrG3QDzb51dS4UgIfBg6vZTPOdE5Z/DkxtvD7If7uorl29AVTrxq8vqMtnmcPfClSv/hRSbV6dFtUbH1XVP0Srjb+Sov7hGz+H9OrDe0gv7wZ9SQ7c/LvgMA+FNGfFiw4VunwXWMmS3Dpxkiu19crGBcmvRVXnJ492kvnzJFnRnRYfhgC1fdXVsr8Avb8eQn8QswChVb3xSyB0fYxc3xFT/rYrYs0zbROKIk17ob+B9xzhVW0G6b8aXzmuEsfztR0fcdIvP2jSjlhqBQx1nfud17eSzjHSdKIv3fU+7k/ulq6Sbbx1S56vh8CAerwoqD+wmONibSv6LMR68oYHD3QVum10XUwf7djbTOPDbaNyvzm/KcI3g1pmaJWWc9e1VodbLpTBKSIrXkTgrWBURCV1ZIx0q6bPWToynQ4jAYEOODNH9hVKMvVjJEa3ZCZWxvQo+w8lyrQF6mB0teKskI6RQAcZ3DAV6GhMmZHEGTpn5YE4FiBe6D5JyKgqZV6MtijXEy8lIxeB8Eac2VM2WFTZvewYpk7v6KZQxVch0A5lmmOBmCjjZ8RjBoQAakRIa7QZkfAaKZOm7LwQ6zdTscY28S3QRIezbDrminhNBtORqoxV5zWF1LcXCV75WIIRG9YgVTRUOs6w6uMMgQBTZIUCdQE9AemVjB55DpKbJOYguTksN0JqGSPeAwLuEm1BY50LNNVhNEJoBzIGSqjXvAZKZIRHc/cYHVY0BPIm1BqyDFieLA/H5FtXLLuC+DEzwSvriRCNWDej9XaFPHm9RbV+PNsfyKUr0C7puEKjNtNu05pkzpaBTqmf4NdNGR05nJ1LjIRAx2GECbsrh6Rvg2ocQAx8lAXkxOjKKvlo6KVfUKZfDJg/OpK8jyof4SqEzUd7jNj1yjcqOwmy2k7wTmX7ZxuEzSmW4J3tlWWrkc1YbOsB22J1LEb2QzqivgrbQ9cqBZJ1Zbb9EjZON8rC/CWP4hhYrAHjHqKqwlBWCF0ifx2QrKEvjCMZV3ZjVsg9SGufIfQJ2yuE7ZC8zcrnXOKL5Q6/LCvbAjom2lKNrtVhSVUcj5E09aebf32+M9ZNbfbxnI5uX99yO8ax4eoTlaL2XqB12MjYUkGPEH4hECT9WbReGfNrlf0DBQ7Tam74E1cJMH68s2/ID8YcC8B/MTeoMkD+O+VqykBUgTiWQS4K21RBNiQqLkJnJscK+g6/llmeiAlcOaHqEcUmyI98mGLbsOxKA7aDcVkhb1XIzBRHmOKYlWy1QrsBH2tba2EXXJ0K21Uco/hB+uAYDr6oOkaVI0LYQJU3ws8HrOeewdW1EVct0NfLK5RPvqFCr3QUXQhkSjL2YG8WxRqF7aDDFYWiinl7sXbMVwhebPbzoKpWBVzJETEVPEcmeLFo/5Ao5lFsGXCFj/ecwuKKmQf0b2VMh8ZybAoyQScXewzR6Vp0NF1VQIKUbvRWfkz+k4uYSe/uqmPoCpEznVpuXREf4Cc22bcDm3AsUW2iY/l1Xalq056Sc9+Q/IArRGIP4srLPa1bEXLP6ZYo2ZfE8hcVA6pYlW7JNplX/pIfqlchVQIpDtARN1ViEGcoxhpdqqrwMbeockkUC2hfVkQFj323qOK9yntFyTGTdCSLKouoZELuUlWtlMTeNeYbvfiucNUO32G31b6dlyLOEU2KOSJ2iCopZNez9qyDTi3P/FS+L4/W8xXdM32e38abuaIF4lazFJolaKyGxXBkBt5N62m+uS/h5xQrWHYUH0xjLGKPM6aYHlD8prlsjkNt2WKf7xJfwjdPb7d2WpsXNZPNP7x+0my9qp80z38IIV8pb1RP6savr5785mN+PmQ/nsg1PriSiIfXNZA/+7u2L6udfGFpQ/lsbaPxO8/3lPOLigCcfBeZhHpiTY3/7pL6xbeUxd1GB7nxtGfR1e/xLaDP0ei2pgXeBDnx5v9o9eLnFwDgIC25+arM8DXVi/qX0TVZ7dVNol94BeC8fOEmuzP9flh6fKH0d+7An/7m4PQ3bptkQ0bw6gdxknTd7SkXx192rfcB/ZT+clbutv7lLmWT+kNbSn2B5Qs2kEazeXl6caPxxq8eWm9cFGv9qntiSuuNYPD/4jcPJDi6zdJrb9IQjv+f+PiLB0VHgPv2+sXh9w6KLGmyJl2pV7LyNZYh68efutSXCHX1zDSa8qXy1umM9MfNA4/Hfy1BhIbjvzmhdv8X \ No newline at end of file diff --git a/neural_networks_hero/infra/images/lol_infra.png b/neural_networks_hero/infra/images/lol_infra.png new file mode 100644 index 0000000..adff9dd Binary files /dev/null and b/neural_networks_hero/infra/images/lol_infra.png differ diff --git a/neural_networks_hero/infra/images/open_terminal.png b/neural_networks_hero/infra/images/open_terminal.png new file mode 100644 index 0000000..3174ea4 Binary files /dev/null and b/neural_networks_hero/infra/images/open_terminal.png differ diff --git 
a/neural_networks_hero/infra/images/paste-compartment-ocid.png b/neural_networks_hero/infra/images/paste-compartment-ocid.png new file mode 100644 index 0000000..7934531 Binary files /dev/null and b/neural_networks_hero/infra/images/paste-compartment-ocid.png differ diff --git a/neural_networks_hero/infra/images/paste-public-ssh-key.png b/neural_networks_hero/infra/images/paste-public-ssh-key.png new file mode 100644 index 0000000..f8c7236 Binary files /dev/null and b/neural_networks_hero/infra/images/paste-public-ssh-key.png differ diff --git a/neural_networks_hero/infra/images/paste-region.png b/neural_networks_hero/infra/images/paste-region.png new file mode 100644 index 0000000..9375146 Binary files /dev/null and b/neural_networks_hero/infra/images/paste-region.png differ diff --git a/neural_networks_hero/infra/images/paste-riot-api-key.png b/neural_networks_hero/infra/images/paste-riot-api-key.png new file mode 100644 index 0000000..e216ee3 Binary files /dev/null and b/neural_networks_hero/infra/images/paste-riot-api-key.png differ diff --git a/neural_networks_hero/infra/images/paste-tenancy-ocid.png b/neural_networks_hero/infra/images/paste-tenancy-ocid.png new file mode 100644 index 0000000..3379a56 Binary files /dev/null and b/neural_networks_hero/infra/images/paste-tenancy-ocid.png differ diff --git a/neural_networks_hero/infra/images/proceed.png b/neural_networks_hero/infra/images/proceed.png new file mode 100644 index 0000000..dc4402a Binary files /dev/null and b/neural_networks_hero/infra/images/proceed.png differ diff --git a/neural_networks_hero/infra/images/python-check-ok.png b/neural_networks_hero/infra/images/python-check-ok.png new file mode 100644 index 0000000..5ce460f Binary files /dev/null and b/neural_networks_hero/infra/images/python-check-ok.png differ diff --git a/neural_networks_hero/infra/images/riot_api_key_gen.png b/neural_networks_hero/infra/images/riot_api_key_gen.png new file mode 100644 index 0000000..8135373 Binary files /dev/null and b/neural_networks_hero/infra/images/riot_api_key_gen.png differ diff --git a/neural_networks_hero/infra/images/start-sh-ansible.png b/neural_networks_hero/infra/images/start-sh-ansible.png new file mode 100644 index 0000000..478c865 Binary files /dev/null and b/neural_networks_hero/infra/images/start-sh-ansible.png differ diff --git a/neural_networks_hero/infra/images/start-sh-beginning.png b/neural_networks_hero/infra/images/start-sh-beginning.png new file mode 100644 index 0000000..f1e2de9 Binary files /dev/null and b/neural_networks_hero/infra/images/start-sh-beginning.png differ diff --git a/neural_networks_hero/infra/images/start-sh-output.png b/neural_networks_hero/infra/images/start-sh-output.png new file mode 100644 index 0000000..188b60c Binary files /dev/null and b/neural_networks_hero/infra/images/start-sh-output.png differ diff --git a/neural_networks_hero/infra/images/start-sh-ssh.png b/neural_networks_hero/infra/images/start-sh-ssh.png new file mode 100644 index 0000000..455ab8c Binary files /dev/null and b/neural_networks_hero/infra/images/start-sh-ssh.png differ diff --git a/neural_networks_hero/infra/images/start-sh-terraform.png b/neural_networks_hero/infra/images/start-sh-terraform.png new file mode 100644 index 0000000..3b4f3ae Binary files /dev/null and b/neural_networks_hero/infra/images/start-sh-terraform.png differ diff --git a/neural_networks_hero/infra/images/unzip_result.png b/neural_networks_hero/infra/images/unzip_result.png new file mode 100644 index 0000000..5429e69 Binary files /dev/null and 
b/neural_networks_hero/infra/images/unzip_result.png differ diff --git a/neural_networks_hero/infra/infra.md b/neural_networks_hero/infra/infra.md new file mode 100644 index 0000000..2d206f8 --- /dev/null +++ b/neural_networks_hero/infra/infra.md @@ -0,0 +1,305 @@ +# Infrastructure + +Estimated Time: 15-20 minutes + +## Introduction + +In this lab, we will build the infrastructure that we will use to run the rest of the workshop. + +The three main elements that we will be creating are: + +- **Compute** instance using a Linux-based image from Oracle Cloud. +- **Autonomous JSON Database** where we'll store the JSON documents. +- **Data Science** session and notebook, to experiment with the newly-generated data using notebooks. + +![Infrastructure](images/lol_infra.png) + +We will use Cloud Shell to execute the `start.sh` script, which will call Terraform and Ansible to deploy all the required infrastructure and set up the configuration. If you don't know about Terraform or Ansible, don't worry: you don't need to. + +- Terraform is an Open Source tool to deploy resources in the cloud with code. You declare what you want in Oracle Cloud, and Terraform makes sure you get the resources created. +- Ansible is an Open Source tool to provision on top of the created resources. It automates dependency installation, and copies the source code and config files, so everything is ready for you to use. + +Do you want to learn more? Feel free to check the Terraform and Ansible code after the workshop [in our official repository.](https://github.com/oracle-devrel/leagueoflegends-optimizer/) + +### Prerequisites + +- An Oracle Free Tier, Paid or LiveLabs Cloud Account +- Active Oracle Cloud Account with available credits to use for Data Science service. + +### Objectives + +In this lab, you will learn how to: + +- Use Oracle Cloud Infrastructure for your Compute needs +- Deploy resources using Terraform and Ansible +- Learn about federation, and what's necessary to authenticate a Terraform request +- Download the datasets we will use + +## Task 1: Cloud Shell + +First, we need to download the official repository to get access to all the code (Terraform and Ansible code for this step). + +1. From the Oracle Cloud Console, click on **Cloud Shell**. + ![Cloud Shell Button](images/cloud-shell-button.png) + +2. As soon as the Cloud Shell is loaded, you can download the assets to run this lab. + + ```bash + git clone --branch livelabs https://github.com/oracle-devrel/leagueoflegends-optimizer.git + ``` + +3. The result will look like this: + ![Git Clone](images/git-clone.png) + +4. Change directory with `cd` to the `leagueoflegends-optimizer` directory: + + ```bash + + cd leagueoflegends-optimizer/dev + + ``` + +5. Terraform uses a file called `tfvars` that contains the variables Terraform uses to talk to Oracle Cloud and set up your deployment the way you want it. You are going to copy a template we provide and fill it in with your own values. Run the following command on Cloud Shell: + + ```bash + + cp terraform/terraform.tfvars.template terraform/terraform.tfvars + + ``` + +## Task 2: Deploy with Terraform and Ansible + +1. Click on **Code Editor**, next to the Cloud Shell button. + ![Cloud Code Editor](images/cloud-code-editor.png) + + > **Note**: for **Safari** users:
+ > First, it isn't the recommended browser for OCI. Firefox or Chrome are fully tested and are recommended.
+ > With Safari, if you get a message _Cannot Start Code Editor_, go to _**Settings** > **Privacy**_ and disable _**Prevent cross-site tracking**_.
+ > Then open Code Editor again. + +2. On the **Code Editor**, go to _**File** > **Open**_. + ![Open menu](images/code-editor-open-menu.png) + +3. On the pop-up, edit the path by clicking the pencil icon: + ![Open Pop Up](images/code-editor-open-popup.png) + +4. Append, at the end, the path to the `terraform.tfvars` file: + ![Path to tfvars](images/code-editor-path.png) + + ```bash + /leagueoflegends-optimizer/dev/terraform/terraform.tfvars + ``` + +5. Type _[ENTER]_ to select, click on the `terraform.tfvars` file and click Open. + + ![TFVars Open](images/code-editor-open-tfvars.png) + +6. The file will open, and you can copy the values you get from running commands on Cloud Shell and paste them into the Code Editor. + +7. Copy the output of the following command as the region: + + ```bash + echo $OCI_REGION + ``` + + ![Paste Region](images/paste-region.png) + +8. Copy the output of the following command as the tenancy OCID: + + ```bash + echo $OCI_TENANCY + ``` + + ![Paste Tenancy OCID](images/paste-tenancy-ocid.png) + +9. Copy the output of the same command as the compartment OCID (by default, we deploy into the root compartment, whose OCID is the tenancy OCID): + + ```bash + echo $OCI_TENANCY + ``` + + ![Paste Compartment OCID](images/paste-compartment-ocid.png) + + > **Note**: for experienced Oracle Cloud users:
+ > Do you want to deploy the infrastructure on a specific compartment?
+ > You can get the Compartment OCID in different ways.
+ > The coolest one is with OCI CLI from the Cloud Shell.
+ > You have to change _`COMPARTMENT_NAME`_ to the name of the compartment you are looking for, and run the following command: + + ```bash + + oci iam compartment list --all --compartment-id-in-subtree true --query "data[].id" --name COMPARTMENT_NAME + + ``` + +10. Generate an SSH key pair; by default, it will create a private key at _`~/.ssh/id_rsa`_ and a public key at _`~/.ssh/id_rsa.pub`_. + It will ask you for the path, a passphrase, and a confirmation of the passphrase; type _[ENTER]_ all three times to accept the defaults. + + ```bash + ssh-keygen -t rsa + ``` + + > **Note**: If there isn't a public key created yet, run the following command to create one: + > ``` + > ssh-keygen + > ``` + > And select all defaults. Then, try running the next command again. + +11. We need the public key later, so keep the output of the following command in your notes. + + ```bash + cat ~/.ssh/id_rsa.pub + ``` + + ![Paste Public SSH Key](images/paste-public-ssh-key.png) + +12. From the previous lab, you should have the Riot Developer API Key. + + ![Riot API Key](images/riot_api_key_gen.png) + + Paste the Riot API Key into the `riotgames_api_key` entry of the file. + + ![Paste Riot API Key](images/paste-riot-api-key.png) + +13. Save the file. + + ![Code Editor Save](images/code-editor-save.png) + +## Task 3: Start Deployment + +1. Run the `start.sh` script: + + ```bash + ./start.sh + ``` + +2. While the script runs, it looks like this: + + ![Start SH beginning](images/start-sh-beginning.png) + +3. Terraform will create resources for you; during the process, it will look like this: + + ![Start SH terraform](images/start-sh-terraform.png) + +4. Ansible will continue the work as part of the `start.sh` script. It looks like this: + + ![Start SH ansible](images/start-sh-ansible.png) + +5. The final part of the script prints the output of all the work done: + + ![Start SH output](images/start-sh-output.png) + +6. Copy the `ssh` command from the output variable `compute`. + + ![Start SH output](images/start-sh-ssh.png) + +## Task 4: Check Deployment + +1. Run the `ssh` command from the output of the script. It will look like this: + + ```bash + ssh opc@PUBLIC_IP + ``` + +2. On the new machine, run the Python script `check.py`, which makes sure everything is working: + + ```bash + python src/check.py + ``` + +3. The result will confirm that the database connection and the Riot API work: + + ![Python Check OK](images/python-check-ok.png) + +4. If you get an error, make sure the _`terraform/terraform.tfvars`_ file from the previous task contains the correct values. In case of any error, just run the _`start.sh`_ script again. + +## Task 5: Setting up Data Science Environment + +Once we have set up our Cloud Shell to extract data, we also need to prepare a Data Science environment to work with the data once it's been collected. +To achieve this, we need to load this workshop's notebook into our environment through the official repository. + +Let's load the notebook into our environment: + +1. Open a **Terminal** inside the _'Other'_ section of the console: + + ![open terminal](./images/open_terminal.png) + +2. Then, we re-clone the repository: + + ```bash + + git clone --branch livelabs https://github.com/oracle-devrel/leagueoflegends-optimizer.git + + ``` + +3. Create the conda environment, specifying the Python version we want (you can choose between 3.8, 3.9 and 3.10): + + ```bash + conda create -n myconda python=3.9 + ``` + + ![proceed](./images/proceed.png) + +4. 
Activate the newly-created conda environment: + + ```bash + + conda activate myconda + + ``` + +5. Install conda dependencies, so our environment shows up in the Kernel selector: + + ```bash + + conda install nb_conda_kernels + + ``` + +6. Install Python dependencies: + + ```bash + + pip install -r leagueoflegends-optimizer/deps/requirements_2023.txt + + ``` + +> **Note**: make sure to accept the prompts by typing 'y' (as in 'Yes') when asked. + +After these commands, all requirements will be fulfilled, and we're ready to execute our notebooks with our newly-created conda environment. + +Whenever we execute a notebook in this Data Science environment, remember to select the correct conda environment under the _Kernel_ dropdown menu. + +## Task 6: Downloading Datasets + +We now need to load our datasets into the environment. For that, we reuse the terminal we opened in the previous step: + +![open terminal](./images/open_terminal.png) + +Then, we execute the following command, which will download all the necessary datasets: + +```bash + +wget https://objectstorage.eu-frankfurt-1.oraclecloud.com/p/FcwFW-_ycli9z8O_3Jf8gHbc1Fr8HkG9-vnL4I7A07mENI60L8WIMGtG5cc8Qmuu/n/axywji1aljc2/b/league-hol-ocw-datasets/o/league_ocw_2023.zip && unzip league_ocw_2023.zip -d /home/datascience/. + +``` + +![unzip result](./images/unzip_result.png) + +## Task 7: Accessing our Notebooks + +We should now see the repository files in our file explorer: + +![file explorer - 1](./images/file_explorer.png) + +We navigate to the _`leagueoflegends-optimizer/notebooks/`_ directory; the notebook [_`models_2023.ipynb`_](https://github.com/oracle-devrel/leagueoflegends-optimizer/blob/livelabs/notebooks/models_2023.ipynb) is the one we will review during this workshop. + +![file explorer - 2](./images/file_explorer_2.png) + +Let's open it. You may now [proceed to the next lab](#next). 
+ +## Acknowledgements + +- **Author** - Nacho Martinez, Data Science Advocate @ DevRel +- **Contributors** - Victor Martin, Product Strategy Director +- **Last Updated By/Date** - May 31st, 2023 diff --git a/neural_networks_hero/intro/images/bought_items.jpg b/neural_networks_hero/intro/images/bought_items.jpg new file mode 100644 index 0000000..54e0e23 Binary files /dev/null and b/neural_networks_hero/intro/images/bought_items.jpg differ diff --git a/neural_networks_hero/intro/images/lab1-acl.png b/neural_networks_hero/intro/images/lab1-acl.png new file mode 100644 index 0000000..4d9dad2 Binary files /dev/null and b/neural_networks_hero/intro/images/lab1-acl.png differ diff --git a/neural_networks_hero/intro/images/lab1-apikey.png b/neural_networks_hero/intro/images/lab1-apikey.png new file mode 100644 index 0000000..a76b188 Binary files /dev/null and b/neural_networks_hero/intro/images/lab1-apikey.png differ diff --git a/neural_networks_hero/intro/images/lab1-login.png b/neural_networks_hero/intro/images/lab1-login.png new file mode 100644 index 0000000..33ca8ad Binary files /dev/null and b/neural_networks_hero/intro/images/lab1-login.png differ diff --git a/neural_networks_hero/intro/images/lab1-yaml.png b/neural_networks_hero/intro/images/lab1-yaml.png new file mode 100644 index 0000000..ef8ce0f Binary files /dev/null and b/neural_networks_hero/intro/images/lab1-yaml.png differ diff --git a/neural_networks_hero/intro/images/loginfailed.png b/neural_networks_hero/intro/images/loginfailed.png new file mode 100644 index 0000000..2094f3a Binary files /dev/null and b/neural_networks_hero/intro/images/loginfailed.png differ diff --git a/neural_networks_hero/intro/images/logout.jpg b/neural_networks_hero/intro/images/logout.jpg new file mode 100644 index 0000000..3e6cae3 Binary files /dev/null and b/neural_networks_hero/intro/images/logout.jpg differ diff --git a/neural_networks_hero/intro/images/new_livelabs_functionality.PNG b/neural_networks_hero/intro/images/new_livelabs_functionality.PNG new file mode 100644 index 0000000..42080b9 Binary files /dev/null and b/neural_networks_hero/intro/images/new_livelabs_functionality.PNG differ diff --git a/neural_networks_hero/intro/intro.md b/neural_networks_hero/intro/intro.md new file mode 100644 index 0000000..615c118 --- /dev/null +++ b/neural_networks_hero/intro/intro.md @@ -0,0 +1,71 @@ +# Lab 1: Understand and Sign Up for League of Legends + +Estimated Time: 5-10 minutes + +## Overview +League of Legends is a team-based strategy game in which two teams of five powerful champions face off to destroy the other’s base. As a player, you can choose from over 140 champions to make epic plays, secure kills, and take down towers as you battle your way to victory. To win, you'll need to destroy the enemy’s Nexus—the heart of each team's base. + +Access and mobility play an important role in LoL. Your team needs to clear at least one lane to access the enemy Nexus. Blocking your path are defense structures called turrets and inhibitors. Each lane has three turrets and one inhibitor, and each Nexus is guarded by two turrets. In between the lanes is the jungle, where neutral monsters and jungle plants reside. The two most important monsters are Baron Nashor and the Drakes. Killing these units grants unique buffs for your team and can also turn the tide of the game. + +Team composition depends on five positions. Each lane lends itself to certain kinds of champions and roles—try them all or lock in to the lane that calls you. 
Champions get stronger by earning experience to level up and buying more powerful items as the game progresses. Staying on top of these two factors is crucial to overpowering the enemy team and destroying their base. + +In this lab, we'll leverage the power of AI with League of Legends in a unique and innovative way. We'll dive deep into extractable data (accessible through the game's API), how to structure this data, and how to use it to train our own Machine Learning model to generate real-time predictions about any match. + +![Bought Items](images/bought_items.jpg) +> **Note**: this image represents the functionality in 2022. You only got a winning chance probability. + +![New Model in action](images/new_livelabs_functionality.PNG) +> **Note**: this image represents the **new** functionality (2023). You get detailed insights about specific parts of your performance, such as your death ratio, your kill+assist ratio and your xp per minute. This allows you to get more information about what you could be doing right or wrong. Notice that in this screenshot, after getting a kill, my winning probabilities increase notably, and my kill + assist ratio, which was terrible until that moment, becomes "not so good". Last year's model had a lot of difficulty detecting changes like these. + + +By the end of this workshop series, you will be able to use our already-trained ML model to make real-time predictions about your in-game performance. You will also get the chance to train your own model (with your own hyperparameter tuning) and use it while you play League. + +We'll also need to create an autonomous database, which will store our generated datasets and serve as our central access point. + +In this Hands-On Lab (HOL), we'll start with the assumption that users know how League of Legends' matchmaking system works. If you have time and don't know a lot about League of Legends, we recommend reading this list of articles (included in the repository as well) to get a feel for what we've done in the past, and what we'll partially cover in this Hands-on Lab: + +1. [Article 1](https://github.com/oracle-devrel/leagueoflegends-optimizer/blob/livelabs/articles/article1.md): League of Legends Optimizer using Oracle Cloud Infrastructure: Data Extraction & Processing +2. [Article 2](https://github.com/oracle-devrel/leagueoflegends-optimizer/blob/livelabs/articles/article2.md): League of Legends Optimizer using Oracle Cloud Infrastructure: Data Extraction & Processing II +3. [Article 3](https://github.com/oracle-devrel/leagueoflegends-optimizer/blob/livelabs/articles/article3.md): League of Legends Optimizer using Oracle Cloud Infrastructure: Building an Adversarial League of Legends AI Model +4. [Article 4](https://github.com/oracle-devrel/leagueoflegends-optimizer/blob/livelabs/articles/article4.md): League of Legends Optimizer using Oracle Cloud Infrastructure: Real-Time predictions +5. [Article 5](https://github.com/oracle-devrel/leagueoflegends-optimizer/blob/livelabs/articles/article5.md): League of Legends Optimizer using Oracle Cloud Infrastructure: Real-Time predictions II + + +### Prerequisites + +* An [Oracle Free Tier, Paid or LiveLabs Cloud Account](https://signup.cloud.oracle.com/?language=en&sourceType=:ow:de:ce::::RC_WWMK220210P00063:LoL_handsonLab_introduction&intcmp=:ow:de:ce::::RC_WWMK220210P00063:LoL_handsonLab_introduction) +* Active Oracle Cloud Account with available credits to use for Data Science service. 
+* Creating a League of Legends account and completing the in-game tutorial, as we'll need an account to get an API key and perform in-game tests. You'll also need to [download the game and register](https://www.leagueoflegends.com/en-gb/). + + +## Task 1: Get Started + +This instructional video explains how to get the API key we'll need in the next lab. + +[Watch the video](youtube:HUJgYfrHhYI) + +1. First, you'll need to obtain a Riot Games API key [from the official Riot Games Developer website.](https://developer.riotgames.com/) For that, you need to create a League of Legends account (if you don't have one already) and request a development API key. Note that if you're planning to develop a League of Legends project out of this repository, you can also apply for a production API key, which has a longer expiration date as well as more requests per minute. + ![login to your league account](images/lab1-login.png) +2. After creating the account, we [access the development website](https://developer.riotgames.com/) to find our development API key. Note that by default, the development API key expires every 24 hours. So, if you're planning to generate a dataset for more than 24 hours at a time, you'll eventually start getting HTTP "unauthorized" errors. To fix this, just regenerate the API key and use the new one. + ![get api key](images/lab1-apikey.png) + +If you run into issues while obtaining the API key, or you're not able to log in to [the developer portal](https://developer.riotgames.com) like in this image: + +![error logging in](images/loginfailed.png) + +Then make sure to sign out of your newly created account, at the top right corner of your screen: + +![log out](images/logout.jpg) + +And log back in. +> **Note**: if you still get the error message "waiting for email confirmation", wait a couple of minutes and try again. + + +You may now [proceed to the next lab](#next). 
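+ +Optionally, before moving on, you can sanity-check that your key works. The snippet below is a hypothetical example and not part of the workshop files; it assumes the `requests` package and the `euw1` platform routing value, so swap in the platform that matches your account: + +```python +# Hypothetical check: call a public League endpoint with your development key. +import requests + +API_KEY = 'RGAPI-xxxxxxxx' # paste your development key here +URL = 'https://euw1.api.riotgames.com/lol/status/v4/platform-data' + +resp = requests.get(URL, headers={'X-Riot-Token': API_KEY}) +# 200 means the key works; a 4xx code usually means a missing, invalid or expired key +print(resp.status_code) +``` +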
+ +## Acknowledgements + +* **Author** - Nacho Martinez, Data Science Advocate @ DevRel +* **Editor** - Erin Dawson, DevRel Communications Manager +* **Contributors** - Victor Martin, Product Strategy Director +* **Last Updated By/Date** - May 17th, 2023 \ No newline at end of file diff --git a/neural_networks_hero/the_problem/images/diagram.png b/neural_networks_hero/the_problem/images/diagram.png new file mode 100644 index 0000000..91e008f Binary files /dev/null and b/neural_networks_hero/the_problem/images/diagram.png differ diff --git a/neural_networks_hero/the_problem/images/last_year_detail.png b/neural_networks_hero/the_problem/images/last_year_detail.png new file mode 100644 index 0000000..f4c17ce Binary files /dev/null and b/neural_networks_hero/the_problem/images/last_year_detail.png differ diff --git a/neural_networks_hero/the_problem/images/last_year_live.png b/neural_networks_hero/the_problem/images/last_year_live.png new file mode 100644 index 0000000..9da1029 Binary files /dev/null and b/neural_networks_hero/the_problem/images/last_year_live.png differ diff --git a/neural_networks_hero/the_problem/images/last_year_normal.png b/neural_networks_hero/the_problem/images/last_year_normal.png new file mode 100644 index 0000000..ed1f3d1 Binary files /dev/null and b/neural_networks_hero/the_problem/images/last_year_normal.png differ diff --git a/neural_networks_hero/the_problem/images/lcu_architecture.png b/neural_networks_hero/the_problem/images/lcu_architecture.png new file mode 100644 index 0000000..186118f Binary files /dev/null and b/neural_networks_hero/the_problem/images/lcu_architecture.png differ diff --git a/neural_networks_hero/the_problem/images/learning_curves.png b/neural_networks_hero/the_problem/images/learning_curves.png new file mode 100644 index 0000000..19d7404 Binary files /dev/null and b/neural_networks_hero/the_problem/images/learning_curves.png differ diff --git a/neural_networks_hero/the_problem/images/live_client_1.PNG b/neural_networks_hero/the_problem/images/live_client_1.PNG new file mode 100644 index 0000000..03a845d Binary files /dev/null and b/neural_networks_hero/the_problem/images/live_client_1.PNG differ diff --git a/neural_networks_hero/the_problem/images/live_client_2.PNG b/neural_networks_hero/the_problem/images/live_client_2.PNG new file mode 100644 index 0000000..269d968 Binary files /dev/null and b/neural_networks_hero/the_problem/images/live_client_2.PNG differ diff --git a/neural_networks_hero/the_problem/images/live_client_3.PNG b/neural_networks_hero/the_problem/images/live_client_3.PNG new file mode 100644 index 0000000..d654aaa Binary files /dev/null and b/neural_networks_hero/the_problem/images/live_client_3.PNG differ diff --git a/neural_networks_hero/the_problem/images/mljar-output.PNG b/neural_networks_hero/the_problem/images/mljar-output.PNG new file mode 100644 index 0000000..81944b7 Binary files /dev/null and b/neural_networks_hero/the_problem/images/mljar-output.PNG differ diff --git a/neural_networks_hero/the_problem/images/nn_importance.png b/neural_networks_hero/the_problem/images/nn_importance.png new file mode 100644 index 0000000..64eef45 Binary files /dev/null and b/neural_networks_hero/the_problem/images/nn_importance.png differ diff --git a/neural_networks_hero/the_problem/images/result_live_client.PNG b/neural_networks_hero/the_problem/images/result_live_client.PNG new file mode 100644 index 0000000..ec49d66 Binary files /dev/null and b/neural_networks_hero/the_problem/images/result_live_client.PNG differ diff --git 
a/neural_networks_hero/the_problem/images/structure_2023.webp b/neural_networks_hero/the_problem/images/structure_2023.webp new file mode 100644 index 0000000..bd2a508 Binary files /dev/null and b/neural_networks_hero/the_problem/images/structure_2023.webp differ diff --git a/neural_networks_hero/the_problem/problem.md b/neural_networks_hero/the_problem/problem.md new file mode 100644 index 0000000..93613e3 --- /dev/null +++ b/neural_networks_hero/the_problem/problem.md @@ -0,0 +1,191 @@ +# How to think like a Machine for Data Science + +Estimated Time: 30 minutes + +## Introduction + +In this chapter, I'm going to explain how I, as a Data Scientist, architected the Machine Learning portion of this workshop before writing any code, focusing on framing the problem and analyzing the available variables and insights. + +I recommend you pay special attention to this lab if you are often afraid of starting to solve a problem, or are unsure of how to begin. + +### Prerequisites + +* An [Oracle Free Tier](https://signup.cloud.oracle.com/?language=en&sourceType=:ow:de:ce::::RC_WWMK220210P00063:LoL_handsonLab_optimizer&intcmp=:ow:de:ce::::RC_WWMK220210P00063:LoL_handsonLab_optimizer), Paid, or LiveLabs Cloud Account +* Active Oracle Cloud Account with available credits to use for Data Science service. + +### Objectives + +In this lab, you will learn how to: + +* Start working on an ML problem +* Architect the problem and data structures +* Find target variables +* Explore the dataset with AutoML tools +* Get our code ready to create ML pipelines that can be reused in the future for other types of problems + +## Task 1: What Data can I get and how? + +### Standard League API + +The [League API](https://developer.riotgames.com) is where we need to look first. The data on the API is constantly evolving, and new endpoints (or even the same endpoints) are being published or improved by Riot Games. + +This is very useful for us, as more and more data is becoming available in newer API endpoints. + +To demonstrate the huge difference achieved across previous versions of this workshop: we used to have the following data structures for our models: + +![last year's normal structure](./images/last_year_normal.png) +![last year's detailed structure](./images/last_year_detail.png) + +However, a newer endpoint now exposes far larger amounts of information: + +![this year's normal structure](./images/structure_2023.webp) +> **Note**: the difference is so huge that the new data doesn't even fit on a 4K screen + +This makes our datasets much richer, which in turn lets our Machine Learning models learn faster and more efficiently, and lets us create a wider variety of models. + +### Live Client API + +Regarding the Live Client API (a specific API that allows us to extract `live`, real-time data about any match that we might be playing), the structure hasn't changed a lot. + +![Live Client API Architecture](./images/lcu_architecture.png) + +> **Note**: Communication between libraries **happens automatically when we run the program**. Since we're running the League client on our computer, the IP being used is localhost (`127.0.0.1`). If you're interested in seeing how this communication works in more detail, check out [this link](https://developer.riotgames.com/docs/lol). 
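+ +As a reference, polling this local API is simple with Python's `requests`. Here's a minimal sketch (it assumes the documented `allgamedata` endpoint; it only works while a match is running on your machine, and `verify=False` is needed because the local client serves a self-signed certificate): + +```python +# Minimal sketch: poll the Live Client Data API while a match is running. +import requests +import urllib3 + +urllib3.disable_warnings() # silence the self-signed certificate warning + +URL = 'https://127.0.0.1:2999/liveclientdata/allgamedata' + +data = requests.get(URL, verify=False, timeout=5).json() +# a few of the fields we discuss below: +print(data['gameData']['gameTime'], data['activePlayer']['level']) +```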
+ +This is the kind of data that we extracted last year: + +![last year's livelabs](./images/last_year_live.png) +> **Note**: this data focused on the `championStats` object and used champion statistics (like armor, ability power...), which didn't allow for much flexibility + +This year's data keeps roughly the same characteristics: + +![this year's livelabs](./images/live_client_1.PNG) + +However, we now also consider some variables from other structures that we previously didn't, like `gameTime`, `level` and `scores`: + +![this year's livelabs](./images/live_client_2.PNG) +![this year's livelabs](./images/live_client_3.PNG) + +Note that the amount of information present in the Live Client API is much more limited than the info we can find in the standard League API. Since we have such a disparity in the number of stats and variables that can be extracted, we will need to be mindful of which variables to use if we want to "join" both datasets. + +Ideally, we'd want our more complex model to aid in the training process of our smaller, real-time model. Since we have much more data for the big model, we can use this strategy to train it with millions of examples, and then perform inference on the smaller model. + +At the end of the process, we will have a "real-time companion" that will tell us how well we're playing a specific champion. For that, we will need to calculate a specific player's performance. + +## Task 2: Full Comparison + +Between last year and this year, we have made the following progress: + +_**Last year**_'s models? A bit messy: + +* Used 24 variables +* Analyzed 50,000 Masters+ players +* Examined 200,000 Masters+ games +* Gained 1,288,773 matchup insights 📊 +* 3 models in total, but we didn't calculate a player's performance +* **Win Prediction Model** 🏆: struggling at 65% accuracy 🎯 + +But _**this year**_, we leveled up: + +* Now ⚡ using 107 variables ⚡ +* Analyzing 71,447 Masters+ players 🎮 +* Examining 3,260,537 Masters+ games 🕹️ +* Gaining 3,827,781 performance insights 💡😵 +* 12 model types (**36** models in total) 🚀 +* **Performance Calculator Model**: R^2^ (coefficient of determination) of **99.99%**, RMSE of **0.321** 📈 +* **Win Prediction Model** 🏆: **99.32%** accuracy (with a few hours of training) + +### New Issue Detected in Riot Games API + +Compared to last year, a new (and important) issue was detected: some API calls return different amounts of data. + +For example: + +* Match `LA1_1258959236` returns 107 variables in total +* Match `EUN1_3292157187` returns 125 variables, from which only 107 are going to be used +* Some other matches return a number of variables between 107 and 125. I suspect this has to do with when the match was played. I checked that the issue isn't limited to some regions; it's global. + +When problems like these arise, we need to work around these inconsistencies and harmonize our dataset (in this case, harmonize the data extraction process). You can find out how I've performed harmonization [in these lines of code](https://github.com/oracle-devrel/leagueoflegends-optimizer/blob/livelabs/src/process_performance.py#L167). + +## Task 3: Calculating Player's Performance + +Now that we have a harmonized dataset, we're ready to calculate a player's performance. But how do we begin? 
For that, I like to use an AutoML tool called [mljar-supervised](https://github.com/mljar/mljar-supervised), which lets me easily perform some automatic analysis of the dataset to predict the `win` variable (already provided by the API and present in our dataset). I can launch an experiment like this: + +![mljar output](./images/mljar-output.PNG) +> **Note**: Check out more information about the parameters I've used [here.](https://supervised.mljar.com/) + +This generated a lot of visualizations for me, which gave me an idea of what's necessary to accurately predict the `win` (and `calculated_player_performance`) variable: + +* For example, in my generated FastAI Neural Network Model (one of the models with the highest accuracy), I got to see the most important variables: + +![mljar output](./images/nn_importance.png) + +* It's also important, when making decisions, to check whether this model's performance is good or not. In our case: + +![learning curves from nn_model](./images/learning_curves.png) + +We can see that the loss of our ML model is low enough to conclude that it has taken a correct approach to predicting the target variable. We can also confirm that the model is highlighting the most important variables by checking other models' predictions as well. + +> **Note**: beware of **overfitting** if the validation metrics are too good! + +As we can see, the model is able to deduce whether we're going to win or not by looking at just four or five weighted variables. By comparing these stats to what we already have in the Live Client API, we'll determine which variables we can use from that data structure to arrive at the same conclusion. + +Considering that we're working with time-dependent data, we can extract the same statistics (deaths, kills...) from the variables mentioned above, but per minute. This will introduce the time dimension into our dataset: + +* deaths/min +* champLevel/min +* assists/min +* kills/min +* duration (already folded into the four variables above, since it appears in each of their denominators). + +Our Data Extraction pipeline is robust enough that, any time you download a new match using our repository, all these additional variables will be calculated for you. More specifically, if you look at the dataset, you will see some variables called `f1...f5`, which represent: + +* f1: `deaths_per_min` (deaths/min), +* f2: `k_a_per_min` (kills+assists/min), +* f3: `level_per_min` (xp/min), +* f4: `total_damage_per_min` (**NOT** present in the Live Client API yet -> not used), +* f5: `gold_per_min` (**NOT** present in the Live Client API yet -> not used) + +According to [this Medium post](https://maddcog.medium.com/measure-league-of-legends-performance-with-this-game-grade-778c2fe832cb), the optimal game grade / player performance is calculated with this formula: + +```bash + +Game Grade = 0.336 - (1.437 x Deaths per min) + (0.000117 x gold per min) + (0.443 x K_A per min) + (0.264 x Level per min) + (0.000013 x TD per min) + +``` + +> **Note**: a game grade closer to 1 means the player had a ‘winning’ performance, while a grade closer to 0 equates to a ‘losing’ performance. + +This can also be updated with our models, by taking the standardized coefficients for each of these variables' importances and creating our own formula.
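+ +To make the formula concrete, here's a short Python sketch of it (the coefficients are copied verbatim from the formula above; the example inputs are made up for illustration): + +```python +# Sketch of the game-grade formula from the Medium post referenced above. +def game_grade(deaths_per_min, gold_per_min, k_a_per_min, level_per_min, td_per_min): + # ~1 means a 'winning' performance, ~0 a 'losing' one + return (0.336 + - 1.437 * deaths_per_min + + 0.000117 * gold_per_min + + 0.443 * k_a_per_min + + 0.264 * level_per_min + + 0.000013 * td_per_min) + +# Illustrative 30-minute game: 4 deaths, 12000 gold, 15 kills+assists, level 15, 25000 total damage. +print(game_grade(4/30, 12000/30, 15/30, 15/30, 25000/30)) # ~0.55 +``` +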
+Adding creep score per minute didn't offer the authors any improvement to their model, so I chose to ignore it as well. However, only using Diamond matches in their training dataset increased the accuracy by 3% in the end. This is good for us, as we've only considered Masters+ games in our training dataset, with the hope of reducing variability in our data. + +In conclusion, there is no noticeable improvement from adding variables (e.g., creep score) or making the model more specific. Therefore, the simpler, generic model is what we'll aim for. So, we'll take the aforementioned variables (only three out of the five are present in the Live Client API) and build new models from them: + +* Input variables: deaths/min, kills+assists/min, xp/min. +* Output variables: model 1 will predict the `win` variable and model 2 will predict `calculated_player_performance` for any given player. + +## Task 4: Conclusion + +Now that we have things clear: + +* How many models we want to build +* Inputs / outputs of each model +* Expected RMSE / accuracy for each one of the models + +And given that we have some additional model explainability thanks to `mljar-supervised`, **now** we're ready to begin building our models and a pipeline in OCI Data Science. + +To build these models, we will also use AutoML, but a different tool. Whichever tool you choose must be parametrizable enough that, if you're unhappy with what's provided by default (like the default hyperparameters), you still have enough control over the AutoML library's implementation to modify them to your convenience. + +Finally, the flow of our problem will look something like this: + +![final diagram](./images/diagram.png) + +And **now**, it's time to keep extracting data periodically to improve the quality of our dataset, and to start building the model. + +You may now [proceed to the next lab](#next). 
+ +## Acknowledgements + +* **Author** - Nacho Martinez, Data Science Advocate @ DevRel +* **Contributors** - Victor Martin, Product Strategy Director +* **Last Updated By/Date** - May 24th, 2023 diff --git a/neural_networks_hero/understand_nn/images/activation_functions.gif b/neural_networks_hero/understand_nn/images/activation_functions.gif new file mode 100644 index 0000000..fe5a507 Binary files /dev/null and b/neural_networks_hero/understand_nn/images/activation_functions.gif differ diff --git a/neural_networks_hero/understand_nn/images/feedforward.png b/neural_networks_hero/understand_nn/images/feedforward.png new file mode 100644 index 0000000..1241a78 Binary files /dev/null and b/neural_networks_hero/understand_nn/images/feedforward.png differ diff --git a/neural_networks_hero/understand_nn/images/grey_white_matter.PNG b/neural_networks_hero/understand_nn/images/grey_white_matter.PNG new file mode 100644 index 0000000..92da8fb Binary files /dev/null and b/neural_networks_hero/understand_nn/images/grey_white_matter.PNG differ diff --git a/neural_networks_hero/understand_nn/images/inverse_loss.png b/neural_networks_hero/understand_nn/images/inverse_loss.png new file mode 100644 index 0000000..e8bc2b5 Binary files /dev/null and b/neural_networks_hero/understand_nn/images/inverse_loss.png differ diff --git a/neural_networks_hero/understand_nn/images/myelin.PNG b/neural_networks_hero/understand_nn/images/myelin.PNG new file mode 100644 index 0000000..b447106 Binary files /dev/null and b/neural_networks_hero/understand_nn/images/myelin.PNG differ diff --git a/neural_networks_hero/understand_nn/images/neural_network_visualization_1.gif b/neural_networks_hero/understand_nn/images/neural_network_visualization_1.gif new file mode 100644 index 0000000..3a2c048 Binary files /dev/null and b/neural_networks_hero/understand_nn/images/neural_network_visualization_1.gif differ diff --git a/neural_networks_hero/understand_nn/images/neural_network_visualization_2.gif b/neural_networks_hero/understand_nn/images/neural_network_visualization_2.gif new file mode 100644 index 0000000..dfc727c Binary files /dev/null and b/neural_networks_hero/understand_nn/images/neural_network_visualization_2.gif differ diff --git a/neural_networks_hero/understand_nn/images/neural_network_visualization_3.gif b/neural_networks_hero/understand_nn/images/neural_network_visualization_3.gif new file mode 100644 index 0000000..222726f Binary files /dev/null and b/neural_networks_hero/understand_nn/images/neural_network_visualization_3.gif differ diff --git a/neural_networks_hero/understand_nn/images/optimizations.gif b/neural_networks_hero/understand_nn/images/optimizations.gif new file mode 100644 index 0000000..478da6f Binary files /dev/null and b/neural_networks_hero/understand_nn/images/optimizations.gif differ diff --git a/neural_networks_hero/understand_nn/images/recurrent.png b/neural_networks_hero/understand_nn/images/recurrent.png new file mode 100644 index 0000000..16d02c0 Binary files /dev/null and b/neural_networks_hero/understand_nn/images/recurrent.png differ diff --git a/neural_networks_hero/understand_nn/understand_nns.md b/neural_networks_hero/understand_nn/understand_nns.md new file mode 100644 index 0000000..218c366 --- /dev/null +++ b/neural_networks_hero/understand_nn/understand_nns.md @@ -0,0 +1,112 @@ +# Lab 1: Understand Neural Networks + +## Introduction + +In this lab, we're going to take a look at how neural networks work and their characteristics. 
+ +Estimated Time: 20 minutes + +## Task 0: What are Neurons + +To briefly explain what's happening in the brain, we first need a little vocabulary. + +In short, our brain is composed of two types of matter: grey matter and white matter. + +![grey and white matter in a brain](./images/grey_white_matter.PNG) + +Our brains are made up of neurons, which look grey. As we grow up, more of those neurons get wrapped in a white substance called **myelin**. + +With myelin, neurons can talk to each other about **3000 times faster!** (like upgrading to fiber optic internet). + +![a picture of myelin](./images/myelin.PNG) + +This is a huge renovation that happens in your brain. + +- This process begins at the back of the brain before we're even born. +- By the time we're teens it has come a long way, but it hasn't finished. +- The part of the brain that's considered to develop last (up until around age 25) is the very front, which happens to be the part of the brain that, in many ways, makes us human. + +## Task 1: What is a Neural Network + +A neural network is a method in Machine Learning that *simulates* the human brain's way of thinking and teaches computers how to process data in a way that *looks* human. + +A neural network's (NN) implementation works much like neurons in the human brain: + +- We have artificial neurons called **perceptrons**. +- A perceptron, just like a biological neuron, connects with other neurons (in NNs, these links are called **connections**, the counterparts of axons) to transmit data. +- Each neuron transmits information and, as we grow older, neurons form stronger paths based on personal experience. In Neural Networks, this concept of "learning" is implemented through **backpropagation**. + +In NNs, perceptrons combine a series of inputs to produce an output. So we'll always have one input layer and one output layer; it's up to us programmers to decide how the layers in between communicate and in which order. + +There are two basic types of neural networks: + +- *Feedforward* NNs: data moves from an input layer to an output layer, and by the time data reaches the output layer, the NN has completed its job. +- *Recurrent* NNs: data doesn't stop at the output layer. Instead, the results of a layer are fed *back* into earlier layers, so the network keeps an internal state as it processes a sequence of inputs. (Not to be confused with *epochs*, which are full passes over the training dataset.) + +It's important to note that NNs base their gradient calculations on [the chain rule](https://tutorial.math.lamar.edu/classes/calcI/ChainRule.aspx), which requires a bit of background in calculus. In the early days of Machine Learning, people had to derive their gradients by hand. Nowadays, most modern libraries like TensorFlow implement their own *automatic gradient calculator* that does these calculations for us. + +> **Note**: this technique is called [automatic differentiation](https://blog.paperspace.com/pytorch-101-understanding-graphs-and-automatic-differentiation/).
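To make this concrete, here's a tiny example of automatic differentiation using TensorFlow's `tf.GradientTape` (PyTorch's `autograd` plays the same role). This is just an illustration, not part of the workshop code: the tape records the forward computation and applies the chain rule for us.

```python
import tensorflow as tf

# f(x) = x^2, so by the chain rule df/dx = 2x.
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x

# The tape replays the computation backwards to produce the gradient: 6.0.
print(tape.gradient(y, x))
```

Here's an image of a feedforward NN, where we see only forward steps from the inputs (below) towards the outputs (above):

![feedforward](images/feedforward.png)

And here's an image of a recurrent NN. Note that if a NN has more than one hidden layer, we can call it a **deep NN**.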
+ +![recurrent](images/recurrent.png) + +## Task 2: Visualize a Neural Network + +TensorFlow has created an [open-source playground](https://playground.tensorflow.org/) to allow anybody to try NNs visually. + +We'll also take a look at [another open-source tool](https://poloclub.github.io/cnn-explainer/) for visualizing one type of Neural Network: Convolutional Neural Networks (CNNs). + +![neural network playground](images/neural_network_visualization_1.gif) + +The example we can see in this visualization is very simple: it only has one input layer, one hidden layer, and one output layer. + +In cases like text or image analysis, things get a bit more complicated; usually, several pre-built blocks of layers (groups of layers that are known to work well together) are used to analyze this kind of data: + +![pre-built layers](images/neural_network_visualization_2.gif) + +> **Note**: the hidden section in the middle usually contains pre-trained blocks of layers that have been proven to work well for a specific problem, so it's more realistic to find networks with many layers, which makes the Neural Network very complex: + +![complex neural network](images/neural_network_visualization_3.gif) + +In this workshop, we're going to work specifically with an open-source library called **fastai**, which simplifies the process of creating Neural Networks from scratch. + +## Task 3: Neural Networks Characteristics: Hyperparameters + +Neural Networks are *customizable*: we can shape how our NN trains and behaves through **hyperparameters**. There are several ways to customize a NN (the sketch after this list shows where each one appears in code): + +- *Activation function*: this decides whether a neuron should be activated (fire) or not. + + Choosing the right activation function requires some care. For example, with activation functions like the hyperbolic tangent or the sigmoid (see the figure below for the most common modern activation functions), you may suffer from issues like the *vanishing gradients problem*, depending on the problem and dataset. In practice, only *some* activation functions are a good fit for a given problem. + + ![activation functions](images/activation_functions.gif) + + > **Note**: if you're particularly interested in checking out all activation functions (even variants), check out [this website](https://dashee87.github.io/deep%20learning/visualising-activation-functions-in-neural-networks/). Credit goes to [David Sheehan](https://github.com/dashee87). + +- *Loss Optimizers*: these are algorithms that control how a Neural Network updates its attributes (like the weights of each layer, or the speed at which the NN learns) in order to reduce the loss. + + In the following figure, we see five of the most widely used optimizers today: + + ![NN optimizers](images/optimizations.gif) + + > **Note**: The loss is the amount of error present in the NN. The smaller this number, the more accurate the model; loss moves inversely to a NN's precision: + + ![loss inverse](images/inverse_loss.png) + +- *Learning rate*: this controls how quickly the model adapts to the problem. The higher the learning rate, the faster the loss will drop, but too high a value can make training unstable and predictions inaccurate. The ideal learning rate is neither too high nor too low.
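As promised, here's a minimal sketch (using Keras rather than fastai, purely for illustration) of where each of these hyperparameters lives in code. The layer sizes and the 3-feature input are arbitrary assumptions:

```python
import tensorflow as tf

# A small feedforward network for a hypothetical 3-feature binary task.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(16, activation="relu"),    # hyperparameter: activation function
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # squashes the output into [0, 1]
])

# Hyperparameters: the optimizer and its learning rate control how the loss is minimized.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```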
+ +Now that we have a basic understanding of what composes a Neural Network and its characteristics, we'll talk about how this relates to League of Legends (the videogame) and how we can use a Neural Network to help us with decision-making within the game. + +You may now [proceed to the next lab](#next). + +## Acknowledgements + +- **Author** - Nacho Martinez, Data Science Advocate @ DevRel +- **Contributors** - Victor Martin, Product Strategy Director +- **Last Updated By/Date** - May 29th, 2023 diff --git a/workshops/mask_detection_labeling/index.html b/workshops/mask_detection_labeling/index.html index db0dd4d..aaac634 100644 --- a/workshops/mask_detection_labeling/index.html +++ b/workshops/mask_detection_labeling/index.html @@ -8,19 +8,19 @@ Oracle LiveLabs
diff --git a/workshops/neural_networks_hero/manifest.json b/workshops/neural_networks_hero/manifest.json new file mode 100644 index 0000000..874a029 --- /dev/null +++ b/workshops/neural_networks_hero/manifest.json @@ -0,0 +1,70 @@ +{ + "workshoptitle": "Neural Networks Hero", + "help": "ignacio.m.martinez@oracle.com", + "tutorials": [ + { + "title": "League of Legends Introduction", + "description": "An introduction to League of Legends", + "filename": "../../neural_networks_hero/aboutworkshop/about.md", + "type": "dbcs" + }, + { + "title": "Introduction", + "description": "What the workshop is about", + "filename": "../../neural_networks_hero/intro/intro.md", + "type": "dbcs" + }, + { + "title": "Get Started", + "description": "Prerequisites for LiveLabs (Oracle-owned tenancies). The title of the lab and the Contents Menu title (the title above) match for Prerequisite lab. This lab is always first.", + "filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/cloud-login.md" + }, + { + "title": "Infrastructure", + "description": "Set up the OCI infrastructure for the workshop", + "filename": "https://oracle-devrel.github.io/leagueoflegends-optimizer/hols/dataextraction/infra/infra.md", + "type": "dbcs" + }, + { + "title": "Lab 1: Understand Neural Networks", + "description": "Understand how neural networks work and their characteristics", + "filename": "../../neural_networks_hero/understand_nn/understand_nns.md", + "type": "dbcs" + }, + { + "title": "Lab 2: Think about the Problem", + "description": "How to think like a Data Scientist to solve the problem", + "filename": "https://oracle-devrel.github.io/leagueoflegends-optimizer/hols/dataextraction/the_problem/problem.md", + "type": "dbcs" + }, + { + "title": "Lab 3: Create a Model", + "description": "Creating the model", + "filename": "https://oracle-devrel.github.io/leagueoflegends-optimizer/hols/dataextraction/creatingmodel/creatingmodel.md", + "type": "dbcs" + }, + { + "title": "Lab 4: Augment Dataset & Train Model", + "description": "Augmenting the dataset and training the model", + "filename": "../../neural_networks_hero/augment_train/augment_train.md", + "type": "dbcs" + }, + { + "title": "Lab 5: Inference (Real-Time Predictions)", + "description": "Using the model to make real-time predictions", + "filename": "../../neural_networks_hero/infer/infer.md", + "type": "dbcs" + }, + { + "title": "The End", + "description": "Bye Bye", + "filename": "../../neural_networks_hero/end/end.md", + "type": "dbcs" + }, + { + "title": "Need Help?", + "description": "Solutions to Common Problems and Directions for Receiving Live Help", + "filename": "https://oracle-livelabs.github.io/common/labs/need-help/need-help-freetier.md" + } + ] +}