
Commit

Merge branch 'tf-only-export' into tf-android
zldrobit committed Jun 17, 2021
2 parents 1de825f + 972c6a2 commit 522d65e
Showing 54 changed files with 3,182 additions and 1,266 deletions.
5 changes: 5 additions & 0 deletions .github/FUNDING.yml
@@ -0,0 +1,5 @@
# These are supported funding model platforms

github: glenn-jocher
patreon: ultralytics
open_collective: ultralytics
15 changes: 7 additions & 8 deletions .github/workflows/ci-testing.yml
@@ -2,12 +2,10 @@ name: CI CPU testing

on: # https://help.github.com/en/actions/reference/events-that-trigger-workflows
push:
branches: [ master ]
branches: [ master, develop ]
pull_request:
# The branches below must be a subset of the branches above
branches: [ master ]
schedule:
- cron: '0 0 * * *' # Runs at 00:00 UTC every day
branches: [ master, develop ]

jobs:
cpu-tests:
@@ -66,14 +64,15 @@ jobs:
di=cpu # inference devices # define device
# train
python train.py --img 256 --batch 8 --weights weights/${{ matrix.model }}.pt --cfg models/${{ matrix.model }}.yaml --epochs 1 --device $di
python train.py --img 128 --batch 16 --weights weights/${{ matrix.model }}.pt --cfg models/${{ matrix.model }}.yaml --epochs 1 --device $di
# detect
python detect.py --weights weights/${{ matrix.model }}.pt --device $di
python detect.py --weights runs/train/exp/weights/last.pt --device $di
# test
python test.py --img 256 --batch 8 --weights weights/${{ matrix.model }}.pt --device $di
python test.py --img 256 --batch 8 --weights runs/train/exp/weights/last.pt --device $di
python test.py --img 128 --batch 16 --weights weights/${{ matrix.model }}.pt --device $di
python test.py --img 128 --batch 16 --weights runs/train/exp/weights/last.pt --device $di
python hubconf.py # hub
python models/yolo.py --cfg models/${{ matrix.model }}.yaml # inspect
python models/export.py --img 256 --batch 1 --weights weights/${{ matrix.model }}.pt # export
python models/export.py --img 128 --batch 1 --weights weights/${{ matrix.model }}.pt # export
shell: bash
13 changes: 7 additions & 6 deletions .github/workflows/greetings.yml
@@ -11,12 +11,12 @@ jobs:
repo-token: ${{ secrets.GITHUB_TOKEN }}
pr-message: |
👋 Hello @${{ github.actor }}, thank you for submitting a 🚀 PR! To allow your work to be integrated as seamlessly as possible, we advise you to:
- ✅ Verify your PR is **up-to-date with origin/master.** If your PR is behind origin/master update by running the following, replacing 'feature' with the name of your local branch:
- ✅ Verify your PR is **up-to-date with origin/master.** If your PR is behind origin/master an automatic [GitHub actions](https://github.com/ultralytics/yolov5/blob/master/.github/workflows/rebase.yml) rebase may be attempted by including the /rebase command in a comment body, or by running the following code, replacing 'feature' with the name of your local branch:
```bash
git remote add upstream https://github.com/ultralytics/yolov5.git
git fetch upstream
git checkout feature # <----- replace 'feature' with local branch name
git rebase upstream/master
git rebase upstream/develop
git push -u origin -f
```
- ✅ Verify all Continuous Integration (CI) **checks are passing**.
@@ -42,10 +42,11 @@ jobs:
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):
- **Google Colab Notebook** with free GPU: <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
- **Kaggle Notebook** with free GPU: [https://www.kaggle.com/ultralytics/yolov5](https://www.kaggle.com/ultralytics/yolov5)
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)
- **Docker Image** https://hub.docker.com/r/ultralytics/yolov5. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) ![Docker Pulls](https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker)
- **Google Colab and Kaggle** notebooks with free GPU: <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)
- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart)
- **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) <a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a>
## Status
23 changes: 10 additions & 13 deletions Dockerfile
@@ -1,14 +1,14 @@
# Start FROM Nvidia PyTorch image https://ngc.nvidia.com/catalog/containers/nvidia:pytorch
FROM nvcr.io/nvidia/pytorch:20.12-py3
FROM nvcr.io/nvidia/pytorch:21.03-py3

# Install linux packages
RUN apt update && apt install -y screen libgl1-mesa-glx
RUN apt update && apt install -y zip htop screen libgl1-mesa-glx

# Install python dependencies
RUN pip install --upgrade pip
COPY requirements.txt .
RUN pip install -r requirements.txt
RUN pip install gsutil
RUN python -m pip install --upgrade pip
RUN pip uninstall -y nvidia-tensorboard nvidia-tensorboard-plugin-dlprof
RUN pip install --no-cache -r requirements.txt coremltools onnx gsutil notebook

# Create working directory
RUN mkdir -p /usr/src/app
@@ -17,11 +17,8 @@ WORKDIR /usr/src/app
# Copy contents
COPY . /usr/src/app

# Copy weights
#RUN python3 -c "from models import *; \
#attempt_download('weights/yolov5s.pt'); \
#attempt_download('weights/yolov5m.pt'); \
#attempt_download('weights/yolov5l.pt')"
# Set environment variables
ENV HOME=/usr/src/app


# --------------------------------------------------- Extras Below ---------------------------------------------------
@@ -40,13 +37,13 @@ COPY . /usr/src/app
# sudo docker kill $(sudo docker ps -q)

# Kill all image-based
# sudo docker kill $(sudo docker ps -a -q --filter ancestor=ultralytics/yolov5:latest)
# sudo docker kill $(sudo docker ps -qa --filter ancestor=ultralytics/yolov5:latest)

# Bash into running container
# sudo docker container exec -it ba65811811ab bash
# sudo docker exec -it 5a9b5863d93d bash

# Bash into stopped container
# sudo docker commit 092b16b25c5b usr/resume && sudo docker run -it --gpus all --ipc=host -v "$(pwd)"/coco:/usr/src/coco --entrypoint=sh usr/resume
# id=$(sudo docker ps -qa) && sudo docker start $id && sudo docker exec -it $id bash

# Send weights to GCP
# python -c "from utils.general import *; strip_optimizer('runs/train/exp0_*/weights/best.pt', 'tmp.pt')" && gsutil cp tmp.pt gs://*.pt
2 changes: 1 addition & 1 deletion README.md
@@ -23,7 +23,7 @@ https://github.com/ultralytics/yolov5.git
### 2. Install requirements
```
pip install -r requirements.txt
pip install tensorflow==2.4.0
pip install tensorflow==2.4.1
```

### 3. Convert and verify
Empty file added __init__.py
55 changes: 55 additions & 0 deletions data/GlobalWheat2020.yaml
@@ -0,0 +1,55 @@
# Global Wheat 2020 dataset http://www.global-wheat.com/
# Train command: python train.py --data GlobalWheat2020.yaml
# Default dataset location is next to YOLOv5:
# /parent_folder
# /datasets/GlobalWheat2020
# /yolov5


# train and val data as 1) directory: path/images/, 2) file: path/images.txt, or 3) list: [path1/images/, path2/images/]
train: # 3422 images
- ../datasets/GlobalWheat2020/images/arvalis_1
- ../datasets/GlobalWheat2020/images/arvalis_2
- ../datasets/GlobalWheat2020/images/arvalis_3
- ../datasets/GlobalWheat2020/images/ethz_1
- ../datasets/GlobalWheat2020/images/rres_1
- ../datasets/GlobalWheat2020/images/inrae_1
- ../datasets/GlobalWheat2020/images/usask_1

val: # 748 images (WARNING: train set contains ethz_1)
- ../datasets/GlobalWheat2020/images/ethz_1

test: # 1276 images
- ../datasets/GlobalWheat2020/images/utokyo_1
- ../datasets/GlobalWheat2020/images/utokyo_2
- ../datasets/GlobalWheat2020/images/nau_1
- ../datasets/GlobalWheat2020/images/uq_1

# number of classes
nc: 1

# class names
names: [ 'wheat_head' ]


# download command/URL (optional) --------------------------------------------------------------------------------------
download: |
from utils.general import download, Path
# Download
dir = Path('../datasets/GlobalWheat2020') # dataset directory
urls = ['https://zenodo.org/record/4298502/files/global-wheat-codalab-official.zip',
'https://github.com/ultralytics/yolov5/releases/download/v1.0/GlobalWheat2020_labels.zip']
download(urls, dir=dir)
# Make Directories
for p in 'annotations', 'images', 'labels':
(dir / p).mkdir(parents=True, exist_ok=True)
# Move
for p in 'arvalis_1', 'arvalis_2', 'arvalis_3', 'ethz_1', 'rres_1', 'inrae_1', 'usask_1', \
'utokyo_1', 'utokyo_2', 'nau_1', 'uq_1':
(dir / p).rename(dir / 'images' / p) # move to /images
f = (dir / p).with_suffix('.json') # json file
if f.exists():
f.rename((dir / 'annotations' / p).with_suffix('.json')) # move to /annotations
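
The `download:` field above is a plain Python snippet stored as a YAML block scalar. A minimal sketch of running it by hand, assuming PyYAML is installed and the working directory is the yolov5/ repository root so that `utils.general` is importable:

```python
# Minimal sketch: execute the dataset YAML's download script manually.
# Assumes PyYAML is installed and this runs from the yolov5/ repository root,
# since the snippet itself imports from utils.general.
import yaml

with open('data/GlobalWheat2020.yaml') as f:
    data = yaml.safe_load(f)  # dataset definition as a dict

exec(data['download'])  # runs the download/convert code shown above
```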
52 changes: 52 additions & 0 deletions data/SKU-110K.yaml
@@ -0,0 +1,52 @@
# SKU-110K retail items dataset https://github.com/eg4000/SKU110K_CVPR19
# Train command: python train.py --data SKU-110K.yaml
# Default dataset location is next to YOLOv5:
# /parent_folder
# /datasets/SKU-110K
# /yolov5


# train and val data as 1) directory: path/images/, 2) file: path/images.txt, or 3) list: [path1/images/, path2/images/]
train: ../datasets/SKU-110K/train.txt # 8219 images
val: ../datasets/SKU-110K/val.txt # 588 images
test: ../datasets/SKU-110K/test.txt # 2936 images

# number of classes
nc: 1

# class names
names: [ 'object' ]


# download command/URL (optional) --------------------------------------------------------------------------------------
download: |
import shutil
from tqdm import tqdm
from utils.general import np, pd, Path, download, xyxy2xywh
# Download
datasets = Path('../datasets') # download directory
urls = ['http://trax-geometry.s3.amazonaws.com/cvpr_challenge/SKU110K_fixed.tar.gz']
download(urls, dir=datasets, delete=False)
# Rename directories
dir = (datasets / 'SKU-110K')
if dir.exists():
shutil.rmtree(dir)
(datasets / 'SKU110K_fixed').rename(dir) # rename dir
(dir / 'labels').mkdir(parents=True, exist_ok=True) # create labels dir
# Convert labels
names = 'image', 'x1', 'y1', 'x2', 'y2', 'class', 'image_width', 'image_height' # column names
for d in 'annotations_train.csv', 'annotations_val.csv', 'annotations_test.csv':
x = pd.read_csv(dir / 'annotations' / d, names=names).values # annotations
images, unique_images = x[:, 0], np.unique(x[:, 0])
with open((dir / d).with_suffix('.txt').__str__().replace('annotations_', ''), 'w') as f:
f.writelines(f'./images/{s}\n' for s in unique_images)
for im in tqdm(unique_images, desc=f'Converting {dir / d}'):
cls = 0 # single-class dataset
with open((dir / 'labels' / im).with_suffix('.txt'), 'a') as f:
for r in x[images == im]:
w, h = r[6], r[7] # image width, height
xywh = xyxy2xywh(np.array([[r[1] / w, r[2] / h, r[3] / w, r[4] / h]]))[0] # instance
f.write(f"{cls} {xywh[0]:.5f} {xywh[1]:.5f} {xywh[2]:.5f} {xywh[3]:.5f}\n") # write label
61 changes: 61 additions & 0 deletions data/VisDrone.yaml
@@ -0,0 +1,61 @@
# VisDrone2019-DET dataset https://github.com/VisDrone/VisDrone-Dataset
# Train command: python train.py --data VisDrone.yaml
# Default dataset location is next to YOLOv5:
# /parent_folder
# /VisDrone
# /yolov5


# train and val data as 1) directory: path/images/, 2) file: path/images.txt, or 3) list: [path1/images/, path2/images/]
train: ../VisDrone/VisDrone2019-DET-train/images # 6471 images
val: ../VisDrone/VisDrone2019-DET-val/images # 548 images
test: ../VisDrone/VisDrone2019-DET-test-dev/images # 1610 images

# number of classes
nc: 10

# class names
names: [ 'pedestrian', 'people', 'bicycle', 'car', 'van', 'truck', 'tricycle', 'awning-tricycle', 'bus', 'motor' ]


# download command/URL (optional) --------------------------------------------------------------------------------------
download: |
from utils.general import download, os, Path
def visdrone2yolo(dir):
from PIL import Image
from tqdm import tqdm
def convert_box(size, box):
# Convert VisDrone box to YOLO xywh box
dw = 1. / size[0]
dh = 1. / size[1]
return (box[0] + box[2] / 2) * dw, (box[1] + box[3] / 2) * dh, box[2] * dw, box[3] * dh
(dir / 'labels').mkdir(parents=True, exist_ok=True) # make labels directory
pbar = tqdm((dir / 'annotations').glob('*.txt'), desc=f'Converting {dir}')
for f in pbar:
img_size = Image.open((dir / 'images' / f.name).with_suffix('.jpg')).size
lines = []
with open(f, 'r') as file: # read annotation.txt
for row in [x.split(',') for x in file.read().strip().splitlines()]:
if row[4] == '0': # VisDrone 'ignored regions' class 0
continue
cls = int(row[5]) - 1
box = convert_box(img_size, tuple(map(int, row[:4])))
lines.append(f"{cls} {' '.join(f'{x:.6f}' for x in box)}\n")
with open(str(f).replace(os.sep + 'annotations' + os.sep, os.sep + 'labels' + os.sep), 'w') as fl:
fl.writelines(lines) # write label.txt
# Download
dir = Path('../VisDrone') # dataset directory
urls = ['https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-train.zip',
'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-val.zip',
'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-test-dev.zip',
'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-test-challenge.zip']
download(urls, dir=dir)
# Convert
for d in 'VisDrone2019-DET-train', 'VisDrone2019-DET-val', 'VisDrone2019-DET-test-dev':
visdrone2yolo(dir / d) # convert VisDrone annotations to YOLO labels
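
For clarity, `convert_box` above maps a VisDrone annotation (left, top, width, height in pixels) to YOLO's normalized (x_center, y_center, width, height); a quick worked example with made-up numbers:

```python
# Worked example of the convert_box helper above, using hypothetical values.
def convert_box(size, box):
    # size = (image_width, image_height); box = (left, top, width, height) in pixels
    dw, dh = 1. / size[0], 1. / size[1]
    return (box[0] + box[2] / 2) * dw, (box[1] + box[3] / 2) * dh, box[2] * dw, box[3] * dh

print(convert_box((1920, 1080), (100, 200, 50, 80)))
# -> approximately (0.0651, 0.2222, 0.0260, 0.0741)
```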
21 changes: 21 additions & 0 deletions data/argoverse_hd.yaml
@@ -0,0 +1,21 @@
# Argoverse-HD dataset (ring-front-center camera) http://www.cs.cmu.edu/~mengtial/proj/streaming/
# Train command: python train.py --data argoverse_hd.yaml
# Default dataset location is next to YOLOv5:
# /parent_folder
# /argoverse
# /yolov5


# download command/URL (optional)
download: bash data/scripts/get_argoverse_hd.sh

# train and val data as 1) directory: path/images/, 2) file: path/images.txt, or 3) list: [path1/images/, path2/images/]
train: ../argoverse/Argoverse-1.1/images/train/ # 39384 images
val: ../argoverse/Argoverse-1.1/images/val/  # 15062 images
test: ../argoverse/Argoverse-1.1/images/test/ # Submit to: https://eval.ai/web/challenges/challenge-page/800/overview

# number of classes
nc: 8

# class names
names: [ 'person', 'bicycle', 'car', 'motorcycle', 'bus', 'truck', 'traffic_light', 'stop_sign' ]
4 changes: 2 additions & 2 deletions data/coco.yaml
@@ -1,6 +1,6 @@
# COCO 2017 dataset http://cocodataset.org
# Train command: python train.py --data coco.yaml
# Default dataset location is next to /yolov5:
# Default dataset location is next to YOLOv5:
# /parent_folder
# /coco
# /yolov5
@@ -30,6 +30,6 @@ names: [ 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', '

# Print classes
# with open('data/coco.yaml') as f:
# d = yaml.load(f, Loader=yaml.FullLoader) # dict
# d = yaml.safe_load(f) # dict
# for i, x in enumerate(d['names']):
# print(i, x)
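
The commented "Print classes" snippet above is runnable once uncommented; a standalone version, assuming PyYAML and a checkout of the repository:

```python
# Runnable version of the commented "Print classes" snippet above.
# Assumes PyYAML is installed and this runs from the yolov5/ repository root.
import yaml

with open('data/coco.yaml') as f:
    d = yaml.safe_load(f)  # safe_load avoids constructing arbitrary Python objects

for i, x in enumerate(d['names']):
    print(i, x)  # class index and class name
```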
2 changes: 1 addition & 1 deletion data/coco128.yaml
@@ -1,6 +1,6 @@
# COCO 2017 dataset http://cocodataset.org - first 128 training images
# Train command: python train.py --data coco128.yaml
# Default dataset location is next to /yolov5:
# Default dataset location is next to YOLOv5:
# /parent_folder
# /coco128
# /yolov5
28 changes: 28 additions & 0 deletions data/hyp.finetune_objects365.yaml
@@ -0,0 +1,28 @@
lr0: 0.00258
lrf: 0.17
momentum: 0.779
weight_decay: 0.00058
warmup_epochs: 1.33
warmup_momentum: 0.86
warmup_bias_lr: 0.0711
box: 0.0539
cls: 0.299
cls_pw: 0.825
obj: 0.632
obj_pw: 1.0
iou_t: 0.2
anchor_t: 3.44
anchors: 3.2
fl_gamma: 0.0
hsv_h: 0.0188
hsv_s: 0.704
hsv_v: 0.36
degrees: 0.0
translate: 0.0902
scale: 0.491
shear: 0.0
perspective: 0.0
flipud: 0.0
fliplr: 0.5
mosaic: 1.0
mixup: 0.0
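
train.py consumes a file like this through its --hyp flag; a minimal sketch of loading and inspecting the values directly, assuming PyYAML:

```python
# Minimal sketch: load the fine-tuning hyperparameters above (assumes PyYAML
# and a checkout of the repository).
import yaml

with open('data/hyp.finetune_objects365.yaml') as f:
    hyp = yaml.safe_load(f)  # hyperparameter name -> value

print(hyp['lr0'], hyp['lrf'])       # initial learning rate and final LR fraction
print(hyp['mosaic'], hyp['mixup'])  # mosaic and mixup augmentation probabilities
```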