
Extract more information from the config.yaml file with DeepLabCutInterface #1030

Open
h-mayorquin opened this issue Aug 26, 2024 · 0 comments


Currently, we only extract the task (used as the session description) and the scorer (used as the experimenter) from the config file:

```python
        self._config_file = _read_config(config_file_path=config_file_path)
        self.subject_name = subject_name
        self.verbose = verbose
        super().__init__(file_path=file_path, config_file_path=config_file_path)

    def get_metadata(self):
        metadata = super().get_metadata()
        metadata["NWBFile"].update(
            session_description=self._config_file["Task"],
            experimenter=[self._config_file["scorer"]],
        )
        return metadata
```

We will make this optional.
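As a rough sketch of what the opt-out could look like (the `extract_config_metadata` flag and the minimal stand-in class below are hypothetical, not existing neuroconv API):

```python
# Hypothetical sketch of an opt-out for config-derived metadata.
# The extract_config_metadata flag does not exist in neuroconv;
# it is shown here only to illustrate the idea.
class DeepLabCutInterfaceSketch:
    def __init__(self, config_file: dict, extract_config_metadata: bool = True):
        self._config_file = config_file
        self.extract_config_metadata = extract_config_metadata

    def get_metadata(self) -> dict:
        metadata = {"NWBFile": {}}  # stand-in for super().get_metadata()
        if self.extract_config_metadata:
            metadata["NWBFile"].update(
                session_description=self._config_file.get("Task", ""),
                experimenter=[self._config_file["scorer"]]
                if "scorer" in self._config_file
                else [],
            )
        return metadata
```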

We are on a crunch now for a project, but in a future project that uses DeepLabCut's config.yaml we should figure out what more metadata could be extracted from those configuration files. From the test data it seems that at least the names of the body parts could be included as metadata somewhere.

These are the contents of the config.yaml file from the test data that we have on GIN:

```yaml
# Project definitions (do not edit)
Task: openfield
scorer: Pranav
date: Aug20
multianimalproject: false
identity:

# Project path (change when moving around)
project_path: /Users/sakshamsharda/Documents/NWB/GIN/behavior_testing_data/DLC

# Annotation data set configuration (and individual video cropping parameters)
video_sets:
  /Data/openfield-Pranav-2018-08-20/videos/m1s1.mp4:
    crop: 0, 640, 0, 480
  /Data/openfield-Pranav-2018-08-20/videos/m1s2.mp4:
    crop: 0, 640, 0, 480
  /Data/openfield-Pranav-2018-08-20/videos/m2s1.mp4:
    crop: 0, 640, 0, 480
  /Data/openfield-Pranav-2018-08-20/videos/m3s1.mp4:
    crop: 0, 640, 0, 480
  /Data/openfield-Pranav-2018-08-20/videos/m3s2.mp4:
    crop: 0, 640, 0, 480
  /Data/openfield-Pranav-2018-08-20/videos/m4s1.mp4:
    crop: 0, 640, 0, 480
  /Data/openfield-Pranav-2018-08-20/videos/m5s1.mp4:
    crop: 0, 800, 0, 800
  /Data/openfield-Pranav-2018-08-20/videos/m6s1.mp4:
    crop: 0, 800, 0, 800
  /Data/openfield-Pranav-2018-08-20/videos/m6s2.mp4:
    crop: 0, 800, 0, 800
  /Data/openfield-Pranav-2018-08-20/videos/m7s1.mp4:
    crop: 0, 800, 0, 800
  /Data/openfield-Pranav-2018-08-20/videos/m7s2.mp4:
    crop: 0, 800, 0, 800
  /Data/openfield-Pranav-2018-08-20/videos/m7s3.mp4:
    crop: 0, 800, 0, 800
  /Data/openfield-Pranav-2018-08-20/videos/m8s1.mp4:
    crop: 0, 800, 0, 800

bodyparts:
- snout
- leftear
- rightear
- tailbase

start: 0
stop: 1
numframes2pick: 20

# Plotting configuration
skeleton: []
skeleton_color: black
pcutoff: 0.4
dotsize: 8
alphavalue: 0.7
colormap: jet

# Training,Evaluation and Analysis configuration
TrainingFraction:
- 0.95
iteration: 1
default_net_type: resnet_50
default_augmenter: default
snapshotindex: -1
batch_size: 1

# Cropping Parameters (for analysis and outlier frame detection)
cropping: false
# if cropping is true for analysis, then set the values here:
x1: 0
x2: 640
y1: 277
y2: 624

# Refinement configuration (parameters from annotation dataset configuration also relevant in this stage)
corner2move2:
- 50
- 50
move2corner: true
croppedtraining:
```
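From a dict like the one above (e.g. the result of neuroconv's `_read_config`, or `yaml.safe_load` on config.yaml), the extra fields could be collected along these lines. The `"PoseEstimation"` key and its layout are assumptions about where the body parts and video list might eventually live, not an existing metadata schema:

```python
# Sketch of pulling extra metadata out of an already-loaded
# DeepLabCut config dict. The "PoseEstimation" key and its layout
# are hypothetical placeholders for where this could be surfaced.
def extract_dlc_metadata(config: dict) -> dict:
    metadata = {
        "NWBFile": {
            "session_description": config.get("Task", ""),
            "experimenter": [config["scorer"]] if config.get("scorer") else [],
        },
        "PoseEstimation": {
            "bodyparts": config.get("bodyparts", []),
            "videos": sorted(config.get("video_sets") or {}),
        },
    }
    return metadata


example = {
    "Task": "openfield",
    "scorer": "Pranav",
    "bodyparts": ["snout", "leftear", "rightear", "tailbase"],
    "video_sets": {
        "/Data/openfield-Pranav-2018-08-20/videos/m1s1.mp4": {"crop": "0, 640, 0, 480"},
    },
}
print(extract_dlc_metadata(example))
```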