It runs in a loop: it checks for checkpoints, evaluates them, then sends the results to wandb.

Example of a result: https://wandb.ai/rom1504/eval_openclip/runs/2znj3nwy?workspace=user-rom1504

This is very convenient for getting early results during training.

It would be great to have a clean script for this.

Related issue: LAION-AI/CLIP_benchmark#2

Here's a bad way to do it:
```sh
while true; do
  for i in $(ls -t /fsx/rom1504/open_clip/src/logs/*B*/checkpoints/*); do
    bash eval.sh "$i"
  done
  sleep 300
done
```
```sh
# eval.sh: evaluate one checkpoint, skipping it if results already exist
ev=eval_$(basename "$1")
if [ -f "$ev" ]; then
  true  # already evaluated, nothing to do
else
  echo "$ev does not exist."
  python -m training.main \
    --imagenet-val /fsx/rom1504/imagenetval/imagenet_validation \
    --model ViT-B-32 \
    --precision amp_bfloat16 \
    --batch-size 512 \
    --pretrained "$1" &> "$ev"
  python3 eval_to_wandb_simple.py
fi
```
```python
import glob

import wandb

# TODO: send only new results somehow
files = list(glob.glob("eval_epoch_*.pt"))
metrics = []
for filename in files:
    # filenames look like eval_epoch_12.pt
    epoch = int(filename.split("_")[-1].split(".")[0])
    with open(filename, "r") as f:
        c = f.read()
    # keep only the line with the imagenet zero-shot results
    good = [l for l in c.split("\n") if "imagenet-zero" in l]
    if len(good) != 1:
        continue
    top1 = float(good[0].split("\t")[0].split(" ")[-1])
    top5 = float(good[0].split("\t")[1].split(" ")[-1])
    metrics.append([epoch, top1, top5])
metrics.sort(key=lambda x: x[0])
print(metrics)
wandb.init(project="eval_openclip", id="h1cwb54b", resume="allow")
for epoch, top1, top5 in metrics:
    wandb.log({'top1': top1, 'top5': top5, 'step': epoch})
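For a cleaner script, the "send only new" part could be handled by a small helper that tracks which checkpoints were already processed, instead of re-evaluating everything each pass. A minimal sketch (the function name and call pattern here are made up for illustration, not part of any existing script):

```python
import glob
import os


def find_new_checkpoints(pattern, seen):
    """Return checkpoint paths matching `pattern` that are not in `seen`,
    oldest first, so evaluation happens in training order."""
    paths = [p for p in glob.glob(pattern) if p not in seen]
    # sort by mtime, with the path as a deterministic tie-breaker
    paths.sort(key=lambda p: (os.path.getmtime(p), p))
    return paths
```

The polling loop would then call this every few minutes, evaluate only the returned paths, and add them to `seen`, so each checkpoint is evaluated and logged exactly once.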
Thanks!