GoldFish LLaMa 2 Inference #35

Open
insafim opened this issue Aug 12, 2024 · 0 comments
insafim commented Aug 12, 2024

{'default': 'configs/datasets/video_chatgpt/default.yaml'}
{'default': 'path to the config file'}
using openai: True
Initialization Model

model arch mini_gpt4_llama_v2
model cls <class 'minigpt4.models.mini_gpt4_llama_v2.MiniGPT4_Video'>
dataset name video_chatgpt
Error setting attribute device with value cuda
Llama model
token pooling True
vit precision fp16
freeze the vision encoder
Loading VIT Done
Loading LLAMA
self.low_resource True
Loading checkpoint shards: 100%|██████████| 2/2 [00:09<00:00, 4.85s/it]
trainable params: 33,554,432 || all params: 6,771,970,048 || trainable%: 0.4955
Loading LLAMA Done
Load Minigpt-4-LLM Checkpoint: /share/data/drive_4/insaf/checkpoints/MiniGPT4-Video/checkpoints/video_captioning_llama_checkpoint_last.pth
{'name': 'blip2_image_train', 'image_size': 224}
Initialization Finished
Loading checkpoint shards: 100%|██████████| 4/4 [00:09<00:00, 2.42s/it]
Time taken to load model: 73.33053708076477
Video name: 203
Video duration: 480.00 seconds
Long video
External memory is ready
Loading embeddings from pkl file
Traceback (most recent call last):
File "/share/data/drive_4/insaf/MiniGPT4-video/goldfish_inference.py", line 59, in <module>
pred=process_video(processed_video_path, args.add_subtitles, args.question)
File "/share/data/drive_4/insaf/MiniGPT4-video/goldfish_inference.py", line 40, in process_video
result = goldfish_lv.inference(video_path, has_subtitles, instruction,number_of_neighbours)
File "/share/data/drive_4/insaf/MiniGPT4-video/goldfish_lv.py", line 560, in inference
related_information=self.get_related_context(external_memory,related_context_keys)
File "/share/data/drive_4/insaf/MiniGPT4-video/goldfish_lv.py", line 510, in get_related_context
most_related_clips=self.get_most_related_clips(related_context_keys)
File "/share/data/drive_4/insaf/MiniGPT4-video/goldfish_lv.py", line 506, in get_most_related_clips
assert len(most_related_clips)!=0, f"No related clips found {related_context_keys}"
AssertionError: No related clips found []
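The assertion fires because `get_most_related_clips` produced an empty list: the retrieval step found no memory keys matching the question, so inference aborts instead of answering. As a minimal sketch (the key format `clip_<index>__<field>` and the function name are assumptions, not the repo's actual API), the clip-selection step could degrade gracefully when retrieval comes back empty rather than asserting:

```python
import re

def most_related_clips(related_context_keys, num_clips, top_k=3):
    """Extract clip indices from retrieved external-memory keys.

    Hypothetical sketch: keys are assumed to look like 'clip_<index>__<field>'.
    Instead of asserting on an empty result (which aborts inference, as in
    the traceback above), fall back to the first top_k clips.
    """
    clip_ids = []
    for key in related_context_keys:
        match = re.search(r"clip_(\d+)", str(key))
        if match:
            idx = int(match.group(1))
            if idx not in clip_ids:
                clip_ids.append(idx)
    if not clip_ids:
        # Empty retrieval: warn and degrade gracefully instead of crashing.
        print(f"Warning: no related clips found in {related_context_keys}; "
              f"falling back to the first {top_k} clips")
        clip_ids = list(range(min(top_k, num_clips)))
    return clip_ids
```

A fallback like this would let the pipeline answer from the opening clips instead of crashing, but the underlying cause still needs checking: an empty `related_context_keys` usually means the loaded embeddings pkl does not correspond to the current video, or the retrieval query returned nothing.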
