llava-video fails when tested with lmms-eval #297

Open
yuanrr opened this issue Oct 11, 2024 · 4 comments
Comments

@yuanrr

yuanrr commented Oct 11, 2024

Following the evaluation section, the current llava no longer seems to have llava_vid; there also appears to be a similar error under lmms-eval: #242 in lmms-eval.
Is there a quick fix for this? And if I have to adapt it myself, how should I go about it?

@ZhangYuanhan-AI
Collaborator

This has been resolved; you can use the latest lmms-eval.

@yuanrr
Author

yuanrr commented Oct 13, 2024

Hi, I'm trying it, but I ran into an error in the function read_video_pyav().

read_video_pyav() in the latest version of lmms-eval still does not seem to have been updated.

The exact error:

read_video_pyav() got an unexpected keyword argument 'force_sample'
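The error suggests the caller now passes `force_sample` while the reader's signature predates it. Below is a minimal sketch of the frame-index sampling such a keyword typically controls; the function name and semantics are assumptions inferred from the error message, not the actual lmms-eval code.

```python
import numpy as np

def sample_frame_indices(total_frames: int, num_frames: int,
                         force_sample: bool = False) -> np.ndarray:
    """Pick which frame indices to decode from a clip.

    Without force_sample, a short clip keeps every frame; with
    force_sample=True, exactly num_frames indices are always returned,
    spaced uniformly across the clip (repeating frames if it is short).
    """
    if total_frames <= num_frames and not force_sample:
        return np.arange(total_frames)  # clip is short enough: keep all frames
    # Uniformly spaced indices covering the whole clip.
    return np.linspace(0, total_frames - 1, num_frames).astype(int)
```

A reader that accepts (and simply honors) this keyword would no longer raise the `unexpected keyword argument` TypeError.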

@yuanrr
Author

yuanrr commented Oct 13, 2024

I temporarily changed the default value `video_decode_backend: str = "pyav"` to `video_decode_backend: str = "decord"`, and then it runs through.
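The workaround above can be sketched as a loader whose default backend is "decord", with a fallback when decord is not installed. All names and the signature here are illustrative, not the actual lmms-eval API.

```python
def load_video_frames(path, num_frames=32, video_decode_backend="decord"):
    """Decode num_frames frames from a video, defaulting to the decord backend."""
    if video_decode_backend == "decord":
        try:
            import decord  # optional dependency
        except ImportError:
            video_decode_backend = "pyav"  # fall back if decord is absent
    if video_decode_backend == "decord":
        vr = decord.VideoReader(path)
        # Uniformly spaced frame indices across the clip.
        idx = [int(i * (len(vr) - 1) / max(num_frames - 1, 1))
               for i in range(num_frames)]
        return vr.get_batch(idx).asnumpy()
    raise NotImplementedError("pyav path omitted in this sketch")
```

Keeping the import inside the function means the pyav code path still works on machines without decord installed.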

@ZhangYuanhan-AI
Collaborator

Yes, that is what the default should be; I'll update the default as well.
