
Bug Report for Crawl4A multiple async #143

Open
jmontoyavallejo opened this issue Oct 7, 2024 · 1 comment
Comments

@jmontoyavallejo

Hi UncleCode,
I hope you are doing well!

First of all, I want to express my gratitude for creating Crawl4AI. It's a fantastic tool for what I'm exploring.

I did come across a small bug that I wanted to bring to your attention. When I try to run the scraper with LLMs in concurrency, the output format doesn’t seem to align with the Pydantic schema, and it crashes.

This only happens when I run it with concurrency and combine it with other async scrapers: the extracted output's schema turns into index, tags, and content fields instead of my Pydantic model.
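
The setup is along these lines (a minimal, illustrative sketch, not my exact code; the schema, URLs, and provider are placeholders, and parameter names follow the crawl4ai docs from around this release):

```python
import asyncio
import os

from pydantic import BaseModel
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy


# Illustrative schema -- the real one is project-specific.
class Product(BaseModel):
    name: str
    price: str


async def extract(url: str) -> str:
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url=url,
            extraction_strategy=LLMExtractionStrategy(
                provider="openai/gpt-4o-mini",          # placeholder provider
                api_token=os.getenv("OPENAI_API_KEY"),
                schema=Product.model_json_schema(),      # Pydantic schema the output should follow
                extraction_type="schema",
                instruction="Extract the product name and price.",
            ),
            bypass_cache=True,
        )
        return result.extracted_content


async def main():
    urls = ["https://example.com/a", "https://example.com/b"]
    # Running several LLM extractions concurrently is where the
    # extracted_content stops matching the Pydantic schema and instead
    # comes back as index/tags/content blocks.
    results = await asyncio.gather(*(extract(u) for u in urls))
    for r in results:
        print(r)


asyncio.run(main())
```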

@unclecode
Owner

Hello @jmontoyavallejo, thank you so much for your kind words. I would greatly appreciate a code sample that I can run to replicate the error you're facing; what you're describing sounds interesting. Please share sample code that demonstrates the issue when making concurrent requests to multiple URLs using the LLM. Thanks!

@unclecode unclecode self-assigned this Oct 8, 2024
@unclecode unclecode added the bug Something isn't working label Oct 8, 2024