
Puppeteer errors causing program termination #376

Open
MCSeekeri opened this issue Aug 27, 2024 · 20 comments

@MCSeekeri

I was using Zimit to archive the SCP-CN Wikidot site when the program was interrupted by a Puppeteer error.
The log output from just before the program exited is attached below.

{"timestamp":"2024-08-27T08:36:03.741Z","logLevel":"warn","context":"behavior","message":"Waiting for custom page load failed","details":{"type":"exception","message":"Protocol error (Runtime.evaluate): Target closed","stack":"TargetCloseError: Protocol error (Runtime.evaluate): Target closed\n    at CallbackRegistry.clear (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/common/CallbackRegistry.js:69:36)\n    at CdpCDPSession._onClosed (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/cdp/CDPSession.js:98:25)\n    at Connection.onMessage (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/cdp/Connection.js:127:25)\n    at WebSocket.<anonymous> (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/node/NodeWebSocketTransport.js:38:32)\n    at callListener (/app/node_modules/puppeteer-core/node_modules/ws/lib/event-target.js:290:14)\n    at WebSocket.onMessage (/app/node_modules/puppeteer-core/node_modules/ws/lib/event-target.js:209:9)\n    at WebSocket.emit (node:events:519:28)\n    at Receiver.receiverOnMessage (/app/node_modules/puppeteer-core/node_modules/ws/lib/websocket.js:1220:20)\n    at Receiver.emit (node:events:519:28)\n    at Immediate.<anonymous> (/app/node_modules/puppeteer-core/node_modules/ws/lib/receiver.js:601:16)"}}
{"timestamp":"2024-08-27T08:36:03.742Z","logLevel":"warn","context":"links","message":"Link Extraction failed","details":{"type":"exception","message":"Attempted to use detached Frame '9A153325AAEA4D188BAEBA1E8B6B9D41'.","stack":"Error: Attempted to use detached Frame '9A153325AAEA4D188BAEBA1E8B6B9D41'.\n    at CdpFrame.<anonymous> (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/util/decorators.js:92:23)\n    at file:///app/dist/crawler.js:1378:102\n    at Array.map (<anonymous>)\n    at Crawler.extractLinks (file:///app/dist/crawler.js:1378:72)\n    at Crawler.loadPage (file:///app/dist/crawler.js:1322:20)\n    at async Crawler.default [as driver] (file:///app/dist/defaultDriver.js:2:5)\n    at async Crawler.crawlPage (file:///app/dist/crawler.js:566:9)\n    at async PageWorker.crawlPage (file:///app/dist/util/worker.js:153:21)"}}
[zimit::2024-08-27 08:36:04,142] INFO:SIGINT/SIGTERM received, stopping zimit
file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/cdp/IsolatedWorld.js:72
            throw new Error('Execution context was destroyed');
                  ^

Error: Execution context was destroyed
    at file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/cdp/IsolatedWorld.js:72:19
    at file:///app/node_modules/puppeteer-core/lib/esm/third_party/rxjs/rxjs.js:1936:31
    at OperatorSubscriber2._this._next (file:///app/node_modules/puppeteer-core/lib/esm/third_party/rxjs/rxjs.js:993:9)
    at Subscriber2.next (file:///app/node_modules/puppeteer-core/lib/esm/third_party/rxjs/rxjs.js:696:12)
    at listener (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/common/util.js:367:24)
    at file:///app/node_modules/puppeteer-core/lib/esm/third_party/mitt/mitt.js:36:7
    at Array.map (<anonymous>)
    at Object.emit (file:///app/node_modules/puppeteer-core/lib/esm/third_party/mitt/mitt.js:35:20)
    at EventEmitter.emit (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/common/EventEmitter.js:77:23)
    at [nodejs.dispose] (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/cdp/IsolatedWorld.js:151:23)

docker-compose.yml:

services:
  openzim:
    image: ghcr.io/openzim/zimit:dev
    environment:
      - HTTP_PROXY=http://100.100.2.2:19999
      - HTTPS_PROXY=http://100.100.2.2:19999
    command:
      - zimit
      - --url=https://scp-wiki-cn.wikidot.com
      - --keep
      - --name=scp-wiki_zh_all
      - --title="SCP 基金会 wiki"
      - --output=/output
      - --exclude=(\?q=|/signin|/forum|/system:)
      - --verbose
      - --workers=64
      - --behaviors=autoplay,autofetch,siteSpecific,autoscroll
      - --lang=zh
      - --zim-lang=zh
      - --description="SCP 基金会中国分部数据存档"
    volumes:
      - ./output:/output

I've modified the configuration based on the existing recipes, but I still run into this problem.

@benoit74
Collaborator

benoit74 commented Sep 2, 2024

Puppeteer is the software that drives the browser used to crawl the website. The error you get is a Puppeteer error which seems to indicate that the browser is gone, probably crashed. Using --workers=64 means there are 64 tabs running in parallel in the browser. I can easily imagine this leading to weird situations, either on the browser side (out-of-memory issues, or simply a browser crash due to many parallel requests to the same website) or on the browsertrix-crawler side (some parallelism bug). Not something that needs to be fixed at our level, and too hard to reproduce to open an issue on the browsertrix-crawler repo, from my PoV.
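
For reference, lowering the worker count is a one-line change in the compose file from the report. A minimal sketch, assuming the original file above; the shm_size setting is an extra assumption on my part (not something suggested in this thread), since Chromium in Docker is often constrained by the small default /dev/shm:

services:
  openzim:
    image: ghcr.io/openzim/zimit:dev
    shm_size: 1gb        # assumption: enlarge /dev/shm, a common fix for Chromium crashes in Docker
    command:
      - zimit
      - --url=https://scp-wiki-cn.wikidot.com
      - --workers=8      # reduced from 64, per the comment above
      # ...all other options unchanged from the original file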

benoit74 closed this as not planned on Sep 2, 2024
@MCSeekeri
Author

> Puppeteer is the software that drives the browser used to crawl the website. The error you get is a Puppeteer error which seems to indicate that the browser is gone, probably crashed. Using --workers=64 means there are 64 tabs running in parallel in the browser. I can easily imagine this leading to weird situations, either on the browser side (out-of-memory issues, or simply a browser crash due to many parallel requests to the same website) or on the browsertrix-crawler side (some parallelism bug). Not something that needs to be fixed at our level, and too hard to reproduce to open an issue on the browsertrix-crawler repo, from my PoV.

Thanks for your reply. I will set --workers=8 and run it again to see whether that solves the problem.

@MCSeekeri
Author

{"timestamp":"2024-09-02T11:35:40.440Z","logLevel":"warn","context":"behavior","message":"Waiting for custom page load failed","details":{"type":"exception","message":"Protocol error (Runtime.evaluate): Target closed","stack":"TargetCloseError: Protocol error (Runtime.evaluate): Target closed\n    at CallbackRegistry.clear (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/common/CallbackRegistry.js:69:36)\n    at CdpCDPSession._onClosed (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/cdp/CDPSession.js:98:25)\n    at Connection.onMessage (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/cdp/Connection.js:127:25)\n    at WebSocket.<anonymous> (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/node/NodeWebSocketTransport.js:38:32)\n    at callListener (/app/node_modules/puppeteer-core/node_modules/ws/lib/event-target.js:290:14)\n    at WebSocket.onMessage (/app/node_modules/puppeteer-core/node_modules/ws/lib/event-target.js:209:9)\n    at WebSocket.emit (node:events:519:28)\n    at Receiver.receiverOnMessage (/app/node_modules/puppeteer-core/node_modules/ws/lib/websocket.js:1220:20)\n    at Receiver.emit (node:events:519:28)\n    at Immediate.<anonymous> (/app/node_modules/puppeteer-core/node_modules/ws/lib/receiver.js:601:16)"}}
{"timestamp":"2024-09-02T11:35:40.441Z","logLevel":"warn","context":"links","message":"Link Extraction failed","details":{"type":"exception","message":"Attempted to use detached Frame 'B1F942F1A25D1AD25CBAC6935E9C075E'.","stack":"Error: Attempted to use detached Frame 'B1F942F1A25D1AD25CBAC6935E9C075E'.\n    at CdpFrame.<anonymous> (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/util/decorators.js:92:23)\n    at file:///app/dist/crawler.js:1378:102\n    at Array.map (<anonymous>)\n    at Crawler.extractLinks (file:///app/dist/crawler.js:1378:72)\n    at Crawler.loadPage (file:///app/dist/crawler.js:1322:20)\n    at async Crawler.default [as driver] (file:///app/dist/defaultDriver.js:2:5)\n    at async Crawler.crawlPage (file:///app/dist/crawler.js:566:9)\n    at async PageWorker.crawlPage (file:///app/dist/util/worker.js:153:21)"}}
file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/cdp/IsolatedWorld.js:72
            throw new Error('Execution context was destroyed');
                  ^

Error: Execution context was destroyed
    at file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/cdp/IsolatedWorld.js:72:19
    at file:///app/node_modules/puppeteer-core/lib/esm/third_party/rxjs/rxjs.js:1936:31
    at OperatorSubscriber2._this._next (file:///app/node_modules/puppeteer-core/lib/esm/third_party/rxjs/rxjs.js:993:9)
    at Subscriber2.next (file:///app/node_modules/puppeteer-core/lib/esm/third_party/rxjs/rxjs.js:696:12)
    at listener (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/common/util.js:367:24)
    at file:///app/node_modules/puppeteer-core/lib/esm/third_party/mitt/mitt.js:36:7
    at Array.map (<anonymous>)
    at Object.emit (file:///app/node_modules/puppeteer-core/lib/esm/third_party/mitt/mitt.js:35:20)
    at EventEmitter.emit (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/common/EventEmitter.js:77:23)
    at [nodejs.dispose] (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/cdp/IsolatedWorld.js:151:23)

Node.js v20.15.0
Traceback (most recent call last):
  File "/usr/bin/zimit", line 8, in <module>
    sys.exit(zimit.zimit())
             ^^^^^^^^^^^^^
  File "/app/zimit/lib/python3.12/site-packages/zimit/zimit.py", line 688, in zimit
    run(sys.argv[1:])
  File "/app/zimit/lib/python3.12/site-packages/zimit/zimit.py", line 574, in run
    raise subprocess.CalledProcessError(crawl.returncode, cmd_args)
subprocess.CalledProcessError: Command '['crawl', '--failOnFailedSeed', '--workers', '8', '--waitUntil', 'load', '--title', '"SCP 基金会 wiki"', '--description', '"SCP 基金会中国分部数据存档"', '--depth', '-1', '--timeout', '90', '--exclude', '(\\?q=|/signin|/forum|/system:)', '--lang', 'zh', '--behaviors', 'autoplay,autofetch,siteSpecific', '--behaviorTimeout', '90', '--diskUtilization', '90', '--url', 'https://scp-wiki-cn.wikidot.com', '--userAgentSuffix', '+Zimit', '--mobileDevice', 'Pixel 2', '--cwd', '/output/.tmp5h5vggyv']' returned non-zero exit status 1.

After changing the configuration, the crawler still hits the same error. Looking through the browsertrix-crawler issues, I found a similar report, but I don't know the solution yet.
Should I report this to browsertrix-crawler?

@benoit74
Collaborator

benoit74 commented Sep 2, 2024

> Should I report this to browsertrix-crawler?

If you don't mind, yes please; include a link to this issue so that we can track them together.

@ikreymer
Collaborator

ikreymer commented Sep 2, 2024

@MCSeekeri can you provide a more complete log of when this happened? After how many pages? We haven't been able to repro this type of error consistently; it could be the browser running out of memory, or something else. I would also recommend trying with even fewer workers than 8, maybe 2 or 4.

@MCSeekeri
Author

> @MCSeekeri can you provide a more complete log of when this happened? After how many pages? We haven't been able to repro this type of error consistently; it could be the browser running out of memory, or something else. I would also recommend trying with even fewer workers than 8, maybe 2 or 4.

Currently I only have the full log for --workers=64. The log file is quite large (370 MB), so only the last part is attached here; I can upload the full log if needed.
Also, I'm not sure that lowering the number of workers will solve the problem: I'm running Zimit on a server with 512 GB of RAM, so I shouldn't be running out of memory.
Anyway, I will set --workers=2 and run the crawl again; hopefully the issue won't recur this time.
log.txt

@MCSeekeri
Author

{"timestamp":"2024-09-03T15:47:30.447Z","logLevel":"info","context":"crawlStatus","message":"Crawl statistics","details":{"crawled":17570,"total":48752,"pending":2,"failed":132,"limit":{"max":0,"hit":false},"pendingPages":["{\"seedId\":0,\"started\":\"2024-09-03T15:41:54.998Z\",\"extraHops\":0,\"url\":\"https:\\/\\/scp-wiki-cn.wikidot.com\\/scp-cn-3811\",\"added\":\"2024-09-03T00:25:38.574Z\",\"depth\":2}","{\"seedId\":0,\"started\":\"2024-09-03T15:47:30.446Z\",\"extraHops\":0,\"url\":\"https:\\/\\/scp-wiki-cn.wikidot.com\\/scp-cn-3880\",\"added\":\"2024-09-03T00:25:38.579Z\",\"depth\":2}"]}}
{"timestamp":"2024-09-03T15:47:31.423Z","logLevel":"info","context":"general","message":"Awaiting page load","details":{"page":"https://scp-wiki-cn.wikidot.com/scp-cn-3880","workerid":0}}
{"timestamp":"2024-09-03T15:47:34.999Z","logLevel":"warn","context":"worker","message":"Page Worker Timeout","details":{"seconds":340,"page":"https://scp-wiki-cn.wikidot.com/scp-cn-3811","workerid":1}}
{"timestamp":"2024-09-03T15:47:35.034Z","logLevel":"warn","context":"behavior","message":"Waiting for custom page load failed","details":{"type":"exception","message":"Protocol error (Runtime.evaluate): Target closed","stack":"TargetCloseError: Protocol error (Runtime.evaluate): Target closed\n    at CallbackRegistry.clear (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/common/CallbackRegistry.js:69:36)\n    at CdpCDPSession._onClosed (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/cdp/CDPSession.js:98:25)\n    at Connection.onMessage (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/cdp/Connection.js:127:25)\n    at WebSocket.<anonymous> (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/node/NodeWebSocketTransport.js:38:32)\n    at callListener (/app/node_modules/puppeteer-core/node_modules/ws/lib/event-target.js:290:14)\n    at WebSocket.onMessage (/app/node_modules/puppeteer-core/node_modules/ws/lib/event-target.js:209:9)\n    at WebSocket.emit (node:events:519:28)\n    at Receiver.receiverOnMessage (/app/node_modules/puppeteer-core/node_modules/ws/lib/websocket.js:1220:20)\n    at Receiver.emit (node:events:519:28)\n    at Immediate.<anonymous> (/app/node_modules/puppeteer-core/node_modules/ws/lib/receiver.js:601:16)"}}
{"timestamp":"2024-09-03T15:47:35.035Z","logLevel":"warn","context":"links","message":"Link Extraction failed","details":{"type":"exception","message":"Attempted to use detached Frame 'BBD88A61268EEC820394BF9113A847C6'.","stack":"Error: Attempted to use detached Frame 'BBD88A61268EEC820394BF9113A847C6'.\n    at CdpFrame.<anonymous> (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/util/decorators.js:92:23)\n    at file:///app/dist/crawler.js:1378:102\n    at Array.map (<anonymous>)\n    at Crawler.extractLinks (file:///app/dist/crawler.js:1378:72)\n    at Crawler.loadPage (file:///app/dist/crawler.js:1322:20)\n    at async Crawler.default [as driver] (file:///app/dist/defaultDriver.js:2:5)\n    at async Crawler.crawlPage (file:///app/dist/crawler.js:566:9)\n    at async PageWorker.crawlPage (file:///app/dist/util/worker.js:153:21)"}}
file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/cdp/IsolatedWorld.js:72
            throw new Error('Execution context was destroyed');
                  ^

Error: Execution context was destroyed
    at file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/cdp/IsolatedWorld.js:72:19
    at file:///app/node_modules/puppeteer-core/lib/esm/third_party/rxjs/rxjs.js:1936:31
    at OperatorSubscriber2._this._next (file:///app/node_modules/puppeteer-core/lib/esm/third_party/rxjs/rxjs.js:993:9)
    at Subscriber2.next (file:///app/node_modules/puppeteer-core/lib/esm/third_party/rxjs/rxjs.js:696:12)
    at listener (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/common/util.js:367:24)
    at file:///app/node_modules/puppeteer-core/lib/esm/third_party/mitt/mitt.js:36:7
    at Array.map (<anonymous>)
    at Object.emit (file:///app/node_modules/puppeteer-core/lib/esm/third_party/mitt/mitt.js:35:20)
    at EventEmitter.emit (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/common/EventEmitter.js:77:23)
    at [nodejs.dispose] (file:///app/node_modules/puppeteer-core/lib/esm/puppeteer/cdp/IsolatedWorld.js:151:23)

Node.js v20.15.0
Traceback (most recent call last):
  File "/usr/bin/zimit", line 8, in <module>
    sys.exit(zimit.zimit())
             ^^^^^^^^^^^^^
  File "/app/zimit/lib/python3.12/site-packages/zimit/zimit.py", line 688, in zimit
    run(sys.argv[1:])
  File "/app/zimit/lib/python3.12/site-packages/zimit/zimit.py", line 574, in run
    raise subprocess.CalledProcessError(crawl.returncode, cmd_args)
subprocess.CalledProcessError: Command '['crawl', '--failOnFailedSeed', '--workers', '2', '--waitUntil', 'load', '--title', '"SCP 基金会 wiki"', '--description', '"SCP 基金会中国分部数据存档"', '--depth', '-1', '--timeout', '180', '--exclude', '"(ad-delivery|doubleclick|btloader|consent\\.nit|nitropay|onesignal|\\?q=|signup-landing\\?|\\?cid=|forum/t|:system|avn\\.sh|crom|/signin|:)"', '--lang', 'zh', '--behaviors', 'autoplay,autofetch,siteSpecific', '--behaviorTimeout', '90', '--diskUtilization', '90', '--url', 'https://scp-wiki-cn.wikidot.com', '--userAgentSuffix', '+Zimit', '--mobileDevice', 'Pixel 2', '--cwd', '/output/.tmpehar8q7d']' returned non-zero exit status 1.

Apparently setting --workers=2 doesn't fix the problem either.
I originally suspected that SCP-CN-3811 was causing the problem, but the previous log shows that page was archived correctly.

@MCSeekeri
Author

@ikreymer @benoit74 Any progress on this issue? The problem persists even when crawling with 2.1.1, and it occurs every time after roughly 20K-30K pages have been crawled.
The logs look much the same every run: the crawler works normally for a long time, then exits with an error.

@ikreymer
Collaborator

ikreymer commented Sep 6, 2024

@MCSeekeri Unfortunately, no. This is very hard to reproduce because it takes 20K-30K pages before the issue pops up, and the stack trace is not helpful at all.

@MCSeekeri
Author

> @MCSeekeri Unfortunately, no. This is very hard to reproduce because it takes 20K-30K pages before the issue pops up, and the stack trace is not helpful at all.

Is there any way to make the program “more verbose” to find the possible root cause of the problem?
When testing on my server, it seems that setting the number of workers too high doesn't change the outcome, and it usually takes only a few hours of running to crawl 20K-30K pages and reproduce the problem.

@ikreymer
Collaborator

ikreymer commented Sep 6, 2024

> Is there any way to make the program “more verbose” to find the possible root cause of the problem? When testing on my server, it seems that setting the number of workers too high doesn't change the outcome, and it usually takes only a few hours of running to crawl 20K-30K pages and reproduce the problem.

Possibly. You can try adding DEBUG=puppeteer:* to the environment, which should enable extensive Puppeteer logging; see if that provides any additional info!
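
Applied to the docker-compose.yml from the original report, that is one extra environment entry (a sketch of the same file):

services:
  openzim:
    image: ghcr.io/openzim/zimit:dev
    environment:
      - HTTP_PROXY=http://100.100.2.2:19999
      - HTTPS_PROXY=http://100.100.2.2:19999
      - DEBUG=puppeteer:*   # as suggested above; enables Puppeteer's debug logging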

@MCSeekeri
Author

> Is there any way to make the program “more verbose” to find the possible root cause of the problem? When testing on my server, it seems that setting the number of workers too high doesn't change the outcome, and it usually takes only a few hours of running to crawl 20K-30K pages and reproduce the problem.

> Possibly. You can try adding DEBUG=puppeteer:* to the environment, which should enable extensive Puppeteer logging; see if that provides any additional info!

Traceback (most recent call last):
  File "/usr/bin/zimit", line 5, in <module>
    from zimit import zimit
  File "/app/zimit/lib/python3.12/site-packages/zimit/zimit.py", line 22, in <module>
    import inotify.adapters
  File "/app/zimit/lib/python3.12/site-packages/inotify/adapters.py", line 37, in <module>
    _IS_DEBUG = bool(int(os.environ.get('DEBUG', '0')))
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: invalid literal for int() with base 10: 'puppeteer:*'

Well... the zimit wrapper's inotify dependency parses the DEBUG environment variable as an integer, so DEBUG=puppeteer:* crashes it at startup.
Changing to DEBUG=1 doesn't seem to give any more detailed output either.

@ikreymer
Collaborator

ikreymer commented Sep 6, 2024

You can try it with the browsertrix-crawler image directly, using the same command line:

docker run -e DEBUG="puppeteer:*" -v $PWD/crawls:/crawls -it webrecorder/browsertrix-crawler \
  crawl --failOnFailedSeed --workers 2 --waitUntil load \
  --url https://scp-wiki-cn.wikidot.com --depth -1 --timeout 180 \
  --exclude "(ad-delivery|doubleclick|btloader|consent\\.nit|nitropay|onesignal|\\?q=|signup-landing\\?|\\?cid=|forum/t|:system|avn\\.sh|crom|/signin|:)" \
  --lang zh --behaviors autoplay,autofetch,siteSpecific --behaviorTimeout 90 \
  --diskUtilization 90 --userAgentSuffix +Zimit --mobileDevice Pixel-2

@MCSeekeri
Author

@ikreymer
In order to crawl a sufficient number of pages quickly, I changed some of the settings.
The log file is 33 GB, so only the last part is attached here.
debug.log

@ikreymer
Collaborator

ikreymer commented Sep 6, 2024

@MCSeekeri I think it's a bug in Puppeteer that happens when it's cleaning up some internal objects; I opened an issue there: puppeteer/puppeteer#13056

@benoit74
Collaborator

benoit74 commented Sep 7, 2024

Thank you @ikreymer for your support!

ikreymer added a commit to webrecorder/browsertrix-crawler that referenced this issue Sep 27, 2024
- add additional catch() block
- wrap page.title() in timedRun() to catch/log exception if this fails
- log error in getting cookies
- hopefully fixes hard-to-repro edge case crash in openzim/zimit#376
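
The commit above is defensive hardening rather than a root-cause fix. A minimal sketch of the pattern it describes, in TypeScript; timedRun here is a hypothetical stand-in for the crawler's own helper, not the actual browsertrix-crawler source:

import puppeteer from "puppeteer";

// Race a promise against a timeout and swallow (but log) any rejection,
// so a destroyed execution context cannot crash the whole crawl.
async function timedRun<T>(
  promise: Promise<T>,
  seconds: number,
  message: string,
): Promise<T | undefined> {
  const timeout = new Promise<undefined>((resolve) =>
    setTimeout(() => resolve(undefined), seconds * 1000),
  );
  try {
    return await Promise.race([promise, timeout]);
  } catch (e) {
    console.warn(message, e); // log and continue instead of terminating
    return undefined;
  }
}

async function main() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto("https://example.com");
  // Before the fix, an unhandled rejection from page.title() could take the
  // process down; wrapped, it degrades to a logged warning and an empty title.
  const title = (await timedRun(page.title(), 5, "Failed to get page title")) ?? "";
  console.log(title);
  await browser.close();
}

main();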
@ikreymer
Collaborator

This might be fixed in 1.3.1; see if you can reproduce this in that version. Again, it's very tricky to repro. The Puppeteer folks think it's something that isn't being caught in the crawler, so a bunch of extra exception-handling improvements were added, which might address this.

@MCSeekeri
Author

> This might be fixed in 1.3.1; see if you can reproduce this in that version. Again, it's very tricky to repro. The Puppeteer folks think it's something that isn't being caught in the crawler, so a bunch of extra exception-handling improvements were added, which might address this.

Thanks. I'll try again.

@MCSeekeri
Author

A new error report... which is kind of progress?
I don't see a specific cause for it, though; I'll tone down the number of workers and try again.
data.log

@ikreymer
Collaborator

> A new error report... which is kind of progress? I don't see a specific cause for it, though; I'll tone down the number of workers and try again. data.log

Yeah, I think these errors are almost certainly caused by way too many workers!
