[Question] Trouble with HA for LAPI Pod #181
Comments
Hi, the solution is to check in the chart whether replicas are enabled (more than 1) and, if so, add a suffix to the env var. Discussed with @blotus.
I'm not sure I understand, but glad to see there's a PR to fix it 🚀
So, a not-so-short tl;dr: when the LAPI pods come up, they need working credentials, so each one executes a direct machine-add command, and by default the container chooses the name "localhost" as the machine name. The side effect is that the replicas collide on that shared name, so one of the LAPIs ends up with credentials that no longer match what is registered. The fix: we now force each LAPI to register under a unique name by using the pod metadata of the randomly generated pod name, which stops the name collision.
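For illustration, the collision-avoidance approach described above can be sketched with the Kubernetes downward API, which injects the pod's own (randomly suffixed) name into an environment variable. This is a hypothetical fragment, not the chart's actual template: the variable name `CUSTOM_HOSTNAME` and the surrounding structure are assumptions, so check the chart's templates for the real implementation.

```yaml
# Hypothetical container spec fragment (names are illustrative):
# give each LAPI replica a unique machine name by passing the pod
# name, so concurrent registrations no longer collide on "localhost".
env:
  - name: CUSTOM_HOSTNAME        # assumed variable name, for illustration
    valueFrom:
      fieldRef:
        fieldPath: metadata.name # unique per pod, e.g. <release>-lapi-<hash>
```

Because Deployment pod names already carry a random suffix, reusing them as machine names gives uniqueness for free, without the chart having to generate and track its own identifiers.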
Okay that makes sense, thanks for the explanation! |
I've been trying to get this to work in a small testing environment with Traefik. My current config seems to work fine with a single LAPI pod backed by a Postgres DB and connected to 2 agents on 2 nodes.
But if I try setting the `lapi.replicas` value to `2`, I get the following error in one of the two pods when I try to run a `cscli` command (like `cscli decisions list`):

```
level=fatal msg="unable to retrieve decisions: performing request: Get \"http://localhost:8080/v1/alerts?has_active_decision=true&include_capi=false&limit=100\": API error: incorrect Username or Password"
command terminated with exit code 1
```

This is my `values.yaml`:

My assumption was that since I have disabled persistent volumes and configured a DB instead, both LAPI instances would connect to the same DB and have no issues. But I've clearly misunderstood how everything fits together. Would appreciate anyone pointing me in the right direction!
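The reporter's actual `values.yaml` did not survive the page capture, so as a stand-in, here is a minimal hypothetical fragment expressing only the setup described in the question (two replicas, persistent volumes disabled, an external DB shared by both LAPIs). Every key except `lapi.replicas` is a guess and should be verified against the chart's documented values:

```yaml
# Hypothetical values fragment -- illustrative only, key names
# (other than lapi.replicas, mentioned in the question) are assumptions.
lapi:
  replicas: 2            # the setting that triggers the credential error
  persistentVolume:
    config:
      enabled: false     # PVs disabled, per the question
    data:
      enabled: false
# The shared Postgres connection would be configured elsewhere in the
# values (e.g. via crowdsec's db_config), so both LAPIs use one database.
```

Note that sharing the database is not enough on its own: as the maintainers explain above, both replicas also register a machine under the same default name, which is what invalidates one pod's credentials.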