Document how to deploy with HA NATS deployments #101

Open
LucasRoesler opened this issue Dec 19, 2018 · 1 comment
Labels: help wanted, priority/high

Comments

@LucasRoesler
Member

Production deployments of OpenFaaS should use an HA and durable NATS installation to provide better stability. We should document how to install OpenFaaS with a supporting HA NATS cluster.
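As a rough sketch of what such documentation could cover, the gateway and queue-worker would need to be pointed at the external NATS cluster instead of the bundled single-node NATS. The snippet below is illustrative only: it assumes the `faas_nats_address`/`faas_nats_port` environment variables and a NATS client Service named `stan-nats` in a `nats` namespace, none of which are confirmed in this issue.

```yaml
# Hypothetical excerpt from the queue-worker Deployment: point it at an
# external, HA NATS Streaming service. The env var names and Service DNS
# name below are assumptions, not confirmed in this issue.
env:
  - name: faas_nats_address
    value: "stan-nats.nats.svc.cluster.local"
  - name: faas_nats_port
    value: "4222"
```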

@alexellis
Member

alexellis commented Dec 20, 2018

Email from Wally from the NATS team / Apcera:

Hi Alex, you have a few options for HA NATS Streaming on K8S
to choose from depending on what you think is more flexible and
fits the use case better:

1) Multiple node Raft based cluster on ephemeral container filesystem

2) Multiple node Raft based cluster backed by persistent filesystem

3) Single instance backed by a persistent filesystem

4) Single instance backed up by a DB store

The NATS Streaming operator (https://github.com/nats-io/nats-streaming-operator)
has support for all these options. Technically you could do 3) and 4)
with a StatefulSet/Deployment with a single replica, but the operator also provides
you with a CRD that you can use if you want, plus some helpers for the configuration.
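For illustration, a minimal sketch of what using the operator's CRD could look like, based on the examples in the nats-streaming-operator repository; the resource and field names should be verified against the operator version you deploy, and the names and values here are placeholders:

```yaml
# Illustrative NatsStreamingCluster manifest for option 1): a 3-node Raft
# cluster on the ephemeral container filesystem, connected to an existing
# NATS Service. Names and sizes are placeholders.
apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
  name: "example-stan"
spec:
  size: 3                  # 3+ replicas enable Raft-based clustering
  natsSvc: "example-nats"  # name of the NATS Service the streaming nodes connect to
```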

For options 1) and 2) the operator really helps here, since it
ensures that only a single instance is ever defined with the `--cluster_bootstrap` flag
when creating the cluster; there are no good workarounds for this
using StatefulSets/Deployments.  1) is what I think is the most
flexible since it does not have extra dependencies on a filesystem,
but it also means that if all instances of the cluster fail then there will be data loss.
This could be mitigated by defining an anti-affinity for the pods so that they are not
on the same host, for example.
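The anti-affinity Wally mentions could look roughly like the following pod template fragment, using the standard Kubernetes `podAntiAffinity` field; the `app: nats-streaming` label is an assumed selector for the streaming pods:

```yaml
# Illustrative pod spec fragment: require NATS Streaming pods to land on
# different hosts so a single node failure cannot take out every replica.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: nats-streaming        # assumed label on the streaming pods
        topologyKey: "kubernetes.io/hostname"
```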

You can find some examples in the NATS Streaming operator repo here:

https://github.com/nats-io/nats-streaming-operator/tree/master/deploy/examples

Thanks,

- Wally
