install updates

rachel-netq committed Oct 21, 2024
1 parent 95de092 commit fdfa735

Showing 2 changed files with 7 additions and 7 deletions.
@@ -14,10 +14,10 @@ Consider the following deployment options and requirements before you install the
| Single Server | High-Availability Cluster| High-Availability Scale Cluster |
| --- | --- | --- |
| On-premises or cloud | On-premises or cloud | On-premises only |
- | Low scale<ul><li>Single server supports up to TKTK devices</li></ul>| Medium scale<ul><li>3-node deployment supports up to 100 devices and 12,800 interfaces</li></ul>| High scale<ul><li>3-node deployment supports up to 1000 devices and TKTK interfaces</li><li>5-node deployment supports up to 2000 devices and TKTK interfaces</li><li>7-node deployment supports up to 3000 devices nad TKTK interfaces</li></ul>|
+ | Low scale<ul><li>Single server supports up to TKTK devices</li></ul>| Medium scale<ul><li>3-node deployment supports up to 100 devices and 12,800 interfaces</li></ul>| High scale<ul><li>3-node deployment supports up to 1000 devices and TKTK interfaces</li></ul>|
| KVM or VMware hypervisor | KVM or VMware hypervisor | KVM or VMware hypervisor |
| System requirements<br><br> On-premises: 16 virtual CPUs, 64GB RAM, 500GB SSD disk<br><br>Cloud: 4 virtual CPUs, 8GB RAM, 64GB SSD disk | System requirements (per node)<br><br> On-premises: 16 virtual CPUs, 64GB RAM, 500GB SSD disk<br><br>Cloud: 4 virtual CPUs, 8GB RAM, 64GB SSD disk | System requirements (per node)<br><br>On-premises: 48 virtual CPUs, 512GB RAM, 3.2TB SSD disk|
- | All features supported | All features supported| Limited or no support for:<ul><li>Topology dashboard</li><li>Network snapshots</li><li>Trace requests</li><li>Flow analysis</li><li>Duplicate IP address validations</li><li>MAC move commentary</li></li><li>Link health view</li></ul>|
+ | All features supported | All features supported| Limited or no support for:<ul><li>Topology dashboard</li><li>Network snapshots</li><li>Trace requests</li><li>Flow analysis</li><li>Duplicate IP address validations</li><li>Topology validations</li><li>Link health view</li></ul>|
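
A quick way to sanity-check a candidate VM against the sizing above (a minimal sketch using standard Linux tools, not part of the documented install flow; the `opta-check` command performs the authoritative validation during installation):

```
# Compare this node against the on-premises scale-cluster sizing from the table.
REQ_CPUS=48
REQ_RAM_GB=512
echo "CPUs: $(nproc) (need ${REQ_CPUS})"
echo "RAM:  $(free -g | awk '/^Mem:/ {print $2}') GB (need ${REQ_RAM_GB} GB)"
df -h /   # compare available disk with the 3.2TB SSD requirement
```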

NetQ is also available through NVIDIA Base Command Manager. To get started, refer to the {{<exlink url="https://docs.nvidia.com/base-command-manager/#product-manuals" text="Base Command Manager administrator and containerization manuals">}}.
## Deployment Type: On-premises or Cloud
@@ -36,7 +36,7 @@ Select the **high-availability cluster** deployment for greater device support a

During the installation process, you configure a virtual IP address that enables redundancy for the Kubernetes control plane. In this configuration, the majority of nodes must be operational for NetQ to function. For example, a three-node cluster can tolerate a one-node failure, but not a two-node failure. For more information, refer to the {{<exlink url="https://etcd.io/docs/v3.3/faq/" text="etcd documentation">}}.
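The majority rule is standard quorum arithmetic, illustrated here as a sketch (not a NetQ-specific formula):

```
# Quorum math for an N-node cluster: a majority of nodes must stay up.
NODES=3
QUORUM=$(( NODES / 2 + 1 ))        # smallest majority
TOLERATED=$(( NODES - QUORUM ))    # node failures the cluster survives
echo "${NODES} nodes: quorum=${QUORUM}, tolerates ${TOLERATED} failure(s)"
```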

- The **high-availability scale cluster** deployment provides support for the greatest number of devices and provides an extensible framework for greater scalability. As the number of devices in your network grows, you can add additional nodes to the cluster to support the additional devices.
+ The **high-availability scale cluster** deployment supports the greatest number of devices and provides an extensible framework for greater scalability. <!--As the number of devices in your network grows, you can add additional nodes to the cluster to support the additional devices. 4.12 supports only 3-node cluster-->

### Cluster Deployments and Load Balancers

@@ -5,7 +5,7 @@ weight: 227
toc: 5
bookhidden: true
---
- Follow these steps to set up and configure your VM on a cluster of servers in an on-premises deployment. First configure the VM on the master node, and then configure the VM on *each* worker node. NVIDIA recommends installing the virtual machines on different physical servers to increase redundancy in the event of a hardware failure.
+ Follow these steps to set up and configure your VM on a cluster of servers in an on-premises deployment. First configure the VM on the master node, and then configure the VM on *each* worker node. NVIDIA recommends installing the virtual machines on different servers to increase redundancy in the event of a hardware failure.

- - -

@@ -70,7 +70,7 @@ Additionally, for internal cluster communication, you must open these ports:
b. Select **NVIDIA Licensing Portal**.<br>
c. Select **Software Downloads** from the menu.<br>
d. Click **Product Family** and select **NetQ**.<br>
- e. For deployments using KVM, download the **NetQ SW 4.11 KVM** image. For deployments using VMware, download the **NetQ SW 4.11 VMware** image<br>
+ e. For deployments using KVM, download the **NetQ SW 4.12 KVM** image. For deployments using VMware, download the **NetQ SW 4.12 VMware** image.<br>
f. If prompted, read the license agreement and proceed with the download.<br>

{{%notice note%}}
@@ -127,7 +127,7 @@ cumulus@ubuntu:~$
4. Verify that the master node is ready for installation. Fix any errors before installing the NetQ software.

```
- cumulus@hostname:~$ sudo opta-check
+ cumulus@hostname:~$ sudo opta-check scale
```

5. Change the hostname for the VM from the default value.
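
   The elided portion of this page covers the exact procedure; as a rough sketch of the standard Ubuntu approach (an assumption, with NEW_HOSTNAME as a placeholder for the value you choose):

```
# Set the system hostname (assumes a systemd-based image).
sudo hostnamectl set-hostname NEW_HOSTNAME
# Keep /etc/hosts in sync so the new name resolves locally.
sudo sed -i "s/^127.0.1.1.*/127.0.1.1 NEW_HOSTNAME/" /etc/hosts
```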
@@ -156,7 +156,7 @@ Add the same NEW_HOSTNAME value to **/etc/hosts** on your VM for the localhost entry
7. Verify that the worker node is ready for installation. Fix any errors indicated before installing the NetQ software.

```
- cumulus@hostname:~$ sudo opta-check
+ cumulus@hostname:~$ sudo opta-check scale
```

8. Repeat steps 6 and 7 for each worker node in your cluster.
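
   One way to script that repetition (a hypothetical helper, not from the NetQ docs; assumes SSH access as the cumulus user, and worker-1/worker-2 are placeholder hostnames):

```
# Run the readiness check on every worker node from a single shell.
for node in worker-1 worker-2; do
  ssh cumulus@"$node" "sudo opta-check scale"
done
```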