
SDX Controller Service

Overview

The SDX controller is the central point of the AW-SDX system. It coordinates among local controllers (LCs), datamodels, the Path Computation Engine (PCE), and domain managers such as Kytos and OESS. Its major responsibilities include:

  • Collect domain topology from all the LCs.
  • Assemble the domain topologies into a global network topology.
  • Handle user SDX end-to-end connection requests.
  • Call the Path Computation Engine (PCE) to compute the optimal path, and break the topology down into per-LC topologies.
  • Distribute connection requests to the corresponding LCs.
  • Receive and process measurement data from the Behavior, Anomaly, and Performance Manager (BAPM).
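
As a toy illustration of how these responsibilities chain together for a single request, the sketch below uses made-up function names and data shapes; it is not the controller's actual code.

# Hypothetical names and data shapes; this only illustrates the flow of the
# responsibilities listed above, not the controller's actual implementation.

def assemble(domain_topologies):
    """Merge the per-domain link lists reported by the LCs."""
    global_topology = []
    for links in domain_topologies:
        global_topology.extend(links)
    return global_topology

def compute_path(topology, src, dst):
    """Stand-in for the PCE: walk links from src toward dst."""
    path, node = [], src
    while node != dst:
        hop = next(link for link in topology if link["src"] == node)
        path.append(hop)
        node = hop["dst"]
    return path

def breakdown_per_domain(path):
    """Group the computed path into per-LC segments for distribution."""
    segments = {}
    for link in path:
        segments.setdefault(link["domain"], []).append(link)
    return segments

# Two toy domain topologies, as two LCs might report them.
lc_reports = [
    [{"domain": "lc-a", "src": "host1", "dst": "border"}],
    [{"domain": "lc-b", "src": "border", "dst": "host2"}],
]

topology = assemble(lc_reports)
segments = breakdown_per_domain(compute_path(topology, "host1", "host2"))
for lc, links in segments.items():
    print(f"would send to {lc}: {links}")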

The SDX controller server is a swagger-enabled Flask server based on the swagger-codegen project.

BAPM

The Behavior, Anomaly, and Performance Manager (BAPM) is a self-driving, multi-layer system. It collects fine-grained measurement data from the SDX's underlying infrastructure and sends data reports to the SDX controller. The BAPM server included in this project is responsible for receiving and processing that data.

Communication between SDX Controller and Local Controller

The SDX controller and the local controllers communicate using RabbitMQ. All topology- and connectivity-related messages are sent via RPC, with receiver confirmation; monitoring-related messages are sent without receiver confirmation.

Below are two sample scenarios for the RabbitMQ implementation.

The SDX controller breaks down the topology and sends connectivity information to the local controllers:

[Diagram: SDX controller to local controller]

A local controller sends domain information to the SDX controller:

[Diagram: Local controller to SDX controller]
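
As an illustration only, the sketch below uses the pika client to show the RPC pattern described above from the SDX controller's side: publish a message carrying a reply queue and correlation id, then wait for the local controller's confirmation on that reply queue. The queue name and payload are hypothetical, and this is not the project's actual messaging code.

# Minimal sketch of RPC-style messaging with receiver confirmation.
import uuid
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Exclusive, auto-named reply queue; the local controller publishes its
# confirmation here.
reply_queue = channel.queue_declare(queue="", exclusive=True).method.queue
corr_id = str(uuid.uuid4())

channel.basic_publish(
    exchange="",
    routing_key="lc1_connection_queue",  # hypothetical per-LC queue name
    properties=pika.BasicProperties(reply_to=reply_queue, correlation_id=corr_id),
    body=b'{"connection": "per-LC breakdown goes here"}',  # hypothetical payload
)

# Block until the local controller replies with a matching correlation id.
for method, props, body in channel.consume(reply_queue, auto_ack=True,
                                           inactivity_timeout=30):
    if method is None:
        raise TimeoutError("no confirmation from the local controller")
    if props.correlation_id == corr_id:
        print("local controller confirmed:", body)
        break

connection.close()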

Running SDX Controller

Configuration

Copy the provided env.template file to .env, and adjust it according to your environment.

Communication between the SDX controller and the local controllers is carried over RabbitMQ, which can run either on the same node as the SDX controller or on a separate node. See the notes under testing for some hints about running RabbitMQ.

You might also need to install Elasticsearch. The elastic-search-setup.sh script should be useful on Rocky Linux systems:

$ sudo sh elastic-search-setup.sh

Running with Docker Compose (recommended)

A compose.yaml is provided for bringing up the SDX controller and a MongoDB instance, and a separate compose.bapm.yml is provided for bringing up bapm-server and a single-node Elasticsearch instance.

To start/stop SDX Controller, from the project root directory, do:

$ source .env
$ docker compose up --build
$ docker compose down

Navigate to http://localhost:8080/SDX-Controller/1.0.0/ui/ for testing the API. The OpenAPI/Swagger definition should be available at http://localhost:8080/SDX-Controller/1.0.0/openapi.json.
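
As a quick sanity check (a sketch that assumes the requests package is installed), you can fetch the OpenAPI document from the second URL above:

import requests

# Fetch the OpenAPI/Swagger definition served by the running container.
resp = requests.get("http://localhost:8080/SDX-Controller/1.0.0/openapi.json")
resp.raise_for_status()
spec = resp.json()
print(spec["info"]["title"], spec["info"]["version"])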

Similarly, to start/stop BAPM Server, do:

$ source .env
$ docker compose -f compose.bapm.yml up --build
$ docker compose -f compose.bapm.yml down

To start/stop all the services together:

$ source .env
$ docker compose -f compose.yml -f compose.bapm.yml up --build
$ docker compose -f compose.yml -f compose.bapm.yml down

Building the container images

We have two container images: sdx-controller and bapm-server. To build them:

$ docker build -t sdx-controller .
$ cd bapm_server
$ docker build -t bapm-server .

To run sdx-controller alone:

$ docker run --env-file=.env -p 8080:8080 sdx-controller

Running with Python

You will need:

  • Python 3.9.6+
  • RabbitMQ
  • MongoDB

See notes under testing for some hints about running RabbitMQ and MongoDB.

To run the SDX controller server, do this from the project root directory:

$ python3 -m venv venv --upgrade-deps
$ source ./venv/bin/activate
$ pip3 install [--editable] .
$ source .env
$ flask --app sdx_controller.app:app run --debug

Test topology files and connection requests

During the normal course of operation, the SDX controller receives topology data from local controllers, generated dynamically from the real network topology. We have developed some static topology files and connection requests that can be used during development and testing. Since they are used in several places during SDX development, they are consolidated in the lower-layer datamodel library's repository.

Running the test suite

With tox

You will need Docker installed and running. You will also need tox and tox-docker:

$ python3 -m venv venv --upgrade-deps
$ source ./venv/bin/activate
$ pip install 'tox>=4' 'tox-docker>=5'

Once you have tox and tox-docker installed, you can run tests:

$ tox

You can also run a single test and, optionally, print logs on the console:

$ tox -- -s --log-cli-level=INFO sdx_controller/test/test_l2vpn_controller.py::TestL2vpnController::test_getconnection_by_id

If you want to examine Docker logs after the test suite has exited, run tests with tox --docker-dont-stop [mongo|rabbitmq], and then use docker logs <container-name>.

With pytest

If you want to avoid tox and run pytest directly, that is possible too. You will need RabbitMQ and MongoDB running, perhaps with Docker:

$ docker run --rm -d --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:latest
$ docker run --rm -d --name mongo -p 27017:27017 -e MONGO_INITDB_ROOT_USERNAME=guest -e MONGO_INITDB_ROOT_PASSWORD=guest mongo:7.0.11
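
If you want to confirm that both containers are reachable before running the tests, a small sketch like the following works (it assumes the pika and pymongo packages, and the guest/guest credentials used above):

import pika
import pymongo

# RabbitMQ: pika's default credentials (guest/guest) match the stock image.
pika.BlockingConnection(pika.ConnectionParameters(host="localhost")).close()

# MongoDB: credentials match the MONGO_INITDB_ROOT_* values passed above.
client = pymongo.MongoClient("mongodb://guest:guest@localhost:27017/")
client.admin.command("ping")
print("RabbitMQ and MongoDB are reachable")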

Some environment variables are expected to be set for the tests to work, so copy env.template to .env, edit it according to your environment, and make sure the variables are present in your shell:

$ cp env.template .env 
$ # and then edit .env to suit your environment
$ source .env

And now, activate a virtual environment, install the requirements, and then run pytest:

$ python3 -m venv venv --upgrade-deps
$ source ./venv/bin/activate
$ pip3 install --editable .[test]
$ pytest