Merge pull request #1126 from mythi/PR-2022-054
docs: rework development guide
bart0sh authored Sep 2, 2022
2 parents 5c3d9ce + 1b3acca commit f0dd952
Showing 11 changed files with 418 additions and 994 deletions.
399 changes: 249 additions & 150 deletions DEVEL.md

Large diffs are not rendered by default.

63 changes: 3 additions & 60 deletions README.md
@@ -26,7 +26,6 @@ Table of Contents
* [Demos](#demos)
* [Workload Authors](#workload-authors)
* [Developers](#developers)
* [Running e2e Tests](#running-e2e-tests)
* [Supported Kubernetes versions](#supported-kubernetes-versions)
* [Pre-built plugin images](#pre-built-plugin-images)
* [License](#license)
@@ -38,7 +37,7 @@ Table of Contents

Prerequisites for building and running these device plugins include:

- Appropriate hardware
- Appropriate hardware and drivers
- A fully configured [Kubernetes cluster]
- A working [Go environment] of at least version v1.16.
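
A quick sanity check of these prerequisites might look like the following (a sketch; it assumes `go` and `kubectl` are already on your `PATH`):

```bash
# Sketch: verify the Go toolchain version and cluster connectivity
$ go version        # should report go1.16 or newer
$ kubectl get nodes # the cluster nodes should be listed as Ready
```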

@@ -249,64 +248,8 @@ The summary of resources available via plugins in this repository is given in th

## Developers

For information on how to develop a new plugin using the framework, see the
[Developers Guide](DEVEL.md) and the code in the
[device plugins pkg directory](pkg/deviceplugin).

## Running E2E Tests

Currently, the E2E tests require a Kubernetes cluster that is already configured
with nodes providing the hardware required by the device plugins. In addition, all the
container images with the executables under test must be available in the
cluster. If these two conditions are satisfied, run the tests with:

```bash
$ go test -v ./test/e2e/...
```

In case you want to run only certain tests, e.g., QAT ones, run:

```bash
$ go test -v ./test/e2e/... -args -ginkgo.focus "QAT"
```

If you need to specify the path to a custom `kubeconfig` containing
embedded authentication info, add the `-kubeconfig` argument:

```bash
$ go test -v ./test/e2e/... -args -kubeconfig /path/to/kubeconfig
```

The full list of available options can be obtained with:

```bash
$ go test ./test/e2e/... -args -help
```

It is possible to run the tests which don't depend on hardware
without a pre-configured Kubernetes cluster. Just make sure you have
[Kind](https://kind.sigs.k8s.io/) installed on your host and run:

```bash
$ make test-with-kind
```

## Running Controller Tests with a Local Control Plane

The controller-runtime library provides a package for integration testing by
starting a local control plane. The package is called
[envtest](https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/envtest). The
operator uses this package for its integration testing.
Please have a look at `envtest`'s documentation to set it up properly. Basically,
you just need to have the `etcd` and `kube-apiserver` binaries available on your
host. By default they are expected to be located at `/usr/local/kubebuilder/bin`,
but you can store them anywhere by setting the `KUBEBUILDER_ASSETS`
environment variable. If you have the binaries copied to
`${HOME}/work/kubebuilder-assets`, run the tests:

```bash
$ KUBEBUILDER_ASSETS=${HOME}/work/kubebuilder-assets make envtest
```
For information on how to develop a new plugin using the framework or work on development tasks in
this repository, see the [Developers Guide](DEVEL.md).

## Supported Kubernetes Versions

84 changes: 9 additions & 75 deletions cmd/dlb_plugin/README.md
@@ -4,16 +4,9 @@ Table of Contents

* [Introduction](#introduction)
* [Installation](#installation)
* [Deploy with pre-built container image](#deploy-with-pre-built-container-image)
* [Getting the source code](#getting-the-source-code)
* [Deploying as a DaemonSet](#deploying-as-a-daemonset)
* [Build the plugin image](#build-the-plugin-image)
* [Deploy plugin DaemonSet](#deploy-plugin-daemonset)
* [Deploy by hand](#deploy-by-hand)
* [Build the plugin](#build-the-plugin)
* [Run the plugin as administrator](#run-the-plugin-as-administrator)
* [Verify plugin registration](#verify-plugin-registration)
* [Testing the plugin](#testing-the-plugin)
* [Pre-built Images](#pre-built-images)
* [Verify Plugin Registration](#verify-plugin-registration)
* [Testing and Demos](#testing-and-demos)

## Introduction

@@ -138,7 +131,7 @@ The following sections detail how to obtain, build, deploy and test the DLB devi

Examples are provided showing how to deploy the plugin either using a DaemonSet or by hand on a per-node basis.

### Deploy with pre-built container image
### Pre-built Images

[Pre-built images](https://hub.docker.com/r/intel/intel-dlb-plugin)
of this component are available on the Docker hub. These images are automatically built and uploaded
@@ -149,74 +142,15 @@ release version numbers in the format `x.y.z`, corresponding to the branches and
repository. Thus the easiest way to deploy the plugin in your cluster is to run this command

```bash
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/dlb_plugin?ref=<REF>
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/dlb_plugin?ref=<RELEASE_VERSION>
daemonset.apps/intel-dlb-plugin created
```

Where `<REF>` needs to be substituted with the desired git ref, e.g. `main`.
Where `<RELEASE_VERSION>` needs to be substituted with the desired [release tag](https://github.com/intel/intel-device-plugins-for-kubernetes/tags) or `main` to get `devel` images.

Nothing else is needed. But if you want to deploy a customized version of the plugin read further.
Nothing else is needed. See [the development guide](../../DEVEL.md) for details if you want to deploy a customized version of the plugin.
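
After applying, a quick check that the DaemonSet created above has rolled out (a sketch; it assumes the default deployment namespace):

```bash
# Sketch: wait for the plugin DaemonSet to become ready and see where it runs
$ kubectl rollout status daemonset/intel-dlb-plugin --timeout=60s
$ kubectl get daemonset intel-dlb-plugin -o wide
```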

### Getting the source code

```bash
$ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
```

### Deploying as a DaemonSet

To deploy the DLB plugin as a DaemonSet, you first need to build a container image for the
plugin and ensure that it is visible to your nodes.

#### Build the plugin image

The following will use `docker` to build a local container image called
`intel/intel-dlb-plugin` with the tag `devel`.

The image build tool can be changed from the default `docker` by setting the `BUILDER` argument
to the [`Makefile`](Makefile).

```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make intel-dlb-plugin
...
Successfully tagged intel/intel-dlb-plugin:devel
```

#### Deploy plugin DaemonSet

You can then use the [example DaemonSet YAML](/deployments/dlb_plugin/base/intel-dlb-plugin.yaml)
file provided to deploy the plugin. The default kustomization deploys that YAML as is:

```bash
$ kubectl apply -k deployments/dlb_plugin
daemonset.apps/intel-dlb-plugin created
```

### Deploy by hand

For development purposes, it is sometimes convenient to deploy the plugin 'by hand' on a node.
In this case, you do not need to build the complete container image, and can build just the plugin.

#### Build the plugin

First we build the plugin:

```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make dlb_plugin
```

#### Run the plugin as administrator

Now we can run the plugin directly on the node:

```bash
$ sudo -E ${INTEL_DEVICE_PLUGINS_SRC}/cmd/dlb_plugin/dlb_plugin
```

### Verify plugin registration
### Verify Plugin Registration

You can verify the plugin has been registered with the expected nodes by searching for the relevant
resource allocation status on the nodes:
@@ -228,7 +162,7 @@ master
dlb.intel.com/vf: 4
```
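
The query command itself is collapsed in the diff above; one hedged way to list the allocatable `dlb.intel.com` resources per node (the exact command in the README may differ) is:

```bash
# Sketch: print each node name followed by its allocatable dlb.intel.com resources
$ kubectl get nodes -o go-template='{{range .items}}{{.metadata.name}}{{"\n"}}{{range $k, $v := .status.allocatable}}  {{$k}}: {{$v}}{{"\n"}}{{end}}{{end}}' \
    | grep -e '^[^ ]' -e 'dlb\.intel\.com'
```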

### Testing the plugin
## Testing and Demos

We can test the plugin is working by deploying the provided example test images (dlb-libdlb-demo and dlb-dpdk-demo).

96 changes: 12 additions & 84 deletions cmd/dsa_plugin/README.md
@@ -4,16 +4,9 @@ Table of Contents

* [Introduction](#introduction)
* [Installation](#installation)
* [Deploy with pre-built container image](#deploy-with-pre-built-container-image)
* [Getting the source code](#getting-the-source-code)
* [Deploying as a DaemonSet](#deploying-as-a-daemonset)
* [Build the plugin image](#build-the-plugin-image)
* [Deploy plugin DaemonSet](#deploy-plugin-daemonset)
* [Deploy by hand](#deploy-by-hand)
* [Build the plugin](#build-the-plugin)
* [Run the plugin as administrator](#run-the-plugin-as-administrator)
* [Verify plugin registration](#verify-plugin-registration)
* [Testing the plugin](#testing-the-plugin)
* [Pre-built Images](#pre-built-images)
* [Verify Plugin Registration](#verify-plugin-registration)
* [Testing and Demos](#testing-and-demos)

## Introduction

@@ -25,11 +18,9 @@ The DSA plugin and operator optionally support provisioning of DSA devices and w

## Installation

The following sections detail how to obtain, build, deploy and test the DSA device plugin.
The following sections detail how to use the DSA device plugin.

Examples are provided showing how to deploy the plugin either using a DaemonSet or by hand on a per-node basis.

### Deploy with pre-built container image
### Pre-built Images

[Pre-built images](https://hub.docker.com/r/intel/intel-dsa-plugin)
of this component are available on the Docker hub. These images are automatically built and uploaded
@@ -40,26 +31,24 @@ release version numbers in the format `x.y.z`, corresponding to the branches and
repository. Thus the easiest way to deploy the plugin in your cluster is to run this command

```bash
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/dsa_plugin?ref=<REF>
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/dsa_plugin?ref=<RELEASE_VERSION>
daemonset.apps/intel-dsa-plugin created
```

Where `<REF>` needs to be substituted with the desired git ref, e.g. `main`.
Where `<RELEASE_VERSION>` needs to be substituted with the desired [release tag](https://github.com/intel/intel-device-plugins-for-kubernetes/tags) or `main` to get `devel` images.

Nothing else is needed. But if you want to deploy a customized version of the plugin read further.
Nothing else is needed. See [the development guide](../../DEVEL.md) for details if you want to deploy a customized version of the plugin.

### Deploy with initcontainer
#### Automatic Provisioning

There's a sample [DSA initcontainer](https://github.com/intel/intel-device-plugins-for-kubernetes/blob/main/build/docker/intel-idxd-config-initcontainer.Dockerfile) included that provisions DSA devices and workqueues (1 engine / 1 group / 1 wq (user/dedicated)), to deploy:
There's a sample [idxd initcontainer](https://github.com/intel/intel-device-plugins-for-kubernetes/blob/main/build/docker/intel-idxd-config-initcontainer.Dockerfile) included that provisions DSA devices and workqueues (1 engine / 1 group / 1 wq (user/dedicated)), to deploy:

```bash
$ kubectl apply -k deployments/dsa_plugin/overlays/dsa_initcontainer/
```

The provisioning [script](https://github.com/intel/intel-device-plugins-for-kubernetes/blob/main/demo/idxd-init.sh) and [template](https://github.com/intel/intel-device-plugins-for-kubernetes/blob/master/demo/dsa.conf) are available for customization.
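
If you want to customize them, a hedged starting point is to fetch the sample template locally (the raw URL below is assumed to mirror the linked blob path):

```bash
# Sketch: download the sample DSA provisioning template for local editing
$ curl -LO https://raw.githubusercontent.com/intel/intel-device-plugins-for-kubernetes/main/demo/dsa.conf
$ ${EDITOR:-vi} dsa.conf
```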

### Deploy with initcontainer and provisioning config in the ConfigMap

The provisioning config can optionally be stored in the ProvisioningConfig ConfigMap, which is then passed to the initcontainer through a volume mount.

Node-specific configuration is also possible: pass the node name via NODE_NAME into the initcontainer's environment and provide a node-specific profile via the ConfigMap volume mount.
@@ -70,68 +59,7 @@ To create a custom provisioning config:
$ kubectl create configmap --namespace=inteldeviceplugins-system intel-dsa-config --from-file=demo/dsa.conf
```

### Getting the source code

```bash
$ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
```

### Deploying as a DaemonSet

To deploy the DSA plugin as a DaemonSet, you first need to build a container image for the
plugin and ensure that it is visible to your nodes.

#### Build the plugin image

The following will use `docker` to build a local container image called
`intel/intel-dsa-plugin` with the tag `devel`.

The image build tool can be changed from the default `docker` by setting the `BUILDER` argument
to the [`Makefile`](Makefile).

```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make intel-dsa-plugin
...
Successfully tagged intel/intel-dsa-plugin:devel
```

#### Deploy plugin DaemonSet

You can then use the [example DaemonSet YAML](/deployments/dsa_plugin/base/intel-dsa-plugin.yaml)
file provided to deploy the plugin. The default kustomization deploys that YAML as is:

```bash
$ kubectl apply -k deployments/dsa_plugin
daemonset.apps/intel-dsa-plugin created
```

### Deploy by hand

For development purposes, it is sometimes convenient to deploy the plugin 'by hand' on a node.
In this case, you do not need to build the complete container image, and can build just the plugin.

#### Build the plugin

First we build the plugin:

```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make dsa_plugin
```

#### Run the plugin as administrator

Now we can run the plugin directly on the node:

```bash
$ sudo -E ${INTEL_DEVICE_PLUGINS_SRC}/cmd/dsa_plugin/dsa_plugin
device-plugin registered
```

### Verify plugin registration

### Verify Plugin Registration

You can verify the plugin has been registered with the expected nodes by searching for the relevant
resource allocation status on the nodes:

@@ -145,7 +73,7 @@ node1
dsa.intel.com/wq-user-shared: 20
```
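
The query command is collapsed in the diff above; a simpler hedged alternative is to grep the node description for DSA resources:

```bash
# Sketch: show dsa.intel.com entries (capacity/allocatable) for a node
$ kubectl describe node node1 | grep 'dsa.intel.com'
```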

### Testing the plugin
## Testing and Demos

We can test the plugin is working by deploying the provided example accel-config test image.

26 changes: 2 additions & 24 deletions cmd/fpga_crihook/README.md
@@ -4,9 +4,6 @@ Table of Contents

* [Introduction](#introduction)
* [Dependencies](#dependencies)
* [Building](#building)
* [Getting the source code](#getting-the-source-code)
* [Building the image](#building-the-image)
* [Configuring CRI-O](#configuring-cri-o)

## Introduction
@@ -40,26 +37,7 @@ install the following:
All components have the same basic dependencies as the
[generic plugin framework dependencies](../../README.md#about)

## Building

The following sections detail how to obtain, build and deploy the CRI-O
prestart hook.

### Getting the source code

```bash
$ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
```

### Building the image

```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make intel-fpga-initcontainer
...
Successfully tagged intel/intel-fpga-initcontainer:devel
```
See [the development guide](../../DEVEL.md) for details if you want to deploy a customized version of the CRI hook.

## Configuring CRI-O

@@ -68,4 +46,4 @@ file that prevents CRI-O from discovering and configuring hooks automatically.
For FPGA orchestration programmed mode, the OCI hooks are the key component.
Please ensure that your `/etc/crio/crio.conf` parameter `hooks_dir` is either unset
(to enable default search paths for OCI hooks configuration) or contains the directory
`/etc/containers/oci/hooks.d`.
`/etc/containers/oci/hooks.d`.
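
A quick hedged check on a node (output depends on your CRI-O configuration):

```bash
# Sketch: confirm hooks_dir is unset (defaults apply) or points at the OCI hooks directory
$ grep -n 'hooks_dir' /etc/crio/crio.conf || echo "hooks_dir not set; CRI-O default search paths apply"
$ ls /etc/containers/oci/hooks.d
```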