diff --git a/rss.xml b/rss.xml index f52b6687..e7f0fbe3 100644 --- a/rss.xml +++ b/rss.xml @@ -1,8 +1,15 @@ JBoss Tools Aggregated Feed https://tools.jboss.org - 2023-10-12T16:01:20Z + 2023-10-12T18:15:44Z + + JBoss Blogs: How to solve CVE-2023-44487 + 2023-10-12T16:29:29Z + https://www.mastertheboss.com/various-stuff/how-to-solve-cve-2023-44487/ + + + JBoss Blogs: Eclipse Vert.x 4.4.6 released! 2023-10-11T00:00:00Z @@ -17,6 +24,227 @@ + + Red Hat Developer: How to validate GitOps manifests + 2023-10-10T07:00:00Z + 95d3a2f7-477c-4c56-8894-124fa034c2c2 + +

One of the major challenges of managing a cluster and application resources with GitOps is validating that changes to the GitOps manifests are correct. When making changes to objects directly on the cluster, the user is immediately presented with feedback when issues exist. The user is able to troubleshoot and resolve those issues with the knowledge of the context of the changes they just made. When working with GitOps, that feedback cycle is often delayed and users don't receive feedback on changes until they are applied to the cluster, which could be hours or even weeks depending on the approval lifecycle of a change.

+ +

To reduce the number of errors in changes to a GitOps manifest and eliminate the dreaded Unknown state in an ArgoCD application, this article covers several tools and best practices. We will automate these validations with GitHub Actions, but all of them can be configured in another CI tool of your choice.

+ +

Using YAML linters

+ +

YAML is the basis of nearly all GitOps repos. As you would expect, YAML has specific syntax standards for validity. Additionally, there are many recommended best practices that may not be required but can help improve the consistency and readability of the YAML.

+ +

YAML linters are a great tool to help validate requirements on a repo and enforce a consistent style for some of the optional configurations. Many different YAML linter tools exist, but one that I recommend is yamllint. yamllint is built with Python, making it easy to set up on most Linux and macOS environments, where Python is installed by default, and it is easily installed on Windows as well.

+ +

To install yamllint, you can run the following command with pip, the Python package management tool:

+ +
+pip install --user yamllint
+
+ +

Once installed, you can use the yamllint CLI tool to manually validate individual files:

+ +
+yamllint my-file.yaml
+
+ +

Or an entire directory structure:

+ +
+yamllint .
+
+ +

yamllint ships with a default configuration that may emit warnings for some style standards you may not wish to enforce. The defaults can easily be overridden by creating a file called .yamllint in the root of the project. The following is a common configuration used in many GitOps repos:

+ +
+extends: default
+
+rules:
+  document-start: disable
+  indentation:
+    indent-sequences: whatever
+  line-length: disable
+  truthy:
+    ignore: .github/workflows/
+
+ +

Automating with GitHub Actions

+ +

Running yamllint locally is a great option for developers to get feedback while making changes to a repo. However, running yamllint directly in a CI tool such as GitHub Actions can help enforce standards and prevent improperly formatted YAML from ever reaching the main branch.

+ +

To add a yamllint GitHub Action, we can utilize a pre-built action and create a file called .github/workflows/validate-manifests.yaml in your project containing the following:

+ +
+name: Validate Manifests
+on:
+  push:
+    branches:
+      - "*"
+  pull_request:
+    branches:
+      - "*"
+
+jobs:
+  lint-yaml:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Code Checkout
+        uses: actions/checkout@v3
+      - name: Validate YAML
+        uses: ibiqlik/action-yamllint@v3
+        with:
+          format: github
+
+ +

One great feature of yamllint is its native integration with GitHub: it can annotate the offending lines of code directly, making it easier for developers to identify and resolve problems (Figure 1).

+ + +
+
+ A screenshot of a yamllint error Github annotation. +
+
Figure 1: This illustrates a yamllint error GitHub annotation.
+
+
+
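You can preview that annotation output locally before pushing. A guarded sketch (the helper function name is my own, and it skips cleanly if yamllint is not installed):

```shell
# Sketch: preview yamllint's GitHub-annotation output locally.
# preview_annotations is a hypothetical helper, not part of yamllint.
preview_annotations() {
    if ! command -v yamllint >/dev/null 2>&1 || [ ! -f "$1" ]; then
        echo "yamllint not installed or '$1' missing; skipping"
        return 0
    fi
    # "-f github" emits workflow commands (::error file=...,line=...) that
    # GitHub renders as inline annotations on the pull request.
    yamllint -f github "$1"
}
preview_annotations my-file.yaml
```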

The benefits of YAML linters

+ +

YAML linters are designed to enforce generic YAML standards and make sure that objects are properly structured based on those standards. They are great for identifying misconfigurations such as extra lines in files, incorrect indentation, objects incorrectly copied and pasted into a repo, or a field accidentally duplicated in the same object.

+ +
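For example, a fragment like the following (a hypothetical ConfigMap) looks plausible at a glance, but yamllint flags both the duplicated key and the inconsistent indentation:

```yaml
# Hypothetical example: yamllint reports the duplicated "name" key
# (key-duplicates rule) and the over-indented "data" entry (indentation rule).
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
  name: my-config-copy   # duplicate key
data:
      foo: bar           # indentation inconsistent with the rest of the file
```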

YAML linters can also keep GitOps repos more consistent and enforce some chosen standards for all contributors to the repo, making the repo more readable and maintainable. However, YAML linters are generally not able to do any sort of deeper inspection of the objects, and they do not validate the object against the expected schema for that object type.

+ +

Kustomize validation

+ +

Kustomize is one of the most common tools found in a GitOps repo for helping to organize and deploy YAML objects. Repos commonly contain dozens, if not hundreds, of kustomization.yaml files that can be incorrectly configured and cause errors at the deployment step if not validated beforehand.

+ +
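For readers less familiar with the tool, a minimal kustomization.yaml (the resource filenames are hypothetical) looks like this:

```yaml
# Minimal sketch: kustomize build renders the listed resources,
# applying any patches or common labels defined here.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
```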

A simple validation can be performed using the kustomize CLI tool:

+ +
+kustomize build path/to/my/folder
+
+ +

This command will attempt to render the folder using kustomize and display the final YAML objects. If it successfully renders, the kustomization.yaml file is valid. If it does not, kustomize will display an error to troubleshoot the issue.

+ +

When making changes in kustomize, it can be easy to cause unforeseen problems. Therefore, it is always recommended to validate all kustomize resources in a repo, even those that you have not directly changed. A script that looks for all kustomization.yaml files in the repo and attempts to run kustomize build for each folder can help validate that no unintentional errors have been introduced. Fortunately, the Red Hat CoP has already created a script to do exactly that. Copy validate_manifests.sh directly into a GitOps repo. Generally, I store it in a scripts folder, so you can run the script with the following:

+ +
+./scripts/validate_manifests.sh
+
+ +
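If you prefer not to copy the CoP script, its core logic can be sketched roughly as follows (an assumption about its behavior, not a copy of the script; the function name is my own):

```shell
# Sketch: find every kustomization.yaml and run kustomize build on its folder.
# Skips cleanly when kustomize is not on the PATH.
validate_kustomizations() {
    if ! command -v kustomize >/dev/null 2>&1; then
        echo "kustomize not found; skipping validation"
        return 0
    fi
    for dir in $(find "${1:-.}" -name "kustomization.yaml" -exec dirname {} \; | sort -u); do
        echo "Validating ${dir}"
        kustomize build "${dir}" > /dev/null || return 1
    done
    echo "All kustomizations validated"
}
validate_kustomizations .
```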

Automating with GitHub Actions

+ +

Just like YAML linting, validating kustomize in a CI tool is an important step toward adding confidence in changes to a repo and ensuring that no errors are introduced into the main branch.

+ +

Conveniently, GitHub Actions already has the kustomize tool built in, so we can create a simple action that runs the previously mentioned script by adding a new job to the same validation workflow we created before:

+ +
+jobs:
+  lint-kustomize:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Code Checkout
+        uses: actions/checkout@v3
+      - name: Verify Kustomize CLI Installation
+        run: |
+          which kustomize
+          kustomize version
+      - name: Validate Manifests
+        run: |
+          ./scripts/validate_manifests.sh
+
+ +

The benefits of Kustomize validation

+ +

Kustomize is a powerful tool, but one where human error can easily cause problems. The simple practice of validating every kustomization.yaml file in a repo can reduce the number of errors created by misspelling a filename or forgetting to update kustomization.yaml after renaming a file. This check can also identify cases where objects are updated and patches that target those objects are no longer valid.

+ +

Additionally, this validation can help ensure that you don't accidentally break dependencies where another kustomization.yaml file inherits from a folder you did change. You can quickly catch problems before changes are merged into the main branch, such as when an object is removed from a base folder while that same object is still referenced in an overlay.

+ +

Using the Helm tool

+ +

Helm is another popular tool that is utilized in GitOps repos for organizing complex applications. Helm is an extremely powerful tool but one that can be prone to errors due to its complex syntax and structure.

+ +

Fortunately, Helm provides a built-in lint command in the CLI to help validate charts:

+ +
+helm lint path/to/my/chart
+
+ +

Helm's linting capabilities help verify that the template code renders correctly and that all of the necessary values are present, and they emit warnings for other recommendations.

+ +

As with the kustomize script, we can automate validating all of the charts in the repo by searching for any Chart.yaml files. The following script can be created in a file called validate_charts.sh:

+ +
+#!/bin/sh
+
+for i in $(find "${HELM_DIRS:-.}" -name "Chart.yaml" -exec dirname {} \;);
+do
+
+    echo
+    echo "Validating $i"
+    echo
+
+    helm lint "$i"
+
+    build_response=$?
+
+    if [ $build_response -ne 0 ]; then
+        echo "Error linting $i"
+        exit 1
+    fi
+
+done
+
+echo
+echo "Charts successfully validated!"
+
+ +

You can easily validate all of the charts in a repository at once by running the following command:

+ +
+./scripts/validate_charts.sh
+
+ +

This new script can be triggered from a GitHub Action just like the previous kustomize check. However, in this case, Helm is not built into the base action, so it must be installed as follows:

+ +
+jobs:
+  helm-lint:
+    runs-on: ubuntu-latest
+    env:
+      HELM_VERSION: 3.12.3
+      HELM_DIRS: .
+    steps:
+      - name: Install Helm
+        run: |
+          curl -fsSL -o /tmp/helm.tar.gz https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz
+          tar -xzf /tmp/helm.tar.gz -C /tmp
+          sudo mv /tmp/linux-amd64/helm /usr/bin/helm
+          sudo chmod +x /usr/bin/helm
+      - name: Code Checkout
+        uses: actions/checkout@v3
+      - name: Validate Charts
+        run: |
+          ./scripts/validate_charts.sh
+
+ +

The benefits of Helm lint

+ +

Helm linting can help to catch many issues. Helm is notorious for its complexity and the challenges that the templating language can introduce. You can catch common issues, such as misspelling a value name or incorrectly scoping a variable, with the Helm linting tool. Additionally, a Helm lint can catch other configuration issues in a chart such as an invalid reference in the Chart.yaml file.

+ +

Helm linting does not do any validation on the YAML from the rendered charts. It only validates that the chart can be rendered. In some cases, it may be beneficial to apply additional validations on the rendered charts themselves.

+ +
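One low-effort way to validate the rendered output is to pipe helm template through the same YAML linter used earlier. A guarded sketch (the release name, chart path, and helper name are placeholders):

```shell
# Sketch: render the chart and lint the resulting YAML on stdin.
# Skips cleanly if helm/yamllint are missing or the chart path does not exist.
render_and_lint() {
    if ! command -v helm >/dev/null 2>&1 || ! command -v yamllint >/dev/null 2>&1 || [ ! -d "$1" ]; then
        echo "helm or yamllint not available, or chart '$1' missing; skipping"
        return 0
    fi
    helm template my-release "$1" | yamllint -
}
render_and_lint path/to/my/chart
```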

Next steps and limitations

+ +

The validations discussed are a great first step for improving confidence in changes to a GitOps repo before deployment. Running these validations can help you avoid common GitOps mistakes and allow you to catch and resolve problems before the changes are ever applied to the cluster.

+ +

One major limitation of these checks is the lack of validation of the objects being applied to a cluster. If a field only accepts a value of true or false, the validations discussed here will not be able to identify an invalid value in that field.

+ +
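For instance, this hypothetical Deployment fragment is valid YAML and passes every check above, yet the cluster would reject it because spec.paused expects a boolean:

```yaml
# Hypothetical example: syntactically valid, schema-invalid.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  paused: "yes please"   # the API schema only accepts true or false
```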

More specialized tools such as kubeval and kubeconform can help validate standard Kubernetes objects, but out of the box they lack support for Red Hat OpenShift-specific objects or CustomResources from Operators. Extracting schemas for those objects is possible, which can extend validation even beyond standard Kubernetes objects.

+ +
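A guarded sketch of such a check with kubeconform (the CRD schema directory is a hypothetical path, and the helper name is my own; the templated -schema-location placeholders follow kubeconform's documented convention):

```shell
# Sketch: validate a manifest directory against Kubernetes schemas,
# with an extra local schema directory for CRDs. Skips cleanly if unavailable.
validate_schemas() {
    if ! command -v kubeconform >/dev/null 2>&1 || [ ! -d "$1" ]; then
        echo "kubeconform not installed or '$1' missing; skipping"
        return 0
    fi
    kubeconform -summary \
        -schema-location default \
        -schema-location 'crd-schemas/{{ .ResourceKind }}_{{ .ResourceAPIVersion }}.json' \
        "$1"
}
validate_schemas manifests/
```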

Additionally, you can perform validations directly against a target cluster itself using the --dry-run=server flag with oc apply. Using the dry-run flag allows the objects to be validated against the cluster itself and provides an even greater degree of confidence that objects applied to the cluster will be successful.

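A guarded sketch of that server-side check (assumes an authenticated oc session; the manifests/ path and helper name are placeholders):

```shell
# Sketch: server-side dry-run of every manifest in a folder. The API server
# performs full schema and admission validation without persisting anything.
dry_run_validate() {
    if ! command -v oc >/dev/null 2>&1 || [ ! -d "$1" ]; then
        echo "oc not installed or '$1' missing; skipping"
        return 0
    fi
    oc apply --dry-run=server -f "$1"
}
dry_run_validate manifests/
```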
+The post How to validate GitOps manifests appeared first on Red Hat Developer. +

+
Red Hat Developer: How custom SELinux policies secure servers and containers 2023-10-10T07:00:00Z @@ -306,309 +534,88 @@ Or

5. Use the semanage command to import the configuration file onto the new system.

-#semanage import -f./my-selinux-settings.mod
- -

[Note: This shared configuration file is compatible only with specific OS versions, such as RHEL 9 to RHEL 9 and RHEL 8 to RHEL 8.]

- -

Apply SELinux to the containers

- -

SELinux policies can be generated for containers using the udica package in the UBI8 image. The udica package enables you to create a customized security policy for precise control over a container's access to host system resources, including storage, devices, and the network. This capability helps prevent security violations and simplifies regulatory compliance when deploying containers.

- -

1. Before proceeding, please install the necessary dependency packages.

- -
-#yum install -y udica
- -

2. Install a containerization tool such as Podman or Docker. We will be using Podman as the container tool.

- -
-# yum install podman -y
- -

3. Launch the container using a UBI8 image with volume mounts for the /home directory (read-only permissions) and the /var/spool directory (read and write permissions). Additionally, expose port 80.

- -
-#podman run --env container=podman -v /home:/home:ro -v /var/spool:/var/spool:rw -p 80:80 -dit ubi8 bash
- -

4. Inspect the running container using the provided Podman command and collect the CONTAINER ID.

- -
-#podman ps
- -

5. Gather all running container policies into a .json file.

- -
-# podman inspect 567a363etfle > container.json
-
-# udica -j container.json my_container
-Policy my_container with container id 567a363etfle created!
- -

6. Load the policy module from the udica output in the previous step.

- -
-#semodule -i my_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil}
- -

7. Stop the container using the podman stop command, and then start it again with the --security-opt label=type:my_container.process option.

- -
-#podman stop 567a363etfle
-
-#podman run --security-opt label=type:my_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 80:80 -dit ubi8 bash
- -

Validating SELinux in containers

- -

8. Verify that the container is running with the my_container.process type, and access the running container using exec.

- -
-# ps -efZ | grep my_container.process
-
-# podman exec -it 567a363etfle bash
- -

9. Verify that SELinux is functioning properly. Proceed to perform vulnerability testing activities within the running container.

- -

Install the nmap-ncat package in the container and attempt to redirect port 80 to port 2567.

- -
-[root@567a363etfle]# cd /var/spool
-[root@567a363etfle]# yum install nmap-ncat
-[root@567a363etfle]# nc -lvp 80
-...
-Ncat: Listening on :::80
-Ncat: Listening on 0.0.0.0:80
-[root@567a363etfle]# nc -lv- 2567
-...
-Ncat: bind to :::2567: Permission denied. QUITTING.
- -

As expected, SELinux is successfully preventing security risk activities within the containers as well.

- -

Find more resources

- -

For a deeper and practical understanding of Red Hat Enterprise Linux, you can engage in thoughtfully curated hands-on labs by Red Hat. Red Hat Universal Base Images (UBI) are container-based and operating system images with complementary runtime languages and packages. Try Red Hat UBI on curated Red Hat UBI hands-on lab.

- -

In this article, we have meticulously crafted bespoke SELinux policies and seamlessly deployed them across an extensive fleet of servers. Additionally, we have seamlessly integrated these policies into containers leveraging the UBI8 container image.

- -

Furthermore, you have the option to obtain tailored Red Hat Enterprise Linux images designed for AWS, Google Cloud Platform, Microsoft Azure, and VMware, facilitating their seamless deployment on your chosen platform.

-The post How custom SELinux policies secure servers and containers appeared first on Red Hat Developer. -

-
- - Red Hat Developer: How to validate GitOps manifests - 2023-10-10T07:00:00Z - 95d3a2f7-477c-4c56-8894-124fa034c2c2 - -

One of the major challenges of managing a cluster and application resources with GitOps is validating that changes to the GitOps manifests are correct. When making changes to objects directly on the cluster, the user is immediately presented with feedback when issues exist. The user is able to troubleshoot and resolve those issues with the knowledge of the context of the changes they just made. When working with GitOps, that feedback cycle is often delayed and users don't receive feedback on changes until they are applied to the cluster, which could be hours or even weeks depending on the approval lifecycle of a change.

- -

To reduce the number of errors in changes to a GitOps manifest and eliminate the dreaded Unknown state in an ArgoCD application, this article will discuss tools and best practices. We will discuss automating these validations with GitHub actions, but all of these validations can be configured with another CI tool of your choice.

- -

Using YAML linters

- -

YAML is the basis of nearly all GitOps repos. As you would expect, YAML has specific syntax standards for validity. Additionally, there are many recommended best practices that may not be required but can help improve the consistency and readability of the YAML.

- -

YAML linters are a great tool to help validate requirements on a repo and enforce consistent style for some of the optional configurations. Many different YAML linter tools exists, but one that I recommend is yamllint. The yamllint is built with Python, making it easy to set up on most Linux and MacOS environments since Python is installed by default and easily installed on any Windows environment.

- -

To install yamllint you can run the following command with pip, the Python package management tool:

- -
-pip install --user yamllint
-
- -

Once installed, users can use the yamllint cli tool to manually validate individual files:

- -
-yamllint my-file.yaml
-
- -

Or an entire directory structure:

- -
-yamllint .
-
- -

The yamllint provides a default configuration that may provide warnings for some style standards that you many not wish to enforce. The default options can easily be configured by creating a file called .yamllint in the root of the project. The following is a common configuration used in many GitOps repos:

- -
-extends: default
-
-rules:
-  document-start: disable
-  indentation:
-    indent-sequences: whatever
-  line-length: disable
-  truthy:
-    ignore: .github/workflows/
-
- -

Automating with GitHub actions

- -

Running yamllint locally is a great option for developers to get feedback while making changes to a repo, however running yamllint directly in a CI tool such as GitHub actions can help enforce standards and prevent improperly formatted YAML from ever making it into the main branch.

- -

To add a yamllint GitHub action, we can utilize a pre-built GitHub action and create a file called .github/workflows/validate-manifests.yaml in your project containing the following:

- -
-name: Validate Manifests
-on:
-  push:
-    branches:
-      - "*"
-  pull_request:
-    branches:
-      - "*"
-
-jobs:
-  lint-yaml:
-    runs-on: ubuntu-latest
-    steps:
-      - name: Code Checkout
-        uses: actions/checkout@v3
-      - name: Validate YAML
-        uses: ibiqlik/action-yamllint@v3
-        with:
-          format: github
-
- -

One great feature of yamllint is that it has native integration with GitHub and can do annotations directly on the lines of code with issues, making it easier for developers to identify problems and resolve them (Figure 1).

- - -
-
- A screenshot of a yamllint error Github annotation. -
-
Figure 1: This illustrates a yamllint error GitHub annotation.
-
-
-

The benefits of YAML linters

- -

YAML linters are designed to enforce generic YAML standards and make sure that objects are properly structured based on those generic standards. YAML linters are great for identifying issues with misconfigurations in YAML, such as extra lines in files or incorrect tabbing in objects. YAML linters can be great for catching problems such as objects incorrectly copied and pasted into a repo or a field accidentally duplicated in the same object.

- -

YAML linters can also keep GitOps repos more consistent and enforce some chosen standards for all contributors to the repo, making the repo more readable and maintainable. However, YAML linters are generally not able to do any sort of deeper inspection of the objects, and they do not validate the object against the expected schema for that object type.

- -

Kustomize validation

- -

Kustomize is one of the most common tools found in a GitOps repo for helping to organize and deploy YAML objects. Repos can commonly contain dozen, if not hundreds of kustomization.yaml files that can be incorrectly configured and cause errors when you reach the deployment step if not validated beforehand.

- -

A simple validation can be performed using the kustomize CLI tool:

- -
-kustomize build path/to/my/folder
-
- -

This command will attempt to render the folder using kustomize and display the final YAML objects. If it successfully renders, the kustomization.yaml file is valid. If it does not, kustomize will display an error to troubleshoot the issue.

- -

When making changes in kustomize, it can be easy to cause unforeseen problems. Therefore, it is always recommended to validate all kustomize resources in a repo, even those that you have not directly changed. A script that looks for all kustomization.yaml files in the repo, and attempts to run kustomize build for each folder can help to validate that no unintentional errors have been created. Fortunately, the Red Hat CoP has already created a script to do exactly that. Copy the validate_manifests.sh directly into a GitOps repo. Generally, I store it in a scripts folder, but you can run the script with the following:

- -
-./scripts/validate_manifests.sh
-
+#semanage import -f./my-selinux-settings.mod -

Automating with GitHub Actions

+

[Note: This shared configuration file is compatible only with specific OS versions, such as RHEL 9 to RHEL 9 and RHEL 8 to RHEL 8.]

-

Just like the YAML lint, validating the kustomize in a CI tool is an important step to adding confidence to changes to a repo and ensuring that no errors are introduced into the main branch.

+

Apply SELinux to the containers

-

Conveniently, GitHub Actions already has the kustomize tool built in so we can create a simple action to run the previously mentioned script by adding a new job to the same validation action we created before:

+

SELinux policies can be generated for containers using the udica package in the UBI8 image. The udica package enables you to create a customized security policy for precise control over a container's access to host system resources, including storage, devices, and the network. This capability helps prevent security violations and simplifies regulatory compliance when deploying containers.

-
-jobs:
-  lint-kustomize:
-    runs-on: ubuntu-latest
-    steps:
-      - name: Code Checkout
-        uses: actions/checkout@v3
-      - name: Verify Kustomize CLI Installation
-        run: |
-          which kustomize
-          kustomize version
-      - name: Validate Manifests
-        run: |
-          ./scripts/validate_manifests.sh
-
+

1. Before proceeding, please install the necessary dependency packages.

-

The benefits of Kustomize validation

+
+#yum install -y udica
-

Kustomize is a powerful tool, but one that is easy for human error to cause problems. This simple practice of validating every kustomization.yaml file in a repo can reduce the number of errors created by accidentally misspelling a filename or forgetting to update a filename in the kustomization.yaml file after renaming it. This kustomize check can also identify problems where objects are updated and any patches that impact those objects are no longer valid.

+

2. Install a containerization tool such as Podman or Docker. We will be using Podman as the container tool.

-

Additional, this validation can help to ensure that you don't accidentally break dependencies where another kustomization.yaml file inherits from a folder you did change. You can quickly catch problems before changes are merged into the main branch, such as when an object is removed from a base folder and that same object is being referenced in the overlay.

+
+# yum install podman -y
-

Using the Helm tool

+

3. Launch the container using a UBI8 image with volume mounts for the /home directory (read-only permissions) and the /var/spool directory (read and write permissions). Additionally, expose port 80.

-

Helm is another popular tool that is utilized in GitOps repos for organizing complex applications. Helm is an extremely powerful tool but one that can be prone to errors due to its complex syntax and structure.

+
+#podman run --env container=podman -v /home:/home:ro -v /var/spool:/var/spool:rw -p 80:80 -dit ubi8 bash
-

Fortunately, Helm provides a built in tool to help validate charts within the CLI:

+

4. Inspect the running container using the provided Podman command and collect the CONTAINER ID.

-helm lint path/to/my/chart
-
- -

Helm's linting capabilities will help to validate the template code to ensure that it is valid, verify all of the necessary values are present, and emit warnings for other recommendations.

+#podman ps -

As with the kustomize script, we can automate validating all of the charts in the repo by searching for any Chart.yaml files. The following script can be created in a file called validate_charts.sh:

+

5. Gather all running container policies into a .json file.

-#!/bin/sh
-
-for i in `find "${HELM_DIRS}" -name "Chart.yaml" -exec dirname {} \;`;
-do
-
-    echo
-    echo "Validating $i"
-    echo
+# podman inspect 567a363etfle > container.json
 
-    helm lint $i
+# udica -j container.json my_container
+Policy my_container with container id 567a363etfle created!
- build_response=$? +

6. Load the policy module from the udica output in the previous step.

- if [ $build_response -ne 0 ]; then - echo "Error linting $i" - exit 1 - fi +
+#semodule -i my_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil}
-done +

7. Stop the container using the podman stop command, and then start it again with the --security-opt label=type:my_container.process option.

-echo -echo "Charts successfully validated!" -
+
+#podman stop 567a363etfle
 
-

You can easily validate all of the charts in a repository at once by running the following command:

+#podman run --security-opt label=type:my_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 80:80 -dit ubi8 bash
-
-./scripts/validate_charts.sh
-
+

Validating SELinux in containers

-

This new script can be triggered from a GitHub Action just like the previous kustomize check. However in this case, helm is not built into the base action, so it must be installed as follows:

+

8. Verify that the container is running with the my_container.process type, and access the running container using exec.

-jobs:
-  helm-lint:
-    runs-on: ubuntu-latest
-    env:
-      HELM_VERSION: 3.12.3
-      HELM_DIRS: .
-    steps:
-      - name: Install Helm
-        run: |
-          sudo curl -L -o /usr/bin/helm https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz
-          sudo chmod +x /usr/bin/helm
-      - name: Code Checkout
-        uses: actions/checkout@v3
-      - name: Validate Charts
-        run: |
-            ./scripts/validate_charts.sh
-
+# ps -efZ | grep my_container.process -

The benefits of Helm lint

+# podman exec -it 567a363etfle bash
-

Helm linting can help to catch many issues. Helm is notorious for its complexity and the challenges that the templating language can introduce. You can catch common issues, such as misspelling a value name or incorrectly scoping a variable, with the Helm linting tool. Additionally, a Helm lint can catch other configuration issues in a chart such as an invalid reference in the Chart.yaml file.

+

9. Verify that SELinux is functioning properly. Proceed to perform vulnerability testing activities within the running container.

-

Helm linting does not do any validation on the YAML from the rendered charts. It only validates that the chart can be rendered. In some cases, it may be beneficial to apply additional validations on the rendered charts themselves.

+

Install the nmap-ncat package in the container and attempt to redirect port 80 to port 2567.

-

Next steps and limitations

+
+[root@567a363etfle]# cd /var/spool
+[root@567a363etfle]# yum install nmap-ncat
+[root@567a363etfle]# nc -lvp 80
+...
+Ncat: Listening on :::80
+Ncat: Listening on 0.0.0.0:80
+[root@567a363etfle]# nc -lvp 2567
+...
+Ncat: bind to :::2567: Permission denied. QUITTING.
-

The validations discussed are a great first step for improving the confidence of changes in a GitOps repo before deployment. Running these validations can help you avoid common mistakes in GitOps and allow you to catch and resolve problems before they are ever attempted to be validated against the cluster.

+

As expected, SELinux is successfully preventing security risk activities within the containers as well.

-

One major limitation of these checks is the lack of validation of the objects being applied to a cluster. If a field only accepts a value of true or false the validations discussed today will not be able to identify an invalid configuration such as this.

+

Find more resources

-

More specialized tools such as kubeval and kubeconform can help to validate standard Kubernetes objects, but they lack support for Red Hat OpenShift specific objects or CustomResources from Operators out of the box. Extracting schemas for those objects is possible, which can help to extend validation of objects even beyond standard k8s objects.

+

For a deeper and practical understanding of Red Hat Enterprise Linux, you can engage in thoughtfully curated hands-on labs by Red Hat. Red Hat Universal Base Images (UBI) are container-based and operating system images with complementary runtime languages and packages. Try Red Hat UBI on curated Red Hat UBI hands-on lab.

-

Additionally, you can perform validations directly against a target cluster itself using the --dry-run=server flag with oc apply. Using the dry-run flag allows the objects to be validated against the cluster itself and provides an even greater degree of confidence that objects applied to the cluster will be successful.

-The post How to validate GitOps manifests appeared first on Red Hat Developer. +

In this article, we have meticulously crafted bespoke SELinux policies and seamlessly deployed them across an extensive fleet of servers. Additionally, we have seamlessly integrated these policies into containers leveraging the UBI8 container image.

+ +

Furthermore, you have the option to obtain tailored Red Hat Enterprise Linux images designed for AWS, Google Cloud Platform, Microsoft Azure, and VMware, facilitating their seamless deployment on your chosen platform.

+The post How custom SELinux policies secure servers and containers appeared first on Red Hat Developer.

@@ -1398,46 +1405,194 @@ Caused by: java.lang.IllegalArgumentException: WFCMTOOL000004: Server name = JBo

Basically, the migration logs will be transparent, which is very helpful for diagnosing specific issues. One of the most common problems is migrating from an unsupported version, such as attempting JBoss EAP 7.0 to JBoss EAP 7.3 (JBoss EAP 7.3 only supports migration from JBoss EAP 7.2). If in doubt, you can set the logger.level (found in logging.properties).

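As a sketch, raising the verbosity might look like the following logging.properties fragment (the exact property set varies by tool version):

```properties
# logging.properties fragment (sketch): increase migration tool log detail
logger.level=DEBUG
```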
+

Finally, JBoss EAP 7.4 lets you migrate straight from JBoss EAP 7.1 and JBoss EAP 7.2. This avoids chain scenarios needing subsequent migrations (e.g., JBoss EAP 7.1 to JBoss EAP 7.2 to JBoss EAP 7.3 to JBoss EAP 7.4), which we encountered before this tech preview came along.
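As a sketch of a direct migration run (installation paths are illustrative assumptions), the tool is invoked from the target installation and pointed at the source:

```shell
# Run from the target JBoss EAP 7.4 installation; -s points at the
# source installation whose configuration should be migrated.
$EAP_7_4_HOME/bin/jboss-server-migration.sh -s /opt/jboss-eap-7.2
```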

+ +

FAQ

+ +

Q. Is it possible to keep two JBoss EAP installations at the same time, the new and the old?

+ +

A. Yes, as long as the installation was done using the ZIP distribution. RPM installations do not allow multiple JBoss EAP installations. To run two EAP servers simultaneously, verify that they will not have port conflicts.
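One common way to avoid port conflicts when running two instances side by side is a socket binding port offset; the offset value below is only an example:

```shell
# Start the second EAP instance with all ports shifted by 100 so it
# does not collide with the first instance's default ports.
$EAP_NEW_HOME/bin/standalone.sh -Djboss.socket.binding.port-offset=100
```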

+ +

Q. Can I run the migration from JBoss EAP 7.2 to JBoss EAP 7.4?

+ +

A. Yes, this is available as a feature in JBoss EAP 7.4.2+.

+ +

Q. Does the migration tool help migrate deployments?

+ +

A. No, the JBoss Server Migration Tool does not migrate the deployment code; it migrates the deployment only in the sense that it copies it from source to target. To migrate the deployment code itself, you must use the Migration Toolkit for Runtimes (or the Migration Toolkit for Applications, for OpenShift Container Platform migrations).

+ +

Additional resources

+ +

To learn more, refer to the JBoss Server Migration Tool migration guide in PDF or HTML format. The Red Hat Support team created the article Using JBoss Server Migration Tool to upgrade from JBoss EAP 7.x to 7.current, which covers several common issues and a simple example like the one explained above.

+ +

Regarding JBoss EAP 7.3 breaking JBoss EAP 7.4, refer to the solution described in Using EAP 7.3 configuration in EAP 7.4 breaks the backward compatibility.

+ +

For any other specific inquiries, please open a case with Red Hat support. Our global team of experts can help you with any issues.

+The post How to migrate server configurations to JBoss EAP 7.4 appeared first on Red Hat Developer. +

+
+ + Quarkus: Quarkus 3.4.2 released - Maintenance release + 2023-10-05T00:00:00Z + https://quarkus.io/blog/quarkus-3-4-2-released/ + + Today, we released Quarkus 3.4.2, our first maintenance release for our 3.4 release train (we skipped 3.4.0). It includes a bunch of bugfixes, together with documentation improvements. Update To update to Quarkus 3.4.2, we recommend updating to the latest version of the Quarkus CLI and run: quarkus update To migrate... + + + JBoss Blogs: Quarkus 3.4.2 released - Maintenance release + 2023-10-05T00:00:00Z + https://quarkus.io/blog/quarkus-3-4-2-released/ + + + + + Red Hat Developer: About Argo CD ApplicationSet and SCM Provider generator + 2023-10-04T07:00:00Z + eb3cab1e-7d1d-407e-815c-071817e984fe + +

The ApplicationSet controller is a part of Argo CD that adds support for an ApplicationSet CustomResourceDefinition (CRD). The ApplicationSet controller helps to add Application automation and improve multicluster support and cluster multitenant support within Argo CD.

+ +

ApplicationSet provides the following benefits:

+ +
  • The ability to use a single Kubernetes manifest to target multiple Kubernetes clusters and deploy multiple applications from one or multiple Git repositories with Argo CD.
  • Improved support for monorepos. In the context of Argo CD, a monorepo is multiple Argo CD Application resources defined within a single Git repository.
  • Within multitenant clusters, improved ability for individual cluster tenants to deploy applications using Argo CD (without needing to involve privileged cluster administrators in enabling the destination clusters/namespaces).

You can view all the supported fields for ApplicationSet in the Argo CD documentation.

+ +

ApplicationSet generators

+ +

Generators generate parameters and are rendered into the template field of the ApplicationSet resource. ApplicationSet supports a variety of generators, as described below:

+ +
  • List generator: Generates parameters based on an arbitrary list of key/value pairs.
  • Cluster generator: Allows you to target Argo CD applications to clusters based on the list of clusters defined within (and managed by) Argo CD.
  • Git generator: Allows you to create applications based on files within a Git repository or based on the directory structure of a Git repository.
  • Matrix generator: Can be used to combine the generated parameters of two separate generators.
  • Merge generator: Can be used to merge the generated parameters of two or more generators.
  • SCM Provider generator: Uses the API of a source code management (SCM) provider (e.g., GitHub) to discover repositories within an organization automatically.
  • Pull Request generator: Uses the API of a Source-Code-Management-as-a-Service (SCMaaS) provider (e.g., GitHub) to discover open pull requests within a repository automatically.
  • Cluster Decision Resource generator: Used to interface with Kubernetes custom resources that use custom resource-specific logic to decide which set of Argo CD clusters to deploy to.
  • Plug-in generator: Makes RPC HTTP requests to provide parameters.
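To illustrate how generator parameters are rendered into the template, a minimal sketch using the simplest of these, the List generator, follows the pattern shown in the upstream Argo CD documentation (the cluster names, repoURL, and path are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
spec:
  generators:
  # Each list element produces one rendered Application.
  - list:
      elements:
      - cluster: engineering-dev
        url: https://kubernetes.default.svc
      - cluster: engineering-prod
        url: https://kubernetes.default.svc
  template:
    metadata:
      # {{ cluster }} and {{ url }} are substituted per element.
      name: '{{ cluster }}-guestbook'
    spec:
      project: default
      source:
        repoURL: https://github.com/argoproj/argo-cd.git
        targetRevision: HEAD
        path: applicationset/examples/list-generator/guestbook/{{ cluster }}
      destination:
        server: '{{ url }}'
        namespace: guestbook
```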

This article focuses exclusively on the SCM Provider generator and its use cases.

+ +

Source code management

+ +

SCM is used to track modifications to a source code repository. SCM tracks a running history of changes to a code base and helps resolve conflicts when merging updates from multiple contributors. SCM is also synonymous with version control.

+ +

As software projects grow in lines of code and contributor headcount, the costs of communication overhead and management complexity also grow. SCM is a critical tool to alleviate the organizational strain of growing development costs.

+ +

SCM Provider generator

+ +

The SCM Provider generator uses the API of a SCMaaS provider such as GitHub to discover repositories within an organization automatically. This fits well with GitOps layout patterns that split microservices across many repositories.

+ +

As of Argo CD version 2.8, the supported SCM providers are:

+ +
  • GitHub
  • GitLab
  • Gitea
  • Bitbucket
  • Azure DevOps
  • Bitbucket Cloud

You can configure the ApplicationSet with SCM Provider as follows:

+ +
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapps
spec:
  generators:
  - scmProvider:
      # Which protocol to clone using.
      cloneProtocol: ssh
      # See below for provider-specific options.
      github:
        # ...

cloneProtocol defines the protocol to use for the SCM URL. The default is provider-specific, but ssh is used if possible.

+ +

The provider's parameters are described in the Argo CD documentation. We will use GitHub for the example in this article.

+ +

GitHub

+ +

The GitHub mode uses the GitHub API to scan an organization in either github.com or GitHub Enterprise:

+ +
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
spec:
  generators:
  - scmProvider:
      github:
        organization: argoproj
      cloneProtocol: https
      filters:
      - repositoryMatch: example-apps
  template:
    metadata:
      name: '{{ repository }}-guestbook'
    spec:
      project: "default"
      source:
        repoURL: '{{ url }}'
        targetRevision: '{{ branch }}'
        path: guestbook
      destination:
        server: https://kubernetes.default.svc
        namespace: guestbook

The parameters are explained below:

+
  • organization: In the example above, we used argoproj. If you have multiple organizations, use multiple generators.
  • api: This field is optional; however, if you are using GitHub Enterprise, you will need to specify the URL to access the API. In our example, we are not using GitHub Enterprise.
  • filters: SCM Provider supports multiple filters. We are using a repositoryMatch filter matching the repository with name example-apps. You can find more filters in the documentation.
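For reference, a filters entry can combine several conditions; the sketch below (repository name pattern, path, and label are hypothetical) uses filter fields documented for the SCM Provider generator:

```yaml
filters:
# Keep only repositories whose name starts with "myapp" ...
- repositoryMatch: ^myapp
  # ... that contain a Kustomize entry point ...
  pathsExist: [kubernetes/kustomization.yaml]
  # ... and carry an opt-in repository label.
  labelMatch: deploy-ok
```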

Use case

+

There is an e-commerce website with multiple services for orders, inventory, payments, etc.; each service is stored in a separate Git repository within the same organization. When we deploy the e-commerce website, we want all the services to be deployed simultaneously for the website to start successfully.

+

In this case, we can use the SCM Provider generator to generate all the services belonging to the same organization. You can configure the ApplicationSet in the following format:

+
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: ecommerce-website
spec:
  generators:
  - scmProvider:
      github:
        organization: ecommerceWebsite
      cloneProtocol: https
  template:
    metadata:
      name: '{{ repository }}'
    spec:
      project: "default"
      source:
        repoURL: '{{ url }}'
        targetRevision: '{{ branch }}'
        path: app
      destination:
        server: https://kubernetes.default.svc
        namespace: ecomm
+

The ApplicationSet above will read all the repositories in the GitHub organization ecommerceWebsite and deploy all the services in the repositories within the path app in each repository.

+

Pros

+

Deploying multiple applications and services in the same organization is quick and easy. You can also use the SCM Provider generator to dynamically generate ephemeral environments for testing, or prod-like environments.

+

Cons

+

Once an entire organization is added to the SCM Provider generator, it will pick up every repository that matches the configured filters. If you create a new repository within the same organization and do not want it deployed, you must ensure that the repository does not match any of the filters added to the SCM Provider generator. This becomes additional effort if the SCM Provider generator is not configured carefully.

+

Conclusion


The ApplicationSet custom resource helps you easily and quickly manage multitenant or cluster application deployment. The SCM Provider generator is helpful when you want to manage all repositories of a known pattern within an organization and deploy resources to Argo CD by automatically identifying changes to any of the repositories in the organization.

+The post About Argo CD ApplicationSet and SCM Provider generator appeared first on Red Hat Developer.

- - JBoss Blogs: Quarkus 3.4.2 released - Maintenance release - 2023-10-05T00:00:00Z - https://quarkus.io/blog/quarkus-3-4-2-released/ - - - - - Quarkus: Quarkus 3.4.2 released - Maintenance release - 2023-10-05T00:00:00Z - https://quarkus.io/blog/quarkus-3-4-2-released/ - - Today, we released Quarkus 3.4.2, our first maintenance release for our 3.4 release train (we skipped 3.4.0). It includes a bunch of bugfixes, together with documentation improvements. Update To update to Quarkus 3.4.2, we recommend updating to the latest version of the Quarkus CLI and run: quarkus update To migrate... - Red Hat Developer: 5 steps to build a self-healing server with Alertmanager 2023-10-04T07:00:00Z @@ -1660,154 +1815,6 @@ services:

Get started with Ansible Automation Platform by exploring interactive hands-on labs. Download Ansible Automation Platform at no cost and begin your automation journey.

The post 5 steps to build a self-healing server with Alertmanager appeared first on Red Hat Developer.


@@ -2176,18 +2183,18 @@ The post In a previous post, we have seen how to implement a CRUD application using virtual threads in Quarkus. The following video shows how to test this application and, specifically, how to detect pinning. The complete code of the application and the tests are available in the virtual threads demos repository.... - JBoss Blogs: Live diff and update quarkus deployments in OpenShift using Jetbrains IDEA + Quarkus: Live diff and update quarkus deployments in OpenShift using Jetbrains IDEA 2023-09-28T00:00:00Z https://quarkus.io/blog/live-diff-and-update-using-idea/ - + Prerequisites OpenShift CLI, oc: installation instructions Kubernetes by Red Hat, Kubernetes Plugin for JetBrains IDEA Marketplace Quarkus CLI, Quarkus: Installation Instructions Optional: Source code for this blog post: https://github.com/adietish/openshift-quickstart IntelliJ Kubernetes Plugin This shows you how the Kubernetes Plugin for Jetbrains IDEA is a great companion when deploying quarkus apps... - Quarkus: Live diff and update quarkus deployments in OpenShift using Jetbrains IDEA + JBoss Blogs: Live diff and update quarkus deployments in OpenShift using Jetbrains IDEA 2023-09-28T00:00:00Z https://quarkus.io/blog/live-diff-and-update-using-idea/ - Prerequisites OpenShift CLI, oc: installation instructions Kubernetes by Red Hat, Kubernetes Plugin for JetBrains IDEA Marketplace Quarkus CLI, Quarkus: Installation Instructions Optional: Source code for this blog post: https://github.com/adietish/openshift-quickstart IntelliJ Kubernetes Plugin This shows you how the Kubernetes Plugin for Jetbrains IDEA is a great companion when deploying quarkus apps... + JBoss Blogs: Just: a tool to store and execute your project commands @@ -2225,31 +2232,31 @@ The post Last week, we published a video demonstrating the creation of a CRUD application using virtual threads in Quarkus. 
It’s as simple as adding the @RunOnVirtualThread annotation on your HTTP resource (or your controller class if you use the Spring compatibility layer). This companion post explains how it works behind the... - Quarkus: Observability in Quarkus 3 + Quarkus: Quarkus 3.4.1 released - Redis 7.2 and Flyway changes 2023-09-20T00:00:00Z - https://quarkus.io/blog/quarkus-observability-3-3/ - - Observability in Quarkus Observability on a software system can be described as the capability to allow a human to ask and answer questions. To enable developers and support engineers in understanding how their applications behave, Quarkus 3.3 includes many improvements to its main observability related extensions: quarkus-opentelemetry (tracing) quarkus-micrometer (metrics)... + https://quarkus.io/blog/quarkus-3-4-1-released/ + + It is our pleasure to announce the release of Quarkus 3.4.1. We skipped 3.4.0 as we needed a fix for CVE-2023-4853 in 3.4 too. Major changes are: Support for Redis 7.2 Adjustments on how to enable/activate Flyway This version also comes with bugfixes, performance improvements and documentation improvements. We currently... - JBoss Blogs: Observability in Quarkus 3 + Quarkus: Observability in Quarkus 3 2023-09-20T00:00:00Z https://quarkus.io/blog/quarkus-observability-3-3/ - + Observability in Quarkus Observability on a software system can be described as the capability to allow a human to ask and answer questions. To enable developers and support engineers in understanding how their applications behave, Quarkus 3.3 includes many improvements to its main observability related extensions: quarkus-opentelemetry (tracing) quarkus-micrometer (metrics)... - Quarkus: Quarkus 3.4.1 released - Redis 7.2 and Flyway changes + JBoss Blogs: Quarkus 3.4.1 released - Redis 7.2 and Flyway changes 2023-09-20T00:00:00Z https://quarkus.io/blog/quarkus-3-4-1-released/ - It is our pleasure to announce the release of Quarkus 3.4.1. 
We skipped 3.4.0 as we needed a fix for CVE-2023-4853 in 3.4 too. Major changes are: Support for Redis 7.2 Adjustments on how to enable/activate Flyway This version also comes with bugfixes, performance improvements and documentation improvements. We currently... + - JBoss Blogs: Quarkus 3.4.1 released - Redis 7.2 and Flyway changes + JBoss Blogs: Observability in Quarkus 3 2023-09-20T00:00:00Z - https://quarkus.io/blog/quarkus-3-4-1-released/ - + https://quarkus.io/blog/quarkus-observability-3-3/ + @@ -2266,13 +2273,6 @@ The post - - JBoss Blogs: How to upgrade to Quarkus 3 - 2023-09-18T16:34:53Z - https://www.mastertheboss.com/soa-cloud/quarkus/how-to-upgrade-to-quarkus-3/ - - - JBoss Blogs: This Week in JBoss - July 03, 2023 2023-07-03T00:00:00Z