diff --git a/rss.xml b/rss.xml index 6c3f3aff..b758a960 100644 --- a/rss.xml +++ b/rss.xml @@ -1,7 +1,7 @@ JBoss Tools Aggregated Feed https://tools.jboss.org - 2023-10-13T07:52:25Z + 2023-10-13T09:15:08Z JBoss Blogs: How to solve CVE-2023-44487 @@ -24,6 +24,227 @@ + + Red Hat Developer: How to validate GitOps manifests + 2023-10-10T07:00:00Z + 95d3a2f7-477c-4c56-8894-124fa034c2c2 + +

One of the major challenges of managing a cluster and application resources with GitOps is validating that changes to the GitOps manifests are correct. When making changes to objects directly on the cluster, the user is immediately presented with feedback when issues exist. The user is able to troubleshoot and resolve those issues with the knowledge of the context of the changes they just made. When working with GitOps, that feedback cycle is often delayed and users don't receive feedback on changes until they are applied to the cluster, which could be hours or even weeks depending on the approval lifecycle of a change.

+ +

To reduce the number of errors in changes to a GitOps manifest and eliminate the dreaded Unknown state in an ArgoCD application, this article discusses tools and best practices. We will demonstrate automating these validations with GitHub Actions, but all of them can be configured with another CI tool of your choice.

+ +

Using YAML linters

+ +

YAML is the basis of nearly all GitOps repos. As you would expect, YAML has specific syntax standards for validity. Additionally, there are many recommended best practices that may not be required but can help improve the consistency and readability of the YAML.

+ +

YAML linters are a great tool to help validate requirements on a repo and enforce a consistent style for the optional configurations. Many different YAML linter tools exist, but the one that I recommend is yamllint. yamllint is built with Python, making it easy to set up on most Linux and macOS environments, where Python is installed by default, and it is easily installed on Windows as well.

+ +

To install yamllint, run the following command with pip, the Python package management tool:

+ +
+pip install --user yamllint
+
+ +

Once installed, you can use the yamllint CLI to manually validate individual files:

+ +
+yamllint my-file.yaml
+
+ +

Or an entire directory structure:

+ +
+yamllint .
+
+ +

yamllint provides a default configuration that may emit warnings for some style standards you may not wish to enforce. The defaults can easily be adjusted by creating a file called .yamllint in the root of the project. The following is a common configuration used in many GitOps repos:

+ +
+extends: default
+
+rules:
+  document-start: disable
+  indentation:
+    indent-sequences: whatever
+  line-length: disable
+  truthy:
+    ignore: .github/workflows/
+
+ +
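The truthy exception above matters because GitHub workflow files use the key on:, which yamllint's truthy rule would otherwise flag as an ambiguous boolean-like value. A minimal, hypothetical workflow fragment showing the key in question:

+# .github/workflows/example.yaml (hypothetical fragment)
+# "on" is a valid workflow key, but YAML parsers can read it as a boolean,
+# so the truthy rule warns unless the workflows folder is ignored.
+on:
+  push:
+    branches:
+      - main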

Automating with GitHub Actions

+ +

Running yamllint locally is a great option for developers to get feedback while making changes to a repo. However, running yamllint directly in a CI tool such as GitHub Actions can help enforce standards and prevent improperly formatted YAML from ever making it into the main branch.

+ +

To add a yamllint GitHub Action, we can utilize a pre-built action and create a file called .github/workflows/validate-manifests.yaml in your project containing the following:

+ +
+name: Validate Manifests
+on:
+  push:
+    branches:
+      - "*"
+  pull_request:
+    branches:
+      - "*"
+
+jobs:
+  lint-yaml:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Code Checkout
+        uses: actions/checkout@v3
+      - name: Validate YAML
+        uses: ibiqlik/action-yamllint@v3
+        with:
+          format: github
+
+ +

One great feature of yamllint is its native integration with GitHub: it can annotate the offending lines of code directly in a pull request, making it easier for developers to identify and resolve problems (Figure 1).

+ + +
+
+ A screenshot of a yamllint error GitHub annotation. +
+
Figure 1: This illustrates a yamllint error GitHub annotation.
+
+
+

The benefits of YAML linters

+ +

YAML linters are designed to enforce generic YAML standards and make sure that objects are properly structured according to those standards. They are great for identifying misconfigurations in YAML, such as stray lines in files or incorrect indentation of objects, and for catching problems such as objects incorrectly copied and pasted into a repo or a field accidentally duplicated in the same object.

+ +
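As an illustration, here is a hypothetical ConfigMap with a label block pasted in twice; yamllint's default key-duplicates rule reports the repeated key, which YAML parsers would otherwise resolve silently by keeping only one of the values:

+# configmap.yaml (hypothetical example)
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: app-config
+  labels:
+    app: my-app
+  labels:        # duplicate key, typically the result of a bad copy/paste
+    team: platform
+data:
+  LOG_LEVEL: info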

YAML linters can also keep GitOps repos more consistent and enforce some chosen standards for all contributors to the repo, making the repo more readable and maintainable. However, YAML linters are generally not able to do any sort of deeper inspection of the objects, and they do not validate the object against the expected schema for that object type.

+ +

Kustomize validation

+ +

Kustomize is one of the most common tools found in a GitOps repo for helping to organize and deploy YAML objects. Repos commonly contain dozens, if not hundreds, of kustomization.yaml files, which can be incorrectly configured and cause errors at deployment time if they are not validated beforehand.

+ +

A simple validation can be performed using the kustomize CLI tool:

+ +
+kustomize build path/to/my/folder
+
+ +

This command will attempt to render the folder using kustomize and display the final YAML objects. If it successfully renders, the kustomization.yaml file is valid. If it does not, kustomize will display an error to troubleshoot the issue.

+ +
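For context, a minimal, hypothetical kustomization.yaml that the command above would render might look like the following; kustomize build fails immediately if a referenced file is missing or a patch target does not resolve:

+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
+
+resources:
+  - deployment.yaml
+  - service.yaml
+
+patches:
+  - path: increase-replicas.yaml
+    target:
+      kind: Deployment
+      name: my-app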

When making changes in kustomize, it is easy to cause unforeseen problems. Therefore, it is always recommended to validate all kustomize resources in a repo, even those that you have not directly changed. A script that looks for all kustomization.yaml files in the repo and attempts to run kustomize build against each folder can help validate that no unintentional errors have been introduced. Fortunately, the Red Hat CoP has already created a script to do exactly that; a minimal sketch of the same approach is shown after the command below. Copy validate_manifests.sh directly into a GitOps repo. I generally store it in a scripts folder, and you can then run it with the following:

+ +
+./scripts/validate_manifests.sh
+
+ +
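If you prefer not to copy the CoP script, the core idea is straightforward. The following is a minimal sketch (not the actual validate_manifests.sh) that builds every folder containing a kustomization.yaml and fails on the first error:

+#!/bin/sh
+
+# Build each folder that contains a kustomization.yaml; discard the rendered
+# output and stop at the first failure.
+for dir in $(find . -name "kustomization.yaml" -exec dirname {} \;);
+do
+    echo "Validating $dir"
+    if ! kustomize build "$dir" > /dev/null; then
+        echo "Error building $dir"
+        exit 1
+    fi
+done
+
+echo "Kustomize folders successfully validated!"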

Automating with GitHub Actions

+ +

Just like YAML linting, validating the kustomize configuration in a CI tool is an important step toward adding confidence to changes in a repo and ensuring that no errors are introduced into the main branch.

+ +

Conveniently, the GitHub Actions Ubuntu runner already has the kustomize tool built in, so we can run the previously mentioned script by adding a new job (placed under the existing jobs: key) to the same validation workflow we created before:

+ +
+jobs:
+  lint-kustomize:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Code Checkout
+        uses: actions/checkout@v3
+      - name: Verify Kustomize CLI Installation
+        run: |
+          which kustomize
+          kustomize version
+      - name: Validate Manifests
+        run: |
+          ./scripts/validate_manifests.sh
+
+ +

The benefits of Kustomize validation

+ +

Kustomize is a powerful tool, but one where human error can easily cause problems. The simple practice of validating every kustomization.yaml file in a repo can reduce the number of errors created by accidentally misspelling a filename or forgetting to update the kustomization.yaml file after renaming a resource. This kustomize check can also identify problems where objects are updated and patches that target those objects are no longer valid.

+ +

Additionally, this validation can help ensure that you don't accidentally break dependencies where another kustomization.yaml file inherits from a folder you did change. You can quickly catch problems before changes are merged into the main branch, such as when an object is removed from a base folder while still being referenced in an overlay.

+ +

Using the Helm tool

+ +

Helm is another popular tool that is utilized in GitOps repos for organizing complex applications. Helm is an extremely powerful tool but one that can be prone to errors due to its complex syntax and structure.

+ +

Fortunately, Helm provides a built-in tool to help validate charts within the CLI:

+ +
+helm lint path/to/my/chart
+
+ +
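If your chart's default values are intentionally incomplete, you can lint it with the same values file you deploy with and optionally treat warnings as failures; both --values and --strict are standard helm lint options:

+helm lint path/to/my/chart --values path/to/values.yaml --strict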

Helm's linting capabilities will check that the template code renders correctly, verify that all of the necessary values are present, and emit warnings for other recommended practices.

+ +

As with the kustomize script, we can automate validating all of the charts in the repo by searching for any Chart.yaml files. The following script can be created in a file called validate_charts.sh:

+ +
+#!/bin/sh
+
+# Default to the repository root when HELM_DIRS is not set (e.g., when run locally).
+HELM_DIRS="${HELM_DIRS:-.}"
+
+for i in $(find "${HELM_DIRS}" -name "Chart.yaml" -exec dirname {} \;);
+do
+
+    echo
+    echo "Validating $i"
+    echo
+
+    helm lint "$i"
+
+    build_response=$?
+
+    if [ $build_response -ne 0 ]; then
+        echo "Error linting $i"
+        exit 1
+    fi
+
+done
+
+echo
+echo "Charts successfully validated!"
+
+ +

You can easily validate all of the charts in a repository at once by running the following command:

+ +
+./scripts/validate_charts.sh
+
+ +

This new script can be triggered from a GitHub Action just like the previous kustomize check. However, in this case Helm is not assumed to be available on the runner, so it is installed explicitly (again, the helm-lint job belongs under the existing jobs: key):

+ +
+jobs:
+  helm-lint:
+    runs-on: ubuntu-latest
+    env:
+      HELM_VERSION: 3.12.3
+      HELM_DIRS: .
+    steps:
+      - name: Install Helm
+        run: |
+          curl -fsSL -o /tmp/helm.tar.gz https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz
+          tar -xzf /tmp/helm.tar.gz -C /tmp
+          sudo mv /tmp/linux-amd64/helm /usr/bin/helm
+      - name: Code Checkout
+        uses: actions/checkout@v3
+      - name: Validate Charts
+        run: |
+          ./scripts/validate_charts.sh
+
+ +

The benefits of Helm lint

+ +

Helm linting can help to catch many issues. Helm is notorious for its complexity and the challenges its templating language can introduce. The linter catches common mistakes, such as misspelling a value name or incorrectly scoping a variable, and it can also flag configuration issues in the chart itself, such as an invalid reference in the Chart.yaml file.

+ +

Helm linting does not do any validation on the YAML from the rendered charts. It only validates that the chart can be rendered. In some cases, it may be beneficial to apply additional validations on the rendered charts themselves.

+ +
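One lightweight way to do that, assuming the chart renders with its default values, is to pipe the rendered output back through the YAML linting set up earlier; yamllint reads from standard input when passed -:

+helm template path/to/my/chart | yamllint -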

Next steps and limitations

+ +

The validations discussed are a great first step toward improving confidence in changes to a GitOps repo before deployment. Running these validations can help you avoid common GitOps mistakes and catch and resolve problems before the changes are ever applied to the cluster.

+ +

One major limitation of these checks is the lack of schema validation of the objects being applied to a cluster. If a field only accepts a value of true or false, the validations discussed today will not be able to identify an invalid value in that field.

+ +

More specialized tools such as kubeval and kubeconform can help to validate standard Kubernetes objects, but out of the box they lack support for Red Hat OpenShift-specific objects or custom resources from Operators. Extracting schemas for those objects is possible, which can help extend validation beyond standard Kubernetes objects.

+ +
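For example, the rendered output of a kustomize folder can be piped into kubeconform, which reads from stdin when passed -; the exact flags may vary with your kubeconform version, but -ignore-missing-schemas keeps the check from failing on custom resource kinds it has no schema for:

+kustomize build path/to/my/folder | kubeconform -summary -ignore-missing-schemas -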

Additionally, you can perform validations directly against a target cluster using the --dry-run=server flag with oc apply. A server-side dry run validates the objects against the cluster's own API and provides an even greater degree of confidence that applying them will succeed.
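For instance, assuming you are logged in to the target cluster and the manifests live under a folder such as path/to/my/folder, a server-side dry run looks like this:

+oc apply -f path/to/my/folder --dry-run=server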

+The post How to validate GitOps manifests appeared first on Red Hat Developer. +

+
Red Hat Developer: How custom SELinux policies secure servers and containers 2023-10-10T07:00:00Z @@ -398,332 +619,40 @@ The post -

One of the major challenges of managing a cluster and application resources with GitOps is validating that changes to the GitOps manifests are correct. When making changes to objects directly on the cluster, the user is immediately presented with feedback when issues exist. The user is able to troubleshoot and resolve those issues with the knowledge of the context of the changes they just made. When working with GitOps, that feedback cycle is often delayed and users don't receive feedback on changes until they are applied to the cluster, which could be hours or even weeks depending on the approval lifecycle of a change.

- -

To reduce the number of errors in changes to a GitOps manifest and eliminate the dreaded Unknown state in an ArgoCD application, this article will discuss tools and best practices. We will discuss automating these validations with GitHub actions, but all of these validations can be configured with another CI tool of your choice.

- -

Using YAML linters

- -

YAML is the basis of nearly all GitOps repos. As you would expect, YAML has specific syntax standards for validity. Additionally, there are many recommended best practices that may not be required but can help improve the consistency and readability of the YAML.

- -

YAML linters are a great tool to help validate requirements on a repo and enforce consistent style for some of the optional configurations. Many different YAML linter tools exists, but one that I recommend is yamllint. The yamllint is built with Python, making it easy to set up on most Linux and MacOS environments since Python is installed by default and easily installed on any Windows environment.

- -

To install yamllint you can run the following command with pip, the Python package management tool:

- -
-pip install --user yamllint
-
- -

Once installed, users can use the yamllint cli tool to manually validate individual files:

- -
-yamllint my-file.yaml
-
- -

Or an entire directory structure:

- -
-yamllint .
-
- -

The yamllint provides a default configuration that may provide warnings for some style standards that you many not wish to enforce. The default options can easily be configured by creating a file called .yamllint in the root of the project. The following is a common configuration used in many GitOps repos:

- -
-extends: default
-
-rules:
-  document-start: disable
-  indentation:
-    indent-sequences: whatever
-  line-length: disable
-  truthy:
-    ignore: .github/workflows/
-
- -

Automating with GitHub actions

- -

Running yamllint locally is a great option for developers to get feedback while making changes to a repo, however running yamllint directly in a CI tool such as GitHub actions can help enforce standards and prevent improperly formatted YAML from ever making it into the main branch.

- -

To add a yamllint GitHub action, we can utilize a pre-built GitHub action and create a file called .github/workflows/validate-manifests.yaml in your project containing the following:

- -
-name: Validate Manifests
-on:
-  push:
-    branches:
-      - "*"
-  pull_request:
-    branches:
-      - "*"
-
-jobs:
-  lint-yaml:
-    runs-on: ubuntu-latest
-    steps:
-      - name: Code Checkout
-        uses: actions/checkout@v3
-      - name: Validate YAML
-        uses: ibiqlik/action-yamllint@v3
-        with:
-          format: github
-
- -

One great feature of yamllint is that it has native integration with GitHub and can do annotations directly on the lines of code with issues, making it easier for developers to identify problems and resolve them (Figure 1).

- - -
-
- A screenshot of a yamllint error Github annotation. -
-
Figure 1: This illustrates a yamllint error GitHub annotation.
-
-
-

The benefits of YAML linters

- -

YAML linters are designed to enforce generic YAML standards and make sure that objects are properly structured based on those generic standards. YAML linters are great for identifying issues with misconfigurations in YAML, such as extra lines in files or incorrect tabbing in objects. YAML linters can be great for catching problems such as objects incorrectly copied and pasted into a repo or a field accidentally duplicated in the same object.

- -

YAML linters can also keep GitOps repos more consistent and enforce some chosen standards for all contributors to the repo, making the repo more readable and maintainable. However, YAML linters are generally not able to do any sort of deeper inspection of the objects, and they do not validate the object against the expected schema for that object type.

- -

Kustomize validation

- -

Kustomize is one of the most common tools found in a GitOps repo for helping to organize and deploy YAML objects. Repos can commonly contain dozen, if not hundreds of kustomization.yaml files that can be incorrectly configured and cause errors when you reach the deployment step if not validated beforehand.

- -

A simple validation can be performed using the kustomize CLI tool:

- -
-kustomize build path/to/my/folder
-
- -

This command will attempt to render the folder using kustomize and display the final YAML objects. If it successfully renders, the kustomization.yaml file is valid. If it does not, kustomize will display an error to troubleshoot the issue.

- -

When making changes in kustomize, it can be easy to cause unforeseen problems. Therefore, it is always recommended to validate all kustomize resources in a repo, even those that you have not directly changed. A script that looks for all kustomization.yaml files in the repo, and attempts to run kustomize build for each folder can help to validate that no unintentional errors have been created. Fortunately, the Red Hat CoP has already created a script to do exactly that. Copy the validate_manifests.sh directly into a GitOps repo. Generally, I store it in a scripts folder, but you can run the script with the following:

- -
-./scripts/validate_manifests.sh
-
- -

Automating with GitHub Actions

- -

Just like the YAML lint, validating the kustomize in a CI tool is an important step to adding confidence to changes to a repo and ensuring that no errors are introduced into the main branch.

- -

Conveniently, GitHub Actions already has the kustomize tool built in so we can create a simple action to run the previously mentioned script by adding a new job to the same validation action we created before:

- -
-jobs:
-  lint-kustomize:
-    runs-on: ubuntu-latest
-    steps:
-      - name: Code Checkout
-        uses: actions/checkout@v3
-      - name: Verify Kustomize CLI Installation
-        run: |
-          which kustomize
-          kustomize version
-      - name: Validate Manifests
-        run: |
-          ./scripts/validate_manifests.sh
-
- -

The benefits of Kustomize validation

- -

Kustomize is a powerful tool, but one that is easy for human error to cause problems. This simple practice of validating every kustomization.yaml file in a repo can reduce the number of errors created by accidentally misspelling a filename or forgetting to update a filename in the kustomization.yaml file after renaming it. This kustomize check can also identify problems where objects are updated and any patches that impact those objects are no longer valid.

- -

Additional, this validation can help to ensure that you don't accidentally break dependencies where another kustomization.yaml file inherits from a folder you did change. You can quickly catch problems before changes are merged into the main branch, such as when an object is removed from a base folder and that same object is being referenced in the overlay.

- -

Using the Helm tool

- -

Helm is another popular tool that is utilized in GitOps repos for organizing complex applications. Helm is an extremely powerful tool but one that can be prone to errors due to its complex syntax and structure.

- -

Fortunately, Helm provides a built in tool to help validate charts within the CLI:

- -
-helm lint path/to/my/chart
-
- -

Helm's linting capabilities will help to validate the template code to ensure that it is valid, verify all of the necessary values are present, and emit warnings for other recommendations.

- -

As with the kustomize script, we can automate validating all of the charts in the repo by searching for any Chart.yaml files. The following script can be created in a file called validate_charts.sh:

- -
-#!/bin/sh
-
-for i in `find "${HELM_DIRS}" -name "Chart.yaml" -exec dirname {} \;`;
-do
-
-    echo
-    echo "Validating $i"
-    echo
-
-    helm lint $i
-
-    build_response=$?
-
-    if [ $build_response -ne 0 ]; then
-        echo "Error linting $i"
-        exit 1
-    fi
-
-done
-
-echo
-echo "Charts successfully validated!"
-
- -

You can easily validate all of the charts in a repository at once by running the following command:

- -
-./scripts/validate_charts.sh
-
- -

This new script can be triggered from a GitHub Action just like the previous kustomize check. However in this case, helm is not built into the base action, so it must be installed as follows:

- -
-jobs:
-  helm-lint:
-    runs-on: ubuntu-latest
-    env:
-      HELM_VERSION: 3.12.3
-      HELM_DIRS: .
-    steps:
-      - name: Install Helm
-        run: |
-          sudo curl -L -o /usr/bin/helm https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz
-          sudo chmod +x /usr/bin/helm
-      - name: Code Checkout
-        uses: actions/checkout@v3
-      - name: Validate Charts
-        run: |
-            ./scripts/validate_charts.sh
-
- -

The benefits of Helm lint

- -

Helm linting can help to catch many issues. Helm is notorious for its complexity and the challenges that the templating language can introduce. You can catch common issues, such as misspelling a value name or incorrectly scoping a variable, with the Helm linting tool. Additionally, a Helm lint can catch other configuration issues in a chart such as an invalid reference in the Chart.yaml file.

- -

Helm linting does not do any validation on the YAML from the rendered charts. It only validates that the chart can be rendered. In some cases, it may be beneficial to apply additional validations on the rendered charts themselves.

- -

Next steps and limitations

- -

The validations discussed are a great first step for improving the confidence of changes in a GitOps repo before deployment. Running these validations can help you avoid common mistakes in GitOps and allow you to catch and resolve problems before they are ever attempted to be validated against the cluster.

- -

One major limitation of these checks is the lack of validation of the objects being applied to a cluster. If a field only accepts a value of true or false the validations discussed today will not be able to identify an invalid configuration such as this.

- -

More specialized tools such as kubeval and kubeconform can help to validate standard Kubernetes objects, but they lack support for Red Hat OpenShift specific objects or CustomResources from Operators out of the box. Extracting schemas for those objects is possible, which can help to extend validation of objects even beyond standard k8s objects.

- -

Additionally, you can perform validations directly against a target cluster itself using the --dry-run=server flag with oc apply. Using the dry-run flag allows the objects to be validated against the cluster itself and provides an even greater degree of confidence that objects applied to the cluster will be successful.

-The post How to validate GitOps manifests appeared first on Red Hat Developer. -

-
- - Quarkus: A recap of Quarkus Tools for IntelliJ's latest improvements + JBoss Blogs: A recap of Quarkus Tools for IntelliJ's latest improvements 2023-10-10T00:00:00Z https://quarkus.io/blog/intellij-quarkus-recap/ - Quarkus Tools for IntelliJ is a free and open source extension, helping users develop Quarkus applications by providing content-assist, validation, run configurations and many other features right from their favourite IDE. This extension is based on the LSP4MP (i.e. MicroProfile) and its Quarkus add-on, and the Qute language server. These... + - Quarkus: Quarkus Newsletter #37 - October + JBoss Blogs: Quarkus Newsletter #37 - October 2023-10-10T00:00:00Z https://quarkus.io/blog/quarkus-newsletter-37/ - Read "Integrate your Quarkus application with GPT4All" by Alex Soto Bueno to explore how to integrate GPT4All into a Quarkus application so that you can query this service and return a response without any external resources. Go behind the scenes to learn how to create a CRUD application using virtual... + - JBoss Blogs: Meet Keycloak at KubeCon Chicago in Nov 2023 + Quarkus: Quarkus Newsletter #37 - October 2023-10-10T00:00:00Z - https://www.keycloak.org/2023/10/keycloak-kubeconf-chicago - - + https://quarkus.io/blog/quarkus-newsletter-37/ + + Read "Integrate your Quarkus application with GPT4All" by Alex Soto Bueno to explore how to integrate GPT4All into a Quarkus application so that you can query this service and return a response without any external resources. Go behind the scenes to learn how to create a CRUD application using virtual... - JBoss Blogs: A recap of Quarkus Tools for IntelliJ's latest improvements + Quarkus: A recap of Quarkus Tools for IntelliJ's latest improvements 2023-10-10T00:00:00Z https://quarkus.io/blog/intellij-quarkus-recap/ - + Quarkus Tools for IntelliJ is a free and open source extension, helping users develop Quarkus applications by providing content-assist, validation, run configurations and many other features right from their favourite IDE. This extension is based on the LSP4MP (i.e. MicroProfile) and its Quarkus add-on, and the Qute language server. These... - JBoss Blogs: Quarkus Newsletter #37 - October + JBoss Blogs: Meet Keycloak at KubeCon Chicago in Nov 2023 2023-10-10T00:00:00Z - https://quarkus.io/blog/quarkus-newsletter-37/ - + https://www.keycloak.org/2023/10/keycloak-kubeconf-chicago + - - Red Hat Developer: Load balancing, threading, and scaling in Node.js - 2023-10-09T07:00:00Z - 13a0b0fc-74fe-4f0a-9a6a-5c3d3a10edd0 - -

Many applications require more computation than can be handled by a single thread, CPU, process, or machine. This installment of the ongoing Node.js reference architecture series covers the team's experience on how to satisfy the need for larger computational resources in your Node.js application.

- -

Follow the series:

- -

But Node.js is single threaded?

- -

Node.js is said to be single-threaded. While not entirely true, it reflects that most work is done on a single thread running the event loop. The asynchronous nature of JavaScript means that Node.js can handle a larger number of concurrent requests on that single thread. If that is the case, then why are we even talking about threading?

- -

While by default a Node.js process operates in a single-threaded model, current versions of Node.js support worker threads that you can use to start additional threads of execution, each with their own event loop.

- -

In addition, Node.js applications are often made up of multiple different microservices and multiple copies of each microservice, allowing the overall solution to leverage many concurrent threads available in a single computer or across a group of computers.

- -

The reality is that applications based on Node.js can and do leverage multiple threads of execution over one or more computers. How to balance this work across threads, processes, and computers and scale it in times of increased demand is an important topic for most deployments.

- -

Keep it simple

- -

The team's experience is that, when possible, applications should be designed so that a request to a microservice running in a container will need no more than a single thread of execution to complete in a reasonable time. If that is not possible, then worker threads are the recommended approach versus multiple processes running in a single container as there will be lower complexity and less overhead communicating between multiple threads of execution.

- -

Worker threads are also likely appropriate for desktop-type applications where it is known that you cannot scale beyond the resources of a single machine, and it is preferable to have the application show up as a single process instead of many individual processes.

- -

Long-running requests

- -

The team had a very interesting discussion around longer-running requests. Sometimes, you need to do computation that will take a while to complete, and you cannot break up that work.

- -

The discussion centered around the following question: If we have a separate microservice that handles longer running requests and it's okay for all requests of that type to be handled sequentially, can we just run those on the main thread loop?

- -

Most often, the answer turns out to be no because even in that case, you typically have other APIs like health and readiness APIs that need to respond in a reasonable amount of time when the microservice is running. If you have any request that is going to take a substantial amount of time versus completing quickly or blocking asynchronously so other work can execute on the main thread, you will need to use worker threads.

- -

Load balancing and scaling

- -

For requests that are completed in a timely manner, you might still need more CPU cycles than a single thread can provide in order to keep up with a larger number of requests. When implementing API requests in Node.js, they are most often designed to have no internal state, and multiple copies can be executed simultaneously. Node.js has long supported running multiple processes to allow concurrent execution of the requests through the Cluster API.

- -

As you have likely read in other parts of the Node.js reference architecture, most modern applications run in containers, and often, those containers are managed through tools like Kubernetes. In this context, the team recommends delegating load balancing and scaling to the highest layer possible instead of using the Cluster API. For example, if you deploy the application to Kubernetes, use the load balancing and scaling built into Kubernetes. In our experience, this has been just as efficient or more efficient than trying to manage it at a lower level through tools like the Cluster API.

- -

Threads versus processes

- -

A common question is whether it is better to scale using threads or processes. Multiple threads within a single machine can typically be exploited within a single process or by starting multiple processes. Processes provide better isolation, but also lower opportunities to share resources and make communication between threads more costly. Using multiple threads within a process might be able to scale more efficiently within a single process, but it has the hard limit of only being able to scale to the resources provided by a single machine.

- -

As described in earlier sections, the team's experience is that using worker threads when needed but otherwise leaving load balancing and scaling to management layers outside of the application itself (for example, Kubernetes) results in the right balance between the use of threads and processes across the application.

- -

Learn more about Node.js reference architecture

- -

I hope that this quick overview of the load balancing, scaling and multithreading part of the Node.js reference architecture, along with the team discussions that led to that content, has been helpful, and that the information shared in the architecture helps you in your future implementations.

- -

We plan to cover new topics regularly for the Node.js reference architecture series. Until the next installment, we invite you to visit the Node.js reference architecture repository on GitHub, where you will see the work we have done and future topics.

- -

To learn more about what Red Hat is up to on the Node.js front, check out our Node.js page.

-The post Load balancing, threading, and scaling in Node.js appeared first on Red Hat Developer. -

-
Red Hat Developer: An MIR-based JIT prototype for Ruby 2023-10-09T07:00:00Z @@ -936,145 +865,97 @@ The post - - - - Quarkus: Processing Kafka records on virtual threads - 2023-10-09T00:00:00Z - https://quarkus.io/blog/virtual-threads-4/ - - In another blog post, we have seen how you can implement a CRUD application with Quarkus to utilize virtual threads. The virtual threads support in Quarkus is not limited to REST and HTTP. Many other parts support virtual threads, such as gRPC, scheduled tasks, and messaging. In this post, we... - - - JBoss Blogs: How to solve the error java.net.SocketException: Connection reset - 2023-10-06T11:40:04Z - https://www.mastertheboss.com/java/how-to-solve-the-error-java-net-socketexception-connection-reset/ - - - - - Red Hat Developer: Try Camel on Quarkus in the Developer Sandbox for Red Hat OpenShift - 2023-10-06T07:00:00Z - c7e19de1-859c-41da-bf71-75056f779eea - -

You can now try Apache Camel on Quarkus in the Developer Sandbox for Red Hat OpenShift, an OpenShift environment you can access for a no-cost, hands-on experience in building and deploying cloud-native applications quickly. This article will guide you to the Developer Sandbox and through a Camel Quarkus integration in a fully web-based experience—no local installs needed.

- -

Camel Quarkus is the latest Camel runtime generation. It allows you to run integration processes with super-low memory usage, fast startup, and outstanding performance. The article Boost Apache Camel performance on Quarkus introduces the topic well.

- -

To learn more about the various Camel runtimes available, read the 3-part series article Choose the best Camel for your integration ride.

- -

REST and SOAP with Camel on Quarkus

- -

One recent addition to Camel Quarkus is the ability to perform SOAP operations using Apache CXF, a well-known Java library historically used by Camel but only recently available for Quarkus. 

- -

The demo defines a front-facing OpenAPI service called simple, which integrates with a back-end SOAP service. The code also includes the stub to simulate the SOAP endpoint. The flow showcases an adaptation layer from REST to SOAP. It’s a relatively simple use case but very common in the enterprise. It defines a REST API and hides a legacy service behind the scenes (Figure 1).

- - -
-
- Diagram showing the Camel on Quarkus flow from client to SOAP service. -
-
-
Figure 1: The Camel on Quarkus flow.

Demo highlights

- -

The big win for you is the chance to play the demo in a free-to-access environment from your browser and view/edit the code, and build, deploy, and test the application. Regarding Camel on Quarkus, the example stands out on various fronts.

- -

First of all, both REST and SOAP interfaces are implemented following a contract-first approach. In essence, we rely on the OpenAPI (REST) and WSDL (SOAP) definitions to auto-generate the application’s input/output interfaces. Figure 2 shows how a contract-first OpenAPI helps the developer; this is also valid for SOAP.

- - -
-
- Figure 2: Contract-first development. -
-
-
Figure 2: Contract-first development.

Another notable feature in the demo is data transformation. JSON input must be mapped to outgoing SOAP during the request flow and perform the reverse operation during the response flow. It is all done in a single transformation stylesheet using XSLTs, as in Figure 3.

- - -
-
- Diagram of the JSON / SOAP data mapping flow. -
-
-
Figure 3: JSON / SOAP data mapping operations.

And finally, included in the code, you’ll find a test unit (JUnit) to validate the entire request/response flow (Figure 4). You will discover how to use Camel’s testing framework to dynamically spin up an actual SOAP back end to test against, trigger the processing flow, and tear it all down when done.

+ Red Hat Developer: Load balancing, threading, and scaling in Node.js + 2023-10-09T07:00:00Z + 13a0b0fc-74fe-4f0a-9a6a-5c3d3a10edd0 + +

Many applications require more computation than can be handled by a single thread, CPU, process, or machine. This installment of the ongoing Node.js reference architecture series covers the team's experience on how to satisfy the need for larger computational resources in your Node.js application.

+

Follow the series:

-
-
- The unit testing flow with Camel and JUnit. -
-
-
Figure 4: Unit testing with Camel.

What is unique about Camel running on Quarkus, but also true for all runtimes, is how little code is required and how elegantly it is laid out. This simplicity guarantees economical, long-term, and sustainable support for your landscape of implemented Camel services.

+

But Node.js is single threaded?

-

If you would like to see how all of this is done, jump straight into the Developer Sandbox to explore the code and execute it.

+

Node.js is said to be single-threaded. While not entirely true, it reflects that most work is done on a single thread running the event loop. The asynchronous nature of JavaScript means that Node.js can handle a larger number of concurrent requests on that single thread. If that is the case, then why are we even talking about threading?

-

Access the Developer Sandbox

+

While by default a Node.js process operates in a single-threaded model, current versions of Node.js support worker threads that you can use to start additional threads of execution, each with their own event loop.

-

Follow these instructions to get started in the Developer Sandbox: How to access the Developer Sandbox for Red Hat OpenShift

+

In addition, Node.js applications are often made up of multiple different microservices and multiple copies of each microservice, allowing the overall solution to leverage many concurrent threads available in a single computer or across a group of computers.

-

Once you have your browser connected to the Developer Sandbox console, you’ll be all set to start the article’s tutorial.

+

The reality is that applications based on Node.js can and do leverage multiple threads of execution over one or more computers. How to balance this work across threads, processes, and computers and scale it in times of increased demand is an important topic for most deployments.

-

Inside OpenShift Dev Spaces

+

Keep it simple

-

The Developer Sandbox ships with an entire web-based IDE called Red Hat OpenShift Dev Spaces (formerly Red Hat CodeReady Workspaces).

+

The team's experience is that, when possible, applications should be designed so that a request to a microservice running in a container will need no more than a single thread of execution to complete in a reasonable time. If that is not possible, then worker threads are the recommended approach versus multiple processes running in a single container as there will be lower complexity and less overhead communicating between multiple threads of execution.

-

Set up your dev environment with the Camel tutorials

+

Worker threads are also likely appropriate for desktop-type applications where it is known that you cannot scale beyond the resources of a single machine, and it is preferable to have the application show up as a single process instead of many individual processes.

-

The animated sequence in Figure 5 illustrates the actions to follow to open your development environment along with your tutorial instructions.

+

Long-running requests

+

The team had a very interesting discussion around longer-running requests. Sometimes, you need to do computation that will take a while to complete, and you cannot break up that work.

-
-
- The steps for setting up your development environment in the OpenShift Dev Spaces user interface. -
-
-
Figure 5: The OpenShift Dev Spaces UI.

Follow these steps:

+

The discussion centered around the following question: If we have a separate microservice that handles longer running requests and it's okay for all requests of that type to be handled sequentially, can we just run those on the main thread loop?

-
  1. From the web console, click the Applications icon as shown in Figure 5 (marked 1).
  2. -
  3. Select Red Hat OpenShift Dev Spaces (2). You will be prompted to log in and authorize access; select the Allow selected permissions option.
  4. -
  5. -

    When the Create Workspace dashboard in OpenShift Dev Spaces opens, copy the snippet below:

    +

    Most often, the answer turns out to be no because even in that case, you typically have other APIs like health and readiness APIs that need to respond in a reasonable amount of time when the microservice is running. If you have any request that is going to take a substantial amount of time versus completing quickly or blocking asynchronously so other work can execute on the main thread, you will need to use worker threads.

    -
    -https://github.com/RedHat-Middleware-Workshops/devsandbox-camel.git
    +

    Load balancing and scaling

    -

    Then, paste it into the Git Repo URL field (3).

    -
  6. -
  7. Click Create & Open (4).
  8. -
  9. When the workspace finishes provisioning and the IDE opens, click the deployable Endpoints accordion (5).
  10. -
  11. Then, click on the icon (6), which opens the tutorial in a new browser tab.
  12. -
  13. Choose the tutorial indicated in the next section.
  14. -

Start the Camel Quarkus tutorial

+

For requests that are completed in a timely manner, you might still need more CPU cycles than a single thread can provide in order to keep up with a larger number of requests. When implementing API requests in Node.js, they are most often designed to have no internal state, and multiple copies can be executed simultaneously. Node.js has long supported running multiple processes to allow concurrent execution of the requests through the Cluster API.

-

Select the Camel Quarkus - Rest/Soap Demo tile, highlighted in Figure 6.

+

As you have likely read in other parts of the Node.js reference architecture, most modern applications run in containers, and often, those containers are managed through tools like Kubernetes. In this context, the team recommends delegating load balancing and scaling to the highest layer possible instead of using the Cluster API. For example, if you deploy the application to Kubernetes, use the load balancing and scaling built into Kubernetes. In our experience, this has been just as efficient or more efficient than trying to manage it at a lower level through tools like the Cluster API.

+

Threads versus processes

-
-
- The Camel Quarkus - Rest/Soap Demo tile highlighted in the Solution Explorer. -
-
-
Figure 6: Locating the Camel Quarkus tutorial.

When you click on the tile, the Solution Explorer will show the lab introductions and the exercise chapters included, which you should be able to complete in around 15 minutes.

+

A common question is whether it is better to scale using threads or processes. Multiple threads within a single machine can typically be exploited within a single process or by starting multiple processes. Processes provide better isolation, but also lower opportunities to share resources and make communication between threads more costly. Using multiple threads within a process might be able to scale more efficiently within a single process, but it has the hard limit of only being able to scale to the resources provided by a single machine.

-

Enjoy the Camel ride!

+

As described in earlier sections, the team's experience is that using worker threads when needed but otherwise leaving load balancing and scaling to management layers outside of the application itself (for example, Kubernetes) results in the right balance between the use of threads and processes across the application.

-

More Apache Camel resources

+

Learn more about Node.js reference architecture

-

This article ends here, but this should only be the start of your journey with Apache Camel. The Developer Sandbox for Red Hat OpenShift gives you the opportunity to play on a Kubernetes-based application platform with an integrated developer IDE (OpenShift Dev Spaces).

+

I hope that this quick overview of the load balancing, scaling and multithreading part of the Node.js reference architecture, along with the team discussions that led to that content, has been helpful, and that the information shared in the architecture helps you in your future implementations.

-

We encourage you to check out the resources below to learn more about Camel and explore different ways to build applications with Apache Camel:

+

We plan to cover new topics regularly for the Node.js reference architecture series. Until the next installment, we invite you to visit the Node.js reference architecture repository on GitHub, where you will see the work we have done and future topics.

- -The post Try Camel on Quarkus in the Developer Sandbox for Red Hat OpenShift appeared first on Red Hat Developer. +

To learn more about what Red Hat is up to on the Node.js front, check out our Node.js page.

+The post Load balancing, threading, and scaling in Node.js appeared first on Red Hat Developer.

+ + Quarkus: Processing Kafka records on virtual threads + 2023-10-09T00:00:00Z + https://quarkus.io/blog/virtual-threads-4/ + + In another blog post, we have seen how you can implement a CRUD application with Quarkus to utilize virtual threads. The virtual threads support in Quarkus is not limited to REST and HTTP. Many other parts support virtual threads, such as gRPC, scheduled tasks, and messaging. In this post, we... + + + JBoss Blogs: Processing Kafka records on virtual threads + 2023-10-09T00:00:00Z + https://quarkus.io/blog/virtual-threads-4/ + + + + + JBoss Blogs: How to solve the error java.net.SocketException: Connection reset + 2023-10-06T11:40:04Z + https://www.mastertheboss.com/java/how-to-solve-the-error-java-net-socketexception-connection-reset/ + + + Red Hat Developer: Use fwupd to deploy Linux firmware updates and more 2023-10-06T07:00:00Z @@ -1241,6 +1122,125 @@ for attr in client.get_host_security_attrs(cancellable):

The fwupd project is a mature, safe, and reliable service that can easily be integrated into existing security endpoint solutions and deployment agents. Using three different methods, developers can enumerate devices, query security levels, and control the fwupd daemon to deploy firmware updates.

The post Use fwupd to deploy Linux firmware updates and more appeared first on Red Hat Developer. +

+ + + Red Hat Developer: Try Camel on Quarkus in the Developer Sandbox for Red Hat OpenShift + 2023-10-06T07:00:00Z + c7e19de1-859c-41da-bf71-75056f779eea + +

You can now try Apache Camel on Quarkus in the Developer Sandbox for Red Hat OpenShift, an OpenShift environment you can access for a no-cost, hands-on experience in building and deploying cloud-native applications quickly. This article will guide you to the Developer Sandbox and through a Camel Quarkus integration in a fully web-based experience—no local installs needed.

+ +

Camel Quarkus is the latest Camel runtime generation. It allows you to run integration processes with super-low memory usage, fast startup, and outstanding performance. The article Boost Apache Camel performance on Quarkus introduces the topic well.

+ +

To learn more about the various Camel runtimes available, read the 3-part series article Choose the best Camel for your integration ride.

+ +

REST and SOAP with Camel on Quarkus

+ +

One recent addition to Camel Quarkus is the ability to perform SOAP operations using Apache CXF, a well-known Java library historically used by Camel but only recently available for Quarkus. 

+ +

The demo defines a front-facing OpenAPI service called simple, which integrates with a back-end SOAP service. The code also includes the stub to simulate the SOAP endpoint. The flow showcases an adaptation layer from REST to SOAP. It’s a relatively simple use case but very common in the enterprise. It defines a REST API and hides a legacy service behind the scenes (Figure 1).

+ + +
+
+ Diagram showing the Camel on Quarkus flow from client to SOAP service. +
+
+
Figure 1: The Camel on Quarkus flow.

Demo highlights

+ +

The big win for you is the chance to play the demo in a free-to-access environment from your browser and view/edit the code, and build, deploy, and test the application. Regarding Camel on Quarkus, the example stands out on various fronts.

+ +

First of all, both REST and SOAP interfaces are implemented following a contract-first approach. In essence, we rely on the OpenAPI (REST) and WSDL (SOAP) definitions to auto-generate the application’s input/output interfaces. Figure 2 shows how a contract-first OpenAPI helps the developer; this is also valid for SOAP.

+ + +
+
+ Figure 2: Contract-first development. +
+
+
Figure 2: Contract-first development.

Another notable feature in the demo is data transformation. JSON input must be mapped to outgoing SOAP during the request flow and perform the reverse operation during the response flow. It is all done in a single transformation stylesheet using XSLTs, as in Figure 3.

+ + +
+
+ Diagram of the JSON / SOAP data mapping flow. +
+
+
Figure 3: JSON / SOAP data mapping operations.

And finally, included in the code, you’ll find a test unit (JUnit) to validate the entire request/response flow (Figure 4). You will discover how to use Camel’s testing framework to dynamically spin up an actual SOAP back end to test against, trigger the processing flow, and tear it all down when done.

+ + +
+
+ The unit testing flow with Camel and JUnit. +
+
+
Figure 4: Unit testing with Camel.

What is unique about Camel running on Quarkus, but also true for all runtimes, is how little code is required and how elegantly it is laid out. This simplicity guarantees economical, long-term, and sustainable support for your landscape of implemented Camel services.

+ +

If you would like to see how all of this is done, jump straight into the Developer Sandbox to explore the code and execute it.

+ +

Access the Developer Sandbox

+ +

Follow these instructions to get started in the Developer Sandbox: How to access the Developer Sandbox for Red Hat OpenShift

+ +

Once you have your browser connected to the Developer Sandbox console, you’ll be all set to start the article’s tutorial.

+ +

Inside OpenShift Dev Spaces

+ +

The Developer Sandbox ships with an entire web-based IDE called Red Hat OpenShift Dev Spaces (formerly Red Hat CodeReady Workspaces).

+ +

Set up your dev environment with the Camel tutorials

+ +

The animated sequence in Figure 5 illustrates the actions to follow to open your development environment along with your tutorial instructions.

+ + +
+
+ The steps for setting up your development environment in the OpenShift Dev Spaces user interface. +
+
+
Figure 5: The OpenShift Dev Spaces UI.

Follow these steps:

+ +
  1. From the web console, click the Applications icon as shown in Figure 5 (marked 1).
  2. +
  3. Select Red Hat OpenShift Dev Spaces (2). You will be prompted to log in and authorize access; select the Allow selected permissions option.
  4. +
  5. +

    When the Create Workspace dashboard in OpenShift Dev Spaces opens, copy the snippet below:

    + +
    +https://github.com/RedHat-Middleware-Workshops/devsandbox-camel.git
    + +

    Then, paste it into the Git Repo URL field (3).

    +
  6. +
  7. Click Create & Open (4).
  8. +
  9. When the workspace finishes provisioning and the IDE opens, click the deployable Endpoints accordion (5).
  10. +
  11. Then, click on the icon (6), which opens the tutorial in a new browser tab.
  12. +
  13. Choose the tutorial indicated in the next section.
  14. +

Start the Camel Quarkus tutorial

+ +

Select the Camel Quarkus - Rest/Soap Demo tile, highlighted in Figure 6.

+ + +
+
+ The Camel Quarkus - Rest/Soap Demo tile highlighted in the Solution Explorer. +
+
+
Figure 6: Locating the Camel Quarkus tutorial.

When you click on the tile, the Solution Explorer will show the lab introductions and the exercise chapters included, which you should be able to complete in around 15 minutes.

+ +

Enjoy the Camel ride!

+ +

More Apache Camel resources

+ +

This article ends here, but this should only be the start of your journey with Apache Camel. The Developer Sandbox for Red Hat OpenShift gives you the opportunity to play on a Kubernetes-based application platform with an integrated developer IDE (OpenShift Dev Spaces).

+ +

We encourage you to check out the resources below to learn more about Camel and explore different ways to build applications with Apache Camel:

+ + +The post Try Camel on Quarkus in the Developer Sandbox for Red Hat OpenShift appeared first on Red Hat Developer.

@@ -2218,25 +2218,18 @@ The post - Quarkus: Writing CRUD applications using virtual threads + JBoss Blogs: Writing CRUD applications using virtual threads 2023-09-25T00:00:00Z https://quarkus.io/blog/virtual-threads-2/ - Last week, we published a video demonstrating the creation of a CRUD application using virtual threads in Quarkus. It’s as simple as adding the @RunOnVirtualThread annotation on your HTTP resource (or your controller class if you use the Spring compatibility layer). This companion post explains how it works behind the... + - JBoss Blogs: Writing CRUD applications using virtual threads + Quarkus: Writing CRUD applications using virtual threads 2023-09-25T00:00:00Z https://quarkus.io/blog/virtual-threads-2/ - - - - JBoss Blogs: Quarkus 3.4.1 released - Redis 7.2 and Flyway changes - 2023-09-20T00:00:00Z - https://quarkus.io/blog/quarkus-3-4-1-released/ - - + Last week, we published a video demonstrating the creation of a CRUD application using virtual threads in Quarkus. It’s as simple as adding the @RunOnVirtualThread annotation on your HTTP resource (or your controller class if you use the Spring compatibility layer). This companion post explains how it works behind the... Quarkus: Quarkus 3.4.1 released - Redis 7.2 and Flyway changes @@ -2252,6 +2245,13 @@ The post Observability in Quarkus Observability on a software system can be described as the capability to allow a human to ask and answer questions. To enable developers and support engineers in understanding how their applications behave, Quarkus 3.3 includes many improvements to its main observability related extensions: quarkus-opentelemetry (tracing) quarkus-micrometer (metrics)... + + JBoss Blogs: Quarkus 3.4.1 released - Redis 7.2 and Flyway changes + 2023-09-20T00:00:00Z + https://quarkus.io/blog/quarkus-3-4-1-released/ + + + JBoss Blogs: Observability in Quarkus 3 2023-09-20T00:00:00Z @@ -2260,18 +2260,25 @@ The post - JBoss Blogs: When Quarkus meets Virtual Threads + Quarkus: When Quarkus meets Virtual Threads 2023-09-19T00:00:00Z https://quarkus.io/blog/virtual-thread-1/ - + Java 21 offers a new feature that will reshape the development of concurrent applications in Java. For over two years, the Quarkus team explored integrating this new feature to ease the development of distributed applications, including microservices and event-driven applications. This blog post is the first part of a series... - Quarkus: When Quarkus meets Virtual Threads + JBoss Blogs: When Quarkus meets Virtual Threads 2023-09-19T00:00:00Z https://quarkus.io/blog/virtual-thread-1/ - Java 21 offers a new feature that will reshape the development of concurrent applications in Java. For over two years, the Quarkus team explored integrating this new feature to ease the development of distributed applications, including microservices and event-driven applications. This blog post is the first part of a series... 
+ + + + JBoss Blogs: This Week in JBoss - July 03, 2023 + 2023-07-03T00:00:00Z + https://www.jboss.org/posts/weekly-2023-07-03.html + + JBoss Tools: JBoss Tools 4.28.0.Final for Eclipse 2023-03 @@ -2324,13 +2331,6 @@ The post - - JBoss Tools: JBoss Tools for Eclipse 2023-06M2 2023-06-05T00:00:00Z @@ -3140,6 +3140,13 @@ The post + + JBoss Blogs: This Week in JBoss - January 13th 2022 + 2022-01-13T00:00:00Z + https://www.jboss.org/posts/weekly-2022-01-13.html + + + JBoss Tools: JBoss Tools and Red Hat CodeReady Studio for Eclipse 2021-09 security fix release for Apache Log4j CVE-2021-45105 and CVE-2021-44832 2022-01-13T00:00:00Z @@ -3164,13 +3171,6 @@ The post - - JBoss Tools: JBoss Tools 4.21.2.AM1 for Eclipse 2021-09 2021-12-22T00:00:00Z