Migration from previous Nexeed IAS versions

The migration from an older Helm-based version to a new Helm-based one is generally non-disruptive; however, some changes to the parametrization might be required (for example, if additional mandatory parameters are introduced). Migration is only supported between consecutive major releases.

Migration to version 2025.03

The Utility Toolkit introduces an enhanced security measure.

By default, the pod spec is configured to enable the following securityContext:

global:
  defaultPodSecurityContext:
    enabled: true
    fsGroup: 0
    runAsNonRoot: true

For Azure Istio-enabled environments, you should merge this snippet into your custom-values.yaml and set runAsNonRoot to false to avoid Istio-related issues.

The Istio sidecar on Azure Istio-enabled environments requires root access to perform iptables operations.

To override this securityContext for a specific module in custom-values.yaml:

module:
  deployments: # or statefulSets or jobs
    appname:
      securityContext:
        runAsNonRoot: false
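
To check which security context a pod is actually running with after the change, you can inspect the pod spec; the pod name and namespace below are placeholders:

kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.securityContext}'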

Migration to version 2024.02.01

Helm binary version 3.15 or higher is required.
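
You can verify the installed Helm client version with:

helm version --short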

The RabbitMQ Helm chart is no longer included in the ias chart. Instead, the nexeed-infra chart will provide this component.

The value override YAML file for the ias chart is called custom-values.yaml; the corresponding file for the nexeed-infra chart is referred to as custom-values-infra.yaml to avoid confusion. However, the value override file names for both charts can be chosen freely.

The Nexeed IAS Infrastructure Operational Manual (the manual for nexeed-infra) provides more information about operating the infrastructure components that are part of that dedicated Helm chart, such as RabbitMQ. You should deploy the nexeed-infra Helm chart before performing the migration.

Additionally, the global.nexeedRabbitMQHost key has been removed from the custom-values.yaml file and is no longer allowed. This should have no impact on the deployment process.

In this manual, we assume the RabbitMQ cluster is installed in the default namespace, which is named shared.

Same namespace deployment scenario

This scenario applies if the RabbitMQ cluster deployed by nexeed-infra remains in the same namespace as it had under the original ias chart.

Perform the following migration steps so that the existing RabbitMQ cluster can be taken over by the nexeed-infra Helm chart without downtime.

  1. Switch your kubeconfig (usually <home_dir>/.kube/config) to the correct Kubernetes context.

  2. Find the RabbitMQ StatefulSet name and its namespace with the following command: kubectl get sts -A -l app=rabbitmq

    • Fill in the placeholders in the script, commands, and YAML files mentioned below.

    • You can set or modify global.modules.rabbitmq.namespaceSuffix to change the default namespace.

  3. Change the Helm annotations on the RabbitMQ objects by executing the following bash script:

# Re-assign Helm ownership of the existing RabbitMQ resources to the nexeed-infra release.
# The default namespace assumed here is shared; replace the <...> placeholders accordingly.
for resource in $(kubectl get sts,ing,svc,secret,cm,roles,rolebindings,sa -n shared | awk '{print $1}' | grep -i rabbitmq)
do
  kubectl annotate --overwrite $resource -n <namespace> meta.helm.sh/release-name=<nexeed-infra-release-name>
  kubectl annotate --overwrite $resource -n <namespace> meta.helm.sh/release-namespace=<nexeed-infra-release-namespace>
done
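
To spot-check that the Helm ownership annotations were applied, you can inspect one of the resources, for example the StatefulSet (names are placeholders):

kubectl get sts <rabbitmq-statefulset-name> -n <namespace> -o jsonpath='{.metadata.annotations}'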

After executing the script, proceed as follows:

  1. Move the global.modules.rabbitmq section from custom-values.yaml to the custom-values-infra.yaml file. Add the remote-rabbitmq reference to your business modules in custom-values.yaml; refer to the Custom-values.yaml update section (RMQCustomValueRemote).

  2. Apply the new ias Helm chart and the nexeed-infra chart to your cluster (a hedged command sketch follows below). No RabbitMQ pods will be restarted as long as the image versions used are the same. The migration is then complete.
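
The apply step could look like the following sketch; release names, chart references, and namespaces are placeholders that depend on your deployment setup (see the Deployment with Helm chapters):

helm upgrade --install <nexeed-infra-release-name> <nexeed-infra-chart> -n <nexeed-infra-release-namespace> -f custom-values-infra.yaml
helm upgrade --install <ias-release-name> <ias-chart> -n <ias-release-namespace> -f custom-values.yaml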

Custom-values.yaml update

In the custom-values.yaml file, the RabbitMQ cluster installed by nexeed-infra must be mentioned in the global.serverInstances section:

# custom-values.yaml
global:
  serverInstances:
    remote-rabbitmq:
      host: rabbitmq-service.<namespace>.svc.cluster.local
      port: 5672
      adminPort: 15672
      tls: false
      default: true
      adminUser: admin
      adminPassword: <same_as_global.embeddedRabbitMQAdminPassword>
      type: RABBITMQ
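
To confirm the host value used above, you can list the RabbitMQ services in the target namespace (a simple check, assuming the default resource names):

kubectl get svc -n <namespace> | grep -i rabbitmq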

Your business modules must reference this remote-rabbitmq instance in their messaging section if global.serverInstances.remote-rabbitmq.default is not set to true:

# custom-values.yaml
global:
  modules:
    your-module1:
      messaging:
        <messaging_name>:
          serverInstance: remote-rabbitmq

Your custom-values-infra.yaml should look like this in the global.modules.rabbitmq section:

# custom-values-infra.yaml
global:
  modules:
    rabbitmq:
      enabled: true
      # more rabbitmq optional parameters

Please note that the global.embeddedRabbitMQAdminPassword value must be defined and kept identical across the two custom-values files.
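
Assuming the password is set as a plain value in both files, a quick way to confirm that it is identical is:

grep -n 'embeddedRabbitMQAdminPassword' custom-values.yaml custom-values-infra.yaml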

Migrate RabbitMQ to a different namespace

To migrate the existing RabbitMQ cluster to a new namespace, the overall procedure is:

  1. Deploy the new RabbitMQ cluster with the same major.minor version (for example, 3.13.2 and 3.13.6) in the new namespace.

  2. Join the new pods to the old cluster one by one.

  3. Reference the new cluster in the custom-values.yaml of your applications and apply it during the upgrade maintenance window.

    • This also removes the old RabbitMQ pods and most of their related Kubernetes resources.

  4. Remove the old cluster information from the new RabbitMQ cluster and point the old RabbitMQ management ingress to the new cluster.

To deploy a new RabbitMQ cluster via the nexeed-infra Helm chart, you need to temporarily change global.modules.rabbitmq.contextPath to a value different from the one used by the old ias chart. For example:

# custom-values-infra.yaml
global:
  modules:
    rabbitmq:
      contextPath: rabbitmq_new
      namespaceSuffix: rmq2

Then you can deploy nexeed-infra with this value override.
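
For example (the release name, chart reference, and namespace are placeholders):

helm upgrade --install <nexeed-infra-release-name> <nexeed-infra-chart> -n <nexeed-infra-release-namespace> -f custom-values-infra.yaml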

The following steps assume that the new RabbitMQ cluster is already deployed in the rmq2 namespace with 3 replicas, all pods are up and running, and no clients are connected to it.
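
You can verify these preconditions first, for example (the pod name and label follow the defaults used elsewhere in this chapter):

kubectl get pods -n rmq2 -l app=rabbitmq
kubectl exec rabbitmq-statefulset-0 -n rmq2 -- rabbitmqctl list_connections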

  1. Open a shell on the new rabbitmq-statefulset-0 pod in the rmq2 namespace (a condensed command sketch follows after this list).

  2. Join the old cluster with rabbitmqctl join_cluster rabbit@rabbitmq-statefulset-0.rabbitmq-service-headless.<old_namespace>.svc.cluster.local

  3. Perform the same commands on the other 2 new pods.

  4. Wait until everything is synchronized, then confirm the state via the management portal and the pod logs:

    • In the pod log, you should see the string Peer discovery: all known cluster nodes are up.

    • In the management portal, you should see all old and new nodes in the Nodes section

    • Check any large queues; all nodes should be synchronized.

  5. Scale the old RabbitMQ StatefulSet down to 1 replica.

  6. In a shell on one of the new RabbitMQ pods, run the command to forget old pods 1 and 2: rabbitmqctl forget_cluster_node rabbit@rabbitmq-statefulset-<id>.rabbitmq-service-headless.<old_namespace>.svc.cluster.local

  7. Add the remote-rabbitmq information with the new namespace to the custom-values.yaml of your new ias Helm chart; refer to the Custom-values.yaml update section (RMQCustomValueRemote).

  8. Remove the old RabbitMQ cluster by upgrading to 2024.02.01 ias Helm chart

    • This should remove the old RabbitMQ cluster and most of its related Kubernetes resources

    • The maintenance window starts and the new module containers will point to the new RMQ cluster (possible downtime; services should be back as soon as the new pods are up and running)

    • Check with: kubectl get pods -n <old_namespace> -l app=rabbitmq

  9. From a shell on any of the new cluster pods, forget the last old node: rabbitmqctl forget_cluster_node rabbit@rabbitmq-statefulset-0.rabbitmq-service-headless.<old_namespace>.svc.cluster.local

  10. Update global.modules.rabbitmq.contextPath in custom-values-infra.yaml back to the original value used in custom-values.yaml (or remove the override).

  11. Re-deploy the nexeed-infra Helm chart with the updated custom-values-infra.yaml

  12. Check the RabbitMQ cluster status in a pod shell to make sure that only the new pods form the cluster: rabbitmqctl cluster_status

  13. After confirming the content, you may delete the old PersistentVolumes and PersistentVolumeClaims of the old RabbitMQ cluster in the old namespace.
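
The following is a condensed, hedged command sketch of the steps above; pod, service, and namespace names follow the defaults used in this chapter, and the StatefulSet name of the old cluster is a placeholder:

# Steps 1-3: join each new node to the old cluster (repeat for every new pod)
kubectl exec -it rabbitmq-statefulset-0 -n rmq2 -- bash
# inside the pod shell; join_cluster requires the RabbitMQ application on this node to be stopped first
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@rabbitmq-statefulset-0.rabbitmq-service-headless.<old_namespace>.svc.cluster.local
rabbitmqctl start_app

# Step 5: scale the old StatefulSet down to a single replica
kubectl scale sts <old-rabbitmq-statefulset> -n <old_namespace> --replicas=1

# Steps 6 and 9: forget the old nodes, one by one, from a shell on any new pod
rabbitmqctl forget_cluster_node rabbit@rabbitmq-statefulset-<id>.rabbitmq-service-headless.<old_namespace>.svc.cluster.local

# Step 12: confirm that only the new nodes remain in the cluster
rabbitmqctl cluster_status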

Migration to version 2024.01.02

The default tenant ID of the root tenant was removed from the deployment. If you have not already set it in your deployment, add it to your configuration. The previous default tenant ID was '7311ea8c-5d48-43fe-acf9-980eedf24b6c'.

global:
  nexeedMacmaTenant0Id: <your tenant id>
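
If you are unsure whether the value is already set, a quick check in your value override file is:

grep -n 'nexeedMacmaTenant0Id' custom-values.yaml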

Migration from non-Helm releases

Migration from the latest Nexeed IAS non-Helm version (2022.02.02) is supported.

If you run an older version, upgrade to 2022.02.02 first.

The process begins by stopping the previous infrastructure, which is crucial to avoid any conflicts between the old and new versions of the application. Once the previous infrastructure is shut down, any necessary data is migrated to the new version. This step ensures that all critical information and settings are preserved and can be accessed by the new version of the application.

The old Ansible inventory files must first be converted to Helm value overrides. Since this requires good knowledge of the module configuration, it is recommended to ask support to validate this step.

After the data migration, the installation script for the new version of the Nexeed IAS system is run. This script is designed to handle the installation and configuration of the new software, making the process as straightforward as possible. This step is crucial as it guarantees that all the necessary components for the new version of the application are correctly installed and configured.
