Migration from previous Nexeed IAS versions
The migration from an older Helm-based version to a new Helm-based one is generally non-disruptive; however, some changes in parametrization might be required (for example, if additional mandatory parameters are introduced). Migration is only supported between consecutive major releases.
Migration to version 2025.03
A security enhancement has been introduced by the Utility Toolkit. By default, the pod spec now enables a securityContext:
global:
  defaultPodSecurityContext:
    enabled: true
    fsGroup: 0
    runAsNonRoot: true
For Azure istio-enabled environments, you should merge the snippet into your custom-values.yaml and set runAsNonRoot to false to avoid Istio-related issues: the Istio sidecar on Azure istio-enabled environments requires root access to perform iptables operations.
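For example, the merged snippet in custom-values.yaml could look like the following (a minimal sketch; all values other than runAsNonRoot: false are the defaults shown above):

# custom-values.yaml (Azure istio-enabled environments)
global:
  defaultPodSecurityContext:
    enabled: true
    fsGroup: 0
    runAsNonRoot: false # the Istio sidecar needs root access for its iptables setup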
To override this securityContext for a specific module in custom-values.yaml:
module:
  deployments: # or statefulSets or jobs
    appname:
      securityContext:
        runAsNonRoot: false
Migration to version 2024.02.01
A Helm binary of version 3.15 or higher is required.
The RabbitMQ Helm chart is no longer included in the ias chart. Instead, the nexeed-infra chart will provide this component.
The value override YAML file for the ias chart is called custom-values.yaml; to avoid confusion, we refer to the one for the nexeed-infra chart as custom-values-infra.yaml. However, the value override file names for both charts can be chosen arbitrarily.
The Nexeed IAS Infrastructure Operational Manual (the manual for nexeed-infra) will provide more information regarding the operation of infrastructure components that are part of that dedicated Helm chart, such as RabbitMQ. You should deploy the nexeed-infra Helm chart before performing the migration.
Additionally, the global.nexeedRabbitMQHost key in the custom-values.yaml file is deprecated, has been removed, and is no longer allowed. Removing it should have no impact on the deployment process.
In this manual, we assume the default namespace for the RabbitMQ cluster installation is shared.
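For example, deploying the nexeed-infra chart before the migration could look like this (a minimal sketch; the release name nexeed-infra and the chart reference are assumptions, use the values from your environment):

# release name and chart reference are placeholders; adjust to your environment
helm upgrade --install nexeed-infra <chart_repo>/nexeed-infra \
  --namespace shared \
  -f custom-values-infra.yaml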
Same namespace deployment scenario
This scenario applies if the RabbitMQ cluster deployed by nexeed-infra stays in the same namespace as the original ias chart. Perform the following migration steps to hand over the existing RabbitMQ cluster to the nexeed-infra Helm chart without downtime.
- Switch your kubeconfig (usually <home_dir>/.kube/config) to the correct Kubernetes context.
- Find the RabbitMQ statefulset name and its namespace via the command:
  kubectl get sts -A -l app=rabbitmq
- Fill in the placeholders in the script, the commands, and the YAML files mentioned below.
  - You can add or modify global.modules.rabbitmq.namespaceSuffix to change the value for the default namespace.
- Execute the RabbitMQ objects' Helm annotation change commands via the bash script:
for resource in $(kubectl get sts,ing,svc,secret,cm,roles,rolebindings,sa -n shared | awk '{print $1}' | grep -i rabbitmq)
do
  kubectl annotate --overwrite $resource -n <namespace> meta.helm.sh/release-name=<nexeed-infra-release-name>
  kubectl annotate --overwrite $resource -n <namespace> meta.helm.sh/release-namespace=<nexeed-infra-release-namespace>
done
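To verify that the annotations were applied, you can inspect one of the resources, for example (the statefulset name is a placeholder; use the one found earlier):

# print the Helm release annotations of the RabbitMQ statefulset
kubectl get sts <rabbitmq-statefulset-name> -n <namespace> \
  -o jsonpath='{.metadata.annotations.meta\.helm\.sh/release-name}{"\n"}{.metadata.annotations.meta\.helm\.sh/release-namespace}{"\n"}'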
After executing the script:

- Move the global.modules.rabbitmq section from custom-values.yaml to the custom-values-infra.yaml file. Add the remote-rabbitmq reference in your business modules in custom-values.yaml; refer to RMQCustomValueRemote.
- Apply the new ias Helm chart and the nexeed-infra chart to your cluster, as shown below. No RabbitMQ pods will be restarted as long as the image version used is the same. The migration is then complete.
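Applying both charts could then look like this (a minimal sketch; release names, chart references, and namespaces are assumptions):

# apply the infrastructure chart first, then the ias chart
helm upgrade --install nexeed-infra <chart_repo>/nexeed-infra -n shared -f custom-values-infra.yaml
helm upgrade --install ias <chart_repo>/ias -n <ias_namespace> -f custom-values.yaml

# the RabbitMQ pods should show no new restarts
kubectl get pods -n shared -l app=rabbitmq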
Custom-values.yaml update
In the custom-values.yaml file, the RabbitMQ cluster installed by nexeed-infra must be mentioned in the global.serverInstances section:
# custom-values.yaml
global:
  serverInstances:
    remote-rabbitmq:
      host: rabbitmq-service.<namespace>.svc.cluster.local
      port: 5672
      adminPort: 15672
      tls: false
      default: true
      adminUser: admin
      adminPassword: <same_as_global.embeddedRabbitMQAdminPassword>
      type: RABBITMQ
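You can check that the host value points to an existing service, for example:

# the service name rabbitmq-service is taken from the host value above
kubectl get svc rabbitmq-service -n <namespace>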
If global.serverInstances.remote-rabbitmq.default is not set to true, your business modules must reference this remote-rabbitmq in their messaging section:
# custom-values.yaml
global:
  modules:
    your-module1:
      messaging:
        <messaging_name>:
          serverInstance: remote-rabbitmq
Your custom-values-infra.yaml should look like this in the global.modules.rabbitmq section:
# custom-values-infra.yaml
global:
  modules:
    rabbitmq:
      enabled: true
      # more rabbitmq optional parameters
Please note that the global.embeddedRabbitMQAdminPassword value should be defined and kept identical across the two custom-values files.
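For example, with an illustrative placeholder, both files would carry the identical line:

# identical in custom-values.yaml and custom-values-infra.yaml
global:
  embeddedRabbitMQAdminPassword: <same_password_in_both_files>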
Migrate RabbitMQ to a different namespace
To migrate the existing RabbitMQ cluster to a new namespace, the overall logic is:
- Deploy the new RabbitMQ cluster with the same major.minor version (e.g. 3.13.2 vs 3.13.6) in the new namespace.
- Join the new pods to the old cluster one by one.
- Reference the new cluster in your custom-values.yaml for your applications, and apply it within the upgrade maintenance window.
  - This will also remove the old RabbitMQ pods and most of their related Kubernetes resources.
- Remove the old cluster information from the new RabbitMQ cluster, and point the old RabbitMQ management ingress controller to the new cluster.
To deploy a new RabbitMQ cluster via the nexeed-infra Helm chart, you need to temporarily alter global.modules.rabbitmq.contextPath to a value different from the one used by the old ias chart. For example:
# custom-values-infra.yaml
global:
  modules:
    rabbitmq:
      contextPath: rabbitmq_new
      namespaceSuffix: rmq2
Then you can deploy nexeed-infra with this value override.
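For example (a minimal sketch; release name and chart reference are assumptions):

# deploy the new, temporarily renamed RabbitMQ cluster
helm upgrade --install nexeed-infra <chart_repo>/nexeed-infra -n shared -f custom-values-infra.yaml

# verify the new pods in the new namespace (derived from namespaceSuffix: rmq2)
kubectl get pods -n <new_namespace> -l app=rabbitmq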
Assume the new RabbitMQ cluster is already deployed in the rmq2 namespace with 3 replicas, all up and running, with no clients connected to it.
- Open a shell on the new rabbitmq-statefulset-0 pod in the rmq2 namespace.
- Join the old cluster with (see the consolidated sketch after this list):
  rabbitmqctl join_cluster rabbit@rabbitmq-statefulset-0.rabbitmq-service-headless.<old_namespace>.svc.cluster.local
- Perform the same commands on the other 2 new pods.
- Wait until everything syncs; confirm the content via the management portal and the pod logs.
  - In the pod log, you should see the string "Peer discovery: all known cluster nodes are up".
  - In the management portal, you should see all old and new nodes in the Nodes section.
  - Check any large queues; all nodes should be synchronized.
- Scale down the old RabbitMQ statefulset to 1 replica.
- In a shell on one of the new RabbitMQ pods, forget the old pods 1 and 2:
  rabbitmqctl forget_cluster_node rabbit@rabbitmq-statefulset-<id>.rabbitmq-service-headless.<old_namespace>.svc.cluster.local
- Add the remote-rabbitmq information with the new namespace in your new ias Helm chart custom-values.yaml, see RMQCustomValueRemote.
- Remove the old RabbitMQ cluster by upgrading to the 2024.02.01 ias Helm chart.
  - This should remove the old RabbitMQ cluster and most of its related Kubernetes resources.
  - The maintenance window starts here, and the new module containers will point to the new RMQ cluster (possible downtime; service should be back as soon as the new pods are up and running).
  - Check with:
    kubectl get pods -n <old_namespace> -l app=rabbitmq
- Remove the last old pod from any of the new cluster pods:
  rabbitmqctl forget_cluster_node rabbit@rabbitmq-statefulset-0.rabbitmq-service-headless.<old_namespace>.svc.cluster.local
- Update global.modules.rabbitmq.contextPath in custom-values-infra.yaml to the original value in custom-values.yaml (or remove the override).
- Re-deploy the nexeed-infra Helm chart with the updated custom-values-infra.yaml.
- Check the RabbitMQ cluster status in a shell to make sure only the new pods form this cluster:
  rabbitmqctl cluster_status
- After confirming the content, you may delete the old PersistentVolumes and PersistentVolumeClaims of the old RabbitMQ cluster in the old namespace.
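As a consolidated sketch of the rabbitmqctl steps above, the commands can also be run from outside the pods via kubectl exec (pod, service, and namespace names are the placeholders used throughout this section; note that rabbitmqctl typically requires stop_app before and start_app after join_cluster):

# join the old cluster from a new pod (repeat for each of the 3 new pods)
kubectl exec -n <new_namespace> rabbitmq-statefulset-0 -- rabbitmqctl stop_app
kubectl exec -n <new_namespace> rabbitmq-statefulset-0 -- \
  rabbitmqctl join_cluster rabbit@rabbitmq-statefulset-0.rabbitmq-service-headless.<old_namespace>.svc.cluster.local
kubectl exec -n <new_namespace> rabbitmq-statefulset-0 -- rabbitmqctl start_app

# later, remove the old nodes from the cluster membership
kubectl exec -n <new_namespace> rabbitmq-statefulset-0 -- \
  rabbitmqctl forget_cluster_node rabbit@rabbitmq-statefulset-<id>.rabbitmq-service-headless.<old_namespace>.svc.cluster.local

# finally, confirm that only the new nodes form the cluster
kubectl exec -n <new_namespace> rabbitmq-statefulset-0 -- rabbitmqctl cluster_status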
Migration to version 2024.01.02
The default tenant ID of the root tenant was removed from the deployment. In case you did not set it in your deployment already, please add it to your configuration. The previous default tenant ID was '7311ea8c-5d48-43fe-acf9-980eedf24b6c'.
global:
  nexeedMacmaTenant0Id: <your tenant id>
Migration from non-Helm releases
Migration from the latest Nexeed IAS non-Helm version (2022.02.02) is supported.
Note: If you run an older version, please upgrade to 2022.02.02 first.
The process begins by stopping the previous infrastructure, which is crucial to avoid any conflicts between the old and new versions of the application. Once the previous infrastructure is shut down, any necessary data is migrated to the new version. This step ensures that all critical information and settings are preserved and can be accessed by the new version of the application.
The old Ansible inventory files must first be converted to Helm value overrides. Since this requires good knowledge of the module configuration, it is best to ask support to validate the result of this task.
After the data migration, the installation script for the new version of the Nexeed IAS system is run. This script is designed to handle the installation and configuration of the new software, making the process as straightforward as possible. This step is crucial as it guarantees that all the necessary components for the new version of the application are correctly installed and configured.