Deployment with Helm
Prerequisites
The following prerequisites must be in place:
- Kubernetes cluster installed and configured (k3s, kind, AKS, EKS, etc.)
- Possibility to hand over Kubernetes manifests or a Helm chart to the Kubernetes cluster admin for the creation of:
  - namespaces
  - CRDs
  - cluster roles (or the availability of a Kubernetes cluster admin for the Nexeed IAS application deployment)
- Access to the Kubernetes API with a non-admin user for the deployment of namespace-scoped objects
- Nginx Ingress Controller installed and configured with valid CA-signed SSL certificates
- Helm binary installed (version >= 3.15)
- Access to BCIDockerRegistry or a mirror of it, which provides the modules' images for deployment
- The Nexeed IAS umbrella chart artifact for the specific version (which can also be retrieved from BCIDockerRegistry)
Note: For the Nginx Ingress Controller you can install the Nginx ingress chart based on the documentation available at: https://kubernetes.github.io/ingress-nginx/deploy/
Nginx ingress controller configuration recommendations
The following configurations should be applied to the Nginx Ingress Controller to ensure proper functioning of the Nexeed IAS application:
```yaml
config:
  allow-snippet-annotations: "true"
  location-snippet: |
    proxy_pass_header "X-Accel-Buffering";
  proxy-read-timeout: "300"
  proxy-body-size: "100m"
```
Snippet annotations support is only needed if the Nexeed IAS deployment is exposed via a context path.
The given configuration example can be used together with the official Nginx Ingress Helm chart. Additional configurations might be applied, depending on the cloud or on-premises environment.
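For example, when using the official ingress-nginx Helm chart, the recommended settings are typically passed through the chart's `controller.config` value, which populates the controller's ConfigMap. The following is a sketch of such a values override (the file name `values-ingress.yaml` is arbitrary):

```yaml
# Sketch of an ingress-nginx values override carrying the recommended settings.
# controller.config maps directly to entries of the controller's ConfigMap.
controller:
  config:
    allow-snippet-annotations: "true"
    location-snippet: |
      proxy_pass_header "X-Accel-Buffering";
    proxy-read-timeout: "300"
    proxy-body-size: "100m"
```

It can then be applied with a command along the lines of `helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace -f values-ingress.yaml` (assuming the chart repository has been added under the alias `ingress-nginx`).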
Installed custom resource definitions
Besides the core Nexeed IAS application modules, a set of CRDs has to be installed on the cluster to prepare the Nexeed IAS application deployment. The following sections give an overview of the deployed CRDs and describe their use cases as part of the Nexeed IAS deployment orchestration.
| Name | Version | Kind |
|---|---|---|
| influxdbusers.nexeed.bosch.io | v1alpha1 | InfluxDBUser |
| macmaconfigurations.nexeed.bosch.io | v1alpha1 | MacmaConfiguration |
| mongodbusers.nexeed.bosch.io | v1alpha1 | MongoDBUser |
| mssqlaccounts.nexeed.bosch.io | v1alpha1 | MssqlAccount |
| oracleaccounts.nexeed.bosch.io | v1alpha1 | OracleAccount |
| postgresaccounts.nexeed.bosch.io | v1alpha1 | PostgresAccount |
| rabbitmqttusers.nexeed.bosch.io | v1alpha1 | RabbitMQTTUser |
| rabbitmqusers.nexeed.bosch.io | v1alpha1 | RabbitMQUser |
CRD: influxdbusers.nexeed.bosch.io
CRs of kind influxdbusers.nexeed.bosch.io specify the connection details (e.g., host, port) as well as the credentials to be used by the modules to connect to the Influx DB instances of the given deployment. Based on the given configurations, the configmaps and secrets, used by the modules that connect to Influx DB, are generated by the corresponding operator.
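For illustration, a CR of this kind might look as follows. This is a sketch only: the apiVersion and kind follow from the CRD table above, but every field under `spec` is an assumption made for the example; consult the CRD schema installed on the cluster for the actual fields.

```yaml
# Illustrative only: all spec fields below are assumptions, not the real schema.
apiVersion: nexeed.bosch.io/v1alpha1
kind: InfluxDBUser
metadata:
  name: example-module-influxdb       # hypothetical name
  namespace: example-module           # hypothetical module namespace
spec:
  host: influxdb.shared.svc.cluster.local   # assumed field: Influx DB host
  port: 8086                                # assumed field: Influx DB port
  credentialsSecretName: example-module-influxdb-credentials  # assumed field
```

Based on such a CR, the corresponding operator would generate the configmaps and secrets consumed by the module.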
CRD: macmaconfigurations.nexeed.bosch.io
CRs of kind macmaconfigurations.nexeed.bosch.io are deployed in all module namespaces and define the MACMA configuration required for a given module. Configurations defined in these CRs are applied to MACMA instance by the MACMA operator. Besides generic URL configurations, these CRs are responsible to define the accounts (admin, service) and corresponding roles for the specified modules.
CRD: mongodbusers.nexeed.bosch.io
CRs of kind mongodbusers.nexeed.bosch.io specify the connection details (e.g., host, port) as well as the credentials to be used by the modules to connect to the Mongo DB instances of the given deployment. Based on the given configurations, the configmaps and secrets, used by the modules that connect to Mongo DB, are generated by the corresponding operator.
CRD: mssqlaccounts.nexeed.bosch.io
CRs of kind mssqlaccounts.nexeed.bosch.io specify the connection details (e.g., host, port) as well as the credentials to be used by the modules to connect to the MSSQL DB instances. In addition, roles and permissions to access the MSSQL DB can be defined. Based on the given configurations, the configmaps and secrets used by the modules that connect to the MSSQL DB are generated by the corresponding operator.
CRD: oracleaccounts.nexeed.bosch.io
CRs of kind oracleaccounts.nexeed.bosch.io specify the connection details (e.g., host, port) as well as the credentials to be used by the modules to connect to the Oracle DB instances. In addition, roles and permissions to access the Oracle DB can be defined. Based on the given configurations, the configmaps and secrets used by the modules that connect to the Oracle DB are generated by the corresponding operator.
CRD: postgresaccounts.nexeed.bosch.io
CRs of kind postgresaccounts.nexeed.bosch.io specify the connection details (e.g., host, port) as well as the credentials to be used by the modules to connect to the Postgres DB instances. In addition, roles and permissions to access the Postgres DB can be defined. Based on the given configurations, the configmaps and secrets used by the modules that connect to the Postgres DB are generated by the corresponding operator.
CRD: rabbitmqttusers.nexeed.bosch.io
CRs of kind rabbitmqttusers.nexeed.bosch.io specify the connection and subscription details and credentials used by the deployed modules to connect to the RabbitMQ message broker for MQTT communication. This includes configuration options for the used vhost and tags.
CRD: rabbitmqusers.nexeed.bosch.io
CRs of kind rabbitmqusers.nexeed.bosch.io specify the connection and subscription details and credentials used by the deployed modules to connect to the RabbitMQ message broker. This includes configuration options for the used vhost and tags.
Quick test/development setup
The simplest way to deploy the HelmUmbrellaChart is to start from the sample Helm override file included in the umbrella chart artifact:
```shell
tar zxvf ias-<version>.tgz nexeed/custom-values-template.yaml
```
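The tar invocation extracts only the override template from the chart archive. The following self-contained sketch demonstrates the same flags on a stand-in archive (the file name `ias-demo.tgz` and its content are made up for the demonstration):

```shell
# Build a stand-in archive with the same layout as the chart artifact
mkdir -p nexeed
echo "global: {}" > nexeed/custom-values-template.yaml
tar czf ias-demo.tgz nexeed/custom-values-template.yaml
rm -rf nexeed

# Extract only the override template, as in the documented command
tar zxvf ias-demo.tgz nexeed/custom-values-template.yaml
cat nexeed/custom-values-template.yaml
```

The member path after the archive name restricts extraction to that single file, so the rest of the chart stays packed.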
Minimal configuration
For a quick start, the next section provides a minimal configuration in two parts (global settings and module settings). These parameters are the minimum required to start the web portal and the MACMA module.
```yaml
global:
  nexeedStage: <DEV | PREP | PROD>
  targetDeployment: <azure | k3s | kind; defaults to k3s>
  nexeedHost: <the hostname where the system will be exposed>
  imageCredentials:
    docker-registry-secret:
      registry: <the address of the container registry provided by the Bosch teams>
      username: <username>
      password: <password>
      email: <email>
  nexeedMacmaTenant0Id: <a tenant id in the form of a UUID>
  embeddedRabbitMQAdminPassword: <a password to be used by the RabbitMQ>
  embeddedMSSQLAdminPassword: <a password to be used by MSSQL>
  serverInstances:
    externalmssql:
      host: <host>
      port: 1433
      tls: true
      type: MSSQL
      default: true
      adminUser: <adminUsername>
      adminPassword: <adminPassword>
  nexeedCACerts: |
    -----BEGIN CERTIFICATE-----
    <certificate content>
    -----END CERTIFICATE-----
    <additional certs if exists>
modules:
  macma:
    enabled: true
    keycloakUser: <keycloak username; usually keycloak>
    keycloakPassword: <a password for the previously defined username>
    keycloakClientSecret: <Keycloak client secret>
    keycloakBCIMasterdataClientSecret: <Keycloak master data client secret>
  portal:
    enabled: true
    macmaPortalAdminUser: <macma username; usually admin>
    macmaPortalAdminPassword: <a password for the previously defined username>
```
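The `nexeedMacmaTenant0Id` value expects a UUID. One way to generate a suitable one (just an example; any standard UUID generator works):

```shell
# Print a random UUID v4 that can be pasted into nexeedMacmaTenant0Id
python3 -c "import uuid; print(uuid.uuid4())"
```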
The deployment may be further customized by overriding HelmChildChart parameters.
For instance, to change the number of replicas for the macma-core deployment, you can override the value macma.deployments.macma-core.replicaCount by adding the following configuration snippet to the Helm override file:
```yaml
macma:
  deployments:
    macma-core:
      replicaCount: 4
```
For advanced configuration, please consult the Advanced configuration chapter in this manual.
Note: A usual single-cluster Nexeed IAS deployment requires two Helm releases. That is why two helm upgrade commands have to be run.
Before starting the deployment, run the helm template command twice (once for each release) to check for validation errors.
Render the chart templates locally and display the output with the following commands:
```shell
helm template --namespace kube-system --values custom-values.yaml --set global.deploySystem=true nexeed .
helm template --namespace shared --values custom-values.yaml nexeed .
```
Install the charts into the prepared Kubernetes cluster with the following commands:
Step 1: Install the required namespaces, CRDs and cluster roles:
```shell
helm upgrade --install --namespace kube-system --values custom-values.yaml --set global.deploySystem=true nexeed .
```
Step 2: Install the application components:
```shell
helm upgrade --install --namespace shared --values custom-values.yaml nexeed .
```
Note: In the first release of the Nexeed IAS Helm chart (2023.01) the module charts were enabled by default (opt-out behavior). Starting with 2023.01.01, the module charts are disabled by default (opt-in behavior).
Enable or disable specific modules in custom-values.yaml.
The chart name used to enable a module is often not the same as the name of the module itself; the table below lists the correspondence together with each module's area of use:
| Chart Name | Module Name | Area of use |
|---|---|---|
| configurator | Configurator | Operating Base |
| engineering | Engineering | Operating Base |
| idbuilder | ID Builder | Operating Base |
| connectivity | Information Router | Operating Base |
| mmpd | Master Data Management | Operating Base |
| macma | Multitenant Access Control | Operating Base |
| monitoring | Elastic Monitoring | Operating Base |
| gateway | Nexeed Gateway | Operating Base |
| portal | Web Portal | Operating Base |
| smdp | Deviation Processor | Shopfloor Management |
| smessentials | Shopfloor Management Essentials | Shopfloor Management |
| gpo | Global Production Overview | Shopfloor Management |
| smor | Operational Routines | Shopfloor Management |
| parttrace | Part Traceability | Product and Quality |
| pqm | Process Quality | Product and Quality |
| specs | Setup Specs | Product and Quality |
| dnc | Setup Specs DNC | Product and Quality |
| blockman | Block Management | Execution |
| linecon | Line Control - linecon service | Execution |
| lineasm | Line Control - lineasm (AssemblyLine) service | Execution |
| mat | Material Management | Execution |
| om | Order Management | Execution |
| paco | Packaging Control | Execution |
| psm | Product Setup Management | Execution |
| rework | Rework Control | Execution |
| cm | Condition Monitoring | Machine and Equipment |
| mm | Maintenance Management | Machine and Equipment |
| toma | Tool Management | Machine and Equipment |
| ies | Intralogistics Execution System - IES Core | Intralogistics Execution System (Transport and Stock Management) |
| iesedge | Intralogistics Execution System - IES Edge | Intralogistics Execution System (Transport and Stock Management) |
| agvcc | AGV Control Center | Intralogistics Execution System (Transport and Stock Management) |
| ai | AI Services | Enabling Services |
| datapublisher | Data Publisher | Valuable Extensions |
| erpconn | ERP Connectivity | Valuable Extensions |
| notification | Notification Service | Valuable Extensions |
| orchestrator | Orchestrator | Valuable Extensions |
| tm | Ticket Management | Valuable Extensions |
| kafka | Kafka | Infrastructure and System Components |
| ansible-operator | Ansible Operator | Infrastructure and System Components |
| doc | Documentation | Infrastructure and System Components |
Please pay attention to spaces so that the nesting levels align correctly in the YAML format, e.g.:
```yaml
kafka:
  enabled: false
doc:
  enabled: true
```
Production setup
For a production setup, additional care has to be taken.
Helm storage backend
By default, Helm stores the release revision data in a secret in the release namespace. Kubernetes limits the size of a secret, and this limit also governs the amount of data that can be stored as part of a Helm release revision.
Since the Nexeed IAS umbrella chart contains data that grows with the number of module charts deployed, it is recommended to start with a different storage backend, namely the database one. The backend can, however, be switched at any time by following the steps in the official documentation: https://helm.sh/docs/topics/advanced/#storage-backends
At the time of writing of this manual, the only supported database backend is PostgreSQL.
One possibility is to install PostgreSQL in the same Kubernetes cluster (using the Bitnami PostgreSQL Helm chart, for example):
```shell
helm repo add bitnami https://charts.bitnami.com/bitnami
kubectl create namespace postgres
helm install postgres -n postgres bitnami/postgresql
export POSTGRES_PASSWORD=$(kubectl get secret --namespace postgres postgres-postgresql -o jsonpath="{.data.postgres-password}" | base64 -d)
kubectl run -it --rm postgres-postgresql-client --restart='Never' --namespace postgres \
  --image docker.io/bitnami/postgresql:15.2.0-debian-11-r2 \
  --env="PGPASSWORD=$POSTGRES_PASSWORD" \
  --command -- psql --host postgres-postgresql -U postgres -d postgres -p 5432 \
  -c "CREATE DATABASE HELM WITH ENCODING 'UTF8'" -S
```
and to add the following shell steps to your deployment pipelines (to instruct Helm to use the Postgres storage backend):
```shell
kubectl port-forward --namespace postgres svc/postgres-postgresql 5432:5432 &
sleep 2
export POSTGRES_PASSWORD=$(kubectl get secret --namespace postgres postgres-postgresql -o jsonpath="{.data.postgres-password}" | base64 -d)
export HELM_DRIVER=sql
export HELM_DRIVER_SQL_CONNECTION_STRING="postgresql://postgres:$POSTGRES_PASSWORD@127.0.0.1:5432/helm?sslmode=disable"
```
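With HELM_DRIVER and HELM_DRIVER_SQL_CONNECTION_STRING exported, every subsequent helm command in the same shell stores its release data in Postgres instead of in secrets. For reference, the expanded connection string has the following shape (illustrated here with a placeholder password, not a real one):

```shell
# Placeholder password purely for illustration
POSTGRES_PASSWORD=examplepass
HELM_DRIVER=sql
HELM_DRIVER_SQL_CONNECTION_STRING="postgresql://postgres:$POSTGRES_PASSWORD@127.0.0.1:5432/helm?sslmode=disable"
echo "$HELM_DRIVER_SQL_CONNECTION_STRING"
# → postgresql://postgres:examplepass@127.0.0.1:5432/helm?sslmode=disable
```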