CM Influx database migration

In the course of splitting the Condition Process Monitoring (CPM) module into Condition Monitoring (CM) and Process Quality (PQ), the existing CPM data in the Influx database needs to be migrated to a new Influx database for each respective module.

This section describes the CPM to CM data migration. For the CPM to PQM data migration, please check the PQM migration documentation.

To migrate the CM Influx data, use the following guides:

  1. Migrating Condition Monitoring timeseries data except raw measurements

  2. Migrating raw measurements only

For more details on how to use the Influx data migration tool, see Appendix: Influx migration tool.

Prerequisites

  1. Prepare the new Influx Databases

  2. Clarify with the business which data needs to be migrated

    • Should all data be migrated or just the data after a specific date?

    • Should the original data be deleted after the migration?

  3. Inspect the retention policies in the existing Influx DB and their disk sizes (see the inspection sketch after this list)

    • What is a suitable chunk size for migrating the data so that we don’t run out of memory?

  4. Derive suitable parameters for the migration tool (see below for details) and create a migration plan according to your previous findings

  5. Install Java JDK 17 on the machine on which you want to execute the migration tool

  6. Add the needed certificates to the Java TrustStore (optional if the 'disable-ssl' option is enabled); see the keytool sketch after this list.

  7. The JAR file can be found in Artifactory or on the network share (use the newest released version):

    • https://artifactory.boschdevcloud.com/artifactory/lab000003-bci-mvn-release-local/com/bosch/bci/trinity/influx-migration/

    • \\rb-repobci.de.bosch.com\InternalShare\IAS\IAS2023.02\MigrationCPMToCM\CM\influx
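
The following commands are only a sketch of how the retention policies and their disk usage can be inspected (prerequisite 3) with the InfluxDB 1.x influx CLI; the host, credentials and data directory are placeholders taken from the examples in this guide and may differ in your environment:

influx -host si0vmc3101.de.bosch.com -port 8086 -ssl -username cpm -password '<source-password>' \
-execute 'SHOW RETENTION POLICIES ON "cpm"'
# on the Influx host itself, assuming the default data directory:
du -sh /var/lib/influxdb/data/cpm/*

If SSL checks stay enabled (prerequisite 6), a self-signed server certificate can be imported into the default Java TrustStore (cacerts) of the JDK 17 installation, for example as follows; the alias, the certificate file name and the default store password 'changeit' are assumptions:

keytool -importcert -trustcacerts -cacerts -alias influx-source \
-file influx-source.crt -storepass changeit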

Make sure that there is a valid and stable connection between the Source, the Target and the host where the Influx Migration application is running.
The migration mainly depends on three variables: the data size per Influx point that is to be migrated (an Event is small), the query-chunk-size, and the quality of the HTTP connection between the Source, the Target and the host where the Influx Migration application is running. Therefore, there is no golden value for the query-chunk-size; it all depends on how much data there is to migrate and how fast and reliable the HTTP connection is.
Please be advised that the port forwarding feature of K9s proved to be very unreliable during our tests. If you rely on this feature for the migration, there is a high chance that you will encounter problems. A workaround is to set ok-http-logging-interceptor-level to BODY; this produces a lot of output, so grep for errors to make sure the migration ran successfully (a sketch follows below). We advise finding a better setup and NOT relying on this workaround.
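
If you have to fall back to this workaround, the verbose output can be captured and searched for errors afterwards, for example as in the following sketch (the log file name is an example; the remaining parameters are the ones described in this guide):

# capture the verbose tool output to a log file and scan it for errors afterwards
java -jar influx-migration-<release_version>.jar <parameters as described below> \
--ok-http-logging-interceptor-level=BODY 2>&1 | tee migration.log
grep -iE 'error|exception' migration.log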

Migrating Condition Monitoring timeseries data except raw measurements

Events and measurements have a small data size per Influx point. Thus, a larger query-chunk-size can be used to speed up the migration process, but there are multiple source retention policies that need to be migrated.

It has to be decided whether the raw measurement data has to be migrated, as this can be a lot of data and can take some time. That is why we propose migrating the raw measurement data in a separate step (Migrating raw measurements only).

If the raw data should not be migrated in this step, then rp_msm_raw should be added to the source-retention-exclusions (as shown in the example).

If the migration is done within a trusted network, the --disable-ssl option can be used to skip the SSL certificate checks for the Source and the Target. In that case there is no need to add the self-signed certificates to the Java TrustStore.

During our test phase, we used the following command to migrate the Condition Monitoring timeseries data except raw measurements (rp_event, rp_msm_deferred, rp_msm_level1, rp_msm_level2, rp_msm_level3, rp_ppmp_meta_data):

Exclusions: autogen, rp_process and rp_msm_raw

java -jar influx-migration-<release_version>.jar \
--tenant-id=7311ea8c-5d48-43fe-acf9-980eedf24b6c \
--query-chunk-size=40000 \
--source-url=https://si0vmc3101.de.bosch.com:8086 \
--source-db=cpm \
--source-username=cpm \
--source-password=aklnnNBUIf-efekla \
--target-url=http://localhost:8086 \
--target-db=cpm \
--target-username=admin \
--target-password=admin_password \
--skip-continuous-queries=true \
--source-retention-exclusions=autogen,rp_process,rp_msm_raw

Migrating raw measurements only

Raw measurements have a small data size per Influx point. Thus, a larger query-chunk-size can be used to speed up the migration process.

If the migration is done within a trusted network, the --disable-ssl option can be used to skip the SSL certificate checks for the Source and the Target. In that case there is no need to add the self-signed certificates to the Java TrustStore.

During our test phase, we used the following command to migrate raw measurements only (rp_msm_raw):

Exclusions: autogen, rp_event, rp_msm_deferred, rp_msm_level1, rp_msm_level2, rp_msm_level3, rp_ppmp_meta_data and rp_process

java -jar influx-migration-<release_version>.jar \
--tenant-id=7311ea8c-5d48-43fe-acf9-980eedf24b6c \
--query-chunk-size=40000 \
--source-url=https://si0vmc3101.de.bosch.com:8086 \
--source-db=cpm \
--source-username=cpm \
--source-password=aklnnNBUIf-efekla \
--target-url=http://localhost:8086 \
--target-db=cpm \
--target-username=admin \
--target-password=admin_password \
--skip-continuous-queries=true \
--source-retention-exclusions=autogen,rp_event,rp_msm_deferred,rp_msm_level1,rp_msm_level2,rp_msm_level3,rp_ppmp_meta_data,rp_process

Appendix: Influx migration tool

General notes

  1. This tool streams the Influx points into memory and then writes them to the target DB, so make sure that you have a good network connection between the source host, the host where you execute the tool, and the target host.

  2. This tool does NOT create, delete, or modify rights to a specific database in any way

  3. As continuous queries do not belong to a retention policy, they will either all be copied or none (use --skip-continuous-queries)

  4. This tool does NOT delete any Continuous Queries whatsoever. The reason for this is that not all data is guaranteed to be moved to the new database, and then the Continuous Query might still be relevant to the remaining data in the original database.

  5. One of the stretch goals for this tool was to be able to continue the migration at a later stage (to continue where it was previously stopped). Although this was not specifically implemented, the tool will not duplicate any Data, Measurements, Retention Policies, Continuous Queries or anything else when it is run again with the same parameters.

  6. BE CAREFUL when choosing to add the --delete-source option. The results are permanent and immediate. (Its default is false to prevent this from happening accidentally.)

  7. ALWAYS check and double-check the results, especially Retention Policies and Continuous Queries that were created.

  8. The tool will output the number of points copied per chunk to give a sense of progress. You can use a count query to know how many points to expect before starting the migration tool (see the count query sketch after this list).

    • If the tool doesn’t output progress for some time (> 30 s) and seems stuck, hit CTRL-C and decrease the chunk size with --query-chunk-size

  9. Migrate raw measurements and other Condition Monitoring timeseries data in separate steps; see Migrating raw measurements only and Migrating Condition Monitoring timeseries data except raw measurements

  10. Figure out which retention policies you do not want to migrate and use --source-retention-exclusions to exclude them

  11. If the tool crashes at some point, you can simply run it again; existing points will just be overwritten. If you know which data is missing, you can narrow the run down with the --from-timestamp and --to-timestamp options.

  12. Some points might already have exceeded their retention period by the time they are written into the target database!
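
The count query mentioned in note 8 can, for example, be run with the InfluxDB 1.x influx CLI as sketched below; the host, the credentials and the retention policy rp_event are placeholders taken from the examples in this guide. The result is reported per measurement and field, so it only gives a rough indication of the number of points to expect:

# count points per measurement and field in one source retention policy
influx -host si0vmc3101.de.bosch.com -port 8086 -ssl -username cpm -password '<source-password>' \
-database cpm -execute 'SELECT COUNT(*) FROM "cpm"."rp_event"./.*/'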

Using the migration tool

The provided migration tool can be run via the command line. Prerequisite: Java 17 is installed.
The following parameters need to be provided and should be specified before running the tool:

Source Database Configuration

Parameter | Required | Description | Example
source-db | yes | Name of the migration source database | cpm
source-url | yes | URL to the migration source Influx DB instance | https://rngvmc0129.de.bosch.com:8086
source-username | yes | Username to read data from source database. Recommendation: use user with admin rights. | admin
source-password | yes | Password to the respective username to read data from source database | influx-db-admin-secret!

Target Database Configuration

Parameter | Required | Description | Example
target-db | yes | Name of the migration target database | cm
target-url | yes | URL to the migration target Influx DB instance | https://rngvmc0129.de.bosch.com:8086
target-username | yes | Username to write data to target database. Recommendation: use user with admin rights. | admin
target-password | yes | Password to the respective username to write data to target database | influx-db-admin-secret!

Migration Related Parameters

Parameter | Required | Description | Default | Example
tenant-id | yes | Id of the tenant whose data should be migrated | - | 7311ea8c-5d48-43fe-acf9-980eedf24b6c
query-chunk-size | yes | The number of data points to be migrated at the same time | 10000 | 10000
source-retention-exclusions | no | Comma-separated list of retention policies to be excluded from migration. If not set or empty, will migrate all retention policies | - | rp_process, rp_event
skip-continuous-queries | no | Whether the continuous queries shall be skipped | false | true
from-timestamp | no | Only data newer than this UTC ISO timestamp will be migrated | - | 2022-11-30T11:09:35+01:00
to-timestamp | no | Only data up until this timestamp will be migrated | The UTC timestamp when the migration tool was started | 2022-11-30T12:09:35+01:00
delete-source | no | Whether the Source data should be deleted or not | false | false
disable-ssl | no | Whether to disable the SSL certificate checks | false | true
ok-http-logging-interceptor-level | no | Sets the logging level of the OK HTTP Client. Accepted values: NONE, BASIC, HEADERS or BODY | NONE | BODY
ok-http-read-timeout | no | OK HTTP Client read timeout in seconds | 30 | 30
ok-http-write-timeout | no | OK HTTP Client write timeout in seconds | 30 | 30

To get help on all options:

java -jar influx-migration-<version>.jar --help

Example Run in Command Line:

java -jar influx-migration-0.0.1-SNAPSHOT.jar \
--source-url=https://rngvmc0129.de.bosch.com:8086 \
--source-db=cpm \
--source-username=admin \
--source-password=pool31-admin-influxdb-secret! \
--target-url=https://rngvmc0129.de.bosch.com:8086 \
--target-db=cpm_migration \
--target-username=admin \
--target-password=pool31-admin-influxdb-secret! \
--tenant-id=7311ea8c-5d48-43fe-acf9-980eedf24b6c \
--query-chunk-size=50 \
--source-retention-exclusions=rp_process,rp_msm_raw \
--from-timestamp=2022-11-30T11:09:35+01:00 \
--delete-source=false

With the above parameters, the tool will use SSL to establish the connections to both the Source and the Target, and it will create all retention policies of the source database 'cpm' except 'rp_process' and 'rp_msm_raw' on the target database 'cpm_migration'. Next, it will copy all data newer than 2022-11-30T11:09:35+01:00 from all retention policies except 'rp_process' and 'rp_msm_raw' from database 'cpm' to database 'cpm_migration' in batches of 50 data points each. It will also create those Continuous Queries on 'cpm_migration' that existed on 'cpm' and are related to any but the excluded retention policies, and it will NOT delete the original source data.
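
To double-check the created retention policies and continuous queries on the target (see note 7 in the general notes), queries such as the following can be used with the InfluxDB 1.x influx CLI; the host and username are taken from the example above, the password is a placeholder:

# list the retention policies created on the target database
influx -host rngvmc0129.de.bosch.com -port 8086 -ssl -username admin -password '<target-password>' \
-execute 'SHOW RETENTION POLICIES ON "cpm_migration"'
# list the continuous queries per database
influx -host rngvmc0129.de.bosch.com -port 8086 -ssl -username admin -password '<target-password>' \
-execute 'SHOW CONTINUOUS QUERIES'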
