Migration to 4.9.x (Nexeed IAS 2025.02.01.x)
In this release we verified that the Condition Monitoring services in Rules Extended (Kafka) mode are able to scale horizontally, and we defined a horizontal scaling guide based on performance tests with our reference customer.
Redefining system requirements
Resource requests and limits
In this release we removed Traefik as the gateway and redefined the system requirements for the Condition Monitoring services based on performance tests with our reference customer. The following table summarizes the changes to the system requirements for the Condition Monitoring services.
| Service Name | Default Replicas | Requested Resource (Before 4.9.0) | Requested Resource (After 4.9.0+) | Upper Limit (Before 4.9.0) | Upper Limit (After 4.9.0+) |
|---|---|---|---|---|---|
| cm-traefik-gateway | 2 | RAM: 256MB, CPU: 200m | Removed | RAM: 512MB, CPU: 600m | Removed |
| condition-monitoring-core | 2 | RAM: 512MB, CPU: 100m | RAM: 512MB, CPU: 200m | RAM: 4GB, CPU: 2 | RAM: 2GB, CPU: 1250m |
| rule-service-app | 2 | RAM: 512MB, CPU: 100m | RAM: 512MB, CPU: 100m | RAM: 1024MB, CPU: 750m | RAM: 1024MB, CPU: 1000m |
| rule-function-executor | 2 | RAM: 256MB, CPU: 100m | RAM: 256MB, CPU: 100m | RAM: 1536MB, CPU: 1500m | RAM: 1536MB, CPU: 1000m |
| rule-value-provider | 2 | RAM: 256MB, CPU: 200m | RAM: 256MB, CPU: 200m | RAM: 2048MB, CPU: 2000m | RAM: 1280MB, CPU: 1250m |
| rule-value-aggregator | 2 | RAM: 256MB, CPU: 100m | RAM: 256MB, CPU: 100m | RAM: 2048MB, CPU: 2000m | RAM: 1280MB, CPU: 1250m |
| rule-result-aggregator | 2 | RAM: 256MB, CPU: 100m | RAM: 256MB, CPU: 100m | RAM: 768MB, CPU: 1500m | RAM: 1024MB, CPU: 1250m |
| **SUM (2 replicas)** | | RAM: 4.5GB, CPU: 1.8 | RAM: 4GB, CPU: 1.6 | RAM: 23.5GB, CPU: 20.7 | RAM: 16.0GB, CPU: 14.0 |
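As a sketch of how the updated values could be applied, the new requests and limits for one service can be set with `kubectl set resources`; the namespace and deployment name used here are assumptions and may differ in your installation (Helm values overrides are an equivalent route):

```shell
# Apply the 4.9.0+ requests/limits to condition-monitoring-core.
# Namespace and deployment name are illustrative; adjust to your cluster.
kubectl -n condition-monitoring set resources deployment/condition-monitoring-core \
  --requests=cpu=200m,memory=512Mi \
  --limits=cpu=1250m,memory=2Gi
```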
Kafka topics configuration
| Version | Partitions per Topic | Total Partitions (28 Topics) |
|---|---|---|
| Before 4.9.0 | 50 | 1400 |
| 4.9.0 and later | 5 | 140 |
We have redefined the Kafka topics configuration, recommending a reduction in the number of partitions per topic to 5 (previously 50 in versions before 4.9.0). As shown in the example configuration on the example_kafka_topic_list page, with 28 topics, this results in a total of 140 partitions (down from 1400 in earlier versions). If your service experiences high load or requires scaling beyond 5 instances, consider increasing the number of partitions, but we recommend starting with 5 partitions per topic.
> **Note:** Kafka allows you to increase the number of partitions after topic creation (e.g., from 6 to 10). However, you cannot decrease the number of partitions (there is no shrink operation): if you need fewer partitions, you must delete and recreate the topic, which leads to data loss.
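To illustrate the one-way nature of partition changes, the standard Kafka CLI can create a topic with the recommended 5 partitions and later grow it, but not shrink it. The topic name, bootstrap server, and replication factor below are placeholders, not values from this product:

```shell
# Create a topic with the recommended 5 partitions
# (topic name, bootstrap server, and replication factor are placeholders).
kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic cm-example-topic --partitions 5 --replication-factor 3

# Increasing the partition count later is supported...
kafka-topics.sh --bootstrap-server localhost:9092 --alter \
  --topic cm-example-topic --partitions 10

# ...but decreasing it is rejected by the broker; to go back to 5
# partitions you would have to delete and recreate the topic,
# losing its data.
```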
Horizontal and service scaling guidance
For detailed guidance on horizontal scaling, recommended service instance counts, and performance-based scaling factors, please refer to the Horizontal Scaling Guidance section in the Required Monitoring documentation.