Runtime configuration handling
The term Runtime Configuration refers to those configuration values which the user can update at runtime, without needing to restart the service(s). Please see the runtime_configuration section for more information about the configuration keys and the usage of the UI for updating/downloading the configuration.
This section focuses on a different area, namely how the configuration behaves in the background, and aims to highlight important notes.
Configuration usage in the service
Although the configuration is rather static, most use-cases need some kind of business configuration. These values are therefore cached in all service instances (pods), so access to them is fast and does not slow down the cycle-time. To make sure all caches are up to date, they have to be refreshed whenever the configuration is updated. This is done via the message broker (messaging_middleware).
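For illustration only, a minimal sketch of such a per-instance cache, assuming a hypothetical ConfigurationRepository that loads the key/value pairs from the database (all class and method names here are invented for the example, not the service's actual code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical repository; in the real service this would read the
// configuration table from the database.
interface ConfigurationRepository {
    Map<String, String> loadAll();
}

// Per-instance cache: reads are served from memory, so looking up a value
// does not add a database round-trip to the cycle-time.
class ConfigurationCache {
    private final ConfigurationRepository repository;
    private final Map<String, String> values = new ConcurrentHashMap<>();

    ConfigurationCache(ConfigurationRepository repository) {
        this.repository = repository;
        refresh(); // warm the cache on startup
    }

    String get(String key) {
        return values.get(key); // fast in-memory lookup
    }

    // Called whenever an update notification arrives (or by the fall-back timer).
    final void refresh() {
        Map<String, String> fresh = repository.loadAll();
        values.keySet().retainAll(fresh.keySet()); // drop removed keys
        values.putAll(fresh);                      // add/overwrite current keys
    }
}
```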
Configuration updates via message broker (messaging_middleware)
As a prerequisite, let's assume the installation is running with 2 replicas. Whenever the user updates the configuration in the UI, one of the replicas serves the request. This replica updates the values in the database and also refreshes its own cache.
To refresh the other replica's cache as well, the initiating replica publishes a message to the message bus at the end of the request, notifying every other interested party about the change. All replicas are subscribed to this notification. The replica that originally sent it simply ignores it; the other replicas process the message, re-read the configuration from the database, and update their caches with the new values.
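The round trip could look roughly like the sketch below, assuming RabbitMQ as the messaging_middleware and building on the ConfigurationCache sketch above; the class name, the payload format (the sender's InstanceId), and the amqp-client usage are illustrative assumptions, not the service's actual code:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.DeliverCallback;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

class ConfigurationChangeNotifier {
    static final String EXCHANGE = "x.materialmanagement.domain"; // see Queue handling below

    private final Channel channel;
    private final String instanceId; // unique per replica (pod)
    private final ConfigurationCache cache;

    ConfigurationChangeNotifier(Channel channel, String instanceId,
                                ConfigurationCache cache) {
        this.channel = channel;
        this.instanceId = instanceId;
        this.cache = cache;
    }

    // Called at the end of the update request, after the database and the
    // serving replica's own cache have already been refreshed.
    void publishChange() throws IOException {
        // The payload carries the sender's InstanceId, so the sender can
        // recognise and ignore its own notification.
        channel.basicPublish(EXCHANGE, "", null,
                instanceId.getBytes(StandardCharsets.UTF_8));
    }

    // Runs in every replica; queueName is this replica's own queue
    // (see Queue handling below).
    void subscribe(String queueName) throws IOException {
        DeliverCallback onMessage = (tag, delivery) -> {
            String sender = new String(delivery.getBody(), StandardCharsets.UTF_8);
            if (!sender.equals(instanceId)) {
                cache.refresh(); // re-read the configuration from the database
            }
            // A notification from ourselves is simply ignored.
        };
        channel.basicConsume(queueName, true, onMessage, tag -> { });
    }
}
```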
If the message broker (messaging_middleware) is unavailable, a fall-back mechanism is in place which refreshes the configuration every five minutes.
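Conceptually the fall-back is just a periodic refresh. A minimal sketch using a plain ScheduledExecutorService (the actual scheduling mechanism in the service may differ):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Fall-back: even if no broker notification arrives, the cache is
// re-read from the database at a fixed five-minute interval.
class ConfigurationRefreshFallback {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    void start(ConfigurationCache cache) {
        scheduler.scheduleAtFixedRate(cache::refresh, 5, 5, TimeUnit.MINUTES);
    }

    void stop() {
        scheduler.shutdown();
    }
}
```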
Queue handling
As described above, all replicas subscribe to the internal notifications about configuration updates. To do so, each replica creates a queue whose name contains the InstanceId of the replica. This queue is then bound to the x.materialmanagement.domain exchange, which forwards the notification to all connected queues. Whenever a pod is stopped, it tears down its queue, so only queues of currently running instances stay alive. As an example, for a system with 2 replicas there would be two such queues, one per InstanceId, as illustrated in the sketch below.
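A rough sketch of the per-instance queue setup, again assuming RabbitMQ; the queue naming scheme and the declaration flags are illustrative assumptions, while the exchange name comes from the text above:

```java
import com.rabbitmq.client.Channel;
import java.io.IOException;

class ConfigurationQueueSetup {
    // With 2 replicas this yields e.g. (naming scheme is illustrative):
    //   q.materialmanagement.config.<InstanceId-of-replica-1>
    //   q.materialmanagement.config.<InstanceId-of-replica-2>
    static String declareQueue(Channel channel, String instanceId) throws IOException {
        String queueName = "q.materialmanagement.config." + instanceId;

        // fanout: every bound queue receives a copy of the notification
        channel.exchangeDeclare("x.materialmanagement.domain", "fanout", true);

        // durable=false, exclusive=false, autoDelete=true: the queue is torn
        // down when its consumer (the pod) goes away, so only queues of
        // currently running instances stay alive.
        channel.queueDeclare(queueName, false, false, true, null);
        channel.queueBind(queueName, "x.materialmanagement.domain", "");
        return queueName;
    }
}
```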