Setup and configuration

Base configuration

The following environment variables apply to an Archiving Bridge deployment.

Note that common environment variables are used with a service-specific prefix; for the Archiving Bridge the prefix is NEXEED_ARCHIVINGBRIDGE_ (for example, NEXEED_ARCHIVINGBRIDGE_OIDC__TenantId).

Common variables

| Variable | Description | Required | Defaults to | Sources |
|---|---|---|---|---|
| NEXEED_GLOBAL_ENVIRONMENT_NAME | Defined on IAS system level. | | | Environment Variable |
| Logging__LogLevel__Default | Log level for the service. | No | Warning | Environment Variable |

OIDC

| Variable | Description | Required | Defaults to | Sources |
|---|---|---|---|---|
| OIDC__SystemId | SystemId used for the Portal registration; should be tenant0. | Yes | (set via global configuration) | Environment Variable |
| OIDC__TenantId | TenantId used for MACMA; should be tenant0. | Yes | (set via global configuration) | Environment Variable |
| OIDC__ServiceUrl | Base URL of MACMA, server specific. | Yes | (set via global configuration) | Environment Variable |
| OIDC__ClientId | Client ID of the client associated with this service in MACMA. | Yes | (set via global configuration) | Environment Variable |
| OIDC__ClientSecret | Client secret of the client associated with this service in MACMA. | Yes | (set via global configuration) | Environment Variable |
| OIDC__AclService__Scope | Scope required for MACMA. | Yes | (set via global configuration) | Environment Variable |
Database & messaging settings

| Variable | Description | Required | Defaults to | Sources |
|---|---|---|---|---|
| NEXEED_ARCHIVINGBRIDGE_activeDatasource | Defines the datasource "kind". Possible value: "bufferOra" for Oracle. | Yes | bufferOra | Environment Variable |
| NEXEED_ARCHIVINGBRIDGE_dataSources__bufferOra__ConnectionString | Connection string to the buffer database (in case of Oracle). | Yes, if "bufferOra" is set as "activeDatasource" | (set via global configuration) | Environment Variable |
| NEXEED_ARCHIVINGBRIDGE_MessageQueue__QueueName | Queue name (RabbitMQ) for incoming archiving export messages. | Yes | (set via global configuration) | Environment Variable |
| NEXEED_ARCHIVINGBRIDGE_MessageQueue__ExchangeName | Exchange name (RabbitMQ) for all archiving-related messages. | Yes | (set via global configuration) | Environment Variable |
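Because a missing required variable typically surfaces only at service start, a preflight check can be run beforehand. Below is a minimal sketch in Python; the variable list mirrors the tables above, and the exact spellings should be verified against your deployment:

```python
import os
import sys

# Variables marked as required in the tables above (spellings reconstructed
# from this documentation; verify against your actual deployment).
REQUIRED_VARS = [
    "OIDC__SystemId",
    "OIDC__TenantId",
    "OIDC__ServiceUrl",
    "OIDC__ClientId",
    "OIDC__ClientSecret",
    "NEXEED_ARCHIVINGBRIDGE_activeDatasource",
    "NEXEED_ARCHIVINGBRIDGE_MessageQueue__QueueName",
    "NEXEED_ARCHIVINGBRIDGE_MessageQueue__ExchangeName",
]


def preflight() -> list:
    """Return the names of required variables that are missing or empty."""
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    # The buffer connection string is only required for the Oracle datasource.
    if os.environ.get("NEXEED_ARCHIVINGBRIDGE_activeDatasource", "bufferOra") == "bufferOra":
        if not os.environ.get("NEXEED_ARCHIVINGBRIDGE_dataSources__bufferOra__ConnectionString"):
            missing.append("NEXEED_ARCHIVINGBRIDGE_dataSources__bufferOra__ConnectionString")
    return missing


if __name__ == "__main__":
    missing = preflight()
    if missing:
        sys.exit("Missing required environment variables: " + ", ".join(missing))
```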
Resources in Archiving Bridge

| ID | Name | Type | Description | Privileges |
|---|---|---|---|---|
| health | Health endpoint | urn:com:bosch:bci:operation | Provides information about the service and its dependencies; get service status information. | execute |
| ArchivingBridgeConfiguration | ArchivingBridgeConfiguration | urn:bosch:nexeed:archivingbridge:config | Modify ArchivingBridge configuration. | read, modify, add, delete |
| ArchivingBridgeExport | ArchivingBridgeExport | urn:bosch:nexeed:archivingbridge:export | ArchivingBridge DataExport API. | execute, read, delete |
OpenTelemetry configuration
OpenTelemetry enhances observability in the Archiving Bridge system by providing comprehensive insights into performance, enabling efficient failure troubleshooting, and facilitating detailed analysis of archiving operations through the collection of metrics, traces, and logs.
| Parameter | Defaults to | Required | Sources | Description |
|---|---|---|---|---|
| local.observability.otelAutoInjectEnvParams | true | Yes | Environment Variable | Flag to enable or disable injection of OTEL environment variables. |
Scope-based configuration

Configuration scopes

Archive Route Parameters Configuration

By configuring archive route parameters, users can manage the archiving process. This enables a part to be archived in multiple archives according to the settings specified in the Archiving Bridge (AB).

The Archiving Bridge receives the archive route parameters along with the archiving request and chooses the appropriate configurations from the database based on the archive routes, module ID, and tenant ID.

By default, archiving is disabled (false) for all configuration scopes. The "Enable Archiving" setting can be set to true or false. When enabled, it initiates the archiving of parts from that point onward. Note that this setting does not apply to previously created parts.

The following rules govern how scope values are resolved and propagated:

- If no scope information is available in the archive request, the default scope is used.
- If the user changes the archive route parameter settings in the default scope, these changes are automatically reflected in other scopes whose values are identical to the default.
- If the user updates the default scope value with an empty one, the current scope value also becomes empty in other scopes whose value is identical to the previous default value.
- If the current scope value is empty for a scope other than default and the user changes the default scope value, the updated value is not reflected in that scope.
- If the user changes the current scope value of a specific scope, the value is set only for that scope.
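The propagation rules above can be expressed compactly in code. Below is a minimal sketch of the rule set; the data model and function names are illustrative, not the service's implementation:

```python
DEFAULT = "default"


def resolve(scopes: dict, scope) -> str:
    """If no scope information is available in the request, fall back to default."""
    if scope is None or scope not in scopes:
        return scopes[DEFAULT]
    return scopes[scope]


def update_scope(scopes: dict, scope: str, new_value: str) -> None:
    """Apply a value change following the propagation rules listed above."""
    if scope != DEFAULT:
        # A change to a non-default scope affects only that scope.
        scopes[scope] = new_value
        return
    old_default = scopes[DEFAULT]
    scopes[DEFAULT] = new_value
    for name, value in scopes.items():
        # Changes to the default scope propagate only to scopes whose current
        # value is identical to the previous default value; scopes holding an
        # empty (or otherwise different) value are left untouched.
        if name != DEFAULT and value == old_default:
            scopes[name] = new_value
```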
Archiving bridge
| If you are using a Production Archive to archive your data, please refer to the separate documentation in order to mount the SMB share. |
Base configuration
OIDC settings

| Variable | Description | Required | Sources |
|---|---|---|---|
| NEXEED_ARCHIVINGBRIDGE_OIDC__TenantId | TenantId used for MACMA; should be tenant0. | Yes | Environment Variable |
| NEXEED_ARCHIVINGBRIDGE_OIDC__ServiceUrl | Base URL of MACMA, server specific. | Yes | Environment Variable |
| NEXEED_ARCHIVINGBRIDGE_OIDC__ClientId | Client ID of the client associated with this service in MACMA. | Yes | Environment Variable |
Database and messaging settings

| Variable | Description | Required | Defaults to | Sources |
|---|---|---|---|---|
| NEXEED_ARCHIVINGBRIDGE_activeDatasource | Defines the datasource "kind". Possible value: "bufferOra" for Oracle. | Yes | bufferOra | Environment Variable |
| NEXEED_ARCHIVINGBRIDGE_dataSources__bufferOra__ConnectionString | Connection string to the buffer database (in case of Oracle). | Yes, if "bufferOra" is set as "activeDatasource" | | Environment Variable |
Runtime settings
Note: The current service version needs to be configured via REST API. The following sub-chapters describe the corresponding (REST) parameter contents.

As initial setup support, an appropriate Postman collection is available on the IAS distribution share. The following example shows a complete configuration document:
```json
{
"contentFormat": {
"schema": {
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"properties": {
"metaInfoItems": {
"type": "object",
"properties": {
"uniquepart_id": {
"type": "string"
},
"serial_number": {
"type": "string"
},
"serial_number_date": {
"type": "string"
},
"order_id": {
"type": "string"
},
"part_attribute": {
"type": "integer"
},
"type_number": {
"type": "string"
},
"type_variant": {
"type": "string"
},
"result_date": {
"type": "string"
},
"result_state": {
"type": "integer"
},
"batch": {
"type": "string"
},
"lastknown_proc_no": {
"type": "integer"
},
"lastknownlocation": {
"type": "string"
},
"part_class": {
"type": "string"
},
"release": {
"type": "string"
},
"lotid": {
"type": "string"
},
"package_id": {
"type": "string"
},
"packaging_result_date": {
"type": "string"
},
"product": {
"type": "string"
},
"progress_date": {
"type": "string"
}
},
"required": [
"uniquepart_id",
"result_date",
"result_state",
"progress_date"
]
}
},
"required": [
"metaInfoItems"
]
},
"validationEnabled": true
},
"module": {
"url": "https://localhost/qd-archiving/{tenantId}/{id}",
"clientName": "someClientId",
"validateHash": true
},
"adapter": {
"name": "prod-archive",
"exportWorkers": 1,
"archiveRouteParameters":{
"moduleSpecificKey_1":"moduleSpecificValue_1",
"moduleSpecificKey_2":"moduleSpecificValue_2"
},
"uiInformation":{
"title":"A brief title identifying this specific archive configuration set.",
"description":"A description helping the user to identify this specific archive configuration set."
},
"targetDbReference": "nexeed-prod-archive",
"archiveId": "ZM",
"exportSettings": {
"maxBundleSize": 1000,
"ciInputShare": "/mnt/prod-archive/opcon_test_zip",
"tempDataPath": "/tmp/archiving-bridge/working_dir",
"targetDbConnectionString": "Data Source=<SERVER_ADDRESS>:<PORT>/<SERVICAME>;Persist Security Info=True;User Id=INSERT-USERID-HERE;Password=INSERT-PW-HERE;Enlist=true;HA events=true;Min Pool Size=10;Decr Pool Size=3;Connection Lifetime=180;Connection Timeout=60;Validate Connection=true",
"uniqueIdentifierPropertyName": "uniquepart_id",
"uniqueIdentifierColumnName": "UNIQUEPART_ID",
"uniqueDatePropertyName": "progress_date",
"uniqueDateColumnName": "PROGRESS_DATE",
"exportTimeout": "01:30:00"
},
"importSettings": {
"verifyFiles": true,
"datasource": {
"url": "http://RB-AC-ASxxxx.de:8080/archive?get&Version=0045&contRep{0}&docId={1}",
"parameters": {
"Version": "0045",
"contRep": "{contRep}",
"docId": "{docId}"
}
}
},
"dataTypes": {
"uniquepart_id": "1",
"serial_number": "1",
"serial_number_date": "3",
"part_attribute": "1",
"type_number": "1",
"type_variant": "1",
"result_date": "3",
"result_state": "1",
"batch": "1",
"part_class": "1",
"product": "1",
"package_id": "1",
"packaging_result_date": "3",
"progress_date": "3"
},
"ixAttribSettings": {
"accessTableNamePattern": null,
"tableNamePattern": "T_NEXEED_QD_<SOME_KEY>",
"documentFormat": "application/x-zip-compressed",
"archivingDateExpression": "to_date(sysdate)",
"docIdType": "%s",
"archiveIdType": "%s",
"dateTimeFormat": "YYYY-MM-DD HH24:MI:SS.FF TZH:TZM"
},
"cmdSettings": {
"docType": "application/x-zip-compressed",
"archiveId": "ZM"
}
}
}
```
The whole document consists of 3 main parts:

- contentFormat
- module
- adapter

It represents a complete client module configuration set that describes the meta-data format ("contentFormat"), the source module address info ("module"), and the archiving adapter type ("adapter") to use for final long-term data storage.
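Since the document is uploaded through the configuration endpoint (see the Postman collection mentioned above), the call itself is a plain REST request. Below is a hedged sketch using Python's requests library; the endpoint URL, HTTP verb, and token handling are placeholders to be taken from the Postman collection:

```python
import json

import requests

# Placeholder values; take the real route, HTTP verb, and credentials from
# the Postman collection on the IAS distribution share.
CONFIG_ENDPOINT = "https://<host>/archivingbridge/api/v1/<tenantId>/configuration"
BEARER_TOKEN = "<token obtained from MACMA>"

with open("module-config.json", encoding="utf-8") as fh:
    config = json.load(fh)  # the contentFormat/module/adapter document shown above

response = requests.put(  # the verb (PUT vs. POST) depends on the actual API
    CONFIG_ENDPOINT,
    json=config,
    headers={"Authorization": "Bearer " + BEARER_TOKEN},
    timeout=30,
)
response.raise_for_status()
print("Configuration accepted:", response.status_code)
```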
Section "contentFormat"
Section contains the JSON schema definition for the modules' meta-data information that is provided with each archiving request.
| Attribute | Description | Required |
|---|---|---|
schema |
JSON schema definition for meta data validation |
x |
validationEnabled |
Flag to enable/disable meta data validation. Default: true |
x |
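When validationEnabled is set, the meta data of each archiving request is validated against this schema. The check can be reproduced locally, for example with Python's jsonschema package; this is an illustration of the mechanism, not the service's own code:

```python
from jsonschema import Draft4Validator

# Abbreviated draft-04 schema from the "contentFormat" section above.
schema = {
    "$schema": "http://json-schema.org/draft-04/schema#",
    "type": "object",
    "properties": {
        "metaInfoItems": {
            "type": "object",
            "properties": {
                "uniquepart_id": {"type": "string"},
                "result_date": {"type": "string"},
                "result_state": {"type": "integer"},
                "progress_date": {"type": "string"},
            },
            "required": ["uniquepart_id", "result_date", "result_state", "progress_date"],
        }
    },
    "required": ["metaInfoItems"],
}

# Example meta data as it could arrive with an archiving request.
meta = {
    "metaInfoItems": {
        "uniquepart_id": "4711-0815",
        "result_date": "2024-05-13 09:15:00",
        "result_state": 0,
        "progress_date": "2024-05-13 09:15:00",
    }
}

Draft4Validator.check_schema(schema)  # the schema itself must be valid draft-04
errors = list(Draft4Validator(schema).iter_errors(meta))
print("valid" if not errors else [e.message for e in errors])
```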
Section "module"
| Attribute | Description | Required |
|---|---|---|
url |
URL (pattern) where the client module provides |
x |
clientName |
(Unique) Name of source / client module |
x |
validateHash |
Defines if the provided hash code within the archiving request message needs to be validated (with the essential archiving binary/repository data). Default: true |
x |
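For illustration, the url pattern is resolved with the {tenantId} and {id} placeholders from the archiving request; when validateHash is set, the downloaded file is compared against the hash from the request message. Below is a sketch assuming a SHA-256 hex digest; the actual algorithm and message fields are defined by the archiving request contract:

```python
import hashlib

import requests


def fetch_and_verify(url_pattern: str, tenant_id: str, part_id: str,
                     expected_hash: str, validate_hash: bool = True) -> bytes:
    """Download the module's zip file and optionally verify its hash.

    The SHA-256 hex digest is an assumption of this sketch; the real
    algorithm and fields are defined by the archiving request contract.
    """
    url = url_pattern.format(tenantId=tenant_id, id=part_id)
    payload = requests.get(url, timeout=60).content
    if validate_hash and hashlib.sha256(payload).hexdigest() != expected_hash:
        raise ValueError("hash mismatch for " + url)
    return payload
```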
Target: (BD) Production.Archive

Note: To configure the Archiving Bridge to export data into the production archive, the following REST parameters must be provided via the configuration endpoint.

Before configuring the Archiving Bridge service, it is essential to define and create two persistent volumes (folders) in your cluster specifically for use with the production archive adapter. These volumes facilitate the sharing of files with the production archive (parameter: ciInputShare) and the temporary storage of files (parameter: tempDataPath).

These two parameters must be configured in the variables ciInputShare and tempDataPath within the configuration settings. The following example illustrates their placement:
```json
{
...,
"adapter": {
...,
"exportSettings": {
...,
"ciInputShare": "/mnt/smb/prod-archive1",
"tempDataPath": "/tmp/archiving-bridge/working_dir",
...
},
...
}
}
```
In the following example, a complete configuration is provided for uploading via a REST endpoint.
```json
{
"contentFormat": {
"schema": {
"type": "object",
"properties": {
"uniquepart_id": {
"type": "string"
},
"serial_number": {
"type": "string"
},
"serial_number_date": {
"type": "string"
},
"order_id": {
"type": "string"
},
"part_attribute": {
"type": "string"
},
"type_number": {
"type": "string"
},
"type_variant": {
"type": "string"
},
"result_date": {
"type": "string"
},
"result_state": {
"type": "string"
},
"batch": {
"type": "string"
},
"lastknown_proc_no": {
"type": "string"
},
"lastknownlocation": {
"type": "string"
},
"part_class": {
"type": "string"
},
"release": {
"type": "string"
},
"lotid": {
"type": "string"
},
"package_id": {
"type": "string"
},
"packaging_result_date": {
"type": "string"
},
"product": {
"type": "string"
},
"progress_date": {
"type": "string"
},
"working_code": {
"type": "string"
}
},
"required": [
"uniquepart_id",
"result_date",
"result_state",
"progress_date"
]
},
"validationEnabled": true
},
"module": {
"url": "https://localhost/parttrace/qdarchivingbridge-service/api/v1/{tenantId}/Export/zipfile/{id}",
"clientName": "qdab_client_1",
"validateHash": true
},
"adapter": {
"name": "prod-archive",
"exportWorkers": 1,
"archiveRouteParameters":{
"moduleSpecificKey_1":"moduleSpecificValue_1",
"moduleSpecificKey_2":"moduleSpecificValue_2"
},
"uiInformation":{
"title":"A brief title identifying this specific archive configuration set.",
"description":"A description helping the user to identify this specific archive configuration set."
},
"archiveId": "ZM",
"exportSettings": {
"maxBundleSize": 1000,
"ciInputShare": "/mnt/smb/prod-archive1",
"tempDataPath": "/tmp/archiving-bridge/working_dir",
"targetDbConnectionString": "Data Source=Server1/Instance1;Persist Security Info=True;User Id=T_NEXEED_USER1;Password=Password1;Enlist=true;HA events=true;Min Pool Size=10;Decr Pool Size=3;Connection Lifetime=180;Connection Timeout=60;Validate Connection=true",
"uniqueIdentifierPropertyName": "uniquepart_id",
"uniqueIdentifierColumnName": "UNIQUEPART_ID",
"uniqueDatePropertyName": "progress_date",
"uniqueDateColumnName": "PROGRESS_DATE",
"exportTimeout": "01:30:00"
},
"importSettings": {
"datasource": {
"url": "http://Server2:8080/archive?get\u0026amp;Version=0045\u0026amp;contRep{0}\u0026amp;docId={1}",
"parameters": {
"Version": "0045",
"contRep": "{contRep}",
"docId": "{docId}"
}
}
},
"dataTypes": {
"uniquepart_id": "1",
"serial_number": "1",
"serial_number_date": "3",
"part_attribute": "1",
"type_number": "1",
"type_variant": "1",
"result_date": "3",
"result_state": "1",
"batch": "1",
"lastknown_proc_no": "1",
"part_class": "1",
"product": "1",
"package_id": "1",
"packaging_result_date": "3",
"progress_date": "3",
"working_code": "1"
},
"ixAttribSettings": {
"accessTableNamePattern": "Namespace1.T_NEXEED_USER1",
"tableNamePattern": "T_NEXEED_USER1",
"documentFormat": "application/x-zip-compressed",
"archivingDateExpression": "to_date(sysdate)",
"docIdType": "%s",
"archiveIdType": "%s",
"dateTimeFormat": "YYYY-MM-DD HH24:MI:SS.FF TZH:TZM"
},
"cmdSettings": {
"docType": "application/x-zip-compressed",
"archiveId": "ZM"
}
}
}
```
Section "contentformat"
Section contains the JSON schema definition for the modules' meta-data information that is provided with each archiving request.
| Attribute | Description | Required |
|---|---|---|
| moduleId | Name for the module to be configured. Any string value is accepted that is unique within this Archiving Bridge cluster. | x |
| adapterType | "prod-archive"; this qualifier is used for Production.Archive. | x |
| settings | JSON document containing all the settings used for export into Prod.Archive. | x |
| schema | JSON schema definition for meta-data validation | x |
| type | "object"; this is the qualifier used for type | x |
| properties | JSON document containing all the properties used in the meta data within the zip file and their corresponding data types. Currently there is only a distinction between "string" for string properties and "integer" for number properties. | x |
| required | JSON document containing all meta-data properties that must be provided within an archiving request sent to the Archiving Bridge. | x |
| validationEnabled | Flag to enable/disable meta-data validation. Default: true | x |
| module | URL of the caller module | x |
| clientName | Name of the caller module. NOTE: Make sure that clientName does not contain any irregular characters; e.g. the character '-' is not allowed in the HELM chart variables, so use the character '_' instead. | x |
| validateHash | Flag to enable/disable zip file hash validation. Default: true | |
| adapter | JSON document containing all adapter-type-specific configuration data. | x |
| name | For Production.Archive the qualifier "prod-archive" is used. | x |
| exportWorkers | Number of concurrent export worker threads; default is 1. | x |
| archiveRouteParameters | Each consuming module can maintain its own archive configuration, structured with its unique set of key-value pairs, with a combined total length not exceeding 300 characters. | x |
| uiInformation | The UI information section enables users to identify the archive configuration by a meaningful name. | x |
| archiveId | The ID of the Prod.Archive type; default is "ZM", and currently no other archive ID is used. | x |
| exportSettings | JSON document containing all the settings used for export into Prod.Archive. | x |
| maxBundleSize | Maximum number of zip files that are allowed to be within one bundle zip file. | x |
| ciInputShare | Path of the mounted BD input share used to put the bundle files into. For Linux it is usually something like "/mnt/<share_name>". NOTE: This path is checked for read/write access. If the required access is not granted, the Archiving Bridge logs an error. | x |
| tempDataPath | Path of the temporary working directory the Archiving Bridge uses to create bundle zip files. For Linux it is usually something like "/tmp/<directory_name>". NOTE: This path is checked for read/write access. If the required access is not granted, the Archiving Bridge logs an error. Also make sure that this folder has enough physical space available for the bundle zip files to be created. After a service restart this folder is used to detect not fully processed bundle files to continue with. | x |
| targetDbConnectionString | Oracle DB connection string used to access the Prod.Archive table containing the meta data of already archived zip files. | x |
| uniqueIdentifierPropertyName | Name of the unique identifier property. For Traceability (QD) it is "uniquepart_id". | x |
| uniqueIdentifierColumnName | Name of the unique identifier column. For Traceability (QD) it is "UNIQUEPART_ID". | x |
| uniqueDatePropertyName | Name of the unique date property. For Traceability (QD) it is "progress_date". | x |
| uniqueDateColumnName | Name of the unique date column. For Traceability (QD) it is "PROGRESS_DATE". | x |
| exportTimeout | Timeout in the format "HH:MM:SS" after which an export operation times out. It is used for polling the Prod.Archive meta-data table to determine whether a specific zip file has been archived (see the polling sketch after this table). The recommended value for a production site is currently two days, i.e. "48:00:00". | x |
| importSettings | JSON document containing all settings used for the re-import of zip files. | x |
| datasource.url | URL provided by Prod.Archive from which the archived zip files, identified by a docId, can be downloaded. | x |
| datasource.parameters | Parameters currently used for re-import. These values should not change. | x |
| dataTypes | JSON document containing all meta-data properties and their corresponding data types used for the IXATTR file. Currently, string and number values are of data type "1"; datetime values are of data type "3". | x |
| ixAttribSettings | JSON document containing all settings used to create the IXATTR file. | x |
| accessTableNamePattern | Pattern of the BD Prod.Archive table name, including the Oracle database user. Usually it is something like "<OracleDbUser>.<OracleDbTableName>". | x |
| tableNamePattern | Pattern of the BD Prod.Archive table name. Usually it is something like "<OracleDbTableName>". | x |
| documentFormat | Document format of the zip files. This is "application/x-zip-compressed" and should not be changed. | x |
| archivingDateExpression | Oracle SQL date expression used for the time stamp. This is "to_date(sysdate)" and should not be changed. | x |
| docIdType | Type of the docId. This is "%s" and should not be changed. | x |
| archiveIdType | Type of the archiveId. This is "%s" and should not be changed. | x |
| dateTimeFormat | Date-time format internally used by BD Prod.Archive. This is "YYYY-MM-DD HH24:MI:SS.FF TZH:TZM" and should not be changed. | x |
| cmdSettings | JSON document containing all settings used to create the COMMANDS file. | x |
| docType | Document type used within the COMMANDS file. This is "application/x-zip-compressed" and should not be changed. | x |
| archiveId | Archive ID used within the COMMANDS file. This is "ZM" and should not be changed. | x |
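The exportTimeout acts as a deadline for the poll loop against the Prod.Archive meta-data table. Below is a hedged sketch of these semantics; the query itself is stubbed, and function and parameter names are illustrative, not the service's implementation:

```python
import time
from datetime import timedelta


def wait_until_archived(is_archived, unique_id: str,
                        export_timeout: timedelta,
                        poll_interval: float = 60.0) -> bool:
    """Poll until the Prod.Archive meta-data table lists unique_id.

    `is_archived` stands in for a query against the table referenced by
    targetDbConnectionString (conceptually: SELECT ... WHERE
    UNIQUEPART_ID = :id); this sketch only illustrates the timeout
    semantics, not the service's implementation.
    """
    deadline = time.monotonic() + export_timeout.total_seconds()
    while time.monotonic() < deadline:
        if is_archived(unique_id):
            return True
        time.sleep(poll_interval)
    return False  # the export operation has timed out


# "48:00:00" from the table corresponds to timedelta(hours=48).
```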
Target: FileSystem

Note: To configure the Archiving Bridge to export data into a file system, such as a NAS, the following REST parameters must be provided via the configuration endpoint.

Before configuring the Archiving Bridge service, it is essential to define and create two persistent volumes (folders) in your cluster specifically for use with the file system adapter. These volumes facilitate the archiving of files in the file system (parameter: archiveRootPath) and the temporary storage of files (parameter: tempDataPath).

These two parameters must be configured in the variables archiveRootPath and tempDataPath within the configuration settings. The following example illustrates their placement:
```json
{
...,
"adapter": {
...,
"archiveRootPath": "/mnt/nas1/prod-storage-cp4",
"tempDataPath": "/tmp/archiving-bridge/working_dir",
...
}
}
```
In the following example, a complete configuration is provided for uploading via a REST endpoint.
```json
{
"contentFormat": {
"schema": {
"type": "object",
"properties": {
"uniquepart_id": {
"type": "string"
},
"serial_number": {
"type": "string"
},
"serial_number_date": {
"type": "string"
},
"order_id": {
"type": "string"
},
"part_attribute": {
"type": "string"
},
"type_number": {
"type": "string"
},
"type_variant": {
"type": "string"
},
"result_date": {
"type": "string"
},
"result_state": {
"type": "string"
},
"batch": {
"type": "string"
},
"lastknown_proc_no": {
"type": "string"
},
"lastknownlocation": {
"type": "string"
},
"part_class": {
"type": "string"
},
"release": {
"type": "string"
},
"lotid": {
"type": "string"
},
"package_id": {
"type": "string"
},
"packaging_result_date": {
"type": "string"
},
"product": {
"type": "string"
},
"progress_date": {
"type": "string"
}
},
"required": [
"uniquepart_id",
"result_date",
"result_state",
"progress_date"
]
},
"validationEnabled": true
},
"module": {
"url": "https://Server1/parttrace/qdarchivingbridge-service/api/v1/{tenantId}/Export/zipfile/{id}",
"clientName": "qdab_client_1",
"validateHash": true
},
"adapter": {
"name": "filesystem",
"exportWorkers": 1,
"archiveRouteParameters":{
"moduleSpecificKey_1":"moduleSpecificValue_1",
"moduleSpecificKey_2":"moduleSpecificValue_2"
},
"uiInformation":{
"title":"A brief title identifying this specific archive configuration set.",
"description":"A description helping the user to identify this specific archive configuration set."
},
"pathPattern": "{year(progress_date)}/{month(progress_date)}/{day(progress_date)}/{hour(progress_date)}",
"archiveRootPath": "/mnt/nas1/prod-storage-cp4",
"tempDataPath": "/tmp/archiving-bridge/working_dir",
"uniqueIdentifierPropertyName": "uniquepart_id",
"uniqueDatePropertyName": "progress_date"
}
}
```
Section "contentFormat"
Section contains the JSON schema definition for the modules' meta-data information that is provided with each archiving request.
| Attribute | Description | Required |
|---|---|---|
| moduleId | Name for the module to be configured. Any string value is accepted that is unique within this Archiving Bridge cluster. | x |
| adapterType | "filesystem"; this qualifier is used for the export into the file system (NAS). | x |
| settings | JSON document containing all the settings used for export into the file system (NAS). | x |
| schema | JSON schema definition for meta-data validation | x |
| type | "object"; this is the qualifier used for type | x |
| properties | JSON document containing all the properties used in the meta data within the zip file and their corresponding data types. Currently there is only a distinction between "string" for string properties and "integer" for number properties. | x |
| required | JSON document containing all meta-data properties that must be provided within an archiving request sent to the Archiving Bridge. | x |
| validationEnabled | Flag to enable/disable meta-data validation. Default: true | x |
| module | URL of the caller module | x |
| clientName | Name of the caller module. NOTE: Make sure that clientName does not contain any irregular characters; e.g. the character '-' is not allowed in the HELM chart variables, so use the character '_' instead. | x |
| validateHash | Flag to enable/disable zip file hash validation. Default: true | |
| adapter | JSON document containing all adapter-type-specific configuration data. | x |
| name | For export into the file system (NAS) the qualifier "filesystem" is used. | x |
| exportWorkers | Number of concurrent export worker threads; default is 1. | x |
| archiveRouteParameters | Each consuming module can maintain its own archive configuration, structured with its unique set of key-value pairs, with a combined total length not exceeding 300 characters. | x |
| uiInformation | The UI information section enables users to identify the archive configuration by a meaningful name. | x |
| pathPattern | The pattern of the file path. The default value for Traceability (QD) is "{year(progress_date)}/{month(progress_date)}/{day(progress_date)}/{hour(progress_date)}". The syntax of the pattern is "{<DatePart>(<PropertyName>)}" (see the sketch after this table). | x |
| archiveRootPath | The root path for the specific file system archive (NAS). For Linux, usually something like "/mnt/<NasRootDirectory>/<ProductionLineName>" is used. | x |
| tempDataPath | Path of the temporary working directory the Archiving Bridge uses to create zip files. For Linux it is usually something like "/tmp/<directory_name>". NOTE: This path is checked for read/write access. If the required access is not granted, the Archiving Bridge logs an error. Also make sure that this folder has enough physical space available for the zip files to be created. After a service restart this folder is used to detect not fully processed zip files to continue with. | x |
| uniqueIdentifierPropertyName | Name of the unique identifier property. For Traceability (QD) it is "uniquepart_id". | x |
| uniqueIdentifierColumnName | Name of the unique identifier column. For Traceability (QD) it is "UNIQUEPART_ID". | x |
| uniqueDatePropertyName | Name of the unique date property. For Traceability (QD) it is "progress_date". | x |
| uniqueDateColumnName | Name of the unique date column. For Traceability (QD) it is "PROGRESS_DATE". | x |
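The pathPattern placeholders can be illustrated with a small expansion routine. Below is a sketch under the assumption that the referenced property holds an ISO-formatted date; names and parsing are illustrative only:

```python
import re
from datetime import datetime


def expand_path_pattern(pattern: str, meta: dict) -> str:
    """Expand "{<DatePart>(<PropertyName>)}" placeholders from meta data.

    Assumes the referenced property holds an ISO-formatted date; parsing
    and naming here are illustrative only.
    """
    strftime_codes = {"year": "%Y", "month": "%m", "day": "%d", "hour": "%H"}

    def substitute(match):
        date_part, prop = match.group(1), match.group(2)
        return datetime.fromisoformat(meta[prop]).strftime(strftime_codes[date_part])

    return re.sub(r"\{(\w+)\((\w+)\)\}", substitute, pattern)


# Expands to "2024/05/13/09" for the default Traceability (QD) pattern:
print(expand_path_pattern(
    "{year(progress_date)}/{month(progress_date)}/{day(progress_date)}/{hour(progress_date)}",
    {"progress_date": "2024-05-13T09:15:00"},
))
```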
Archiving Bridge database

Installation

User and password

| User and password can have a limited duration of validity. If the password expires and the services can no longer access the database, this means in terms of Part Traceability a production line stop, because process data can no longer be inserted into or read from the database. |

Sys-script

- Admin permission is needed for the installation of the Sys-Script.
- The Nexeed Industrial Application System requires an Oracle schema user for the installation of database objects. Different modules can use different user schemas.
- The Archiving Bridge needs a user with different tablespaces, e.g. for different kinds of data and for administration and maintenance purposes.
- The naming of the tablespaces can vary. Nevertheless, some conventions exist so that the database objects can be installed within these tablespaces.
- The data file path must be set by a database administrator because it depends on the infrastructure and the operating system.
- Further adaptations, such as the size of the parameters or the number of data files, can vary. These tasks must be done by an Oracle database administrator.
- Grants and roles: a number of grants are based on previous experience maintaining existing installations in production environments. Consider that some grants may have an impact on overall security. For example, when session information may be queried, the result may include client host names and other sensitive information, which needs to be checked against existing data protection guidelines.
Do’s and don’ts in Nexeed databases

| Never modify Nexeed databases or read/write data directly, bypassing the standard interfaces; otherwise BCI can’t give support in case of problems. |

What is this article for?

The Nexeed IAS uses a variety of databases to store all kinds of data.

Sometimes the functionality of IAS seems not to be sufficient and you want to extend the system for your needs:

- Read data stored in IAS
- Write data which cannot be entered with a standard IAS module
- New use case, change internal behavior

These are all valid wishes, but don’t realize them by directly accessing any of the Nexeed databases!

Continue reading to find out further details…
Read data from Nexeed IAS

DO

There are a variety of ways to obtain data from Nexeed IAS or to connect 3rd-party systems. Here are some examples:

- DataPublisher
- Public interfaces of different IAS modules

Please ask BCI if you need support finding the right way.

DON’T

Don’t access any Nexeed database directly (e.g. using an Oracle client) to obtain data in a productive environment.

| BCI can’t give support in case of problems with 3rd-party applications reading data directly, or any resulting performance problems on the affected Nexeed system. |
Write data to Nexeed IAS

DO

Use the Nexeed standard modules to enter data, or use an official public API to do so, even if the data doesn’t come from any "IAS connected" line or Nexeed module:

- DirectDataLink
- SmartScan / Orchestration (CommHub)
- an increasing number of public APIs for various IAS modules

Please ask BCI if you need support finding the right way.

DON’T

Don’t use any clients to directly enter data into the database.

Never insert or update datasets directly in the database. This might cause serious problems in the system. If the written data do not fit the standard and the database becomes corrupt (now, or even starting much later with a future software update), BCI is not responsible for the problems or their resolution.

| BCI can’t give support in case of problems with 3rd-party applications writing data directly, or any resulting performance problems on the affected Nexeed system. Also, support during updates might be limited if the entered data does not match the definitions. |
Changes on database schema

DO

If you want to change internal product behavior (beyond what configuration parameters allow) or implement a customized solution which would require an extension or modification of a Nexeed database, proceed as follows:

- Specify your use case and place the requirements to be discussed with your Bosch division working group (forwarded to BCI product management)
- Use the above-mentioned public interfaces to read/write data
- Create a dedicated database for your custom solution

Please ask BCI if you need support finding the right way.

DON’T

NEVER make any changes in the database, such as adding, deleting, or modifying:

- data types, data tables, indices
- stored procedures, packages
- scheduled jobs
- data in tables, e.g. version entries

If changes were made in the database, this might lead to corrupt data and crash the whole system. If you run an update on such a system, the update scripts might fail. In this case the update cannot be finished in time, and BCI is not able to support the recovery process.

| BCI can’t give support in case of problems or performance issues related to such modifications. BCI also might decline the update on such a system until it is recovered to the original state. |
Why not follow the "don’t do" way?

BCI reserves the right to change the database schema in new distributions without prior notification (neither to customers nor internally to the application team).

Results may be…

- Problems during a standard Nexeed update
  - corrupt database / invalid objects → causing production stop
  - performance issues → affecting production
  - 3rd-party applications not working or causing performance issues → affecting production
- Problems during regular operation, sometimes long after any change
  - performance issues
  - corrupt database, deadlocks