Task "XSuiteHelixDataImport"
The "XSuiteHelixDataImport" task is used to transfer master data to the Data Service in the xSuite Helix Cloud. The master data is transferred via the "DataValuesImport" interface of the xSuite Helix Cloud. This interface only supports selected types of master data records in a predefined format. Only an SQL database with a predefined table structure can be used as the source. This database corresponds to the "xFlowERP_MasterData" database, which is used in xSuite Invoice Prism projects. Filling in these intermediate tables with the data of the customer-specific ERP system is the responsibility of the customer or must be defined as a project deliverable. The mapping of the table fields to the data objects of the Data Service is firmly anchored in the program code and cannot be configured.
The following data record types are supported for master data imports:
- BusinessPartner
- Order
- CostCenter
- GLAccount
- Company
The master data is transferred differently depending on the type of data record. For the CostCenter, GLAccount, and Company types, a complete transfer of all source data records takes place each time the import task is called, and all old records are deleted. For the BusinessPartner and Order types, a significantly higher number of records is assumed, which would cause excessive data traffic if the complete transfer were carried out regularly. For these data types, an incremental import therefore takes place. Since this incremental import requires extended processing logic to distinguish between unchanged, changed, new, and deleted data records, the hash values of all data records that have already been transferred are saved in the internal administration database and compared with the source data records each time the task is called. Only records that have been changed or are completely new are transferred to the Data Service. Records to be deleted are identified by the fact that 1) they no longer exist in the source database, and 2) an entry for their earlier import into the cloud exists in the administration database.
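The following minimal sketch illustrates this hash comparison. The record shape and the in-memory stand-in for the administration database are assumptions, not the actual task internals; the classification logic itself follows the description above.

```typescript
import { createHash } from "node:crypto";

// Assumed record shape: a key plus arbitrary string fields.
type SourceRecord = { key: string; fields: Record<string, string> };

function recordHash(rec: SourceRecord): string {
  // Hash the full record content so that any field change is detected.
  // Assumes a stable field order in the source data.
  return createHash("sha256").update(JSON.stringify(rec)).digest("hex");
}

function classify(
  source: SourceRecord[],
  storedHashes: Map<string, string> // key -> hash saved after the last import
) {
  const toImport: SourceRecord[] = []; // new or changed records
  const sourceKeys = new Set<string>();
  for (const rec of source) {
    sourceKeys.add(rec.key);
    if (storedHashes.get(rec.key) !== recordHash(rec)) {
      toImport.push(rec); // hash differs or key is unknown: new or changed
    }
  }
  // Deleted: recorded in the administration database from an earlier
  // import, but no longer present in the source database.
  const toDelete = [...storedHashes.keys()].filter((k) => !sourceKeys.has(k));
  return { toImport, toDelete };
}
```

Unchanged records fall through both checks and cause no data traffic at all, which is the point of the incremental approach.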
The following properties apply specifically to tasks of the type "XSuiteHelixDataImport" (see the example configuration after the table).
| Property | Description |
|---|---|
| Schedule[].Task[].ImportType[] | Data record type to be imported. The following data types are available: BusinessPartner, Order, CostCenter, GLAccount, Company. Multiple data record types can be processed one after the other in a single task call, so that a separate task does not have to be defined for each type. |
| Schedule[].Task[].Database.ConnectionString*(§)<br>Schedule[].Task[].Database.Password(§)<br>Schedule[].Task[].Database.Scheme(*) | The ConnectionString property defines the connection to the SQL source database. The password required in the connection string can be defined separately in Password. The Scheme property specifies the database schema in which the source tables are located. |
| Schedule[].Task[].Database.CustomerId<br>Schedule[].Task[].Database.Principal<br>Schedule[].Task[].Database.CompanyCode | According to the predefined source database model, all tables use the fields CustomerId, Principal, and CompanyCode. These properties restrict the import to data records whose field values match the configured values. |
| Schedule[].Task[].DataService.Url<br>Schedule[].Task[].DataService.RequestTimeout<br>Schedule[].Task[].DataService.Keycloak<br>Schedule[].Task[].DataService.ProxyServer | Connection data for the target system (Data Service in the xSuite Helix Cloud). Url, RequestTimeout, and ProxyServer define the address of the Data Service, the request timeout, and an optional proxy server for the connection. As the Data Service requires authentication, set the authentication properties under Keycloak. |
| Schedule[].Task[].DataService.BatchSize | Number of records per batch. The data records are transferred to the Data Service in batches of multiple records per call. Default value: |
| Schedule[].Task[].DataService.ClearAllDocs | Boolean value determining whether all existing records in the Data Service are deleted before the records are imported. This is only relevant for incrementally transferred record types (BusinessPartner and Order). Default value: |
| Schedule[].Task[].LogRecords | Boolean value determining whether the contents of the source data records are output in the log. The output only takes place at the "Trace" log level and should only be activated temporarily for analysis purposes. Default value: |
| Schedule[].Task[].ContentLogFolder(%) | Folder path where the full contents of the data records transferred to the Data Service are logged (optional). This property is primarily intended for analysis purposes, not for permanent operation. System variables can be included in the path specification to generate subfolders dynamically. A JSON file with the batch content is generated in the folder for each batch import call. If there are error responses, a corresponding response file is also generated. The separate calls for records to be deleted are not logged here, as they only contain record keys and therefore provide no more information than normal "Trace" logging. |
| Schedule[].Task[].MaxErrors | Maximum number of incorrectly processed data records before the entire import for the record type in question is canceled. If no maximum number is specified, an attempt is made to transfer all records. Errors that affect individual records are only logged as warnings; a log entry of the type "Error" is generated at most once, as a summary of the entire import run. Make sure that emails are only actually sent in the case of errors, not in the case of warnings (see the corresponding mail notification property). |
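
For orientation, the following is a hedged sketch of how these properties might be combined in a task configuration. The overall file structure is inferred from the Schedule[].Task[] property paths above, all values are placeholders (not the documented defaults), and the marker suffixes such as (§) and (%) are omitted.

```typescript
// Hypothetical configuration sketch; structure inferred from the
// Schedule[].Task[] property paths above, all values are placeholders.
const config = {
  Schedule: [
    {
      Task: [
        {
          Type: "XSuiteHelixDataImport", // assumed type discriminator
          ImportType: ["CostCenter", "GLAccount", "Company", "BusinessPartner", "Order"],
          Database: {
            ConnectionString: "Server=sql01;Database=xFlowERP_MasterData;User Id=import;",
            Password: "********", // kept out of the connection string
            Scheme: "dbo",
            CustomerId: "1000",
            Principal: "100",
            CompanyCode: "0001",
          },
          DataService: {
            Url: "https://helix.example.com/dataservice", // placeholder URL
            RequestTimeout: 120, // assumed to be in seconds
            Keycloak: { /* authentication settings */ },
            ProxyServer: "",
            BatchSize: 500,      // example value, not the documented default
            ClearAllDocs: false, // example value, not the documented default
          },
          LogRecords: false,
          ContentLogFolder: "C:\\Logs\\HelixImport\\%Date%", // system variable syntax assumed
          MaxErrors: 100,
        },
      ],
    },
  ],
};
```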