Import Requests

On Key provides a dedicated, generic Import API to action (Insert, Update, Delete or Merge) different types of records within a single POST request.

The Import API can be scheduled and executed asynchronously and supports the same execution mode and error handling behaviours as existing batch requests. Schema converters can be used within Import requests to resolve reference lookups and to uniquely identify the records to action.

Records can also be grouped and executed in order. Simply append the optional ordered=true query string parameter to the request URI and assign records within the request a GroupOrder identifier; records sharing the same identifier are executed together, and groups with a smaller GroupOrder identifier are actioned first.
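
The grouping behaviour above can be sketched client-side as follows. This is a minimal, hypothetical model (plan_ordered_import is not part of On Key); the record shape mirrors the Import API payload used later in this section:

```python
# Hypothetical sketch of how an ordered import batches records: records
# sharing a groupOrder execute together, and groups with a smaller
# identifier are actioned first.

def plan_ordered_import(records):
    """Return records batched into groups, ordered by groupOrder ascending."""
    groups = {}
    for record in records:
        # Records sharing a groupOrder are executed together.
        groups.setdefault(int(record["groupOrder"]), []).append(record)
    return [groups[key] for key in sorted(groups)]

records = [
    {"entityType": "Resource", "action": "Update", "groupOrder": "2"},
    {"entityType": "ResourceTrade", "action": "Merge", "groupOrder": "1"},
]

for batch in plan_ordered_import(records):
    print([r["entityType"] for r in batch])
# ['ResourceTrade']
# ['Resource']
```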

In addition to importing records using a POST request, records can also be imported by using Import Schemas and Import Files.

Content Types

The content types used by import requests are:

Name                                                Description
application/vnd.onkey.entityimportcollection+json   Import multiple entities using a POST request.
text/csv                                            Import multiple entities contained within a CSV file using a POST request.

Query Parameters

The HTTP query string parameters used by import requests are:

Name      Description                                                                  Example
mode      Mode in which to execute the request:                                        ?mode=PerRecord
            • AllOrNone - all resources provided in the batch run within a single
              transaction (i.e. the whole batch of resources succeeds or fails
              together)
            • PerRecord - each resource provided in the batch runs within its own
              transaction (i.e. individual resources succeed or fail on their own)
ordered   Execute an ordered import. Records with the same groupOrder are executed     ?ordered=true
          together.

Example

To illustrate, consider the following request where a single, ordered import is used to assign a Resource Trade to a staff member Resource and then set the same Resource Trade as the default for the staff member Resource:

curl -v -X POST 'https://{server}/api/tenants/{client}/{connection}/modules/system/imports?mode=PerRecord&ordered=true' \
-H 'Content-Type:application/vnd.onkey.entityimportcollection+json' \
-H 'Authorization: Bearer {accessToken}' \
-d '[
{
    "entityType": "ResourceTrade",
    "action": "Merge",
    "userObjectId": "record1",
    "groupOrder": "1",
    "Id": {
        "converter": "ReferenceLookup",
        "type": "Custom",
        "properties": {
            "ResourceTrade->Resource_Code": "SHBE",
            "ResourceTrade->Trade_Code": "HAND"
        }
    },
    "properties": {
        "resourceId": {
            "converter": "ReferenceLookup",
            "code": "SHBE"
        },
        "tradeId": {
            "converter": "ReferenceLookup",
            "code": "HAND"
        },
        "permissionTreeId": {
            "converter": "ReferenceLookup",
            "code": "A20"
        }
    }
  },
  {
    "entityType": "Resource",
    "action": "Update",
    "userObjectId": "record2",
    "groupOrder": "2",
    "Id": {
        "converter": "ReferenceLookup",
        "type": "Custom",
        "properties": {
          "Resource->Code": "SHBE"
        }
    },
    "properties": {
        "defaultResourceTradeId": {
            "converter": "ReferenceLookup",
            "type": "Custom",
            "properties": {
                "ResourceTrade->Resource_Code": "SHBE",
                "ResourceTrade->Trade_Code": "HAND"
            }
        }
    }
  }
]'

Notice the use of the ?ordered=true query string parameter and that the two records within the request have different groupOrder identifiers. This ensures that the record assigning the Resource Trade to the staff member Resource executes first. A Merge action creates or updates the Resource Trade, using Reference Lookup Schema converters to identify the staff member Resource and Trade. The second record then uses the same reference lookup to find the Id of the Resource Trade created and assign it to the defaultResourceTradeId of the staff member Resource identified by the SHBE code.

When executing this valid request using ?mode=PerRecord, the response code will be 200 OK with the following response body:

{
    "messages": [
        {
            "code": "Core_Data_RecordCreated",
            "message": "Record successfully created",
            "objectId": "1624859842827473",
            "objectType": "ResourceTrade",
            "severity": "information",
            "userObjectId": "record1"
        },
        {
            "code": "Core_Data_RecordUpdated",
            "message": "Record successfully updated",
            "objectId": "5000001002",
            "objectType": "Resource",
            "severity": "information",
            "userObjectId": "record2"
        }
    ]
}
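
Each message carries the userObjectId supplied in the request, so responses can be correlated back to the submitted records. A minimal sketch, using the response body above (the parsing code itself is illustrative, not part of On Key):

```python
import json

# Map each submitted userObjectId back to the server-assigned objectId.
# The response shape mirrors the example import response above.
response_body = """
{
  "messages": [
    {"code": "Core_Data_RecordCreated", "message": "Record successfully created",
     "objectId": "1624859842827473", "objectType": "ResourceTrade",
     "severity": "information", "userObjectId": "record1"},
    {"code": "Core_Data_RecordUpdated", "message": "Record successfully updated",
     "objectId": "5000001002", "objectType": "Resource",
     "severity": "information", "userObjectId": "record2"}
  ]
}
"""

results = {m["userObjectId"]: m["objectId"]
           for m in json.loads(response_body)["messages"]}
print(results["record1"])  # 1624859842827473
```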

Import Schemas and Import Files

Import Schemas are YAML-based configuration schemas that describe the input data, and the mapping of that data, for the different resources being imported. Built on top of the generic Import API, Import Schemas are linked to Import Files and instruct On Key on how to interpret and map the data received within CSV-based Import Files.

The process steps for importing data using Import Schemas and Import Files are:

  1. Define Import Schema
  2. Create CSV File with columns for Schema
  3. Populate CSV File with data
  4. Upload CSV File using API
  5. Poll for Import Background Task to complete
  6. Download Import Results/Results file

In addition to uploading a file and processing the data asynchronously, CSV data can also be imported synchronously by sending it within the body of a POST request:

  1. Define Import Schema
  2. Send CSV data matching Schema using API
  3. Receive Import Results

Define Import Schema

On Key ships with system generated Import Schemas for Resources that can be imported. Consider the following schema for importing a Commodity resource:

entities:
  - entity: Commodity
    input:
      - code
      - permissionTreeCode
      - description
      - sequenceNumber
      - costElementCode
      - notes
      - isActive
    mappings:
      - id:
          converter:
            input: code
            type: CodeReferenceLookup
      - permissionTreeId:
          converter:
            input: permissionTreeCode
            type: CodeReferenceLookup
      - code:
      - description:
      - sequenceNumber:
      - costElementId:
          converter:
            input: costElementCode
            type: CodeReferenceLookup
      - notes:
      - isActive:

The Commodity schema includes the following elements:

  • entity - The resource being imported
  • input - List of data fields that may be supplied. Not all fields are required.
  • mappings - List of mappings instructing the server what data to expect and how to resolve and map it, for example using Schema Converters
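
A useful client-side sanity check is that every converter input referenced in the mappings is declared in the input list. A minimal sketch, representing the Commodity schema above as a plain dict (the check itself is hypothetical, not an On Key feature):

```python
# Simplified dict representation of the YAML Commodity schema above,
# keeping only the mappings that use converters.
schema = {
    "entity": "Commodity",
    "input": ["code", "permissionTreeCode", "description", "sequenceNumber",
              "costElementCode", "notes", "isActive"],
    "mappings": {
        "id": {"converter": {"input": "code", "type": "CodeReferenceLookup"}},
        "permissionTreeId": {"converter": {"input": "permissionTreeCode",
                                           "type": "CodeReferenceLookup"}},
        "costElementId": {"converter": {"input": "costElementCode",
                                        "type": "CodeReferenceLookup"}},
    },
}

# Every converter input must be declared in the schema's input list.
declared = set(schema["input"])
missing = [m["converter"]["input"] for m in schema["mappings"].values()
           if m["converter"]["input"] not in declared]
print(missing)  # [] - all converter inputs are declared
```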

Users can also define their own custom schemas to import data matching their unique data requirements:

curl -X 'POST' \
  'https://{server}/api/tenants/{client}/{connection}/modules/system/imports/schemas' \
  -H 'Authorization: Bearer {accessToken}' \
  -H 'Content-Type: application/vnd.onkey.entity+json' \
  -d '{
  "properties": {
      "name": "My Commodity Import",
      "restrictionLevel": "Private",
      "description": "My custom Commodity Import",
      "schema": "
entities:
- entity: Commodity
  alias: 
  input:
    - code
    - description
    - permissionTreeCode
    - costElementCode
    - isActive
  mappings:
    - id:
        converter:
          type: CodeReferenceLookup
          input: code
    - code:
    - description:
    - permissionTreeId:
        converter:
          type: CodeReferenceLookup          
          input: permissionTreeCode
    - costElementId:
        converter:
          type: CodeReferenceLookup          
          input: costElementCode
    - isActive:      
      "       
    }
}'
Tip

Multiple resources can be imported using a single schema. Simply add multiple entity definitions to the entities collection. Refer to My Custom Stock Interface for such an example schema.

Create Import CSV File

To import Commodity data, create a matching import CSV file:

!Commodity
#Entity,Action,code,permissionTreeCode,description,sequenceNumber,costElementCode,notes,isActive
Commodity,Merge,C1,A11,Com1,,APR,,TRUE
Commodity,Merge,C2,A11,Com2,,APR,,TRUE
Commodity,Merge,C3,A12,Com3,,APR,,TRUE
Commodity,Merge,C4,A11,Com4,,APR,,TRUE
Commodity,Merge,C5,A11,Com5,,APR,,TRUE
Commodity,Merge,C6,A32,Com6,,APR,,TRUE
Commodity,Merge,C7,A11,Com7,,APR,,TRUE
Commodity,Merge,C8,A12,Com8,,APR,,TRUE
Commodity,Merge,C9,A11,Com9,,APR,,TRUE
Commodity,Merge,C10,A11,Com10,,APR,,TRUE

The CSV file includes the following rows:

  • Optional Schema Header row, marked by !, identifying the name of the schema to use. If the Schema Header row is not present, the Id of the schema must be provided separately when uploading the Import File.
  • Optional Column Header row, marked by #, identifying the columns to import. It is only used to guide users when populating the data file.
  • Data rows containing the entity, action and data values being imported. The first two columns must always contain the Entity and Action for the row being imported. For grouped imports, a third GroupOrder column can also be specified. The remaining values must align with the order of the inputs in the Import Schema.
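
Generating data rows that align with the schema's input order can be sketched as follows. The helper to_row is hypothetical; the field names mirror the Commodity schema above:

```python
import csv
import io

# Inputs in the order declared by the Commodity Import Schema.
inputs = ["code", "permissionTreeCode", "description", "sequenceNumber",
          "costElementCode", "notes", "isActive"]

def to_row(entity, action, values):
    # The first two columns are always Entity and Action; the remaining
    # values follow the order of the inputs in the Import Schema.
    return [entity, action] + [values.get(name, "") for name in inputs]

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(to_row("Commodity", "Merge",
                       {"code": "C1", "permissionTreeCode": "A11",
                        "description": "Com1", "costElementCode": "APR",
                        "isActive": "TRUE"}))
print(buffer.getvalue().strip())
# Commodity,Merge,C1,A11,Com1,,APR,,TRUE
```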

On Key can also generate a CSV File prepared for any schema using the following request:

curl -v -X GET https://{server}/api/tenants/{client}/{connection}/modules/system/imports/schemas/1742392251300351/generate/csv/download \
-H 'Authorization: Bearer {accessToken}' 
Tip

Multiple resources can be imported using different schemas or a single schema in a single CSV file by adding multiple Schema Header, Column and Data rows in the same file. Use the ! and # delimiters to start a new group of Schema Header and Column rows. Refer to Prepare Import CSV Files and My Custom Stock Interface for some examples.

Warning

When importing data using the Merge or Update action, remember that an empty column signals an intent to clear the value for the specific field for that record if that record already exists.
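
The effect of the warning can be illustrated with a simple merge model (purely a client-side sketch of the semantics, not server code): an empty value overwrites, it does not mean "leave unchanged".

```python
# Existing record on the server, and an incoming Merge row whose notes
# column was left empty in the CSV file.
existing = {"code": "C1", "description": "Com1", "notes": "legacy note"}
incoming = {"code": "C1", "description": "Com1 v2", "notes": ""}

# The empty string overwrites the existing value, clearing the field.
merged = {**existing, **incoming}
print(merged["notes"] == "")  # True - the value was cleared, not preserved
```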

Upload Import CSV File

To upload the Commodity CSV file, execute the following request:

curl -v -X POST https://{server}/api/tenants/{client}/{connection}/modules/system/imports/files/upload \
-H 'Content-Type: multipart/form-data; boundary=boundary' \
-H 'Authorization: Bearer {accessToken}' \
-F 'file=@{path}\Commodity.csv;filename=Commodity.csv;type=*/*' \
-F 'orderedImport=false;type=*/*' \
-F 'executionMode=PerRecord;type=*/*' \
-F 'description=Daily Commodity Import;type=*/*'

Once uploaded, On Key will automatically register a Background Task to asynchronously read and start importing the data contained within the Import File. The Id of the Import File associated with the Background Task is returned in the OnKey-Resource-Id response header and can be used to poll the progress of the import.
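
A polling loop for the Background Task might look like the sketch below. The fetch function is a stand-in for a GET on /modules/system/imports/files/{id} (shown later in this section) returning the backgroundTaskStatus; poll_import and the status values other than Completed are assumptions for illustration:

```python
import time

def poll_import(fetch_status, interval_seconds=5, max_attempts=60):
    """Poll until the import's Background Task reaches a terminal status."""
    for _ in range(max_attempts):
        status = fetch_status()
        if status in ("Completed", "Failed"):
            return status
        time.sleep(interval_seconds)
    raise TimeoutError("import did not complete in time")

# Stubbed fetch simulating a task that completes on the third poll.
statuses = iter(["Queued", "Running", "Completed"])
print(poll_import(lambda: next(statuses), interval_seconds=0))
# Completed
```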

Alternatively, data can also be synchronously imported using the following request:

curl -v -X POST https://{server}/api/tenants/{client}/{connection}/modules/system/imports/files?mode=PerRecord \
-H 'Content-Type: text/csv' \
-H 'Authorization: Bearer {accessToken}' \
-d '!Commodity
#Entity,Action,code,permissionTreeCode,description,sequenceNumber,costElementCode,notes,isActive
Commodity,Merge,C1,A11,Com1,,APR,,TRUE
Commodity,Merge,C2,A11,Com2,,APR,,TRUE
Commodity,Merge,C3,A12,Com3,,APR,,TRUE
Commodity,Merge,C4,A11,Com4,,APR,,TRUE
Commodity,Merge,C5,A11,Com5,,APR,,TRUE
Commodity,Merge,C6,A32,Com6,,APR,,TRUE
Commodity,Merge,C7,A11,Com7,,APR,,TRUE
Commodity,Merge,C8,A12,Com8,,APR,,TRUE
Commodity,Merge,C9,A11,Com9,,APR,,TRUE
Commodity,Merge,C10,A11,Com10,,APR,,TRUE
'

Poll For Import To Complete

To poll for the completion of the asynchronous file import, use the Id of the Import File created:

curl -v -X GET https://{server}/api/tenants/{client}/{connection}/modules/system/imports/files/1742392251300351 \
-H 'Authorization: Bearer {accessToken}' 

The response will contain information like the current backgroundTaskStatus and backgroundTaskFailedSteps.

{
  "class": "ImportFile",
  "id": 1742392251300351,
  "version": 1,
  "properties": {
    "createdByUserId": 500000000000,
    "createdByUserCode": "DefaultNonAuthUser",
    "createdByUserFullName": "Administrator",
    "createdOn": "2025-03-19T13:50:51.6445040Z",
    "modifiedByUserId": 500000000000,
    "modifiedByUserCode": "DefaultNonAuthUser",
    "modifiedByUserFullName": "Administrator",
    "modifiedOn": "2025-03-19T13:50:51.6445200Z",
    "backgroundTaskId": 1742392251300903,
    "backgroundTaskName": "Import File - Daily Commodity Import",
    "backgroundTaskStartedOn": "2025-03-19T13:50:52.4481610Z",
    "backgroundTaskStatus": "Completed",
    "backgroundTaskFailedSteps": 0,
    "description": "Daily Commodity Import",
    "executionMode": "PerRecord",
    "fileContent": null,
    "fileName": "Commodity.csv",
    "fileResults": null,
    "importSchemaId": null,
    "importSchemaName": null,
    "importSchemaDescription": null,
    "orderedImport": false
  },
  "links": []
}

Inspect Import Results

Once completed, the results for any file-based, asynchronous import can be downloaded using the following request:

curl -v -X GET https://{server}/api/tenants/{client}/{connection}/modules/system/imports/files/1742392251300351/results/download \
-H 'Authorization: Bearer {accessToken}' 

The response will contain a CSV formatted output result with status information on every row processed:

#Code,Message,Severity,LineNumber,Properties,EntityId
Core_ImportFile_SkippedHeaderLine,Skipped header line '2',Information,2,,
Core_Data_RecordCreated,Record successfully created,Information,3,,1742392252300001
Core_Data_RecordCreated,Record successfully created,Information,4,,1742392252300002
Core_Data_RecordCreated,Record successfully created,Information,5,,1742392252300003
Core_Data_RecordCreated,Record successfully created,Information,6,,1742392252300004
Core_Data_RecordCreated,Record successfully created,Information,7,,1742392252300005
Core_Data_RecordCreated,Record successfully created,Information,8,,1742392252300006
Core_Data_RecordCreated,Record successfully created,Information,9,,1742392252300007
Core_Data_RecordCreated,Record successfully created,Information,10,,1742392252300008
Core_Data_RecordCreated,Record successfully created,Information,11,,1742392252300009
Core_Data_RecordCreated,Record successfully created,Information,12,,1742392252300010

The different columns included within the response CSV file are:

Column      Description
Code        Message code
Message     Message for the record processed within the input file
Severity    Severity of the message (Information, Warning or Error)
LineNumber  Line number of the record within the input file
Properties  Additional data/context for the message
EntityId    Id of the record imported
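
Tallying the downloaded results by severity is a quick way to spot failed rows. A minimal sketch over rows in the result format above (the tallying code is illustrative, not part of On Key):

```python
import csv
import io

# Sample rows in the results CSV format documented above:
# Code,Message,Severity,LineNumber,Properties,EntityId
results_csv = """\
Core_ImportFile_SkippedHeaderLine,Skipped header line '2',Information,2,,
Core_Data_RecordCreated,Record successfully created,Information,3,,1742392252300001
Core_Data_RecordCreated,Record successfully created,Information,4,,1742392252300002
"""

counts = {}
for code, message, severity, line_number, props, entity_id in csv.reader(io.StringIO(results_csv)):
    counts[severity] = counts.get(severity, 0) + 1

print(counts)  # {'Information': 3}
```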

Results can also be downloaded as separate records within a JSON payload using the following request:

curl -v -X GET https://{server}/api/tenants/{client}/{connection}/modules/system/imports/files/1742392251300351/results \
-H 'Authorization: Bearer {accessToken}' 

The response will contain a collection of ImportFileResult resources for every record imported:

{
  "count": 11,
  "items": [
    {
      "class": "ImportFileResult",
      "properties": {
        "code": "Core_ImportFile_SkippedHeaderLine",
        "entityId": "",
        "importRecordNumber": 2,
        "message": "Skipped header line '2'",
        "properties": [
          ""
        ],
        "severity": "information"
      },
      "links": []
    },
    {
      "class": "ImportFileResult",
      "properties": {
        "code": "Core_Data_RecordCreated",
        "entityId": "1742392252300001",
        "importRecordNumber": 3,
        "message": "Record successfully created",
        "properties": [
          ""
        ],
        "severity": "information"
      },
      "links": []
    },
  ...  
}