Process Intelligence Documentation

Configuration Tester

The Configuration Tester is a command-line interface that allows users to test Work API settings against the data collection requirements.

Installation

  1. Download the Configuration Tester as part of the tool bundle from the Admin Panel.

  2. Follow the installation guide to correctly install the tool in your environment.

Usage

Shell

configuration-tester.exe --test-file-path <test-file-path> --config-file-path <config-file-path> [--uuids <uuids>] [--coverage <bool>] [--session <bool>] [--batch-size <int>]

Arguments

  • --test-file-path: Path to the JSON file containing the tests. Required.

  • --config-file-path: Path to the JSON file containing the settings. Required.

  • --uuids: Comma-separated string of UUIDs. If given, only these tests are executed. Optional.

  • --coverage: Boolean. Controls whether test coverage is calculated. Optional, defaults to False.

  • --session: Boolean. Controls whether tests are executed in session mode. Optional, defaults to True.

  • --batch-size: Integer. The number of tests to execute in one batch. Applicable only in session mode. Optional, defaults to 10.

Examples

Shell

# Basic usage
configuration-tester.exe --test-file-path requirements.json --config-file-path settings.json

# Run only specified tests
configuration-tester.exe --test-file-path requirements.json --config-file-path settings.json --uuids "cbdc73a8-7e85-4419-bb78-b4f2decbb505,9026cefb-7be2-4646-9900-358ff20e7f47"

# Calculate coverage
configuration-tester.exe --test-file-path requirements.json --config-file-path settings.json --coverage True

# Change batch size
configuration-tester.exe --test-file-path requirements.json --config-file-path settings.json --batch-size 20

Requirements Template

A requirements template is a CSV file that contains requirements for data collection and can be automatically converted into a list of ConfigTestCases. The fields below describe its format.

  • DESCRIPTION: Description of the requirement.

  • INPUT:<field>: Columns with INPUT: in the header become ConfigTestCase.row_event data in the test case. <field> becomes the name of the field.

  • ALLOW_TRACKING: Should tracking be allowed for the row's inputs? TRUE (boolean in Excel), 1, and the empty string ("") are interpreted as True; everything else becomes False.

  • TAG:<key>: Columns with TAG: in the header become ConfigTestCase.expected_output.tags data in the test case. <key> becomes the tag's key. If multiple tag columns are defined, the additional headers must include a unique number: TAG1:<key>. Examples: TAG:appname, TAG2:content-category, TAG3:content-category.

  • IDENTIFIER:<identifier_name>: Columns with IDENTIFIER: in the header become ConfigTestCase.expected_output.extracted_identifiers data in the test case. <identifier_name> becomes the extracted identifier's identifier_name. If multiple identifier columns are defined, the additional headers must include a unique number: IDENTIFIER1:<identifier_name>. Examples: IDENTIFIER:invoice_number, IDENTIFIER67:account_number.

  • HASH_IDENTIFIERS: Should the identifiers be hashed? TRUE (boolean in Excel), 1, and the empty string ("") are interpreted as True; everything else becomes False. Notice! This setting applies to ALL identifiers in the test case.

  • SALVAGE: Columns whose header starts with SALVAGE become values in the ConfigTestCase.expected_salvage_fields list.

The file encoding is expected to be UTF-8-SIG, which Microsoft Excel uses by default.
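To illustrate how the template columns map onto a test case, here is a Python sketch of the conversion described above. The header prefixes and the truthiness rule follow the field descriptions; the ConfigTestCase itself is represented as a plain dict, and the sample row (window_title, appname, invoice_number values) is invented for illustration.

```python
import csv
import io

def as_bool(value: str) -> bool:
    # Per the template rules: TRUE, 1, and "" are True; everything else is False.
    return value.strip().upper() in ("TRUE", "1", "")

def row_to_test_case(row: dict) -> dict:
    """Convert one requirements-template row into a test-case-like dict."""
    case = {
        "row_event": {},
        "expected_output": {"tags": {}, "extracted_identifiers": {}},
        "expected_salvage_fields": [],
    }
    for header, value in row.items():
        prefix = header.split(":", 1)[0]
        if header == "DESCRIPTION":
            case["description"] = value
        elif header == "ALLOW_TRACKING":
            case["allow_tracking"] = as_bool(value)
        elif header == "HASH_IDENTIFIERS":
            case["expect_hashed_identifiers"] = as_bool(value)
        elif header.startswith("INPUT:"):
            case["row_event"][header.split(":", 1)[1]] = value
        elif prefix.startswith("TAG"):          # TAG:, TAG1:, TAG2:, ...
            case["expected_output"]["tags"][header.split(":", 1)[1]] = value
        elif prefix.startswith("IDENTIFIER"):   # IDENTIFIER:, IDENTIFIER67:, ...
            case["expected_output"]["extracted_identifiers"][header.split(":", 1)[1]] = value
        elif header.startswith("SALVAGE"):
            case["expected_salvage_fields"].append(value)
    return case

# Example template content (real files are read as UTF-8-SIG):
template = (
    "DESCRIPTION,INPUT:window_title,ALLOW_TRACKING,TAG:appname,IDENTIFIER:invoice_number\n"
    "Invoice view,Invoice 123 - SAP,TRUE,sap,123\n"
)
rows = list(csv.DictReader(io.StringIO(template)))
case = row_to_test_case(rows[0])
```

Note that the empty string counting as True means a blank ALLOW_TRACKING or HASH_IDENTIFIERS cell enables the behavior; set an explicit non-true value to disable it.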

Configuration Test Case

Testing of settings is built around Configuration Test Cases. The table below describes the data model of a Configuration Test Case.

  • test_id (UUID): Unique identifier for a test case. Mandatory; tests fail if duplicate test_ids are detected.

  • description (string): Description of what is being tested. Mandatory.

  • rule_id (list of UUIDs): List of rules to be tested with the test case. Optional; if missing, all available rules are tested.

  • row_event (dictionary): Changes to generated input events; use this to define the tested input. Optional; only fields relevant to the test case need to be defined.

  • event_generators (list of strings): Generators to be used for test event generation. Optional; defaults to hard-coded minimal (only mandatory fields included) and maximal (all fields included) events.

  • force_tracking (boolean): Should tracking be allowed even if none of the tested rules allows it? Optional; defaults to true.

  • expect_output (boolean): Is output expected? Optional; defaults to true.

  • expected_salvage_fields (list of strings): Optional fields that are expected in the output. Optional.

  • expect_hashed_identifiers (boolean): Are identifiers expected to be hashed? Optional; defaults to true.

  • expected_processed_event (dictionary): How output with rules applied is expected to differ from output without any rules applied. List expected tags, extracted identifiers, and salvaged optional fields here. Optional.

  • rule_generation_hint (string): Regular expression to be used in the matching rule. Optional; used only in automatic rule generation.
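As a concrete illustration, the sketch below builds one minimal test case using the fields described above. The field names come from the data model; the top-level structure of the test file (a JSON list of test cases) and all sample values are assumptions for illustration, not the documented schema.

```python
import json
import uuid

# One minimal test case; only mandatory fields plus the inputs and
# expectations relevant to the scenario are defined.
test_case = {
    "test_id": str(uuid.uuid4()),  # mandatory, must be unique across the file
    "description": "SAP invoice window produces an appname tag",  # mandatory
    "row_event": {"window_title": "Invoice 123 - SAP"},  # only relevant fields
    "expect_output": True,                # matches the default
    "expect_hashed_identifiers": True,    # matches the default
    "expected_processed_event": {
        "tags": {"appname": "sap"},
        "extracted_identifiers": {"invoice_number": "123"},
    },
}

# Assumed file layout: a JSON list of test cases.
test_file = json.dumps([test_case], indent=2)
```

A file built this way would then be passed to the tester via --test-file-path.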
