Parameters to pass to the flow.
The log level for the job:
The default row logging interval.
Steps generate a progress feedback line in the log whenever they finish processing this many rows.
Initial heap memory for the JVM running the job.
Leave blank or set to 0 to auto-manage memory.
Maximum heap memory for the JVM running the job.
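For reference, on a standard JVM the initial and maximum heap settings correspond to the `-Xms` and `-Xmx` launch options. A generic illustration (the jar name and values are examples only, not this tool's actual launch command):

```
# -Xms sets the initial heap (here 512 MB), -Xmx the maximum heap (here 2 GB).
java -Xms512m -Xmx2g -jar example-job.jar
```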
Tracking options determine how much information is collected from the running job and displayed in the user interface.
Flows can in turn launch sub-flows. This setting determines which flows to track.
main flow only
Runtime information is collected for the main flow only. Sub-flows are not tracked.
main flow and sub-flows
Runtime information is collected for the main flow and any sub-flows it launches. Note that if the job launches a large number of flows, the volume of collected information can make the user interface unresponsive.
No runtime information is collected for any flow. Only high-level information about the run is available in the user interface.
This setting determines how processed data is collected for display in the user interface.
Select whether and how many data rows to capture from hops, and which flows to collect data from.
This limit prevents overloading the data viewing UI with excessively large values.
Determines the maximum size, in bytes, of any single captured field in a row. Fields larger than this limit are truncated.
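As an illustration of byte-limit truncation (a sketch only, not this tool's actual implementation; the function name and limit values are hypothetical, and the real tool may handle cut-off points differently):

```python
# Sketch: truncate a captured field so its UTF-8 encoding fits
# within a byte limit (hypothetical helper, for illustration).
def truncate_field(value: str, max_bytes: int) -> str:
    """Return value unchanged if it fits, else cut it to max_bytes."""
    encoded = value.encode("utf-8")
    if len(encoded) <= max_bytes:
        return value
    # Drop bytes past the limit; ignore any partial trailing character
    # so the result is still valid UTF-8.
    return encoded[:max_bytes].decode("utf-8", errors="ignore")

print(truncate_field("abcdef", 4))  # prints "abcd"
```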
This setting lets you balance processing speed against user interface responsiveness.
A data processing job can easily take up most of the computing resources of a desktop computer. This setting throttles processing so that the user interface has a chance to catch up and display processed data responsively.
The name of the job.
This name appears on the jobs and logs panel. Distinct names make it easier to distinguish individual runs of the same flow.
Leave empty to use an auto-generated name.