Action nodes in Integrator define the tasks involved in collecting, processing, and ingesting raw data into the Hadoop cluster. The supported Hadoop jobs and standalone system tasks (Java, shell, and so on) are as follows:
- Retrieves data from RDP or runs a simple query.
- Runs JAR files in a local directory.
- Runs local script files, such as Python and shell scripts.
- Runs a Java class; the class must define a main function.
- Runs a Hive query.
- Runs a command on a remote server; SSH passwordless login must be set up for the remote server.
- Associates existing workflows: when running multiple workflows together, each workflow is defined as a task.
- Copies files from a source Hadoop cluster to a target Hadoop cluster.
- Manages files in the Hadoop cluster.
- Creates a Done file upon completion.
- Performs incremental ingestion of data into the Druid engine.
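For the JAR and Java-class actions above, a minimal sketch of the equivalent manual invocations. The class name `IngestTask`, the JAR name, and the argument path are hypothetical; the only requirement stated by the action is that the class define a `main` function.

```shell
# Hypothetical example: a Java-class action runs a class whose
# public static void main(String[]) method is the entry point.
javac IngestTask.java
java -cp . IngestTask /data/raw

# Hypothetical example: a JAR action runs a JAR file from a local directory.
java -jar ./ingest-task.jar /data/raw
```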
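The remote-command action requires SSH passwordless login to the remote server. One common way to set this up with the standard OpenSSH tools is sketched below; the user and hostname are hypothetical examples.

```shell
# Generate a key pair with no passphrase (skip if ~/.ssh/id_rsa already exists).
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa

# Install the public key on the remote server (prompts for the password once).
ssh-copy-id hadoop@remote-worker.example.com

# Verify: this should run without a password prompt.
ssh hadoop@remote-worker.example.com 'hostname'
```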
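Copying files between Hadoop clusters is typically done with Hadoop's DistCp tool; a hedged sketch is shown below, with hypothetical NameNode addresses and paths.

```shell
# Copy a dated partition from the source cluster to the target cluster.
# Hostnames, ports, and paths are hypothetical examples.
hadoop distcp \
  hdfs://source-nn.example.com:8020/data/logs/2024-01-01 \
  hdfs://target-nn.example.com:8020/data/logs/2024-01-01
```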
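The file-management and Done-file actions correspond to standard HDFS file operations; a sketch using the `hdfs dfs` CLI follows. The paths and the Done-file name `_DONE` are hypothetical examples, not names defined by Integrator.

```shell
# Hypothetical file-management operations on the Hadoop cluster.
hdfs dfs -mkdir -p /data/ingest/2024-01-01          # create a target directory
hdfs dfs -mv /data/staging/part-* /data/ingest/2024-01-01   # move staged files
hdfs dfs -rm -r /data/tmp                            # remove a temporary directory

# Create an empty Done (marker) file upon completion.
hdfs dfs -touchz /data/ingest/2024-01-01/_DONE
```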