# FileCatalyst Workload Automation (May 2026)

To make a scheduled transfer robust, wrap the CLI call in a shell retry loop:

```bash
# Retry up to 3 times
RETRIES=3
for i in $(seq 1 $RETRIES); do
  fta-cli --put critical_file.dat --target /incoming/ && break || sleep 10
done
```

Beyond shell scripts, FileCatalyst can be driven from common orchestration tools:

| Tool | Integration Method |
|------|--------------------|
| Apache Airflow | Use `BashOperator` with `fta-cli`, or `SimpleHttpOperator` for the REST API |
| Jenkins | Execute a shell script step calling `fta-cli` |
| Rundeck | Create a job step: "Command" → `fta-cli ...` |
| Control-M | FileCatalyst provides a Control-M plugin (File Transfer Hub) |
| Apache NiFi | Use the `ExecuteProcess` processor to call `fta-cli` |
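As a concrete illustration of the Airflow row above, here is a minimal DAG sketch that wraps `fta-cli` in a `BashOperator`. The DAG id, schedule, server name, and credential handling are assumptions for the example; the CLI flags are the ones used throughout this guide:

```python
# Minimal Airflow DAG sketch: wrap fta-cli in a BashOperator.
# The dag_id, schedule, server, and credential handling are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="filecatalyst_nightly_push",  # hypothetical name
    start_date=datetime(2026, 5, 1),
    schedule="0 2 * * *",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    push_file = BashOperator(
        task_id="push_critical_file",
        bash_command=(
            "fta-cli --server fc-server.company.com --port 21 "
            "--username auto --password $FC_PASSWORD "
            "--put critical_file.dat --target /incoming/"
        ),
        retries=3,  # scheduler-native alternative to the shell retry loop above
        env={"FC_PASSWORD": "{{ var.value.fc_password }}"},
    )
```

Letting the scheduler own retries keeps the retry policy visible in the Airflow UI instead of buried in a shell loop.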

The underlying building block in all of these integrations is a single `fta-cli` invocation:

```bash
fta-cli --server hostname --port 21 --username user --password pass \
  --put /local/file.txt --target /remote/destination/
```

For troubleshooting scheduled runs, raise the log level and write to a dedicated log file:

```bash
fta-cli --log-level DEBUG --log-file /var/log/fc_workload.log --put file.dat
```

For native workload automation features (dependency management, SLA tracking, visual pipelines), you would typically wrap FileCatalyst commands in one of the dedicated workload automation platforms listed in the table above, using FileCatalyst as the file-movement plugin.

You can also integrate FileCatalyst with OS schedulers such as cron or systemd timers, as sketched below.
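For example, a crontab entry for a nightly push might look like the following; the schedule, credentials, and paths are placeholders reusing values from the examples above:

```bash
# Illustrative crontab entry (edit with `crontab -e`): nightly push at 02:00.
# Cron does not support line continuations, so the command sits on one line.
0 2 * * * fta-cli --server fc-server.company.com --username auto --password secret --put /local/file.txt --target /incoming/ >> /var/log/fc_workload.log 2>&1
```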

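The driver script below calls a helper `run_fta(...)` whose definition is not shown in this excerpt. A minimal sketch, assuming it simply shells out to `fta-cli` with the flags used throughout this guide and reports success via the exit code:

```python
import subprocess

def run_fta(filename, target, server, username, password, port=21):
    """Push one file with fta-cli; return True if the CLI exited with code 0."""
    result = subprocess.run([
        "fta-cli",
        "--server", server,
        "--port", str(port),
        "--username", username,
        "--password", password,
        "--put", filename,
        "--target", target,
    ])
    return result.returncode == 0
```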
With a wrapper like the `run_fta` sketch above in place, the driver loop logs each result, records successful transfers in a database, and backs off before retrying failures:

```python
import logging
import subprocess
import time

FILES_TO_SEND = ["critical_file.dat"]  # placeholder list for this excerpt

def main():
    logging.basicConfig(level=logging.INFO)
    for f in FILES_TO_SEND:
        # run_fta is the transfer wrapper sketched above
        success = run_fta(f, "/incoming/", "fc-server.company.com", "auto", "secret")
        if success:
            logging.info(f"Success: {f}")
            # Post-processing: record the transfer in a database
            subprocess.run(
                ["psql", "-c", f"INSERT INTO transfers VALUES('{f}', 'original_hash')"]
            )
        else:
            logging.error(f"Failed: {f}")
            time.sleep(30)  # Backoff before retry

if __name__ == "__main__":
    main()
```

## Summary Table: Choosing an Automation Method

| Requirement | Recommended Method |
|-------------|--------------------|
| Simple directory watching | Hotfolder |
| Scripted, scheduled transfers | CLI + cron/systemd timer |
| Complex workflow with multiple steps | CLI + Bash/Python logic |
| Integration with Airflow/Jenkins | REST API or `BashOperator` |
| Central management of many transfers | REST API + custom dashboard |
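Two of the rows above point at the REST API. Endpoint paths, payload fields, and authentication differ across FileCatalyst versions, so everything in the sketch below is a hypothetical placeholder that only illustrates the pattern of driving transfers from a custom dashboard; consult the REST documentation for your deployment for the real calls:

```python
# Hypothetical sketch only: the endpoint path, JSON fields, and auth scheme
# below are placeholders, not the actual FileCatalyst REST API.
import requests

def trigger_transfer(base_url: str, token: str, filename: str, target: str) -> dict:
    """POST a transfer request and return the server's JSON response."""
    resp = requests.post(
        f"{base_url}/api/transfers",  # placeholder endpoint
        headers={"Authorization": f"Bearer {token}"},
        json={"source": filename, "target": target},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```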