Available for: Distribution job, Storage tiering and Archival job.
Overview
Report configuration
Performance, storage and system requirements
Reading the reports
Limitations and edge cases
Overview
This feature provides a structured and detailed reporting mechanism for Distribution and Storage tiering and Archival jobs. Rather than simply indicating that a job has completed, the system captures file-level events — showing whether each file was transferred, skipped, failed, or impacted by specific conditions like access restrictions, conflicts, or filtering rules. These events are compiled into detailed reports that administrators can generate manually or configure to be built automatically once a job completes.
Report configuration
The report is generated based on file events received by the Management Console (MC) from agents. It can be triggered manually for both active and completed job runs, or automatically upon job completion — regardless of the job’s final status — if the corresponding option is enabled in the job configuration.
The MC supports generating only one report at a time. Any additional reports are placed in a queue. For guidance on expected report generation durations under load, refer to the performance section below.
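To illustrate this queueing behavior conceptually, the sketch below (a simplified model, not the MC's actual implementation) shows report requests waiting in FIFO order behind a single worker:

```python
import queue
import threading
import time

# Conceptual model only: the MC generates one report at a time,
# so additional requests wait in FIFO order behind the current one.
report_queue: "queue.Queue[str]" = queue.Queue()

def report_worker() -> None:
    while True:
        job_run_id = report_queue.get()
        print(f"Generating report for {job_run_id} ...")
        time.sleep(1)  # stands in for the actual generation work
        print(f"Report for {job_run_id} is complete")
        report_queue.task_done()

# A single worker thread enforces the one-report-at-a-time rule.
threading.Thread(target=report_worker, daemon=True).start()

for run in ("run-101", "run-102", "run-103"):
    report_queue.put(run)  # run-102 and run-103 queue behind run-101

report_queue.join()
```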
Performance, storage and system requirements
Generating a report requires an additional 2 GB of RAM, which is freed once the report is complete.
Additional storage is also required on top of the system’s baseline requirements, depending on the number and size of events. On average, a report covering 1 million files and 2 agents (source and destination) takes about 200–300 MB of storage; a report covering 1 million files and 10 agents requires approximately 2–3 GB.
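From the two reference points above, storage grows roughly linearly with the number of file-agent records, somewhere around 100–300 bytes each. A back-of-the-envelope estimator follows (a sketch based only on the figures quoted here, not an official sizing formula):

```python
# Rough linear fit to the two reference points above:
#   1M files x 2 agents  -> 200-300 MB (~100-150 bytes per record)
#   1M files x 10 agents -> 2-3 GB     (~200-300 bytes per record)
def estimate_report_storage_mb(files: int, agents: int,
                               bytes_per_record: float) -> float:
    """Approximate report size in MB for an assumed per-record size."""
    return files * agents * bytes_per_record / 1e6

low = estimate_report_storage_mb(1_000_000, 2, 100)    # ~200 MB
high = estimate_report_storage_mb(1_000_000, 10, 300)  # ~3000 MB (~3 GB)
print(f"Expected range: {low:.0f}-{high:.0f} MB")
```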
Generated reports are stored in MC storage; their TTL is configurable via the MC config file. Reports are deleted together with the job run (per the "Job run ttl" setting) or when the job itself is deleted.
Report generation performance is directly tied to the underlying VM specifications and the volume of file events. On a system with 8 vCPUs and 32 GB of memory, the average processing rate is around 400,000 events every 2 hours under typical load. Large-scale jobs — such as those involving 5 million files or more — may hit performance thresholds, potentially resulting in delays or partial reports.
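The quoted rate works out to about 200,000 events per hour (roughly 55 events per second). A quick way to gauge expected generation time, assuming approximately one file event per file per agent (an assumption; actual event counts vary):

```python
# Reference figure from above: 400,000 events per 2 hours
# on an 8 vCPU / 32 GB VM under typical load.
EVENTS_PER_HOUR = 400_000 / 2  # = 200,000

def estimate_generation_hours(files: int, agents: int) -> float:
    """Assumes roughly one file event per file per agent."""
    return files * agents / EVENTS_PER_HOUR

# Example: a 5M-file job with a source and a destination agent
# lands around 50 hours, well past typical performance thresholds.
print(f"{estimate_generation_hours(5_000_000, 2):.0f} hours")
```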
Reading the reports
A generated report provides a comprehensive summary of all files processed during the job run, including file types, statuses, and high-level statistics. A detailed breakdown by file and agent is available in the downloadable CSV file, which can be used for further analysis or auditing.
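For example, the CSV export can be summarized with standard tooling. The column names used below (File, Agent, Status) are assumptions for illustration; check the header row of your actual export before relying on them:

```python
import csv
from collections import Counter

# Tally file statuses; the "Status" column name is an assumption.
status_counts: Counter = Counter()

with open("job_run_report.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        status_counts[row["Status"]] += 1

for status, count in status_counts.most_common():
    print(f"{status}: {count}")
```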
The report captures behavior across the most common operational scenarios. However, exact file statuses may vary depending on Profile parameters (such as "Skip file errors", "Resolve filename system conflicts", etc.):
- Files skipped or errored on the source due to access issues or the global file and folder filter (ignore list).
- Files skipped by the source based on Storage tiering and Archival job parameters.
- Pre-seeded files that already exist on the destination and therefore are not transferred over the network.
- Files that fail to reach the destination due to ACL issues, conflicts, lack of permissions, or IgnoreList matches. If a known reason is captured in the event, it is reflected in the report.
- Files archived to or retrieved from AWS Glacier, including failed retrievals.
- Files removed from the source after job completion, either archived or skipped from archiving, depending on the settings.
A file’s type may be labeled as Unknown in the following situations:
- The event originated from an agent running a version earlier than 4.2, which lacks the necessary metadata.
- The event refers to a locked or unlocked file.
- The file is in an invalid or ambiguous state — such as a symbolic link pointing to a non-existent location, particularly when the agent is configured to follow symlinks.
During report generation, only the data corresponding to file events that the Management Console has received from agents at that point in time will be included. In many cases, especially with high-volume jobs, event delivery may still be in progress even after the job run itself has completed. As such, early-stage reports may not reflect the full scope of the job until all events have been collected.
As a result, a report may show different statuses for the same file across agents. For example:
- win-1-2659: "Not present on source" status, meaning that file events were not delivered to the MC.
- linux_0: "Downloaded" status, meaning that the MC received the agent's file events.
- linux_0_1: "Not delivered" status, meaning that file events were not delivered to the MC.
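To find files stuck in such a partially delivered state, one option is to pivot the CSV by file and agent (again using the assumed File/Agent/Status column names):

```python
import csv
from collections import defaultdict

# Group each file's statuses across agents; column names are assumptions.
per_file: dict = defaultdict(dict)

with open("job_run_report.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        per_file[row["File"]][row["Agent"]] = row["Status"]

# Flag files whose events have not yet reached the MC from every agent.
for path, agents in per_file.items():
    pending = [a for a, s in agents.items()
               if s in ("Not delivered", "Not present on source")]
    if pending:
        print(f"{path}: awaiting events from {', '.join(pending)}")
```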
Limitations and edge cases
While the reporting feature has been designed to cover most common scenarios, there are edge cases that may result in discrepancies in the produced reports. Potential issues include:
- Files deleted because the Agent Profile parameter "Delete files absent on source Agent (Transfer jobs only)" is set to Yes are missing from the report.
- With Resilio Archive disabled in a Storage tiering job, previous file versions are not archived, and this is not reflected in the report.
- Files that fail with the "Modification timestamp has a future date" error are not included in the report.
- There is no clear indication when files are not delivered because of low disk space on an agent or an unavailable/missing folder.
- A job run restarted on an Agent while the Agent is executing a script may produce invalid reports.
- Agents older than 4.2 may report incomplete or undefined delivery status.
- In cross-platform synchronization between Linux and Windows with the fix_conflicting_paths option enabled in the Agent Profile, files whose names contain special characters not allowed by Windows (< > : " / \ | ? *) are reported as not delivered.
Additionally, file events may be missing from the report if:
- A job run is aborted.
- The MC has been restarted.
- An Agent has been restarted.
- An Agent is offline.
- Events were not delivered to the MC before midnight.