Job Execution Model
Every fialr operation that modifies the filesystem is structured as a job. A job is a named directory containing a manifest, a plan, an operation log, and a report. Jobs are the unit of execution, auditing, and recovery.
This is not optional instrumentation layered on top of file operations. The job is the operation. There is no code path that moves or renames a file outside of a job context.
Job directory structure
Each job creates a timestamped directory:
```
jobs/2025-09-14_reorganize_a1b2c3d4/
├── manifest.json    # pre-execution file state snapshot
├── plan.csv         # proposed operations, reviewed before execution
├── log.json         # append-only structured operation log
├── report.md        # human-readable job summary
└── checkpoint.json  # last completed operation index for resume
```

The directory name encodes the date, the operation type, and a truncated UUID. This makes jobs sortable by date and identifiable at a glance.
Six-stage lifecycle
Every job passes through six stages in order. No stage can be skipped.
| Stage | What happens | Output |
|---|---|---|
| Init | Job directory created. Configuration snapshot taken. UUID assigned. | Job directory, config snapshot |
| Plan | Proposed operations generated based on classification, naming rules, and schema. | plan.csv with source path, destination path, operation type, and confidence |
| Validate | Pre-flight checks: hash verification, path collision detection, permission checks, disk space. | manifest.json with current file state |
| Execute | Approved operations run. Each file is hashed before and after modification. Each operation logged. | log.json entries, checkpoint updates |
| Verify | Post-execution integrity check. All modified files re-hashed and compared against expected values. | Verification results appended to log |
| Report | Human-readable summary generated. Statistics, errors, warnings, and skipped files documented. | report.md |
If any stage fails, the job stops. It does not proceed to the next stage. The failure is logged with the exact error, the file involved, and the operation that was attempted.
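The halt-on-failure behavior can be sketched as a simple stage pipeline. This is a hypothetical illustration, not fialr's actual code: the stage names follow the table above, but `run_job` and `StageFailure` are invented for the sketch.

```python
# Hypothetical sketch: run stages in order, stop at the first failure,
# and record exactly which stage failed and why.
from typing import Callable


class StageFailure(Exception):
    def __init__(self, stage: str, detail: str):
        super().__init__(f"{stage}: {detail}")
        self.stage = stage
        self.detail = detail


def run_job(stages: dict[str, Callable[[], None]]) -> str:
    """Run stages in declaration order; return the last completed stage."""
    completed = "none"
    for name, stage_fn in stages.items():
        try:
            stage_fn()
        except Exception as exc:
            # The failure is logged with the stage that was attempted and
            # the exact error; no later stage runs.
            raise StageFailure(stage=name, detail=str(exc)) from exc
        completed = name
    return completed
```

A job that fails in Validate, for example, surfaces a `StageFailure` naming that stage, and Execute is never reached.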
Safety guarantees
The job model exists to make file operations auditable and recoverable. These guarantees are structural, not behavioral — they are enforced by the code, not by convention.
Dry-run by default. Every command that modifies files defaults to dry-run mode. The Plan and Validate stages run, but Execute does not. You see exactly what would happen before anything happens. Execution requires an explicit --execute flag.
Pre-flight manifest. Before execution begins, manifest.json captures the current state of every file that will be touched: path, content hash, size, permissions, and modification time. This is the “before” snapshot. If something goes wrong, this manifest defines what “before” looked like.
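A manifest entry of this shape could be built as follows. The field names and the choice of SHA-256 are assumptions for the sketch; fialr's actual manifest schema may differ.

```python
# Hypothetical sketch: capture one file's pre-flight state -- path,
# content hash, size, permissions, and modification time.
import hashlib
import json
import os


def manifest_entry(path: str) -> dict:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large files do not load into memory at once.
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    st = os.stat(path)
    return {
        "path": path,
        "sha256": h.hexdigest(),
        "size": st.st_size,
        "mode": oct(st.st_mode & 0o777),
        "mtime": st.st_mtime,
    }


def write_manifest(paths: list[str], out_path: str) -> None:
    """Write the 'before' snapshot for every file the job will touch."""
    with open(out_path, "w") as f:
        json.dump({"files": [manifest_entry(p) for p in paths]}, f, indent=2)
```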
Append-only audit ledger. Every operation writes a structured entry to log.json before and after execution. The log records the file path, operation type, source and destination hashes, timestamp, and result. Entries are never modified or deleted. The operations table in SQLite mirrors this data for querying across jobs.
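An append-only ledger of this kind is commonly implemented as one JSON object per line, appended and never rewritten. That layout is an assumption here, as is every field name; fialr's actual log.json format may differ.

```python
# Hypothetical sketch: append one structured entry per operation.
# Opening in "a" mode means existing entries are never modified.
import json
import time


def append_log_entry(log_path: str, entry: dict) -> None:
    record = {"ts": time.time(), **entry}
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

An entry for a move might carry `op`, `src`, `dst`, the source and destination hashes, and a `result` field, mirroring the fields the ledger is described as recording.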
Checkpoint and resume. After every N operations (configurable), a checkpoint is written with the index of the last completed operation. If a job is interrupted — by a crash, a power failure, or a manual kill — it can be resumed from the last checkpoint. The resumed job re-verifies the checkpoint file’s integrity before continuing.
Hash verification. Every file is hashed before modification and hashed again after. If the post-operation hash does not match the expected value, the operation is flagged and the job halts. Corruption is caught at the point of origin, not discovered later.
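The before/after hash discipline can be sketched around a single copy operation. `verified_copy` is a hypothetical helper, and SHA-256 is an assumed choice of hash; the point is the shape: hash, operate, re-hash, halt on mismatch.

```python
# Hypothetical sketch: hash the source, perform the operation, re-hash
# the destination, and halt immediately if the hashes disagree.
import hashlib
import shutil


def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()


def verified_copy(src: str, dst: str) -> str:
    expected = sha256_of(src)   # "before" hash
    shutil.copy2(src, dst)      # the operation itself
    actual = sha256_of(dst)     # "after" hash
    if actual != expected:
        # Corruption caught at the point of origin: flag and halt the job.
        raise RuntimeError(f"hash mismatch for {dst}; halting job")
    return actual
```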
Job artifacts
| Artifact | Format | Purpose |
|---|---|---|
| manifest.json | JSON | Pre-execution state snapshot. Every file’s path, hash, size, and metadata before the job runs. |
| plan.csv | CSV | Proposed operations. Columns: source path, destination path, operation type, confidence score, reason. Reviewable in any spreadsheet or text editor. |
| log.json | JSON | Append-only operation log. One entry per operation, recording before/after state, timestamps, and results. |
| report.md | Markdown | Human-readable summary. File counts, operation counts, errors, warnings, and timing. |
| checkpoint.json | JSON | Resume state. Records the index of the last successfully completed operation and the job configuration at that point. |
All artifacts use open formats. The job directory is self-contained and readable without fialr installed.
The reviewed flag
The executor refuses to run a plan that has not been explicitly marked as reviewed.
This is a hard gate. The executor checks for a reviewed flag in the plan metadata. If the flag is absent or false, execution does not proceed. There is no prompt, no “are you sure?” dialog, no override flag. The plan must be reviewed and the flag must be set.
The intent is to prevent accidental execution of unreviewed plans. Dry-run produces a plan. The operator reviews it. The operator marks it reviewed. Then — and only then — execution is permitted.
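The gate itself is a few lines of logic. This sketch is hypothetical (`check_reviewed` is an invented name), but it captures the described contract: absent or false means refusal, with no override path.

```python
# Hypothetical sketch of the hard review gate: execution proceeds only
# when the plan metadata carries reviewed == true. No prompt, no override.
def check_reviewed(plan_meta: dict) -> None:
    if plan_meta.get("reviewed") is not True:
        raise PermissionError(
            "plan has not been marked reviewed; execution refused"
        )
```

Note the strict `is not True` comparison: a missing flag and an explicit `false` are treated identically, so only a deliberate review action enables execution.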
Checkpoint configuration
The checkpoint interval is configured in fialr.toml:
```toml
[jobs]
checkpoint_every = 50
```

This writes a checkpoint after every 50 completed operations. Lower values provide finer-grained resume capability at the cost of more disk writes. Higher values reduce I/O but risk re-executing more operations on resume.
For large jobs (thousands of files), a value between 25 and 100 is reasonable. For small jobs, the default is sufficient.
Relationship to the CLI
The organize command is the primary interface to the job execution model. It orchestrates the full lifecycle: init, plan, validate, execute, verify, report. Other commands (plan, deduplicate) create jobs with subsets of these stages.
Every CLI command that creates a job prints the job directory path on completion. The operator can inspect any artifact at any time.