In traditional systems, errors interrupt control flow. They unwind stacks, trigger retries, and often collapse entire processes. In a graph-native system, errors are not interruptions. They are data.
Yielding Errors Instead of Throwing
When a function encounters an error, a malformed input, or any other uncertainty, it emits a row describing what happened. That row can include context, input references, and severity. The system routes it like any other output. Anomaly handlers, review queues, or automated repair nodes can subscribe to those outputs.
This turns error handling into a routing decision, not a local panic.
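As a minimal sketch of this idea (the `ErrorRow` shape and `parse_prices` function are hypothetical names, not from any particular system), a function can yield error records inline with its normal outputs instead of raising:

```python
from dataclasses import dataclass, field
from typing import Any, Iterator

@dataclass
class ErrorRow:
    """An error as data: context, input reference, and severity."""
    kind: str              # e.g. "parse_error"
    severity: str          # e.g. "warn" or "error"
    input_ref: Any         # reference back to the offending input
    context: dict = field(default_factory=dict)

def parse_prices(rows: list[str]) -> Iterator[object]:
    """Yield parsed values; on bad input, yield an ErrorRow instead of raising."""
    for i, raw in enumerate(rows):
        try:
            yield float(raw)
        except ValueError:
            # The failure becomes an output row, routed like any other data.
            yield ErrorRow(kind="parse_error", severity="warn",
                           input_ref=i, context={"raw": raw})

outputs = list(parse_prices(["1.5", "oops", "2.0"]))
values = [o for o in outputs if not isinstance(o, ErrorRow)]
errors = [o for o in outputs if isinstance(o, ErrorRow)]
```

Downstream consumers filter on row type: good values continue through the pipeline, while error rows flow to whatever subscribed to them.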
Promise Tables and Deferred Work
If a function needs external conditions—network access, a dependency, a future event—it doesn’t wait. It writes a promise record describing what it needs. A resolver later wakes up and completes the work.
This eliminates the “await” mindset. You stop suspending execution and instead persist intent in data.
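A promise table can be sketched with an ordinary database table (this schema and the `request`/`resolve_pending` names are illustrative assumptions, not a prescribed design):

```python
import json
import sqlite3

# Intent persisted as rows: pending work lives in a table, not in a
# suspended stack frame.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE promises (
    id INTEGER PRIMARY KEY, need TEXT, args TEXT, status TEXT, result TEXT)""")

def request(need: str, **args) -> int:
    """The function records what it needs and returns immediately."""
    cur = db.execute(
        "INSERT INTO promises (need, args, status) VALUES (?, ?, 'pending')",
        (need, json.dumps(args)))
    return cur.lastrowid

# Stand-in for real external work (network calls, dependencies, events).
RESOLVERS = {"add": lambda a, b: a + b}

def resolve_pending() -> None:
    """A resolver wakes up later, completes the work, and writes results back."""
    for pid, need, args in db.execute(
            "SELECT id, need, args FROM promises WHERE status = 'pending'").fetchall():
        result = RESOLVERS[need](**json.loads(args))
        db.execute("UPDATE promises SET status = 'done', result = ? WHERE id = ?",
                   (json.dumps(result), pid))

pid = request("add", a=2, b=3)   # no await: intent is now durable data
resolve_pending()                # a separate pass completes it
```

Because the promise survives as a row, a crash between `request` and `resolve_pending` loses nothing: the resolver simply finds the pending row on its next pass.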
Why This Is Resilient
1) No Lost Work: Inputs and partial outputs are stored. A crash doesn’t erase progress.
2) Retry as Data: A retry is just a new row. The system can schedule retries without reconstructing state.
3) Isolation: A failed function doesn’t bring down a server because there is no shared server. Each function is a standalone executable.
4) Observability: Error records are linked to inputs and outputs. You can traverse lineage and understand the system’s behavior.
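Point 2 above can be made concrete with a small sketch: a retry is a new row with a later due time and a backoff, not reconstructed in-memory state (the `Attempt` record and the three-attempts-to-succeed rule are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class Attempt:
    """One scheduled attempt at a task, stored as a row."""
    task_id: str
    attempt: int
    due_at: float

# The work queue is just data; a crash here loses nothing durable.
queue: list[Attempt] = [Attempt("ingest-42", attempt=1, due_at=0.0)]

def run(a: Attempt, now: float) -> list[Attempt]:
    """Try the task; on failure, emit a new retry row with exponential backoff."""
    failed = a.attempt < 3  # pretend the first two attempts fail
    if failed:
        return [Attempt(a.task_id, a.attempt + 1, due_at=now + 2 ** a.attempt)]
    return []               # success: no follow-up row

now = 0.0
history = []
while queue:
    a = queue.pop(0)
    history.append(a.attempt)
    queue.extend(run(a, now))
    now = max(now, a.due_at)
```

The scheduler never holds retry state in memory between attempts; it only reads and writes rows, so it can be restarted at any point.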
Error Policies as Graph Structure
Instead of `try/catch` scattered across code, you define policies in the graph:
- Errors of type A route to a human review queue.
- Errors of type B route to an automatic resolver.
- Errors of type C simply park and await a schema update.
This makes error handling composable and transparent.
The Result: Ecological Failure
Failure becomes part of the ecosystem. It doesn’t halt flow; it becomes an input to other processes. The system evolves by absorbing anomalies, not by crashing and rebooting.
In this model, resilience is not a patch. It is the default behavior of data-driven control flow.