6 changes: 6 additions & 0 deletions .gitignore
@@ -86,3 +86,9 @@ pids/
# Optional stylelint cache
.stylelintcache
.claude

# Nix
result

# pnpm store
.pnpm-store/
43 changes: 43 additions & 0 deletions CLAUDE.md
@@ -136,3 +136,46 @@ Used throughout sync-engine: if heads are available, calls `handle.changeAt(head
- **Server load.** `enableRemoteHeadsGossiping` is disabled — pushwork syncs directly with the server so the gossip protocol is unnecessary overhead. `waitForSync` processes documents in batches of 10 (`SYNC_BATCH_SIZE`) to avoid flooding the server with concurrent sync messages. Without batching, syncing 100+ documents simultaneously can overwhelm the sync server (which is single-threaded with no backpressure).
- **`waitForBidirectionalSync` on large trees.** Full tree traversal (`getAllDocumentHeads`) is expensive because it `repo.find()`s every document. For post-push stabilization, pass the `handles` option to only check documents that actually changed. The initial pre-pull call still needs the full scan to discover remote changes. The dynamic timeout adds the first scan's duration on top of the base timeout, since the first scan is just establishing baseline — its duration shouldn't count against stability-wait time.
- **Versioned URLs and `repo.find()`.** `repo.find(versionedUrl)` returns a view handle whose `.heads()` returns the VERSION heads, not the current document heads. Always use `getPlainUrl()` when you need the current/mutable state. The snapshot head update loop at the end of `sync()` must use `getPlainUrl(snapshotEntry.url)` — without this, artifact directories (which store versioned URLs) get stale heads written to the snapshot, causing `changeAt()` to fork from the wrong point on the next sync. This was the root cause of the artifact deletion resurrection bug: `batchUpdateDirectory` would `changeAt` from an empty directory state where the file entry didn't exist yet, so the splice found nothing to delete.
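
The batching behavior described in the server-load bullet can be sketched in TypeScript. `SYNC_BATCH_SIZE` matches the value named above, but the helper and its signature are illustrative, not pushwork's actual code:

```typescript
// Illustrative sketch: sync documents in fixed-size batches so a
// single-threaded sync server with no backpressure never sees more than
// SYNC_BATCH_SIZE concurrent sync messages. SYNC_BATCH_SIZE is the value
// named above; syncOne stands in for the per-document sync step.
const SYNC_BATCH_SIZE = 10;

async function syncInBatches<T>(
  docs: T[],
  syncOne: (doc: T) => Promise<void>
): Promise<void> {
  for (let i = 0; i < docs.length; i += SYNC_BATCH_SIZE) {
    const batch = docs.slice(i, i + SYNC_BATCH_SIZE);
    // Each batch runs concurrently; the next batch starts only after
    // every document in the current batch has finished syncing.
    await Promise.all(batch.map(syncOne));
  }
}
```

Without the outer loop, a 100-document tree would fire 100 concurrent sync messages at once; with it, the server sees at most 10 in flight.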

## Subduction sync backend (`--sub`)

The `--sub` flag switches from the default WebSocket sync adapter to the Subduction backend built into `[email protected]`. The Repo manages a `SubductionSource` internally — pushwork just passes `subductionWebsocketEndpoints` and the Repo handles connection management, sync, and retries.

### How it works

- `repo-factory.ts`: Initializes Subduction Wasm via ESM dynamic import, then creates Repo. When `sub: true`, passes `subductionWebsocketEndpoints: [syncServer]` and `periodicSyncInterval: 2000` (CLI needs fast sync, not the default 10s). When `sub: false`, uses the traditional WebSocket network adapter instead.
- Default server: `wss://subduction.sync.inkandswitch.com` (vs `wss://sync3.automerge.org` for WebSocket).
- `network-sync.ts`: When no `StorageId` is provided (Subduction mode), `waitForSync` falls back to head-stability polling (3 consecutive stable checks at 100ms intervals) instead of `getSyncInfo`-based verification.
- `sync-engine.ts`: In sub mode, skips `recreateFailedDocuments` — SubductionSource has its own heal-sync retry logic.
- Everything else (push/pull phases, artifact handling, `nukeAndRebuildDocs`, change detection) is identical.
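
The head-stability fallback in `network-sync.ts` can be sketched roughly like this. The function name, the heads-snapshot callback, and the timeout are illustrative assumptions; only the 3-check/100ms cadence comes from the description above:

```typescript
// Illustrative sketch of head-stability polling: treat sync as settled once
// a snapshot of all document heads is unchanged for 3 consecutive checks.
// getHeadsSnapshot is a hypothetical callback (e.g. a serialized map of
// document URL -> heads); the 10s timeout is an assumption.
async function waitForStableHeads(
  getHeadsSnapshot: () => string,
  { requiredStableChecks = 3, intervalMs = 100, timeoutMs = 10_000 } = {}
): Promise<boolean> {
  let previous = getHeadsSnapshot();
  let stableChecks = 0;
  const deadline = Date.now() + timeoutMs;

  while (Date.now() < deadline) {
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
    const current = getHeadsSnapshot();
    if (current === previous) {
      stableChecks += 1;
      if (stableChecks >= requiredStableChecks) return true; // settled
    } else {
      stableChecks = 0; // heads moved; restart the stability count
      previous = current;
    }
  }
  return false; // timed out before heads stabilized
}
```

Any movement in the heads resets the counter, so the function only returns true after a genuinely quiet window rather than a lucky single check.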

### Wasm initialization

As of `[email protected]`, the Repo constructor _always_ creates a `SubductionSource` internally (even without Subduction endpoints), which imports `MemorySigner` and `set_subduction_logger` from `@automerge/automerge-subduction/slim`. The `/slim` entry does NOT auto-init the Wasm — so Wasm must be initialized before _any_ `new Repo()` call, including the default WebSocket path.

`automerge-repo` exports `initSubduction()` which dynamically imports `@automerge/automerge-subduction` (the non-`/slim` entry that auto-inits Wasm). Pushwork calls this via `repoMod.initSubduction()` after loading the Repo module — no direct dependency on `@automerge/automerge-subduction` is needed.

`repo-factory.ts` uses a `new Function("specifier", "return import(specifier)")` wrapper to perform _real_ ESM `import()` calls that Node.js evaluates as ESM. This is necessary because TypeScript with `"module": "commonjs"` compiles `await import("x")` to `require("x")`, which resolves CJS entries. The CJS and ESM module graphs have separate Wasm instances, so initializing via CJS `require()` doesn't help the ESM `/slim` imports inside `automerge-repo`. The `new Function` trick bypasses tsc's transformation and shares the same ESM module graph as the Repo's internal imports.

The Repo class itself is also loaded via this ESM dynamic import (cached after first call) so that `new Repo()` sees the initialized Wasm module.
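
The escape hatch looks roughly like this; the caching wrapper and its names are illustrative, not pushwork's actual `repo-factory.ts`:

```typescript
// Real ESM dynamic import from CommonJS-compiled TypeScript. With
// "module": "commonjs", tsc rewrites a literal `await import(...)` into
// `require(...)`; hiding the import expression inside `new Function` keeps
// it out of tsc's reach, so Node evaluates it as a genuine ESM import().
const esmImport = new Function(
  "specifier",
  "return import(specifier)"
) as (specifier: string) => Promise<any>;

// Illustrative caching wrapper (names are assumptions): load automerge-repo
// once via real ESM, init the Subduction Wasm, then reuse the module so
// every `new Repo()` sees the initialized Wasm instance.
let repoModPromise: Promise<any> | undefined;

async function loadRepoModule(): Promise<any> {
  if (!repoModPromise) {
    repoModPromise = (async () => {
      const repoMod = await esmImport("@automerge/automerge-repo");
      await repoMod.initSubduction(); // must run before any `new Repo()`
      return repoMod;
    })();
  }
  return repoModPromise;
}
```

Because the import goes through the real ESM loader, it resolves the same module graph (and the same Wasm instance) as the `/slim` imports inside `automerge-repo` itself.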

### Packaging notes

- `[email protected]` correctly pins `@automerge/[email protected]` — no pnpm override needed (unlike subduction.7 which required an override to fix a version mismatch).
- `RepoConfig` now properly types all Subduction options (`subductionWebsocketEndpoints`, `periodicSyncInterval`, `batchSyncInterval`, `signer`, `subductionPolicy`, `subductionAdapters`) — no `as any` cast needed.
- The `automerge-repo-network-websocket` adapter's `NetworkAdapter` types are slightly behind the repo's `NetworkAdapterInterface` (missing `state()` method in declarations). The adapter works at runtime; the type mismatch is worked around with `as unknown as NetworkAdapterInterface`.
- New `"heal-exhausted"` event on Repo fires when self-healing sync gives up after all retry attempts for a document. Not currently used by pushwork but available for better error reporting.

### Subduction mode persistence

`--sub` is only accepted on `init` and `clone`. It persists `subduction: true` in `.pushwork/config.json`. All subsequent commands (`sync`, `watch`, etc.) read it from config via `config.subduction ?? false`. The force-defaults path in `setupCommandContext` preserves `subduction` alongside `root_directory_url`.

When Subduction mode is active, commands print a banner: "Using Subduction sync backend (from config)".

Every `sync` run prints the root Automerge URL at the end.

### Corrupt storage recovery

`repo-factory.ts` scans `.pushwork/automerge/` for 0-byte files before creating the Repo. These indicate incomplete writes from a previous run (process exited before storage flushed). If any are found, the entire automerge cache is wiped and recreated — data will re-download from the sync server. The snapshot (`.pushwork/snapshot.json`) is preserved so all document URLs are retained.

This is a safety net for the Subduction `HydrationError: LooseCommit too short` crash. The upstream fix (`Repo.shutdown()` now calls `flush()` and `SubductionSource.shutdown()` awaits pending writes) prevents the corruption from happening in the first place, but edge cases (SIGKILL, OOM, power loss) can still produce 0-byte files.
128 changes: 128 additions & 0 deletions flake.lock


66 changes: 66 additions & 0 deletions flake.nix
@@ -0,0 +1,66 @@
{
description = "Pushwork: Bidirectional directory synchronization using Automerge CRDTs";

inputs = {
command-utils.url = "git+https://codeberg.org/expede/nix-command-utils";
flake-utils.url = "github:numtide/flake-utils";
nixpkgs.url = "github:nixos/nixpkgs/nixos-25.11";
};

outputs = {
self,
command-utils,
flake-utils,
nixpkgs,
}:
flake-utils.lib.eachDefaultSystem (system: let
pkgs = import nixpkgs {inherit system;};

nodejs = pkgs.nodejs_24;
pnpm-pkg = pkgs.pnpm;
pnpm' = "${pnpm-pkg}/bin/pnpm";

asModule = command-utils.asModule.${system};
cmd = command-utils.cmd.${system};
pnpm = command-utils.pnpm.${system};

pnpm-cfg = {pnpm = pnpm';};

menu =
command-utils.commands.${system}
[
(pnpm.build pnpm-cfg)
(pnpm.dev pnpm-cfg)
(pnpm.install pnpm-cfg)
(pnpm.lint pnpm-cfg)
(pnpm.test pnpm-cfg)
(pnpm.typecheck pnpm-cfg)
(asModule {
"clean" = cmd "Remove dist and node_modules" "rm -rf dist node_modules";
"start" = cmd "Run pushwork CLI" "node dist/cli.js \"$@\"";
"sync" = cmd "Build and run sync" "${pnpm'} build && node dist/cli.js sync \"$@\"";
"watch" = cmd "Watch, build, and sync loop" "node dist/cli.js watch \"$@\"";
})
];
in {
devShells.default = pkgs.mkShell {
name = "Pushwork Dev Shell";

nativeBuildInputs =
[
nodejs
pkgs.nodePackages.vscode-langservers-extracted
pkgs.typescript
pkgs.typescript-language-server
pnpm-pkg
]
++ menu;

shellHook = ''
menu
'';
};

formatter = pkgs.alejandra;
});
}
12 changes: 6 additions & 6 deletions package.json
@@ -1,6 +1,6 @@
{
"name": "pushwork",
"version": "1.1.8",
"version": "1.1.8-subduction",
"description": "Bidirectional directory synchronization using Automerge CRDTs",
"main": "dist/index.js",
"exports": {
@@ -38,11 +38,11 @@
"author": "Peter van Hardenberg",
"license": "MIT",
"dependencies": {
"@automerge/automerge": "^3.2.4",
"@automerge/automerge-repo": "^2.5.3",
"@automerge/automerge-repo-network-websocket": "^2.5.3",
"@automerge/automerge-repo-storage-indexeddb": "^2.5.3",
"@automerge/automerge-repo-storage-nodefs": "^2.5.3",
"@automerge/automerge": "^3.2.5",
"@automerge/automerge-repo": "2.6.0-subduction.12",
"@automerge/automerge-repo-network-websocket": "2.6.0-subduction.12",
"@automerge/automerge-repo-storage-nodefs": "2.6.0-subduction.12",
"@automerge/automerge-subduction": "0.7.0",
"@commander-js/extra-typings": "^14.0.0",
"chalk": "^5.3.0",
"commander": "^14.0.2",