Conversation
Pull request overview
This PR migrates the backend HTTP layer from Hapi to Hono, updating server bootstrapping, routing/auth, and static file serving, and adds new Deno tests to validate endpoint behavior.
Changes:
- Replaced the Hapi server with a Hono app + `@hono/node-server` HTTP server adaptor.
- Ported REST endpoints, auth, and static asset serving from Hapi/Joi/Boom patterns to Hono + `HTTPException`.
- Added a new Deno test suite (helpers + read/write/KV/ZKPP/stateless tests) to cover the migrated routes.
Reviewed changes
Copilot reviewed 19 out of 20 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| src/serve/server.ts | Replace Hapi server bootstrap with Hono + node-server adaptor and update WS version info payload. |
| src/serve/routes.ts | Port route definitions to Hono handlers, replace Boom/Joi validation with manual checks, and implement static file serving. |
| src/serve/auth.ts | Replace Hapi auth plugin with Hono middleware that sets credentials and authStrategy on context. |
| src/serve/dashboard-server.ts | Replace Hapi/Inert dashboard server with Hono + serveStatic. |
| src/serve/database.ts | Replace Boom exceptions with typed backend errors (NotFound/Gone/Conflict) for mapping at the HTTP layer. |
| src/serve/errors.ts | Add BackendErrorConflict to support conflict mapping. |
| src/serve/index.ts | Update shutdown log message to remove Hapi reference. |
| src/pin.ts | Use fullContractName when updating config keys and logging contract name. |
| src/eventsAfter.ts | Remove Boom-specific wording in comment. |
| deno.json | Add Hono imports and adjust test task flags. |
| deno.lock | Update dependency lock entries for Hono packages and other version shifts. |
| src/serve/routes-*.test.ts, src/serve/routes-test-helpers.ts | New route-level Deno tests covering the migrated endpoints. |
| build/* | Regenerated build artifacts reflecting the server/runtime changes. |
```ts
// Check content-length before reading body
const contentLength = parseInt(c.req.header('content-length') || '0', 10)
if (contentLength > FILE_UPLOAD_MAX_BYTES) {
  throw new HTTPException(413, { message: 'Payload too large' })
}
// …
const ourHash = createCID((request.payload as { payload: Uint8Array }[])[i].payload, multicodes.SHELTER_FILE_CHUNK)
if ((request.payload as { payload: Uint8Array }[])[i].payload.byteLength !== chunk[0]) {
  throw Boom.badRequest('bad chunk size')
// …
const formData = await c.req.formData()
const manifestFile = formData.get('manifest') as File | null
```
The upload size guard relies only on content-length, but for chunked uploads or missing/invalid content-length this becomes 0 and c.req.formData() will still buffer the entire body into memory. This is a regression from Hapi’s maxBytes and can allow large uploads/DoS. Consider enforcing a hard byte limit while reading the request body (or requiring a valid Content-Length and rejecting otherwise), and ensure the limit is applied before fully materializing formData().
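To make the failure mode concrete, here is a minimal, hypothetical sketch (isolated from Hono; `FILE_UPLOAD_MAX_BYTES` is a stand-in constant, not the project's value): with `parseInt(header || '0', 10)`, a missing or non-numeric Content-Length collapses to `0` or `NaN` and slips past the `>` comparison, so the guard only stops clients that honestly declare an oversized body.

```typescript
// Hypothetical sketch of the guard's logic, isolated from the route code.
const FILE_UPLOAD_MAX_BYTES = 1024

// Mirrors the current check: parseInt(c.req.header('content-length') || '0', 10)
function naiveGuardRejects (contentLength: string | undefined): boolean {
  const parsed = parseInt(contentLength || '0', 10)
  return parsed > FILE_UPLOAD_MAX_BYTES
}

// A stricter variant: require a valid, in-range Content-Length up front.
function strictGuardRejects (contentLength: string | undefined): boolean {
  if (contentLength === undefined) return true // e.g. chunked transfer encoding
  const parsed = Number(contentLength)
  if (!Number.isInteger(parsed) || parsed < 0) return true
  return parsed > FILE_UPLOAD_MAX_BYTES
}

// A chunked upload sends no Content-Length: the naive guard waves it through.
console.log(naiveGuardRejects(undefined)) // false — not rejected
console.log(naiveGuardRejects('999999'))  // true — only honest clients are stopped
console.log(strictGuardRejects(undefined)) // true
console.log(strictGuardRejects('512'))     // false
```

Even the strict variant only validates the declared size; a hard limit while reading the body (as suggested above) is still needed before materializing `formData()`.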
```ts
const payloadBuffer = Buffer.from(await c.req.arrayBuffer())
```
Buffer.from(await c.req.arrayBuffer()) reads the entire KV payload into memory with no size limit. Previously this endpoint had a maxBytes cap; without an explicit limit this can be abused for memory/CPU DoS. Add a request-size limit (e.g., reject when Content-Length exceeds the configured cap and/or enforce a streaming limit while reading the body).
Suggested change (replacing `const payloadBuffer = Buffer.from(await c.req.arrayBuffer())`):

```ts
const MAX_KV_PAYLOAD_BYTES = 1024 * 1024
const readRequestBodyWithLimit = async (): Promise<Buffer> => {
  const contentLength = c.req.header('content-length')
  if (contentLength !== undefined) {
    const parsedLength = Number(contentLength)
    if (!Number.isFinite(parsedLength) || parsedLength < 0) {
      throw new HTTPException(400, { message: 'Invalid Content-Length' })
    }
    if (parsedLength > MAX_KV_PAYLOAD_BYTES) {
      throw new HTTPException(413, { message: 'Payload too large' })
    }
  }
  const body = c.req.raw.body
  if (!body) return Buffer.alloc(0)
  const reader = body.getReader()
  const chunks: Uint8Array[] = []
  let totalBytes = 0
  try {
    while (true) {
      const { done, value } = await reader.read()
      if (done) break
      if (!value) continue
      totalBytes += value.byteLength
      if (totalBytes > MAX_KV_PAYLOAD_BYTES) {
        throw new HTTPException(413, { message: 'Payload too large' })
      }
      chunks.push(value)
    }
  } finally {
    reader.releaseLock()
  }
  return Buffer.concat(chunks.map((chunk) => Buffer.from(chunk)), totalBytes)
}
const payloadBuffer = await readRequestBodyWithLimit()
```
This isn't really a regression, but the issue is valid. See #133.
/crush_fast {{openrouter, deepseek/deepseek-v4-pro}} AI review started.
Advanced AI Review

Here's my review:

🔴 Issue 1: Server starts but database is immediately closed — server becomes non-functional

```ts
// src/serve.ts:97-138
export async function serve (args: ArgumentsCamelCase<Params>) {
  await initDB()
  try {
    await startDashboardServer() // non-blocking after listen
    if (args.dev) {
      await watch(args) // non-blocking (starts detached watcher)
    } else {
      await deployManifests(args) // blocks until deploy completes, then returns
    }
    await startApplicationServer() // non-blocking after listen
    // OLD CODE HAD: await new Promise(() => {}) <-- REMOVED
  } catch (error) {
    console.error(colors.red('❌ Failed to start server:'), error)
    process.exit(1)
  } finally {
    await closeDB() // 💥 DB closes while server is still running!
  }
}
```

Effect: Every API request after the first few seconds will fail because the database backend has been torn down. The process stays alive because the HTTP server sockets are open, but the server is effectively dead.

Fix: Either restore the blocking promise or move `closeDB()` into the shutdown path:

```ts
// Option A — restore the blocking promise:
  await startApplicationServer()
  // Keep the process alive
  await new Promise(() => {})
} catch (error) {
  console.error(colors.red('❌ Failed to start server:'), error)
  process.exit(1)
}

// Option B — move closeDB to signal handlers and rely on stopServer():
export async function serve (args: ArgumentsCamelCase<Params>) {
  await initDB()
  try {
    await startDashboardServer()
    if (args.dev) {
      await watch(args)
    } else {
      await deployManifests(args)
    }
    await startApplicationServer()
    // Block forever; cleanup happens in stopServer() via signal handlers
    await new Promise(() => {})
  } catch (error) {
    console.error(colors.red('❌ Failed to start server:'), error)
    process.exit(1)
  }
}
```

🟡 Issue 2: …
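The sequencing bug in Issue 1 above reduces to a plain async control-flow fact, sketched here with made-up stand-in functions (not the project's code): once every awaited call in the `try` body resolves, the `finally` block runs immediately, even if those calls started long-lived background work such as a listening server.

```typescript
// Illustrative sketch: startApplicationServer resolves right after "listen",
// so the finally-block cleanup runs while the "server" is conceptually still up.
const log: string[] = []

async function startApplicationServer (): Promise<void> {
  log.push('listening') // resolves immediately, like listen()
}

async function closeDB (): Promise<void> {
  log.push('db closed')
}

async function serve (): Promise<void> {
  try {
    await startApplicationServer()
    // nothing blocks here (the old `await new Promise(() => {})` is gone)
  } finally {
    await closeDB() // runs as soon as the try body resolves
  }
}

await serve()
console.log(log.join(' -> ')) // "listening -> db closed"
```

This is why either a never-resolving promise at the end of `try` or cleanup moved to signal handlers is needed: with neither, the teardown races the very first requests.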
/crush_fast AI review started.
Advanced AI Review

Code Review: Replace Hapi with Hono

🔴 Issue 1: Server shuts down DB immediately after startup; DB never properly closed on process exit

File: `src/serve.ts`. The old code had a blocking `await new Promise(() => {})` keeping the process alive; the current code does not:

```ts
// src/serve.ts — current code
export async function serve (args: ArgumentsCamelCase<Params>) {
  await initDB() // dbRefs: 0 → 1
  try {
    await startDashboardServer() // resolves immediately after listen()
    await deployManifests(args) // resolves quickly
    await startApplicationServer() // resolves immediately after listen()
    // ← try block ends here, nothing keeps it alive
  } catch (error) {
    process.exit(1)
  } finally {
    await closeDB() // dbRefs: 2 → 1 (doesn't actually close!)
  }
}
```

Impact: Every graceful shutdown leaves the DB un-cleanly closed (risking data corruption with SQLite), workers un-terminated, and the pubsub server's intervals running.

Suggested fix: Register a `SERVER_EXITING` handler:

```ts
// src/serve/server.ts — add after installServerSelectorsOnce():
sbp('okTurtles.events/once', SERVER_EXITING, () => {
  sbp('okTurtles.eventQueue/queueEvent', SERVER_EXITING, async () => {
    await stopServer()
  })
})

// src/serve.ts — simplify:
export async function serve (args: ArgumentsCamelCase<Params>) {
  try {
    await startDashboardServer()
    // ...
    await startApplicationServer()
  } catch (error) {
    console.error(colors.red('❌ Failed to start server:'), error)
    process.exit(1)
  }
}
```

🔴 Issue 2: …
One preexisting but important issue is this:
If the server is directly exposed to the Internet, for whatever reason, limits are trivial to circumvent by setting … to …
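The circumvention concern can be demonstrated with a small sketch (hypothetical names, web-standard `ReadableStream` only): the Content-Length header is entirely client-controlled, so a request can declare one size and deliver another, and only a limit enforced while actually reading the body catches the lie.

```typescript
// Hypothetical sketch: the client "declares" a small Content-Length but
// streams more bytes. A header-only check passes; a streaming count fails.
const MAX_BYTES = 16

function makeBody (chunks: Uint8Array[]): ReadableStream<Uint8Array> {
  return new ReadableStream({
    start (controller) {
      for (const chunk of chunks) controller.enqueue(chunk)
      controller.close()
    }
  })
}

// Count bytes while reading; abort as soon as the cap is exceeded.
async function readWithLimit (body: ReadableStream<Uint8Array>, max: number): Promise<number> {
  let total = 0
  const reader = body.getReader()
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    total += value.byteLength
    if (total > max) throw new Error('Payload too large')
  }
  return total
}

const declared = 8 // forged header value, freely chosen by the client
const headerCheckPasses = declared <= MAX_BYTES // true: the header alone is fooled

let streamingCaughtIt = false
try {
  await readWithLimit(makeBody([new Uint8Array(32)]), MAX_BYTES) // 32 real bytes
} catch {
  streamingCaughtIt = true
}
console.log(headerCheckPasses, streamingCaughtIt) // true true
```

In other words, a header check is a fast-path rejection at best; the streaming limit is the actual enforcement.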
GPT-5.5:

1. 🟡 Security headers are skipped for CORS preflights and thrown errors

The Hapi implementation added the `X-Frame-Options` header to responses; the Hono port registers the header middleware after CORS:

```ts
currentApp.use('*', cors({ origin: '*' }))
currentApp.use('*', async (c, next) => {
  await next()
  c.header('X-Frame-Options', 'DENY')
})
```

Because Hono's CORS middleware handles preflight `OPTIONS` requests itself, those responses (and responses produced from thrown errors) never pass through the header middleware. Move the header before CORS and make error responses set it too, for example:

```ts
currentApp.use('*', async (c, next) => {
  await next()
  c.header('X-Frame-Options', 'DENY')
})
currentApp.use('*', cors({ origin: '*' }))
currentApp.onError((err, c) => {
  const response = err instanceof HTTPException
    ? err.getResponse()
    : c.text('Internal Server Error', 500)
  response.headers.set('X-Frame-Options', 'DENY')
  return response
})
```

2. 🟡 Missing dashboard assets return …
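The ordering point in item 1 generalizes beyond Hono. Here is an illustrative sketch with a toy middleware chain (not Hono's API; all names are made up): a middleware that answers a request itself without calling `next()` short-circuits everything registered after it, so response-header middleware must be registered first so it wraps the short-circuiting handler.

```typescript
// Toy middleware chain: each middleware may call next() or short-circuit
// by producing its own response without calling next().
type Ctx = { headers: Record<string, string>, body: string }
type Middleware = (ctx: Ctx, next: () => void) => void

function run (middlewares: Middleware[]): Ctx {
  const ctx: Ctx = { headers: {}, body: '' }
  let i = -1
  const dispatch = (n: number): void => {
    if (n <= i) return // guard against calling next() twice
    i = n
    const mw = middlewares[n]
    if (mw) mw(ctx, () => dispatch(n + 1))
  }
  dispatch(0)
  return ctx
}

// Short-circuits like a CORS preflight handler: never calls next().
const preflight: Middleware = (ctx) => { ctx.body = 'preflight ok' }

// Sets a security header after downstream work, like the X-Frame-Options middleware.
const frameOptions: Middleware = (ctx, next) => {
  next()
  ctx.headers['X-Frame-Options'] = 'DENY'
}

// Registered after the short-circuit: never runs.
const bad = run([preflight, frameOptions])
// Registered before it: wraps the short-circuit and still sets the header.
const good = run([frameOptions, preflight])
console.log(bad.headers['X-Frame-Options'], good.headers['X-Frame-Options'])
// undefined DENY
```

The same reasoning explains why `onError` needs to set the header explicitly: a thrown error unwinds past the code after `await next()`.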
/crush_fast AI review started.

taoeffect left a comment:

Great work @corrideat! 👏 👏 👏

Advanced AI Review

The typo was in the diff hunk header, not the actual file. The code is correct.
| # | Severity | File | Summary |
|---|---|---|---|
| 1 | 🔴 | src/serve.ts | Process may exit immediately after server starts — missing keep-alive |
| 2 | 🔴 | src/main.ts | `exit()` in catch masks re-thrown errors and loses stack traces |
| 3 | 🟡 | src/migrate.ts | `finally` cleanup race with reference-counted DB |
| 4 | 🟡 | src/serve/routes.ts | `.strict()` + `z.stringbool()` may reject empty query param |
| 5 | 🟡 | scripts/build.ts | node-builtins plugin doesn't handle subpath imports like `fs/promises` |
| 6 | 🟡 | src/migrate.ts | Signal handlers added via `process.on` never removed |
| 7 | ⚪️ | bin/chel.exe | Placeholder text file inconsistency with binary bin/chel |
| 8 | ⚪️ | .github/workflows/pull-request-bin.yml | ubuntu-slim runner may not exist |
| 9 | ⚪️ | src/serve/dashboard-server.ts | Uses Deno `serveStatic` with Node.js adaptor server |
| 10 | ⚪️ | src/eventsAfter.ts | CLI param `height` vs route param `since` naming mismatch |
Review generated using glm-5.1 via Z.AI. Comment /crush_fast to re-run.
This closes #75 and closes #69