diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 4a23e7a5..075f208d 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -206,7 +206,7 @@ jobs: if: matrix.go-version == '1.24' uses: codecov/codecov-action@v5 with: - file: ./cli/coverage.out + files: ./cli/coverage.out flags: cli fail_ci_if_error: false @@ -336,7 +336,7 @@ jobs: if: matrix.go-version == '1.24' uses: codecov/codecov-action@v5 with: - file: ./sdk/golang/coverage.out + files: ./sdk/golang/coverage.out flags: go-sdk fail_ci_if_error: false diff --git a/README.md b/README.md index 08ad9710..dbdbe6dc 100644 --- a/README.md +++ b/README.md @@ -1,458 +1,104 @@ -# SyftHub - -A registry and discovery platform for AI/ML endpoints with identity provider capabilities. - -[![CI](https://github.com/IonesioJunior/syfthub/actions/workflows/ci.yml/badge.svg)](https://github.com/IonesioJunior/syfthub/actions/workflows/ci.yml) -[![Python](https://img.shields.io/badge/python-3.9%2B-blue)](https://www.python.org/downloads/) -[![Node.js](https://img.shields.io/badge/node.js-18%2B-green)](https://nodejs.org/) -[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff) -[![uv](https://img.shields.io/badge/uv-package%20manager-orange)](https://github.com/astral-sh/uv) - -## What is SyftHub? - -SyftHub is a platform for discovering, managing, and sharing AI/ML endpoints — think of it as **"GitHub for AI endpoints"**. 
It enables developers and organizations to: - -- **Discover** — Browse and search public ML models and data sources, find trending endpoints by stars -- **Share** — Publish your own endpoints with flexible visibility controls (public, internal, private) -- **Collaborate** — Create organizations to manage endpoints as a team with role-based access -- **Integrate** — Use official Python and TypeScript SDKs for programmatic access -- **Federate** — Built-in Identity Provider (IdP) enables satellite services to verify users via RS256-signed tokens - -## Core Concepts - -### Endpoints - -Endpoints are the primary resource in SyftHub. Each endpoint represents either: - -| Type | Description | -|------|-------------| -| **Model** | An ML model endpoint (inference, predictions, embeddings, etc.) | -| **Data Source** | A data access endpoint (databases, APIs, datasets, etc.) | - -Endpoints support: -- **Visibility**: `public` (anyone), `internal` (authenticated users), `private` (owner/members only) -- **Versioning**: Semantic version tracking (e.g., `0.1.0`) -- **README**: Markdown documentation with syntax highlighting -- **Policies**: Flexible JSON configuration for access policies -- **Connections**: Multiple connection methods with custom configuration -- **Stars**: Community rating system - -Endpoints are accessed via GitHub-style URLs: `/{owner}/{endpoint-slug}` - -### Organizations +
-Organizations enable team collaboration on endpoints: +OpenMined -| Role | Permissions | -|------|-------------| -| **Owner** | Full control, can delete organization, manage all members | -| **Admin** | Manage endpoints, add/remove members, update settings | -| **Member** | Access organization endpoints, basic collaboration | +# SyftHub -### Identity Provider (IdP) +**The home for AI/ML endpoints.** -SyftHub acts as an Identity Provider for satellite services: +Discover, share, and run AI models and data sources — like GitHub, but for endpoints. -1. User authenticates with SyftHub (gets HS256 access token) -2. User requests a satellite token for a specific service (audience) -3. Hub issues RS256-signed short-lived token (60 seconds) -4. Satellite service verifies token using Hub's JWKS endpoint (`/.well-known/jwks.json`) -5. No API call to Hub needed for verification — fully distributed +[![CI](https://github.com/IonesioJunior/syfthub/actions/workflows/ci.yml/badge.svg)](https://github.com/IonesioJunior/syfthub/actions/workflows/ci.yml) +[![License: Apache 2.0](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](LICENSE) -This enables federated authentication across multiple services with a single SyftHub login. +[Documentation](docs/index.md) · [Python SDK](sdk/python) · [TypeScript SDK](sdk/typescript) · [Go SDK](sdk/golang) · [Contributing](CONTRIBUTING.md) -### Stars +
-Users can star endpoints to show appreciation and help surface popular content. The trending page ranks endpoints by star count. +--- -## Features +## What is SyftHub? -### Endpoint Management -- Create, update, and delete endpoints with full CRUD support -- Auto-generated URL-safe slugs from endpoint names -- Markdown README rendering with syntax highlighting -- Flexible policy and connection configuration (JSON) -- Star/unstar endpoints with trending discovery +SyftHub is a registry and discovery platform for AI/ML endpoints. Publish a model or a data source, point friends or teammates at it, and let anyone build on top of it — through a web UI, an SDK, or a chat interface backed by retrieval-augmented generation. -### Organization Collaboration -- Create organizations with unique slugs -- Invite members with role-based permissions -- Shared endpoint ownership across team members -- Protected operations (cannot remove last owner) +If you've ever wished there were a single place to find the model your team's been talking about, share your own, or wire one into a chat app without standing up new infrastructure — that's what SyftHub is for. -### Identity Provider -- JWKS endpoint for distributed key verification -- RS256-signed satellite tokens with audience scoping -- Configurable audience allowlist -- Server-side token verification endpoint +## Highlights -### Authentication & Security -- JWT-based authentication (HS256 for hub, RS256 for satellites) -- Access + refresh token flow with configurable expiry -- Token blacklist for secure logout -- Argon2 password hashing -- Role-based access control (admin, user, guest) +- **Browse and discover** public models and data sources, sorted by popularity and freshness. +- **Publish your own** with public, internal, or private visibility, full markdown READMEs, and versioning. +- **Chat with endpoints** — built-in RAG aggregator orchestrates retrieval and generation across endpoints. 
+- **Collaborate in organizations** with role-based access for teams. +- **Federated identity** — SyftHub issues short-lived signed tokens so satellite services can verify users without phoning home. +- **First-class SDKs** for Python, TypeScript, and Go, plus a CLI. -### SDKs -- Official Python SDK (`syfthub-sdk`) -- Official TypeScript SDK (`@syfthub/sdk`) -- Full API parity with language-appropriate conventions +## Quick Start -## Architecture +Clone, copy the env template, and run: -``` -┌─────────────────────────────────────────────────────────────────┐ -│ NGINX (Reverse Proxy) │ -│ Port 80/443 (prod) | 8080 (dev) │ -└─────────────────────┬───────────────────────┬───────────────────┘ - │ │ - ┌────────────▼────────────┐ ┌───────▼────────────┐ - │ Frontend (React 19) │ │ Backend (FastAPI) │ - │ TypeScript + Vite │ │ Python 3.12 │ - │ Tailwind + shadcn/ui │ │ SQLAlchemy ORM │ - └────────────┬────────────┘ └───────┬────────────┘ - │ │ - │ @syfthub/sdk │ - └───────────────────────┤ - │ - ┌───────────────────────┴────────────────┐ - │ │ - ┌───────▼───────┐ ┌───────────▼───────┐ - │ PostgreSQL │ │ Redis │ - │ Database │ │ Sessions/Cache │ - └───────────────┘ └───────────────────┘ +```bash +git clone https://github.com/IonesioJunior/syfthub.git +cd syfthub +cp .env.example .env +make dev ``` -| Layer | Technology | -|-------|------------| -| **Backend** | FastAPI + SQLAlchemy + PostgreSQL | -| **Frontend** | React 19 + TypeScript + Vite + Tailwind CSS | -| **Package Management** | uv (backend) + npm (frontend) | -| **Testing** | pytest (backend) + Playwright (frontend) | -| **CI/CD** | GitHub Actions with parallel jobs | +The full stack starts behind nginx at . API docs live at . -## SDKs +To stop everything: `make stop`. -SyftHub provides official SDKs for Python and TypeScript. See the [SDK documentation](./sdk/README.md) for full details. 
- -### Python SDK - -```bash -pip install syfthub-sdk -# or -uv add syfthub-sdk -``` +## Using the SDKs ```python +# Python from syfthub_sdk import SyftHubClient client = SyftHubClient(base_url="https://hub.syft.com") +client.auth.login(email="alice@example.com", password="...") -# Authentication -client.auth.login(email="alice@example.com", password="secret123") - -# Browse public endpoints for endpoint in client.hub.browse(): - print(f"{endpoint.name}: {endpoint.description}") - -# Get trending endpoints -trending = client.hub.trending(min_stars=5) - -# Create an endpoint -endpoint = client.my_endpoints.create({ - "name": "My ML Model", - "type": "model", - "visibility": "public", - "description": "A powerful classification model" -}) - -# Star an endpoint -client.hub.star("alice/awesome-model") -``` - -### TypeScript SDK - -```bash -npm install @syfthub/sdk -# or -yarn add @syfthub/sdk + print(endpoint.name) ``` ```typescript +// TypeScript import { SyftHubClient } from '@syfthub/sdk'; const client = new SyftHubClient({ baseUrl: 'https://hub.syft.com' }); +await client.auth.login({ email: 'alice@example.com', password: '...' }); -// Authentication -await client.auth.login({ email: 'alice@example.com', password: 'secret123' }); - -// Browse public endpoints for await (const endpoint of client.hub.browse()) { - console.log(`${endpoint.name}: ${endpoint.description}`); + console.log(endpoint.name); } - -// Create an endpoint -const endpoint = await client.myEndpoints.create({ - name: 'My Data Source', - type: 'data_source', - visibility: 'private', - description: 'Internal company data' -}); -``` - -## API Overview - -**Base URL**: `/api/v1` - -| Group | Endpoints | Description | -|-------|-----------|-------------| -| **Auth** | `POST /auth/register`
`POST /auth/login`
`POST /auth/refresh`
`GET /auth/me` | User authentication | -| **Users** | `GET /users/me`
`PUT /users/me`
`GET /users/{id}` | User management | -| **Endpoints** | `POST /endpoints`
`GET /endpoints`
`GET /endpoints/public`
`GET /endpoints/trending`
`POST /endpoints/{id}/star` | Endpoint CRUD & discovery | -| **Organizations** | `POST /organizations`
`GET /organizations`
`POST /organizations/{id}/members` | Organization management | -| **IdP** | `GET /token?aud={service}`
`GET /.well-known/jwks.json`
`POST /verify` | Identity provider | -| **Public** | `GET /{owner}/{slug}` | GitHub-style endpoint access | - -Full API documentation available at `/docs` (Swagger UI) when running the server. - -## Installation - -### Prerequisites - -- **Docker** (recommended) or: - - **Backend**: Python 3.9+ and [uv](https://github.com/astral-sh/uv) - - **Frontend**: Node.js 18+ and npm - -Install uv (for local development): -```bash -curl -LsSf https://astral.sh/uv/install.sh | sh -``` - -### Quick Start - -1. Clone the repository: -```bash -git clone https://github.com/IonesioJunior/syfthub.git -cd syfthub -``` - -2. Copy environment template: -```bash -cp .env.example .env -``` - -3. Start the development environment: -```bash -make dev ``` -This starts all services (backend, frontend, PostgreSQL, Redis) via Docker. - -## Usage +See the [SDK guides](docs/guides/python-sdk.md) for the full reference. -### Development Mode (Docker - Recommended) +## Documentation -Start the full-stack environment with one command: - -```bash -make dev # Start all services -make logs # View container logs -make stop # Stop all services -``` - -The application will be available at: -- **App**: http://localhost -- **API Documentation**: http://localhost/docs -- **Database**: PostgreSQL on localhost:5432 (user: `syfthub`, password: `syfthub_dev_password`) - -### Local Development (Without Docker) - -For backend-only local development: - -```bash -cd backend -uv sync --all-extras --dev -uv run uvicorn syfthub.main:app --reload --port 8000 -``` +Full docs live in [`docs/`](docs/index.md): -For frontend-only local development: +- [Architecture overview](docs/architecture/overview.md) — services, data flow, tokens. +- [Local setup](docs/guides/local-setup.md) — clone to running. +- [Publishing endpoints](docs/guides/publishing-endpoints.md) — share your first model. +- [API reference](docs/api/backend.md) — backend, aggregator, MCP. 
+- [Runbooks](docs/runbooks/deploy.md) — deploy, rollback, incident response. -```bash -cd frontend -npm install -npm run dev -``` - -### Production Mode - -```bash -docker compose -f docker-compose.prod.yml up -d -``` - -## Development - -### Running Tests - -```bash -make test # Run all tests (backend + frontend) -``` - -Or run separately: - -```bash -# Backend tests -cd components/backend && uv run python -m pytest - -# Frontend tests (Playwright E2E) -cd components/frontend && npm test -``` - -### Code Quality - -```bash -make check # Run all code quality checks -``` - -This runs: -- **Backend**: Ruff linting, Ruff formatting, mypy type checking -- **Frontend**: ESLint, TypeScript type checking - -### Manual Quality Commands - -**Backend:** -```bash -cd components/backend -uv run ruff check src/ tests/ # Linting -uv run ruff format src/ tests/ # Formatting -uv run mypy src/ # Type checking -``` - -**Frontend:** -```bash -cd components/frontend -npm run lint # ESLint -npm run format # Prettier -npm run typecheck # TypeScript -``` - -### Available Make Commands - -```bash -make help # Show available commands -make dev # Start development environment -make stop # Stop all services -make test # Run all tests -make check # Run code quality checks -make logs # View container logs -``` - -## Project Structure +## Project layout ``` syfthub/ -├── backend/ # Python FastAPI backend -│ ├── src/syfthub/ # Main Python package -│ │ ├── api/ # FastAPI routes & endpoints -│ │ │ └── endpoints/ # Route handlers (users, endpoints, orgs, tokens) -│ │ ├── auth/ # Authentication & security -│ │ │ ├── security.py # JWT tokens, password hashing -│ │ │ ├── keys.py # RSA key management for IdP -│ │ │ └── satellite_tokens.py # Satellite token logic -│ │ ├── core/ # Configuration -│ │ ├── database/ # Database connection & dependencies -│ │ ├── domain/ # Value objects & exceptions -│ │ ├── models/ # SQLAlchemy ORM models -│ │ ├── repositories/ # Data access layer (Repository pattern) -│ │ 
├── schemas/ # Pydantic request/response DTOs -│ │ ├── services/ # Business logic layer -│ │ ├── templates/ # Jinja2 templates (endpoint HTML views) -│ │ └── main.py # FastAPI app entry point -│ ├── tests/ # Backend test suite -│ ├── scripts/ # Utility scripts -│ ├── pyproject.toml # Dependencies & tool config -│ └── uv.lock # Locked Python dependencies -├── frontend/ # React TypeScript frontend -│ ├── src/ -│ │ ├── components/ # React components -│ │ │ ├── ui/ # shadcn/ui base components -│ │ │ ├── auth/ # Authentication (login, register modals) -│ │ │ ├── settings/ # Settings tabs (profile, security, etc.) -│ │ │ └── chat/ # Chat interface components -│ │ ├── context/ # React Context providers (auth, modals) -│ │ ├── hooks/ # Custom hooks (useAPI, useForm) -│ │ ├── lib/ # Utilities, SDK client, types -│ │ ├── layouts/ # Layout components -│ │ ├── pages/ # Route pages (lazy-loaded) -│ │ ├── assets/ # Static assets (fonts, images) -│ │ ├── styles/ # Global CSS & design tokens -│ │ ├── app.tsx # App root with routing -│ │ └── main.tsx # React entry point -│ ├── __tests__/ # Playwright E2E tests -│ ├── package.json # Frontend dependencies -│ └── vite.config.ts # Vite build configuration -├── sdk/ # Official client SDKs -│ ├── python/ # Python SDK (syfthub-sdk) -│ │ ├── src/syfthub_sdk/ # SDK source code -│ │ ├── tests/ # SDK tests -│ │ └── pyproject.toml # SDK dependencies -│ ├── typescript/ # TypeScript SDK (@syfthub/sdk) -│ │ ├── src/ # SDK source code -│ │ ├── dist/ # Built output -│ │ └── package.json # SDK dependencies -│ └── README.md # SDK documentation -├── nginx/ # Nginx reverse proxy config -│ ├── nginx.dev.conf # Development configuration -│ └── nginx.prod.conf # Production configuration (SSL) -├── docs/ # Documentation -│ ├── authentication.md # Auth system documentation -│ └── pki-workflow.md # PKI/IdP workflow documentation -├── docker-compose.dev.yml # Development environment -├── docker-compose.prod.yml # Production environment -├── .github/workflows/ 
# CI/CD pipelines -├── Makefile # Development commands -├── .pre-commit-config.yaml # Code quality hooks -├── .env.example # Environment template -└── README.md # This file +├── components/ services (backend, frontend, aggregator, mcp) +├── sdk/ official SDKs (python, typescript, go) +├── cli/ command-line client +├── deploy/ deployment configs +└── docs/ documentation ``` -## Authentication - -SyftHub uses a dual-token authentication system: - -### Hub Tokens (HS256) -- Standard JWT access + refresh tokens -- Access tokens expire in 30 minutes (configurable) -- Refresh tokens expire in 7 days (configurable) -- Token blacklist for secure logout - -### Satellite Tokens (RS256) -- Short-lived tokens for external services (60 seconds) -- Signed with RSA private key -- Services verify using JWKS endpoint without calling Hub API -- Audience-scoped for specific services - -For detailed authentication documentation, see [docs/authentication.md](./docs/authentication.md). - -For PKI and IdP workflow details, see [docs/pki-workflow.md](./docs/pki-workflow.md). - -## Environment Variables - -Key environment variables (see `.env.example` for full list): - -| Variable | Description | Default | -|----------|-------------|---------| -| `DATABASE_URL` | PostgreSQL connection string | Required | -| `SECRET_KEY` | JWT signing secret (HS256) | Required | -| `CORS_ORIGINS` | Allowed CORS origins | `*` | -| `ACCESS_TOKEN_EXPIRE_MINUTES` | Access token lifetime | `30` | -| `REFRESH_TOKEN_EXPIRE_DAYS` | Refresh token lifetime | `7` | -| `ALLOWED_AUDIENCES` | Comma-separated satellite service names | `syftai-space` | -| `AUTO_GENERATE_RSA_KEYS` | Auto-generate RSA keys in dev | `true` | - ## Contributing -See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines. +Issues and pull requests are welcome — see [CONTRIBUTING.md](CONTRIBUTING.md). For release process and versioning, see [RELEASING.md](RELEASING.md). ## License -MIT License - see LICENSE file for details. 
+[Apache 2.0](LICENSE) diff --git a/cli/internal/clientutil/client.go b/cli/internal/clientutil/client.go new file mode 100644 index 00000000..7b3db37a --- /dev/null +++ b/cli/internal/clientutil/client.go @@ -0,0 +1,43 @@ +// Package clientutil centralizes construction of the syfthub.Client used by +// the CLI, so that config-to-option translation lives in exactly one place. +package clientutil + +import ( + "time" + + "github.com/OpenMined/syfthub/cli/internal/nodeconfig" + "github.com/openmined/syfthub/sdk/golang/syfthub" +) + +// NewClient builds a *syfthub.Client from the given NodeConfig. +// +// If aggregatorAlias is non-empty, the resolved aggregator URL +// (cfg.GetAggregatorURL(aggregatorAlias)) is applied when non-empty. +// If timeoutOverride > 0, it is used in place of cfg.TimeoutDuration(). +// Any extra options are appended last and therefore take precedence over +// the config-derived ones. +func NewClient(cfg *nodeconfig.NodeConfig, aggregatorAlias string, timeoutOverride time.Duration, extra ...syfthub.Option) (*syfthub.Client, error) { + opts := []syfthub.Option{ + syfthub.WithBaseURL(cfg.HubURL), + } + + timeout := timeoutOverride + if timeout <= 0 { + timeout = cfg.TimeoutDuration() + } + opts = append(opts, syfthub.WithTimeout(timeout)) + + if cfg.HasAPIToken() { + opts = append(opts, syfthub.WithAPIToken(cfg.APIToken)) + } + + if aggregatorAlias != "" { + if url := cfg.GetAggregatorURL(aggregatorAlias); url != "" { + opts = append(opts, syfthub.WithAggregatorURL(url)) + } + } + + opts = append(opts, extra...) + + return syfthub.NewClient(opts...) 
+} diff --git a/cli/internal/cmd/add.go b/cli/internal/cmd/add.go index f066fdfa..4e86fde9 100644 --- a/cli/internal/cmd/add.go +++ b/cli/internal/cmd/add.go @@ -2,9 +2,6 @@ package cmd import ( "github.com/spf13/cobra" - - "github.com/OpenMined/syfthub/cli/internal/config" - "github.com/OpenMined/syfthub/cli/internal/output" ) var addCmd = &cobra.Command{ @@ -24,7 +21,9 @@ var addAggregatorCmd = &cobra.Command{ Use: "aggregator ", Short: "Add an aggregator alias", Args: cobra.ExactArgs(2), - RunE: runAddAggregator, + RunE: func(cmd *cobra.Command, args []string) error { + return runAddAlias(aggregatorKind, args[0], args[1], addAggregatorDefault, addAggregatorJSONOutput) + }, } // Add accounting subcommand @@ -37,7 +36,9 @@ var addAccountingCmd = &cobra.Command{ Use: "accounting ", Short: "Add an accounting service alias", Args: cobra.ExactArgs(2), - RunE: runAddAccounting, + RunE: func(cmd *cobra.Command, args []string) error { + return runAddAlias(accountingKind, args[0], args[1], addAccountingDefault, addAccountingJSONOutput) + }, } func init() { @@ -53,111 +54,3 @@ func init() { addCmd.AddCommand(addAggregatorCmd) addCmd.AddCommand(addAccountingCmd) } - -func runAddAggregator(cmd *cobra.Command, args []string) error { - alias := args[0] - url := args[1] - - cfg := config.Load() - - if _, exists := cfg.Aggregators[alias]; exists { - if addAggregatorJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": "Aggregator '" + alias + "' already exists", - }) - } else { - output.Error("Aggregator '%s' already exists. 
Use 'syft update aggregator' to modify it.", alias) - } - return nil - } - - cfg.Aggregators[alias] = config.AggregatorConfig{URL: url} - - if addAggregatorDefault { - cfg.DefaultAggregator = alias - } - - if err := cfg.Save(); err != nil { - if addAggregatorJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": err.Error(), - }) - } else { - output.Error("Failed to save config: %v", err) - } - return err - } - - if addAggregatorJSONOutput { - output.JSON(map[string]interface{}{ - "status": "success", - "alias": alias, - "url": url, - "is_default": addAggregatorDefault, - }) - } else { - msg := "Added aggregator '" + alias + "' -> " + url - if addAggregatorDefault { - msg += " (default)" - } - output.Success(msg) - } - - return nil -} - -func runAddAccounting(cmd *cobra.Command, args []string) error { - alias := args[0] - url := args[1] - - cfg := config.Load() - - if _, exists := cfg.AccountingServices[alias]; exists { - if addAccountingJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": "Accounting service '" + alias + "' already exists", - }) - } else { - output.Error("Accounting service '%s' already exists. 
Use 'syft update accounting' to modify it.", alias) - } - return nil - } - - cfg.AccountingServices[alias] = config.AccountingConfig{URL: url} - - if addAccountingDefault { - cfg.DefaultAccounting = alias - } - - if err := cfg.Save(); err != nil { - if addAccountingJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": err.Error(), - }) - } else { - output.Error("Failed to save config: %v", err) - } - return err - } - - if addAccountingJSONOutput { - output.JSON(map[string]interface{}{ - "status": "success", - "alias": alias, - "url": url, - "is_default": addAccountingDefault, - }) - } else { - msg := "Added accounting service '" + alias + "' -> " + url - if addAccountingDefault { - msg += " (default)" - } - output.Success(msg) - } - - return nil -} diff --git a/cli/internal/cmd/agent.go b/cli/internal/cmd/agent.go index 30a4f961..a7f93eda 100644 --- a/cli/internal/cmd/agent.go +++ b/cli/internal/cmd/agent.go @@ -8,10 +8,10 @@ import ( "os/signal" "strings" "syscall" - "time" "github.com/spf13/cobra" + "github.com/OpenMined/syfthub/cli/internal/clientutil" "github.com/OpenMined/syfthub/cli/internal/config" "github.com/OpenMined/syfthub/cli/internal/output" "github.com/openmined/syfthub/sdk/golang/syfthub" @@ -115,20 +115,7 @@ func runAgent(cmd *cobra.Command, args []string) error { cfg := config.Load() - aggregatorURL := cfg.GetAggregatorURL(agentAggregator) - - opts := []syfthub.Option{ - syfthub.WithBaseURL(cfg.HubURL), - syfthub.WithTimeout(time.Duration(cfg.Timeout) * time.Second), - } - if aggregatorURL != "" { - opts = append(opts, syfthub.WithAggregatorURL(aggregatorURL)) - } - if cfg.HasAPIToken() { - opts = append(opts, syfthub.WithAPIToken(cfg.APIToken)) - } - - client, err := syfthub.NewClient(opts...) 
+ client, err := clientutil.NewClient(cfg, agentAggregator, 0) if err != nil { output.Error("Failed to create client: %v", err) return err diff --git a/cli/internal/cmd/alias_kind.go b/cli/internal/cmd/alias_kind.go new file mode 100644 index 00000000..e0eab8d3 --- /dev/null +++ b/cli/internal/cmd/alias_kind.go @@ -0,0 +1,244 @@ +package cmd + +import ( + "sort" + + "github.com/OpenMined/syfthub/cli/internal/config" + "github.com/OpenMined/syfthub/cli/internal/output" +) + +// aliasKind captures everything that differs between the aggregator and +// accounting-service alias CRUD commands. The command handlers are shared +// and parameterized via this struct. +type aliasKind struct { + name string // "aggregator" / "accounting service" — for user-facing messages + jsonKey string // "aggregators" / "accounting_services" — for JSON envelope field + tableTitle string // "Aggregator" / "Accounting" — passed to output.PrintAliasesTable + get func(cfg *config.Config) map[string]string + set func(cfg *config.Config, alias, url string) + del func(cfg *config.Config, alias string) + getDefault func(cfg *config.Config) string + setDefault func(cfg *config.Config, alias string) +} + +var aggregatorKind = aliasKind{ + name: "aggregator", + jsonKey: "aggregators", + tableTitle: "Aggregator", + get: func(cfg *config.Config) map[string]string { + out := make(map[string]string, len(cfg.Aggregators)) + for k, v := range cfg.Aggregators { + out[k] = v.URL + } + return out + }, + set: func(cfg *config.Config, alias, url string) { + cfg.Aggregators[alias] = config.AggregatorConfig{URL: url} + }, + del: func(cfg *config.Config, alias string) { + delete(cfg.Aggregators, alias) + }, + getDefault: func(cfg *config.Config) string { return cfg.DefaultAggregator }, + setDefault: func(cfg *config.Config, alias string) { cfg.DefaultAggregator = alias }, +} + +var accountingKind = aliasKind{ + name: "accounting service", + jsonKey: "accounting_services", + tableTitle: "Accounting", + get: 
func(cfg *config.Config) map[string]string { + out := make(map[string]string, len(cfg.AccountingServices)) + for k, v := range cfg.AccountingServices { + out[k] = v.URL + } + return out + }, + set: func(cfg *config.Config, alias, url string) { + cfg.AccountingServices[alias] = config.AccountingConfig{URL: url} + }, + del: func(cfg *config.Config, alias string) { + delete(cfg.AccountingServices, alias) + }, + getDefault: func(cfg *config.Config) string { return cfg.DefaultAccounting }, + setDefault: func(cfg *config.Config, alias string) { cfg.DefaultAccounting = alias }, +} + +// capitalize returns s with its first ASCII letter upper-cased. +func capitalize(s string) string { + if s == "" { + return s + } + b := []byte(s) + if b[0] >= 'a' && b[0] <= 'z' { + b[0] -= 'a' - 'A' + } + return string(b) +} + +func runAddAlias(k aliasKind, alias, url string, setDefault, jsonMode bool) error { + cfg := config.Load() + + if _, exists := k.get(cfg)[alias]; exists { + if jsonMode { + output.JSON(map[string]any{ + "status": output.StatusError, + "message": capitalize(k.name) + " '" + alias + "' already exists", + }) + } else { + output.Error("%s '%s' already exists. 
Use 'syft update %s' to modify it.", capitalize(k.name), alias, firstWord(k.name)) + } + return nil + } + + k.set(cfg, alias, url) + + if setDefault { + k.setDefault(cfg, alias) + } + + if err := cfg.Save(); err != nil { + return output.ReplyError(jsonMode, "Failed to save config: %v", err) + } + + msg := "Added " + k.name + " '" + alias + "' -> " + url + if setDefault { + msg += " (default)" + } + output.ReplySuccess(jsonMode, map[string]any{ + "alias": alias, + "url": url, + "is_default": setDefault, + }, "%s", msg) + + return nil +} + +func runUpdateAlias(k aliasKind, alias, newURL string, setDefault, jsonMode bool) error { + cfg := config.Load() + + if _, exists := k.get(cfg)[alias]; !exists { + if jsonMode { + output.JSON(map[string]any{ + "status": output.StatusError, + "message": capitalize(k.name) + " '" + alias + "' not found", + }) + } else { + output.Error("%s '%s' not found.", capitalize(k.name), alias) + } + return nil + } + + if newURL == "" && !setDefault { + if jsonMode { + output.JSON(map[string]any{ + "status": output.StatusError, + "message": "Nothing to update", + }) + } else { + output.Warning("Nothing to update. 
Specify --url or --default.") + } + return nil + } + + if newURL != "" { + k.set(cfg, alias, newURL) + } + + if setDefault { + k.setDefault(cfg, alias) + } + + if err := cfg.Save(); err != nil { + return output.ReplyError(jsonMode, "Failed to save config: %v", err) + } + + isDefault := k.getDefault(cfg) == alias + + output.ReplySuccess(jsonMode, map[string]any{ + "alias": alias, + "url": k.get(cfg)[alias], + "is_default": isDefault, + }, "Updated %s '%s'", k.name, alias) + + return nil +} + +func runListAlias(k aliasKind, jsonMode bool) error { + cfg := config.Load() + entries := k.get(cfg) + def := k.getDefault(cfg) + + if jsonMode { + result := make(map[string]any) + for alias, url := range entries { + result[alias] = map[string]any{ + "url": url, + "is_default": def == alias, + } + } + output.JSON(map[string]any{ + "status": output.StatusSuccess, + k.jsonKey: result, + }) + return nil + } + + aliases := make([]output.AliasInfo, 0, len(entries)) + for alias, url := range entries { + aliases = append(aliases, output.AliasInfo{ + Name: alias, + URL: url, + IsDefault: def == alias, + }) + } + sort.Slice(aliases, func(i, j int) bool { + return aliases[i].Name < aliases[j].Name + }) + output.PrintAliasesTable(aliases, k.tableTitle) + return nil +} + +func runRemoveAlias(k aliasKind, alias string, jsonMode bool) error { + cfg := config.Load() + + if _, exists := k.get(cfg)[alias]; !exists { + if jsonMode { + output.JSON(map[string]any{ + "status": output.StatusError, + "message": capitalize(k.name) + " '" + alias + "' not found", + }) + } else { + output.Error("%s '%s' not found.", capitalize(k.name), alias) + } + return nil + } + + k.del(cfg, alias) + + // Clear default if it was this alias + if k.getDefault(cfg) == alias { + k.setDefault(cfg, "") + } + + if err := cfg.Save(); err != nil { + return output.ReplyError(jsonMode, "Failed to save config: %v", err) + } + + output.ReplySuccess(jsonMode, map[string]any{ + "alias": alias, + "message": "Removed", + }, "Removed 
%s '%s'", k.name, alias) + + return nil +} + +// firstWord returns the first space-separated word of s. +// Used to build command hints like "syft update aggregator" from "aggregator service". +func firstWord(s string) string { + for i := 0; i < len(s); i++ { + if s[i] == ' ' { + return s[:i] + } + } + return s +} diff --git a/cli/internal/cmd/config.go b/cli/internal/cmd/config.go index 2148210e..6c65fac6 100644 --- a/cli/internal/cmd/config.go +++ b/cli/internal/cmd/config.go @@ -83,8 +83,8 @@ func runConfigSet(cmd *cobra.Command, args []string) error { if _, ok := allowedKeys[key]; !ok { if configSetJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", + output.JSON(map[string]any{ + "status": output.StatusError, "message": fmt.Sprintf("Unknown key '%s'", key), "allowed_keys": getAllowedKeys(), }) @@ -103,8 +103,8 @@ func runConfigSet(cmd *cobra.Command, args []string) error { timeout, err := strconv.ParseFloat(value, 64) if err != nil { if configSetJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", + output.JSON(map[string]any{ + "status": output.StatusError, "message": fmt.Sprintf("Invalid timeout value: %s", value), }) } else { @@ -139,26 +139,13 @@ func runConfigSet(cmd *cobra.Command, args []string) error { } if err := cfg.Save(); err != nil { - if configSetJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": err.Error(), - }) - } else { - output.Error("Failed to save config: %v", err) - } - return err + return output.ReplyError(configSetJSONOutput, "Failed to save config: %v", err) } - if configSetJSONOutput { - output.JSON(map[string]interface{}{ - "status": "success", - "key": key, - "value": typedValue, - }) - } else { - output.Success("Set %s = %v", key, typedValue) - } + output.ReplySuccess(configSetJSONOutput, map[string]any{ + "key": key, + "value": typedValue, + }, "Set %s = %v", key, typedValue) return nil } @@ -179,7 +166,7 @@ func runConfigShow(cmd *cobra.Command, args 
[]string) error { accountingServices[alias] = map[string]string{"url": acc.URL} } - data := map[string]interface{}{ + data := map[string]any{ "api_token": cfg.APIToken, "aggregators": aggregators, "accounting_services": accountingServices, @@ -189,8 +176,8 @@ func runConfigShow(cmd *cobra.Command, args []string) error { "hub_url": cfg.HubURL, } - output.JSON(map[string]interface{}{ - "status": "success", + output.JSON(map[string]any{ + "status": output.StatusSuccess, "config": data, }) } else { @@ -252,8 +239,8 @@ func runConfigShow(cmd *cobra.Command, args []string) error { func runConfigPath(cmd *cobra.Command, args []string) error { if configPathJSONOutput { - output.JSON(map[string]interface{}{ - "status": "success", + output.JSON(map[string]any{ + "status": output.StatusSuccess, "path": config.ConfigFile, }) } else { diff --git a/cli/internal/cmd/list_aliases.go b/cli/internal/cmd/list_aliases.go index ae0d1626..3fe46daf 100644 --- a/cli/internal/cmd/list_aliases.go +++ b/cli/internal/cmd/list_aliases.go @@ -1,12 +1,7 @@ package cmd import ( - "sort" - "github.com/spf13/cobra" - - "github.com/OpenMined/syfthub/cli/internal/config" - "github.com/OpenMined/syfthub/cli/internal/output" ) var listCmd = &cobra.Command{ @@ -22,7 +17,9 @@ var listAggregatorJSONOutput bool var listAggregatorCmd = &cobra.Command{ Use: "aggregator", Short: "List aggregator aliases", - RunE: runListAggregator, + RunE: func(cmd *cobra.Command, args []string) error { + return runListAlias(aggregatorKind, listAggregatorJSONOutput) + }, } // List accounting subcommand @@ -31,7 +28,9 @@ var listAccountingJSONOutput bool var listAccountingCmd = &cobra.Command{ Use: "accounting", Short: "List accounting service aliases", - RunE: runListAccounting, + RunE: func(cmd *cobra.Command, args []string) error { + return runListAlias(accountingKind, listAccountingJSONOutput) + }, } func init() { @@ -41,73 +40,3 @@ func init() { listCmd.AddCommand(listAggregatorCmd) listCmd.AddCommand(listAccountingCmd) } 
- -func runListAggregator(cmd *cobra.Command, args []string) error { - cfg := config.Load() - - if listAggregatorJSONOutput { - result := make(map[string]interface{}) - for alias, agg := range cfg.Aggregators { - isDefault := cfg.DefaultAggregator == alias - result[alias] = map[string]interface{}{ - "url": agg.URL, - "is_default": isDefault, - } - } - output.JSON(map[string]interface{}{ - "status": "success", - "aggregators": result, - }) - } else { - aliases := make([]output.AliasInfo, 0, len(cfg.Aggregators)) - for alias, agg := range cfg.Aggregators { - isDefault := cfg.DefaultAggregator == alias - aliases = append(aliases, output.AliasInfo{ - Name: alias, - URL: agg.URL, - IsDefault: isDefault, - }) - } - sort.Slice(aliases, func(i, j int) bool { - return aliases[i].Name < aliases[j].Name - }) - output.PrintAliasesTable(aliases, "Aggregator") - } - - return nil -} - -func runListAccounting(cmd *cobra.Command, args []string) error { - cfg := config.Load() - - if listAccountingJSONOutput { - result := make(map[string]interface{}) - for alias, acc := range cfg.AccountingServices { - isDefault := cfg.DefaultAccounting == alias - result[alias] = map[string]interface{}{ - "url": acc.URL, - "is_default": isDefault, - } - } - output.JSON(map[string]interface{}{ - "status": "success", - "accounting_services": result, - }) - } else { - aliases := make([]output.AliasInfo, 0, len(cfg.AccountingServices)) - for alias, acc := range cfg.AccountingServices { - isDefault := cfg.DefaultAccounting == alias - aliases = append(aliases, output.AliasInfo{ - Name: alias, - URL: acc.URL, - IsDefault: isDefault, - }) - } - sort.Slice(aliases, func(i, j int) bool { - return aliases[i].Name < aliases[j].Name - }) - output.PrintAliasesTable(aliases, "Accounting") - } - - return nil -} diff --git a/cli/internal/cmd/login.go b/cli/internal/cmd/login.go index 929fecd8..b505c90d 100644 --- a/cli/internal/cmd/login.go +++ b/cli/internal/cmd/login.go @@ -6,11 +6,11 @@ import ( "fmt" "os" 
"strings" - "time" "github.com/spf13/cobra" "golang.org/x/term" + "github.com/OpenMined/syfthub/cli/internal/clientutil" "github.com/OpenMined/syfthub/cli/internal/config" "github.com/OpenMined/syfthub/cli/internal/output" "github.com/openmined/syfthub/sdk/golang/syfthub" @@ -65,7 +65,7 @@ func runLogin(cmd *cobra.Command, args []string) error { if token == "" { msg := "API token is required" if loginJSONOutput { - output.JSON(map[string]interface{}{"status": "error", "message": msg}) + output.JSON(map[string]any{"status": output.StatusError, "message": msg}) } else { output.Error(msg) } @@ -73,18 +73,9 @@ func runLogin(cmd *cobra.Command, args []string) error { } // Validate the token by calling /auth/me - client, err := syfthub.NewClient( - syfthub.WithBaseURL(cfg.HubURL), - syfthub.WithAPIToken(token), - syfthub.WithTimeout(time.Duration(cfg.Timeout)*time.Second), - ) + client, err := clientutil.NewClient(cfg, "", 0, syfthub.WithAPIToken(token)) if err != nil { - if loginJSONOutput { - output.JSON(map[string]interface{}{"status": "error", "message": err.Error()}) - } else { - output.Error("Failed to create client: %v", err) - } - return err + return output.ReplyError(loginJSONOutput, "Failed to create client: %v", err) } defer client.Close() @@ -92,7 +83,7 @@ func runLogin(cmd *cobra.Command, args []string) error { user, err := client.Me(ctx) if err != nil { if loginJSONOutput { - output.JSON(map[string]interface{}{"status": "error", "message": "Invalid API token"}) + output.JSON(map[string]any{"status": output.StatusError, "message": "Invalid API token"}) } else { output.Error("Invalid API token: %v", err) } @@ -102,23 +93,13 @@ func runLogin(cmd *cobra.Command, args []string) error { // Store token in config cfg.SetAPIToken(token) if err := cfg.Save(); err != nil { - if loginJSONOutput { - output.JSON(map[string]interface{}{"status": "error", "message": err.Error()}) - } else { - output.Error("Failed to save token: %v", err) - } - return err + return 
output.ReplyError(loginJSONOutput, "Failed to save token: %v", err) } - if loginJSONOutput { - output.JSON(map[string]interface{}{ - "status": "success", - "username": user.Username, - "email": user.Email, - }) - } else { - output.Success("Logged in as %s", user.Username) - } + output.ReplySuccess(loginJSONOutput, map[string]any{ + "username": user.Username, + "email": user.Email, + }, "Logged in as %s", user.Username) return nil } diff --git a/cli/internal/cmd/logout.go b/cli/internal/cmd/logout.go index 2127a6e4..67bb2f92 100644 --- a/cli/internal/cmd/logout.go +++ b/cli/internal/cmd/logout.go @@ -28,37 +28,15 @@ func runLogout(cmd *cobra.Command, args []string) error { cfg := config.Load() if !cfg.HasAPIToken() { - if logoutJSONOutput { - output.JSON(map[string]interface{}{ - "status": "success", - "message": "Already logged out", - }) - } else { - output.Success("Already logged out") - } + output.ReplySuccess(logoutJSONOutput, map[string]any{"message": "Already logged out"}, "Already logged out") return nil } if err := config.ClearAPITokenAndSave(); err != nil { - if logoutJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": err.Error(), - }) - } else { - output.Error("Failed to clear token: %v", err) - } - return err + return output.ReplyError(logoutJSONOutput, "Failed to clear token: %v", err) } - if logoutJSONOutput { - output.JSON(map[string]interface{}{ - "status": "success", - "message": "Logged out", - }) - } else { - output.Success("Logged out successfully") - } + output.ReplySuccess(logoutJSONOutput, map[string]any{"message": "Logged out"}, "Logged out successfully") return nil } diff --git a/cli/internal/cmd/ls.go b/cli/internal/cmd/ls.go index bbc25c6c..e689628f 100644 --- a/cli/internal/cmd/ls.go +++ b/cli/internal/cmd/ls.go @@ -3,10 +3,10 @@ package cmd import ( "context" "strings" - "time" "github.com/spf13/cobra" + "github.com/OpenMined/syfthub/cli/internal/clientutil" 
"github.com/OpenMined/syfthub/cli/internal/completion" "github.com/OpenMined/syfthub/cli/internal/config" "github.com/OpenMined/syfthub/cli/internal/output" @@ -50,24 +50,9 @@ func runLs(cmd *cobra.Command, args []string) error { } // Create client - clientOpts := []syfthub.Option{ - syfthub.WithBaseURL(cfg.HubURL), - syfthub.WithTimeout(time.Duration(cfg.Timeout) * time.Second), - } - if cfg.HasAPIToken() { - clientOpts = append(clientOpts, syfthub.WithAPIToken(cfg.APIToken)) - } - client, err := syfthub.NewClient(clientOpts...) + client, err := clientutil.NewClient(cfg, "", 0) if err != nil { - if lsJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": err.Error(), - }) - } else { - output.Error("Failed to create client: %v", err) - } - return err + return output.ReplyError(lsJSONOutput, "Failed to create client: %v", err) } defer client.Close() @@ -88,15 +73,7 @@ func runLs(cmd *cobra.Command, args []string) error { func listUsers(ctx context.Context, client *syfthub.Client) error { owners, err := client.Hub.Owners(ctx, syfthub.WithOwnersLimit(lsLimit)) if err != nil { - if lsJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": err.Error(), - }) - } else { - output.Error("Failed to list: %v", err) - } - return err + return output.ReplyError(lsJSONOutput, "Failed to list: %v", err) } // Convert to output format @@ -111,17 +88,17 @@ func listUsers(ctx context.Context, client *syfthub.Client) error { } if lsJSONOutput { - result := make([]map[string]interface{}, 0, len(ownerInfos)) + result := make([]map[string]any, 0, len(ownerInfos)) for _, owner := range ownerInfos { - result = append(result, map[string]interface{}{ + result = append(result, map[string]any{ "username": owner.Username, "endpoint_count": owner.EndpointCount, "model_count": owner.ModelCount, "data_source_count": owner.DataSourceCount, }) } - output.JSON(map[string]interface{}{ - "status": "success", + output.JSON(map[string]any{ + 
"status": output.StatusSuccess, "owners": result, }) } else if lsLongFormat { @@ -137,15 +114,7 @@ func listUserEndpoints(ctx context.Context, client *syfthub.Client, username str // Use the efficient ByOwner API (GET /{owner_slug}) eps, err := client.Hub.ByOwner(ctx, username, syfthub.WithByOwnerLimit(lsLimit)) if err != nil { - if lsJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": err.Error(), - }) - } else { - output.Error("Failed to list: %v", err) - } - return err + return output.ReplyError(lsJSONOutput, "Failed to list: %v", err) } // Convert to output format @@ -163,9 +132,9 @@ func listUserEndpoints(ctx context.Context, client *syfthub.Client, username str } if lsJSONOutput { - result := make([]map[string]interface{}, 0, len(endpoints)) + result := make([]map[string]any, 0, len(endpoints)) for _, ep := range endpoints { - result = append(result, map[string]interface{}{ + result = append(result, map[string]any{ "name": ep.Name, "type": ep.Type, "version": ep.Version, @@ -173,8 +142,8 @@ func listUserEndpoints(ctx context.Context, client *syfthub.Client, username str "description": ep.Description, }) } - output.JSON(map[string]interface{}{ - "status": "success", + output.JSON(map[string]any{ + "status": output.StatusSuccess, "endpoints": result, }) } else if lsLongFormat { @@ -189,15 +158,7 @@ func listUserEndpoints(ctx context.Context, client *syfthub.Client, username str func showEndpoint(ctx context.Context, client *syfthub.Client, path string) error { ep, err := client.Hub.Get(ctx, path) if err != nil { - if lsJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": err.Error(), - }) - } else { - output.Error("%v", err) - } - return err + return output.ReplyError(lsJSONOutput, "%v", err) } if lsJSONOutput { @@ -211,9 +172,9 @@ func showEndpoint(ctx context.Context, client *syfthub.Client, path string) erro updatedAt = &s } - output.JSON(map[string]interface{}{ - "status": "success", - 
"endpoint": map[string]interface{}{ + output.JSON(map[string]any{ + "status": output.StatusSuccess, + "endpoint": map[string]any{ "owner": ep.OwnerUsername, "name": ep.Name, "type": string(ep.Type), diff --git a/cli/internal/cmd/node_endpoint.go b/cli/internal/cmd/node_endpoint.go index 26b83743..c8a0236c 100644 --- a/cli/internal/cmd/node_endpoint.go +++ b/cli/internal/cmd/node_endpoint.go @@ -69,17 +69,13 @@ func runNodeEndpointCreate(cmd *cobra.Command, args []string) error { Version: nodeEPCreateVersion, }) if err != nil { - if nodeEPCreateJSON { - output.JSON(map[string]interface{}{"status": "error", "message": err.Error()}) - } else { - output.Error("%v", err) - } + output.ReplyErrorSoft(nodeEPCreateJSON, "%v", err) return nil } if nodeEPCreateJSON { - output.JSON(map[string]interface{}{ - "status": "success", + output.JSON(map[string]any{ + "status": output.StatusSuccess, "slug": slug, "path": fmt.Sprintf("%s/%s", cfg.EndpointsPath, slug), }) @@ -115,17 +111,13 @@ func runNodeEndpointList(cmd *cobra.Command, args []string) error { endpoints, err := mgr.ListEndpoints() if err != nil { - if nodeEPListJSON { - output.JSON(map[string]interface{}{"status": "error", "message": err.Error()}) - } else { - output.Error("%v", err) - } + output.ReplyErrorSoft(nodeEPListJSON, "%v", err) return nil } if len(endpoints) == 0 { if nodeEPListJSON { - output.JSON(map[string]interface{}{"status": "success", "endpoints": []interface{}{}}) + output.JSON(map[string]any{"status": output.StatusSuccess, "endpoints": []any{}}) } else { fmt.Println("No endpoints found.") fmt.Printf("Create one with: syft node endpoint create --type model\n") @@ -134,7 +126,7 @@ func runNodeEndpointList(cmd *cobra.Command, args []string) error { } if nodeEPListJSON { - output.JSON(map[string]interface{}{"status": "success", "endpoints": endpoints}) + output.JSON(map[string]any{"status": output.StatusSuccess, "endpoints": endpoints}) return nil } @@ -214,16 +206,12 @@ func runNodeEndpointDelete(cmd 
*cobra.Command, args []string) error { } if err := mgr.DeleteEndpoint(slug); err != nil { - if nodeEPDeleteJSON { - output.JSON(map[string]interface{}{"status": "error", "message": err.Error()}) - } else { - output.Error("%v", err) - } + output.ReplyErrorSoft(nodeEPDeleteJSON, "%v", err) return nil } if nodeEPDeleteJSON { - output.JSON(map[string]interface{}{"status": "success", "slug": slug}) + output.JSON(map[string]any{"status": output.StatusSuccess, "slug": slug}) } else { output.Success("Deleted endpoint '%s'.", slug) } @@ -265,7 +253,7 @@ func runNodeEndpointEdit(cmd *cobra.Command, args []string) error { readmePath := fmt.Sprintf("%s/%s/README.md", cfg.EndpointsPath, slug) // Build updates from flags - updates := make(map[string]interface{}) + updates := make(map[string]any) if cmd.Flags().Changed("name") { updates["name"] = nodeEPEditName } @@ -274,12 +262,7 @@ func runNodeEndpointEdit(cmd *cobra.Command, args []string) error { } if cmd.Flags().Changed("type") { if nodeEPEditType != "model" && nodeEPEditType != "data_source" { - msg := "type must be 'model' or 'data_source'" - if nodeEPEditJSON { - output.JSON(map[string]interface{}{"status": "error", "message": msg}) - } else { - output.Error(msg) - } + output.ReplyErrorSoft(nodeEPEditJSON, "type must be 'model' or 'data_source'") return nil } updates["type"] = nodeEPEditType @@ -294,37 +277,23 @@ func runNodeEndpointEdit(cmd *cobra.Command, args []string) error { case "false", "no", "0": updates["enabled"] = false default: - msg := "enabled must be 'true' or 'false'" - if nodeEPEditJSON { - output.JSON(map[string]interface{}{"status": "error", "message": msg}) - } else { - output.Error(msg) - } + output.ReplyErrorSoft(nodeEPEditJSON, "enabled must be 'true' or 'false'") return nil } } if len(updates) == 0 { - msg := "No changes specified. Use --name, --description, --type, --version, or --enabled flags." 
- if nodeEPEditJSON { - output.JSON(map[string]interface{}{"status": "error", "message": msg}) - } else { - output.Error(msg) - } + output.ReplyErrorSoft(nodeEPEditJSON, "No changes specified. Use --name, --description, --type, --version, or --enabled flags.") return nil } if err := nodeops.UpdateReadmeFrontmatter(readmePath, updates); err != nil { - if nodeEPEditJSON { - output.JSON(map[string]interface{}{"status": "error", "message": err.Error()}) - } else { - output.Error("%v", err) - } + output.ReplyErrorSoft(nodeEPEditJSON, "%v", err) return nil } if nodeEPEditJSON { - updates["status"] = "success" + updates["status"] = output.StatusSuccess updates["slug"] = slug output.JSON(updates) } else { diff --git a/cli/internal/cmd/node_endpoint_log.go b/cli/internal/cmd/node_endpoint_log.go index 31a33d22..063bb660 100644 --- a/cli/internal/cmd/node_endpoint_log.go +++ b/cli/internal/cmd/node_endpoint_log.go @@ -23,6 +23,35 @@ var ( nodeEndpointLogJSON bool ) +// jsonlBuf is a reusable 1 MiB scratch buffer for the tail-follow hot path in +// followEndpointLogs. Safe because followEndpointLogs is single-goroutine. +// Concurrent callers (e.g. readLogFile) must allocate their own buffer. +var jsonlBuf [1 << 20]byte + +// scanJSONL calls fn for each decoded log entry in r. Lines that fail to +// unmarshal are skipped silently (matches prior behavior). Raw line bytes are +// passed to fn alongside the parsed entry so callers can emit raw JSON mode +// without re-marshaling. If buf is non-nil, it is installed as the scanner's +// buffer (with max size len(buf)); otherwise the default bufio buffer is used. 
+func scanJSONL(r io.Reader, buf []byte, fn func(line []byte, entry *logEntry)) error { + scanner := bufio.NewScanner(r) + if buf != nil { + scanner.Buffer(buf, len(buf)) + } + for scanner.Scan() { + line := scanner.Bytes() + if len(line) == 0 { + continue + } + var entry logEntry + if json.Unmarshal(line, &entry) != nil { + continue + } + fn(line, &entry) + } + return scanner.Err() +} + var nodeEndpointLogCmd = &cobra.Command{ Use: "log ", Short: "View request logs for an endpoint", @@ -79,8 +108,8 @@ func runNodeEndpointLog(cmd *cobra.Command, args []string) error { if _, err := os.Stat(logDir); os.IsNotExist(err) { if nodeEndpointLogJSON { - output.JSON(map[string]interface{}{ - "status": "error", + output.JSON(map[string]any{ + "status": output.StatusError, "message": fmt.Sprintf("No logs found for endpoint '%s'", slug), }) } else { @@ -106,9 +135,9 @@ func showRecentLogs(logDir, slug string) error { if len(files) == 0 { if nodeEndpointLogJSON { - output.JSON(map[string]interface{}{ - "status": "success", - "logs": []interface{}{}, + output.JSON(map[string]any{ + "status": output.StatusSuccess, + "logs": []any{}, "total": 0, }) } else { @@ -141,8 +170,8 @@ func showRecentLogs(logDir, slug string) error { } if nodeEndpointLogJSON { - output.JSON(map[string]interface{}{ - "status": "success", + output.JSON(map[string]any{ + "status": output.StatusSuccess, "logs": entries, "total": len(entries), }) @@ -166,76 +195,63 @@ func showRecentLogs(logDir, slug string) error { func followEndpointLogs(logDir, slug string) error { fmt.Printf("Following logs for '%s' (Ctrl+C to stop)...\n\n", slug) - // Start by showing the last few entries + emit := func(line []byte, entry *logEntry) { + if nodeEndpointLogJSON { + fmt.Println(string(line)) + } else { + printLogEntry(entry) + } + } + today := time.Now().Format("2006-01-02") todayFile := filepath.Join(logDir, today+".jsonl") - var offset int64 - - // Print existing entries from today's file - if f, err := os.Open(todayFile); 
err == nil { - scanner := bufio.NewScanner(f) - buf := make([]byte, 1024*1024) - scanner.Buffer(buf, 1024*1024) - for scanner.Scan() { - line := scanner.Bytes() - if len(line) == 0 { - continue - } - var entry logEntry - if err := json.Unmarshal(line, &entry); err == nil { - if nodeEndpointLogJSON { - fmt.Println(string(line)) - } else { - printLogEntry(&entry) - } - } - } - offset, _ = f.Seek(0, io.SeekCurrent) - f.Close() + // Open the current date's file once and hold the handle across polls. + var f *os.File + if openF, err := os.Open(todayFile); err == nil { + f = openF + // Initial catch-up: scan from the start to EOF. + _ = scanJSONL(f, jsonlBuf[:], emit) } + // Ensure we close on exit paths (loop is infinite today, but future-proof). + defer func() { + if f != nil { + f.Close() + } + }() - // Poll for new entries + // Poll for new entries. for { time.Sleep(500 * time.Millisecond) - // Check if the date rolled over + // Handle date rollover: close the old handle, open the new file. currentDate := time.Now().Format("2006-01-02") - currentFile := filepath.Join(logDir, currentDate+".jsonl") if currentDate != today { today = currentDate - offset = 0 - } - - f, err := os.Open(currentFile) - if err != nil { - continue - } - - if offset > 0 { - f.Seek(offset, io.SeekStart) + if f != nil { + f.Close() + f = nil + } } - scanner := bufio.NewScanner(f) - buf := make([]byte, 1024*1024) - scanner.Buffer(buf, 1024*1024) - for scanner.Scan() { - line := scanner.Bytes() - if len(line) == 0 { + // If we don't currently have an open handle, try to open the current + // date's file. Skip this tick if it's not yet created. 
+ if f == nil { + currentFile := filepath.Join(logDir, today+".jsonl") + openF, err := os.Open(currentFile) + if err != nil { continue } - var entry logEntry - if err := json.Unmarshal(line, &entry); err == nil { - if nodeEndpointLogJSON { - fmt.Println(string(line)) - } else { - printLogEntry(&entry) - } - } + f = openF } - offset, _ = f.Seek(0, io.SeekCurrent) - f.Close() + // Scan new bytes from current offset to EOF, leaving the handle open. + if err := scanJSONL(f, jsonlBuf[:], emit); err != nil { + // On read error, drop the handle and reopen next tick. + f.Close() + f = nil + continue + } } } @@ -333,20 +349,9 @@ func readLogFile(path string) ([]logEntry, error) { defer f.Close() var entries []logEntry - scanner := bufio.NewScanner(f) - buf := make([]byte, 1024*1024) - scanner.Buffer(buf, 1024*1024) - - for scanner.Scan() { - line := scanner.Bytes() - if len(line) == 0 { - continue - } - var entry logEntry - if err := json.Unmarshal(line, &entry); err == nil { - entries = append(entries, entry) - } - } - - return entries, scanner.Err() + buf := make([]byte, 1<<20) + err = scanJSONL(f, buf, func(_ []byte, entry *logEntry) { + entries = append(entries, *entry) + }) + return entries, err } diff --git a/cli/internal/cmd/node_endpoint_setup.go b/cli/internal/cmd/node_endpoint_setup.go index 367124cb..d07bca0a 100644 --- a/cli/internal/cmd/node_endpoint_setup.go +++ b/cli/internal/cmd/node_endpoint_setup.go @@ -45,12 +45,7 @@ func runNodeEndpointSetup(cmd *cobra.Command, args []string) error { // 1. 
Verify endpoint exists if _, err := os.Stat(endpointDir); os.IsNotExist(err) { - msg := fmt.Sprintf("Endpoint '%s' not found.", slug) - if nodeEPSetupJSON { - output.JSON(map[string]any{"status": "error", "message": msg}) - } else { - output.Error(msg) - } + output.ReplyErrorSoft(nodeEPSetupJSON, "Endpoint '%s' not found.", slug) return nil } @@ -58,21 +53,11 @@ func runNodeEndpointSetup(cmd *cobra.Command, args []string) error { setupPath := filepath.Join(endpointDir, "setup.yaml") spec, err := nodeops.ParseSetupYaml(setupPath) if err != nil { - msg := fmt.Sprintf("Failed to parse setup.yaml: %v", err) - if nodeEPSetupJSON { - output.JSON(map[string]any{"status": "error", "message": msg}) - } else { - output.Error(msg) - } + output.ReplyErrorSoft(nodeEPSetupJSON, "Failed to parse setup.yaml: %v", err) return nil } if spec == nil { - msg := fmt.Sprintf("No setup.yaml found for endpoint '%s'.", slug) - if nodeEPSetupJSON { - output.JSON(map[string]any{"status": "error", "message": msg}) - } else { - output.Error(msg) - } + output.ReplyErrorSoft(nodeEPSetupJSON, "No setup.yaml found for endpoint '%s'.", slug) return nil } @@ -105,19 +90,14 @@ func runNodeEndpointSetup(cmd *cobra.Command, args []string) error { } if err := engine.Execute(ctx); err != nil { - msg := fmt.Sprintf("Setup failed: %v", err) - if nodeEPSetupJSON { - output.JSON(map[string]any{"status": "error", "message": msg}) - } else { - output.Error(msg) - } + output.ReplyErrorSoft(nodeEPSetupJSON, "Setup failed: %v", err) return nil } // 6. 
Report result status, _ := nodeops.GetSetupStatus(endpointDir) if nodeEPSetupJSON { - output.JSON(map[string]any{"status": "success", "setup_status": status}) + output.JSON(map[string]any{"status": output.StatusSuccess, "setup_status": status}) } else { if status != nil { output.Success("Setup complete for '%s' (%d/%d steps)", slug, status.CompletedN, status.TotalSteps) @@ -149,12 +129,7 @@ func runNodeEndpointSetupStatus(cmd *cobra.Command, args []string) error { endpointDir := filepath.Join(cfg.EndpointsPath, slug) if _, err := os.Stat(endpointDir); os.IsNotExist(err) { - msg := fmt.Sprintf("Endpoint '%s' not found.", slug) - if nodeEPSetupStatusJSON { - output.JSON(map[string]any{"status": "error", "message": msg}) - } else { - output.Error(msg) - } + output.ReplyErrorSoft(nodeEPSetupStatusJSON, "Endpoint '%s' not found.", slug) return nil } @@ -165,17 +140,12 @@ func runNodeEndpointSetupStatus(cmd *cobra.Command, args []string) error { } if status == nil { - msg := fmt.Sprintf("Endpoint '%s' has no setup.yaml.", slug) - if nodeEPSetupStatusJSON { - output.JSON(map[string]any{"status": "error", "message": msg}) - } else { - output.Error(msg) - } + output.ReplyErrorSoft(nodeEPSetupStatusJSON, "Endpoint '%s' has no setup.yaml.", slug) return nil } if nodeEPSetupStatusJSON { - output.JSON(map[string]any{"status": "success", "setup_status": status}) + output.JSON(map[string]any{"status": output.StatusSuccess, "setup_status": status}) return nil } diff --git a/cli/internal/cmd/node_endpoint_setup_init.go b/cli/internal/cmd/node_endpoint_setup_init.go index c30854dd..02da9fe6 100644 --- a/cli/internal/cmd/node_endpoint_setup_init.go +++ b/cli/internal/cmd/node_endpoint_setup_init.go @@ -62,7 +62,7 @@ func runSetupInit(cmd *cobra.Command, args []string) error { "tags": m.Tags, } } - output.JSON(map[string]any{"status": "success", "connectors": items}) + output.JSON(map[string]any{"status": output.StatusSuccess, "connectors": items}) return nil } @@ -85,22 +85,12 @@ 
func runSetupInit(cmd *cobra.Command, args []string) error { // Scaffold mode requires a slug if len(args) == 0 { - msg := "Endpoint slug is required. Usage: syft node endpoint setup-init --connector " - if setupInitJSON { - output.JSON(map[string]any{"status": "error", "message": msg}) - } else { - output.Error(msg) - } + output.ReplyErrorSoft(setupInitJSON, "Endpoint slug is required. Usage: syft node endpoint setup-init --connector ") return nil } if len(setupInitConnectors) == 0 { - msg := "At least one --connector is required. Use --list to see available connectors." - if setupInitJSON { - output.JSON(map[string]any{"status": "error", "message": msg}) - } else { - output.Error(msg) - } + output.ReplyErrorSoft(setupInitJSON, "At least one --connector is required. Use --list to see available connectors.") return nil } @@ -110,52 +100,32 @@ func runSetupInit(cmd *cobra.Command, args []string) error { // Check endpoint exists if _, err := os.Stat(endpointDir); os.IsNotExist(err) { - msg := fmt.Sprintf("Endpoint '%s' not found at %s.", slug, endpointDir) - if setupInitJSON { - output.JSON(map[string]any{"status": "error", "message": msg}) - } else { - output.Error(msg) - } + output.ReplyErrorSoft(setupInitJSON, "Endpoint '%s' not found at %s.", slug, endpointDir) return nil } // Check setup.yaml doesn't already exist (unless --force) setupPath := filepath.Join(endpointDir, "setup.yaml") if _, err := os.Stat(setupPath); err == nil && !setupInitForce { - msg := fmt.Sprintf("setup.yaml already exists for '%s'. Use --force to overwrite.", slug) - if setupInitJSON { - output.JSON(map[string]any{"status": "error", "message": msg}) - } else { - output.Error(msg) - } + output.ReplyErrorSoft(setupInitJSON, "setup.yaml already exists for '%s'. 
Use --force to overwrite.", slug) return nil } // Scaffold spec, err := registry.Scaffold(setupInitConnectors, nil) if err != nil { - msg := fmt.Sprintf("Failed to scaffold setup.yaml: %v", err) - if setupInitJSON { - output.JSON(map[string]any{"status": "error", "message": msg}) - } else { - output.Error(msg) - } + output.ReplyErrorSoft(setupInitJSON, "Failed to scaffold setup.yaml: %v", err) return nil } if err := nodeops.WriteSetupYaml(setupPath, spec); err != nil { - msg := fmt.Sprintf("Failed to write setup.yaml: %v", err) - if setupInitJSON { - output.JSON(map[string]any{"status": "error", "message": msg}) - } else { - output.Error(msg) - } + output.ReplyErrorSoft(setupInitJSON, "Failed to write setup.yaml: %v", err) return nil } if setupInitJSON { output.JSON(map[string]any{ - "status": "success", + "status": output.StatusSuccess, "slug": slug, "connectors": setupInitConnectors, "path": setupPath, diff --git a/cli/internal/cmd/node_init.go b/cli/internal/cmd/node_init.go index c0e9fdab..a30817fb 100644 --- a/cli/internal/cmd/node_init.go +++ b/cli/internal/cmd/node_init.go @@ -70,7 +70,7 @@ func runNodeInit(cmd *cobra.Command, args []string) error { if alreadyRunning { if nodeInitJSON { output.JSON(map[string]any{ - "status": "error", + "status": output.StatusError, "message": "Node is already configured and running. 
Use --force to reinitialize.", "path": nodeconfig.ConfigFile, }) @@ -89,7 +89,7 @@ func runNodeInit(cmd *cobra.Command, args []string) error { if err != nil { if nodeInitJSON { output.JSON(map[string]any{ - "status": "error", + "status": output.StatusError, "message": fmt.Sprintf("Failed to start daemon: %v", err), }) } else { @@ -100,7 +100,7 @@ func runNodeInit(cmd *cobra.Command, args []string) error { } if nodeInitJSON { output.JSON(map[string]any{ - "status": "success", + "status": output.StatusSuccess, "config_path": nodeconfig.ConfigFile, "endpoints_path": existing.EndpointsPath, "syfthub_url": existing.HubURL, @@ -126,7 +126,6 @@ func runNodeInit(cmd *cobra.Command, args []string) error { cfgCopy := *existing cfg := &cfgCopy - cfg.IsConfigured = false // Apply flag overrides if nodeInitHubURL != "" { @@ -157,7 +156,7 @@ func runNodeInit(cmd *cobra.Command, args []string) error { if err := nodeconfig.EnsureConfigDir(); err != nil { if nodeInitJSON { - output.JSON(map[string]any{"status": "error", "message": err.Error()}) + output.JSON(map[string]any{"status": output.StatusError, "message": err.Error()}) } else { output.Error("Failed to create config directory: %v", err) } @@ -166,7 +165,7 @@ func runNodeInit(cmd *cobra.Command, args []string) error { if err := os.MkdirAll(cfg.EndpointsPath, 0755); err != nil { if nodeInitJSON { - output.JSON(map[string]any{"status": "error", "message": err.Error()}) + output.JSON(map[string]any{"status": output.StatusError, "message": err.Error()}) } else { output.Error("Failed to create endpoints directory: %v", err) } @@ -175,7 +174,7 @@ func runNodeInit(cmd *cobra.Command, args []string) error { if err := cfg.Save(); err != nil { if nodeInitJSON { - output.JSON(map[string]any{"status": "error", "message": err.Error()}) + output.JSON(map[string]any{"status": output.StatusError, "message": err.Error()}) } else { output.Error("Failed to save configuration: %v", err) } @@ -186,7 +185,7 @@ func runNodeInit(cmd *cobra.Command, 
args []string) error { if err != nil { if nodeInitJSON { output.JSON(map[string]any{ - "status": "error", + "status": output.StatusError, "message": fmt.Sprintf("Config saved but failed to start daemon: %v", err), }) } else { @@ -199,7 +198,7 @@ func runNodeInit(cmd *cobra.Command, args []string) error { if nodeInitJSON { output.JSON(map[string]any{ - "status": "success", + "status": output.StatusSuccess, "config_path": nodeconfig.ConfigFile, "endpoints_path": cfg.EndpointsPath, "syfthub_url": cfg.HubURL, diff --git a/cli/internal/cmd/node_policy.go b/cli/internal/cmd/node_policy.go index eee56a02..fb2d9b6f 100644 --- a/cli/internal/cmd/node_policy.go +++ b/cli/internal/cmd/node_policy.go @@ -59,16 +59,11 @@ func runNodePolicyAdd(cmd *cobra.Command, args []string) error { cfg := nodeconfig.Load() // Parse config key=value pairs - configMap := make(map[string]interface{}) + configMap := make(map[string]any) for _, kv := range nodePolicyAddConfig { parts := strings.SplitN(kv, "=", 2) if len(parts) != 2 { - msg := fmt.Sprintf("Invalid config format: %q (expected key=value)", kv) - if nodePolicyAddJSON { - output.JSON(map[string]interface{}{"status": "error", "message": msg}) - } else { - output.Error(msg) - } + output.ReplyErrorSoft(nodePolicyAddJSON, "Invalid config format: %q (expected key=value)", kv) return nil } configMap[parts[0]] = parts[1] @@ -82,17 +77,13 @@ func runNodePolicyAdd(cmd *cobra.Command, args []string) error { policiesPath := filepath.Join(cfg.EndpointsPath, slug, "policies.yaml") if err := nodeops.SavePolicy(policiesPath, policy); err != nil { - if nodePolicyAddJSON { - output.JSON(map[string]interface{}{"status": "error", "message": err.Error()}) - } else { - output.Error("%v", err) - } + output.ReplyErrorSoft(nodePolicyAddJSON, "%v", err) return nil } if nodePolicyAddJSON { - output.JSON(map[string]interface{}{ - "status": "success", + output.JSON(map[string]any{ + "status": output.StatusSuccess, "slug": slug, "policy": nodePolicyAddName, }) @@ 
-125,16 +116,12 @@ func runNodePolicyList(cmd *cobra.Command, args []string) error { policiesPath := filepath.Join(cfg.EndpointsPath, slug, "policies.yaml") policies, err := nodeops.GetPolicies(policiesPath) if err != nil { - if nodePolicyListJSON { - output.JSON(map[string]interface{}{"status": "error", "message": err.Error()}) - } else { - output.Error("%v", err) - } + output.ReplyErrorSoft(nodePolicyListJSON, "%v", err) return nil } if nodePolicyListJSON { - output.JSON(map[string]interface{}{"status": "success", "slug": slug, "policies": policies}) + output.JSON(map[string]any{"status": output.StatusSuccess, "slug": slug, "policies": policies}) return nil } @@ -199,16 +186,12 @@ func runNodePolicyRemove(cmd *cobra.Command, args []string) error { policiesPath := filepath.Join(cfg.EndpointsPath, slug, "policies.yaml") if err := nodeops.DeletePolicy(policiesPath, nodePolicyRemoveName); err != nil { - if nodePolicyRemoveJSON { - output.JSON(map[string]interface{}{"status": "error", "message": err.Error()}) - } else { - output.Error("%v", err) - } + output.ReplyErrorSoft(nodePolicyRemoveJSON, "%v", err) return nil } if nodePolicyRemoveJSON { - output.JSON(map[string]interface{}{"status": "success", "slug": slug, "policy": nodePolicyRemoveName}) + output.JSON(map[string]any{"status": output.StatusSuccess, "slug": slug, "policy": nodePolicyRemoveName}) } else { output.Success("Removed policy '%s' from endpoint '%s'.", nodePolicyRemoveName, slug) } diff --git a/cli/internal/cmd/node_run.go b/cli/internal/cmd/node_run.go index d6aad1e5..b3f52379 100644 --- a/cli/internal/cmd/node_run.go +++ b/cli/internal/cmd/node_run.go @@ -14,7 +14,6 @@ import ( "github.com/spf13/cobra" - "github.com/openmined/syfthub/sdk/golang/syfthub" "github.com/openmined/syfthub/sdk/golang/syfthubapi" "github.com/openmined/syfthub/sdk/golang/syfthubapi/containermode" "github.com/openmined/syfthub/sdk/golang/syfthubapi/filemode" @@ -22,6 +21,7 @@ import ( 
"github.com/openmined/syfthub/sdk/golang/syfthubapi/setupflow" "github.com/openmined/syfthub/sdk/golang/syfthubapi/transport" + "github.com/OpenMined/syfthub/cli/internal/clientutil" "github.com/OpenMined/syfthub/cli/internal/nodeconfig" ) @@ -53,10 +53,7 @@ func runNodeRun(cmd *cobra.Command, args []string) error { // Derive tunnel username from API key fmt.Println("Authenticating with SyftHub...") - hubClient, err := syfthub.NewClient( - syfthub.WithBaseURL(cfg.HubURL), - syfthub.WithAPIToken(cfg.APIToken), - ) + hubClient, err := clientutil.NewClient(cfg, "", 0) if err != nil { return fmt.Errorf("failed to create hub client: %w", err) } @@ -212,20 +209,23 @@ func runNodeRun(cmd *cobra.Command, args []string) error { select { case <-ticker.C: results := lifecycleMgr.CheckAndRefresh(endpointsPath) + anyRefreshed := false for _, r := range results { if r.Success { logger.Info("refreshed token", "slug", r.Slug, "step", r.StepID) - // Trigger endpoint reload - reloadEndpoints, loadErr := provider.LoadEndpoints() - if loadErr != nil { - logger.Warn("failed to reload endpoints after token refresh", "error", loadErr) - } else { - api.Registry().ReplaceFileBased(reloadEndpoints) - } + anyRefreshed = true } else if r.Error != nil { logger.Warn("token refresh failed", "slug", r.Slug, "step", r.StepID, "error", r.Error) } } + if anyRefreshed { + reloadEndpoints, loadErr := provider.LoadEndpoints() + if loadErr != nil { + logger.Warn("failed to reload endpoints after token refresh", "error", loadErr) + } else { + api.Registry().ReplaceFileBased(reloadEndpoints) + } + } case <-ctx.Done(): return } @@ -284,8 +284,3 @@ type slogAdapter struct { func newSlogAdapter(l *slog.Logger) *slogAdapter { return &slogAdapter{l} } - -func (s *slogAdapter) Debug(msg string, args ...any) { s.Logger.Debug(msg, args...) } -func (s *slogAdapter) Info(msg string, args ...any) { s.Logger.Info(msg, args...) } -func (s *slogAdapter) Warn(msg string, args ...any) { s.Logger.Warn(msg, args...) 
} -func (s *slogAdapter) Error(msg string, args ...any) { s.Logger.Error(msg, args...) } diff --git a/cli/internal/cmd/node_status.go b/cli/internal/cmd/node_status.go index 7a3b582c..3f9caa47 100644 --- a/cli/internal/cmd/node_status.go +++ b/cli/internal/cmd/node_status.go @@ -69,8 +69,8 @@ func runNodeStatus(cmd *cobra.Command, args []string) error { } if nodeStatusJSON { - data := map[string]interface{}{ - "status": "success", + data := map[string]any{ + "status": output.StatusSuccess, "running": running, "server_reachable": serverReachable, "configured": cfg.Configured(), diff --git a/cli/internal/cmd/node_stop.go b/cli/internal/cmd/node_stop.go index 120a9ae7..d2763fe9 100644 --- a/cli/internal/cmd/node_stop.go +++ b/cli/internal/cmd/node_stop.go @@ -1,7 +1,6 @@ package cmd import ( - "fmt" "os" "syscall" "time" @@ -29,24 +28,14 @@ func init() { func runNodeStop(cmd *cobra.Command, args []string) error { pid, err := nodeconfig.ReadPID() if err != nil { - msg := "No running node found." - if nodeStopJSON { - output.JSON(map[string]interface{}{"status": "error", "message": msg}) - } else { - output.Error(msg) - } + output.ReplyErrorSoft(nodeStopJSON, "No running node found.") return nil } proc, err := os.FindProcess(pid) if err != nil { nodeconfig.RemovePID() - msg := fmt.Sprintf("Process %d not found.", pid) - if nodeStopJSON { - output.JSON(map[string]interface{}{"status": "error", "message": msg}) - } else { - output.Error(msg) - } + output.ReplyErrorSoft(nodeStopJSON, "Process %d not found.", pid) return nil } @@ -55,7 +44,7 @@ func runNodeStop(cmd *cobra.Command, args []string) error { nodeconfig.RemovePID() msg := "Node is not running (stale PID file removed)." 
if nodeStopJSON { - output.JSON(map[string]interface{}{"status": "error", "message": msg}) + output.JSON(map[string]any{"status": output.StatusError, "message": msg}) } else { output.Warning(msg) } @@ -64,13 +53,7 @@ func runNodeStop(cmd *cobra.Command, args []string) error { // Send SIGTERM if err := proc.Signal(syscall.SIGTERM); err != nil { - msg := fmt.Sprintf("Failed to stop node: %v", err) - if nodeStopJSON { - output.JSON(map[string]interface{}{"status": "error", "message": msg}) - } else { - output.Error(msg) - } - return err + return output.ReplyError(nodeStopJSON, "Failed to stop node: %v", err) } // Wait for process to exit (up to 5 seconds) @@ -87,13 +70,13 @@ func runNodeStop(cmd *cobra.Command, args []string) error { if stopped { if nodeStopJSON { - output.JSON(map[string]interface{}{"status": "success", "message": "Node stopped.", "pid": pid}) + output.JSON(map[string]any{"status": output.StatusSuccess, "message": "Node stopped.", "pid": pid}) } else { output.Success("Node stopped (PID %d).", pid) } } else { if nodeStopJSON { - output.JSON(map[string]interface{}{"status": "warning", "message": "SIGTERM sent but process may still be running.", "pid": pid}) + output.JSON(map[string]any{"status": "warning", "message": "SIGTERM sent but process may still be running.", "pid": pid}) } else { output.Warning("SIGTERM sent to PID %d but process may still be running.", pid) } diff --git a/cli/internal/cmd/query.go b/cli/internal/cmd/query.go index 56643091..0cb3101d 100644 --- a/cli/internal/cmd/query.go +++ b/cli/internal/cmd/query.go @@ -3,10 +3,10 @@ package cmd import ( "context" "fmt" - "time" "github.com/spf13/cobra" + "github.com/OpenMined/syfthub/cli/internal/clientutil" "github.com/OpenMined/syfthub/cli/internal/completion" "github.com/OpenMined/syfthub/cli/internal/config" "github.com/OpenMined/syfthub/cli/internal/output" @@ -61,32 +61,12 @@ func runQuery(cmd *cobra.Command, args []string) error { cfg := config.Load() - // Resolve aggregator URL + 
// Resolve aggregator URL (still needed downstream for ChatCompleteRequest). aggregatorURL := cfg.GetAggregatorURL(queryAggregator) - // Create client options - opts := []syfthub.Option{ - syfthub.WithBaseURL(cfg.HubURL), - syfthub.WithTimeout(time.Duration(cfg.Timeout) * time.Second), - } - if aggregatorURL != "" { - opts = append(opts, syfthub.WithAggregatorURL(aggregatorURL)) - } - if cfg.HasAPIToken() { - opts = append(opts, syfthub.WithAPIToken(cfg.APIToken)) - } - - client, err := syfthub.NewClient(opts...) + client, err := clientutil.NewClient(cfg, queryAggregator, 0) if err != nil { - if queryJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": err.Error(), - }) - } else { - output.Error("Failed to create client: %v", err) - } - return err + return output.ReplyError(queryJSONOutput, "Failed to create client: %v", err) } defer client.Close() @@ -111,26 +91,22 @@ func queryComplete(ctx context.Context, client *syfthub.Client, target, prompt, response, err := client.Chat().Complete(ctx, req) if err != nil { - output.JSON(map[string]interface{}{ - "status": "error", - "message": err.Error(), - }) - return err + return output.ReplyError(true, "%s", err.Error()) } // Format sources - sources := make([]map[string]interface{}, 0) + sources := make([]map[string]any, 0) for title, doc := range response.Sources { - sources = append(sources, map[string]interface{}{ + sources = append(sources, map[string]any{ "title": title, "slug": doc.Slug, }) } // Format retrieval info - retrievalInfo := make([]map[string]interface{}, 0) + retrievalInfo := make([]map[string]any, 0) for _, info := range response.RetrievalInfo { - retrievalInfo = append(retrievalInfo, map[string]interface{}{ + retrievalInfo = append(retrievalInfo, map[string]any{ "path": info.Path, "documents_retrieved": info.DocumentsRetrieved, "status": string(info.Status), @@ -149,8 +125,8 @@ func queryComplete(ctx context.Context, client *syfthub.Client, target, prompt, 
usage["total_tokens"] = response.Usage.TotalTokens } - output.JSON(map[string]interface{}{ - "status": "success", + output.JSON(map[string]any{ + "status": output.StatusSuccess, "response": response.Response, "sources": sources, "retrieval_info": retrievalInfo, diff --git a/cli/internal/cmd/remove.go b/cli/internal/cmd/remove.go index 3f4f3b78..5cf675e8 100644 --- a/cli/internal/cmd/remove.go +++ b/cli/internal/cmd/remove.go @@ -2,9 +2,6 @@ package cmd import ( "github.com/spf13/cobra" - - "github.com/OpenMined/syfthub/cli/internal/config" - "github.com/OpenMined/syfthub/cli/internal/output" ) var removeCmd = &cobra.Command{ @@ -21,7 +18,9 @@ var removeAggregatorCmd = &cobra.Command{ Use: "aggregator ", Short: "Remove an aggregator alias", Args: cobra.ExactArgs(1), - RunE: runRemoveAggregator, + RunE: func(cmd *cobra.Command, args []string) error { + return runRemoveAlias(aggregatorKind, args[0], removeAggregatorJSONOutput) + }, } // Remove accounting subcommand @@ -31,7 +30,9 @@ var removeAccountingCmd = &cobra.Command{ Use: "accounting ", Short: "Remove an accounting service alias", Args: cobra.ExactArgs(1), - RunE: runRemoveAccounting, + RunE: func(cmd *cobra.Command, args []string) error { + return runRemoveAlias(accountingKind, args[0], removeAccountingJSONOutput) + }, } func init() { @@ -41,101 +42,3 @@ func init() { removeCmd.AddCommand(removeAggregatorCmd) removeCmd.AddCommand(removeAccountingCmd) } - -func runRemoveAggregator(cmd *cobra.Command, args []string) error { - alias := args[0] - - cfg := config.Load() - - if _, exists := cfg.Aggregators[alias]; !exists { - if removeAggregatorJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": "Aggregator '" + alias + "' not found", - }) - } else { - output.Error("Aggregator '%s' not found.", alias) - } - return nil - } - - delete(cfg.Aggregators, alias) - - // Clear default if it was this alias - if cfg.DefaultAggregator == alias { - cfg.DefaultAggregator = "" - } - - if err := 
cfg.Save(); err != nil { - if removeAggregatorJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": err.Error(), - }) - } else { - output.Error("Failed to save config: %v", err) - } - return err - } - - if removeAggregatorJSONOutput { - output.JSON(map[string]interface{}{ - "status": "success", - "alias": alias, - "message": "Removed", - }) - } else { - output.Success("Removed aggregator '%s'", alias) - } - - return nil -} - -func runRemoveAccounting(cmd *cobra.Command, args []string) error { - alias := args[0] - - cfg := config.Load() - - if _, exists := cfg.AccountingServices[alias]; !exists { - if removeAccountingJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": "Accounting service '" + alias + "' not found", - }) - } else { - output.Error("Accounting service '%s' not found.", alias) - } - return nil - } - - delete(cfg.AccountingServices, alias) - - // Clear default if it was this alias - if cfg.DefaultAccounting == alias { - cfg.DefaultAccounting = "" - } - - if err := cfg.Save(); err != nil { - if removeAccountingJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": err.Error(), - }) - } else { - output.Error("Failed to save config: %v", err) - } - return err - } - - if removeAccountingJSONOutput { - output.JSON(map[string]interface{}{ - "status": "success", - "alias": alias, - "message": "Removed", - }) - } else { - output.Success("Removed accounting service '%s'", alias) - } - - return nil -} diff --git a/cli/internal/cmd/update_aliases.go b/cli/internal/cmd/update_aliases.go index c0dbb291..23495557 100644 --- a/cli/internal/cmd/update_aliases.go +++ b/cli/internal/cmd/update_aliases.go @@ -2,9 +2,6 @@ package cmd import ( "github.com/spf13/cobra" - - "github.com/OpenMined/syfthub/cli/internal/config" - "github.com/OpenMined/syfthub/cli/internal/output" ) var updateCmd = &cobra.Command{ @@ -25,7 +22,9 @@ var updateAggregatorCmd = &cobra.Command{ Use: "aggregator 
", Short: "Update an aggregator alias", Args: cobra.ExactArgs(1), - RunE: runUpdateAggregator, + RunE: func(cmd *cobra.Command, args []string) error { + return runUpdateAlias(aggregatorKind, args[0], updateAggregatorURL, updateAggregatorDefault, updateAggregatorJSONOutput) + }, } // Update accounting subcommand @@ -39,7 +38,9 @@ var updateAccountingCmd = &cobra.Command{ Use: "accounting ", Short: "Update an accounting service alias", Args: cobra.ExactArgs(1), - RunE: runUpdateAccounting, + RunE: func(cmd *cobra.Command, args []string) error { + return runUpdateAlias(accountingKind, args[0], updateAccountingURL, updateAccountingDefault, updateAccountingJSONOutput) + }, } func init() { @@ -57,133 +58,3 @@ func init() { updateCmd.AddCommand(updateAggregatorCmd) updateCmd.AddCommand(updateAccountingCmd) } - -func runUpdateAggregator(cmd *cobra.Command, args []string) error { - alias := args[0] - - cfg := config.Load() - - if _, exists := cfg.Aggregators[alias]; !exists { - if updateAggregatorJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": "Aggregator '" + alias + "' not found", - }) - } else { - output.Error("Aggregator '%s' not found.", alias) - } - return nil - } - - if updateAggregatorURL == "" && !updateAggregatorDefault { - if updateAggregatorJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": "Nothing to update", - }) - } else { - output.Warning("Nothing to update. 
Specify --url or --default.") - } - return nil - } - - if updateAggregatorURL != "" { - cfg.Aggregators[alias] = config.AggregatorConfig{URL: updateAggregatorURL} - } - - if updateAggregatorDefault { - cfg.DefaultAggregator = alias - } - - if err := cfg.Save(); err != nil { - if updateAggregatorJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": err.Error(), - }) - } else { - output.Error("Failed to save config: %v", err) - } - return err - } - - isDefault := cfg.DefaultAggregator == alias - - if updateAggregatorJSONOutput { - output.JSON(map[string]interface{}{ - "status": "success", - "alias": alias, - "url": cfg.Aggregators[alias].URL, - "is_default": isDefault, - }) - } else { - output.Success("Updated aggregator '%s'", alias) - } - - return nil -} - -func runUpdateAccounting(cmd *cobra.Command, args []string) error { - alias := args[0] - - cfg := config.Load() - - if _, exists := cfg.AccountingServices[alias]; !exists { - if updateAccountingJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": "Accounting service '" + alias + "' not found", - }) - } else { - output.Error("Accounting service '%s' not found.", alias) - } - return nil - } - - if updateAccountingURL == "" && !updateAccountingDefault { - if updateAccountingJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": "Nothing to update", - }) - } else { - output.Warning("Nothing to update. 
Specify --url or --default.") - } - return nil - } - - if updateAccountingURL != "" { - cfg.AccountingServices[alias] = config.AccountingConfig{URL: updateAccountingURL} - } - - if updateAccountingDefault { - cfg.DefaultAccounting = alias - } - - if err := cfg.Save(); err != nil { - if updateAccountingJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": err.Error(), - }) - } else { - output.Error("Failed to save config: %v", err) - } - return err - } - - isDefault := cfg.DefaultAccounting == alias - - if updateAccountingJSONOutput { - output.JSON(map[string]interface{}{ - "status": "success", - "alias": alias, - "url": cfg.AccountingServices[alias].URL, - "is_default": isDefault, - }) - } else { - output.Success("Updated accounting service '%s'", alias) - } - - return nil -} diff --git a/cli/internal/cmd/whoami.go b/cli/internal/cmd/whoami.go index f01106ed..408b2f68 100644 --- a/cli/internal/cmd/whoami.go +++ b/cli/internal/cmd/whoami.go @@ -3,13 +3,12 @@ package cmd import ( "context" "fmt" - "time" "github.com/spf13/cobra" + "github.com/OpenMined/syfthub/cli/internal/clientutil" "github.com/OpenMined/syfthub/cli/internal/config" "github.com/OpenMined/syfthub/cli/internal/output" - "github.com/openmined/syfthub/sdk/golang/syfthub" ) var whoamiJSONOutput bool @@ -32,8 +31,8 @@ func runWhoami(cmd *cobra.Command, args []string) error { if !cfg.HasAPIToken() { if whoamiJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", + output.JSON(map[string]any{ + "status": output.StatusError, "message": "Not logged in", }) } else { @@ -42,42 +41,22 @@ func runWhoami(cmd *cobra.Command, args []string) error { return fmt.Errorf("not logged in") } - client, err := syfthub.NewClient( - syfthub.WithBaseURL(cfg.HubURL), - syfthub.WithAPIToken(cfg.APIToken), - syfthub.WithTimeout(time.Duration(cfg.Timeout)*time.Second), - ) + client, err := clientutil.NewClient(cfg, "", 0) if err != nil { - if whoamiJSONOutput { - 
output.JSON(map[string]interface{}{ - "status": "error", - "message": err.Error(), - }) - } else { - output.Error("Failed to create client: %v", err) - } - return err + return output.ReplyError(whoamiJSONOutput, "Failed to create client: %v", err) } defer client.Close() ctx := context.Background() user, err := client.Me(ctx) if err != nil { - if whoamiJSONOutput { - output.JSON(map[string]interface{}{ - "status": "error", - "message": err.Error(), - }) - } else { - output.Error("Failed to get user info: %v", err) - } - return err + return output.ReplyError(whoamiJSONOutput, "Failed to get user info: %v", err) } if whoamiJSONOutput { - output.JSON(map[string]interface{}{ - "status": "success", - "user": map[string]interface{}{ + output.JSON(map[string]any{ + "status": output.StatusSuccess, + "user": map[string]any{ "id": fmt.Sprintf("%d", user.ID), "username": user.Username, "email": user.Email, diff --git a/cli/internal/completion/completion.go b/cli/internal/completion/completion.go index c87f58a6..8e958628 100644 --- a/cli/internal/completion/completion.go +++ b/cli/internal/completion/completion.go @@ -12,6 +12,7 @@ import ( "github.com/spf13/cobra" + "github.com/OpenMined/syfthub/cli/internal/clientutil" "github.com/OpenMined/syfthub/cli/internal/config" "github.com/openmined/syfthub/sdk/golang/syfthub" ) @@ -65,14 +66,7 @@ func getCachedEndpoints() []CachedEndpoint { func fetchAndCacheEndpoints() []CachedEndpoint { cfg := config.Load() - opts := []syfthub.Option{ - syfthub.WithBaseURL(cfg.HubURL), - syfthub.WithTimeout(10 * time.Second), - } - if cfg.HasAPIToken() { - opts = append(opts, syfthub.WithAPIToken(cfg.APIToken)) - } - client, err := syfthub.NewClient(opts...) 
+ client, err := clientutil.NewClient(cfg, "", 10*time.Second) if err != nil { return nil } diff --git a/cli/internal/nodeconfig/config.go b/cli/internal/nodeconfig/config.go index ae6b87a9..d6008e3d 100644 --- a/cli/internal/nodeconfig/config.go +++ b/cli/internal/nodeconfig/config.go @@ -12,6 +12,7 @@ import ( "strconv" "strings" "sync" + "time" ) var configMutex sync.Mutex @@ -213,6 +214,15 @@ func (c *NodeConfig) HasAPIToken() bool { return c.APIToken != "" } +// TimeoutDuration returns the configured request timeout as a time.Duration. +// Returns 0 if Timeout is not positive. +func (c *NodeConfig) TimeoutDuration() time.Duration { + if c.Timeout <= 0 { + return 0 + } + return time.Duration(c.Timeout * float64(time.Second)) +} + func (c *NodeConfig) SetAPIToken(token string) { c.APIToken = token } diff --git a/cli/internal/output/output.go b/cli/internal/output/output.go index 61329c84..c212e05a 100644 --- a/cli/internal/output/output.go +++ b/cli/internal/output/output.go @@ -3,6 +3,7 @@ package output import ( "encoding/json" + "errors" "fmt" "os" "sort" @@ -86,6 +87,51 @@ func Error(format string, args ...interface{}) { fmt.Fprintf(os.Stderr, format+"\n", args...) } +// Status constants for JSON envelopes used by the CLI. +const ( + StatusSuccess = "success" + StatusError = "error" +) + +// ReplyError emits a JSON error envelope when jsonMode is true; otherwise +// prints the formatted message via Error. Returns an error carrying the +// formatted message, suitable as a Cobra RunE return value. +func ReplyError(jsonMode bool, format string, args ...any) error { + msg := fmt.Sprintf(format, args...) + if jsonMode { + JSON(map[string]any{"status": StatusError, "message": msg}) + } else { + Error("%s", msg) + } + return errors.New(msg) +} + +// ReplyErrorSoft is ReplyError without a return value. Use it at sites that +// print the error but return nil to avoid double-printing (e.g. user-input +// validation paths). 
+func ReplyErrorSoft(jsonMode bool, format string, args ...any) { + msg := fmt.Sprintf(format, args...) + if jsonMode { + JSON(map[string]any{"status": StatusError, "message": msg}) + } else { + Error("%s", msg) + } +} + +// ReplySuccess emits a JSON success envelope merged with the given fields +// when jsonMode is true; otherwise prints the text message via Success. +func ReplySuccess(jsonMode bool, fields map[string]any, textFormat string, args ...any) { + if jsonMode { + out := map[string]any{"status": StatusSuccess} + for k, v := range fields { + out[k] = v + } + JSON(out) + } else { + Success(textFormat, args...) + } +} + // Success prints a success message. func Success(format string, args ...interface{}) { Green.Printf(format+"\n", args...) @@ -329,24 +375,6 @@ func PrintOwnersGrid(owners []OwnerInfo) { display = name } - // Calculate visual width - width := len(owner.Username) + 1 // +1 for the slash - if len(counts) > 0 { - // Add space + parens + counts - countWidth := 3 // " ()" - for i, c := range counts { - if i > 0 { - countWidth++ // comma - } - // Count digits + letter - countWidth += len(fmt.Sprintf("%d", owner.ModelCount)) + 1 - _ = c - } - // Simplified: just measure the actual count string - countStr := strings.Join(counts, ",") - width += 3 + len(countStr) // " (" + counts + ")" - } - items = append(items, gridItem{ display: display, width: visualWidth(display), diff --git a/cli/internal/update/update.go b/cli/internal/update/update.go index 99bc9866..0c121797 100644 --- a/cli/internal/update/update.go +++ b/cli/internal/update/update.go @@ -158,10 +158,10 @@ func GetLatestPreRelease() (*VersionInfo, error) { return getLatestRelease(true) } -func getLatestRelease(includePreRelease bool) (*VersionInfo, error) { - client := &http.Client{Timeout: 10 * time.Second} +var githubHTTPClient = &http.Client{Timeout: 10 * time.Second} - resp, err := client.Get(GitHubAPIURL) +func getLatestRelease(includePreRelease bool) (*VersionInfo, error) { + resp, 
err := githubHTTPClient.Get(GitHubAPIURL + "?per_page=10") if err != nil { return nil, err } @@ -319,36 +319,50 @@ func CheckForUpdates(force bool) (*VersionInfo, error) { return nil, nil } -// IsBinaryInstall checks if the CLI was installed as a standalone binary. +// IsBinaryInstall reports whether the running CLI was installed as a +// standalone binary (install script or system package) as opposed to via +// `go install`. Self-update only makes sense for the former. func IsBinaryInstall() bool { - // Check if executable is in a typical binary location exe, err := os.Executable() if err != nil { return false } - - // Resolve symlinks exe, err = filepath.EvalSymlinks(exe) if err != nil { return false } - - // Check if it's in standard binary directories dir := filepath.Dir(exe) - binaryDirs := []string{"/usr/local/bin", "/usr/bin", "/opt/homebrew/bin"} - for _, d := range binaryDirs { + + // Exclude go-install locations: GOBIN or GOPATH/bin (and the default ~/go/bin). + if gobin := os.Getenv("GOBIN"); gobin != "" && dir == gobin { + return false + } + if gopath := os.Getenv("GOPATH"); gopath != "" { + if dir == filepath.Join(gopath, "bin") { + return false + } + } + if home := os.Getenv("HOME"); home != "" { + if dir == filepath.Join(home, "go", "bin") { + return false + } + } + + // Accept well-known install-script targets. + systemBins := []string{"/usr/local/bin", "/usr/bin", "/opt/homebrew/bin"} + for _, d := range systemBins { if dir == d { return true } } - - // Check HOME/bin or similar - home := os.Getenv("HOME") - if home != "" && strings.HasPrefix(dir, home) { - return true + if home := os.Getenv("HOME"); home != "" { + if strings.HasPrefix(dir, filepath.Join(home, ".local", "bin")) || + strings.HasPrefix(dir, filepath.Join(home, "bin")) || + strings.HasPrefix(dir, filepath.Join(home, ".syfthub")) { + return true + } } - - return true // Default to true for most cases + return false } // GetCurrentExecutable returns the path to the current executable. 
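The rewritten `IsBinaryInstall` above replaces a "default to true" heuristic with an explicit classification: go-install locations (`GOBIN`, `GOPATH/bin`, the default `~/go/bin`) are rejected, well-known install-script targets are accepted, and everything else now defaults to false. A minimal sketch of that decision logic, extracted into a pure function so it can be exercised without touching the process environment — `classifyInstallDir` and the `env` map parameter are hypothetical names for illustration, not part of this diff:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// classifyInstallDir mirrors the IsBinaryInstall decision order:
// go-install locations are excluded first, then well-known system and
// per-user install-script targets are accepted; anything else is not
// considered a standalone binary install.
func classifyInstallDir(dir string, env map[string]string) bool {
	// Exclude go-install locations: GOBIN, GOPATH/bin, and the default ~/go/bin.
	if gobin := env["GOBIN"]; gobin != "" && dir == gobin {
		return false
	}
	if gopath := env["GOPATH"]; gopath != "" && dir == filepath.Join(gopath, "bin") {
		return false
	}
	home := env["HOME"]
	if home != "" && dir == filepath.Join(home, "go", "bin") {
		return false
	}
	// Accept well-known system-wide install-script targets.
	for _, d := range []string{"/usr/local/bin", "/usr/bin", "/opt/homebrew/bin"} {
		if dir == d {
			return true
		}
	}
	// Accept per-user install locations under $HOME.
	if home != "" {
		for _, p := range []string{
			filepath.Join(home, ".local", "bin"),
			filepath.Join(home, "bin"),
			filepath.Join(home, ".syfthub"),
		} {
			if strings.HasPrefix(dir, p) {
				return true
			}
		}
	}
	return false
}

func main() {
	env := map[string]string{"HOME": "/home/alice", "GOPATH": "/home/alice/gocode"}
	fmt.Println(classifyInstallDir("/usr/local/bin", env))         // true: install-script target
	fmt.Println(classifyInstallDir("/home/alice/go/bin", env))     // false: default go-install dir
	fmt.Println(classifyInstallDir("/home/alice/.local/bin", env)) // true: per-user install dir
}
```

Note the ordering matters: `GOPATH/bin` is checked before the `$HOME` prefix matches, which is why a `GOPATH` under the home directory no longer triggers the old "anything under `$HOME`" catch-all.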
diff --git a/components/aggregator/Dockerfile b/components/aggregator/Dockerfile index aa713e0c..24686057 100644 --- a/components/aggregator/Dockerfile +++ b/components/aggregator/Dockerfile @@ -11,8 +11,8 @@ COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv # Set working directory WORKDIR /app -# Copy dependency files and README (required by hatchling) -COPY pyproject.toml README.md ./ +# Copy dependency files +COPY pyproject.toml ./ # Install git (required to fetch git-hosted dependencies) RUN apt-get update && apt-get install -y --no-install-recommends git && rm -rf /var/lib/apt/lists/* diff --git a/components/aggregator/README.md b/components/aggregator/README.md deleted file mode 100644 index dbeb0d3a..00000000 --- a/components/aggregator/README.md +++ /dev/null @@ -1,241 +0,0 @@ -# SyftHub Aggregator - -RAG (Retrieval-Augmented Generation) orchestration service for SyftHub, designed to work with SyftAI-Space endpoints. - -## Overview - -The aggregator service coordinates the chat workflow by: - -1. Receiving user prompts with model and data source endpoint references -2. Querying SyftAI-Space data source endpoints for relevant context (in parallel) -3. Building an augmented prompt with retrieved context -4. Calling the SyftAI-Space model endpoint -5. Streaming/returning the response - -**Key Feature:** The aggregator is **stateless** - all required connection information (URLs, slugs, tenant names, user email) is provided in each request. - -## Architecture - -``` -External Service (e.g., Frontend) - │ - │ ChatRequest with: - │ - user_email - │ - model: {url, slug, tenant_name} - │ - data_sources: [{url, slug, tenant_name}, ...] - ▼ -┌─────────────────────────────────────────┐ -│ AGGREGATOR │ -│ │ -│ 1. Query SyftAI-Space data sources │ -│ POST {url}/api/v1/endpoints/{slug}/query -│ 2. Build RAG prompt with context │ -│ 3. Call SyftAI-Space model endpoint │ -│ POST {url}/api/v1/endpoints/{slug}/query -│ 4. 
Stream response back │ -└─────────────────────────────────────────┘ - │ - ▼ - SyftAI-Space Instances -``` - -## API - -### POST /api/v1/chat - -Non-streaming chat completion with RAG context. - -**Request:** -```json -{ - "prompt": "What are the key features?", - "user_email": "user@example.com", - "model": { - "url": "http://syftai-space-1:8080", - "slug": "gpt-model", - "name": "GPT Model", - "tenant_name": "acme-corp" - }, - "data_sources": [ - { - "url": "http://syftai-space-1:8080", - "slug": "docs-dataset", - "name": "Documentation", - "tenant_name": "acme-corp" - }, - { - "url": "http://syftai-space-2:8080", - "slug": "wiki-dataset", - "name": "Wiki", - "tenant_name": null - } - ], - "top_k": 5, - "max_tokens": 1024, - "temperature": 0.7, - "similarity_threshold": 0.5 -} -``` - -**Request Fields:** - -| Field | Type | Required | Description | -|-------|------|----------|-------------| -| `prompt` | string | Yes | The user's question or prompt | -| `user_email` | string | Yes | User email for SyftAI-Space visibility/policy checks | -| `model` | EndpointRef | Yes | Model endpoint reference | -| `data_sources` | EndpointRef[] | No | Data source endpoint references | -| `top_k` | int | No | Documents per source (1-20, default: 5) | -| `max_tokens` | int | No | Max tokens for LLM (default: 1024) | -| `temperature` | float | No | LLM temperature (0.0-2.0, default: 0.7) | -| `similarity_threshold` | float | No | Min similarity score (0.0-1.0, default: 0.5) | - -**EndpointRef Fields:** - -| Field | Type | Required | Description | -|-------|------|----------|-------------| -| `url` | string | Yes | Base URL of the SyftAI-Space instance | -| `slug` | string | Yes | Endpoint slug for the API path | -| `name` | string | No | Display name for logging/attribution | -| `tenant_name` | string | No | Tenant name for X-Tenant-Name header | - -**Response:** -```json -{ - "response": "Based on the context...", - "sources": [ - { - "path": "Documentation", - 
"documents_retrieved": 5, - "status": "success", - "error_message": null - } - ], - "metadata": { - "retrieval_time_ms": 150, - "generation_time_ms": 2000, - "total_time_ms": 2150 - } -} -``` - -### POST /api/v1/chat/stream - -Streaming chat with Server-Sent Events. - -**Request:** Same as `/api/v1/chat` - -**Events:** -- `retrieval_start` - Starting data source queries: `{"sources": N}` -- `source_complete` - One data source finished: `{"path": "...", "status": "success", "documents": N}` -- `retrieval_complete` - All sources done: `{"total_documents": N, "time_ms": N}` -- `generation_start` - Starting model generation: `{}` -- `token` - Response chunk: `{"content": "..."}` -- `done` - Complete with metadata: `{"sources": [...], "metadata": {...}}` -- `error` - Error occurred: `{"message": "..."}` - -## Configuration - -Environment variables (prefix: `AGGREGATOR_`): - -| Variable | Default | Description | -|----------|---------|-------------| -| `DEBUG` | `false` | Enable debug mode | -| `HOST` | `0.0.0.0` | Server host | -| `PORT` | `8001` | Server port | -| `RETRIEVAL_TIMEOUT` | `30.0` | Timeout for data source queries (seconds) | -| `GENERATION_TIMEOUT` | `120.0` | Timeout for model generation (seconds) | - -## SyftAI-Space Compatibility - -The aggregator is designed to work with SyftAI-Space's unified endpoint API: - -``` -POST /api/v1/endpoints/{slug}/query -``` - -### Requirements for SyftAI-Space Endpoints - -**For Data Sources:** -- Endpoint must have a dataset configured -- Endpoint's `response_type` should include references (`"raw"` or `"both"`) -- Endpoint must be `published: true` -- User email must be in the endpoint's `visibility` list (or visibility = `["*"]`) - -**For Models:** -- Endpoint must have a model configured -- Endpoint's `response_type` should include summary (`"summary"` or `"both"`) -- Endpoint must be `published: true` -- User email must be in the endpoint's `visibility` list (or visibility = `["*"]`) - -### Multi-tenancy Support - 
-When connecting to SyftAI-Space instances with multi-tenancy enabled: -- Set `tenant_name` in the EndpointRef -- The aggregator will include `X-Tenant-Name` header in requests - -## Development - -### Prerequisites - -- Python 3.11+ -- [uv](https://github.com/astral-sh/uv) package manager - -### Setup - -```bash -cd aggregator - -# Create virtual environment and install dependencies -uv venv -source .venv/bin/activate -uv pip install -e ".[dev]" - -# Run the server -uvicorn aggregator.main:app --reload --port 8001 -``` - -### Docker - -```bash -# Build and run standalone -docker compose up --build - -# Or with the main SyftHub stack -cd .. && docker compose -f docker-compose.dev.yml up --build -``` - -### Testing - -```bash -uv run pytest tests/ -v -``` - -## Example Usage - -```python -import httpx - -response = httpx.post( - "http://localhost:8001/api/v1/chat", - json={ - "prompt": "What is machine learning?", - "user_email": "user@example.com", - "model": { - "url": "http://localhost:8080", - "slug": "gpt-endpoint", - }, - "data_sources": [ - { - "url": "http://localhost:8080", - "slug": "ml-docs", - "name": "ML Documentation", - } - ], - "top_k": 5, - "max_tokens": 512, - } -) - -print(response.json()) -``` diff --git a/components/aggregator/pyproject.toml b/components/aggregator/pyproject.toml index 19dd3fe2..9740c813 100644 --- a/components/aggregator/pyproject.toml +++ b/components/aggregator/pyproject.toml @@ -2,7 +2,6 @@ name = "syfthub-aggregator" version = "0.1.0" description = "RAG orchestration service for SyftHub - aggregates context from data sources and generates responses via model endpoints" -readme = "README.md" license = { text = "Apache-2.0" } requires-python = ">=3.11" authors = [{ name = "SyftHub Team" }] @@ -48,7 +47,7 @@ dependencies = [ [project.optional-dependencies] dev = [ - "pytest>=8.3.0", + "pytest>=9.0.3", "pytest-asyncio>=0.24.0", "pytest-cov>=6.0.0", "pytest-xdist>=3.5.0", diff --git a/components/aggregator/uv.lock 
b/components/aggregator/uv.lock index aeca4305..ac0c7d24 100644 --- a/components/aggregator/uv.lock +++ b/components/aggregator/uv.lock @@ -386,14 +386,14 @@ wheels = [ [[package]] name = "click" -version = "8.3.1" +version = "8.1.8" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "colorama", marker = "sys_platform == 'win32'" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/3d/fa/656b739db8587d7b5dfa22e22ed02566950fbfbcdc20311993483657a5c0/click-8.3.1.tar.gz", hash = "sha256:12ff4785d337a1bb490bb7e9c2b1ee5da3112e94a8622f26a6c77f5d2fc6842a", size = 295065, upload-time = "2025-11-15T20:45:42.706Z" } +sdist = { url = "https://files.pythonhosted.org/packages/b9/2e/0090cbf739cee7d23781ad4b89a9894a41538e4fcf4c31dcdd705b78eb8b/click-8.1.8.tar.gz", hash = "sha256:ed53c9d8990d83c2a27deae68e4ee337473f6330c040a31d4225c9574d16096a", size = 226593, upload-time = "2024-12-21T18:38:44.339Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/98/78/01c019cdb5d6498122777c1a43056ebb3ebfeef2076d9d026bfe15583b2b/click-8.3.1-py3-none-any.whl", hash = "sha256:981153a64e25f12d547d3426c367a4857371575ee7ad18df2a6183ab0545b2a6", size = 108274, upload-time = "2025-11-15T20:45:41.139Z" }, + { url = "https://files.pythonhosted.org/packages/7e/d4/7ebdbd03970677812aac39c869717059dbb71a4cfc033ca6e5221787892c/click-8.1.8-py3-none-any.whl", hash = "sha256:63c132bbbed01578a06712a2d1f497bb62d9c1c0d329b7903a866228027263b2", size = 98188, upload-time = "2024-12-21T18:38:41.666Z" }, ] [[package]] @@ -1056,14 +1056,14 @@ wheels = [ [[package]] name = "importlib-metadata" -version = "8.7.1" +version = "8.5.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "zipp" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/f3/49/3b30cad09e7771a4982d9975a8cbf64f00d4a1ececb53297f1d9a7be1b10/importlib_metadata-8.7.1.tar.gz", hash = "sha256:49fef1ae6440c182052f407c8d34a68f72efc36db9ca90dc0113398f2fdde8bb", size = 57107, 
upload-time = "2025-12-21T10:00:19.278Z" } +sdist = { url = "https://files.pythonhosted.org/packages/cd/12/33e59336dca5be0c398a7482335911a33aa0e20776128f038019f1a95f1b/importlib_metadata-8.5.0.tar.gz", hash = "sha256:71522656f0abace1d072b9e5481a48f07c138e00f079c38c8f883823f9c26bd7", size = 55304, upload-time = "2024-09-11T14:56:08.937Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/fa/5e/f8e9a1d23b9c20a551a8a02ea3637b4642e22c2626e3a13a9a29cdea99eb/importlib_metadata-8.7.1-py3-none-any.whl", hash = "sha256:5a1f80bf1daa489495071efbb095d75a634cf28a8bc299581244063b53176151", size = 27865, upload-time = "2025-12-21T10:00:18.329Z" }, + { url = "https://files.pythonhosted.org/packages/a0/d9/a1e041c5e7caa9a05c925f4bdbdfb7f006d1f74996af53467bc394c97be7/importlib_metadata-8.5.0-py3-none-any.whl", hash = "sha256:45e54197d28b7a7f1559e60b95e7c567032b602131fbd588f1497f47880aa68b", size = 26514, upload-time = "2024-09-11T14:56:07.019Z" }, ] [[package]] @@ -1192,7 +1192,7 @@ wheels = [ [[package]] name = "jsonschema" -version = "4.26.0" +version = "4.23.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "attrs" }, @@ -1200,9 +1200,9 @@ dependencies = [ { name = "referencing" }, { name = "rpds-py" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/b3/fc/e067678238fa451312d4c62bf6e6cf5ec56375422aee02f9cb5f909b3047/jsonschema-4.26.0.tar.gz", hash = "sha256:0c26707e2efad8aa1bfc5b7ce170f3fccc2e4918ff85989ba9ffa9facb2be326", size = 366583, upload-time = "2026-01-07T13:41:07.246Z" } +sdist = { url = "https://files.pythonhosted.org/packages/38/2e/03362ee4034a4c917f697890ccd4aec0800ccf9ded7f511971c75451deec/jsonschema-4.23.0.tar.gz", hash = "sha256:d71497fef26351a33265337fa77ffeb82423f3ea21283cd9467bb03999266bc4", size = 325778, upload-time = "2024-07-08T18:40:05.546Z" } wheels = [ - { url = 
"https://files.pythonhosted.org/packages/69/90/f63fb5873511e014207a475e2bb4e8b2e570d655b00ac19a9a0ca0a385ee/jsonschema-4.26.0-py3-none-any.whl", hash = "sha256:d489f15263b8d200f8387e64b4c3a75f06629559fb73deb8fdfb525f2dab50ce", size = 90630, upload-time = "2026-01-07T13:41:05.306Z" }, + { url = "https://files.pythonhosted.org/packages/69/4a/4f9dbeb84e8850557c02365a0eee0649abe5eb1d84af92a25731c6c0f922/jsonschema-4.23.0-py3-none-any.whl", hash = "sha256:fbadb6f8b144a8f8cf9f0b89ba94501d143e50411a1278633f56a7acf7fd5566", size = 88462, upload-time = "2024-07-08T18:40:00.165Z" }, ] [[package]] @@ -1282,7 +1282,7 @@ wheels = [ [[package]] name = "litellm" -version = "1.83.0" +version = "1.83.14" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "aiohttp" }, @@ -1298,9 +1298,9 @@ dependencies = [ { name = "tiktoken" }, { name = "tokenizers" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/22/92/6ce9737554994ca8e536e5f4f6a87cc7c4774b656c9eb9add071caf7d54b/litellm-1.83.0.tar.gz", hash = "sha256:860bebc76c4bb27b4cf90b4a77acd66dba25aced37e3db98750de8a1766bfb7a", size = 17333062, upload-time = "2026-03-31T05:08:25.331Z" } +sdist = { url = "https://files.pythonhosted.org/packages/8d/7c/c095649380adc96c8630273c1768c2ad1e74aa2ee1dd8dd05d218a60569f/litellm-1.83.14.tar.gz", hash = "sha256:24aef9b47cdc424c833e32f3727f411741c690832cd1fe4405e0077144fe09c9", size = 14836599, upload-time = "2026-04-26T03:16:10.176Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/19/2c/a670cc050fcd6f45c6199eb99e259c73aea92edba8d5c2fc1b3686d36217/litellm-1.83.0-py3-none-any.whl", hash = "sha256:88c536d339248f3987571493015784671ba3f193a328e1ea6780dbebaa2094a8", size = 15610306, upload-time = "2026-03-31T05:08:21.987Z" }, + { url = "https://files.pythonhosted.org/packages/7f/5c/1b5691575420135e90578543b2bf219497caa33cfd0af64cb38f30288450/litellm-1.83.14-py3-none-any.whl", hash = "sha256:92b11ba2a32cf80707ddf388d18526696c7999a21b418c5e3b6eda1243d2cfdb", 
size = 16457054, upload-time = "2026-04-26T03:16:05.72Z" }, ] [[package]] @@ -1995,89 +1995,89 @@ wheels = [ [[package]] name = "pillow" -version = "12.1.1" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/1f/42/5c74462b4fd957fcd7b13b04fb3205ff8349236ea74c7c375766d6c82288/pillow-12.1.1.tar.gz", hash = "sha256:9ad8fa5937ab05218e2b6a4cff30295ad35afd2f83ac592e68c0d871bb0fdbc4", size = 46980264, upload-time = "2026-02-11T04:23:07.146Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/2b/46/5da1ec4a5171ee7bf1a0efa064aba70ba3d6e0788ce3f5acd1375d23c8c0/pillow-12.1.1-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:e879bb6cd5c73848ef3b2b48b8af9ff08c5b71ecda8048b7dd22d8a33f60be32", size = 5304084, upload-time = "2026-02-11T04:20:27.501Z" }, - { url = "https://files.pythonhosted.org/packages/78/93/a29e9bc02d1cf557a834da780ceccd54e02421627200696fcf805ebdc3fb/pillow-12.1.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:365b10bb9417dd4498c0e3b128018c4a624dc11c7b97d8cc54effe3b096f4c38", size = 4657866, upload-time = "2026-02-11T04:20:29.827Z" }, - { url = "https://files.pythonhosted.org/packages/13/84/583a4558d492a179d31e4aae32eadce94b9acf49c0337c4ce0b70e0a01f2/pillow-12.1.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:d4ce8e329c93845720cd2014659ca67eac35f6433fd3050393d85f3ecef0dad5", size = 6232148, upload-time = "2026-02-11T04:20:31.329Z" }, - { url = "https://files.pythonhosted.org/packages/d5/e2/53c43334bbbb2d3b938978532fbda8e62bb6e0b23a26ce8592f36bcc4987/pillow-12.1.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:fc354a04072b765eccf2204f588a7a532c9511e8b9c7f900e1b64e3e33487090", size = 8038007, upload-time = "2026-02-11T04:20:34.225Z" }, - { url = "https://files.pythonhosted.org/packages/b8/a6/3d0e79c8a9d58150dd98e199d7c1c56861027f3829a3a60b3c2784190180/pillow-12.1.1-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", 
hash = "sha256:7e7976bf1910a8116b523b9f9f58bf410f3e8aa330cd9a2bb2953f9266ab49af", size = 6345418, upload-time = "2026-02-11T04:20:35.858Z" }, - { url = "https://files.pythonhosted.org/packages/a2/c8/46dfeac5825e600579157eea177be43e2f7ff4a99da9d0d0a49533509ac5/pillow-12.1.1-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:597bd9c8419bc7c6af5604e55847789b69123bbe25d65cc6ad3012b4f3c98d8b", size = 7034590, upload-time = "2026-02-11T04:20:37.91Z" }, - { url = "https://files.pythonhosted.org/packages/af/bf/e6f65d3db8a8bbfeaf9e13cc0417813f6319863a73de934f14b2229ada18/pillow-12.1.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:2c1fc0f2ca5f96a3c8407e41cca26a16e46b21060fe6d5b099d2cb01412222f5", size = 6458655, upload-time = "2026-02-11T04:20:39.496Z" }, - { url = "https://files.pythonhosted.org/packages/f9/c2/66091f3f34a25894ca129362e510b956ef26f8fb67a0e6417bc5744e56f1/pillow-12.1.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:578510d88c6229d735855e1f278aa305270438d36a05031dfaae5067cc8eb04d", size = 7159286, upload-time = "2026-02-11T04:20:41.139Z" }, - { url = "https://files.pythonhosted.org/packages/7b/5a/24bc8eb526a22f957d0cec6243146744966d40857e3d8deb68f7902ca6c1/pillow-12.1.1-cp311-cp311-win32.whl", hash = "sha256:7311c0a0dcadb89b36b7025dfd8326ecfa36964e29913074d47382706e516a7c", size = 6328663, upload-time = "2026-02-11T04:20:43.184Z" }, - { url = "https://files.pythonhosted.org/packages/31/03/bef822e4f2d8f9d7448c133d0a18185d3cce3e70472774fffefe8b0ed562/pillow-12.1.1-cp311-cp311-win_amd64.whl", hash = "sha256:fbfa2a7c10cc2623f412753cddf391c7f971c52ca40a3f65dc5039b2939e8563", size = 7031448, upload-time = "2026-02-11T04:20:44.696Z" }, - { url = "https://files.pythonhosted.org/packages/49/70/f76296f53610bd17b2e7d31728b8b7825e3ac3b5b3688b51f52eab7c0818/pillow-12.1.1-cp311-cp311-win_arm64.whl", hash = "sha256:b81b5e3511211631b3f672a595e3221252c90af017e399056d0faabb9538aa80", size = 2453651, upload-time = 
"2026-02-11T04:20:46.243Z" }, - { url = "https://files.pythonhosted.org/packages/07/d3/8df65da0d4df36b094351dce696f2989bec731d4f10e743b1c5f4da4d3bf/pillow-12.1.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:ab323b787d6e18b3d91a72fc99b1a2c28651e4358749842b8f8dfacd28ef2052", size = 5262803, upload-time = "2026-02-11T04:20:47.653Z" }, - { url = "https://files.pythonhosted.org/packages/d6/71/5026395b290ff404b836e636f51d7297e6c83beceaa87c592718747e670f/pillow-12.1.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:adebb5bee0f0af4909c30db0d890c773d1a92ffe83da908e2e9e720f8edf3984", size = 4657601, upload-time = "2026-02-11T04:20:49.328Z" }, - { url = "https://files.pythonhosted.org/packages/b1/2e/1001613d941c67442f745aff0f7cc66dd8df9a9c084eb497e6a543ee6f7e/pillow-12.1.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:bb66b7cc26f50977108790e2456b7921e773f23db5630261102233eb355a3b79", size = 6234995, upload-time = "2026-02-11T04:20:51.032Z" }, - { url = "https://files.pythonhosted.org/packages/07/26/246ab11455b2549b9233dbd44d358d033a2f780fa9007b61a913c5b2d24e/pillow-12.1.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:aee2810642b2898bb187ced9b349e95d2a7272930796e022efaf12e99dccd293", size = 8045012, upload-time = "2026-02-11T04:20:52.882Z" }, - { url = "https://files.pythonhosted.org/packages/b2/8b/07587069c27be7535ac1fe33874e32de118fbd34e2a73b7f83436a88368c/pillow-12.1.1-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a0b1cd6232e2b618adcc54d9882e4e662a089d5768cd188f7c245b4c8c44a397", size = 6349638, upload-time = "2026-02-11T04:20:54.444Z" }, - { url = "https://files.pythonhosted.org/packages/ff/79/6df7b2ee763d619cda2fb4fea498e5f79d984dae304d45a8999b80d6cf5c/pillow-12.1.1-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:7aac39bcf8d4770d089588a2e1dd111cbaa42df5a94be3114222057d68336bd0", size = 7041540, upload-time = "2026-02-11T04:20:55.97Z" }, - 
{ url = "https://files.pythonhosted.org/packages/2c/5e/2ba19e7e7236d7529f4d873bdaf317a318896bac289abebd4bb00ef247f0/pillow-12.1.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:ab174cd7d29a62dd139c44bf74b698039328f45cb03b4596c43473a46656b2f3", size = 6462613, upload-time = "2026-02-11T04:20:57.542Z" }, - { url = "https://files.pythonhosted.org/packages/03/03/31216ec124bb5c3dacd74ce8efff4cc7f52643653bad4825f8f08c697743/pillow-12.1.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:339ffdcb7cbeaa08221cd401d517d4b1fe7a9ed5d400e4a8039719238620ca35", size = 7166745, upload-time = "2026-02-11T04:20:59.196Z" }, - { url = "https://files.pythonhosted.org/packages/1f/e7/7c4552d80052337eb28653b617eafdef39adfb137c49dd7e831b8dc13bc5/pillow-12.1.1-cp312-cp312-win32.whl", hash = "sha256:5d1f9575a12bed9e9eedd9a4972834b08c97a352bd17955ccdebfeca5913fa0a", size = 6328823, upload-time = "2026-02-11T04:21:01.385Z" }, - { url = "https://files.pythonhosted.org/packages/3d/17/688626d192d7261bbbf98846fc98995726bddc2c945344b65bec3a29d731/pillow-12.1.1-cp312-cp312-win_amd64.whl", hash = "sha256:21329ec8c96c6e979cd0dfd29406c40c1d52521a90544463057d2aaa937d66a6", size = 7033367, upload-time = "2026-02-11T04:21:03.536Z" }, - { url = "https://files.pythonhosted.org/packages/ed/fe/a0ef1f73f939b0eca03ee2c108d0043a87468664770612602c63266a43c4/pillow-12.1.1-cp312-cp312-win_arm64.whl", hash = "sha256:af9a332e572978f0218686636610555ae3defd1633597be015ed50289a03c523", size = 2453811, upload-time = "2026-02-11T04:21:05.116Z" }, - { url = "https://files.pythonhosted.org/packages/d5/11/6db24d4bd7685583caeae54b7009584e38da3c3d4488ed4cd25b439de486/pillow-12.1.1-cp313-cp313-ios_13_0_arm64_iphoneos.whl", hash = "sha256:d242e8ac078781f1de88bf823d70c1a9b3c7950a44cdf4b7c012e22ccbcd8e4e", size = 4062689, upload-time = "2026-02-11T04:21:06.804Z" }, - { url = 
"https://files.pythonhosted.org/packages/33/c0/ce6d3b1fe190f0021203e0d9b5b99e57843e345f15f9ef22fcd43842fd21/pillow-12.1.1-cp313-cp313-ios_13_0_arm64_iphonesimulator.whl", hash = "sha256:02f84dfad02693676692746df05b89cf25597560db2857363a208e393429f5e9", size = 4138535, upload-time = "2026-02-11T04:21:08.452Z" }, - { url = "https://files.pythonhosted.org/packages/a0/c6/d5eb6a4fb32a3f9c21a8c7613ec706534ea1cf9f4b3663e99f0d83f6fca8/pillow-12.1.1-cp313-cp313-ios_13_0_x86_64_iphonesimulator.whl", hash = "sha256:e65498daf4b583091ccbb2556c7000abf0f3349fcd57ef7adc9a84a394ed29f6", size = 3601364, upload-time = "2026-02-11T04:21:10.194Z" }, - { url = "https://files.pythonhosted.org/packages/14/a1/16c4b823838ba4c9c52c0e6bbda903a3fe5a1bdbf1b8eb4fff7156f3e318/pillow-12.1.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:6c6db3b84c87d48d0088943bf33440e0c42370b99b1c2a7989216f7b42eede60", size = 5262561, upload-time = "2026-02-11T04:21:11.742Z" }, - { url = "https://files.pythonhosted.org/packages/bb/ad/ad9dc98ff24f485008aa5cdedaf1a219876f6f6c42a4626c08bc4e80b120/pillow-12.1.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:8b7e5304e34942bf62e15184219a7b5ad4ff7f3bb5cca4d984f37df1a0e1aee2", size = 4657460, upload-time = "2026-02-11T04:21:13.786Z" }, - { url = "https://files.pythonhosted.org/packages/9e/1b/f1a4ea9a895b5732152789326202a82464d5254759fbacae4deea3069334/pillow-12.1.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:18e5bddd742a44b7e6b1e773ab5db102bd7a94c32555ba656e76d319d19c3850", size = 6232698, upload-time = "2026-02-11T04:21:15.949Z" }, - { url = "https://files.pythonhosted.org/packages/95/f4/86f51b8745070daf21fd2e5b1fe0eb35d4db9ca26e6d58366562fb56a743/pillow-12.1.1-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:fc44ef1f3de4f45b50ccf9136999d71abb99dca7706bc75d222ed350b9fd2289", size = 8041706, upload-time = "2026-02-11T04:21:17.723Z" }, - { url = 
"https://files.pythonhosted.org/packages/29/9b/d6ecd956bb1266dd1045e995cce9b8d77759e740953a1c9aad9502a0461e/pillow-12.1.1-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5a8eb7ed8d4198bccbd07058416eeec51686b498e784eda166395a23eb99138e", size = 6346621, upload-time = "2026-02-11T04:21:19.547Z" }, - { url = "https://files.pythonhosted.org/packages/71/24/538bff45bde96535d7d998c6fed1a751c75ac7c53c37c90dc2601b243893/pillow-12.1.1-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:47b94983da0c642de92ced1702c5b6c292a84bd3a8e1d1702ff923f183594717", size = 7038069, upload-time = "2026-02-11T04:21:21.378Z" }, - { url = "https://files.pythonhosted.org/packages/94/0e/58cb1a6bc48f746bc4cb3adb8cabff73e2742c92b3bf7a220b7cf69b9177/pillow-12.1.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:518a48c2aab7ce596d3bf79d0e275661b846e86e4d0e7dec34712c30fe07f02a", size = 6460040, upload-time = "2026-02-11T04:21:23.148Z" }, - { url = "https://files.pythonhosted.org/packages/6c/57/9045cb3ff11eeb6c1adce3b2d60d7d299d7b273a2e6c8381a524abfdc474/pillow-12.1.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a550ae29b95c6dc13cf69e2c9dc5747f814c54eeb2e32d683e5e93af56caa029", size = 7164523, upload-time = "2026-02-11T04:21:25.01Z" }, - { url = "https://files.pythonhosted.org/packages/73/f2/9be9cb99f2175f0d4dbadd6616ce1bf068ee54a28277ea1bf1fbf729c250/pillow-12.1.1-cp313-cp313-win32.whl", hash = "sha256:a003d7422449f6d1e3a34e3dd4110c22148336918ddbfc6a32581cd54b2e0b2b", size = 6332552, upload-time = "2026-02-11T04:21:27.238Z" }, - { url = "https://files.pythonhosted.org/packages/3f/eb/b0834ad8b583d7d9d42b80becff092082a1c3c156bb582590fcc973f1c7c/pillow-12.1.1-cp313-cp313-win_amd64.whl", hash = "sha256:344cf1e3dab3be4b1fa08e449323d98a2a3f819ad20f4b22e77a0ede31f0faa1", size = 7040108, upload-time = "2026-02-11T04:21:29.462Z" }, - { url = 
"https://files.pythonhosted.org/packages/d5/7d/fc09634e2aabdd0feabaff4a32f4a7d97789223e7c2042fd805ea4b4d2c2/pillow-12.1.1-cp313-cp313-win_arm64.whl", hash = "sha256:5c0dd1636633e7e6a0afe7bf6a51a14992b7f8e60de5789018ebbdfae55b040a", size = 2453712, upload-time = "2026-02-11T04:21:31.072Z" }, - { url = "https://files.pythonhosted.org/packages/19/2a/b9d62794fc8a0dd14c1943df68347badbd5511103e0d04c035ffe5cf2255/pillow-12.1.1-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:0330d233c1a0ead844fc097a7d16c0abff4c12e856c0b325f231820fee1f39da", size = 5264880, upload-time = "2026-02-11T04:21:32.865Z" }, - { url = "https://files.pythonhosted.org/packages/26/9d/e03d857d1347fa5ed9247e123fcd2a97b6220e15e9cb73ca0a8d91702c6e/pillow-12.1.1-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:5dae5f21afb91322f2ff791895ddd8889e5e947ff59f71b46041c8ce6db790bc", size = 4660616, upload-time = "2026-02-11T04:21:34.97Z" }, - { url = "https://files.pythonhosted.org/packages/f7/ec/8a6d22afd02570d30954e043f09c32772bfe143ba9285e2fdb11284952cd/pillow-12.1.1-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:2e0c664be47252947d870ac0d327fea7e63985a08794758aa8af5b6cb6ec0c9c", size = 6269008, upload-time = "2026-02-11T04:21:36.623Z" }, - { url = "https://files.pythonhosted.org/packages/3d/1d/6d875422c9f28a4a361f495a5f68d9de4a66941dc2c619103ca335fa6446/pillow-12.1.1-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:691ab2ac363b8217f7d31b3497108fb1f50faab2f75dfb03284ec2f217e87bf8", size = 8073226, upload-time = "2026-02-11T04:21:38.585Z" }, - { url = "https://files.pythonhosted.org/packages/a1/cd/134b0b6ee5eda6dc09e25e24b40fdafe11a520bc725c1d0bbaa5e00bf95b/pillow-12.1.1-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e9e8064fb1cc019296958595f6db671fba95209e3ceb0c4734c9baf97de04b20", size = 6380136, upload-time = "2026-02-11T04:21:40.562Z" }, - { url = 
"https://files.pythonhosted.org/packages/7a/a9/7628f013f18f001c1b98d8fffe3452f306a70dc6aba7d931019e0492f45e/pillow-12.1.1-cp313-cp313t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:472a8d7ded663e6162dafdf20015c486a7009483ca671cece7a9279b512fcb13", size = 7067129, upload-time = "2026-02-11T04:21:42.521Z" }, - { url = "https://files.pythonhosted.org/packages/1e/f8/66ab30a2193b277785601e82ee2d49f68ea575d9637e5e234faaa98efa4c/pillow-12.1.1-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:89b54027a766529136a06cfebeecb3a04900397a3590fd252160b888479517bf", size = 6491807, upload-time = "2026-02-11T04:21:44.22Z" }, - { url = "https://files.pythonhosted.org/packages/da/0b/a877a6627dc8318fdb84e357c5e1a758c0941ab1ddffdafd231983788579/pillow-12.1.1-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:86172b0831b82ce4f7877f280055892b31179e1576aa00d0df3bb1bbf8c3e524", size = 7190954, upload-time = "2026-02-11T04:21:46.114Z" }, - { url = "https://files.pythonhosted.org/packages/83/43/6f732ff85743cf746b1361b91665d9f5155e1483817f693f8d57ea93147f/pillow-12.1.1-cp313-cp313t-win32.whl", hash = "sha256:44ce27545b6efcf0fdbdceb31c9a5bdea9333e664cda58a7e674bb74608b3986", size = 6336441, upload-time = "2026-02-11T04:21:48.22Z" }, - { url = "https://files.pythonhosted.org/packages/3b/44/e865ef3986611bb75bfabdf94a590016ea327833f434558801122979cd0e/pillow-12.1.1-cp313-cp313t-win_amd64.whl", hash = "sha256:a285e3eb7a5a45a2ff504e31f4a8d1b12ef62e84e5411c6804a42197c1cf586c", size = 7045383, upload-time = "2026-02-11T04:21:50.015Z" }, - { url = "https://files.pythonhosted.org/packages/a8/c6/f4fb24268d0c6908b9f04143697ea18b0379490cb74ba9e8d41b898bd005/pillow-12.1.1-cp313-cp313t-win_arm64.whl", hash = "sha256:cc7d296b5ea4d29e6570dabeaed58d31c3fea35a633a69679fb03d7664f43fb3", size = 2456104, upload-time = "2026-02-11T04:21:51.633Z" }, - { url = 
"https://files.pythonhosted.org/packages/03/d0/bebb3ffbf31c5a8e97241476c4cf8b9828954693ce6744b4a2326af3e16b/pillow-12.1.1-cp314-cp314-ios_13_0_arm64_iphoneos.whl", hash = "sha256:417423db963cb4be8bac3fc1204fe61610f6abeed1580a7a2cbb2fbda20f12af", size = 4062652, upload-time = "2026-02-11T04:21:53.19Z" }, - { url = "https://files.pythonhosted.org/packages/2d/c0/0e16fb0addda4851445c28f8350d8c512f09de27bbb0d6d0bbf8b6709605/pillow-12.1.1-cp314-cp314-ios_13_0_arm64_iphonesimulator.whl", hash = "sha256:b957b71c6b2387610f556a7eb0828afbe40b4a98036fc0d2acfa5a44a0c2036f", size = 4138823, upload-time = "2026-02-11T04:22:03.088Z" }, - { url = "https://files.pythonhosted.org/packages/6b/fb/6170ec655d6f6bb6630a013dd7cf7bc218423d7b5fa9071bf63dc32175ae/pillow-12.1.1-cp314-cp314-ios_13_0_x86_64_iphonesimulator.whl", hash = "sha256:097690ba1f2efdeb165a20469d59d8bb03c55fb6621eb2041a060ae8ea3e9642", size = 3601143, upload-time = "2026-02-11T04:22:04.909Z" }, - { url = "https://files.pythonhosted.org/packages/59/04/dc5c3f297510ba9a6837cbb318b87dd2b8f73eb41a43cc63767f65cb599c/pillow-12.1.1-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:2815a87ab27848db0321fb78c7f0b2c8649dee134b7f2b80c6a45c6831d75ccd", size = 5266254, upload-time = "2026-02-11T04:22:07.656Z" }, - { url = "https://files.pythonhosted.org/packages/05/30/5db1236b0d6313f03ebf97f5e17cda9ca060f524b2fcc875149a8360b21c/pillow-12.1.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:f7ed2c6543bad5a7d5530eb9e78c53132f93dfa44a28492db88b41cdab885202", size = 4657499, upload-time = "2026-02-11T04:22:09.613Z" }, - { url = "https://files.pythonhosted.org/packages/6f/18/008d2ca0eb612e81968e8be0bbae5051efba24d52debf930126d7eaacbba/pillow-12.1.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:652a2c9ccfb556235b2b501a3a7cf3742148cd22e04b5625c5fe057ea3e3191f", size = 6232137, upload-time = "2026-02-11T04:22:11.434Z" }, - { url = 
"https://files.pythonhosted.org/packages/70/f1/f14d5b8eeb4b2cd62b9f9f847eb6605f103df89ef619ac68f92f748614ea/pillow-12.1.1-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:d6e4571eedf43af33d0fc233a382a76e849badbccdf1ac438841308652a08e1f", size = 8042721, upload-time = "2026-02-11T04:22:13.321Z" }, - { url = "https://files.pythonhosted.org/packages/5a/d6/17824509146e4babbdabf04d8171491fa9d776f7061ff6e727522df9bd03/pillow-12.1.1-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b574c51cf7d5d62e9be37ba446224b59a2da26dc4c1bb2ecbe936a4fb1a7cb7f", size = 6347798, upload-time = "2026-02-11T04:22:15.449Z" }, - { url = "https://files.pythonhosted.org/packages/d1/ee/c85a38a9ab92037a75615aba572c85ea51e605265036e00c5b67dfafbfe2/pillow-12.1.1-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a37691702ed687799de29a518d63d4682d9016932db66d4e90c345831b02fb4e", size = 7039315, upload-time = "2026-02-11T04:22:17.24Z" }, - { url = "https://files.pythonhosted.org/packages/ec/f3/bc8ccc6e08a148290d7523bde4d9a0d6c981db34631390dc6e6ec34cacf6/pillow-12.1.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:f95c00d5d6700b2b890479664a06e754974848afaae5e21beb4d83c106923fd0", size = 6462360, upload-time = "2026-02-11T04:22:19.111Z" }, - { url = "https://files.pythonhosted.org/packages/f6/ab/69a42656adb1d0665ab051eec58a41f169ad295cf81ad45406963105408f/pillow-12.1.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:559b38da23606e68681337ad74622c4dbba02254fc9cb4488a305dd5975c7eeb", size = 7165438, upload-time = "2026-02-11T04:22:21.041Z" }, - { url = "https://files.pythonhosted.org/packages/02/46/81f7aa8941873f0f01d4b55cc543b0a3d03ec2ee30d617a0448bf6bd6dec/pillow-12.1.1-cp314-cp314-win32.whl", hash = "sha256:03edcc34d688572014ff223c125a3f77fb08091e4607e7745002fc214070b35f", size = 6431503, upload-time = "2026-02-11T04:22:22.833Z" }, - { url = 
"https://files.pythonhosted.org/packages/40/72/4c245f7d1044b67affc7f134a09ea619d4895333d35322b775b928180044/pillow-12.1.1-cp314-cp314-win_amd64.whl", hash = "sha256:50480dcd74fa63b8e78235957d302d98d98d82ccbfac4c7e12108ba9ecbdba15", size = 7176748, upload-time = "2026-02-11T04:22:24.64Z" }, - { url = "https://files.pythonhosted.org/packages/e4/ad/8a87bdbe038c5c698736e3348af5c2194ffb872ea52f11894c95f9305435/pillow-12.1.1-cp314-cp314-win_arm64.whl", hash = "sha256:5cb1785d97b0c3d1d1a16bc1d710c4a0049daefc4935f3a8f31f827f4d3d2e7f", size = 2544314, upload-time = "2026-02-11T04:22:26.685Z" }, - { url = "https://files.pythonhosted.org/packages/6c/9d/efd18493f9de13b87ede7c47e69184b9e859e4427225ea962e32e56a49bc/pillow-12.1.1-cp314-cp314t-macosx_10_15_x86_64.whl", hash = "sha256:1f90cff8aa76835cba5769f0b3121a22bd4eb9e6884cfe338216e557a9a548b8", size = 5268612, upload-time = "2026-02-11T04:22:29.884Z" }, - { url = "https://files.pythonhosted.org/packages/f8/f1/4f42eb2b388eb2ffc660dcb7f7b556c1015c53ebd5f7f754965ef997585b/pillow-12.1.1-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:1f1be78ce9466a7ee64bfda57bdba0f7cc499d9794d518b854816c41bf0aa4e9", size = 4660567, upload-time = "2026-02-11T04:22:31.799Z" }, - { url = "https://files.pythonhosted.org/packages/01/54/df6ef130fa43e4b82e32624a7b821a2be1c5653a5fdad8469687a7db4e00/pillow-12.1.1-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:42fc1f4677106188ad9a55562bbade416f8b55456f522430fadab3cef7cd4e60", size = 6269951, upload-time = "2026-02-11T04:22:33.921Z" }, - { url = "https://files.pythonhosted.org/packages/a9/48/618752d06cc44bb4aae8ce0cd4e6426871929ed7b46215638088270d9b34/pillow-12.1.1-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:98edb152429ab62a1818039744d8fbb3ccab98a7c29fc3d5fcef158f3f1f68b7", size = 8074769, upload-time = "2026-02-11T04:22:35.877Z" }, - { url = 
"https://files.pythonhosted.org/packages/c3/bd/f1d71eb39a72fa088d938655afba3e00b38018d052752f435838961127d8/pillow-12.1.1-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d470ab1178551dd17fdba0fef463359c41aaa613cdcd7ff8373f54be629f9f8f", size = 6381358, upload-time = "2026-02-11T04:22:37.698Z" }, - { url = "https://files.pythonhosted.org/packages/64/ef/c784e20b96674ed36a5af839305f55616f8b4f8aa8eeccf8531a6e312243/pillow-12.1.1-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:6408a7b064595afcab0a49393a413732a35788f2a5092fdc6266952ed67de586", size = 7068558, upload-time = "2026-02-11T04:22:39.597Z" }, - { url = "https://files.pythonhosted.org/packages/73/cb/8059688b74422ae61278202c4e1ad992e8a2e7375227be0a21c6b87ca8d5/pillow-12.1.1-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:5d8c41325b382c07799a3682c1c258469ea2ff97103c53717b7893862d0c98ce", size = 6493028, upload-time = "2026-02-11T04:22:42.73Z" }, - { url = "https://files.pythonhosted.org/packages/c6/da/e3c008ed7d2dd1f905b15949325934510b9d1931e5df999bb15972756818/pillow-12.1.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:c7697918b5be27424e9ce568193efd13d925c4481dd364e43f5dff72d33e10f8", size = 7191940, upload-time = "2026-02-11T04:22:44.543Z" }, - { url = "https://files.pythonhosted.org/packages/01/4a/9202e8d11714c1fc5951f2e1ef362f2d7fbc595e1f6717971d5dd750e969/pillow-12.1.1-cp314-cp314t-win32.whl", hash = "sha256:d2912fd8114fc5545aa3a4b5576512f64c55a03f3ebcca4c10194d593d43ea36", size = 6438736, upload-time = "2026-02-11T04:22:46.347Z" }, - { url = "https://files.pythonhosted.org/packages/f3/ca/cbce2327eb9885476b3957b2e82eb12c866a8b16ad77392864ad601022ce/pillow-12.1.1-cp314-cp314t-win_amd64.whl", hash = "sha256:4ceb838d4bd9dab43e06c363cab2eebf63846d6a4aeaea283bbdfd8f1a8ed58b", size = 7182894, upload-time = "2026-02-11T04:22:48.114Z" }, - { url = 
"https://files.pythonhosted.org/packages/ec/d2/de599c95ba0a973b94410477f8bf0b6f0b5e67360eb89bcb1ad365258beb/pillow-12.1.1-cp314-cp314t-win_arm64.whl", hash = "sha256:7b03048319bfc6170e93bd60728a1af51d3dd7704935feb228c4d4faab35d334", size = 2546446, upload-time = "2026-02-11T04:22:50.342Z" }, - { url = "https://files.pythonhosted.org/packages/56/11/5d43209aa4cb58e0cc80127956ff1796a68b928e6324bbf06ef4db34367b/pillow-12.1.1-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:600fd103672b925fe62ed08e0d874ea34d692474df6f4bf7ebe148b30f89f39f", size = 5228606, upload-time = "2026-02-11T04:22:52.106Z" }, - { url = "https://files.pythonhosted.org/packages/5f/d5/3b005b4e4fda6698b371fa6c21b097d4707585d7db99e98d9b0b87ac612a/pillow-12.1.1-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:665e1b916b043cef294bc54d47bf02d87e13f769bc4bc5fa225a24b3a6c5aca9", size = 4622321, upload-time = "2026-02-11T04:22:53.827Z" }, - { url = "https://files.pythonhosted.org/packages/df/36/ed3ea2d594356fd8037e5a01f6156c74bc8d92dbb0fa60746cc96cabb6e8/pillow-12.1.1-pp311-pypy311_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:495c302af3aad1ca67420ddd5c7bd480c8867ad173528767d906428057a11f0e", size = 5247579, upload-time = "2026-02-11T04:22:56.094Z" }, - { url = "https://files.pythonhosted.org/packages/54/9a/9cc3e029683cf6d20ae5085da0dafc63148e3252c2f13328e553aaa13cfb/pillow-12.1.1-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:8fd420ef0c52c88b5a035a0886f367748c72147b2b8f384c9d12656678dfdfa9", size = 6989094, upload-time = "2026-02-11T04:22:58.288Z" }, - { url = "https://files.pythonhosted.org/packages/00/98/fc53ab36da80b88df0967896b6c4b4cd948a0dc5aa40a754266aa3ae48b3/pillow-12.1.1-pp311-pypy311_pp73-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f975aa7ef9684ce7e2c18a3aa8f8e2106ce1e46b94ab713d156b2898811651d3", size = 5313850, upload-time = "2026-02-11T04:23:00.554Z" }, - { url = 
"https://files.pythonhosted.org/packages/30/02/00fa585abfd9fe9d73e5f6e554dc36cc2b842898cbfc46d70353dae227f8/pillow-12.1.1-pp311-pypy311_pp73-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8089c852a56c2966cf18835db62d9b34fef7ba74c726ad943928d494fa7f4735", size = 5963343, upload-time = "2026-02-11T04:23:02.934Z" }, - { url = "https://files.pythonhosted.org/packages/f2/26/c56ce33ca856e358d27fda9676c055395abddb82c35ac0f593877ed4562e/pillow-12.1.1-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:cb9bb857b2d057c6dfc72ac5f3b44836924ba15721882ef103cecb40d002d80e", size = 7029880, upload-time = "2026-02-11T04:23:04.783Z" }, +version = "12.2.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/8c/21/c2bcdd5906101a30244eaffc1b6e6ce71a31bd0742a01eb89e660ebfac2d/pillow-12.2.0.tar.gz", hash = "sha256:a830b1a40919539d07806aa58e1b114df53ddd43213d9c8b75847eee6c0182b5", size = 46987819, upload-time = "2026-04-01T14:46:17.687Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/68/e1/748f5663efe6edcfc4e74b2b93edfb9b8b99b67f21a854c3ae416500a2d9/pillow-12.2.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:8be29e59487a79f173507c30ddf57e733a357f67881430449bb32614075a40ab", size = 5354347, upload-time = "2026-04-01T14:42:44.255Z" }, + { url = "https://files.pythonhosted.org/packages/47/a1/d5ff69e747374c33a3b53b9f98cca7889fce1fd03d79cdc4e1bccc6c5a87/pillow-12.2.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:71cde9a1e1551df7d34a25462fc60325e8a11a82cc2e2f54578e5e9a1e153d65", size = 4695873, upload-time = "2026-04-01T14:42:46.452Z" }, + { url = "https://files.pythonhosted.org/packages/df/21/e3fbdf54408a973c7f7f89a23b2cb97a7ef30c61ab4142af31eee6aebc88/pillow-12.2.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:f490f9368b6fc026f021db16d7ec2fbf7d89e2edb42e8ec09d2c60505f5729c7", size = 6280168, upload-time = "2026-04-01T14:42:49.228Z" }, + { url = 
"https://files.pythonhosted.org/packages/d3/f1/00b7278c7dd52b17ad4329153748f87b6756ec195ff786c2bdf12518337d/pillow-12.2.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:8bd7903a5f2a4545f6fd5935c90058b89d30045568985a71c79f5fd6edf9b91e", size = 8088188, upload-time = "2026-04-01T14:42:51.735Z" }, + { url = "https://files.pythonhosted.org/packages/ad/cf/220a5994ef1b10e70e85748b75649d77d506499352be135a4989c957b701/pillow-12.2.0-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:3997232e10d2920a68d25191392e3a4487d8183039e1c74c2297f00ed1c50705", size = 6394401, upload-time = "2026-04-01T14:42:54.343Z" }, + { url = "https://files.pythonhosted.org/packages/e9/bd/e51a61b1054f09437acfbc2ff9106c30d1eb76bc1453d428399946781253/pillow-12.2.0-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:e74473c875d78b8e9d5da2a70f7099549f9eb37ded4e2f6a463e60125bccd176", size = 7079655, upload-time = "2026-04-01T14:42:56.954Z" }, + { url = "https://files.pythonhosted.org/packages/6b/3d/45132c57d5fb4b5744567c3817026480ac7fc3ce5d4c47902bc0e7f6f853/pillow-12.2.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:56a3f9c60a13133a98ecff6197af34d7824de9b7b38c3654861a725c970c197b", size = 6503105, upload-time = "2026-04-01T14:42:59.847Z" }, + { url = "https://files.pythonhosted.org/packages/7d/2e/9df2fc1e82097b1df3dce58dc43286aa01068e918c07574711fcc53e6fb4/pillow-12.2.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:90e6f81de50ad6b534cab6e5aef77ff6e37722b2f5d908686f4a5c9eba17a909", size = 7203402, upload-time = "2026-04-01T14:43:02.664Z" }, + { url = "https://files.pythonhosted.org/packages/bd/2e/2941e42858ebb67e50ae741473de81c2984e6eff7b397017623c676e2e8d/pillow-12.2.0-cp311-cp311-win32.whl", hash = "sha256:8c984051042858021a54926eb597d6ee3012393ce9c181814115df4c60b9a808", size = 6378149, upload-time = "2026-04-01T14:43:05.274Z" }, + { url = 
"https://files.pythonhosted.org/packages/69/42/836b6f3cd7f3e5fa10a1f1a5420447c17966044c8fbf589cc0452d5502db/pillow-12.2.0-cp311-cp311-win_amd64.whl", hash = "sha256:6e6b2a0c538fc200b38ff9eb6628228b77908c319a005815f2dde585a0664b60", size = 7082626, upload-time = "2026-04-01T14:43:08.557Z" }, + { url = "https://files.pythonhosted.org/packages/c2/88/549194b5d6f1f494b485e493edc6693c0a16f4ada488e5bd974ed1f42fad/pillow-12.2.0-cp311-cp311-win_arm64.whl", hash = "sha256:9a8a34cc89c67a65ea7437ce257cea81a9dad65b29805f3ecee8c8fe8ff25ffe", size = 2463531, upload-time = "2026-04-01T14:43:10.743Z" }, + { url = "https://files.pythonhosted.org/packages/58/be/7482c8a5ebebbc6470b3eb791812fff7d5e0216c2be3827b30b8bb6603ed/pillow-12.2.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:2d192a155bbcec180f8564f693e6fd9bccff5a7af9b32e2e4bf8c9c69dbad6b5", size = 5308279, upload-time = "2026-04-01T14:43:13.246Z" }, + { url = "https://files.pythonhosted.org/packages/d8/95/0a351b9289c2b5cbde0bacd4a83ebc44023e835490a727b2a3bd60ddc0f4/pillow-12.2.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:f3f40b3c5a968281fd507d519e444c35f0ff171237f4fdde090dd60699458421", size = 4695490, upload-time = "2026-04-01T14:43:15.584Z" }, + { url = "https://files.pythonhosted.org/packages/de/af/4e8e6869cbed569d43c416fad3dc4ecb944cb5d9492defaed89ddd6fe871/pillow-12.2.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:03e7e372d5240cc23e9f07deca4d775c0817bffc641b01e9c3af208dbd300987", size = 6284462, upload-time = "2026-04-01T14:43:18.268Z" }, + { url = "https://files.pythonhosted.org/packages/e9/9e/c05e19657fd57841e476be1ab46c4d501bffbadbafdc31a6d665f8b737b6/pillow-12.2.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:b86024e52a1b269467a802258c25521e6d742349d760728092e1bc2d135b4d76", size = 8094744, upload-time = "2026-04-01T14:43:20.716Z" }, + { url = 
"https://files.pythonhosted.org/packages/2b/54/1789c455ed10176066b6e7e6da1b01e50e36f94ba584dc68d9eebfe9156d/pillow-12.2.0-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7371b48c4fa448d20d2714c9a1f775a81155050d383333e0a6c15b1123dda005", size = 6398371, upload-time = "2026-04-01T14:43:23.443Z" }, + { url = "https://files.pythonhosted.org/packages/43/e3/fdc657359e919462369869f1c9f0e973f353f9a9ee295a39b1fea8ee1a77/pillow-12.2.0-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:62f5409336adb0663b7caa0da5c7d9e7bdbaae9ce761d34669420c2a801b2780", size = 7087215, upload-time = "2026-04-01T14:43:26.758Z" }, + { url = "https://files.pythonhosted.org/packages/8b/f8/2f6825e441d5b1959d2ca5adec984210f1ec086435b0ed5f52c19b3b8a6e/pillow-12.2.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:01afa7cf67f74f09523699b4e88c73fb55c13346d212a59a2db1f86b0a63e8c5", size = 6509783, upload-time = "2026-04-01T14:43:29.56Z" }, + { url = "https://files.pythonhosted.org/packages/67/f9/029a27095ad20f854f9dba026b3ea6428548316e057e6fc3545409e86651/pillow-12.2.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:fc3d34d4a8fbec3e88a79b92e5465e0f9b842b628675850d860b8bd300b159f5", size = 7212112, upload-time = "2026-04-01T14:43:32.091Z" }, + { url = "https://files.pythonhosted.org/packages/be/42/025cfe05d1be22dbfdb4f264fe9de1ccda83f66e4fc3aac94748e784af04/pillow-12.2.0-cp312-cp312-win32.whl", hash = "sha256:58f62cc0f00fd29e64b29f4fd923ffdb3859c9f9e6105bfc37ba1d08994e8940", size = 6378489, upload-time = "2026-04-01T14:43:34.601Z" }, + { url = "https://files.pythonhosted.org/packages/5d/7b/25a221d2c761c6a8ae21bfa3874988ff2583e19cf8a27bf2fee358df7942/pillow-12.2.0-cp312-cp312-win_amd64.whl", hash = "sha256:7f84204dee22a783350679a0333981df803dac21a0190d706a50475e361c93f5", size = 7084129, upload-time = "2026-04-01T14:43:37.213Z" }, + { url = 
"https://files.pythonhosted.org/packages/10/e1/542a474affab20fd4a0f1836cb234e8493519da6b76899e30bcc5d990b8b/pillow-12.2.0-cp312-cp312-win_arm64.whl", hash = "sha256:af73337013e0b3b46f175e79492d96845b16126ddf79c438d7ea7ff27783a414", size = 2463612, upload-time = "2026-04-01T14:43:39.421Z" }, + { url = "https://files.pythonhosted.org/packages/4a/01/53d10cf0dbad820a8db274d259a37ba50b88b24768ddccec07355382d5ad/pillow-12.2.0-cp313-cp313-ios_13_0_arm64_iphoneos.whl", hash = "sha256:8297651f5b5679c19968abefd6bb84d95fe30ef712eb1b2d9b2d31ca61267f4c", size = 4100837, upload-time = "2026-04-01T14:43:41.506Z" }, + { url = "https://files.pythonhosted.org/packages/0f/98/f3a6657ecb698c937f6c76ee564882945f29b79bad496abcba0e84659ec5/pillow-12.2.0-cp313-cp313-ios_13_0_arm64_iphonesimulator.whl", hash = "sha256:50d8520da2a6ce0af445fa6d648c4273c3eeefbc32d7ce049f22e8b5c3daecc2", size = 4176528, upload-time = "2026-04-01T14:43:43.773Z" }, + { url = "https://files.pythonhosted.org/packages/69/bc/8986948f05e3ea490b8442ea1c1d4d990b24a7e43d8a51b2c7d8b1dced36/pillow-12.2.0-cp313-cp313-ios_13_0_x86_64_iphonesimulator.whl", hash = "sha256:766cef22385fa1091258ad7e6216792b156dc16d8d3fa607e7545b2b72061f1c", size = 3640401, upload-time = "2026-04-01T14:43:45.87Z" }, + { url = "https://files.pythonhosted.org/packages/34/46/6c717baadcd62bc8ed51d238d521ab651eaa74838291bda1f86fe1f864c9/pillow-12.2.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:5d2fd0fa6b5d9d1de415060363433f28da8b1526c1c129020435e186794b3795", size = 5308094, upload-time = "2026-04-01T14:43:48.438Z" }, + { url = "https://files.pythonhosted.org/packages/71/43/905a14a8b17fdb1ccb58d282454490662d2cb89a6bfec26af6d3520da5ec/pillow-12.2.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:56b25336f502b6ed02e889f4ece894a72612fe885889a6e8c4c80239ff6e5f5f", size = 4695402, upload-time = "2026-04-01T14:43:51.292Z" }, + { url = 
"https://files.pythonhosted.org/packages/73/dd/42107efcb777b16fa0393317eac58f5b5cf30e8392e266e76e51cff28c3d/pillow-12.2.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:f1c943e96e85df3d3478f7b691f229887e143f81fedab9b20205349ab04d73ed", size = 6280005, upload-time = "2026-04-01T14:43:54.242Z" }, + { url = "https://files.pythonhosted.org/packages/a8/68/b93e09e5e8549019e61acf49f65b1a8530765a7f812c77a7461bca7e4494/pillow-12.2.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:03f6fab9219220f041c74aeaa2939ff0062bd5c364ba9ce037197f4c6d498cd9", size = 8090669, upload-time = "2026-04-01T14:43:57.335Z" }, + { url = "https://files.pythonhosted.org/packages/4b/6e/3ccb54ce8ec4ddd1accd2d89004308b7b0b21c4ac3d20fa70af4760a4330/pillow-12.2.0-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5cdfebd752ec52bf5bb4e35d9c64b40826bc5b40a13df7c3cda20a2c03a0f5ed", size = 6395194, upload-time = "2026-04-01T14:43:59.864Z" }, + { url = "https://files.pythonhosted.org/packages/67/ee/21d4e8536afd1a328f01b359b4d3997b291ffd35a237c877b331c1c3b71c/pillow-12.2.0-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:eedf4b74eda2b5a4b2b2fb4c006d6295df3bf29e459e198c90ea48e130dc75c3", size = 7082423, upload-time = "2026-04-01T14:44:02.74Z" }, + { url = "https://files.pythonhosted.org/packages/78/5f/e9f86ab0146464e8c133fe85df987ed9e77e08b29d8d35f9f9f4d6f917ba/pillow-12.2.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:00a2865911330191c0b818c59103b58a5e697cae67042366970a6b6f1b20b7f9", size = 6505667, upload-time = "2026-04-01T14:44:05.381Z" }, + { url = "https://files.pythonhosted.org/packages/ed/1e/409007f56a2fdce61584fd3acbc2bbc259857d555196cedcadc68c015c82/pillow-12.2.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:1e1757442ed87f4912397c6d35a0db6a7b52592156014706f17658ff58bbf795", size = 7208580, upload-time = "2026-04-01T14:44:08.39Z" }, + { url = 
"https://files.pythonhosted.org/packages/23/c4/7349421080b12fb35414607b8871e9534546c128a11965fd4a7002ccfbee/pillow-12.2.0-cp313-cp313-win32.whl", hash = "sha256:144748b3af2d1b358d41286056d0003f47cb339b8c43a9ea42f5fea4d8c66b6e", size = 6375896, upload-time = "2026-04-01T14:44:11.197Z" }, + { url = "https://files.pythonhosted.org/packages/3f/82/8a3739a5e470b3c6cbb1d21d315800d8e16bff503d1f16b03a4ec3212786/pillow-12.2.0-cp313-cp313-win_amd64.whl", hash = "sha256:390ede346628ccc626e5730107cde16c42d3836b89662a115a921f28440e6a3b", size = 7081266, upload-time = "2026-04-01T14:44:13.947Z" }, + { url = "https://files.pythonhosted.org/packages/c3/25/f968f618a062574294592f668218f8af564830ccebdd1fa6200f598e65c5/pillow-12.2.0-cp313-cp313-win_arm64.whl", hash = "sha256:8023abc91fba39036dbce14a7d6535632f99c0b857807cbbbf21ecc9f4717f06", size = 2463508, upload-time = "2026-04-01T14:44:16.312Z" }, + { url = "https://files.pythonhosted.org/packages/4d/a4/b342930964e3cb4dce5038ae34b0eab4653334995336cd486c5a8c25a00c/pillow-12.2.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:042db20a421b9bafecc4b84a8b6e444686bd9d836c7fd24542db3e7df7baad9b", size = 5309927, upload-time = "2026-04-01T14:44:18.89Z" }, + { url = "https://files.pythonhosted.org/packages/9f/de/23198e0a65a9cf06123f5435a5d95cea62a635697f8f03d134d3f3a96151/pillow-12.2.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:dd025009355c926a84a612fecf58bb315a3f6814b17ead51a8e48d3823d9087f", size = 4698624, upload-time = "2026-04-01T14:44:21.115Z" }, + { url = "https://files.pythonhosted.org/packages/01/a6/1265e977f17d93ea37aa28aa81bad4fa597933879fac2520d24e021c8da3/pillow-12.2.0-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:88ddbc66737e277852913bd1e07c150cc7bb124539f94c4e2df5344494e0a612", size = 6321252, upload-time = "2026-04-01T14:44:23.663Z" }, + { url = 
"https://files.pythonhosted.org/packages/3c/83/5982eb4a285967baa70340320be9f88e57665a387e3a53a7f0db8231a0cd/pillow-12.2.0-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:d362d1878f00c142b7e1a16e6e5e780f02be8195123f164edf7eddd911eefe7c", size = 8126550, upload-time = "2026-04-01T14:44:26.772Z" }, + { url = "https://files.pythonhosted.org/packages/4e/48/6ffc514adce69f6050d0753b1a18fd920fce8cac87620d5a31231b04bfc5/pillow-12.2.0-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:2c727a6d53cb0018aadd8018c2b938376af27914a68a492f59dfcaca650d5eea", size = 6433114, upload-time = "2026-04-01T14:44:29.615Z" }, + { url = "https://files.pythonhosted.org/packages/36/a3/f9a77144231fb8d40ee27107b4463e205fa4677e2ca2548e14da5cf18dce/pillow-12.2.0-cp313-cp313t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:efd8c21c98c5cc60653bcb311bef2ce0401642b7ce9d09e03a7da87c878289d4", size = 7115667, upload-time = "2026-04-01T14:44:32.773Z" }, + { url = "https://files.pythonhosted.org/packages/c1/fc/ac4ee3041e7d5a565e1c4fd72a113f03b6394cc72ab7089d27608f8aaccb/pillow-12.2.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:9f08483a632889536b8139663db60f6724bfcb443c96f1b18855860d7d5c0fd4", size = 6538966, upload-time = "2026-04-01T14:44:35.252Z" }, + { url = "https://files.pythonhosted.org/packages/c0/a8/27fb307055087f3668f6d0a8ccb636e7431d56ed0750e07a60547b1e083e/pillow-12.2.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:dac8d77255a37e81a2efcbd1fc05f1c15ee82200e6c240d7e127e25e365c39ea", size = 7238241, upload-time = "2026-04-01T14:44:37.875Z" }, + { url = "https://files.pythonhosted.org/packages/ad/4b/926ab182c07fccae9fcb120043464e1ff1564775ec8864f21a0ebce6ac25/pillow-12.2.0-cp313-cp313t-win32.whl", hash = "sha256:ee3120ae9dff32f121610bb08e4313be87e03efeadfc6c0d18f89127e24d0c24", size = 6379592, upload-time = "2026-04-01T14:44:40.336Z" }, + { url = 
"https://files.pythonhosted.org/packages/c2/c4/f9e476451a098181b30050cc4c9a3556b64c02cf6497ea421ac047e89e4b/pillow-12.2.0-cp313-cp313t-win_amd64.whl", hash = "sha256:325ca0528c6788d2a6c3d40e3568639398137346c3d6e66bb61db96b96511c98", size = 7085542, upload-time = "2026-04-01T14:44:43.251Z" }, + { url = "https://files.pythonhosted.org/packages/00/a4/285f12aeacbe2d6dc36c407dfbbe9e96d4a80b0fb710a337f6d2ad978c75/pillow-12.2.0-cp313-cp313t-win_arm64.whl", hash = "sha256:2e5a76d03a6c6dcef67edabda7a52494afa4035021a79c8558e14af25313d453", size = 2465765, upload-time = "2026-04-01T14:44:45.996Z" }, + { url = "https://files.pythonhosted.org/packages/bf/98/4595daa2365416a86cb0d495248a393dfc84e96d62ad080c8546256cb9c0/pillow-12.2.0-cp314-cp314-ios_13_0_arm64_iphoneos.whl", hash = "sha256:3adc9215e8be0448ed6e814966ecf3d9952f0ea40eb14e89a102b87f450660d8", size = 4100848, upload-time = "2026-04-01T14:44:48.48Z" }, + { url = "https://files.pythonhosted.org/packages/0b/79/40184d464cf89f6663e18dfcf7ca21aae2491fff1a16127681bf1fa9b8cf/pillow-12.2.0-cp314-cp314-ios_13_0_arm64_iphonesimulator.whl", hash = "sha256:6a9adfc6d24b10f89588096364cc726174118c62130c817c2837c60cf08a392b", size = 4176515, upload-time = "2026-04-01T14:44:51.353Z" }, + { url = "https://files.pythonhosted.org/packages/b0/63/703f86fd4c422a9cf722833670f4f71418fb116b2853ff7da722ea43f184/pillow-12.2.0-cp314-cp314-ios_13_0_x86_64_iphonesimulator.whl", hash = "sha256:6a6e67ea2e6feda684ed370f9a1c52e7a243631c025ba42149a2cc5934dec295", size = 3640159, upload-time = "2026-04-01T14:44:53.588Z" }, + { url = "https://files.pythonhosted.org/packages/71/e0/fb22f797187d0be2270f83500aab851536101b254bfa1eae10795709d283/pillow-12.2.0-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:2bb4a8d594eacdfc59d9e5ad972aa8afdd48d584ffd5f13a937a664c3e7db0ed", size = 5312185, upload-time = "2026-04-01T14:44:56.039Z" }, + { url = 
"https://files.pythonhosted.org/packages/ba/8c/1a9e46228571de18f8e28f16fabdfc20212a5d019f3e3303452b3f0a580d/pillow-12.2.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:80b2da48193b2f33ed0c32c38140f9d3186583ce7d516526d462645fd98660ae", size = 4695386, upload-time = "2026-04-01T14:44:58.663Z" }, + { url = "https://files.pythonhosted.org/packages/70/62/98f6b7f0c88b9addd0e87c217ded307b36be024d4ff8869a812b241d1345/pillow-12.2.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:22db17c68434de69d8ecfc2fe821569195c0c373b25cccb9cbdacf2c6e53c601", size = 6280384, upload-time = "2026-04-01T14:45:01.5Z" }, + { url = "https://files.pythonhosted.org/packages/5e/03/688747d2e91cfbe0e64f316cd2e8005698f76ada3130d0194664174fa5de/pillow-12.2.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:7b14cc0106cd9aecda615dd6903840a058b4700fcb817687d0ee4fc8b6e389be", size = 8091599, upload-time = "2026-04-01T14:45:04.5Z" }, + { url = "https://files.pythonhosted.org/packages/f6/35/577e22b936fcdd66537329b33af0b4ccfefaeabd8aec04b266528cddb33c/pillow-12.2.0-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8cbeb542b2ebc6fcdacabf8aca8c1a97c9b3ad3927d46b8723f9d4f033288a0f", size = 6396021, upload-time = "2026-04-01T14:45:07.117Z" }, + { url = "https://files.pythonhosted.org/packages/11/8d/d2532ad2a603ca2b93ad9f5135732124e57811d0168155852f37fbce2458/pillow-12.2.0-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4bfd07bc812fbd20395212969e41931001fd59eb55a60658b0e5710872e95286", size = 7083360, upload-time = "2026-04-01T14:45:09.763Z" }, + { url = "https://files.pythonhosted.org/packages/5e/26/d325f9f56c7e039034897e7380e9cc202b1e368bfd04d4cbe6a441f02885/pillow-12.2.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:9aba9a17b623ef750a4d11b742cbafffeb48a869821252b30ee21b5e91392c50", size = 6507628, upload-time = "2026-04-01T14:45:12.378Z" }, + { url = 
"https://files.pythonhosted.org/packages/5f/f7/769d5632ffb0988f1c5e7660b3e731e30f7f8ec4318e94d0a5d674eb65a4/pillow-12.2.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:deede7c263feb25dba4e82ea23058a235dcc2fe1f6021025dc71f2b618e26104", size = 7209321, upload-time = "2026-04-01T14:45:15.122Z" }, + { url = "https://files.pythonhosted.org/packages/6a/7a/c253e3c645cd47f1aceea6a8bacdba9991bf45bb7dfe927f7c893e89c93c/pillow-12.2.0-cp314-cp314-win32.whl", hash = "sha256:632ff19b2778e43162304d50da0181ce24ac5bb8180122cbe1bf4673428328c7", size = 6479723, upload-time = "2026-04-01T14:45:17.797Z" }, + { url = "https://files.pythonhosted.org/packages/cd/8b/601e6566b957ca50e28725cb6c355c59c2c8609751efbecd980db44e0349/pillow-12.2.0-cp314-cp314-win_amd64.whl", hash = "sha256:4e6c62e9d237e9b65fac06857d511e90d8461a32adcc1b9065ea0c0fa3a28150", size = 7217400, upload-time = "2026-04-01T14:45:20.529Z" }, + { url = "https://files.pythonhosted.org/packages/d6/94/220e46c73065c3e2951bb91c11a1fb636c8c9ad427ac3ce7d7f3359b9b2f/pillow-12.2.0-cp314-cp314-win_arm64.whl", hash = "sha256:b1c1fbd8a5a1af3412a0810d060a78b5136ec0836c8a4ef9aa11807f2a22f4e1", size = 2554835, upload-time = "2026-04-01T14:45:23.162Z" }, + { url = "https://files.pythonhosted.org/packages/b6/ab/1b426a3974cb0e7da5c29ccff4807871d48110933a57207b5a676cccc155/pillow-12.2.0-cp314-cp314t-macosx_10_15_x86_64.whl", hash = "sha256:57850958fe9c751670e49b2cecf6294acc99e562531f4bd317fa5ddee2068463", size = 5314225, upload-time = "2026-04-01T14:45:25.637Z" }, + { url = "https://files.pythonhosted.org/packages/19/1e/dce46f371be2438eecfee2a1960ee2a243bbe5e961890146d2dee1ff0f12/pillow-12.2.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:d5d38f1411c0ed9f97bcb49b7bd59b6b7c314e0e27420e34d99d844b9ce3b6f3", size = 4698541, upload-time = "2026-04-01T14:45:28.355Z" }, + { url = 
"https://files.pythonhosted.org/packages/55/c3/7fbecf70adb3a0c33b77a300dc52e424dc22ad8cdc06557a2e49523b703d/pillow-12.2.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:5c0a9f29ca8e79f09de89293f82fc9b0270bb4af1d58bc98f540cc4aedf03166", size = 6322251, upload-time = "2026-04-01T14:45:30.924Z" }, + { url = "https://files.pythonhosted.org/packages/1c/3c/7fbc17cfb7e4fe0ef1642e0abc17fc6c94c9f7a16be41498e12e2ba60408/pillow-12.2.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:1610dd6c61621ae1cf811bef44d77e149ce3f7b95afe66a4512f8c59f25d9ebe", size = 8127807, upload-time = "2026-04-01T14:45:33.908Z" }, + { url = "https://files.pythonhosted.org/packages/ff/c3/a8ae14d6defd2e448493ff512fae903b1e9bd40b72efb6ec55ce0048c8ce/pillow-12.2.0-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0a34329707af4f73cf1782a36cd2289c0368880654a2c11f027bcee9052d35dd", size = 6433935, upload-time = "2026-04-01T14:45:36.623Z" }, + { url = "https://files.pythonhosted.org/packages/6e/32/2880fb3a074847ac159d8f902cb43278a61e85f681661e7419e6596803ed/pillow-12.2.0-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8e9c4f5b3c546fa3458a29ab22646c1c6c787ea8f5ef51300e5a60300736905e", size = 7116720, upload-time = "2026-04-01T14:45:39.258Z" }, + { url = "https://files.pythonhosted.org/packages/46/87/495cc9c30e0129501643f24d320076f4cc54f718341df18cc70ec94c44e1/pillow-12.2.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:fb043ee2f06b41473269765c2feae53fc2e2fbf96e5e22ca94fb5ad677856f06", size = 6540498, upload-time = "2026-04-01T14:45:41.879Z" }, + { url = "https://files.pythonhosted.org/packages/18/53/773f5edca692009d883a72211b60fdaf8871cbef075eaa9d577f0a2f989e/pillow-12.2.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:f278f034eb75b4e8a13a54a876cc4a5ab39173d2cdd93a638e1b467fc545ac43", size = 7239413, upload-time = "2026-04-01T14:45:44.705Z" }, + { url = 
"https://files.pythonhosted.org/packages/c9/e4/4b64a97d71b2a83158134abbb2f5bd3f8a2ea691361282f010998f339ec7/pillow-12.2.0-cp314-cp314t-win32.whl", hash = "sha256:6bb77b2dcb06b20f9f4b4a8454caa581cd4dd0643a08bacf821216a16d9c8354", size = 6482084, upload-time = "2026-04-01T14:45:47.568Z" }, + { url = "https://files.pythonhosted.org/packages/ba/13/306d275efd3a3453f72114b7431c877d10b1154014c1ebbedd067770d629/pillow-12.2.0-cp314-cp314t-win_amd64.whl", hash = "sha256:6562ace0d3fb5f20ed7290f1f929cae41b25ae29528f2af1722966a0a02e2aa1", size = 7225152, upload-time = "2026-04-01T14:45:50.032Z" }, + { url = "https://files.pythonhosted.org/packages/ff/6e/cf826fae916b8658848d7b9f38d88da6396895c676e8086fc0988073aaf8/pillow-12.2.0-cp314-cp314t-win_arm64.whl", hash = "sha256:aa88ccfe4e32d362816319ed727a004423aab09c5cea43c01a4b435643fa34eb", size = 2556579, upload-time = "2026-04-01T14:45:52.529Z" }, + { url = "https://files.pythonhosted.org/packages/4e/b7/2437044fb910f499610356d1352e3423753c98e34f915252aafecc64889f/pillow-12.2.0-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:0538bd5e05efec03ae613fd89c4ce0368ecd2ba239cc25b9f9be7ed426b0af1f", size = 5273969, upload-time = "2026-04-01T14:45:55.538Z" }, + { url = "https://files.pythonhosted.org/packages/f6/f4/8316e31de11b780f4ac08ef3654a75555e624a98db1056ecb2122d008d5a/pillow-12.2.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:394167b21da716608eac917c60aa9b969421b5dcbbe02ae7f013e7b85811c69d", size = 4659674, upload-time = "2026-04-01T14:45:58.093Z" }, + { url = "https://files.pythonhosted.org/packages/d4/37/664fca7201f8bb2aa1d20e2c3d5564a62e6ae5111741966c8319ca802361/pillow-12.2.0-pp311-pypy311_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:5d04bfa02cc2d23b497d1e90a0f927070043f6cbf303e738300532379a4b4e0f", size = 5288479, upload-time = "2026-04-01T14:46:01.141Z" }, + { url = 
"https://files.pythonhosted.org/packages/49/62/5b0ed78fce87346be7a5cfcfaaad91f6a1f98c26f86bdbafa2066c647ef6/pillow-12.2.0-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:0c838a5125cee37e68edec915651521191cef1e6aa336b855f495766e77a366e", size = 7032230, upload-time = "2026-04-01T14:46:03.874Z" }, + { url = "https://files.pythonhosted.org/packages/c3/28/ec0fc38107fc32536908034e990c47914c57cd7c5a3ece4d8d8f7ffd7e27/pillow-12.2.0-pp311-pypy311_pp73-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4a6c9fa44005fa37a91ebfc95d081e8079757d2e904b27103f4f5fa6f0bf78c0", size = 5355404, upload-time = "2026-04-01T14:46:06.33Z" }, + { url = "https://files.pythonhosted.org/packages/5e/8b/51b0eddcfa2180d60e41f06bd6d0a62202b20b59c68f5a132e615b75aecf/pillow-12.2.0-pp311-pypy311_pp73-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:25373b66e0dd5905ed63fa3cae13c82fbddf3079f2c8bf15c6fb6a35586324c1", size = 6002215, upload-time = "2026-04-01T14:46:08.83Z" }, + { url = "https://files.pythonhosted.org/packages/bc/60/5382c03e1970de634027cee8e1b7d39776b778b81812aaf45b694dfe9e28/pillow-12.2.0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:bfa9c230d2fe991bed5318a5f119bd6780cda2915cca595393649fc118ab895e", size = 7080946, upload-time = "2026-04-01T14:46:11.734Z" }, ] [[package]] @@ -2401,7 +2401,7 @@ wheels = [ [[package]] name = "pytest" -version = "9.0.2" +version = "9.0.3" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "colorama", marker = "sys_platform == 'win32'" }, @@ -2410,9 +2410,9 @@ dependencies = [ { name = "pluggy" }, { name = "pygments" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/d1/db/7ef3487e0fb0049ddb5ce41d3a49c235bf9ad299b6a25d5780a89f19230f/pytest-9.0.2.tar.gz", hash = "sha256:75186651a92bd89611d1d9fc20f0b4345fd827c41ccd5c299a868a05d70edf11", size = 1568901, upload-time = "2025-12-06T21:30:51.014Z" } +sdist = { url = 
"https://files.pythonhosted.org/packages/7d/0d/549bd94f1a0a402dc8cf64563a117c0f3765662e2e668477624baeec44d5/pytest-9.0.3.tar.gz", hash = "sha256:b86ada508af81d19edeb213c681b1d48246c1a91d304c6c81a427674c17eb91c", size = 1572165, upload-time = "2026-04-07T17:16:18.027Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/3b/ab/b3226f0bd7cdcf710fbede2b3548584366da3b19b5021e74f5bde2a8fa3f/pytest-9.0.2-py3-none-any.whl", hash = "sha256:711ffd45bf766d5264d487b917733b453d917afd2b0ad65223959f59089f875b", size = 374801, upload-time = "2025-12-06T21:30:49.154Z" }, + { url = "https://files.pythonhosted.org/packages/d4/24/a372aaf5c9b7208e7112038812994107bc65a84cd00e0354a88c2c77a617/pytest-9.0.3-py3-none-any.whl", hash = "sha256:2c5efc453d45394fdd706ade797c0a81091eccd1d6e4bccfcd476e2b8e0ab5d9", size = 375249, upload-time = "2026-04-07T17:16:16.13Z" }, ] [[package]] @@ -2469,11 +2469,11 @@ wheels = [ [[package]] name = "python-dotenv" -version = "1.2.1" +version = "1.2.2" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/f0/26/19cadc79a718c5edbec86fd4919a6b6d3f681039a2f6d66d14be94e75fb9/python_dotenv-1.2.1.tar.gz", hash = "sha256:42667e897e16ab0d66954af0e60a9caa94f0fd4ecf3aaf6d2d260eec1aa36ad6", size = 44221, upload-time = "2025-10-26T15:12:10.434Z" } +sdist = { url = "https://files.pythonhosted.org/packages/82/ed/0301aeeac3e5353ef3d94b6ec08bbcabd04a72018415dcb29e588514bba8/python_dotenv-1.2.2.tar.gz", hash = "sha256:2c371a91fbd7ba082c2c1dc1f8bf89ca22564a087c2c287cd9b662adde799cf3", size = 50135, upload-time = "2026-03-01T16:00:26.196Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/14/1b/a298b06749107c305e1fe0f814c6c74aea7b2f1e10989cb30f544a1b3253/python_dotenv-1.2.1-py3-none-any.whl", hash = "sha256:b81ee9561e9ca4004139c6cbba3a238c32b03e4894671e181b671e8cb8425d61", size = 21230, upload-time = "2025-10-26T15:12:09.109Z" }, + { url = 
"https://files.pythonhosted.org/packages/0b/d7/1959b9648791274998a9c3526f6d0ec8fd2233e4d4acce81bbae76b44b2a/python_dotenv-1.2.2-py3-none-any.whl", hash = "sha256:1d8214789a24de455a8b8bd8ae6fe3c6b69a5e3d64aa8a8e5d68e694bbcb285a", size = 22101, upload-time = "2026-03-01T16:00:25.09Z" }, ] [[package]] @@ -3083,7 +3083,7 @@ requires-dist = [ { name = "pydantic", extras = ["email"], specifier = ">=2.10.0" }, { name = "pydantic-settings", specifier = ">=2.6.0" }, { name = "pyjwt", specifier = ">=2.0.0" }, - { name = "pytest", marker = "extra == 'dev'", specifier = ">=8.3.0" }, + { name = "pytest", marker = "extra == 'dev'", specifier = ">=9.0.3" }, { name = "pytest-asyncio", marker = "extra == 'dev'", specifier = ">=0.24.0" }, { name = "pytest-cov", marker = "extra == 'dev'", specifier = ">=6.0.0" }, { name = "pytest-xdist", marker = "extra == 'dev'", specifier = ">=3.5.0" }, diff --git a/components/backend/Dockerfile b/components/backend/Dockerfile index df4c902f..a6691648 100644 --- a/components/backend/Dockerfile +++ b/components/backend/Dockerfile @@ -34,7 +34,6 @@ FROM base AS dependencies # Copy dependency files COPY --chown=syfthub:syfthub pyproject.toml ./ -COPY --chown=syfthub:syfthub README.md ./ # Create src structure for package installation RUN mkdir -p src/syfthub && \ @@ -70,7 +69,6 @@ COPY --chown=syfthub:syfthub src src COPY --chown=syfthub:syfthub tests tests COPY --chown=syfthub:syfthub requirements.txt . COPY --chown=syfthub:syfthub uv.lock . -COPY --chown=syfthub:syfthub README.md . COPY --chown=syfthub:syfthub alembic.ini . COPY --chown=syfthub:syfthub alembic alembic @@ -105,7 +103,6 @@ COPY --chown=syfthub:syfthub src src COPY --chown=syfthub:syfthub tests tests COPY --chown=syfthub:syfthub requirements.txt . COPY --chown=syfthub:syfthub uv.lock . -COPY --chown=syfthub:syfthub README.md . COPY --chown=syfthub:syfthub alembic.ini . 
COPY --chown=syfthub:syfthub alembic alembic diff --git a/components/backend/README.md b/components/backend/README.md deleted file mode 100644 index 77dafe5b..00000000 --- a/components/backend/README.md +++ /dev/null @@ -1,132 +0,0 @@ -# Syfthub - -A modern Python project managed with uv. - -[![CI](https://github.com/IonesioJunior/syfthub/actions/workflows/ci.yml/badge.svg)](https://github.com/IonesioJunior/syfthub/actions/workflows/ci.yml) -[![Python](https://img.shields.io/badge/python-3.9%2B-blue)](https://www.python.org/downloads/) -[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff) -[![uv](https://img.shields.io/badge/uv-package%20manager-orange)](https://github.com/astral-sh/uv) - -## Features - -- Modern Python packaging with [uv](https://github.com/astral-sh/uv) -- Src-layout for better package isolation -- Testing with pytest and coverage reporting -- Code formatting and linting with [Ruff](https://github.com/astral-sh/ruff) -- Static type checking with mypy -- Pre-commit hooks for code quality -- GitHub Actions CI/CD pipeline -- Comprehensive configuration in pyproject.toml - -## Installation - -### Prerequisites - -- Python 3.9 or higher -- [uv](https://github.com/astral-sh/uv) package manager - -Install uv: -```bash -curl -LsSf https://astral.sh/uv/install.sh | sh -``` - -Or on Windows: -```powershell -powershell -c "irm https://astral.sh/uv/install.ps1 | iex" -``` - -### Development Setup - -1. Clone the repository: -```bash -git clone https://github.com/IonesioJunior/syfthub.git -cd syfthub -``` - -2. Install dependencies with uv: -```bash -uv sync --dev -``` - -3. 
Install pre-commit hooks: -```bash -uv run pre-commit install -``` - -## Usage - -Run the main module: -```bash -uv run python -m syfthub.main -``` - -## Development - -### Running Tests - -Run tests with coverage: -```bash -uv run pytest -``` - -Run tests for specific Python version: -```bash -uv run --python 3.11 pytest -``` - -### Code Quality - -Format code: -```bash -uv run ruff format src/ tests/ -``` - -Lint code: -```bash -uv run ruff check src/ tests/ -``` - -Type checking: -```bash -uv run mypy src/ -``` - -Run all pre-commit hooks: -```bash -uv run pre-commit run --all-files -``` - -### Building - -Build the package: -```bash -uv build -``` - -## Project Structure - -``` -syfthub/ -+-- src/ -| +-- syfthub/ # Main package -| +-- __init__.py -| +-- main.py -| +-- py.typed # PEP 561 marker -+-- tests/ # Test suite -| +-- __init__.py -| +-- conftest.py # Pytest fixtures -| +-- test_*.py # Test files -+-- .github/ -| +-- workflows/ # GitHub Actions -+-- pyproject.toml # Project configuration -+-- uv.lock # Locked dependencies -+-- README.md -``` - -## Contributing - -See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines. - -## License - -MIT License - see LICENSE file for details. diff --git a/components/backend/alembic/README.md b/components/backend/alembic/README.md deleted file mode 100644 index f23dc892..00000000 --- a/components/backend/alembic/README.md +++ /dev/null @@ -1,167 +0,0 @@ -# Database Migrations - -This directory contains Alembic database migrations for SyftHub. 
- -## Quick Reference - -```bash -# From components/backend directory: - -# Check current migration status -uv run alembic current - -# Run all pending migrations -uv run alembic upgrade head - -# Generate a new migration (auto-detect changes from models) -uv run alembic revision --autogenerate -m "description of changes" - -# Create an empty migration for manual SQL -uv run alembic revision -m "description of changes" - -# Downgrade one migration -uv run alembic downgrade -1 - -# View migration history -uv run alembic history - -# View SQL without executing (offline mode) -uv run alembic upgrade head --sql -``` - -## For Existing Databases - -If you have an existing database created before migrations were introduced (using `Base.metadata.create_all()`), you need to mark the initial migration as applied: - -```bash -# Mark the initial migration as applied WITHOUT running it -uv run alembic stamp 001_initial -``` - -This tells Alembic that the current schema is at the `001_initial` revision, so future migrations will apply correctly. 
- -## For New Databases - -New databases will have migrations applied automatically: -- **Docker deployments**: The `entrypoint.sh` script runs `alembic upgrade head` on startup -- **Local development**: Run `uv run alembic upgrade head` after setting up - -## Creating New Migrations - -### Auto-generated migrations (recommended for model changes) - -When you modify SQLAlchemy models, generate a migration that detects the changes: - -```bash -uv run alembic revision --autogenerate -m "add user preferences table" -``` - -**Always review the generated migration!** Auto-generation can miss or misinterpret: -- Column type changes (especially between JSON and JSONB) -- Index changes on existing columns -- Data migrations (you must add these manually) - -### Manual migrations (for data migrations or complex DDL) - -For migrations that involve data transformation or complex schema changes: - -```bash -uv run alembic revision -m "migrate user roles to new format" -``` - -Then edit the generated file to add your `upgrade()` and `downgrade()` logic. - -## Migration Safety Guidelines - -### For Production (PostgreSQL) - -1. **Set lock timeout** to prevent blocking other queries: - ```python - op.execute("SET LOCAL lock_timeout = '3s'") - ``` - -2. **Create indexes concurrently** (outside transaction): - ```python - with op.get_context().autocommit_block(): - op.create_index( - "idx_users_new_column", - "users", - ["new_column"], - postgresql_concurrently=True - ) - ``` - -3. **Add columns as nullable first**, backfill, then add NOT NULL: - ```python - # Step 1: Add nullable - op.add_column("users", sa.Column("new_col", sa.String(100), nullable=True)) - - # Step 2: Backfill (in separate migration for large tables) - op.execute("UPDATE users SET new_col = 'default' WHERE new_col IS NULL") - - # Step 3: Make NOT NULL - op.alter_column("users", "new_col", nullable=False) - ``` - -4. **Never change column types directly** on large tables. 
Use expand/contract: - - Add new column - - Migrate data - - Update application to use new column - - Drop old column - -### For Development (SQLite) - -SQLite has limited ALTER TABLE support. The `env.py` configuration enables batch mode for SQLite, which recreates tables for complex alterations. - -## Directory Structure - -``` -alembic/ -├── README.md # This file -├── env.py # Alembic environment configuration -├── script.py.mako # Template for new migrations -└── versions/ # Migration scripts - ├── .gitkeep - └── 20250218_000000_initial_schema.py # Initial baseline -``` - -## Environment Configuration - -Migrations use the same database URL as the application, from: -- `DATABASE_URL` environment variable -- `.env` file -- Default: `sqlite:///./syfthub.db` - -## Troubleshooting - -### "Target database is not up to date" - -Run pending migrations: -```bash -uv run alembic upgrade head -``` - -### "Can't locate revision identified by..." - -The database references a migration that doesn't exist. This can happen if migrations were deleted. Options: -1. Re-add the missing migration -2. Manually update the `alembic_version` table -3. Reset with `alembic stamp head` (if you're sure the schema is correct) - -### "Autogenerate detected no changes" - -- Ensure all models are imported in `env.py` -- Check that model changes are saved -- Verify you're comparing against the correct database - -### SQLite "batch mode" errors - -SQLite doesn't support all ALTER TABLE operations. The migration system uses batch mode (recreate table) for SQLite. If you see errors: -1. Ensure `render_as_batch=True` is set for SQLite in `env.py` -2. 
For complex migrations, consider testing on PostgreSQL first - -## References - -- [Alembic Documentation](https://alembic.sqlalchemy.org/) -- [SQLAlchemy 2.0 Documentation](https://docs.sqlalchemy.org/en/20/) -- [Zero-Downtime Migrations](https://www.braintreepayments.com/blog/safe-operations-for-high-volume-postgresql/) diff --git a/components/backend/alembic/versions/20260413_000000_add_archived_to_endpoints.py b/components/backend/alembic/versions/20260413_000000_add_archived_to_endpoints.py new file mode 100644 index 00000000..9f7c2a1f --- /dev/null +++ b/components/backend/alembic/versions/20260413_000000_add_archived_to_endpoints.py @@ -0,0 +1,42 @@ +"""Add archived column to endpoints table. + +Marks endpoints as archived when the operator wants to stop new purchases/ +subscriptions while keeping the endpoint accessible to existing users. +Archived endpoints are forced to private visibility so they no longer appear +in public marketplace listings. + +Revision ID: 009_add_archived +Revises: 008_encrypt_accounting_pw +Create Date: 2026-04-13 00:00:00.000000+00:00 +""" + +from collections.abc import Sequence + +import sqlalchemy as sa +from alembic import op + +# Revision identifiers, used by Alembic +revision: str = "009_add_archived" +down_revision: str | None = "008_encrypt_accounting_pw" +branch_labels: str | Sequence[str] | None = None +depends_on: str | Sequence[str] | None = None + + +def upgrade() -> None: + """Add archived boolean column (default False) and a covering index.""" + op.add_column( + "endpoints", + sa.Column( + "archived", + sa.Boolean(), + nullable=False, + server_default="false", + ), + ) + op.create_index("idx_endpoints_archived", "endpoints", ["archived"]) + + +def downgrade() -> None: + """Remove archived column and its index.""" + op.drop_index("idx_endpoints_archived", table_name="endpoints") + op.drop_column("endpoints", "archived") diff --git a/components/backend/alembic/versions/20260428_000000_merge_wallet_and_archived_heads.py 
b/components/backend/alembic/versions/20260428_000000_merge_wallet_and_archived_heads.py new file mode 100644 index 00000000..c2f08e8f --- /dev/null +++ b/components/backend/alembic/versions/20260428_000000_merge_wallet_and_archived_heads.py @@ -0,0 +1,25 @@ +"""Merge the 009_wallet_fields and 009_add_archived branch heads. + +Both migrations descended from 008_encrypt_accounting_pw independently, +creating two alembic heads. This merge migration unifies the graph so that +'alembic upgrade head' resolves to a single target. + +Revision ID: 010_merge_heads +Revises: 009_wallet_fields, 009_add_archived +Create Date: 2026-04-28 00:00:00.000000+00:00 +""" + +from collections.abc import Sequence + +revision: str = "010_merge_heads" +down_revision: tuple[str, str] = ("009_wallet_fields", "009_add_archived") +branch_labels: str | Sequence[str] | None = None +depends_on: str | Sequence[str] | None = None + + +def upgrade() -> None: + pass + + +def downgrade() -> None: + pass diff --git a/components/backend/pyproject.toml b/components/backend/pyproject.toml index 3eca49c4..e8be5418 100644 --- a/components/backend/pyproject.toml +++ b/components/backend/pyproject.toml @@ -2,7 +2,6 @@ name = "syfthub" version = "0.1.0" description = "A Python project managed with uv" -readme = "README.md" authors = [ { name = "Ionesio Junior", email = "ionesiojr@gmail.com" } ] @@ -32,7 +31,7 @@ dependencies = [ "pydantic>=2.12.4", "pydantic-settings>=2.11.0", "pyjwt>=2.10.1", - "python-multipart>=0.0.20", + "python-multipart>=0.0.26", "sqlalchemy>=2.0.36", "uvicorn>=0.38.0", "structlog>=24.0.0", @@ -49,7 +48,7 @@ syfthub-server = "syfthub.main:main" [project.optional-dependencies] dev = [ - "pytest>=8.0.0", + "pytest>=9.0.3", "pytest-cov>=5.0.0", "pytest-asyncio>=0.24.0", "pytest-xdist>=3.5.0", @@ -58,7 +57,7 @@ dev = [ "pre-commit>=3.8.0", ] test = [ - "pytest>=8.0.0", + "pytest>=9.0.3", "pytest-cov>=5.0.0", "pytest-asyncio>=0.24.0", "pytest-xdist>=3.5.0", @@ -246,7 +245,7 @@ dev = [ 
"httpx>=0.28.1", "mypy>=1.18.2", "pre-commit>=4.3.0", - "pytest>=8.4.2", + "pytest>=9.0.3", "pytest-asyncio>=1.2.0", "pytest-cov>=7.0.0", "pytest-xdist>=3.5.0", diff --git a/components/backend/requirements.txt b/components/backend/requirements.txt index cdb603a9..aba00e24 100644 --- a/components/backend/requirements.txt +++ b/components/backend/requirements.txt @@ -76,9 +76,9 @@ pydantic-settings==2.12.0 # via syfthub (pyproject.toml) pyjwt==2.12.0 # via syfthub (pyproject.toml) -python-dotenv==1.2.1 +python-dotenv==1.2.2 # via pydantic-settings -python-multipart==0.0.22 +python-multipart==0.0.26 # via syfthub (pyproject.toml) sniffio==1.3.1 # via anyio diff --git a/components/backend/scripts/README.md b/components/backend/scripts/README.md deleted file mode 100644 index 5397edce..00000000 --- a/components/backend/scripts/README.md +++ /dev/null @@ -1,19 +0,0 @@ -# Backend Scripts - -## ingest_existing_endpoints.py - -One-time migration script to index all existing public endpoints into Meilisearch. -Run this once after deploying the Meilisearch feature to backfill the search index. - -```bash -cd components/backend/ -uv run python scripts/ingest_existing_endpoints.py -``` - -Options: -- `--dry-run` — Show what would be indexed without actually doing it -- `--batch-size N` — Number of endpoints per batch (default: 50) - ---- - -> For seeding the database with test data, see `.claude/skills/syfthub-dev-tools/scripts/`. 
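The `010_merge_heads` migration above exists because `009_wallet_fields` and `009_add_archived` both descend from `008_encrypt_accounting_pw`, leaving the revision graph with two heads. Conceptually, a head is any revision that no other revision lists as its parent, and a merge revision declares a tuple of parents to rejoin the branches. A minimal self-contained sketch of that head computation (plain Python, no Alembic dependency; revision names taken from the migration docstrings above):

```python
# Compute Alembic-style "heads": revisions that no other revision descends from.
# down_revision is None for the root, a string for a linear step, or a tuple
# of parents for a merge revision.

def heads(graph):
    """Return revisions that are not a parent of any other revision."""
    parents = set()
    for down in graph.values():
        if down is None:
            continue
        if isinstance(down, tuple):
            parents.update(down)
        else:
            parents.add(down)
    return sorted(r for r in graph if r not in parents)

# Before the merge: two branches descend from 008_encrypt_accounting_pw,
# so 'alembic upgrade head' would be ambiguous.
before = {
    "008_encrypt_accounting_pw": None,
    "009_wallet_fields": "008_encrypt_accounting_pw",
    "009_add_archived": "008_encrypt_accounting_pw",
}

# After adding 010_merge_heads with a tuple down_revision, the graph
# resolves to a single head.
after = dict(before)
after["010_merge_heads"] = ("009_wallet_fields", "009_add_archived")

print(heads(before))  # two heads
print(heads(after))   # one head: 010_merge_heads
```

This mirrors why the merge migration's `upgrade()` and `downgrade()` bodies are empty: the migration carries no schema change, only graph structure.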
diff --git a/components/backend/src/syfthub/main.py b/components/backend/src/syfthub/main.py index 188f84de..9641708e 100644 --- a/components/backend/src/syfthub/main.py +++ b/components/backend/src/syfthub/main.py @@ -623,9 +623,9 @@ async def get_owner_endpoint( readme_html = sanitize_readme_html(raw_html) return templates.TemplateResponse( + request, "endpoint.html", { - "request": request, "endpoint": endpoint, "owner_name": owner_name, "owner_slug": owner_slug, diff --git a/components/backend/src/syfthub/models/endpoint.py b/components/backend/src/syfthub/models/endpoint.py index 4d4fde97..c182e410 100644 --- a/components/backend/src/syfthub/models/endpoint.py +++ b/components/backend/src/syfthub/models/endpoint.py @@ -59,6 +59,7 @@ class EndpointModel(BaseModel, TimestampMixin): String(20), nullable=False, default="public" ) is_active: Mapped[bool] = mapped_column(Boolean, nullable=False, default=True) + archived: Mapped[bool] = mapped_column(Boolean, nullable=False, default=False) # Health check failure tracking - used by health monitor to track consecutive failures # before marking endpoint as inactive (multi-worker safe, persisted in DB) consecutive_failure_count: Mapped[int] = mapped_column( @@ -125,6 +126,7 @@ class EndpointModel(BaseModel, TimestampMixin): Index("idx_endpoints_type", "type"), Index("idx_endpoints_visibility", "visibility"), Index("idx_endpoints_is_active", "is_active"), + Index("idx_endpoints_archived", "archived"), Index("idx_endpoints_version", "version"), Index("idx_endpoints_stars_count", "stars_count"), Index("idx_endpoints_rag_file_id", "rag_file_id"), diff --git a/components/backend/src/syfthub/schemas/endpoint.py b/components/backend/src/syfthub/schemas/endpoint.py index 716e9ec8..6f7cf745 100644 --- a/components/backend/src/syfthub/schemas/endpoint.py +++ b/components/backend/src/syfthub/schemas/endpoint.py @@ -196,6 +196,10 @@ class EndpointBase(BaseModel): visibility: EndpointVisibility = Field( 
default=EndpointVisibility.PUBLIC, description="Who can access this endpoint" ) + archived: bool = Field( + default=False, + description="Whether this endpoint is archived (no new purchases, kept accessible to existing users)", + ) # REMOVED is_active - server-managed field # REMOVED contributors - will be validated separately version: str = Field( @@ -288,6 +292,9 @@ class EndpointUpdate(BaseModel): visibility: Optional[EndpointVisibility] = Field( None, description="Who can access this endpoint" ) + archived: Optional[bool] = Field( + None, description="Archive or restore the endpoint" + ) # REMOVED is_active - only admin can change this contributors: Optional[List[int]] = Field( None, description="List of contributor user IDs (will be validated)" @@ -365,6 +372,7 @@ class Endpoint(BaseModel): ..., min_length=3, max_length=63, description="URL-safe identifier" ) is_active: bool = Field(..., description="Whether the endpoint is active") + archived: bool = Field(..., description="Whether the endpoint is archived") contributors: List[int] = Field(..., description="List of contributor user IDs") stars_count: int = Field( ..., description="Number of stars this endpoint has received" @@ -406,6 +414,7 @@ class EndpointResponse(BaseModel): ..., description="Who can access this endpoint" ) is_active: bool = Field(..., description="Whether the endpoint is active") + archived: bool = Field(..., description="Whether the endpoint is archived") contributors: List[int] = Field(..., description="List of contributor user IDs") version: str = Field(..., description="Semantic version of the endpoint") readme: str = Field(..., description="Markdown content for the README") @@ -472,6 +481,9 @@ class EndpointPublicResponse(BaseModel): health_checked_at: Optional[datetime] = Field( None, description="When the client last checked this endpoint's health" ) + archived: bool = Field( + default=False, description="Whether the endpoint is archived" + ) # Note: Excludes user_id, id, 
visibility, is_active, contributors, health_ttl_seconds for security/privacy diff --git a/components/backend/src/syfthub/services/endpoint_service.py b/components/backend/src/syfthub/services/endpoint_service.py index 9c27ab31..1267bc53 100644 --- a/components/backend/src/syfthub/services/endpoint_service.py +++ b/components/backend/src/syfthub/services/endpoint_service.py @@ -246,12 +246,20 @@ def create_endpoint( endpoint_data.tags, endpoint_data.policies ) + # Force private visibility for archived endpoints so they vanish from public listings + effective_visibility = ( + EndpointVisibility.PRIVATE + if endpoint_data.archived + else endpoint_data.visibility + ) + # Create a validated endpoint creation object that includes server-managed fields validated_data = EndpointCreate( name=endpoint_data.name, description=endpoint_data.description, type=endpoint_data.type, - visibility=endpoint_data.visibility, + visibility=effective_visibility, + archived=endpoint_data.archived, version=endpoint_data.version, readme=endpoint_data.readme, tags=final_tags, @@ -396,6 +404,10 @@ def _apply_endpoint_update( if new_tags != effective_tags: endpoint_data.tags = new_tags + # Force private visibility when archiving + if endpoint_data.archived: + endpoint_data.visibility = EndpointVisibility.PRIVATE + updated_endpoint = self.endpoint_repository.update_endpoint( endpoint.id, endpoint_data ) diff --git a/components/backend/tests/test_endpoints.py b/components/backend/tests/test_endpoints.py index 44e4ee15..bce61710 100644 --- a/components/backend/tests/test_endpoints.py +++ b/components/backend/tests/test_endpoints.py @@ -1359,15 +1359,16 @@ def test_complex_policy_configurations(client: TestClient, user1_token: str) -> "enabled": True, "description": "Legal doc search bundle", "config": { - "bundle_tiers": [ - {"name": "Starter", "units": 100, "unit_type": "requests", "price": 50000}, - {"name": "Pro", "units": 1000, "unit_type": "requests", "price": 400000}, - ], + 
"price_per_request": 500.0, "currency": "IDR", "country": "ID", "applied_to": ["*"], + "bundles": [ + {"name": "Starter", "amount": 10000}, + {"name": "Pro", "amount": 100000}, + ], "payment_url": "https://my-server.example.com/api/v1/payments/gateway/xendit/invoices", - "bundle_usage_url": "https://my-server.example.com/api/v1/payments/gateway/bundles/test-endpoint", + "credits_url": "https://my-server.example.com/api/v1/payments/gateway/bundles/test-endpoint", }, } @@ -1375,53 +1376,11 @@ def test_complex_policy_configurations(client: TestClient, user1_token: str) -> "type": "xendit", "config": { "payment_url": "https://my-server.example.com/api/v1/payments/gateway/xendit/invoices", - "bundle_usage_url": "https://my-server.example.com/api/v1/payments/gateway/bundles/test-endpoint", + "credits_url": "https://my-server.example.com/api/v1/payments/gateway/bundles/test-endpoint", }, } -def test_create_endpoint_with_xendit_policy( - client: TestClient, user1_token: str -) -> None: - """Test that xendit policy config fields round-trip correctly.""" - headers = {"Authorization": f"Bearer {user1_token}"} - - endpoint_data = { - "name": "Xendit Bundle Endpoint", - "type": "model", - "visibility": "public", - "policies": [_XENDIT_POLICY], - } - - response = client.post("/api/v1/endpoints", json=endpoint_data, headers=headers) - assert response.status_code == 201 - - data = response.json() - assert len(data["policies"]) == 1 - policy = data["policies"][0] - assert policy["type"] == "xendit" - assert policy["version"] == "1.0" - assert policy["enabled"] is True - assert policy["description"] == "Legal doc search bundle" - assert policy["config"]["currency"] == "IDR" - assert policy["config"]["country"] == "ID" - assert policy["config"]["applied_to"] == ["*"] - assert ( - policy["config"]["payment_url"] - == "https://my-server.example.com/api/v1/payments/gateway/xendit/invoices" - ) - assert ( - policy["config"]["bundle_usage_url"] - == 
"https://my-server.example.com/api/v1/payments/gateway/bundles/test-endpoint" - ) - tiers = policy["config"]["bundle_tiers"] - assert len(tiers) == 2 - assert tiers[0]["name"] == "Starter" - assert tiers[0]["units"] == 100 - assert tiers[1]["name"] == "Pro" - assert tiers[1]["price"] == 400000 - - def test_xendit_policy_auto_injects_subscription_tag_on_create( client: TestClient, user1_token: str ) -> None: @@ -1514,6 +1473,43 @@ def test_xendit_auto_tag_is_idempotent(client: TestClient, user1_token: str) -> assert data["tags"].count("subscription") == 1 +def test_create_endpoint_with_xendit_policy( + client: TestClient, user1_token: str +) -> None: + """Test that xendit policy config fields (bundles + price_per_request) round-trip correctly.""" + headers = {"Authorization": f"Bearer {user1_token}"} + + endpoint_data = { + "name": "Xendit Bundle Endpoint", + "type": "model", + "visibility": "public", + "policies": [_XENDIT_POLICY], + } + + response = client.post("/api/v1/endpoints", json=endpoint_data, headers=headers) + assert response.status_code == 201 + + data = response.json() + assert len(data["policies"]) == 1 + policy = data["policies"][0] + assert policy["type"] == "xendit" + assert policy["version"] == "1.0" + assert policy["config"]["price_per_request"] == 500.0 + assert policy["config"]["currency"] == "IDR" + assert policy["config"]["country"] == "ID" + bundles = policy["config"]["bundles"] + assert len(bundles) == 2 + assert bundles[0]["name"] == "Starter" + assert bundles[0]["amount"] == 10000 + assert bundles[1]["name"] == "Pro" + assert bundles[1]["amount"] == 100000 + assert ( + policy["config"]["credits_url"] + == "https://my-server.example.com/api/v1/payments/gateway/bundles/test-endpoint" + ) + assert "subscription" in data["tags"] + + def test_create_endpoint_with_connections(client: TestClient, user1_token: str) -> None: """Test creating a endpoint with connection configurations.""" headers = {"Authorization": f"Bearer {user1_token}"} @@ 
-2253,11 +2249,11 @@ def test_create_endpoint_with_xendit_policy_auto_tags( == "https://my-server.example.com/api/v1/payments/gateway/xendit/invoices" ) assert ( - policy["config"]["bundle_usage_url"] + policy["config"]["credits_url"] == "https://my-server.example.com/api/v1/payments/gateway/bundles/test-endpoint" ) assert policy["config"]["currency"] == "IDR" - assert len(policy["config"]["bundle_tiers"]) == 2 + assert len(policy["config"]["bundles"]) == 2 # Auto-tag injection assert "subscription" in data["tags"] @@ -2341,7 +2337,7 @@ def test_sync_endpoint_with_xendit_auto_tags( assert "ai" in endpoint["tags"] assert "subscription" in endpoint["tags"] assert endpoint["policies"][0]["type"] == "xendit" - assert endpoint["policies"][0]["config"]["bundle_tiers"][0]["name"] == "Starter" + assert endpoint["policies"][0]["config"]["bundles"][0]["name"] == "Starter" def test_remove_xendit_policy_does_not_remove_tag( diff --git a/components/backend/tests/test_main.py b/components/backend/tests/test_main.py index 285ec675..08bfb168 100644 --- a/components/backend/tests/test_main.py +++ b/components/backend/tests/test_main.py @@ -194,6 +194,7 @@ def test_get_owner_endpoints_user(self): type=EndpointType.MODEL, visibility=EndpointVisibility.PUBLIC, is_active=True, + archived=False, contributors=[], version="1.0.0", readme="", @@ -242,6 +243,7 @@ def test_get_owner_endpoints_organization(self): type=EndpointType.MODEL, visibility=EndpointVisibility.PUBLIC, is_active=True, + archived=False, contributors=[], version="1.0.0", readme="", @@ -311,6 +313,7 @@ def test_get_endpoint_by_owner_and_slug_user(self): type=EndpointType.MODEL, visibility=EndpointVisibility.PUBLIC, is_active=True, + archived=False, contributors=[], version="1.0.0", readme="", @@ -361,6 +364,7 @@ def test_get_endpoint_by_owner_and_slug_organization(self): type=EndpointType.MODEL, visibility=EndpointVisibility.PUBLIC, is_active=True, + archived=False, contributors=[], version="1.0.0", readme="", @@ -421,6 
+425,7 @@ def test_can_access_endpoint_with_org_public(self): type=EndpointType.MODEL, visibility=EndpointVisibility.PUBLIC, is_active=True, + archived=False, contributors=[], version="1.0.0", readme="", @@ -461,6 +466,7 @@ def test_can_access_endpoint_with_org_unauthenticated_private(self): type=EndpointType.MODEL, visibility=EndpointVisibility.PRIVATE, is_active=True, + archived=False, contributors=[], version="1.0.0", readme="", @@ -485,6 +491,7 @@ def test_can_access_endpoint_with_org_admin_access(self): type=EndpointType.MODEL, visibility=EndpointVisibility.PRIVATE, is_active=True, + archived=False, contributors=[], version="1.0.0", readme="", @@ -523,6 +530,7 @@ def test_can_access_endpoint_with_org_user_owned(self, mock_can_access): type=EndpointType.MODEL, visibility=EndpointVisibility.PRIVATE, is_active=True, + archived=False, contributors=[], version="1.0.0", readme="", @@ -567,6 +575,7 @@ def test_can_access_endpoint_with_org_org_internal(self, mock_is_member): type=EndpointType.MODEL, visibility=EndpointVisibility.INTERNAL, is_active=True, + archived=False, contributors=[], version="1.0.0", readme="", @@ -614,6 +623,7 @@ def test_can_access_endpoint_with_org_org_private(self, mock_is_member): type=EndpointType.MODEL, visibility=EndpointVisibility.PRIVATE, is_active=True, + archived=False, contributors=[], version="1.0.0", readme="", @@ -905,6 +915,7 @@ def mock_endpoint_with_connection(self): type=EndpointType.MODEL, visibility=EndpointVisibility.PUBLIC, is_active=True, + archived=False, contributors=[], version="1.0.0", readme="", @@ -935,6 +946,7 @@ def mock_data_source_endpoint(self): type=EndpointType.DATA_SOURCE, visibility=EndpointVisibility.PUBLIC, is_active=True, + archived=False, contributors=[], version="1.0.0", readme="", @@ -1059,6 +1071,7 @@ def test_invoke_endpoint_no_connection( type=EndpointType.MODEL, visibility=EndpointVisibility.PUBLIC, is_active=True, + archived=False, contributors=[], version="1.0.0", readme="", diff --git 
a/components/backend/tests/test_services/test_endpoint_service.py b/components/backend/tests/test_services/test_endpoint_service.py index 8018dc6a..6b07ede8 100644 --- a/components/backend/tests/test_services/test_endpoint_service.py +++ b/components/backend/tests/test_services/test_endpoint_service.py @@ -74,6 +74,7 @@ def sample_endpoint(): policies=[], connect=[], is_active=True, + archived=False, contributors=[], user_id=1, organization_id=None, diff --git a/components/backend/uv.lock b/components/backend/uv.lock index 7f0c6bbd..1bca1678 100644 --- a/components/backend/uv.lock +++ b/components/backend/uv.lock @@ -1996,7 +1996,7 @@ wheels = [ [[package]] name = "pytest" -version = "8.4.2" +version = "9.0.3" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "colorama", marker = "sys_platform == 'win32'" }, @@ -2005,22 +2005,22 @@ dependencies = [ { name = "pluggy" }, { name = "pygments" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/a3/5c/00a0e072241553e1a7496d638deababa67c5058571567b92a7eaa258397c/pytest-8.4.2.tar.gz", hash = "sha256:86c0d0b93306b961d58d62a4db4879f27fe25513d4b969df351abdddb3c30e01", size = 1519618, upload-time = "2025-09-04T14:34:22.711Z" } +sdist = { url = "https://files.pythonhosted.org/packages/7d/0d/549bd94f1a0a402dc8cf64563a117c0f3765662e2e668477624baeec44d5/pytest-9.0.3.tar.gz", hash = "sha256:b86ada508af81d19edeb213c681b1d48246c1a91d304c6c81a427674c17eb91c", size = 1572165, upload-time = "2026-04-07T17:16:18.027Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/a8/a4/20da314d277121d6534b3a980b29035dcd51e6744bd79075a6ce8fa4eb8d/pytest-8.4.2-py3-none-any.whl", hash = "sha256:872f880de3fc3a5bdc88a11b39c9710c3497a547cfa9320bc3c5e62fbf272e79", size = 365750, upload-time = "2025-09-04T14:34:20.226Z" }, + { url = "https://files.pythonhosted.org/packages/d4/24/a372aaf5c9b7208e7112038812994107bc65a84cd00e0354a88c2c77a617/pytest-9.0.3-py3-none-any.whl", hash = 
"sha256:2c5efc453d45394fdd706ade797c0a81091eccd1d6e4bccfcd476e2b8e0ab5d9", size = 375249, upload-time = "2026-04-07T17:16:16.13Z" }, ] [[package]] name = "pytest-asyncio" -version = "1.2.0" +version = "1.3.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "pytest" }, { name = "typing-extensions", marker = "python_full_version < '3.13'" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/42/86/9e3c5f48f7b7b638b216e4b9e645f54d199d7abbbab7a64a13b4e12ba10f/pytest_asyncio-1.2.0.tar.gz", hash = "sha256:c609a64a2a8768462d0c99811ddb8bd2583c33fd33cf7f21af1c142e824ffb57", size = 50119, upload-time = "2025-09-12T07:33:53.816Z" } +sdist = { url = "https://files.pythonhosted.org/packages/90/2c/8af215c0f776415f3590cac4f9086ccefd6fd463befeae41cd4d3f193e5a/pytest_asyncio-1.3.0.tar.gz", hash = "sha256:d7f52f36d231b80ee124cd216ffb19369aa168fc10095013c6b014a34d3ee9e5", size = 50087, upload-time = "2025-11-10T16:07:47.256Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/04/93/2fa34714b7a4ae72f2f8dad66ba17dd9a2c793220719e736dda28b7aec27/pytest_asyncio-1.2.0-py3-none-any.whl", hash = "sha256:8e17ae5e46d8e7efe51ab6494dd2010f4ca8dae51652aa3c8d55acf50bfb2e99", size = 15095, upload-time = "2025-09-12T07:33:52.639Z" }, + { url = "https://files.pythonhosted.org/packages/e5/35/f8b19922b6a25bc0880171a2f1a003eaeb93657475193ab516fd87cac9da/pytest_asyncio-1.3.0-py3-none-any.whl", hash = "sha256:611e26147c7f77640e6d0a92a38ed17c3e9848063698d5c93d5aa7aa11cebff5", size = 15075, upload-time = "2025-11-10T16:07:45.537Z" }, ] [[package]] @@ -2061,11 +2061,11 @@ wheels = [ [[package]] name = "python-multipart" -version = "0.0.22" +version = "0.0.26" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/94/01/979e98d542a70714b0cb2b6728ed0b7c46792b695e3eaec3e20711271ca3/python_multipart-0.0.22.tar.gz", hash = "sha256:7340bef99a7e0032613f56dc36027b959fd3b30a787ed62d310e951f7c3a3a58", size = 37612, 
upload-time = "2026-01-25T10:15:56.219Z" } +sdist = { url = "https://files.pythonhosted.org/packages/88/71/b145a380824a960ebd60e1014256dbb7d2253f2316ff2d73dfd8928ec2c3/python_multipart-0.0.26.tar.gz", hash = "sha256:08fadc45918cd615e26846437f50c5d6d23304da32c341f289a617127b081f17", size = 43501, upload-time = "2026-04-10T14:09:59.473Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/1b/d0/397f9626e711ff749a95d96b7af99b9c566a9bb5129b8e4c10fc4d100304/python_multipart-0.0.22-py3-none-any.whl", hash = "sha256:2b2cd894c83d21bf49d702499531c7bafd057d730c201782048f7945d82de155", size = 24579, upload-time = "2026-01-25T10:15:54.811Z" }, + { url = "https://files.pythonhosted.org/packages/9a/22/f1925cdda983ab66fc8ec6ec8014b959262747e58bdca26a4e3d1da29d56/python_multipart-0.0.26-py3-none-any.whl", hash = "sha256:c0b169f8c4484c13b0dcf2ef0ec3a4adb255c4b7d18d8e420477d2b1dd03f185", size = 28847, upload-time = "2026-04-10T14:09:58.131Z" }, ] [[package]] @@ -2464,15 +2464,15 @@ requires-dist = [ { name = "pydantic-settings", specifier = ">=2.11.0" }, { name = "pyjwt", specifier = ">=2.10.1" }, { name = "pympp", extras = ["tempo"], specifier = ">=0.4.0" }, - { name = "pytest", marker = "extra == 'dev'", specifier = ">=8.0.0" }, - { name = "pytest", marker = "extra == 'test'", specifier = ">=8.0.0" }, + { name = "pytest", marker = "extra == 'dev'", specifier = ">=9.0.3" }, + { name = "pytest", marker = "extra == 'test'", specifier = ">=9.0.3" }, { name = "pytest-asyncio", marker = "extra == 'dev'", specifier = ">=0.24.0" }, { name = "pytest-asyncio", marker = "extra == 'test'", specifier = ">=0.24.0" }, { name = "pytest-cov", marker = "extra == 'dev'", specifier = ">=5.0.0" }, { name = "pytest-cov", marker = "extra == 'test'", specifier = ">=5.0.0" }, { name = "pytest-xdist", marker = "extra == 'dev'", specifier = ">=3.5.0" }, { name = "pytest-xdist", marker = "extra == 'test'", specifier = ">=3.5.0" }, - { name = "python-multipart", specifier = ">=0.0.20" }, + { name 
= "python-multipart", specifier = ">=0.0.26" }, { name = "redis", extras = ["hiredis"], specifier = ">=5.0.0" }, { name = "resend", specifier = ">=2.21.0" }, { name = "ruff", marker = "extra == 'dev'", specifier = ">=0.8.0" }, @@ -2487,7 +2487,7 @@ dev = [ { name = "httpx", specifier = ">=0.28.1" }, { name = "mypy", specifier = ">=1.18.2" }, { name = "pre-commit", specifier = ">=4.3.0" }, - { name = "pytest", specifier = ">=8.4.2" }, + { name = "pytest", specifier = ">=9.0.3" }, { name = "pytest-asyncio", specifier = ">=1.2.0" }, { name = "pytest-cov", specifier = ">=7.0.0" }, { name = "pytest-xdist", specifier = ">=3.5.0" }, diff --git a/components/frontend/README.md b/components/frontend/README.md deleted file mode 100644 index 9af04010..00000000 --- a/components/frontend/README.md +++ /dev/null @@ -1,83 +0,0 @@ -# SyftHub UI - -Modern React application built with Vite, TypeScript, Tailwind CSS, and shadcn/ui components. - -## Tech Stack - -- **React 19** - Latest React with modern features -- **TypeScript** - Type-safe development -- **Vite 7** - Lightning-fast development server and build tool -- **Tailwind CSS 4** - Utility-first CSS framework -- **shadcn/ui** - Beautiful, accessible React components -- **React Router v7** - Client-side routing -- **ESLint 9 & Prettier** - Code quality and formatting -- **SWC** - Speedy Web Compiler for faster builds - -## Getting Started - -### Prerequisites - -- Node.js 18+ -- npm - -### Installation - -```bash -# Install dependencies -npm install - -# Start development server -npm run dev - -# Build for production -npm run build - -# Preview production build -npm run preview -``` - -## Available Scripts - -- `npm run dev` - Start development server on port 3000 -- `npm run build` - Build for production -- `npm run preview` - Preview production build -- `npm run typecheck` - Check TypeScript types -- `npm run lint` - Run ESLint -- `npm run lint:fix` - Fix ESLint errors -- `npm run format` - Format code with Prettier -- `npm 
run test` - Run tests with Playwright -- `npm run test:ui` - Run tests with Playwright UI - -## Project Structure - -``` -syfthub-ui/ -├── src/ -│ ├── components/ # React components -│ │ ├── ui/ # shadcn/ui components -│ │ └── ... # Custom components -│ ├── lib/ # Utility functions -│ ├── styles/ # Global styles -│ ├── assets/ # Static assets -│ ├── app.tsx # Main app component -│ └── main.tsx # Application entry point -├── public/ # Public assets -├── __tests__/ # Test files -└── ...config files -``` - -## Features - -- ⚡ Fast development with Vite and SWC -- 🎨 Modern UI with shadcn/ui components -- 🎯 Type-safe with TypeScript -- 🎨 Styled with Tailwind CSS 4 -- 📦 Optimized production builds -- 🧪 Testing with Playwright -- 🔧 Pre-configured ESLint and Prettier -- 🪝 Git hooks with Husky -- 🌙 Dark mode support - -## License - -MIT diff --git a/components/frontend/package-lock.json b/components/frontend/package-lock.json index cf7c3adc..015dd5cd 100644 --- a/components/frontend/package-lock.json +++ b/components/frontend/package-lock.json @@ -63,7 +63,7 @@ "eslint-plugin-unicorn": "61.0.2", "husky": "9.1.7", "jsdom": "^27.4.0", - "postcss": "^8.5.6", + "postcss": "^8.5.12", "prettier": "3.6.2", "prettier-plugin-tailwindcss": "^0.7.1", "sass": "1.92.1", @@ -8453,9 +8453,9 @@ } }, "node_modules/postcss": { - "version": "8.5.6", - "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.6.tgz", - "integrity": "sha512-3Ybi1tAuwAP9s0r1UQ2J4n5Y0G05bJkpUIO0/bI9MhwmD70S5aTWbXGBwxHrelT+XM1k6dM0pk+SwNkpTRN7Pg==", + "version": "8.5.12", + "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.12.tgz", + "integrity": "sha512-W62t/Se6rA0Az3DfCL0AqJwXuKwBeYg6nOaIgzP+xZ7N5BFCI7DYi1qs6ygUYT6rvfi6t9k65UMLJC+PHZpDAA==", "dev": true, "funding": [ { diff --git a/components/frontend/package.json b/components/frontend/package.json index 42075998..74af5c36 100644 --- a/components/frontend/package.json +++ b/components/frontend/package.json @@ -78,7 +78,7 @@ 
"eslint-plugin-unicorn": "61.0.2", "husky": "9.1.7", "jsdom": "^27.4.0", - "postcss": "^8.5.6", + "postcss": "^8.5.12", "prettier": "3.6.2", "prettier-plugin-tailwindcss": "^0.7.1", "sass": "1.92.1", diff --git a/components/frontend/src/components/endpoint/policy-item.tsx b/components/frontend/src/components/endpoint/policy-item.tsx index c4d8f965..6243fa17 100644 --- a/components/frontend/src/components/endpoint/policy-item.tsx +++ b/components/frontend/src/components/endpoint/policy-item.tsx @@ -33,6 +33,14 @@ const POLICY_TYPE_CONFIG: Record< } > = { // Transaction/Pricing policies + mpp_accounting: { + icon: Coins, + label: 'MPP Micro-payment', + color: 'text-emerald-600 dark:text-emerald-400', + bgColor: 'bg-emerald-50 dark:bg-emerald-950/30', + borderColor: 'border-emerald-200 dark:border-emerald-800', + description: 'Pay-per-request micro-payment via MPP blockchain' + }, transaction: { icon: Coins, label: 'Transaction Policy', diff --git a/components/frontend/src/components/endpoint/xendit-policy-content.tsx b/components/frontend/src/components/endpoint/xendit-policy-content.tsx index a99b5a58..42e5fb6a 100644 --- a/components/frontend/src/components/endpoint/xendit-policy-content.tsx +++ b/components/frontend/src/components/endpoint/xendit-policy-content.tsx @@ -8,11 +8,9 @@ import { Badge } from '@/components/ui/badge'; import { syftClient } from '@/lib/sdk-client'; import { cn } from '@/lib/utils'; -interface BundleTier { +interface MoneyBundle { name: string; - units: number; - unit_type: string; - price: number; + amount: number; } type SubscriptionState = @@ -38,16 +36,7 @@ async function fetchSubscriptionStatus( const data: unknown = await response.json(); if (typeof data !== 'object' || data === null) return { remaining: null }; const d = data as Record; - const remaining = - typeof d.remaining === 'number' - ? d.remaining - : typeof d.credits === 'number' - ? d.credits - : typeof d.units === 'number' - ? d.units - : typeof d.balance === 'number' - ? 
d.balance - : null; + const remaining = typeof d.remaining_balance === 'number' ? d.remaining_balance : null; return { remaining }; } catch { return null; @@ -63,12 +52,12 @@ export const XenditPolicyContent = memo(function XenditPolicyContent({ config, enabled }: Readonly) { - const bundleTiers = Array.isArray(config.bundle_tiers) - ? (config.bundle_tiers as BundleTier[]) - : []; const currency = typeof config.currency === 'string' ? config.currency : 'IDR'; const paymentUrl = isValidUrl(config.payment_url) ? config.payment_url : null; - const bundleUsageUrl = isValidUrl(config.bundle_usage_url) ? config.bundle_usage_url : null; + const bundleUsageUrl = isValidUrl(config.credits_url) ? config.credits_url : null; + const bundles: MoneyBundle[] = Array.isArray(config.bundles) + ? (config.bundles as MoneyBundle[]) + : []; const [status, setStatus] = useState({ state: 'loading' }); @@ -117,15 +106,15 @@ export const XenditPolicyContent = memo(function XenditPolicyContent({ Active subscription {status.remaining === null ? '' - : ` · ${status.remaining.toLocaleString()} requests remaining`} + : ` · ${currency} ${status.remaining.toLocaleString()} remaining`} )} - {/* Tier table + CTA — only when not subscribed */} + {/* Bundle table + CTA — only when not subscribed */} {status.state === 'not_subscribed' && ( <> - {bundleTiers.length > 0 && ( + {bundles.length > 0 && (
@@ -133,16 +122,11 @@ export const XenditPolicyContent = memo(function XenditPolicyContent({
- {bundleTiers.map((tier) => ( -
-
- {tier.name} - - {tier.units.toLocaleString()} {tier.unit_type} - -
+ {bundles.map((bundle) => ( +
+ {bundle.name} - {currency} {tier.price.toLocaleString()} + {currency} {bundle.amount.toLocaleString()}
))} diff --git a/components/mcp/README.md b/components/mcp/README.md deleted file mode 100644 index d0e373cb..00000000 --- a/components/mcp/README.md +++ /dev/null @@ -1,94 +0,0 @@ -# FastMCP Server with SyftBox Integration - -A comprehensive FastMCP server implementation featuring integrated OAuth 2.1 authentication and SyftBox API integration. - -## 🚀 Features - -- **Consolidated Architecture**: Single-server deployment combining MCP protocol and OAuth 2.1 authentication -- **SyftBox Integration**: Built-in OTP authentication flow with automatic token management -- **Secure Token Storage**: Automatic capture and storage of SyftBox access/refresh tokens -- **Seamless API Access**: Existing tools automatically use stored tokens for SyftBox API calls -- **Real-time Authentication**: Complete OAuth 2.1 with PKCE flow implementation - -## 📁 Project Structure - -``` -├── echo.py # Main FastMCP server with integrated OAuth & SyftBox -├── syftbox_client.py # SyftBox API client for OTP authentication -├── fastmcp.json # Server configuration -├── pyproject.toml # Dependencies -└── README_OAuth.md # Legacy OAuth documentation -``` - -## 🔧 Quick Start - -1. **Install Dependencies**: - ```bash - uv sync - ``` - -2. **Start Server**: - ```bash - uv run fastmcp run - ``` - -3. **Access Server**: - - **MCP Endpoint**: `http://localhost:8004/mcp` - - **OAuth Flow**: `http://localhost:8004/oauth/authorize` - - **JWKS**: `http://localhost:8004/.well-known/jwks.json` - -## 🔐 Authentication Flow - -1. **OAuth 2.1 Authorization**: Client initiates OAuth flow with PKCE -2. **SyftBox OTP**: User enters email and receives OTP from SyftBox -3. **Token Capture**: Server automatically stores SyftBox access/refresh tokens -4. 
**Seamless Integration**: Tools automatically use stored tokens for API calls - -## 🛠️ Available Tools - -### Core Tools -- `echo_tool` - Echo input text -- `list_data_sources` - List available data sources (includes SyftBox sources) -- `build_context` - Build context from data sources with SyftBox integration - -### SyftBox Data Sources -- `syftbox_profile` - Fetch user profile using stored tokens -- `syftbox_api:/endpoint` - Access any SyftBox API endpoint - -## 📊 Usage Examples - -```bash -# List all data sources (includes SyftBox integration) -list_data_sources() - -# Fetch SyftBox user profile (uses stored tokens automatically) -build_context(["syftbox_profile"]) - -# Custom SyftBox API call (uses stored tokens automatically) -build_context(["syftbox_api:/api/datasets"]) -``` - -## 🔑 Environment Variables - -```bash -OAUTH_ISSUER=http://localhost:8004 -OAUTH_AUDIENCE=fastmcp-api -API_BASE_URL=http://localhost:8004 -``` - -## 🏗️ Architecture - -- **FastMCP Framework**: Modern MCP server implementation -- **OAuth 2.1 + PKCE**: Secure authorization with proof key -- **JWT Tokens**: RS256 signed tokens with JWKS endpoint -- **SyftBox Client**: OTP authentication integration -- **Token Management**: Automatic storage and refresh handling - -## 📖 Learn More - -- [FastMCP Documentation](https://gofastmcp.com/) -- [MCP Protocol](https://modelcontextprotocol.io/) -- [OAuth 2.1 Specification](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1) - ---- -*Enhanced FastMCP server with integrated SyftBox authentication and API access.* diff --git a/components/mcp/pyproject.toml b/components/mcp/pyproject.toml index 0bb49d91..eb9c3676 100644 --- a/components/mcp/pyproject.toml +++ b/components/mcp/pyproject.toml @@ -17,7 +17,7 @@ dependencies = [ # Web framework dependencies with security fixes "starlette>=0.49.1", # CVE-2025-62727: O(n^2) DoS via Range header "werkzeug>=3.1.6", # CVE-2025-66221, CVE-2026-21860, CVE-2026-27199: Windows device names - 
"authlib>=1.6.9", # CVE-2025-68158, CVE-2026-28802, CVE-2026-27962, CVE-2026-28490, CVE-2026-28498 + "authlib>=1.6.11", # CVE-2025-68158, CVE-2026-28802, CVE-2026-27962, CVE-2026-28490, CVE-2026-28498 # Pydantic (compatible with fastmcp>=2.14.3) "pydantic>=2.11.7", # Authentication and crypto @@ -25,7 +25,7 @@ dependencies = [ "pyjwt>=2.12.0", # CVE-2026-32597: accepts unknown crit header extensions # Utilities "python-dotenv==1.1.0", - "python-multipart>=0.0.5", + "python-multipart>=0.0.26", "email-validator>=2.0.0", # SyftHub integration "syft-accounting-sdk @ git+https://git@github.com/OpenMined/accounting-sdk.git", @@ -48,7 +48,7 @@ override-dependencies = [ "urllib3>=2.6.3", # CVE-2025-66471, CVE-2026-21441 "starlette>=0.49.1", # CVE-2025-62727 "werkzeug>=3.1.6", # CVE-2025-66221, CVE-2026-21860, CVE-2026-27199 - "authlib>=1.6.9", # CVE-2025-68158, CVE-2026-27962, CVE-2026-28490, CVE-2026-28498, CVE-2026-28802 + "authlib>=1.6.11", # CVE-2025-68158, CVE-2026-27962, CVE-2026-28490, CVE-2026-28498, CVE-2026-28802 "aiohttp>=3.13.3", # CVE-2025-69223 "mcp>=1.23.0", # CVE-2025-66416 "pydantic>=2.11.7", # syft-accounting-sdk pins ==2.11.4; override for fastmcp compat diff --git a/components/mcp/uv.lock b/components/mcp/uv.lock index 938522a9..24eccceb 100644 --- a/components/mcp/uv.lock +++ b/components/mcp/uv.lock @@ -5,7 +5,7 @@ requires-python = ">=3.12" [manifest] overrides = [ { name = "aiohttp", specifier = ">=3.13.3" }, - { name = "authlib", specifier = ">=1.6.9" }, + { name = "authlib", specifier = ">=1.6.11" }, { name = "mcp", specifier = ">=1.23.0" }, { name = "pydantic", specifier = ">=2.11.7" }, { name = "requests", specifier = ">=2.32.4" }, @@ -167,14 +167,14 @@ wheels = [ [[package]] name = "authlib" -version = "1.6.9" +version = "1.6.11" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "cryptography" }, ] -sdist = { url = 
"https://files.pythonhosted.org/packages/af/98/00d3dd826d46959ad8e32af2dbb2398868fd9fd0683c26e56d0789bd0e68/authlib-1.6.9.tar.gz", hash = "sha256:d8f2421e7e5980cc1ddb4e32d3f5fa659cfaf60d8eaf3281ebed192e4ab74f04", size = 165134, upload-time = "2026-03-02T07:44:01.998Z" } +sdist = { url = "https://files.pythonhosted.org/packages/28/10/b325d58ffe86815b399334a101e63bc6fa4e1953921cb23703b48a0a0220/authlib-1.6.11.tar.gz", hash = "sha256:64db35b9b01aeccb4715a6c9a6613a06f2bd7be2ab9d2eb89edd1dfc7580a38f", size = 165359, upload-time = "2026-04-16T07:22:50.279Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/53/23/b65f568ed0c22f1efacb744d2db1a33c8068f384b8c9b482b52ebdbc3ef6/authlib-1.6.9-py2.py3-none-any.whl", hash = "sha256:f08b4c14e08f0861dc18a32357b33fbcfd2ea86cfe3fe149484b4d764c4a0ac3", size = 244197, upload-time = "2026-03-02T07:44:00.307Z" }, + { url = "https://files.pythonhosted.org/packages/57/2f/55fca558f925a51db046e5b929deb317ddb05afed74b22d89f4eca578980/authlib-1.6.11-py2.py3-none-any.whl", hash = "sha256:c8687a9a26451c51a34a06fa17bb97cb15bba46a6a626755e2d7f50da8bff3e3", size = 244469, upload-time = "2026-04-16T07:22:48.413Z" }, ] [[package]] @@ -1308,11 +1308,11 @@ wheels = [ [[package]] name = "python-multipart" -version = "0.0.22" +version = "0.0.26" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/94/01/979e98d542a70714b0cb2b6728ed0b7c46792b695e3eaec3e20711271ca3/python_multipart-0.0.22.tar.gz", hash = "sha256:7340bef99a7e0032613f56dc36027b959fd3b30a787ed62d310e951f7c3a3a58", size = 37612, upload-time = "2026-01-25T10:15:56.219Z" } +sdist = { url = "https://files.pythonhosted.org/packages/88/71/b145a380824a960ebd60e1014256dbb7d2253f2316ff2d73dfd8928ec2c3/python_multipart-0.0.26.tar.gz", hash = "sha256:08fadc45918cd615e26846437f50c5d6d23304da32c341f289a617127b081f17", size = 43501, upload-time = "2026-04-10T14:09:59.473Z" } wheels = [ - { url = 
"https://files.pythonhosted.org/packages/1b/d0/397f9626e711ff749a95d96b7af99b9c566a9bb5129b8e4c10fc4d100304/python_multipart-0.0.22-py3-none-any.whl", hash = "sha256:2b2cd894c83d21bf49d702499531c7bafd057d730c201782048f7945d82de155", size = 24579, upload-time = "2026-01-25T10:15:54.811Z" }, + { url = "https://files.pythonhosted.org/packages/9a/22/f1925cdda983ab66fc8ec6ec8014b959262747e58bdca26a4e3d1da29d56/python_multipart-0.0.26-py3-none-any.whl", hash = "sha256:c0b169f8c4484c13b0dcf2ef0ec3a4adb255c4b7d18d8e420477d2b1dd03f185", size = 28847, upload-time = "2026-04-10T14:09:58.131Z" }, ] [[package]] @@ -1608,7 +1608,7 @@ dependencies = [ [package.metadata] requires-dist = [ { name = "aiohttp", specifier = ">=3.13.3" }, - { name = "authlib", specifier = ">=1.6.9" }, + { name = "authlib", specifier = ">=1.6.11" }, { name = "cryptography", specifier = ">=41.0.0" }, { name = "email-validator", specifier = ">=2.0.0" }, { name = "fastmcp", specifier = ">=2.14.3" }, @@ -1617,7 +1617,7 @@ requires-dist = [ { name = "pydantic", specifier = ">=2.11.7" }, { name = "pyjwt", specifier = ">=2.12.0" }, { name = "python-dotenv", specifier = "==1.1.0" }, - { name = "python-multipart", specifier = ">=0.0.5" }, + { name = "python-multipart", specifier = ">=0.0.26" }, { name = "requests", specifier = ">=2.32.4" }, { name = "starlette", specifier = ">=0.49.1" }, { name = "syft-accounting-sdk", git = "https://github.com/OpenMined/accounting-sdk.git" }, diff --git a/docs/agent-endpoint-workflow-design.md b/docs/agent-endpoint-workflow-design.md deleted file mode 100644 index 680afc6b..00000000 --- a/docs/agent-endpoint-workflow-design.md +++ /dev/null @@ -1,1728 +0,0 @@ -# Agent Endpoint Workflow — Architecture Design - -> Design for a new `agent` endpoint type that enables bidirectional, session-based communication between users and remote agents running on desktop/CLI nodes — similar to Claude Code's interactive workflow, but in SyftHub's distributed format. 
- ---- - -## Table of Contents - -1. [Motivation & Problem Statement](#1-motivation--problem-statement) -2. [Design Principles](#2-design-principles) -3. [Architecture Overview](#3-architecture-overview) -4. [Session Lifecycle](#4-session-lifecycle) -5. [Complete Sequence Diagram](#5-complete-sequence-diagram) -6. [WebSocket Protocol (Frontend ↔ Aggregator)](#6-websocket-protocol-frontend--aggregator) -7. [NATS Session Protocol (Aggregator ↔ Space)](#7-nats-session-protocol-aggregator--space) -8. [HTTP Direct Transport (Non-Tunnel Spaces)](#8-http-direct-transport-non-tunnel-spaces) -9. [Aggregator Session Manager](#9-aggregator-session-manager) -10. [Go SDK Server — Agent Handler Framework](#10-go-sdk-server--agent-handler-framework) -11. [Go SDK Client — Agent Resource](#11-go-sdk-client--agent-resource) -12. [TypeScript SDK — Agent Client](#12-typescript-sdk--agent-client) -13. [Frontend — Agent UI](#13-frontend--agent-ui) -14. [Authentication & Token Strategy](#14-authentication--token-strategy) -15. [Comparison: Chat vs Agent Workflow](#15-comparison-chat-vs-agent-workflow) -16. [Error Handling & Edge Cases](#16-error-handling--edge-cases) -17. [Backward Compatibility Checklist](#17-backward-compatibility-checklist) -18. [Implementation Phases](#18-implementation-phases) - ---- - -## 1. 
Motivation & Problem Statement - -### Current Limitation - -The existing SyftHub workflow supports only **request → response** interactions: - -``` -User sends query → Aggregator orchestrates RAG → Model generates response → User receives answer -``` - -This works for stateless Q&A but cannot support: - -- **Multi-step reasoning** where an agent iterates through tool calls -- **Interactive workflows** where the agent pauses for user confirmation -- **Dynamic input** where the user provides additional context mid-execution -- **Long-running tasks** where the agent reports progress over minutes - -### The Agent Paradigm - -An agent endpoint operates as a **session-based, bidirectional conversation** where both sides can push messages at any time: - -``` -User sends prompt → Agent starts working - ← Agent: "Reading file auth.py..." - ← Agent: tool_call(read_file, {path: "auth.py"}) - ← Agent: tool_result(success, contents) - ← Agent: "I found a bug. I want to fix it." - ← Agent: tool_call(write_file, {path: "auth.py"}, requires_confirmation=true) -User: confirm → - ← Agent: tool_result(success) -User: "Also check tests" → - ← Agent: "Checking test files..." - ← Agent: tool_call(read_file, {path: "test_auth.py"}) - ... - ← Agent: session.completed("Fixed 3 bugs across 2 files") -``` - -This is the Claude Code interaction model, generalized for SyftHub's distributed architecture. - ---- - -## 2. Design Principles - -1. **Fully additive** — Zero modifications to existing chat/RAG workflow -2. **Transport-agnostic agent handlers** — Same `AgentSession` API whether transport is NATS or HTTP -3. **Leverage existing infrastructure** — Reuse NATS tunneling, satellite tokens, X25519 encryption -4. **Session as the primitive** — All communication scoped to a session with clear lifecycle -5. **Agent-defined interaction** — The agent (not the platform) decides when to ask for input, which tools need confirmation, and how to stream output - ---- - -## 3. 
Architecture Overview - -### Existing vs Agent Data Path - -```mermaid -graph TB - subgraph "Existing Chat Path (unchanged)" - FE_CHAT[Frontend] -->|"SSE POST"| AG_CHAT[Aggregator
/chat/stream] - AG_CHAT -->|"RAG Pipeline"| AG_ORCH[Orchestrator] - AG_ORCH -->|"HTTP or NATS
req/resp"| SPACE_CHAT[Space Handler] - end - - subgraph "New Agent Path (additive)" - FE_AGENT[Frontend] <-->|"WebSocket"| AG_AGENT[Aggregator
/agent/session] - AG_AGENT <-->|"Session Manager"| SM[SessionTransport] - SM <-->|"NATS session"| SPACE_AGENT[Space
AgentHandler] - SM <-->|"HTTP SSE+POST"| SPACE_HTTP[Space
AgentHandler] - end - - style FE_CHAT fill:#4A90D9,color:#fff - style FE_AGENT fill:#E74C3C,color:#fff - style AG_CHAT fill:#7B68EE,color:#fff - style AG_AGENT fill:#E74C3C,color:#fff -``` - -### Full System View - -```mermaid -graph TB - subgraph "Browser" - UI[Agent UI
useAgentWorkflow] - end - - subgraph "SyftHub Cloud" - NG[Nginx
WebSocket proxy] - BE[Backend Hub
Tokens + Auth] - AG[Aggregator
Session Manager] - NATS[NATS Broker] - end - - subgraph "User's Machine" - SPACE[Desktop/CLI
NATS Client] - AGENT[Agent Handler
Go/Python] - end - - UI <-->|"WebSocket"| NG - NG <-->|"WebSocket"| AG - UI -->|"Token requests"| NG - NG -->|"Token requests"| BE - - AG <-->|"Encrypted pub/sub
session protocol"| NATS - NATS <-->|"Encrypted pub/sub"| SPACE - SPACE --> AGENT - - style UI fill:#4A90D9,color:#fff - style AG fill:#7B68EE,color:#fff - style NATS fill:#27AE60,color:#fff - style SPACE fill:#E74C3C,color:#fff -``` - -### Why WebSocket (not SSE)? - -| Criterion | SSE (current chat) | WebSocket (agent) | -|-----------|--------------------|--------------------| -| Direction | Server → Client only | Bidirectional | -| User input mid-stream | Requires separate POST endpoint | Native — same connection | -| Session identity | Implicit (one SSE per request) | Explicit session_id | -| Reconnection | Reconnect = new request | Reconnect = resume session (v2) | -| Fit for agents | Poor — user can't inject input | Natural | - ---- - -## 4. Session Lifecycle - -```mermaid -stateDiagram-v2 - [*] --> Connecting: Frontend opens WebSocket - Connecting --> Initializing: session.start sent - Initializing --> Running: session.created received - - Running --> Running: agent sends events
(thinking, tool_call, token, status) - Running --> AwaitingInput: agent.request_input - AwaitingInput --> Running: user.message / user.confirm - Running --> Running: user.message (async input) - - Running --> Completed: session.completed - Running --> Failed: session.failed - Running --> Cancelled: user.cancel / user sends session.close - AwaitingInput --> Cancelled: user.cancel - - Completed --> [*] - Failed --> [*] - Cancelled --> [*] - - note right of Running - Agent alternates between - autonomous work and - user interaction - end note - - note right of AwaitingInput - Agent explicitly pauses - and requests user input - (tool confirmation, question, etc.) - end note -``` - -### Session State Transitions - -| Current State | Event | Next State | Actor | -|---------------|-------|------------|-------| -| — | WebSocket opened | `connecting` | Frontend | -| `connecting` | `session.start` sent | `initializing` | Frontend | -| `initializing` | `session.created` received | `running` | Aggregator | -| `running` | `agent.request_input` | `awaiting_input` | Agent | -| `awaiting_input` | `user.message` / `user.confirm` | `running` | User | -| `running` | `session.completed` | `completed` | Agent | -| `running` | `session.failed` | `failed` | Agent | -| `running` / `awaiting_input` | `user.cancel` | `cancelled` | User | -| `running` / `awaiting_input` | `session.close` | `closed` | User | -| any | WebSocket disconnect | `disconnected` | Network | -| any | Inactivity timeout | `timed_out` | Aggregator | - ---- - -## 5. Complete Sequence Diagram - -### Happy Path: Agent with Tool Confirmation - -```mermaid -sequenceDiagram - actor User - participant FE as Frontend
(React) - participant BE as Backend
(Hub API) - participant AG as Aggregator
(Session Mgr) - participant NATS as NATS Broker - participant Space as Desktop/CLI - participant Agent as Agent Handler - - rect rgb(255, 243, 224) - Note over FE,BE: Token Acquisition - FE->>BE: GET /api/v1/token?aud=alice - BE-->>FE: satellite_token (60s) - FE->>BE: POST /api/v1/peer-token - BE-->>FE: peer_token + peer_channel - end - - rect rgb(224, 247, 250) - Note over FE,AG: WebSocket Connection - FE->>AG: WS /aggregator/api/v1/agent/session - FE->>AG: session.start {prompt, endpoint, tokens} - end - - rect rgb(232, 245, 233) - Note over AG,Agent: Session Establishment - AG->>AG: Create session state - AG->>AG: Encrypt payload (X25519 + AES-256-GCM) - AG->>NATS: PUB syfthub.spaces.alice
type: agent_session_start - AG->>NATS: SUB syfthub.peer.{channel} - NATS->>Space: Deliver agent_session_start - Space->>Space: Decrypt, verify token - Space->>Space: Create AgentSession - Space->>Agent: Launch handler goroutine - AG-->>FE: session.created {session_id} - end - - rect rgb(243, 229, 245) - Note over Agent,FE: Agent Execution (bidirectional) - - Agent->>Space: session.SendStatus("Reading file...") - Space->>NATS: PUB syfthub.peer.{channel}
type: agent_event - NATS-->>AG: agent_event {status} - AG-->>FE: agent.status {status: "Reading file..."} - - Agent->>Space: session.SendToolCall(read_file) - Space->>NATS: agent_event {tool_call} - NATS-->>AG: Relay - AG-->>FE: agent.tool_call {tool: "read_file", requires_confirmation: false} - - Agent->>Agent: Execute tool internally - Agent->>Space: session.SendToolResult(success) - Space->>NATS: agent_event {tool_result} - NATS-->>AG: Relay - AG-->>FE: agent.tool_result {status: "success"} - - Agent->>Space: session.SendThinking("Found a bug...") - Space->>NATS: agent_event {thinking} - NATS-->>AG: Relay - AG-->>FE: agent.thinking {content: "Found a bug..."} - - Agent->>Space: session.SendToolCall(write_file, requires_confirm=true) - Space->>NATS: agent_event {tool_call, requires_confirmation: true} - NATS-->>AG: Relay - AG-->>FE: agent.tool_call {tool: "write_file", requires_confirmation: true} - Note over FE: Show Confirm/Deny buttons - end - - rect rgb(255, 235, 238) - Note over User,Agent: User Confirmation - User->>FE: Click "Confirm" - FE->>BE: GET /api/v1/token?aud=alice (refresh) - BE-->>FE: fresh satellite_token - FE->>AG: user.confirm {tool_call_id, satellite_token} - AG->>AG: Encrypt - AG->>NATS: PUB syfthub.spaces.alice
type: agent_user_message - NATS->>Space: Deliver user confirmation - Space->>Space: Decrypt, push to session.recvCh - Agent->>Agent: session.RequestConfirmation() returns true - Agent->>Agent: Execute write_file - Agent->>Space: session.SendToolResult(success) - Space->>NATS: agent_event {tool_result} - NATS-->>AG: Relay - AG-->>FE: agent.tool_result {status: "success"} - end - - rect rgb(232, 245, 233) - Note over User,Agent: Dynamic User Input - User->>FE: "Also check test files" - FE->>AG: user.message {content: "Also check test files"} - AG->>NATS: agent_user_message - NATS->>Space: Deliver - Space->>Agent: Push to session.recvCh - Agent->>Agent: session.Receive() returns user message - Note over Agent: Agent continues with new context - end - - rect rgb(224, 247, 250) - Note over Agent,FE: Session Completion - Agent->>Space: Handler returns nil - Space->>NATS: agent_event {session.completed} - NATS-->>AG: Relay - AG-->>FE: session.completed {summary, usage, duration_ms} - AG->>AG: Cleanup session, unsubscribe NATS - FE->>FE: Close WebSocket - end -``` - ---- - -## 6. WebSocket Protocol (Frontend ↔ Aggregator) - -### Message Envelope - -Every WebSocket message is a JSON object with a common envelope: - -```json -{ - "type": "", - "session_id": "", - "sequence": 42, - "timestamp": "2026-03-19T10:30:00.123Z", - "payload": { } -} -``` - -- `session_id` is absent in `session.start` (assigned by aggregator) -- `sequence` is monotonically increasing per direction (client sequences and server sequences are independent) -- All payloads are JSON objects - -### Client → Server Messages - -```mermaid -classDiagram - class SessionStart { - +type: "session.start" - +payload.prompt: string - +payload.endpoint: EndpointRef - +payload.satellite_token: string - +payload.transaction_token: string? - +payload.peer_token: string? - +payload.peer_channel: string? - +payload.config: AgentConfig? - +payload.messages: Message[]? 
- } - - class UserMessage { - +type: "user.message" - +payload.content: string - +payload.satellite_token: string? - } - - class UserConfirm { - +type: "user.confirm" - +payload.tool_call_id: string - +payload.modifications: string? - } - - class UserDeny { - +type: "user.deny" - +payload.tool_call_id: string - +payload.reason: string? - } - - class UserCancel { - +type: "user.cancel" - } - - class SessionClose { - +type: "session.close" - } - - class Ping { - +type: "ping" - } -``` - -#### `session.start` — Initialize Agent Session - -```json -{ - "type": "session.start", - "sequence": 1, - "timestamp": "2026-03-19T10:30:00Z", - "payload": { - "prompt": "Find and fix the bug in auth.py", - "endpoint": { - "owner": "alice", - "slug": "code-assistant" - }, - "satellite_token": "eyJ...", - "transaction_token": "tx_...", - "peer_token": "peer_...", - "peer_channel": "a1b2c3d4-...", - "config": { - "max_tokens": 4096, - "temperature": 0.7, - "system_prompt": "You are a coding assistant.", - "metadata": { "project": "syfthub" } - }, - "messages": [ - { "role": "user", "content": "Previous context..." }, - { "role": "assistant", "content": "Previous response..." } - ] - } -} -``` - -#### `user.message` — Send Input Mid-Session - -```json -{ - "type": "user.message", - "session_id": "sess_abc123", - "sequence": 2, - "timestamp": "2026-03-19T10:31:15Z", - "payload": { - "content": "Also check the test files for similar issues", - "satellite_token": "eyJ...(fresh token)..." 
- } -} -``` - -#### `user.confirm` / `user.deny` — Respond to Tool Call - -```json -{ - "type": "user.confirm", - "session_id": "sess_abc123", - "sequence": 3, - "payload": { - "tool_call_id": "tc_xyz789" - } -} -``` - -```json -{ - "type": "user.deny", - "session_id": "sess_abc123", - "sequence": 3, - "payload": { - "tool_call_id": "tc_xyz789", - "reason": "Don't modify that file, it's managed by another team" - } -} -``` - -### Server → Client Messages - -```mermaid -classDiagram - class SessionCreated { - +type: "session.created" - +payload.session_id: string - +payload.endpoint: EndpointInfo - } - - class AgentThinking { - +type: "agent.thinking" - +payload.content: string - +payload.is_streaming: boolean - } - - class AgentToolCall { - +type: "agent.tool_call" - +payload.tool_call_id: string - +payload.tool_name: string - +payload.arguments: object - +payload.requires_confirmation: boolean - +payload.description: string? - } - - class AgentToolResult { - +type: "agent.tool_result" - +payload.tool_call_id: string - +payload.status: "success" | "error" - +payload.result: any? - +payload.error: string? - +payload.duration_ms: int - } - - class AgentMessage { - +type: "agent.message" - +payload.content: string - +payload.is_complete: boolean - } - - class AgentToken { - +type: "agent.token" - +payload.content: string - } - - class AgentStatus { - +type: "agent.status" - +payload.status: string - +payload.detail: string? - +payload.progress: Progress? - } - - class AgentRequestInput { - +type: "agent.request_input" - +payload.prompt: string - +payload.input_type: "text"|"confirmation"|"choice" - +payload.choices: string[]? - +payload.default: string? - } - - class AgentError { - +type: "agent.error" - +payload.code: string - +payload.message: string - +payload.recoverable: boolean - } - - class SessionCompleted { - +type: "session.completed" - +payload.summary: string? - +payload.usage: TokenUsage? 
- +payload.duration_ms: int - } - - class SessionFailed { - +type: "session.failed" - +payload.code: string - +payload.message: string - } -``` - -#### Event Type Reference - -| Event | Direction | Purpose | Phase | -|-------|-----------|---------|-------| -| `session.start` | Client→Server | Initialize session | Setup | -| `session.created` | Server→Client | Session confirmed | Setup | -| `agent.thinking` | Server→Client | Agent reasoning (transparency) | Running | -| `agent.tool_call` | Server→Client | Agent wants to use a tool | Running | -| `agent.tool_result` | Server→Client | Tool execution result | Running | -| `agent.message` | Server→Client | Agent text response | Running | -| `agent.token` | Server→Client | Streamed response chunk | Running | -| `agent.status` | Server→Client | Progress update | Running | -| `agent.request_input` | Server→Client | Agent pauses for input | Running→Awaiting | -| `agent.error` | Server→Client | Error (may be recoverable) | Any | -| `user.message` | Client→Server | User sends input | Running/Awaiting | -| `user.confirm` | Client→Server | Confirm tool call | Awaiting | -| `user.deny` | Client→Server | Deny tool call | Awaiting | -| `user.cancel` | Client→Server | Cancel current run | Running/Awaiting | -| `session.close` | Client→Server | Terminate session | Any | -| `session.completed` | Server→Client | Agent finished | Terminal | -| `session.failed` | Server→Client | Unrecoverable error | Terminal | -| `ping`/`pong` | Both | Keepalive | Any | - ---- - -## 7. NATS Session Protocol (Aggregator ↔ Space) - -### Extension of Existing Tunnel Protocol - -The agent session protocol extends `syfthub-tunnel/v1` with new message types. Existing types (`endpoint_request`, `endpoint_response`) are unchanged. 
- -```mermaid -graph LR - subgraph "Existing Types (unchanged)" - ER["endpoint_request"] - ERESP["endpoint_response"] - end - - subgraph "New Agent Types" - ASS["agent_session_start"] - AUM["agent_user_message"] - ASC["agent_session_cancel"] - AE["agent_event"] - end - - subgraph "NATS Subjects" - S1["syfthub.spaces.{username}"] - S2["syfthub.peer.{peer_channel}"] - end - - ER -->|"Aggregator publishes"| S1 - ASS -->|"Aggregator publishes"| S1 - AUM -->|"Aggregator publishes"| S1 - ASC -->|"Aggregator publishes"| S1 - - ERESP -->|"Space publishes"| S2 - AE -->|"Space publishes"| S2 - - style ASS fill:#E74C3C,color:#fff - style AUM fill:#E74C3C,color:#fff - style ASC fill:#E74C3C,color:#fff - style AE fill:#E74C3C,color:#fff -``` - -### Aggregator → Space Messages - -All published to `syfthub.spaces.{username}`: - -#### `agent_session_start` - -```json -{ - "protocol": "syfthub-tunnel/v1", - "type": "agent_session_start", - "correlation_id": "corr-uuid-1", - "session_id": "sess-uuid-1", - "reply_to": "peer-channel-uuid", - "endpoint": { "slug": "code-assistant", "type": "agent" }, - "satellite_token": "eyJ...", - "timeout_ms": 0, - "encryption_info": { - "algorithm": "X25519-ECDH-AES-256-GCM", - "ephemeral_public_key": "base64url...", - "nonce": "base64url..." - }, - "encrypted_payload": "base64url(encrypted JSON)" -} -``` - -Decrypted payload: -```json -{ - "prompt": "Find and fix the bug in auth.py", - "config": { - "max_tokens": 4096, - "temperature": 0.7, - "system_prompt": "You are a coding assistant." - }, - "messages": [], - "transaction_token": "tx_..." -} -``` - -#### `agent_user_message` - -```json -{ - "protocol": "syfthub-tunnel/v1", - "type": "agent_user_message", - "correlation_id": "corr-uuid-2", - "session_id": "sess-uuid-1", - "reply_to": "peer-channel-uuid", - "satellite_token": "eyJ...(fresh)...", - "encryption_info": { ... 
}, - "encrypted_payload": "base64url(encrypted JSON)" -} -``` - -Decrypted payload: -```json -{ - "message_type": "user_message", - "content": "Also check test files" -} -``` - -Or for confirmations: -```json -{ - "message_type": "user_confirm", - "tool_call_id": "tc_xyz789" -} -``` - -#### `agent_session_cancel` - -```json -{ - "protocol": "syfthub-tunnel/v1", - "type": "agent_session_cancel", - "correlation_id": "corr-uuid-3", - "session_id": "sess-uuid-1", - "reply_to": "peer-channel-uuid", - "encryption_info": { ... }, - "encrypted_payload": "base64url(encrypted {})" -} -``` - -### Space → Aggregator Messages - -All published to `syfthub.peer.{peer_channel}`: - -#### `agent_event` - -```json -{ - "protocol": "syfthub-tunnel/v1", - "type": "agent_event", - "correlation_id": "corr-uuid-4", - "session_id": "sess-uuid-1", - "endpoint_slug": "code-assistant", - "encryption_info": { ... }, - "encrypted_payload": "base64url(encrypted JSON)", - "timing": { - "received_at": "2026-03-19T10:30:00Z", - "processed_at": "2026-03-19T10:30:00.050Z", - "duration_ms": 50 - } -} -``` - -Decrypted payload (the actual agent event): -```json -{ - "event_type": "tool_call", - "sequence": 5, - "data": { - "tool_call_id": "tc_xyz789", - "tool_name": "write_file", - "arguments": { "path": "auth.py", "content": "..." }, - "requires_confirmation": true, - "description": "Fix authentication bug in auth.py" - } -} -``` - -### NATS Subject Reuse - -```mermaid -sequenceDiagram - participant AG as Aggregator - participant NATS as NATS Broker - participant SP as Space - - Note over SP: Already subscribed to
syfthub.spaces.alice
(handles BOTH endpoint_request
AND agent_session_start) - - AG->>NATS: PUB syfthub.spaces.alice
{type: "agent_session_start", session_id: "S1"} - NATS->>SP: Deliver - - Note over SP: Dispatch by type field:
endpoint_request → existing handler
agent_session_start → new session handler - - SP->>NATS: PUB syfthub.peer.{channel}
{type: "agent_event", session_id: "S1"} - NATS-->>AG: Deliver - - AG->>NATS: PUB syfthub.spaces.alice
{type: "agent_user_message", session_id: "S1"} - NATS->>SP: Deliver - - Note over SP: Route by session_id to
correct AgentSession's recvCh
-```
-
-### Encryption Per Message
-
-Each NATS message is independently encrypted — exactly the same as the existing tunnel protocol:
-
-- Aggregator generates a **new ephemeral keypair per message**
-- Space generates a **new ephemeral keypair per event**
-- AAD = `correlation_id` (unique per message, NOT session_id)
-- Same HKDF info labels: `syfthub-tunnel-request-v1` / `syfthub-tunnel-response-v1`
-
-This means each message has forward secrecy independent of all other messages.
-
----
-
-## 8. HTTP Direct Transport (Non-Tunnel Spaces)
-
-For spaces with a public URL (not tunneling), the aggregator uses HTTP:
-
-```mermaid
-sequenceDiagram
-    participant AG as Aggregator
-    participant SP as Space (HTTP)
-
-    AG->>SP: POST /api/v1/agent/session/start
{prompt, config, satellite_token} - Note over SP: Returns SSE stream (keep-alive) - - SP-->>AG: SSE: agent.status {reading file} - SP-->>AG: SSE: agent.tool_call {read_file} - SP-->>AG: SSE: agent.tool_result {success} - SP-->>AG: SSE: agent.thinking {found bug} - SP-->>AG: SSE: agent.tool_call {write_file, requires_confirmation} - - AG->>SP: POST /api/v1/agent/session/{id}/message
{type: "user_confirm", tool_call_id: "..."}
-
-    SP-->>AG: SSE: agent.tool_result {success}
-    SP-->>AG: SSE: session.completed
-```
-
-### Space HTTP Endpoints (new)
-
-| Method | Path | Purpose |
-|--------|------|---------|
-| `POST` | `/api/v1/agent/session/start` | Start session, returns SSE stream |
-| `POST` | `/api/v1/agent/session/{id}/message` | Send user message/confirm/deny |
-| `POST` | `/api/v1/agent/session/{id}/cancel` | Cancel session |
-
-### Transport Abstraction
-
-```mermaid
-classDiagram
-    class SessionTransport {
-        <<interface>>
-        +send_to_space(message) async
-        +receive_from_space() AsyncGenerator~AgentEvent~
-        +close() async
-    }
-
-    class NATSSessionTransport {
-        -nats_conn: Connection
-        -peer_channel: str
-        -session_id: str
-        -space_pubkey: bytes
-        +send_to_space(message) async
-        +receive_from_space() AsyncGenerator~AgentEvent~
-        +close() async
-    }
-
-    class HTTPSessionTransport {
-        -sse_connection: httpx.Response
-        -space_url: str
-        -session_id: str
-        +send_to_space(message) async
-        +receive_from_space() AsyncGenerator~AgentEvent~
-        +close() async
-    }
-
-    SessionTransport <|.. NATSSessionTransport
-    SessionTransport <|.. HTTPSessionTransport
-```
-
----
-
-## 9. Aggregator Session Manager
-
-### Architecture
-
-```mermaid
-graph TB
-    WS1[WebSocket 1] --> SM[Session Manager]
-    WS2[WebSocket 2] --> SM
-    WS3[WebSocket 3] --> SM
-
-    SM --> S1[Session 1
NATS Transport] - SM --> S2[Session 2
NATS Transport] - SM --> S3[Session 3
HTTP Transport] - - S1 --> NATS[NATS Broker] - S2 --> NATS - S3 --> HTTP[HTTP Space] - - style SM fill:#7B68EE,color:#fff -``` - -### Session State - -```python -@dataclass -class AgentSession: - session_id: str - websocket: WebSocket - transport: SessionTransport - endpoint_ref: ResolvedEndpoint - owner_username: str - state: Literal["initializing", "running", "awaiting_input", - "completed", "failed", "cancelled"] - created_at: datetime - last_activity: datetime - sequence_counter: int = 0 - config: dict = field(default_factory=dict) -``` - -### WebSocket Handler (FastAPI) - -```python -@router.websocket("/agent/session") -async def agent_session(websocket: WebSocket): - await websocket.accept() - session: AgentSession | None = None - - try: - # 1. Wait for session.start message - start_msg = await asyncio.wait_for( - websocket.receive_json(), timeout=30.0 - ) - validate_session_start(start_msg) - - # 2. Resolve endpoint, create transport - endpoint = resolve_agent_endpoint(start_msg) - transport = create_transport(endpoint, start_msg) - - # 3. Create session - session = AgentSession( - session_id=str(uuid.uuid4()), - websocket=websocket, - transport=transport, - endpoint_ref=endpoint, - ... - ) - - # 4. Send session.created to frontend - await websocket.send_json({ - "type": "session.created", - "session_id": session.session_id, - ... - }) - - # 5. 
Start bidirectional relay - await asyncio.gather( - relay_space_to_frontend(session), # Transport → WebSocket - relay_frontend_to_space(session), # WebSocket → Transport - ) - - except WebSocketDisconnect: - if session: - await session.transport.send_to_space(cancel_message()) - finally: - if session: - await session.transport.close() -``` - -### Relay Coroutines - -```python -async def relay_space_to_frontend(session: AgentSession): - """Forward agent events from space to frontend WebSocket.""" - async for event in session.transport.receive_from_space(): - session.last_activity = datetime.utcnow() - session.sequence_counter += 1 - - ws_message = { - "type": event.event_type, # agent.thinking, agent.tool_call, etc. - "session_id": session.session_id, - "sequence": session.sequence_counter, - "timestamp": datetime.utcnow().isoformat(), - "payload": event.data, - } - - # Update session state based on event - if event.event_type == "agent.request_input": - session.state = "awaiting_input" - elif event.event_type == "session.completed": - session.state = "completed" - elif event.event_type == "session.failed": - session.state = "failed" - else: - session.state = "running" - - await session.websocket.send_json(ws_message) - - if session.state in ("completed", "failed"): - break - - -async def relay_frontend_to_space(session: AgentSession): - """Forward user messages from frontend WebSocket to space.""" - while session.state not in ("completed", "failed", "cancelled"): - try: - msg = await asyncio.wait_for( - session.websocket.receive_json(), - timeout=1800.0 # 30-minute inactivity timeout - ) - except asyncio.TimeoutError: - session.state = "timed_out" - await session.transport.send_to_space(cancel_message()) - break - - session.last_activity = datetime.utcnow() - - if msg["type"] == "user.cancel": - session.state = "cancelled" - await session.transport.send_to_space(cancel_message()) - break - elif msg["type"] == "session.close": - session.state = "cancelled" - await 
session.transport.send_to_space(cancel_message())
-            break
-        elif msg["type"] in ("user.message", "user.confirm", "user.deny"):
-            await session.transport.send_to_space(msg)
-        elif msg["type"] == "ping":
-            await session.websocket.send_json({"type": "pong"})
-```
-
----
-
-## 10. Go SDK Server — Agent Handler Framework
-
-### Handler Interface
-
-```mermaid
-classDiagram
-    class AgentSession {
-        +ID: string
-        +InitialPrompt: string
-        +Messages: []Message
-        +Config: AgentConfig
-        +User: UserContext
-        +Context(): context.Context
-        +Send(event AgentEvent): error
-        +SendThinking(text string): error
-        +SendToolCall(call ToolCall): error
-        +SendToolResult(result ToolResult): error
-        +SendMessage(content string): error
-        +SendToken(token string): error
-        +SendStatus(status string, detail string): error
-        +RequestInput(prompt string): UserMessage, error
-        +RequestConfirmation(action string): bool, error
-        +Receive(): UserMessage, error
-        +AddMessage(role string, content string)
-    }
-
-    class AgentHandler {
-        <<interface>>
-        +func(ctx, *AgentSession) error
-    }
-
-    class ToolCall {
-        +ID: string
-        +Name: string
-        +Arguments: map[string]any
-        +RequiresConfirmation: bool
-        +Description: string
-    }
-
-    class ToolResult {
-        +ToolCallID: string
-        +Status: string
-        +Result: any
-        +Error: string
-        +DurationMs: int
-    }
-
-    class UserMessage {
-        +Type: string
-        +Content: string
-        +ToolCallID: string
-        +Reason: string
-    }
-
-    AgentSession --> ToolCall
-    AgentSession --> ToolResult
-    AgentSession --> UserMessage
-    AgentHandler --> AgentSession
-```
-
-### Registration API
-
-```go
-// Endpoint registration (alongside existing DataSource/Model)
-api.Agent("code-assistant").
-	Name("Code Assistant").
-	Description("An AI agent that can read, write, and debug code").
-	Version("1.0.0").
-	Tools([]syfthubapi.ToolDef{
-		{Name: "read_file", Description: "Read a file", RequiresConfirmation: false},
-		{Name: "write_file", Description: "Write a file", RequiresConfirmation: true},
-		{Name: "run_command", Description: "Execute a command", RequiresConfirmation: true},
-		{Name: "search_code", Description: "Search codebase", RequiresConfirmation: false},
-	}).
-	Handler(myAgentHandler)
-```
-
-### Example Agent Handler
-
-```go
-func myAgentHandler(ctx context.Context, session *syfthubapi.AgentSession) error {
-	prompt := session.InitialPrompt
-
-	for {
-		// 1. Think about the prompt
-		session.SendThinking("Analyzing the request: " + prompt)
-
-		// 2. Decide on action
-		session.SendStatus("searching", "Looking for relevant files...")
-
-		// 3. Use a tool (no confirmation needed)
-		toolCallID := uuid.New().String()
-		session.SendToolCall(syfthubapi.ToolCall{
-			ID:                   toolCallID,
-			Name:                 "read_file",
-			Arguments:            map[string]any{"path": "auth.py"},
-			RequiresConfirmation: false,
-			Description:          "Reading auth.py to understand the code",
-		})
-
-		content, err := readFile("auth.py")
-		if err != nil {
-			return err
-		}
-
-		session.SendToolResult(syfthubapi.ToolResult{
-			ToolCallID: toolCallID,
-			Status:     "success",
-			Result:     content,
-		})
-
-		// 4. Use a tool that requires confirmation
-		confirmed, err := session.RequestConfirmation(
-			"I want to modify auth.py to fix the token validation bug",
-		)
-		if err != nil {
-			return err // Session cancelled or disconnected
-		}
-
-		if confirmed {
-			// Apply fix
-			session.SendStatus("writing", "Applying fix to auth.py")
-			// ... write file ...
-			session.SendMessage("Fixed the bug in auth.py!")
-		} else {
-			session.SendMessage("OK, I won't modify auth.py.")
-		}
-
-		// 5. 
Check for additional user input - session.SendMessage("Is there anything else you'd like me to do?") - msg, err := session.Receive() - if err != nil { - return err // Session ended - } - - if msg.Content == "" || strings.ToLower(msg.Content) == "no" { - break - } - prompt = msg.Content // Continue with new prompt - } - - return nil // Session completes successfully -} -``` - -### Internal Session Management (Space Side) - -```mermaid -flowchart TD - NATS_MSG[NATS Message Received] --> TYPE_CHECK{Message type?} - - TYPE_CHECK -->|endpoint_request| EXISTING[Existing handler
(unchanged)] - TYPE_CHECK -->|agent_session_start| NEW_SESSION - TYPE_CHECK -->|agent_user_message| ROUTE_SESSION - TYPE_CHECK -->|agent_session_cancel| CANCEL_SESSION - - NEW_SESSION[Create AgentSession
with channels] --> DECRYPT[Decrypt payload] - DECRYPT --> VERIFY[Verify satellite token] - VERIFY --> LOOKUP[Lookup agent endpoint] - LOOKUP --> SPAWN["go handler(ctx, session)"] - SPAWN --> RELAY_LOOP["Relay goroutine:
session.sendCh → NATS
(encrypt & publish to peer_channel)"] - - ROUTE_SESSION[Find session by session_id] --> DECRYPT2[Decrypt payload] - DECRYPT2 --> PUSH["Push to session.recvCh"] - - CANCEL_SESSION[Find session by session_id] --> CTX_CANCEL["Cancel session context
Handler goroutine returns"]
-
-    style NEW_SESSION fill:#E74C3C,color:#fff
-    style ROUTE_SESSION fill:#E8A838,color:#fff
-    style CANCEL_SESSION fill:#95A5A6,color:#fff
-```
-
----
-
-## 11. Go SDK Client — Agent Resource
-
-```mermaid
-classDiagram
-    class Client {
-        +Agent(): *AgentResource
-    }
-
-    class AgentResource {
-        +StartSession(ctx, req): *AgentSessionClient, error
-    }
-
-    class AgentSessionRequest {
-        +Prompt: string
-        +Endpoint: string
-        +Config: *AgentConfig
-        +Messages: []Message
-    }
-
-    class AgentSessionClient {
-        +SessionID: string
-        +SendMessage(ctx, content): error
-        +Confirm(ctx, toolCallID): error
-        +Deny(ctx, toolCallID, reason): error
-        +Cancel(ctx): error
-        +Close(): error
-        +Events(): chan AgentEvent
-        +Errors(): chan error
-    }
-
-    class AgentEvent {
-        <<interface>>
-        +EventType(): string
-        +Sequence(): int
-    }
-
-    Client --> AgentResource
-    AgentResource --> AgentSessionClient
-    AgentSessionClient --> AgentEvent
-```
-
-Usage:
-```go
-client, _ := syfthub.NewClient(syfthub.WithAPIToken("syft_pat_..."))
-session, _ := client.Agent().StartSession(ctx, &syfthub.AgentSessionRequest{
-	Prompt:   "Fix the auth bug",
-	Endpoint: "alice/code-assistant",
-})
-defer session.Close()
-
-for event := range session.Events() {
-	switch e := event.(type) {
-	case *syfthub.ToolCallEvent:
-		if e.RequiresConfirmation {
-			session.Confirm(ctx, e.ToolCallID)
-		}
-	case *syfthub.TokenEvent:
-		fmt.Print(e.Content)
-	case *syfthub.RequestInputEvent:
-		session.SendMessage(ctx, getUserInput())
-	case *syfthub.SessionCompletedEvent:
-		fmt.Println("\nDone:", e.Summary)
-		return
-	}
-}
-```
-
----
-
-## 12. TypeScript SDK — Agent Client
-
-```typescript
-// In @syfthub/sdk
-class AgentResource {
-  async startSession(options: AgentSessionOptions): Promise<AgentSessionClient> {
-    // 1. Resolve endpoint
-    // 2. Fetch tokens (satellite, transaction, peer)
-    // 3. Open WebSocket to aggregator
-    // 4. Send session.start
-    // 5. Wait for session.created
-    // 6. 
Return AgentSessionClient
-  }
-}
-
-interface AgentSessionClient {
-  readonly sessionId: string;
-  readonly state: AgentSessionState;
-
-  // Send messages
-  sendMessage(content: string): Promise<void>;
-  confirm(toolCallId: string): Promise<void>;
-  deny(toolCallId: string, reason?: string): Promise<void>;
-  cancel(): Promise<void>;
-  close(): void;
-
-  // Receive events (async iterable)
-  events(): AsyncIterable<AgentEvent>;
-
-  // Event listeners (alternative API)
-  on(event: 'thinking', handler: (e: ThinkingEvent) => void): void;
-  on(event: 'tool_call', handler: (e: ToolCallEvent) => void): void;
-  on(event: 'tool_result', handler: (e: ToolResultEvent) => void): void;
-  on(event: 'message', handler: (e: MessageEvent) => void): void;
-  on(event: 'token', handler: (e: TokenEvent) => void): void;
-  on(event: 'status', handler: (e: StatusEvent) => void): void;
-  on(event: 'request_input', handler: (e: RequestInputEvent) => void): void;
-  on(event: 'completed', handler: (e: CompletedEvent) => void): void;
-  on(event: 'failed', handler: (e: FailedEvent) => void): void;
-  on(event: 'error', handler: (e: ErrorEvent) => void): void;
-}
-
-// Usage
-const client = new SyftHubClient({ apiToken: '...' });
-const session = await client.agent.startSession({
-  prompt: 'Fix the auth bug',
-  endpoint: 'alice/code-assistant',
-});
-
-for await (const event of session.events()) {
-  if (event.type === 'agent.token') {
-    process.stdout.write(event.payload.content);
-  } else if (event.type === 'agent.tool_call' && event.payload.requires_confirmation) {
-    await session.confirm(event.payload.tool_call_id);
-  } else if (event.type === 'agent.request_input') {
-    const input = await getUserInput(event.payload.prompt);
-    await session.sendMessage(input);
-  }
-}
-```
-
----
-
-## 13. Frontend — Agent UI
-
-### Component Hierarchy
-
-```mermaid
-graph TD
-    AP[AgentPage
/agent/:owner/:slug] --> AV[AgentView] - AV --> AH[AgentHeader
endpoint info + session status] - AV --> AEL[AgentEventList
scrollable event feed] - AV --> AI[AgentInput
adaptive input field] - - AEL --> EC_TH[ThinkingBlock
collapsible reasoning] - AEL --> EC_TC[ToolCallCard
tool name + args + confirm/deny] - AEL --> EC_TR[ToolResultCard
collapsible result] - AEL --> EC_MSG[MessageBubble
markdown message] - AEL --> EC_TOK[StreamingMessage
accumulates tokens] - AEL --> EC_ST[StatusBadge
progress indicator] - AEL --> EC_RI[InputPrompt
highlighted request for input] - AEL --> EC_UM[UserMessage
user's messages] - - AV -->|"uses"| UAW[useAgentWorkflow] - UAW -->|"uses"| SDK[SyftHubClient.agent] - - style AV fill:#4A90D9,color:#fff - style UAW fill:#E8A838,color:#fff - style SDK fill:#7B68EE,color:#fff -``` - -### useAgentWorkflow Hook - -```mermaid -stateDiagram-v2 - [*] --> idle - - idle --> connecting: startSession() - connecting --> running: session.created - connecting --> error: connection failed - - running --> running: agent events - running --> awaiting_input: agent.request_input - awaiting_input --> running: sendMessage() / confirm() - running --> completed: session.completed - running --> failed: session.failed - running --> cancelled: cancel() - - completed --> idle: new session - failed --> idle: new session - cancelled --> idle: new session - error --> idle: retry -``` - -### UI Mockup Flow - -``` -┌─────────────────────────────────────────────┐ -│ Code Assistant (alice/code-assistant) [●] │ ← AgentHeader -├─────────────────────────────────────────────┤ -│ │ -│ You: Find and fix the bug in auth.py │ ← UserMessage -│ │ -│ ┌─ Thinking ─────────────────────────────┐ │ ← ThinkingBlock -│ │ Analyzing the request. I'll start by │ │ (collapsible) -│ │ reading auth.py to understand the code.│ │ -│ └────────────────────────────────────────┘ │ -│ │ -│ ◎ Reading file... │ ← StatusBadge -│ │ -│ ┌─ Tool: read_file ─────────────────────┐ │ ← ToolCallCard -│ │ path: "auth.py" │ │ -│ │ ┌─ Result (success) ───────────────┐ │ │ ← ToolResultCard -│ │ │ def validate_token(token): │ │ │ (collapsible) -│ │ │ ...124 lines... │ │ │ -│ │ └─────────────────────────────────┘ │ │ -│ └────────────────────────────────────────┘ │ -│ │ -│ ┌─ Tool: write_file ────────────────────┐ │ ← ToolCallCard -│ │ path: "auth.py" │ │ (confirmation) -│ │ Fix authentication bug in auth.py │ │ -│ │ │ │ -│ │ [✓ Confirm] [✗ Deny] │ │ ← Action buttons -│ └────────────────────────────────────────┘ │ -│ │ -├─────────────────────────────────────────────┤ -│ ⏳ Waiting for your confirmation... 
│ ← AgentInput -│ ┌─────────────────────────────────── [Send]│ │ (shows prompt) -│ └───────────────────────────────────────────│ -└─────────────────────────────────────────────┘ -``` - ---- - -## 14. Authentication & Token Strategy - -### Token Lifecycle for Agent Sessions - -```mermaid -sequenceDiagram - participant FE as Frontend - participant BE as Backend - participant AG as Aggregator - participant SP as Space - - Note over FE: Session start - FE->>BE: GET /api/v1/token?aud=alice (60s) - BE-->>FE: satellite_token_1 - FE->>AG: session.start {satellite_token_1} - AG->>SP: agent_session_start {satellite_token_1} - SP->>SP: Verify token → establish session - Note over SP: Session verified.
Trust NATS channel for duration. - - Note over FE: 2 minutes later, user sends message - FE->>BE: GET /api/v1/token?aud=alice (fresh 60s) - BE-->>FE: satellite_token_2 - FE->>AG: user.message {satellite_token_2} - AG->>SP: agent_user_message {satellite_token_2} - SP->>SP: Optional re-verification
(validates fresh token) - - Note over SP: Agent events don't need
user tokens — they originate
FROM the space. - SP->>AG: agent_event (no token needed) - AG->>FE: agent.tool_call (no token needed) -``` - -### Token Strategy Summary - -| Message Direction | Token Required | When Verified | -|-------------------|----------------|---------------| -| `session.start` (FE→Space) | Satellite token (mandatory) | At session creation | -| `user.message` (FE→Space) | Satellite token (optional, recommended) | If present, re-verified | -| `user.confirm/deny` (FE→Space) | None | Trusted within session | -| `agent_event` (Space→FE) | None | N/A (originates from space) | -| `session.cancel` (FE→Space) | None | Trusted within session | - -### Peer Token for Agent Sessions - -The peer token (for NATS tunneling) may need a longer TTL for agent sessions: - -- Current default: short-lived (minutes) -- Agent sessions: may last 30+ minutes -- Solution: Backend accepts an optional `session_type: "agent"` parameter in `POST /api/v1/peer-token` that issues a longer-lived peer token (e.g., 2 hours) -- Alternative: Frontend refreshes peer token periodically and aggregator handles re-subscription - ---- - -## 15. Comparison: Chat vs Agent Workflow - -```mermaid -graph TB - subgraph "Chat Workflow (existing)" - C1[User sends query] --> C2[Fetch tokens] - C2 --> C3[POST /chat/stream] - C3 --> C4[Aggregator RAG pipeline] - C4 --> C5[Retrieve → Rerank → Generate] - C5 --> C6[SSE stream response] - C6 --> C7[Display with citations] - Note1["Single request/response
Aggregator orchestrates
SSE unidirectional
Seconds duration"] - end - - subgraph "Agent Workflow (new)" - A1[User sends prompt] --> A2[Fetch tokens] - A2 --> A3[WS /agent/session] - A3 --> A4[Aggregator relay] - A4 --> A5[Space runs agent handler] - A5 --> A6[Bidirectional events] - A6 --> A7{Agent needs input?} - A7 -->|Yes| A8[User provides input] - A8 --> A5 - A7 -->|No| A9[Agent continues] - A9 --> A5 - A5 --> A10[session.completed] - Note2["Multi-turn session
Aggregator relays (no RAG)
WebSocket bidirectional
Minutes-hours duration"] - end - - style C4 fill:#7B68EE,color:#fff - style A4 fill:#E74C3C,color:#fff -``` - -### Feature Comparison - -| Feature | Chat (model/data_source) | Agent | -|---------|-------------------------|-------| -| **Transport (FE↔AG)** | SSE (POST) | WebSocket | -| **Transport (AG↔Space)** | NATS req/resp or HTTP | NATS session or HTTP SSE+POST | -| **Direction** | Unidirectional (server→client) | Bidirectional | -| **Aggregator role** | RAG orchestrator | Message relay | -| **Session duration** | Seconds (single request) | Minutes to hours | -| **User input** | Single prompt | Multiple inputs during session | -| **Agent autonomy** | None (dumb model) | Full (tool calls, reasoning) | -| **Tool calls** | N/A | First-class with confirmation | -| **Streaming** | Token events via SSE | Token events via WebSocket | -| **Endpoint type** | `model`, `data_source` | `agent` | -| **Handler signature** | `(query) → response` | `(session) → runs until done` | - ---- - -## 16. Error Handling & Edge Cases - -### Connection Failures - -```mermaid -flowchart TD - subgraph "WebSocket Disconnect (Frontend)" - WS_DC[Browser disconnect] --> AG_DETECT[Aggregator detects
WebSocket close] - AG_DETECT --> AG_CANCEL["Send agent_session_cancel
to space via NATS"] - AG_CANCEL --> SP_CANCEL[Space cancels
handler context] - SP_CANCEL --> CLEANUP[Session cleanup] - end - - subgraph "Space Goes Offline" - SP_OFF[Desktop/CLI offline] --> NATS_LOST[NATS messages
undelivered] - NATS_LOST --> AG_TIMEOUT[Aggregator message
timeout (30s)] - AG_TIMEOUT --> AG_ERR["Send agent.error to FE
{recoverable: false}"] - AG_ERR --> FE_ERR[Frontend shows
connection lost] - end - - subgraph "Agent Handler Crash" - PANIC[Handler panics] --> RECOVER[Recovery middleware
catches panic] - RECOVER --> SP_FAIL["Space sends session.failed
via NATS"] - SP_FAIL --> AG_RELAY[Aggregator relays
session.failed to FE] - AG_RELAY --> FE_FAIL[Frontend shows
agent error] - end - - subgraph "Inactivity Timeout" - NO_ACTIVITY[30min no messages] --> AG_TIMEOUT2[Aggregator timeout] - AG_TIMEOUT2 --> AG_CANCEL2["Send cancel to space
+ close WebSocket"] - end - - style WS_DC fill:#E8A838,color:#fff - style SP_OFF fill:#E74C3C,color:#fff - style PANIC fill:#E74C3C,color:#fff - style NO_ACTIVITY fill:#95A5A6,color:#fff -``` - -### Error Codes (Agent-Specific) - -| Code | Meaning | Recoverable | -|------|---------|-------------| -| `SESSION_INIT_FAILED` | Couldn't establish session with space | No | -| `AGENT_NOT_FOUND` | Agent endpoint slug not in registry | No | -| `AGENT_DISABLED` | Agent endpoint exists but disabled | No | -| `HANDLER_CRASHED` | Agent handler panicked/errored | No | -| `HANDLER_TIMEOUT` | Handler exceeded max session duration | No | -| `SPACE_DISCONNECTED` | Space went offline mid-session | No | -| `TOKEN_EXPIRED` | Satellite token expired, refresh failed | Yes (send fresh token) | -| `TOOL_EXECUTION_ERROR` | Tool call failed | Yes (agent can retry) | -| `MESSAGE_TOO_LARGE` | Message exceeds size limit | Yes (send smaller) | -| `SESSION_TIMEOUT` | Inactivity timeout exceeded | No | - -### Concurrent User Messages - -```mermaid -sequenceDiagram - participant User - participant Agent - - Note over Agent: Agent is working on task 1 - - User->>Agent: "Also do task 2" - User->>Agent: "And task 3" - - Note over Agent: Messages buffered in recvCh - - Agent->>Agent: session.Receive() → "Also do task 2" - Agent->>Agent: session.Receive() → "And task 3" - Note over Agent: Agent processes in order -``` - -The agent's `recvCh` is a buffered channel. Messages are queued and the agent processes them when ready via `session.Receive()`. - ---- - -## 17. 
Backward Compatibility Checklist
-
-Existing behavior is fully preserved; the only changes required are additive:
-
-| Component | Change Required | Impact on Existing |
-|-----------|----------------|-------------------|
-| **Backend: Auth endpoints** | None | Zero — satellite/peer tokens work as-is |
-| **Backend: Chat endpoints** | None | Zero |
-| **Backend: DB models** | Add `agent` to EndpointType enum | Additive enum value |
-| **Backend: Endpoint sync** | Accept `type: "agent"` | Already handles unknown types |
-| **Aggregator: /chat/stream** | None | Zero |
-| **Aggregator: /chat** | None | Zero |
-| **Aggregator: Orchestrator** | None | Zero |
-| **Aggregator: NATS transport** | Handle new message types | Existing types unchanged |
-| **Go SDK (syfthubapi): processor** | Add `agent` case | Existing cases unchanged |
-| **Go SDK (syfthubapi): NATS transport** | Handle new message types | Existing handler unchanged |
-| **Go SDK (syfthub): Chat resource** | None | Zero |
-| **TS SDK: Chat resource** | None | Zero |
-| **Frontend: Chat components** | None | Zero |
-| **Frontend: Routes** | Add `/agent/:owner/:slug` | Existing routes unchanged |
-| **Nginx** | Add WebSocket proxy rule | Existing rules unchanged |
-| **NATS subjects** | Reuse existing subjects | No new subjects needed |
-| **Encryption** | Same protocol | Same X25519+AES-256-GCM |
-
----
-
-## 18. 
Implementation Phases - -### Phase 1: Foundation (Core Protocol) - -```mermaid -gantt - title Phase 1 — Foundation - dateFormat X - axisFormat %s - - section Backend - Add agent to EndpointType enum :a1, 0, 1 - Accept agent in endpoint sync :a2, 0, 1 - - section Aggregator - WebSocket endpoint stub :b1, 0, 2 - Session Manager :b2, 1, 3 - NATS Session Transport :b3, 2, 4 - - section Go SDK (syfthubapi) - AgentSession struct + channels :c1, 0, 2 - AgentHandler type + registration :c2, 1, 3 - NATS agent message handling :c3, 2, 4 - Session goroutine management :c4, 3, 5 - - section Integration - End-to-end test (WS→NATS→Space) :d1, 4, 6 -``` - -**Deliverables:** -- Agent endpoint type in database -- Working WebSocket → NATS → Space → Agent handler pipeline -- Basic `AgentSession` with `Send()`, `Receive()`, `RequestConfirmation()` - -### Phase 2: Client & UI - -**Deliverables:** -- Go SDK client `Agent()` resource with `StartSession()` -- TypeScript SDK `agent.startSession()` with WebSocket client -- Frontend `useAgentWorkflow` hook -- Frontend `AgentView` with event rendering -- Tool confirmation UI (confirm/deny buttons) - -### Phase 3: Polish & HTTP Transport - -**Deliverables:** -- HTTP direct transport for non-tunneled agent endpoints -- Token refresh during long sessions -- Session timeout/cleanup -- Ping/pong keepalive -- Agent status progress bars -- Thinking block UI (collapsible) - -### Phase 4: Advanced Features - -**Deliverables:** -- Session persistence and resumption after disconnect -- Session history in backend database -- Agent-side data source access (agent calls other SyftHub endpoints) -- File/image attachments in user messages -- Shared tool registry -- Multi-agent orchestration (agent spawns sub-agents) - ---- - -## Appendix A: Full Message Type Catalog - -### Client → Server - -| Type | Payload Fields | When Sent | -|------|---------------|-----------| -| `session.start` | `prompt`, `endpoint`, `satellite_token`, `peer_token?`, `peer_channel?`, 
`transaction_token?`, `config?`, `messages?` | Once, to initialize | -| `user.message` | `content`, `satellite_token?` | Any time during session | -| `user.confirm` | `tool_call_id`, `modifications?` | After `agent.tool_call` with `requires_confirmation` | -| `user.deny` | `tool_call_id`, `reason?` | After `agent.tool_call` with `requires_confirmation` | -| `user.cancel` | — | Any time to stop agent | -| `session.close` | — | To terminate session | -| `ping` | — | Keepalive | - -### Server → Client - -| Type | Payload Fields | When Sent | -|------|---------------|-----------| -| `session.created` | `session_id`, `endpoint` | After successful initialization | -| `agent.thinking` | `content`, `is_streaming` | Agent reasoning | -| `agent.tool_call` | `tool_call_id`, `tool_name`, `arguments`, `requires_confirmation`, `description?` | Agent wants to use tool | -| `agent.tool_result` | `tool_call_id`, `status`, `result?`, `error?`, `duration_ms` | After tool execution | -| `agent.message` | `content`, `is_complete` | Agent text output | -| `agent.token` | `content` | Streamed response chunk | -| `agent.status` | `status`, `detail?`, `progress?` | Progress update | -| `agent.request_input` | `prompt`, `input_type`, `choices?`, `default?` | Agent pauses for input | -| `agent.error` | `code`, `message`, `recoverable` | Error occurred | -| `session.completed` | `summary?`, `usage?`, `duration_ms` | Agent finished | -| `session.failed` | `code`, `message` | Unrecoverable error | -| `pong` | — | Keepalive response | - -## Appendix B: NATS Message Type Catalog - -### Aggregator → Space (on `syfthub.spaces.{username}`) - -| Type | Purpose | Encryption | -|------|---------|-----------| -| `endpoint_request` | Existing: single query | Per-message ephemeral | -| `agent_session_start` | New: initialize agent session | Per-message ephemeral | -| `agent_user_message` | New: relay user input to agent | Per-message ephemeral | -| `agent_session_cancel` | New: cancel agent session | 
Per-message ephemeral | - -### Space → Aggregator (on `syfthub.peer.{peer_channel}`) - -| Type | Purpose | Encryption | -|------|---------|-----------| -| `endpoint_response` | Existing: single response | Per-message ephemeral | -| `agent_event` | New: agent event (wraps all event types) | Per-message ephemeral | - -## Appendix C: Configuration Reference - -| Component | Setting | Default | Purpose | -|-----------|---------|---------|---------| -| Aggregator | `AGENT_SESSION_TIMEOUT_SECONDS` | 1800 (30min) | Inactivity timeout | -| Aggregator | `AGENT_MAX_SESSIONS` | 100 | Max concurrent sessions | -| Aggregator | `AGENT_MAX_MESSAGE_SIZE_BYTES` | 524288 (512KB) | Max per-message size | -| Space | `AGENT_HANDLER_TIMEOUT_SECONDS` | 3600 (1hr) | Max session duration | -| Space | `AGENT_RECV_BUFFER_SIZE` | 100 | Buffered channel size | -| Backend | `AGENT_PEER_TOKEN_TTL_SECONDS` | 7200 (2hr) | Peer token TTL for agent sessions | - -## Appendix D: File Locations (Proposed) - -| Component | New Files | Purpose | -|-----------|-----------|---------| -| **Aggregator** | `api/endpoints/agent.py` | WebSocket endpoint | -| | `services/session_manager.py` | Session lifecycle | -| | `services/session_transport.py` | NATS + HTTP transport abstraction | -| | `schemas/agent.py` | Agent message schemas | -| **Go SDK (syfthubapi)** | `agent.go` | AgentSession, AgentHandler | -| | `agent_builder.go` | Agent endpoint builder | -| | `session_manager.go` | Space-side session tracking | -| **Go SDK (syfthub)** | `agent.go` | Agent client resource | -| | `agent_session.go` | AgentSessionClient | -| **TS SDK** | `resources/agent.ts` | Agent client resource | -| | `types/agent.ts` | Agent event types | -| **Frontend** | `pages/agent.tsx` | Agent page | -| | `components/agent/agent-view.tsx` | Main agent UI | -| | `components/agent/agent-event-list.tsx` | Event feed | -| | `components/agent/event-cards/*.tsx` | Per-event-type cards | -| | `hooks/use-agent-workflow.ts` | Agent workflow hook | 
-| **Backend** | Migration: add `agent` to endpoint type enum | DB schema | diff --git a/docs/llm-chat-workflow.md b/docs/llm-chat-workflow.md deleted file mode 100644 index d6673763..00000000 --- a/docs/llm-chat-workflow.md +++ /dev/null @@ -1,1237 +0,0 @@ -# SyftHub LLM Chat Workflow — Complete Architecture - -> End-to-end trace of a chat request from the React frontend through the backend, aggregator, NATS tunnel, desktop/CLI node, and Go SDK endpoint handler. - ---- - -## Table of Contents - -1. [System Overview](#1-system-overview) -2. [Component Relationship Map](#2-component-relationship-map) -3. [Complete Chat Sequence (High Level)](#3-complete-chat-sequence-high-level) -4. [Phase 1: Frontend — User Input to API Call](#4-phase-1-frontend--user-input-to-api-call) -5. [Phase 2: Token Acquisition](#5-phase-2-token-acquisition) -6. [Phase 3: Aggregator — RAG Orchestration](#6-phase-3-aggregator--rag-orchestration) -7. [Phase 4: Transport Decision — HTTP vs NATS](#7-phase-4-transport-decision--http-vs-nats) -8. [Phase 5: NATS Tunnel Protocol](#8-phase-5-nats-tunnel-protocol) -9. [Phase 6: Desktop/CLI — Endpoint Execution](#9-phase-6-desktopcli--endpoint-execution) -10. [Phase 7: Response Assembly & Streaming](#10-phase-7-response-assembly--streaming) -11. [SSE Event Lifecycle](#11-sse-event-lifecycle) -12. [Authentication & Token Architecture](#12-authentication--token-architecture) -13. [NATS Encryption Protocol](#13-nats-encryption-protocol) -14. [Branch Logic: Streaming vs Non-Streaming](#14-branch-logic-streaming-vs-non-streaming) -15. [Branch Logic: Authenticated vs Guest](#15-branch-logic-authenticated-vs-guest) -16. [Citation & Attribution Pipeline](#16-citation--attribution-pipeline) -17. [Error Handling Across Layers](#17-error-handling-across-layers) -18. [Data Models Reference](#18-data-models-reference) - ---- - -## 1. System Overview - -```mermaid -graph TB - subgraph "User's Browser" - FE[React Frontend
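The client→server catalog in Appendix A implies a straightforward validation rule: each message type has a fixed set of required payload fields (optional fields are marked `?`). A minimal sketch, assuming a hypothetical `{type, payload}` envelope — the field sets come from the table, but `validate_client_message` itself is illustrative, not the shipped validator:

```python
# Hypothetical validator for the Appendix A client→server catalog.
# Required-field sets mirror the table; optional ("?") fields are omitted.
REQUIRED_FIELDS = {
    "session.start": {"prompt", "endpoint", "satellite_token"},
    "user.message": {"content"},
    "user.confirm": {"tool_call_id"},
    "user.deny": {"tool_call_id"},
    "user.cancel": set(),
    "session.close": set(),
    "ping": set(),
}

def validate_client_message(msg: dict) -> bool:
    """True iff the type is known and every required payload field is present."""
    required = REQUIRED_FIELDS.get(msg.get("type"))
    if required is None:
        return False
    return required <= set(msg.get("payload", {}))
```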
port 3000] - end - - subgraph "SyftHub Cloud" - NG[Nginx Reverse Proxy
port 8080] - BE[Backend Hub API
FastAPI · port 8000] - AG[Aggregator Service
FastAPI · port 8001] - NATS[NATS Broker
port 4222 / WS] - DB[(PostgreSQL)] - RD[(Redis)] - end - - subgraph "User's Machine" - DA[Desktop App / CLI
Go · NATS client] - EP[Local Endpoints
Python handlers] - end - - FE -->|"all requests"| NG - NG -->|"/api/v1/*"| BE - NG -->|"/aggregator/api/v1/*"| AG - - BE --> DB - BE --> RD - - AG -->|"HTTP direct"| EP2[Remote Endpoints] - AG -->|"NATS tunnel"| NATS - NATS -->|"encrypted"| DA - DA -->|"subprocess"| EP - - BE -.->|"token endpoints"| FE - AG -.->|"SSE stream"| FE - - style FE fill:#4A90D9,color:#fff - style BE fill:#E8A838,color:#fff - style AG fill:#7B68EE,color:#fff - style NATS fill:#27AE60,color:#fff - style DA fill:#E74C3C,color:#fff - style NG fill:#95A5A6,color:#fff -``` - -**Key insight**: The backend is NOT in the chat request path. Chat requests flow directly from frontend → aggregator. The backend only provides authentication tokens. - ---- - -## 2. Component Relationship Map - -```mermaid -graph LR - subgraph "Frontend Layer" - CV[ChatView] --> UCW[useChatWorkflow] - SI[SearchInput] --> CV - UCW --> SDK_TS["@syfthub/sdk
TypeScript"] - end - - subgraph "Token Layer (Backend)" - SDK_TS -->|"GET /api/v1/token"| SAT[Satellite Token EP] - SDK_TS -->|"POST /api/v1/accounting/transaction-tokens"| TXN[Transaction Token EP] - SDK_TS -->|"POST /api/v1/peer-token"| PEER[Peer Token EP] - end - - subgraph "Aggregator Layer" - SDK_TS -->|"POST /aggregator/api/v1/chat/stream"| CHAT_EP[Chat Stream EP] - CHAT_EP --> ORCH[Orchestrator] - ORCH --> RET[Retrieval Service] - ORCH --> RERANK[Reranker
ONNX] - ORCH --> PB[Prompt Builder] - ORCH --> GEN[Generation Service] - end - - subgraph "Transport Layer" - RET -->|"HTTP"| HTTP_T[HTTP Client] - RET -->|"NATS"| NATS_T[NATS Transport] - GEN -->|"HTTP"| HTTP_T - GEN -->|"NATS"| NATS_T - end - - subgraph "Endpoint Layer (Desktop/CLI)" - NATS_T -->|"encrypted pub/sub"| NATS_H[NATS Handler] - HTTP_T -->|"direct POST"| HTTP_H[HTTP Handler] - NATS_H --> PROC[RequestProcessor] - HTTP_H --> PROC - PROC --> REG[Endpoint Registry] - REG --> EXEC[SubprocessExecutor
Python handler] - end - - style CV fill:#4A90D9,color:#fff - style ORCH fill:#7B68EE,color:#fff - style NATS_T fill:#27AE60,color:#fff - style PROC fill:#E74C3C,color:#fff -``` - ---- - -## 3. Complete Chat Sequence (High Level) - -```mermaid -sequenceDiagram - actor User - participant FE as Frontend
(React) - participant BE as Backend
(FastAPI) - participant AG as Aggregator
(FastAPI) - participant NATS as NATS Broker - participant Space as Desktop/CLI
(Go) - participant EP as Endpoint
(Python) - - User->>FE: Type query, select model & sources - FE->>FE: Validate input, resolve source IDs - - rect rgb(255, 243, 224) - Note over FE,BE: Phase 1 — Token Acquisition - par Satellite Tokens - FE->>BE: GET /api/v1/token?aud={owner} - BE-->>FE: RS256 JWT (60s TTL) - and Transaction Tokens - FE->>BE: POST /api/v1/accounting/transaction-tokens - BE-->>FE: Billing tokens per owner - and Peer Token (if tunneling) - FE->>BE: POST /api/v1/peer-token - BE-->>FE: peer_token + peer_channel + nats_url - end - end - - rect rgb(224, 247, 250) - Note over FE,AG: Phase 2 — Chat Stream - FE->>AG: POST /aggregator/api/v1/chat/stream
(prompt, model, sources, all tokens) - AG-->>FE: SSE: retrieval_start - - rect rgb(232, 245, 233) - Note over AG,EP: Phase 3 — Retrieval (parallel per source) - par For each data source - alt HTTP endpoint - AG->>EP: POST {url}/query
Authorization: Bearer {satellite_token} - EP-->>AG: documents[] - else NATS tunnel endpoint - AG->>AG: Encrypt payload (X25519 + AES-256-GCM) - AG->>NATS: PUB syfthub.spaces.{owner} - NATS->>Space: Encrypted request - Space->>Space: Decrypt, verify token - Space->>EP: Invoke handler - EP-->>Space: documents[] - Space->>Space: Encrypt response - Space->>NATS: PUB syfthub.peer.{channel} - NATS-->>AG: Encrypted response - AG->>AG: Decrypt response - end - end - end - - AG-->>FE: SSE: source_complete (per source) - AG-->>FE: SSE: retrieval_complete - - AG->>AG: Rerank documents (ONNX) - AG-->>FE: SSE: reranking_start / reranking_complete - - AG->>AG: Build augmented prompt - AG-->>FE: SSE: generation_start - - rect rgb(243, 229, 245) - Note over AG,EP: Phase 4 — LLM Generation - alt HTTP model - AG->>EP: POST {url}/query
Authorization: Bearer {satellite_token} - EP-->>AG: LLM response - else NATS tunnel model - AG->>NATS: PUB syfthub.spaces.{owner} - NATS->>Space: Encrypted request - Space->>EP: Invoke model handler - EP-->>Space: LLM response - Space->>NATS: PUB syfthub.peer.{channel} - NATS-->>AG: Encrypted response - end - end - - AG-->>FE: SSE: token (repeated, chunked) - AG-->>FE: SSE: generation_heartbeat (periodic) - AG->>AG: Annotate citations, compute attribution - AG-->>FE: SSE: done (response + sources + metadata) - end - - FE->>FE: Parse citations, update UI - FE->>User: Display formatted response with sources - ``` - - --- - - ## 4. Phase 1: Frontend — User Input to API Call - - ### Component Hierarchy - - ```mermaid - graph TD - CP[ChatPage] -->|"navigation state"| CV[ChatView] - CV -->|"renders"| SI[SearchInput] - CV -->|"renders"| ML[MessageList] - CV -->|"renders"| STAT[StatusIndicator] - CV -->|"uses"| UCW[useChatWorkflow hook] - SI -->|"@mention"| AC[Autocomplete] - SI -->|"model picker"| MP[ModelSelector] - ML -->|"per message"| MM[MarkdownMessage] - MM -->|"citations"| CIT[CitationHighlight] - - UCW -->|"dispatches"| RED[workflowReducer] - UCW -->|"calls"| SDK[SyftHubClient] - RED -->|"state updates"| CV - - style UCW fill:#E8A838,color:#fff - style SDK fill:#4A90D9,color:#fff - ``` - - ### useChatWorkflow State Machine - - ```mermaid - stateDiagram-v2 - [*] --> idle - idle --> preparing: submitQuery() - preparing --> streaming: executeWithSources() - streaming --> streaming: SSE events - streaming --> complete: done event - streaming --> error: error event / abort - preparing --> error: validation failure - complete --> idle: new query - error --> idle: new query - ``` - - ### Frontend Request Flow - - ```mermaid - flowchart TD - A[User clicks Send] --> B{Input valid?} - B -->|No| ERR1[Show validation error] - B -->|Yes| C[Resolve source IDs → full paths] - C --> D[Set phase = preparing] - - D --> E[Collect unique endpoint owners] - E --> F{User authenticated?} - - F 
-->|Yes| G1[GET /api/v1/token?aud=owner<br/>for each unique owner] - F -->|No| G2[GET /api/v1/token/guest?aud=owner<br/>for each unique owner] - - G1 --> H{Any tunneling endpoints?} - G2 --> H - - H -->|Yes| I[POST /api/v1/peer-token
with target_usernames] - H -->|No| J[Skip peer token] - - I --> K[Build ChatRequest body] - J --> K - - F -->|Yes| L[POST /api/v1/accounting/transaction-tokens] - L --> K - - K --> M[POST /aggregator/api/v1/chat/stream] - M --> N[Set phase = streaming] - N --> O[Process SSE events via AsyncIterable] - - style A fill:#4A90D9,color:#fff - style M fill:#7B68EE,color:#fff - style O fill:#27AE60,color:#fff -``` - -### ChatRequest Body (sent to aggregator) - -```json -{ - "prompt": "What are the key features?", - "model": { - "url": "https://space.example.com", - "slug": "gpt-model", - "name": "GPT Model", - "owner_username": "alice" - }, - "data_sources": [ - { - "url": "tunneling:bob", - "slug": "docs-dataset", - "name": "Docs", - "owner_username": "bob" - } - ], - "endpoint_tokens": { - "alice": "eyJ...(satellite JWT)...", - "bob": "eyJ...(satellite JWT)..." - }, - "transaction_tokens": { - "alice": "tx_token_alice", - "bob": "tx_token_bob" - }, - "peer_token": "peer_jwt_for_nats", - "peer_channel": "a1b2c3d4-uuid", - "top_k": 5, - "max_tokens": 1024, - "temperature": 0.7, - "similarity_threshold": 0.5, - "stream": true, - "messages": [ - {"role": "user", "content": "Previous question"}, - {"role": "assistant", "content": "Previous answer"} - ] -} -``` - ---- - -## 5. Phase 2: Token Acquisition - -```mermaid -sequenceDiagram - participant SDK as TS SDK - participant BE as Backend Hub - - Note over SDK: Collect unique owners from model + data_sources - - par Satellite Tokens (one per owner) - SDK->>BE: GET /api/v1/token?aud=alice - BE->>BE: Validate user active
Sign RS256 JWT (sub=user, aud=alice, 60s) - BE-->>SDK: {target_token, expires_in: 60} - - SDK->>BE: GET /api/v1/token?aud=bob - BE-->>SDK: {target_token, expires_in: 60} - and Transaction Tokens (batch) - SDK->>BE: POST /api/v1/accounting/transaction-tokens
{owner_usernames: ["alice", "bob"]} - BE->>BE: For each owner:
1. Look up owner email
2. POST to accounting service - BE-->>SDK: {tokens: {"alice": "tx1", "bob": "tx2"}, errors: {}} - and Peer Token (if tunneling detected) - SDK->>BE: POST /api/v1/peer-token
{target_usernames: ["bob"]} - BE->>BE: Generate peer channel UUID
Store in Redis with TTL - BE-->>SDK: {peer_token, peer_channel, expires_in, nats_url} - end - - Note over SDK: All tokens collected → build ChatRequest -``` - -### Token Types Comparison - -| Token | Endpoint | Signing | TTL | Purpose | Auth Required | -|-------|----------|---------|-----|---------|---------------| -| **Hub Access** | Login | HS256 | 30min | Authenticate with backend | N/A (login) | -| **Satellite** | `GET /api/v1/token` | RS256 | 60s | Authorize endpoint access | Yes (or guest variant) | -| **Transaction** | `POST /api/v1/accounting/transaction-tokens` | External | Varies | Billing authorization | Yes | -| **Peer** | `POST /api/v1/peer-token` | Internal | Short | NATS P2P communication | Yes (or guest variant) | - ---- - -## 6. Phase 3: Aggregator — RAG Orchestration - -### Orchestrator Pipeline - -```mermaid -flowchart TD - REQ[ChatRequest received] --> RESOLVE[Resolve EndpointRefs
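Before requesting any tokens, the SDK collects the distinct owner usernames across the selected model and data sources, since satellite and transaction tokens are issued per owner. A minimal sketch (helper name is ours, not the SDK's):

```python
def unique_owners(model: dict, data_sources: list) -> list:
    """Distinct owner usernames across model + data sources; the SDK
    requests one satellite token (aud=owner) per entry."""
    owners = {model["owner_username"]}
    owners.update(ds["owner_username"] for ds in data_sources)
    return sorted(owners)
```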
→ ResolvedEndpoints] - RESOLVE --> CHECK_TUNNEL{Any tunneling
endpoints?} - - CHECK_TUNNEL -->|Yes, no peer_token| GEN_PEER[Generate ephemeral
peer_channel UUID] - CHECK_TUNNEL -->|Yes, has peer_token| RETRIEVE - CHECK_TUNNEL -->|No tunneling| RETRIEVE - GEN_PEER --> RETRIEVE - - RETRIEVE[Parallel Retrieval
asyncio.gather per source] --> |SSE: retrieval_start| R_START - R_START --> R_EACH - - subgraph "Per Data Source (parallel)" - R_EACH[Query data source] --> R_TYPE{Transport?} - R_TYPE -->|HTTP| R_HTTP[POST url/query
Bearer satellite_token] - R_TYPE -->|NATS| R_NATS[Encrypt & publish
to syfthub.spaces.owner] - R_HTTP --> R_DONE[RetrievalResult] - R_NATS --> R_DONE - end - - R_DONE --> |SSE: source_complete| S_COMPLETE - S_COMPLETE --> ALL_DONE{All sources
complete?} - ALL_DONE -->|No| R_EACH - ALL_DONE -->|"Yes / SSE: retrieval_complete"| RERANK_CHECK - - RERANK_CHECK{Documents > 0?} - RERANK_CHECK -->|No| BUILD_PROMPT - RERANK_CHECK -->|Yes| RERANK - - RERANK[Rerank via ONNX
CENTRAL_REEMBEDDING] --> |SSE: reranking_start/complete| BUILD_PROMPT - - BUILD_PROMPT[PromptBuilder.build
system + context + history + query] --> GEN - - GEN[Generation Service] --> |SSE: generation_start| GEN_TYPE{Transport?} - GEN_TYPE -->|HTTP| GEN_HTTP[POST model_url/query
Bearer satellite_token] - GEN_TYPE -->|NATS| GEN_NATS[Encrypt & publish
to syfthub.spaces.owner] - - GEN_HTTP --> STREAM_CHECK{Streaming enabled?} - GEN_NATS --> STREAM_CHECK - - STREAM_CHECK -->|Yes| TOKENS[Yield token events
SSE: token] - STREAM_CHECK -->|No| HEARTBEAT[Periodic heartbeat
SSE: generation_heartbeat] - - TOKENS --> ANNOTATE - HEARTBEAT --> ANNOTATE - - ANNOTATE[Annotate citations
cite:N → cite:N-start:end] --> ATTRIB[Compute profit_share
per source] - ATTRIB --> |SSE: done| DONE[Final response + metadata] - - style REQ fill:#7B68EE,color:#fff - style RETRIEVE fill:#27AE60,color:#fff - style RERANK fill:#E8A838,color:#fff - style GEN fill:#E74C3C,color:#fff - style DONE fill:#4A90D9,color:#fff -``` - -### Retrieval Service Detail - -```mermaid -flowchart LR - subgraph "retrieve() — parallel mode" - Q[query + data_sources] --> TASKS["asyncio.gather(*tasks)"] - TASKS --> DS1[Source 1: POST url/query] - TASKS --> DS2[Source 2: POST url/query] - TASKS --> DS3[Source 3: NATS tunnel] - DS1 --> MERGE[Merge all RetrievalResults] - DS2 --> MERGE - DS3 --> MERGE - end - - subgraph "retrieve_streaming() — first-completed mode" - Q2[query + data_sources] --> TASKS2["asyncio.wait(FIRST_COMPLETED)"] - TASKS2 --> |"yields as each completes"| YIELD[AsyncGenerator yields
RetrievalResult per source] - end -``` - -### Prompt Builder — Context Assembly - -```mermaid -flowchart TD - PB[PromptBuilder.build] --> HAS_CTX{Has retrieved
documents?} - - HAS_CTX -->|No| NO_CTX[System prompt:
"You are a helpful assistant"] - HAS_CTX -->|Yes| HAS_DICT{context_dict
provided?} - - HAS_DICT -->|Yes| CITE_PATH["Citation path:
System prompt includes numbered docs
[1] Title: content...
Instruct model to use [cite:N]"] - HAS_DICT -->|No| XML_PATH["XML path:
System prompt wraps docs in XML
<context><document>...</document></context>"] - - NO_CTX --> ADD_HIST - CITE_PATH --> ADD_HIST - XML_PATH --> ADD_HIST - - ADD_HIST{Chat history?} - ADD_HIST -->|Yes| HIST[Prepend history messages
user/assistant alternating] - ADD_HIST -->|No| FINAL - - HIST --> FINAL[Final messages array:
system + history + user query] -``` - ---- - -## 7. Phase 4: Transport Decision — HTTP vs NATS - -```mermaid -flowchart TD - EP_URL[endpoint.url] --> CHECK{URL starts with
'tunneling:' ?} - - CHECK -->|No → HTTP| HTTP_PATH - CHECK -->|Yes → NATS| NATS_PATH - - subgraph "HTTP Direct Path" - HTTP_PATH[Build target URL] --> HTTP_REQ["POST {url}/api/v1/endpoints/{slug}/query"] - HTTP_REQ --> HTTP_HEADERS["Headers:
Authorization: Bearer {satellite_token}
Content-Type: application/json
X-Transaction-Token: {txn_token}"] - HTTP_HEADERS --> HTTP_RESP[Parse JSON response] - HTTP_RESP --> HTTP_RETRY{Status 5xx?} - HTTP_RETRY -->|Yes, attempts < 2| HTTP_REQ - HTTP_RETRY -->|No| HTTP_RESULT[Return result] - end - - subgraph "NATS Tunnel Path" - NATS_PATH[Extract username from URL
tunneling:alice → alice] --> FETCH_KEY["Fetch space's X25519 public key
GET /api/v1/nats/encryption-key/{username}
(cached 300s)"] - FETCH_KEY --> ENCRYPT[Generate ephemeral keypair
ECDH + HKDF → AES key
AES-256-GCM encrypt payload] - ENCRYPT --> PUB["Publish to NATS
subject: syfthub.spaces.{username}"] - PUB --> SUB["Subscribe to reply
subject: syfthub.peer.{peer_channel}"] - SUB --> WAIT[Wait for response
timeout: 30s data / 120s model] - WAIT --> DECRYPT[Decrypt response
ECDH with retained ephemeral key] - DECRYPT --> NATS_RESULT[Return result] - end - - style CHECK fill:#E8A838,color:#fff - style HTTP_PATH fill:#4A90D9,color:#fff - style NATS_PATH fill:#27AE60,color:#fff -``` - ---- - -## 8. Phase 5: NATS Tunnel Protocol - -### Message Flow - -```mermaid -sequenceDiagram - participant AG as Aggregator - participant HUB as Hub Backend - participant NATS as NATS Broker - participant SP as Space (Desktop/CLI) - - Note over AG: Need to call endpoint owned by "alice"
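The transport decision above reduces to a prefix check on the endpoint URL: a `tunneling:{username}` URL routes over NATS to the owner's subject, anything else is a direct HTTP POST. A sketch under that assumption (the helper name is ours; the query path is the one shown in the flowchart):

```python
def resolve_transport(url: str, slug: str) -> tuple:
    """Pick the transport for an endpoint URL, mirroring the decision above."""
    if url.startswith("tunneling:"):
        # NATS tunnel: publish to the owner's subject, await on the peer channel
        username = url.split(":", 1)[1]
        return ("nats", f"syfthub.spaces.{username}")
    # HTTP direct: POST to the endpoint's query route with the satellite token
    return ("http", f"{url.rstrip('/')}/api/v1/endpoints/{slug}/query")
```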
URL = "tunneling:alice" - - AG->>HUB: GET /api/v1/nats/encryption-key/alice - HUB-->>AG: {encryption_public_key: "base64url..."} - Note over AG: Cache key for 300s - - AG->>AG: Generate ephemeral X25519 keypair
(eph_priv, eph_pub) - AG->>AG: shared = ECDH(eph_priv, alice_pub) - AG->>AG: aes_key = HKDF(shared, info="syfthub-tunnel-request-v1") - AG->>AG: ciphertext = AES-256-GCM(aes_key, nonce, payload, AAD=correlation_id) - - AG->>NATS: PUB syfthub.spaces.alice
{protocol, correlation_id, reply_to,
encryption_info, encrypted_payload} - - NATS->>SP: Deliver message - - SP->>SP: shared = ECDH(alice_priv, eph_pub) - SP->>SP: aes_key = HKDF(shared, info="syfthub-tunnel-request-v1") - SP->>SP: payload = AES-256-GCM.Open(aes_key, nonce, ciphertext, AAD=correlation_id) - - SP->>SP: Verify satellite_token
Look up endpoint by slug
Invoke handler - - SP->>SP: Generate fresh ephemeral keypair
(resp_eph_priv, resp_eph_pub) - SP->>SP: shared = ECDH(resp_eph_priv, req_eph_pub) - SP->>SP: aes_key = HKDF(shared, info="syfthub-tunnel-response-v1") - SP->>SP: ciphertext = AES-256-GCM(aes_key, nonce, response, AAD=correlation_id) - - SP->>NATS: PUB syfthub.peer.{peer_channel}
{protocol, correlation_id, status,
encryption_info, encrypted_payload, timing} - - NATS-->>AG: Deliver response - - AG->>AG: shared = ECDH(eph_priv, resp_eph_pub) - AG->>AG: aes_key = HKDF(shared, info="syfthub-tunnel-response-v1") - AG->>AG: response = AES-256-GCM.Open(...) -``` - -### NATS Subject Naming - -```mermaid -graph LR - subgraph "Subject Namespace" - S1["syfthub.spaces.{username}"] - S2["syfthub.peer.{peer_channel}"] - end - - AG[Aggregator] -->|"publishes request"| S1 - SP[Space] -->|"subscribes"| S1 - SP -->|"publishes response"| S2 - AG -->|"subscribes"| S2 - - style S1 fill:#27AE60,color:#fff - style S2 fill:#E8A838,color:#fff -``` - -### Wire Message Format - -```mermaid -classDiagram - class TunnelRequest { - +protocol: "syfthub-tunnel/v1" - +type: "endpoint_request" - +correlation_id: UUID - +reply_to: peer_channel - +endpoint: EndpointInfo - +satellite_token: string - +timeout_ms: int - +encryption_info: EncryptionInfo - +encrypted_payload: base64url - } - - class TunnelResponse { - +protocol: "syfthub-tunnel/v1" - +type: "endpoint_response" - +correlation_id: UUID - +status: "success" | "error" - +endpoint_slug: string - +encryption_info: EncryptionInfo - +encrypted_payload: base64url - +error: ErrorInfo? - +timing: TimingInfo - } - - class EncryptionInfo { - +algorithm: "X25519-ECDH-AES-256-GCM" - +ephemeral_public_key: base64url - +nonce: base64url (12 bytes) - } - - class EndpointInfo { - +slug: string - +type: "model" | "data_source" - } - - class TimingInfo { - +received_at: ISO8601 - +processed_at: ISO8601 - +duration_ms: int - } - - TunnelRequest --> EncryptionInfo - TunnelRequest --> EndpointInfo - TunnelResponse --> EncryptionInfo - TunnelResponse --> TimingInfo -``` - ---- - -## 9. Phase 6: Desktop/CLI — Endpoint Execution - -### Space Startup Flow - -```mermaid -sequenceDiagram - participant App as Desktop App - participant FS as Filesystem - participant HUB as Hub Backend - participant NATS as NATS Broker - - App->>FS: Load settings.json
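The tunnel encryption described above — X25519 ECDH against the space's long-term key, HKDF-SHA256 with the request info string, AES-256-GCM with the correlation ID as AAD — can be sketched with the `cryptography` package. This is a request-direction round trip only, with placeholder values for the correlation ID and payload:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(shared: bytes, info: bytes) -> bytes:
    # HKDF-SHA256, no salt, domain-separated by the info string
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=info).derive(shared)

space_priv = X25519PrivateKey.generate()   # space's long-term key
eph_priv = X25519PrivateKey.generate()     # aggregator: fresh ephemeral per request
correlation_id = b"a1b2c3d4"               # placeholder; AAD binds ciphertext to the request

# Aggregator side: derive key, encrypt
key = derive_key(eph_priv.exchange(space_priv.public_key()), b"syfthub-tunnel-request-v1")
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b'{"query": "hello"}', correlation_id)

# Space side: same secret from its long-term key + the sender's ephemeral public key
key2 = derive_key(space_priv.exchange(eph_priv.public_key()), b"syfthub-tunnel-request-v1")
plaintext = AESGCM(key2).decrypt(nonce, ciphertext, correlation_id)
```

The response direction works the same way with a fresh ephemeral keypair and the `syfthub-tunnel-response-v1` info string, which gives forward secrecy in both directions.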
(~/.config/syfthub/) - App->>HUB: Authenticate with API key - HUB-->>App: username, user info - - App->>App: Set SPACE_URL = tunneling:{username} - - alt Key file exists - App->>FS: Load X25519 keypair
(~/.config/syfthub/tunnel_key) - else First run - App->>App: Generate X25519 keypair - App->>FS: Save key atomically
(O_CREATE|O_EXCL, mode 0600) - end - - App->>HUB: PUT /api/v1/nats/encryption-key
{encryption_public_key: base64url} - - App->>FS: Scan endpoints directory
(README.md frontmatter + runner.py) - App->>App: Build endpoint registry - - App->>HUB: POST /api/v1/endpoints/sync
(register all endpoints with hub) - - App->>HUB: GET /api/v1/nats/credentials - HUB-->>App: {nats_url, auth_token} - - App->>NATS: Connect(url, token)
Subscribe("syfthub.spaces.{username}") - Note over App,NATS: Ready to receive requests -``` - -### Request Processing Pipeline - -```mermaid -flowchart TD - MSG[NATS Message Received] --> PARSE[Parse JSON → TunnelRequest] - PARSE --> ENC_CHECK{encryption_info &
encrypted_payload present?} - - ENC_CHECK -->|No| REJECT[Reject — no plaintext allowed] - ENC_CHECK -->|Yes| DECRYPT[Decrypt payload
X25519 ECDH + AES-256-GCM] - - DECRYPT --> VERIFY[Verify satellite_token
POST /api/v1/verify] - VERIFY --> LOOKUP[Registry.Get(slug)] - LOOKUP --> ENABLED{Endpoint enabled?} - - ENABLED -->|No| ERR_DISABLED[Error: ENDPOINT_DISABLED] - ENABLED -->|Yes| TYPE_CHECK{Endpoint type?} - - TYPE_CHECK -->|data_source| DS_PARSE[Parse DataSourceQueryRequest
Extract query from messages] - TYPE_CHECK -->|model| M_PARSE[Parse ModelQueryRequest
Extract messages array] - - DS_PARSE --> INVOKE - M_PARSE --> INVOKE - - INVOKE{File-based endpoint?} - INVOKE -->|Yes| SUBPROCESS[SubprocessExecutor.Execute
Python handler via stdin/stdout] - INVOKE -->|No| IN_MEMORY[Call registered Go handler] - - SUBPROCESS --> RESPONSE[Build TunnelResponse] - IN_MEMORY --> RESPONSE - - RESPONSE --> ENC_RESP[Encrypt response
Fresh ephemeral keypair] - ENC_RESP --> PUBLISH["Publish to NATS
syfthub.peer.{peer_channel}"] - - style MSG fill:#27AE60,color:#fff - style DECRYPT fill:#E8A838,color:#fff - style VERIFY fill:#E74C3C,color:#fff - style PUBLISH fill:#4A90D9,color:#fff -``` - ---- - -## 10. Phase 7: Response Assembly & Streaming - -```mermaid -flowchart TD - subgraph "Aggregator Response Assembly" - GEN_RESULT[Generation result
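The request-processing pipeline above has a fixed order of checks: reject plaintext, decrypt, verify the satellite token, look up the endpoint, check it is enabled, then dispatch. A Python sketch of that control flow (the real implementation is Go; all callables here are stubs passed in by the caller):

```python
def process_tunnel_request(msg, registry, verify_token, decrypt, execute):
    """Sketch of the Space-side order of operations for a tunnel request."""
    # No plaintext allowed: both encryption fields must be present
    if not msg.get("encryption_info") or not msg.get("encrypted_payload"):
        return {"status": "error", "error": "PLAINTEXT_REJECTED"}
    payload = decrypt(msg["encrypted_payload"])
    # Satellite token is verified before any handler runs
    if not verify_token(payload["satellite_token"]):
        return {"status": "error", "error": "INVALID_TOKEN"}
    endpoint = registry.get(msg["endpoint"]["slug"])
    if endpoint is None or not endpoint.get("enabled"):
        return {"status": "error", "error": "ENDPOINT_DISABLED"}
    return {"status": "success", "result": execute(endpoint, payload)}
```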
(raw text with cite:N tags)] --> ANNOTATE["Annotate citations
[cite:N] → [cite:N-start:end]"] - ANNOTATE --> ATTRIB[Compute profit_share
per source using attribution lib] - ATTRIB --> BUILD_RESP[Build final response
+ sources + metadata + usage] - BUILD_RESP --> SSE_DONE["Emit SSE: done"] - end - - subgraph "Frontend Response Processing" - SSE_DONE --> PARSE_EVT[Parse done event] - PARSE_EVT --> UPDATE_STATE[Dispatch SET_COMPLETE
phase = complete] - UPDATE_STATE --> ON_COMPLETE[onComplete callback] - ON_COMPLETE --> ADD_MSG[Add assistant message
to message history] - ADD_MSG --> PARSE_CIT[parseCitations
extract cite:N-start:end markers] - PARSE_CIT --> BUILD_MD[buildCitedMarkdown
inject HTML mark + sup badges] - BUILD_MD --> RENDER[Render MarkdownMessage
with highlighted citations] - end - - style GEN_RESULT fill:#7B68EE,color:#fff - style RENDER fill:#4A90D9,color:#fff -``` - ---- - -## 11. SSE Event Lifecycle - -```mermaid -sequenceDiagram - participant AG as Aggregator - participant FE as Frontend - - Note over AG,FE: SSE Stream (text/event-stream) - - AG->>FE: event: retrieval_start
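The citation-annotation step rewrites bare `[cite:N]` markers into `[cite:N-start:end]` so the frontend can highlight the supported span. A simplified sketch — here we assume the span is the text from the start of the current sentence up to the marker, with offsets relative to the raw (un-annotated) text; the aggregator's actual span logic may differ:

```python
import re

CITE = re.compile(r"\[cite:(\d+)\]")

def annotate_citations(text: str) -> str:
    """Rewrite [cite:N] -> [cite:N-start:end]; offsets index the raw text."""
    def repl(m):
        start = text.rfind(".", 0, m.start()) + 1  # start of the current sentence
        return f"[cite:{m.group(1)}-{start}:{m.start()}]"
    return CITE.sub(repl, text)
```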
data: {"sources": 3} - Note over FE: Initialize progress bar - - AG->>FE: event: source_complete
data: {"path": "alice/docs", "status": "success", "documents": 12} - AG->>FE: event: source_complete
data: {"path": "bob/wiki", "status": "success", "documents": 8} - AG->>FE: event: source_complete
data: {"path": "carol/faq", "status": "error", "documents": 0} - Note over FE: Update per-source status - - AG->>FE: event: retrieval_complete
data: {"total_documents": 20, "time_ms": 1523} - Note over FE: Mark retrieval phase done - - AG->>FE: event: reranking_start
data: {"documents": 20} - AG->>FE: event: reranking_complete
data: {"documents": 5, "time_ms": 342} - Note over FE: Show reranked count - - AG->>FE: event: generation_start
data: {} - Note over FE: Show "Generating..." - - loop Every token chunk - AG->>FE: event: token
data: {"content": "The key "} - AG->>FE: event: token
data: {"content": "features are "} - AG->>FE: event: token
data: {"content": "[cite:1] ..."} - end - - loop Every ~500ms (if non-streaming model) - AG->>FE: event: generation_heartbeat
data: {"elapsed_ms": 2500} - end - - AG->>FE: event: done
data: {"response": "...", "sources": {...},
"metadata": {...}, "usage": {...}, "profit_share": {...}} - Note over FE: Display final response with citations -``` - -### SSE Event Types Reference - -| Event | Payload | Phase | Purpose | -|-------|---------|-------|---------| -| `retrieval_start` | `{sources: N}` | Retrieval | N data sources will be queried | -| `source_complete` | `{path, status, documents}` | Retrieval | One source finished | -| `retrieval_complete` | `{total_documents, time_ms}` | Retrieval | All sources done | -| `reranking_start` | `{documents: N}` | Reranking | Starting to rerank N docs | -| `reranking_complete` | `{documents: N, time_ms}` | Reranking | Top N selected after rerank | -| `generation_start` | `{}` | Generation | LLM generation beginning | -| `generation_heartbeat` | `{elapsed_ms}` | Generation | Periodic liveness signal | -| `token` | `{content: "..."}` | Generation | Streamed response chunk | -| `done` | `{response, sources, metadata, usage, profit_share}` | Complete | Final result | -| `error` | `{message: "..."}` | Error | Pipeline failure | - ---- - -## 12. Authentication & Token Architecture - -```mermaid -graph TB - subgraph "Token Hierarchy" - HUB_TOKEN["Hub Access Token
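The event table above follows the standard `text/event-stream` framing: events are blank-line-separated blocks of `event:` and `data:` fields. A minimal stdlib parser, shown here for a buffered string (a real client would parse incrementally off the wire):

```python
def parse_sse(raw: str) -> list:
    """Parse a text/event-stream buffer into (event, data) pairs."""
    events = []
    for block in raw.strip().split("\n\n"):
        event, data = "message", []
        for line in block.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data.append(line[len("data:"):].strip())
        events.append((event, "\n".join(data)))
    return events
```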
HS256 · 30min
Authenticates user with backend"] - SAT_TOKEN["Satellite Token
RS256 · 60s
Authorizes endpoint access"] - TXN_TOKEN["Transaction Token
External · varies
Billing authorization"] - PEER_TOKEN["Peer Token
Internal · short
NATS P2P auth"] - GUEST_SAT["Guest Satellite Token
RS256 · 60s
sub=guest, no auth needed"] - end - - USER[User Login] -->|"POST /api/v1/auth/login"| HUB_TOKEN - HUB_TOKEN -->|"GET /api/v1/token?aud=X"| SAT_TOKEN - HUB_TOKEN -->|"POST /api/v1/accounting/transaction-tokens"| TXN_TOKEN - HUB_TOKEN -->|"POST /api/v1/peer-token"| PEER_TOKEN - - ANON[Anonymous User] -->|"GET /api/v1/token/guest?aud=X"| GUEST_SAT - ANON -->|"POST /api/v1/nats/guest-peer-token"| PEER_TOKEN - - SAT_TOKEN -->|"in ChatRequest.endpoint_tokens"| AG[Aggregator] - TXN_TOKEN -->|"in ChatRequest.transaction_tokens"| AG - PEER_TOKEN -->|"in ChatRequest.peer_token"| AG - GUEST_SAT -->|"in ChatRequest.endpoint_tokens"| AG - - AG -->|"Authorization: Bearer {sat_token}"| EP[Endpoint] - AG -->|"X-Transaction-Token: {txn}"| EP - - style HUB_TOKEN fill:#E8A838,color:#fff - style SAT_TOKEN fill:#4A90D9,color:#fff - style TXN_TOKEN fill:#E74C3C,color:#fff - style PEER_TOKEN fill:#27AE60,color:#fff -``` - -### Satellite Token Claims - -```mermaid -classDiagram - class SatelliteToken { - +sub: user_id (or "guest") - +aud: target_owner_username - +iss: hub_url - +exp: now + 60s - +role: "admin" | "user" | "guest" - +iat: now - +kid: key_id - --- - Signing: RS256 - Verification: JWKS at /.well-known/jwks.json - } -``` - ---- - -## 13. NATS Encryption Protocol - -```mermaid -graph TB - subgraph "Request Encryption (Aggregator → Space)" - A1[Generate ephemeral keypair
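The satellite token claims catalogued above can be assembled as a plain dict: audience pinned to the endpoint owner, 60-second expiry, role carried alongside. A sketch of claim construction only — `"key-1"` is a placeholder `kid`, and RS256 signing plus JWKS publication happen elsewhere in the backend:

```python
import time

def satellite_claims(user_id, target_owner, hub_url, role="user", key_id="key-1"):
    """Build the claims for a satellite token per the class diagram above."""
    now = int(time.time())
    return {
        "sub": user_id,       # or "guest" for the guest variant
        "aud": target_owner,  # endpoint owner's username
        "iss": hub_url,
        "iat": now,
        "exp": now + 60,      # 60 s TTL
        "role": role,
        "kid": key_id,        # placeholder key id for JWKS lookup
    }
```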
eph_priv, eph_pub] --> A2["ECDH(eph_priv, space_longterm_pub)
→ shared_secret"] - A2 --> A3["HKDF-SHA256(shared_secret)
info='syfthub-tunnel-request-v1'
→ 32-byte AES key"] - A3 --> A4["AES-256-GCM.Seal(key, nonce, payload)
AAD = correlation_id"] - A4 --> A5["Send: eph_pub + nonce + ciphertext"] - end - - subgraph "Request Decryption (Space)" - B1["Receive: eph_pub + nonce + ciphertext"] --> B2["ECDH(space_longterm_priv, eph_pub)
→ same shared_secret"] - B2 --> B3["HKDF-SHA256(shared_secret)
info='syfthub-tunnel-request-v1'
→ same AES key"] - B3 --> B4["AES-256-GCM.Open(key, nonce, ciphertext)
AAD = correlation_id"] - end - - subgraph "Response Encryption (Space → Aggregator)" - C1["Generate fresh ephemeral keypair
resp_eph_priv, resp_eph_pub"] --> C2["ECDH(resp_eph_priv, request_eph_pub)
→ response_shared_secret"] - C2 --> C3["HKDF-SHA256(response_shared_secret)
info='syfthub-tunnel-response-v1'
← different info!"] - C3 --> C4["AES-256-GCM.Seal(key, nonce, response)
AAD = correlation_id"] - C4 --> C5["Send: resp_eph_pub + nonce + ciphertext"] - end - - subgraph "Response Decryption (Aggregator)" - D1["Receive: resp_eph_pub + nonce + ciphertext"] --> D2["ECDH(request_eph_priv, resp_eph_pub)
→ same response_shared_secret"] - D2 --> D3["HKDF-SHA256(response_shared_secret)
info='syfthub-tunnel-response-v1'
→ same AES key"] - D3 --> D4["AES-256-GCM.Open(key, nonce, ciphertext)
AAD = correlation_id"] - end - - A5 -.->|"NATS"| B1 - C5 -.->|"NATS"| D1 - - style A4 fill:#E74C3C,color:#fff - style B4 fill:#27AE60,color:#fff - style C4 fill:#E74C3C,color:#fff - style D4 fill:#27AE60,color:#fff -``` - -### Key Properties - -| Property | Value | -|----------|-------| -| **Key Agreement** | X25519 ECDH | -| **KDF** | HKDF-SHA256, no salt | -| **Symmetric Cipher** | AES-256-GCM (12-byte nonce) | -| **AAD** | correlation_id (UUID) | -| **Domain Separation** | Request: `syfthub-tunnel-request-v1`, Response: `syfthub-tunnel-response-v1` | -| **Forward Secrecy** | Yes — ephemeral keys per request and per response | -| **Key Persistence** | Space long-term key on disk (mode 0600), aggregator ephemeral per-request | - ---- - -## 14. Branch Logic: Streaming vs Non-Streaming - -```mermaid -flowchart TD - REQ[Chat Request] --> STREAM{request.stream?} - - STREAM -->|true| STREAM_PATH - STREAM -->|false| SYNC_PATH - - subgraph "Streaming Path (POST /chat/stream)" - STREAM_PATH[StreamingResponse
media_type=text/event-stream] --> S_RET[retrieve_streaming
asyncio.wait FIRST_COMPLETED] - S_RET --> S_YIELD[Yield source_complete
as each source finishes] - S_YIELD --> S_RERANK[Rerank if documents > 0] - S_RERANK --> S_GEN_CHECK{model_streaming_enabled?} - - S_GEN_CHECK -->|true| S_GEN_STREAM["generate_stream()
yield token events"] - S_GEN_CHECK -->|false| S_GEN_SYNC["generate() as asyncio.Task
yield heartbeat every 500ms
until task completes"] - - S_GEN_STREAM --> S_DONE[Yield done event] - S_GEN_SYNC --> S_DONE - end - - subgraph "Non-Streaming Path (POST /chat)" - SYNC_PATH[JSON Response] --> NS_RET["retrieve()
asyncio.gather all sources"] - NS_RET --> NS_RERANK[Rerank] - NS_RERANK --> NS_GEN["generate()
single call, await result"] - NS_GEN --> NS_RESP[Return ChatResponse JSON] - end - - style STREAM fill:#E8A838,color:#fff - style S_GEN_STREAM fill:#27AE60,color:#fff - style S_GEN_SYNC fill:#7B68EE,color:#fff -``` - ---- - -## 15. Branch Logic: Authenticated vs Guest - -```mermaid -flowchart TD - USER_CHECK{User authenticated?} - - USER_CHECK -->|Yes| AUTH_PATH - USER_CHECK -->|No| GUEST_PATH - - subgraph "Authenticated Path" - AUTH_PATH[Has hub access token] --> AUTH_SAT["GET /api/v1/token?aud={owner}
per unique owner"] - AUTH_SAT --> AUTH_TXN["POST /api/v1/accounting/transaction-tokens
{owner_usernames: [...]}"] - AUTH_TXN --> AUTH_PEER{Tunneling endpoints?} - AUTH_PEER -->|Yes| AUTH_PEER_TOK["POST /api/v1/peer-token
{target_usernames: [...]}"] - AUTH_PEER -->|No| AUTH_BUILD[Build request] - AUTH_PEER_TOK --> AUTH_BUILD - end - - subgraph "Guest Path" - GUEST_PATH[No authentication] --> GUEST_SAT["GET /api/v1/token/guest?aud={owner}
(IP rate-limited)"] - GUEST_SAT --> GUEST_TXN[No transaction tokens
guests cannot be billed] - GUEST_TXN --> GUEST_PEER{Tunneling endpoints?} - GUEST_PEER -->|Yes| GUEST_PEER_TOK["POST /api/v1/nats/guest-peer-token
(IP rate-limited)"] - GUEST_PEER -->|No| GUEST_BUILD[Build request] - GUEST_PEER_TOK --> GUEST_BUILD - end - - AUTH_BUILD --> SEND[Send to Aggregator] - GUEST_BUILD --> SEND - - subgraph "Endpoint-Side Verification" - SEND --> EP_VERIFY{Space verifies token} - EP_VERIFY --> ROLE_CHECK{token.role?} - ROLE_CHECK -->|"user/admin"| FULL_ACCESS[Full access
policies may apply] - ROLE_CHECK -->|"guest"| GUEST_CHECK{Endpoint allows
guest access?} - GUEST_CHECK -->|Yes| LIMITED[Limited access
no billing] - GUEST_CHECK -->|No| DENIED[403 POLICY_DENIED] - end - - style AUTH_PATH fill:#4A90D9,color:#fff - style GUEST_PATH fill:#95A5A6,color:#fff - style DENIED fill:#E74C3C,color:#fff -``` - ---- - -## 16. Citation & Attribution Pipeline - -```mermaid -flowchart TD - subgraph "1. Prompt Construction" - DOCS[Retrieved documents] --> NUMBER["Number documents:
[1] Title: content...
[2] Title: content..."] - NUMBER --> SYSTEM["System prompt instructs:
'Use [cite:N] to reference sources'"] - SYSTEM --> LLM[Send to LLM] - end - - subgraph "2. LLM Generation" - LLM --> RAW["Raw response:
'The key feature [cite:1] is
performance [cite:2]...'"] - end - - subgraph "3. Aggregator Annotation" - RAW --> ANNOTATE["_annotate_cite_positions():
[cite:1] → [cite:1-0:15]
[cite:2] → [cite:2-20:42]
(adds character positions)"] - ANNOTATE --> ATTRIB["_compute_attribution():
Count cite references per source
→ profit_share: {owner/slug: 0.6, ...}"] - end - - subgraph "4. Frontend Rendering" - ATTRIB --> FE_PARSE["parseCitations():
Extract [cite:N-start:end] markers"] - FE_PARSE --> FE_BUILD["buildCitedMarkdown():
Inject HTML highlights
<mark> + <sup> badges"] - FE_BUILD --> RENDER["Render with click-to-source
highlight + source panel"] - end - - style LLM fill:#7B68EE,color:#fff - style ANNOTATE fill:#E8A838,color:#fff - style RENDER fill:#4A90D9,color:#fff -``` - ---- - -## 17. Error Handling Across Layers - -```mermaid -flowchart TD - subgraph "Frontend Errors" - FE1[Validation Error] --> FE_SHOW[Show inline error] - FE2[AuthenticationError] --> FE_REAUTH[Prompt re-login] - FE3[AggregatorError] --> FE_MSG[Show error message] - FE4[AbortError] --> FE_CANCEL[Silently cancel] - FE5[Network Error] --> FE_RETRY[Show connection error] - end - - subgraph "Aggregator Errors" - AG1[Retrieval timeout] --> AG_PARTIAL["Per-source error
SSE: source_complete status=timeout
Continue with other sources"] - AG2[Retrieval error] --> AG_PARTIAL - AG3[Reranking failure] --> AG_FALLBACK["Silent fallback
Use raw score sort"] - AG4[Generation 5xx] --> AG_RETRY["Retry up to 2x"] - AG5[Generation 403] --> AG_FAIL["SSE: error event
{message: 'Access denied'}"] - AG6[NATS timeout] --> AG_NATS_ERR["NATSTransportError
→ SSE: error event"] - end - - subgraph "Space Errors" - SP1[Decryption failure] --> SP_ERR1["Error: DECRYPTION_FAILED
HTTP 400"] - SP2[Token invalid] --> SP_ERR2["Error: AUTH_FAILED
HTTP 401"] - SP3[Endpoint not found] --> SP_ERR3["Error: ENDPOINT_NOT_FOUND
HTTP 404"] - SP4[Policy denied] --> SP_ERR4["Error: POLICY_DENIED
HTTP 403"] - SP5[Handler crash] --> SP_ERR5["Error: EXECUTION_FAILED
HTTP 500"] - SP6[Timeout] --> SP_ERR6["Error: TIMEOUT
HTTP 504"] - end - - AG_PARTIAL -.-> FE3 - AG_FAIL -.-> FE3 - AG_NATS_ERR -.-> FE3 - SP_ERR1 -.-> AG6 - SP_ERR2 -.-> AG5 -``` - -### Error Code Reference (Space → Aggregator) - -| Code | HTTP Status | Meaning | -|------|------------|---------| -| `AUTH_FAILED` | 401 | Satellite token invalid/expired | -| `ENDPOINT_NOT_FOUND` | 404 | Slug not in registry | -| `POLICY_DENIED` | 403 | Endpoint policy rejected request | -| `EXECUTION_FAILED` | 500 | Handler threw an error | -| `TIMEOUT` | 504 | Handler exceeded timeout | -| `INVALID_REQUEST` | 400 | Malformed request payload | -| `ENDPOINT_DISABLED` | 503 | Endpoint exists but disabled | -| `RATE_LIMIT_EXCEEDED` | 429 | Too many requests | -| `DECRYPTION_FAILED` | 400 | NATS payload decrypt error | -| `INTERNAL_ERROR` | 500 | Unexpected server error | - ---- - -## 18. Data Models Reference - -### Request/Response Flow - -```mermaid -classDiagram - class ChatRequest { - +prompt: string - +model: EndpointRef - +data_sources: EndpointRef[] - +endpoint_tokens: map~string,string~ - +transaction_tokens: map~string,string~ - +peer_token: string? - +peer_channel: string? - +top_k: int = 5 - +max_tokens: int = 1024 - +temperature: float = 0.7 - +similarity_threshold: float = 0.5 - +stream: bool - +messages: Message[] - +custom_system_prompt: string? - +retrieval_only: bool = false - } - - class EndpointRef { - +url: string - +slug: string - +name: string - +tenant_name: string? - +owner_username: string? - +query_override: string? - } - - class Message { - +role: "system"|"user"|"assistant" - +content: string - } - - class ChatResponse { - +response: string - +sources: map~string,DocumentSource~ - +retrieval_info: SourceInfo[] - +metadata: ResponseMetadata - +usage: TokenUsage? - +profit_share: map~string,float~? 
- } - - class DocumentSource { - +slug: string - +content: string - } - - class ResponseMetadata { - +retrieval_time_ms: int - +generation_time_ms: int - +total_time_ms: int - } - - class TokenUsage { - +prompt_tokens: int - +completion_tokens: int - +total_tokens: int - } - - ChatRequest --> EndpointRef - ChatRequest --> Message - ChatResponse --> DocumentSource - ChatResponse --> ResponseMetadata - ChatResponse --> TokenUsage -``` - -### Retrieval Data Flow - -```mermaid -classDiagram - class RetrievalResult { - +source_path: string - +documents: Document[] - +status: "success"|"error"|"timeout" - +error_message: string? - +latency_ms: int - } - - class Document { - +content: string - +metadata: map - +score: float - +title: string? - } - - class GenerationResult { - +response: string - +latency_ms: int - +usage: TokenUsage? - } - - class ResolvedEndpoint { - +path: string - +url: string - +slug: string - +name: string - +owner_username: string - +endpoint_type: "model"|"data_source" - +tenant_name: string? 
- } - - RetrievalResult --> Document -``` - ---- - -## Appendix A: File Reference - -| Layer | Key File | Purpose | -|-------|----------|---------| -| **Frontend** | `components/frontend/src/hooks/use-chat-workflow.ts` | Chat workflow state machine | -| | `components/frontend/src/components/chat/chat-view.tsx` | Chat UI container | -| | `components/frontend/src/components/chat/search-input.tsx` | Query input with model/source selection | -| | `components/frontend/src/lib/citation-utils.ts` | Citation parsing & rendering | -| **TS SDK** | `sdk/typescript/src/resources/chat.ts` | Chat API client, SSE parsing | -| | `sdk/typescript/src/resources/auth.ts` | Token acquisition (satellite, transaction, peer) | -| **Backend** | `components/backend/src/syfthub/api/endpoints/token.py` | Satellite token generation | -| | `components/backend/src/syfthub/api/endpoints/accounting.py` | Transaction token generation | -| | `components/backend/src/syfthub/api/endpoints/peer.py` | Peer token generation | -| **Aggregator** | `components/aggregator/src/aggregator/api/endpoints/chat.py` | Chat endpoint handlers | -| | `components/aggregator/src/aggregator/services/orchestrator.py` | RAG pipeline orchestration | -| | `components/aggregator/src/aggregator/services/retrieval.py` | Data source retrieval | -| | `components/aggregator/src/aggregator/services/model.py` | Model client (HTTP) | -| | `components/aggregator/src/aggregator/services/prompt_builder.py` | Prompt construction | -| | `components/aggregator/src/aggregator/clients/nats_transport.py` | NATS client (aggregator side) | -| **Go SDK** | `sdk/golang/syfthub/chat.go` | Hub client chat/stream | -| | `sdk/golang/syfthub/auth.go` | Token acquisition (Go client) | -| | `sdk/golang/syfthubapi/processor.go` | Request processing pipeline | -| | `sdk/golang/syfthubapi/transport/nats.go` | NATS transport (space side) | -| | `sdk/golang/syfthubapi/transport/crypto.go` | X25519 + AES-256-GCM encryption | -| | 
`sdk/golang/syfthubapi/transport/http.go` | HTTP transport (space side) | - -## Appendix B: Environment Variables - -| Component | Variable | Default | Purpose | -|-----------|----------|---------|---------| -| Backend | `SATELLITE_TOKEN_EXPIRE_SECONDS` | 60 | Satellite token TTL | -| Backend | `NATS_AUTH_TOKEN` | — | Required for peer token endpoints | -| Backend | `NATS_WS_PUBLIC_URL` | — | WebSocket URL in peer token response | -| Aggregator | `AGGREGATOR_MODEL_STREAMING_ENABLED` | false | Enable token-by-token streaming from model | -| Aggregator | `AGGREGATOR_SYFTHUB_URL` | — | Hub URL for endpoint resolution | -| Space | `SYFTHUB_URL` | — | Hub backend URL | -| Space | `SYFTHUB_API_KEY` | — | PAT for authentication | -| Space | `SPACE_URL` | — | Public URL or `tunneling:{username}` | -| Space | `SERVER_PORT` | 8000 | HTTP listen port | -| Space | `HEARTBEAT_TTL_SECONDS` | 300 | Health ping interval base | - -## Appendix C: Timeout Reference - -| Timeout | Value | Context | -|---------|-------|---------| -| Satellite token TTL | 60s | Must re-fetch frequently | -| Data source query | 30s | HTTP proxy timeout | -| Model query | 120s | HTTP proxy timeout | -| NATS request timeout | `timeout_ms` in request or 120s default | Per-request configurable | -| Hub API call | 30s | Default httpx timeout | -| Aggregator API call | 120s | Default for generation | -| Heartbeat interval | TTL × 0.8 (default 240s) | Periodic health ping | -| Encryption key cache | 300s | Aggregator caches space public keys | diff --git a/sdk/README.md b/sdk/README.md deleted file mode 100644 index bd8feb0e..00000000 --- a/sdk/README.md +++ /dev/null @@ -1,123 +0,0 @@ -# SyftHub SDKs - -Official SDKs for interacting with the SyftHub API. 
- -## Available SDKs - -| SDK | Language | Directory | Status | -|-----|----------|-----------|--------| -| [Python SDK](./python/) | Python 3.10+ | `sdk/python/` | Stable | -| [TypeScript SDK](./typescript/) | TypeScript/Node.js 18+ | `sdk/typescript/` | Stable | - -## Quick Comparison - -### Installation - -**Python:** -```bash -pip install syfthub-sdk -# or -uv add syfthub-sdk -``` - -**TypeScript:** -```bash -npm install @syfthub/sdk -# or -yarn add @syfthub/sdk -``` - -### Basic Usage - -**Python:** -```python -from syfthub_sdk import SyftHubClient - -client = SyftHubClient(base_url="https://hub.syft.com") -user = await client.auth.login("alice", "password") - -for endpoint in client.hub.browse(): - print(endpoint.name) -``` - -**TypeScript:** -```typescript -import { SyftHubClient } from '@syfthub/sdk'; - -const client = new SyftHubClient({ baseUrl: 'https://hub.syft.com' }); -const user = await client.auth.login('alice', 'password'); - -for await (const endpoint of client.hub.browse()) { - console.log(endpoint.name); -} -``` - -## API Parity - -Both SDKs provide the same functionality with identical APIs (adjusted for language conventions): - -| Feature | Python | TypeScript | -|---------|--------|------------| -| Auth | `client.auth.*` | `client.auth.*` | -| My Endpoints | `client.my_endpoints.*` | `client.myEndpoints.*` | -| Hub | `client.hub.*` | `client.hub.*` | -| Users | `client.users.*` | `client.users.*` | -| Accounting | `client.accounting.*` | `client.accounting.*` | - -### Naming Conventions - -| Python (snake_case) | TypeScript (camelCase) | -|---------------------|------------------------| -| `my_endpoints` | `myEndpoints` | -| `full_name` | `fullName` | -| `get_tokens()` | `getTokens()` | -| `is_authenticated` | `isAuthenticated` | - -### Iteration - -**Python:** -```python -for endpoint in client.hub.browse(): - print(endpoint.name) -``` - -**TypeScript:** -```typescript -for await (const endpoint of client.hub.browse()) { - 
console.log(endpoint.name); -} -``` - -## Environment Variables - -Both SDKs support the same environment variables: - -| Variable | Description | -|----------|-------------| -| `SYFTHUB_URL` | SyftHub API base URL | -| `SYFTHUB_ACCOUNTING_URL` | Accounting service URL (optional) | -| `SYFTHUB_ACCOUNTING_EMAIL` | Accounting auth email (optional) | -| `SYFTHUB_ACCOUNTING_PASSWORD` | Accounting auth password (optional) | - -## Development - -### Python SDK - -```bash -cd sdk/python -uv sync -uv run pytest -``` - -### TypeScript SDK - -```bash -cd sdk/typescript -npm install -npm run build -npm test -``` - -## License - -MIT diff --git a/sdk/golang/README.md b/sdk/golang/README.md deleted file mode 100644 index bf288303..00000000 --- a/sdk/golang/README.md +++ /dev/null @@ -1,456 +0,0 @@ -# SyftHub Go SDK - -Official Go SDK for SyftHub - a platform for RAG-powered AI endpoints. - -## Installation - -```bash -go get github.com/openmined/syfthub/sdk/golang -``` - -## Quick Start - -```go -package main - -import ( - "context" - "fmt" - "log" - - "github.com/openmined/syfthub/sdk/golang/syfthub" -) - -func main() { - // Create client (reads SYFTHUB_URL from environment) - client, err := syfthub.NewClient() - if err != nil { - log.Fatal(err) - } - defer client.Close() - - ctx := context.Background() - - // Login - user, err := client.Auth.Login(ctx, "username", "password") - if err != nil { - log.Fatal(err) - } - fmt.Printf("Logged in as: %s\n", user.Username) - - // RAG Chat Query - chat := client.Chat() - response, err := chat.Complete(ctx, &syfthub.ChatRequest{ - Prompt: "What is machine learning?", - Model: "alice/gpt-model", - DataSources: []string{"bob/ml-docs", "carol/tutorials"}, - }) - if err != nil { - log.Fatal(err) - } - fmt.Println(response.Response) -} -``` - -## Configuration - -### Environment Variables - -| Variable | Description | Default | -|----------|-------------|---------| -| `SYFTHUB_URL` | SyftHub API URL | `https://hub.syft.com` | -| 
`SYFTHUB_AGGREGATOR_URL` | Aggregator service URL | Auto-discovered | -| `SYFTHUB_API_TOKEN` | API token for authentication | - | - -### Client Options - -```go -client, err := syfthub.NewClient( - syfthub.WithBaseURL("https://hub.syft.com"), - syfthub.WithTimeout(30 * time.Second), - syfthub.WithAggregatorURL("https://aggregator.syft.com"), - syfthub.WithAPIToken("your-api-token"), -) -``` - -## Authentication - -### Username/Password Login - -```go -user, err := client.Auth.Login(ctx, "username", "password") -``` - -### API Token Authentication - -```go -// Via environment variable -os.Setenv("SYFTHUB_API_TOKEN", "your-api-token") -client, _ := syfthub.NewClient() - -// Or via option -client, _ := syfthub.NewClient(syfthub.WithAPIToken("your-api-token")) -``` - -### Register New User - -```go -user, err := client.Auth.Register(ctx, &syfthub.RegisterRequest{ - Username: "newuser", - Email: "user@example.com", - Password: "securepassword", - FullName: "New User", -}) -``` - -## Chat (RAG Queries) - -### Complete (Non-Streaming) - -```go -chat := client.Chat() - -response, err := chat.Complete(ctx, &syfthub.ChatRequest{ - Prompt: "Explain neural networks", - Model: "owner/model-slug", - DataSources: []string{"owner1/docs", "owner2/kb"}, - TopK: 5, - MaxTokens: 1024, - Temperature: 0.7, -}) - -fmt.Println(response.Response) -fmt.Printf("Retrieval: %dms, Generation: %dms\n", - response.Metadata.RetrievalTimeMs, - response.Metadata.GenerationTimeMs) -``` - -### Streaming - -```go -events, errChan := chat.Stream(ctx, &syfthub.ChatRequest{ - Prompt: "What is Python?", - Model: "owner/model", -}) - -for event := range events { - switch e := event.(type) { - case *syfthub.TokenEvent: - fmt.Print(e.Content) - case *syfthub.RetrievalCompleteEvent: - fmt.Printf("[Retrieved %d docs]\n", e.TotalDocuments) - case *syfthub.DoneEvent: - fmt.Println("\nComplete!") - case *syfthub.ErrorEvent: - fmt.Printf("Error: %s\n", e.Message) - } -} - -if err := <-errChan; err != nil { - 
log.Fatal(err) -} -``` - -### Available Models and Data Sources - -```go -// Get available models -models, err := chat.GetAvailableModels(ctx) -for _, m := range models { - fmt.Printf("%s/%s: %s\n", m.OwnerUsername, m.Slug, m.Name) -} - -// Get available data sources -sources, err := chat.GetAvailableDataSources(ctx) -``` - -## Hub Discovery - -### Browse Public Endpoints - -```go -iter := client.Hub.Browse(ctx, syfthub.WithPageSize(20)) -for iter.Next(ctx) { - ep := iter.Value() - fmt.Printf("%s/%s: %s\n", ep.OwnerUsername, ep.Slug, ep.Name) -} -if err := iter.Err(); err != nil { - log.Fatal(err) -} -``` - -### Search Endpoints - -```go -results, err := client.Hub.Search(ctx, "machine learning", - syfthub.WithTopK(10), - syfthub.WithMinScore(0.5), -) -for _, r := range results { - fmt.Printf("[%.2f] %s\n", r.RelevanceScore, r.Name) -} -``` - -### Trending Endpoints - -```go -iter := client.Hub.Trending(ctx, syfthub.WithMinStars(10)) -for iter.Next(ctx) { - ep := iter.Value() - fmt.Printf("%s - %d stars\n", ep.Name, ep.StarsCount) -} -``` - -### Star/Unstar - -```go -err := client.Hub.Star(ctx, "owner/endpoint") -err = client.Hub.Unstar(ctx, "owner/endpoint") - -starred, err := client.Hub.IsStarred(ctx, "owner/endpoint") -``` - -## Endpoint Management - -### List My Endpoints - -```go -iter := client.MyEndpoints.List(ctx, - syfthub.WithVisibility(syfthub.VisibilityPublic), -) -for iter.Next(ctx) { - ep := iter.Value() - fmt.Println(ep.Name) -} -``` - -### Create Endpoint - -```go -endpoint, err := client.MyEndpoints.Create(ctx, &syfthub.CreateEndpointRequest{ - Name: "My API", - Type: syfthub.EndpointTypeModel, - Visibility: syfthub.VisibilityPublic, - Description: "A cool AI model", - Readme: "# My API\n\nDocumentation here.", -}) -``` - -### Update/Delete - -```go -endpoint, err := client.MyEndpoints.Update(ctx, "owner/slug", - &syfthub.UpdateEndpointRequest{ - Description: ptr("Updated description"), - }, -) - -err = client.MyEndpoints.Delete(ctx, "owner/slug") 
-``` - -## User Management - -### Update Profile - -```go -user, err := client.Users.Update(ctx, &syfthub.UpdateUserRequest{ - FullName: ptr("John Doe"), -}) -``` - -### Check Username/Email Availability - -```go -available, err := client.Users.CheckUsername(ctx, "newusername") -available, err = client.Users.CheckEmail(ctx, "new@example.com") -``` - -### Aggregator Configurations - -```go -// List aggregators -aggregators, err := client.Users.Aggregators.List(ctx) - -// Create aggregator -agg, err := client.Users.Aggregators.Create(ctx, - "My Aggregator", - "https://my-aggregator.example.com", -) - -// Set as default -agg, err = client.Users.Aggregators.SetDefault(ctx, agg.ID) - -// Delete -err = client.Users.Aggregators.Delete(ctx, agg.ID) -``` - -## API Tokens - -```go -tokens := client.APITokens() - -// Create token (SAVE THE TOKEN - only shown once!) -result, err := tokens.Create(ctx, &syfthub.CreateAPITokenRequest{ - Name: "CI/CD Pipeline", - Scopes: []syfthub.APITokenScope{syfthub.APITokenScopeWrite}, -}) -fmt.Println("Token:", result.Token) // Save this! 
- -// List tokens -response, err := tokens.List(ctx) -for _, t := range response.Tokens { - fmt.Printf("%s: %s\n", t.Name, t.TokenPrefix) -} - -// Revoke -err = tokens.Revoke(ctx, tokenID) -``` - -## Accounting (Billing) - -```go -// Get accounting resource (auto-fetches credentials from backend) -accounting, err := client.Accounting(ctx) - -// Check balance -user, err := accounting.GetUser(ctx) -fmt.Printf("Balance: %.2f credits\n", user.Balance) - -// List transactions -iter := accounting.GetTransactions(ctx) -for iter.Next(ctx) { - tx := iter.Value() - fmt.Printf("%s: %.2f (%s -> %s)\n", - tx.Status, tx.Amount, tx.SenderEmail, tx.RecipientEmail) -} - -// Create transaction -tx, err := accounting.CreateTransaction(ctx, &syfthub.CreateTransactionRequest{ - RecipientEmail: "recipient@example.com", - Amount: 10.0, -}) - -// Confirm transaction -tx, err = accounting.ConfirmTransaction(ctx, tx.ID) -``` - -## Direct SyftAI Queries - -For custom RAG pipelines, use the low-level SyftAI resource: - -```go -syftai := client.SyftAI() - -// Query data source directly -docs, err := syftai.QueryDataSource(ctx, &syfthub.QueryDataSourceRequest{ - Endpoint: syfthub.EndpointRef{URL: "http://syftai:8080", Slug: "docs"}, - Query: "What is Python?", - UserEmail: "user@example.com", - TopK: 10, -}) - -// Query model directly -response, err := syftai.QueryModel(ctx, &syfthub.QueryModelRequest{ - Endpoint: syfthub.EndpointRef{URL: "http://syftai:8080", Slug: "gpt"}, - Messages: []syfthub.Message{ - {Role: "system", Content: "You are helpful."}, - {Role: "user", Content: "Hello!"}, - }, - UserEmail: "user@example.com", -}) - -// Stream model response -chunks, errChan := syftai.QueryModelStream(ctx, &syfthub.QueryModelRequest{...}) -for chunk := range chunks { - fmt.Print(chunk) -} -``` - -## Pagination - -All list operations return a `PageIterator[T]` for lazy pagination: - -```go -iter := client.Hub.Browse(ctx) - -// Iterate through all items -for iter.Next(ctx) { - item := iter.Value() 
- // ... -} -if err := iter.Err(); err != nil { - log.Fatal(err) -} - -// Or get all items at once -all, err := iter.All(ctx) - -// Or get first N items -first5, err := iter.Take(ctx, 5) - -// Or get first page only -firstPage, err := iter.FirstPage(ctx) - -// Or use callback -err := iter.ForEach(ctx, func(item T) bool { - fmt.Println(item) - return true // continue iteration -}) -``` - -## Error Handling - -All errors implement the `SyftHubError` interface: - -```go -response, err := chat.Complete(ctx, req) -if err != nil { - var authErr *syfthub.AuthenticationError - var notFound *syfthub.NotFoundError - var epErr *syfthub.EndpointResolutionError - - switch { - case errors.As(err, &authErr): - fmt.Println("Authentication failed:", authErr.Message) - case errors.As(err, &notFound): - fmt.Println("Not found:", notFound.Message) - case errors.As(err, &epErr): - fmt.Printf("Could not resolve endpoint '%s': %s\n", - epErr.EndpointPath, epErr.Message) - default: - fmt.Println("Error:", err) - } -} -``` - -### Error Types - -| Error | Description | -|-------|-------------| -| `AuthenticationError` | Invalid credentials or expired token | -| `AuthorizationError` | Insufficient permissions | -| `NotFoundError` | Resource not found | -| `ValidationError` | Invalid request data | -| `NetworkError` | Connection failed | -| `ConfigurationError` | Missing or invalid configuration | -| `ChatError` | Chat/RAG operation failed | -| `AggregatorError` | Aggregator service error | -| `EndpointResolutionError` | Could not resolve endpoint path | -| `RetrievalError` | Document retrieval failed | -| `GenerationError` | Model generation failed | - -## Examples - -See the [examples](examples/) directory for complete working examples: - -```bash -cd examples/demo -go run . -username alice -password secret123 \ - -model "bob/gpt-model" \ - -data-sources "carol/docs" \ - -prompt "What is machine learning?" 
-``` - -## License - -Apache 2.0 diff --git a/sdk/golang/examples/file_based/endpoints/echo-model/README.md b/sdk/golang/examples/file_based/endpoints/echo-model/README.md deleted file mode 100644 index 58d59237..00000000 --- a/sdk/golang/examples/file_based/endpoints/echo-model/README.md +++ /dev/null @@ -1,35 +0,0 @@ ---- -slug: echo-model -type: model -name: Echos Model -description: Echos back the user message -enabled: true -version: "1.0.1" -runtime: - mode: subprocess - timeout: 30 ---- - -# Echo Model - -A simple model endpoint that echoes back the last user message. - -## Usage - -```bash -curl -X POST http://localhost:8001/api/v1/endpoints/echo-model/query \ - -H "Content-Type: application/json" \ - -H "Authorization: Bearer test-token" \ - -d '{"messages": [{"role": "user", "content": "Hello!"}]}' -``` - -## Response - -```json -{ - "summary": { - "role": "assistant", - "content": "Echo: Hello!" - } -} -``` diff --git a/sdk/golang/examples/file_based/endpoints/sample-model/README.md b/sdk/golang/examples/file_based/endpoints/sample-model/README.md deleted file mode 100644 index 160cf553..00000000 --- a/sdk/golang/examples/file_based/endpoints/sample-model/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -slug: sample-model -type: model -name: Sample Model -description: A sample model endpoint demonstrating file-based configuration -enabled: true -version: "1.0.0" -env: - required: [] - optional: [DEBUG] - inherit: [PATH, HOME] -runtime: - mode: subprocess - workers: 1 - timeout: 30 ---- - -# Sample Model Endpoint - -This is a sample model endpoint that demonstrates the file-based endpoint -configuration system. - -## Usage - -Send a POST request to `/api/v1/endpoints/sample-model/query` with: - -```json -{ - "messages": [ - {"role": "user", "content": "Hello, how are you?"} - ] -} -``` - -## Response - -The model will return a friendly response based on the input. 
diff --git a/sdk/golang/examples/file_based/endpoints/simple-search/README.md b/sdk/golang/examples/file_based/endpoints/simple-search/README.md deleted file mode 100644 index 78b6c726..00000000 --- a/sdk/golang/examples/file_based/endpoints/simple-search/README.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -slug: simple-search -type: data_source -name: Simple Search -description: A simple search endpoint that returns sample documents -enabled: true -version: "1.0.0" -runtime: - mode: subprocess - timeout: 30 ---- - -# Simple Search - -A simple data source endpoint that returns sample documents matching the query. - -## Usage - -```bash -curl -X POST http://localhost:8001/api/v1/endpoints/simple-search/query \ - -H "Content-Type: application/json" \ - -H "Authorization: Bearer test-token" \ - -d '{"messages": [{"role": "user", "content": "machine learning"}]}' -``` - -## Response - -```json -{ - "references": [ - { - "document_id": "doc-1", - "content": "...", - "similarity_score": 0.95 - } - ] -} -``` diff --git a/sdk/golang/syfthub/accounting.go b/sdk/golang/syfthub/accounting.go index d393e232..de6fa063 100644 --- a/sdk/golang/syfthub/accounting.go +++ b/sdk/golang/syfthub/accounting.go @@ -4,11 +4,9 @@ import ( "context" "encoding/json" "fmt" - "io" "net/http" "net/url" "strconv" - "strings" "time" ) @@ -54,143 +52,48 @@ import ( // // Confirm the transaction // tx, err = accounting.ConfirmTransaction(ctx, tx.ID) type AccountingResource struct { - url string - email string - password string - timeout time.Duration - client *http.Client + http *httpClient + timeout time.Duration } -// newAccountingResource creates a new AccountingResource. +// newAccountingResource creates a new AccountingResource backed by the unified +// httpClient with a basicAuth strategy. 
func newAccountingResource(accountingURL, email, password string, timeout time.Duration) *AccountingResource { return &AccountingResource{ - url: strings.TrimSuffix(accountingURL, "/"), - email: email, - password: password, - timeout: timeout, - client: &http.Client{ - Timeout: timeout, - }, + http: newBasicAuthClient(accountingURL, timeout, email, password), + timeout: timeout, } } -// request makes an authenticated request to the accounting service. -func (a *AccountingResource) request(ctx context.Context, method, path string, body interface{}, result interface{}) error { - var reqBody io.Reader - if body != nil { - jsonBody, err := json.Marshal(body) - if err != nil { - return err - } - reqBody = strings.NewReader(string(jsonBody)) +// do makes a request through the accounting httpClient. When applyAuth is nil, +// the client's default basic-auth strategy is used; supply a closure to override +// (e.g. for delegated transactions that authenticate with a Bearer token). +func (a *AccountingResource) do(ctx context.Context, method, path string, body, result interface{}, applyAuth func(*http.Request)) error { + var opts []RequestOption + if applyAuth != nil { + opts = append(opts, withAuthFunc(applyAuth)) } - - req, err := http.NewRequestWithContext(ctx, method, a.url+path, reqBody) + respBody, err := a.http.Request(ctx, method, path, body, opts...) 
if err != nil { return err } - - req.SetBasicAuth(a.email, a.password) - req.Header.Set("Content-Type", "application/json") - - resp, err := a.client.Do(req) - if err != nil { - return newAPIError(0, fmt.Sprintf("Accounting request failed: %v", err)) - } - defer resp.Body.Close() - - respBody, err := io.ReadAll(resp.Body) - if err != nil { - return newAPIError(resp.StatusCode, fmt.Sprintf("Failed to read response: %v", err)) - } - - if resp.StatusCode >= 400 { - return a.handleErrorResponse(resp.StatusCode, respBody) + if result != nil && len(respBody) > 0 { + return json.Unmarshal(respBody, result) } - - if result != nil && resp.StatusCode != 204 && len(respBody) > 0 { - if err := json.Unmarshal(respBody, result); err != nil { - return err - } - } - return nil } -// requestWithToken makes a request using Bearer token auth (for delegated transactions). -func (a *AccountingResource) requestWithToken(ctx context.Context, method, path, token string, body interface{}, result interface{}) error { - var reqBody io.Reader - if body != nil { - jsonBody, err := json.Marshal(body) - if err != nil { - return err - } - reqBody = strings.NewReader(string(jsonBody)) - } - - req, err := http.NewRequestWithContext(ctx, method, a.url+path, reqBody) - if err != nil { - return err - } - - req.Header.Set("Authorization", "Bearer "+token) - req.Header.Set("Content-Type", "application/json") - - resp, err := a.client.Do(req) - if err != nil { - return newAPIError(0, fmt.Sprintf("Accounting request failed: %v", err)) - } - defer resp.Body.Close() - - respBody, err := io.ReadAll(resp.Body) - if err != nil { - return newAPIError(resp.StatusCode, fmt.Sprintf("Failed to read response: %v", err)) - } - - if resp.StatusCode >= 400 { - return a.handleErrorResponse(resp.StatusCode, respBody) - } - - if result != nil && resp.StatusCode != 204 && len(respBody) > 0 { - if err := json.Unmarshal(respBody, result); err != nil { - return err - } - } - - return nil +// request makes a request with the 
default basic-auth strategy. +func (a *AccountingResource) request(ctx context.Context, method, path string, body, result interface{}) error { + return a.do(ctx, method, path, body, result, nil) } -// handleErrorResponse handles HTTP error responses from accounting service. -func (a *AccountingResource) handleErrorResponse(statusCode int, body []byte) error { - var detail string - var errorBody map[string]interface{} - if err := json.Unmarshal(body, &errorBody); err == nil { - if d, ok := errorBody["detail"].(string); ok { - detail = d - } else if m, ok := errorBody["message"].(string); ok { - detail = m - } else { - detail = string(body) - } - } else { - detail = string(body) - if detail == "" { - detail = fmt.Sprintf("HTTP %d", statusCode) - } - } - - switch statusCode { - case 401: - return newAuthenticationError(fmt.Sprintf("Authentication failed: %s", detail)) - case 403: - return newAuthorizationError(fmt.Sprintf("Permission denied: %s", detail)) - case 404: - return newNotFoundError(fmt.Sprintf("Not found: %s", detail)) - case 422: - return newValidationError(fmt.Sprintf("Validation error: %s", detail), nil) - default: - return newAPIError(statusCode, fmt.Sprintf("Accounting API error: %s", detail)) - } +// requestWithToken makes a request authenticated with a Bearer token (used for +// delegated transactions). It shares the httpClient connection pool. +func (a *AccountingResource) requestWithToken(ctx context.Context, method, path, token string, body, result interface{}) error { + return a.do(ctx, method, path, body, result, func(r *http.Request) { + r.Header.Set("Authorization", "Bearer "+token) + }) } // ========================================================================= @@ -446,7 +349,7 @@ func (a *AccountingResource) CreateDelegatedTransaction(ctx context.Context, req // Close closes the HTTP client and releases resources. 
func (a *AccountingResource) Close() { - if a.client != nil { - a.client.CloseIdleConnections() + if a.http != nil { + a.http.Close() } } diff --git a/sdk/golang/syfthub/accounting_test.go b/sdk/golang/syfthub/accounting_test.go index 50841ce5..cf861d6e 100644 --- a/sdk/golang/syfthub/accounting_test.go +++ b/sdk/golang/syfthub/accounting_test.go @@ -13,19 +13,23 @@ import ( func TestNewAccountingResource(t *testing.T) { ar := newAccountingResource("https://accounting.example.com/", "user@example.com", "password123", 30*time.Second) - if ar.url != "https://accounting.example.com" { - t.Errorf("url = %q, trailing slash should be trimmed", ar.url) + if ar.http.baseURL != "https://accounting.example.com" { + t.Errorf("baseURL = %q, trailing slash should be trimmed", ar.http.baseURL) } - if ar.email != "user@example.com" { - t.Errorf("email = %q", ar.email) + auth, ok := ar.http.auth.(*basicAuth) + if !ok { + t.Fatalf("auth strategy = %T, want *basicAuth", ar.http.auth) } - if ar.password != "password123" { - t.Errorf("password = %q", ar.password) + if auth.username != "user@example.com" { + t.Errorf("username = %q", auth.username) + } + if auth.password != "password123" { + t.Errorf("password = %q", auth.password) } if ar.timeout != 30*time.Second { t.Errorf("timeout = %v", ar.timeout) } - if ar.client == nil { + if ar.http.client == nil { t.Error("client should be initialized") } } diff --git a/sdk/golang/syfthub/chat.go b/sdk/golang/syfthub/chat.go index ed465ee7..30e1a544 100644 --- a/sdk/golang/syfthub/chat.go +++ b/sdk/golang/syfthub/chat.go @@ -83,23 +83,42 @@ type chatPrepared struct { aggregatorURL string } -// prepareRequest resolves endpoints, fetches tokens, and builds the aggregator request body. -// It is called by both Complete (stream=false) and streamInternal (stream=true) to eliminate -// the ~73 lines of setup code that would otherwise be duplicated. 
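The defaults handling in this chat.go hunk moves from mutating the caller's request in place to resolving zero-valued fields into a local struct. A self-contained sketch of the non-mutating pattern, trimmed to two fields (the full diff also covers temperature and similarity threshold):

```go
package main

import "fmt"

type ChatCompleteRequest struct {
	TopK      int
	MaxTokens int
}

type resolvedDefaults struct {
	topK      int
	maxTokens int
}

// resolveDefaults copies the request's fields and fills in defaults for
// zero values — the caller's struct is never written to.
func resolveDefaults(req *ChatCompleteRequest) resolvedDefaults {
	r := resolvedDefaults{topK: req.TopK, maxTokens: req.MaxTokens}
	if r.topK == 0 {
		r.topK = 5
	}
	if r.maxTokens == 0 {
		r.maxTokens = 1024
	}
	return r
}

func main() {
	req := &ChatCompleteRequest{TopK: 10}
	d := resolveDefaults(req)
	fmt.Println(d.topK, d.maxTokens) // 10 1024
	fmt.Println(req.MaxTokens)       // 0 — caller's struct untouched
}
```

The payoff is that calling the API twice with the same request is idempotent: a zero `MaxTokens` stays zero on the caller's side instead of being silently rewritten to 1024 after the first call.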
-func (c *ChatResource) prepareRequest(ctx context.Context, req *ChatCompleteRequest, stream bool) (*chatPrepared, error) { - // Apply defaults - if req.TopK == 0 { - req.TopK = 5 +// resolvedDefaults holds zero-value defaults resolved as locals so prepareRequest +// never mutates the caller's *ChatCompleteRequest. +type resolvedDefaults struct { + topK int + maxTokens int + temperature float64 + similarityThreshold float64 +} + +func resolveDefaults(req *ChatCompleteRequest) resolvedDefaults { + r := resolvedDefaults{ + topK: req.TopK, + maxTokens: req.MaxTokens, + temperature: req.Temperature, + similarityThreshold: req.SimilarityThreshold, } - if req.MaxTokens == 0 { - req.MaxTokens = 1024 + if r.topK == 0 { + r.topK = 5 } - if req.Temperature == 0 { - req.Temperature = 0.7 + if r.maxTokens == 0 { + r.maxTokens = 1024 } - if req.SimilarityThreshold == 0 { - req.SimilarityThreshold = 0.5 + if r.temperature == 0 { + r.temperature = 0.7 } + if r.similarityThreshold == 0 { + r.similarityThreshold = 0.5 + } + return r +} + +// prepareRequest resolves endpoints, fetches tokens, and builds the aggregator request body. +// It is called by both Complete (stream=false) and streamInternal (stream=true) to eliminate +// the ~73 lines of setup code that would otherwise be duplicated. 
+func (c *ChatResource) prepareRequest(ctx context.Context, req *ChatCompleteRequest, stream bool) (*chatPrepared, error) { + defaults := resolveDefaults(req) // Resolve aggregator URL aggregatorURL := c.aggregatorURL @@ -147,7 +166,10 @@ func (c *ChatResource) prepareRequest(ctx context.Context, req *ChatCompleteRequ } // Auto-fetch peer token if tunneling endpoints detected - var peerToken, peerChannel string + tokens := chatTokens{ + endpoint: endpointTokens, + transaction: transactionTokens.Tokens, + } tunnelingUsernames := c.collectTunnelingUsernames(modelRef, dsRefs) if len(tunnelingUsernames) > 0 { var peerResponse *PeerTokenResponse @@ -157,26 +179,12 @@ func (c *ChatResource) prepareRequest(ctx context.Context, req *ChatCompleteRequ peerResponse, err = c.auth.GetPeerToken(ctx, tunnelingUsernames) } if err == nil { - peerToken = peerResponse.PeerToken - peerChannel = peerResponse.PeerChannel + tokens.peerToken = peerResponse.PeerToken + tokens.peerChannel = peerResponse.PeerChannel } } - requestBody := c.buildRequestBody( - req.Prompt, - modelRef, - dsRefs, - endpointTokens, - transactionTokens.Tokens, - req.TopK, - req.MaxTokens, - req.Temperature, - req.SimilarityThreshold, - stream, - req.Messages, - peerToken, - peerChannel, - ) + requestBody := c.buildRequestBody(req, modelRef, dsRefs, defaults, tokens, stream) return &chatPrepared{requestBody: requestBody, aggregatorURL: aggregatorURL}, nil } @@ -517,24 +525,26 @@ func (c *ChatResource) collectTunnelingUsernames(modelRef *EndpointRef, dsRefs [ return usernames } -// buildRequestBody builds the request body for the aggregator. +// chatTokens bundles the token maps + peer info passed to buildRequestBody. +type chatTokens struct { + endpoint map[string]string + transaction map[string]string + peerToken string + peerChannel string +} + +// buildRequestBody serializes the aggregator request body. 
It is a pure serializer — +// defaults are resolved upstream in resolveDefaults, and the caller's *req is never mutated. func (c *ChatResource) buildRequestBody( - prompt string, + req *ChatCompleteRequest, modelRef *EndpointRef, dsRefs []EndpointRef, - endpointTokens map[string]string, - transactionTokens map[string]string, - topK int, - maxTokens int, - temperature float64, - similarityThreshold float64, + defaults resolvedDefaults, + tokens chatTokens, stream bool, - messages []Message, - peerToken string, - peerChannel string, ) map[string]interface{} { body := map[string]interface{}{ - "prompt": prompt, + "prompt": req.Prompt, "model": map[string]interface{}{ "url": modelRef.URL, "slug": modelRef.Slug, @@ -543,12 +553,12 @@ func (c *ChatResource) buildRequestBody( "owner_username": modelRef.OwnerUsername, }, "data_sources": make([]map[string]interface{}, 0, len(dsRefs)), - "endpoint_tokens": endpointTokens, - "transaction_tokens": transactionTokens, - "top_k": topK, - "max_tokens": maxTokens, - "temperature": temperature, - "similarity_threshold": similarityThreshold, + "endpoint_tokens": tokens.endpoint, + "transaction_tokens": tokens.transaction, + "top_k": defaults.topK, + "max_tokens": defaults.maxTokens, + "temperature": defaults.temperature, + "similarity_threshold": defaults.similarityThreshold, "stream": stream, } @@ -562,15 +572,15 @@ func (c *ChatResource) buildRequestBody( }) } - if len(messages) > 0 { - body["messages"] = messages + if len(req.Messages) > 0 { + body["messages"] = req.Messages } - if peerToken != "" { - body["peer_token"] = peerToken + if tokens.peerToken != "" { + body["peer_token"] = tokens.peerToken } - if peerChannel != "" { - body["peer_channel"] = peerChannel + if tokens.peerChannel != "" { + body["peer_channel"] = tokens.peerChannel } return body diff --git a/sdk/golang/syfthub/chat_test.go b/sdk/golang/syfthub/chat_test.go index 4c32462e..cfaba15f 100644 --- a/sdk/golang/syfthub/chat_test.go +++ 
b/sdk/golang/syfthub/chat_test.go @@ -707,21 +707,23 @@ func TestBuildRequestBody(t *testing.T) { transactionTokens := map[string]string{"bob": "tx_bob"} messages := []Message{{Role: "user", Content: "Hello"}} - body := chat.buildRequestBody( - "Test prompt", - modelRef, - dsRefs, - endpointTokens, - transactionTokens, - 10, - 2048, - 0.8, - 0.7, - true, - messages, - "peer_token", - "peer_channel", - ) + req := &ChatCompleteRequest{ + Prompt: "Test prompt", + Messages: messages, + } + defaults := resolvedDefaults{ + topK: 10, + maxTokens: 2048, + temperature: 0.8, + similarityThreshold: 0.7, + } + tokens := chatTokens{ + endpoint: endpointTokens, + transaction: transactionTokens, + peerToken: "peer_token", + peerChannel: "peer_channel", + } + body := chat.buildRequestBody(req, modelRef, dsRefs, defaults, tokens, true) if body["prompt"] != "Test prompt" { t.Errorf("prompt = %v", body["prompt"]) diff --git a/sdk/golang/syfthub/client.go b/sdk/golang/syfthub/client.go index da0d5be9..a78111f5 100644 --- a/sdk/golang/syfthub/client.go +++ b/sdk/golang/syfthub/client.go @@ -57,10 +57,16 @@ const ( // // ... save tokens to file/db ... // // Later: // client.SetTokens(tokens) +// +// Client is the main SyftHub client. Options are applied as pure data +// (they only set fields like apiToken); anything that depends on the HTTP +// client is applied once in NewClient after c.http is constructed. Any +// future option that needs c.http should follow the same pattern. type Client struct { baseURL string aggregatorURL string timeout time.Duration + apiToken string http *httpClient // Eagerly-initialized resources @@ -109,9 +115,7 @@ func WithAggregatorURL(url string) Option { // When provided, the client will be authenticated immediately without needing to call Login(). 
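The options-as-pure-data pattern described in the Client doc comment above can be seen in miniature. This sketch is an assumption-laden simplification: the environment lookup is passed as a parameter instead of calling `os.Getenv`, and the "installed" token is just a field, standing in for `c.http.SetAPIToken`:

```go
package main

import "fmt"

type Client struct {
	apiToken string // set by options as plain data
	applied  string // stands in for what gets installed on the HTTP client
}

type Option func(*Client) error

// WithAPIToken only records the token; it never touches the HTTP layer,
// so it is safe to apply before the HTTP client exists.
func WithAPIToken(tok string) Option {
	return func(c *Client) error { c.apiToken = tok; return nil }
}

// NewClient applies options first, then resolves precedence exactly once:
// an explicit option beats the environment.
func NewClient(envToken string, opts ...Option) (*Client, error) {
	c := &Client{}
	for _, o := range opts {
		if err := o(c); err != nil {
			return nil, err
		}
	}
	switch {
	case c.apiToken != "":
		c.applied = c.apiToken
	case envToken != "":
		c.applied = envToken
	}
	return c, nil
}

func main() {
	c1, _ := NewClient("env-token", WithAPIToken("explicit"))
	c2, _ := NewClient("env-token")
	fmt.Println(c1.applied, c2.applied) // explicit env-token
}
```

This removes the diff's earlier double-application of options (the deleted second `for _, opt := range opts` loop) because no option depends on construction order anymore.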
func WithAPIToken(token string) Option { return func(c *Client) error { - if c.http != nil { - c.http.SetAPIToken(token) - } + c.apiToken = token return nil } } @@ -153,18 +157,13 @@ func NewClient(opts ...Option) (*Client, error) { // Create HTTP client c.http = newHTTPClient(c.baseURL, c.timeout) - // Check for API token from environment - if envToken := os.Getenv(EnvAPIToken); envToken != "" { + // Apply API token: explicit option takes precedence over environment. + if c.apiToken != "" { + c.http.SetAPIToken(c.apiToken) + } else if envToken := os.Getenv(EnvAPIToken); envToken != "" { c.http.SetAPIToken(envToken) } - // Re-apply WithAPIToken option if it was passed (after http client is created) - for _, opt := range opts { - if err := opt(c); err != nil { - return nil, err - } - } - // Create eagerly-initialized resources c.Auth = newAuthResource(c.http) c.Users = newUsersResource(c.http) diff --git a/sdk/golang/syfthub/http.go b/sdk/golang/syfthub/http.go index 3ab6ec21..d935ba75 100644 --- a/sdk/golang/syfthub/http.go +++ b/sdk/golang/syfthub/http.go @@ -18,37 +18,89 @@ type HTTPDoer interface { Do(req *http.Request) (*http.Response, error) } +// authStrategy applies authentication credentials to an outgoing request. +// Implementations: bearerAuth (Hub JWT/API tokens, refreshable), basicAuth +// (accounting service). +type authStrategy interface { + apply(req *http.Request) + // canRefresh reports whether a 401 response should trigger a token refresh + // and retry. Only bearer auth supports refresh today. + canRefresh() bool +} + +// bearerAuth sends "Authorization: Bearer " where the token is resolved +// lazily via tokenProvider so tokens can rotate without reconstructing the +// strategy. 
+type bearerAuth struct { + tokenProvider func() string +} + +func (b *bearerAuth) apply(req *http.Request) { + if t := b.tokenProvider(); t != "" { + req.Header.Set("Authorization", "Bearer "+t) + } +} +func (b *bearerAuth) canRefresh() bool { return true } + +// basicAuth sends HTTP Basic auth. Used by the accounting service. +type basicAuth struct { + username string + password string +} + +func (b *basicAuth) apply(req *http.Request) { req.SetBasicAuth(b.username, b.password) } +func (b *basicAuth) canRefresh() bool { return false } + // httpClient is the internal HTTP client with automatic token management. type httpClient struct { baseURL string timeout time.Duration client HTTPDoer + auth authStrategy - // Token storage (protected by mutex) + // Token storage for bearerAuth (protected by mutex). mu sync.RWMutex accessToken string refreshToken string apiToken string } -// newHTTPClient creates a new HTTP client. +// newHTTPClient creates a new HTTP client with bearer-token auth. func newHTTPClient(baseURL string, timeout time.Duration) *httpClient { - return &httpClient{ + h := &httpClient{ baseURL: strings.TrimRight(baseURL, "/"), timeout: timeout, client: &http.Client{ Timeout: timeout, }, } + h.auth = &bearerAuth{tokenProvider: h.getBearerToken} + return h +} + +// newBasicAuthClient creates an HTTP client that authenticates with HTTP Basic auth. +// Used by the accounting service which does not participate in JWT refresh. +func newBasicAuthClient(baseURL string, timeout time.Duration, username, password string) *httpClient { + h := &httpClient{ + baseURL: strings.TrimRight(baseURL, "/"), + timeout: timeout, + client: &http.Client{ + Timeout: timeout, + }, + } + h.auth = &basicAuth{username: username, password: password} + return h } // newHTTPClientWithDoer creates a new HTTP client with a custom HTTPDoer (for testing). 
func newHTTPClientWithDoer(baseURL string, timeout time.Duration, doer HTTPDoer) *httpClient { - return &httpClient{ + h := &httpClient{ baseURL: strings.TrimRight(baseURL, "/"), timeout: timeout, client: doer, } + h.auth = &bearerAuth{tokenProvider: h.getBearerToken} + return h } // Close closes the HTTP client. @@ -128,6 +180,7 @@ type requestOptions struct { retryOn401 bool formData url.Values query url.Values + authFunc func(*http.Request) // overrides client's auth strategy when non-nil } // RequestOption is a function that modifies request options. @@ -161,6 +214,13 @@ func WithQuery(params url.Values) RequestOption { } } +// withAuthFunc overrides the client's default auth strategy for a single call. +func withAuthFunc(fn func(*http.Request)) RequestOption { + return func(o *requestOptions) { + o.authFunc = fn + } +} + // Request makes an HTTP request and returns the response body. func (h *httpClient) Request(ctx context.Context, method, path string, body interface{}, opts ...RequestOption) ([]byte, error) { // Apply default options @@ -206,10 +266,10 @@ func (h *httpClient) Request(ctx context.Context, method, path string, body inte } req.Header.Set("Accept", "application/json") - if options.includeAuth { - if token := h.getBearerToken(); token != "" { - req.Header.Set("Authorization", "Bearer "+token) - } + if options.authFunc != nil { + options.authFunc(req) + } else if options.includeAuth { + h.auth.apply(req) } // Make request @@ -225,8 +285,8 @@ func (h *httpClient) Request(ctx context.Context, method, path string, body inte return nil, newNetworkError(fmt.Errorf("failed to read response body: %w", err)) } - // Handle 401 with token refresh - if resp.StatusCode == 401 && options.retryOn401 && options.includeAuth && h.attemptRefresh(ctx) { + // Handle 401 with token refresh (bearer auth only). 
+ if resp.StatusCode == 401 && options.retryOn401 && options.includeAuth && h.auth.canRefresh() && h.attemptRefresh(ctx) { // Retry with new token return h.Request(ctx, method, path, body, append(opts, WithNoRetry())...) } @@ -476,9 +536,7 @@ func (h *httpClient) StreamRequest(ctx context.Context, method, path string, bod req.Header.Set("Accept", "text/event-stream") if options.includeAuth { - if token := h.getBearerToken(); token != "" { - req.Header.Set("Authorization", "Bearer "+token) - } + h.auth.apply(req) } // Make request @@ -496,117 +554,3 @@ func (h *httpClient) StreamRequest(ctx context.Context, method, path string, bod return resp, nil } - -// basicAuthHTTPClient wraps httpClient with Basic authentication for accounting service. -type basicAuthHTTPClient struct { - *httpClient - username string - password string -} - -// newBasicAuthHTTPClient creates a new HTTP client with Basic authentication. -func newBasicAuthHTTPClient(baseURL string, timeout time.Duration, username, password string) *basicAuthHTTPClient { - return &basicAuthHTTPClient{ - httpClient: newHTTPClient(baseURL, timeout), - username: username, - password: password, - } -} - -// Request makes an HTTP request with Basic authentication. -func (h *basicAuthHTTPClient) Request(ctx context.Context, method, path string, body interface{}, opts ...RequestOption) ([]byte, error) { - // Apply default options - options := &requestOptions{ - includeAuth: true, - retryOn401: false, // No token refresh for Basic auth - } - for _, opt := range opts { - opt(options) - } - - // Build URL - reqURL := h.baseURL + path - if options.query != nil { - reqURL += "?" 
+ options.query.Encode() - } - - // Build request body - var bodyReader io.Reader - if body != nil { - bodyBytes, err := json.Marshal(body) - if err != nil { - return nil, fmt.Errorf("failed to marshal request body: %w", err) - } - bodyReader = bytes.NewReader(bodyBytes) - } - - // Create request - req, err := http.NewRequestWithContext(ctx, method, reqURL, bodyReader) - if err != nil { - return nil, newNetworkError(fmt.Errorf("failed to create request: %w", err)) - } - - // Set headers - req.Header.Set("Content-Type", "application/json") - req.Header.Set("Accept", "application/json") - - if options.includeAuth { - req.SetBasicAuth(h.username, h.password) - } - - // Make request - resp, err := h.client.Do(req) - if err != nil { - return nil, newNetworkError(err) - } - defer resp.Body.Close() - - // Read response body - respBody, err := io.ReadAll(resp.Body) - if err != nil { - return nil, newNetworkError(fmt.Errorf("failed to read response body: %w", err)) - } - - // Handle errors - if resp.StatusCode >= 400 { - return nil, h.handleError(resp.StatusCode, respBody) - } - - return respBody, nil -} - -// Get makes a GET request with Basic authentication. -func (h *basicAuthHTTPClient) Get(ctx context.Context, path string, result interface{}, opts ...RequestOption) error { - body, err := h.Request(ctx, "GET", path, nil, opts...) - if err != nil { - return err - } - if result != nil { - return json.Unmarshal(body, result) - } - return nil -} - -// Post makes a POST request with Basic authentication. -func (h *basicAuthHTTPClient) Post(ctx context.Context, path string, body, result interface{}, opts ...RequestOption) error { - respBody, err := h.Request(ctx, "POST", path, body, opts...) - if err != nil { - return err - } - if result != nil { - return json.Unmarshal(respBody, result) - } - return nil -} - -// Patch makes a PATCH request with Basic authentication. 
-func (h *basicAuthHTTPClient) Patch(ctx context.Context, path string, body, result interface{}, opts ...RequestOption) error { - respBody, err := h.Request(ctx, "PATCH", path, body, opts...) - if err != nil { - return err - } - if result != nil { - return json.Unmarshal(respBody, result) - } - return nil -} diff --git a/sdk/golang/syfthub/http_test.go b/sdk/golang/syfthub/http_test.go index 67ed7b08..8c97e142 100644 --- a/sdk/golang/syfthub/http_test.go +++ b/sdk/golang/syfthub/http_test.go @@ -638,7 +638,7 @@ func TestBasicAuthHTTPClient(t *testing.T) { })) defer server.Close() - client := newBasicAuthHTTPClient(server.URL, 30*time.Second, "user@example.com", "secret123") + client := newBasicAuthClient(server.URL, 30*time.Second, "user@example.com", "secret123") _, err := client.Request(context.Background(), "GET", "/api/test", nil) if err != nil { t.Fatalf("Request error: %v", err) @@ -655,7 +655,7 @@ func TestBasicAuthHTTPClient(t *testing.T) { })) defer server.Close() - client := newBasicAuthHTTPClient(server.URL, 30*time.Second, "user", "pass") + client := newBasicAuthClient(server.URL, 30*time.Second, "user", "pass") var result map[string]int err := client.Get(context.Background(), "/user", &result) if err != nil { @@ -672,7 +672,7 @@ func TestBasicAuthHTTPClient(t *testing.T) { })) defer server.Close() - client := newBasicAuthHTTPClient(server.URL, 30*time.Second, "user", "pass") + client := newBasicAuthClient(server.URL, 30*time.Second, "user", "pass") var result map[string]string err := client.Post(context.Background(), "/transactions", map[string]interface{}{"amount": 10.0}, &result) if err != nil { @@ -692,7 +692,7 @@ func TestBasicAuthHTTPClient(t *testing.T) { })) defer server.Close() - client := newBasicAuthHTTPClient(server.URL, 30*time.Second, "user", "pass") + client := newBasicAuthClient(server.URL, 30*time.Second, "user", "pass") var result map[string]string err := client.Patch(context.Background(), "/user", map[string]string{"name": "new 
name"}, &result) if err != nil { diff --git a/sdk/golang/syfthubapi/README.md b/sdk/golang/syfthubapi/README.md deleted file mode 100644 index ba8d5194..00000000 --- a/sdk/golang/syfthubapi/README.md +++ /dev/null @@ -1,342 +0,0 @@ -# SyftHub API - Go SDK - -A Go framework for building SyftHub Spaces with a FastAPI-like interface. This is a 1:1 feature-complete port of the Python `syfthub-api` package. - -## Features - -- **Declarative endpoint registration** via builder pattern -- **Two execution modes**: HTTP direct and NATS tunneling -- **File-based endpoint configuration** with hot-reload -- **Policy enforcement framework** (pre/post execution hooks) -- **Heartbeat mechanism** for availability signaling -- **JWT token verification** via SyftHub backend -- **Middleware support** for request/response processing -- **Python subprocess execution** for file-based endpoints - -## Installation - -```bash -go get github.com/openmined/syfthub/sdk/golang/syfthubapi -``` - -## Quick Start - -### Basic HTTP Server - -```go -package main - -import ( - "context" - "log" - - "github.com/openmined/syfthub/sdk/golang/syfthubapi" -) - -func main() { - app := syfthubapi.New() - - // Register a data source endpoint - app.DataSource("papers"). - Name("Research Papers"). - Description("Search through research papers"). - Handler(func(ctx context.Context, query string, reqCtx *syfthubapi.RequestContext) ([]syfthubapi.Document, error) { - return []syfthubapi.Document{ - {DocumentID: "1", Content: "...", SimilarityScore: 0.95}, - }, nil - }) - - // Register a model endpoint - app.Model("chat"). - Name("Chat Assistant"). - Description("An AI chat assistant"). - Handler(func(ctx context.Context, messages []syfthubapi.Message, reqCtx *syfthubapi.RequestContext) (string, error) { - return "Hello! 
How can I help?", nil - }) - - // Run the server - if err := app.Run(context.Background()); err != nil { - log.Fatal(err) - } -} -``` - -### Configuration - -Configuration is loaded from environment variables: - -| Variable | Description | Required | -|----------|-------------|----------| -| `SYFTHUB_URL` | SyftHub backend URL | Yes | -| `SYFTHUB_API_KEY` | API token (PAT) for authentication | Yes | -| `SPACE_URL` | Public URL or `tunneling:username` | Yes | -| `LOG_LEVEL` | Logging level (DEBUG, INFO, WARNING, ERROR) | No | -| `SERVER_HOST` | HTTP server bind address | No | -| `SERVER_PORT` | HTTP server port (default: 8000) | No | -| `HEARTBEAT_ENABLED` | Enable heartbeat (default: true) | No | -| `HEARTBEAT_TTL_SECONDS` | Heartbeat TTL (default: 300) | No | -| `ENDPOINTS_PATH` | Path to file-based endpoints | No | -| `WATCH_ENABLED` | Enable hot-reload (default: true) | No | - -Or use functional options: - -```go -app := syfthubapi.New( - syfthubapi.WithSyftHubURL("https://syfthub.example.com"), - syfthubapi.WithAPIKey("syft_pat_xxx"), - syfthubapi.WithSpaceURL("http://localhost:8001"), - syfthubapi.WithLogLevel("DEBUG"), - syfthubapi.WithServerPort(8001), - syfthubapi.WithHeartbeatEnabled(true), - syfthubapi.WithEndpointsPath("./endpoints"), -) -``` - -## Endpoint Types - -### Data Source - -Data sources return documents based on a search query: - -```go -app.DataSource("slug"). - Name("Display Name"). - Description("Brief description"). - Version("1.0.0"). - Handler(func(ctx context.Context, query string, reqCtx *syfthubapi.RequestContext) ([]syfthubapi.Document, error) { - // Return relevant documents - return []syfthubapi.Document{...}, nil - }) -``` - -### Model - -Models process messages and return a response: - -```go -app.Model("slug"). - Name("Display Name"). - Description("Brief description"). - Version("1.0.0"). 
- Handler(func(ctx context.Context, messages []syfthubapi.Message, reqCtx *syfthubapi.RequestContext) (string, error) { - // Process messages and return response - return "response", nil - }) -``` - -## Execution Modes - -### HTTP Mode (Default) - -Set `SPACE_URL` to an HTTP URL: - -```bash -export SPACE_URL=http://localhost:8001 -``` - -The server listens directly on the specified host and port. - -### Tunnel Mode - -Set `SPACE_URL` to use NATS tunneling: - -```bash -export SPACE_URL=tunneling:my-username -``` - -The server connects to NATS and receives requests via pub/sub. - -## File-Based Endpoints - -Endpoints can be defined via directory structure: - -``` -endpoints/ -├── my-model/ -│ ├── README.md # YAML frontmatter + docs -│ ├── runner.py # Python handler -│ ├── .env # Environment variables -│ ├── pyproject.toml # Dependencies -│ └── policy/ -│ └── rate_limit.yaml -``` - -### README.md Frontmatter - -```yaml ---- -slug: my-model -type: model # or "data_source" -name: My Model -description: Description here -enabled: true -version: "1.0.0" -env: - required: [API_KEY] - optional: [DEBUG] - inherit: [PATH, HOME] -runtime: - mode: subprocess - workers: 1 - timeout: 30 ---- - -# Documentation -``` - -### runner.py Handler - -```python -def handler(messages: list[dict], context: dict = None) -> str: - """Handle model requests.""" - return "response" - -# For data sources: -def handler(query: str, context: dict = None) -> list[dict]: - """Handle data source requests.""" - return [{"document_id": "1", "content": "...", "similarity_score": 0.9}] -``` - -## Policy Framework - -### Built-in Policies - -```go -import "github.com/openmined/syfthub/sdk/golang/syfthubapi/policy" - -// Rate limiting -rateLimit := policy.NewRateLimitPolicy("rate-limit", 100, 3600) // 100 requests per hour - -// Access control -accessPolicy := policy.NewAccessGroupPolicy("access", - []string{"alice", "bob"}, // allowed users - []string{"admin"}, // allowed roles - nil, // denied users - nil, 
// denied roles -) - -// Time window -timeWindow := policy.NewTimeWindowPolicy("business-hours", 9, 17, nil, nil) - -// Add to endpoint -app.Model("premium"). - Policies(rateLimit, accessPolicy). - Handler(...) -``` - -### YAML Policy Configuration - -```yaml -# policy/rate_limit.yaml -type: rate_limit -name: rate-limit -args: - max_requests: 100 - window_seconds: 3600 -``` - -### Composite Policies - -```go -// All policies must pass -allOf := policy.NewAllOfPolicy("all", policy1, policy2) - -// At least one must pass -anyOf := policy.NewAnyOfPolicy("any", policy1, policy2) - -// Negate a policy -not := policy.NewNotPolicy("not-admin", adminPolicy) -``` - -## Middleware - -```go -// Built-in middleware -app.Use(syfthubapi.LoggingMiddleware(logger)) -app.Use(syfthubapi.RecoveryMiddleware(logger)) -app.Use(syfthubapi.TimeoutMiddleware(30 * time.Second)) - -// Custom middleware -app.Use(func(next syfthubapi.RequestHandler) syfthubapi.RequestHandler { - return func(ctx context.Context, req *syfthubapi.TunnelRequest) (*syfthubapi.TunnelResponse, error) { - // Pre-processing - resp, err := next(ctx, req) - // Post-processing - return resp, err - } -}) -``` - -## Lifecycle Hooks - -```go -app.OnStartup(func(ctx context.Context) error { - // Initialize database connections, etc. 
- return nil -}) - -app.OnShutdown(func(ctx context.Context) error { - // Clean up resources - return nil -}) -``` - -## Error Handling - -All errors implement `error` and can be checked with `errors.Is()`: - -```go -import "errors" - -if errors.Is(err, syfthubapi.ErrPolicyDenied) { - // Handle policy denial -} - -if errors.Is(err, syfthubapi.ErrAuthentication) { - // Handle auth error -} -``` - -## Package Structure - -``` -syfthubapi/ -├── api.go # Main SyftAPI struct -├── config.go # Configuration management -├── endpoint.go # Endpoint types and builder -├── schemas.go # Request/Response types -├── errors.go # Error types -├── middleware.go # Middleware chain -├── auth.go # Authentication -├── transport/ -│ ├── transport.go # Transport interface -│ ├── http.go # HTTP transport -│ └── nats.go # NATS transport -├── heartbeat/ -│ └── heartbeat.go # Heartbeat manager -├── policy/ -│ ├── policy.go # Policy interface -│ ├── loader.go # YAML loading -│ └── builtin.go # Built-in policies -└── filemode/ - ├── provider.go # File provider - ├── loader.go # README parsing - ├── watcher.go # File watching - ├── executor.go # Subprocess execution - └── venv.go # Virtual env management -``` - -## Comparison with Python SDK - -| Feature | Python | Go | -|---------|--------|-----| -| Endpoint registration | `@app.datasource()` decorator | `app.DataSource().Handler()` builder | -| Async handlers | `async def` | Goroutines + context | -| Error handling | Exceptions | Error returns | -| Configuration | Pydantic Settings | Functional options | -| Hot-reload | watchdog | fsnotify | -| Subprocess execution | loky | os/exec | - -## License - -Apache 2.0 diff --git a/sdk/golang/syfthubapi/REFACTORING_PLAN.md b/sdk/golang/syfthubapi/REFACTORING_PLAN.md deleted file mode 100644 index 24786b69..00000000 --- a/sdk/golang/syfthubapi/REFACTORING_PLAN.md +++ /dev/null @@ -1,694 +0,0 @@ -# SyftHub Go SDK Refactoring Plan - -## Overview - -This plan addresses all identified architectural issues 
and implements P0, P1, and P2 recommendations from the SDK evaluation. - -**Estimated Total Effort**: 8-10 hours -**Risk Level**: Medium (significant refactoring with proper safety measures) - ---- - -## Phase 0: Preparation (Before Any Changes) - -### 0.1 Create Test Foundation -Before refactoring, create basic tests to ensure behavior preservation. - -**Test Files to Create**: -| File | Coverage | -|------|----------| -| `config_test.go` | LoadFromEnv, Validate, IsTunnelMode, DeriveNATSWebSocketURL | -| `endpoint_test.go` | Builders, registry, invocation | -| `api_test.go` | Request handling, policy execution | -| `auth_test.go` | Auth and sync clients with mock HTTP | -| `middleware_test.go` | Middleware chain behavior | - -### 0.2 Safety Rules -- Each commit leaves code in working state -- Use Strangler Fig pattern: add new code alongside old, switch, remove old -- Run tests after each step -- Use `go test -race` to detect race conditions - ---- - -## Phase 1: P0 Critical Security Fixes - -### Step 1: Fix Token Verification (CRITICAL) - -**Problem**: `api.go:565-580` returns hardcoded user for ANY non-empty token, bypassing all authentication. - -**Files Changed**: `api.go` - -**Changes**: - -1. Add `authClient` field to `SyftAPI` struct: -```go -type SyftAPI struct { - // ... existing fields - authClient *AuthClient // NEW -} -``` - -2. Initialize in `New()`: -```go -func New(opts ...Option) *SyftAPI { - // ... existing setup - slogLogger := NewSlogLogger(logger) - authClient := NewAuthClient(config.SyftHubURL, config.APIKey, slogLogger) - - return &SyftAPI{ - // ... existing fields - authClient: authClient, - } -} -``` - -3. 
Replace `verifyToken` implementation: -```go -func (api *SyftAPI) verifyToken(ctx context.Context, token string) (*UserContext, error) { - if api.authClient == nil { - return nil, &AuthenticationError{Message: "auth client not initialized"} - } - return api.authClient.VerifyToken(ctx, token) -} -``` - -**Risk**: Medium - Changes authentication behavior -**Test**: Verify with real backend or mock AuthClient - ---- - -### Step 2: Fix Race Condition in Policy Execution - -**Problem**: `runPreExecutePolicies` and `runPostExecutePolicies` read `globalPolicies` without holding a lock while `AddPolicy` writes with a lock. - -**Files Changed**: `api.go` - -**Changes**: - -1. Update `runPreExecutePolicies` (line 583): -```go -func (api *SyftAPI) runPreExecutePolicies(ctx context.Context, reqCtx *RequestContext, endpoint *Endpoint) error { - // Copy reference under read lock - api.mu.RLock() - policies := api.globalPolicies - api.mu.RUnlock() - - // Run global policies first - for _, p := range policies { - if err := p.PreExecute(ctx, reqCtx); err != nil { - return &PolicyDeniedError{ - Policy: p.Name(), - Reason: err.Error(), - User: reqCtx.User.Username, - Endpoint: endpoint.Slug, - } - } - } - - // Endpoint policies don't need lock (immutable after registration) - for _, p := range endpoint.policies { - // ... existing logic - } - - return nil -} -``` - -2. Update `runPostExecutePolicies` (line 612) similarly. - -**Risk**: Low - Adds safety without changing behavior -**Test**: Run with `go test -race ./...` - ---- - -## Phase 2: P1 Correctness Fixes - -### Step 3: Replace panic() with Error Return - -**Problem**: `DeriveNATSWebSocketURL` panics on invalid input instead of returning error. - -**Files Changed**: `config.go`, `auth.go` - -**Changes in config.go**: - -```go -// DeriveNATSWebSocketURL derives the NATS WebSocket URL from a SyftHub URL. -// Returns error if URL scheme is not http:// or https://. 
-func DeriveNATSWebSocketURL(syfthubURL string) (string, error) { - if strings.HasPrefix(syfthubURL, "https://") { - host := strings.TrimRight(syfthubURL[len("https://"):], "/") - if !strings.Contains(host, ":") { - host += ":443" - } - return "wss://" + host, nil - } - if strings.HasPrefix(syfthubURL, "http://") { - host := strings.TrimRight(syfthubURL[len("http://"):], "/") - if !strings.Contains(host, ":") { - host += ":80" - } - return "ws://" + host, nil - } - return "", fmt.Errorf("cannot derive NATS URL from %q: must start with http:// or https://", syfthubURL) -} -``` - -**Changes in auth.go** (line 178): -```go -func (c *AuthClient) GetNATSCredentials(ctx context.Context, username string) (*NATSCredentials, error) { - // ... existing code - - natsURL, err := DeriveNATSWebSocketURL(c.baseURL) - if err != nil { - return nil, &AuthenticationError{ - Message: "failed to derive NATS URL", - Cause: err, - } - } - - // ... rest of function -} -``` - -**Risk**: Low - API change but callers updated -**Test**: Unit test with invalid URLs - ---- - -### Step 4: Handle LoadFromEnv Error - -**Problem**: Error from `config.LoadFromEnv()` is silently ignored. - -**Files Changed**: `api.go` - -**Changes** (line 86): -```go -func New(opts ...Option) *SyftAPI { - config := DefaultConfig() - - // Log warning but don't fail - env vars are optional - if err := config.LoadFromEnv(); err != nil { - slog.Warn("failed to load config from environment", "error", err) - } - - // ... 
rest of function -} -``` - -**Risk**: Very low -**Test**: Set invalid env var, check warning logged - ---- - -### Step 5: Add Comprehensive Unit Tests - -**New Files**: -- `config_test.go` -- `endpoint_test.go` -- `api_test.go` -- `auth_test.go` -- `middleware_test.go` - -Each test file should cover: -- Happy path -- Error cases -- Edge cases -- Concurrency (where applicable) - ---- - -## Phase 3: P2 Design Improvements - -### Step 6: Create PolicyExecutor (Extract Class) - -**Problem**: SyftAPI is a God Object with too many responsibilities. - -**New File**: `policy_executor.go` - -```go -package syfthubapi - -import ( - "context" - "log/slog" - "sync" -) - -// PolicyExecutor manages policy evaluation for requests. -type PolicyExecutor struct { - globalPolicies []Policy - mu sync.RWMutex - logger *slog.Logger -} - -// NewPolicyExecutor creates a new policy executor. -func NewPolicyExecutor(logger *slog.Logger) *PolicyExecutor { - return &PolicyExecutor{ - logger: logger, - } -} - -// AddGlobalPolicy adds a policy that applies to all endpoints. -func (e *PolicyExecutor) AddGlobalPolicy(p Policy) { - e.mu.Lock() - defer e.mu.Unlock() - e.globalPolicies = append(e.globalPolicies, p) -} - -// GlobalPolicies returns a copy of global policies (thread-safe). -func (e *PolicyExecutor) GlobalPolicies() []Policy { - e.mu.RLock() - defer e.mu.RUnlock() - result := make([]Policy, len(e.globalPolicies)) - copy(result, e.globalPolicies) - return result -} - -// RunPreExecute runs pre-execution policies. 
-func (e *PolicyExecutor) RunPreExecute(ctx context.Context, reqCtx *RequestContext, endpoint *Endpoint) error { - e.mu.RLock() - policies := e.globalPolicies - e.mu.RUnlock() - - // Run global policies first - for _, p := range policies { - if err := p.PreExecute(ctx, reqCtx); err != nil { - return &PolicyDeniedError{ - Policy: p.Name(), - Reason: err.Error(), - User: reqCtx.User.Username, - Endpoint: endpoint.Slug, - } - } - } - - // Run endpoint-specific policies - for _, p := range endpoint.policies { - if err := p.PreExecute(ctx, reqCtx); err != nil { - return &PolicyDeniedError{ - Policy: p.Name(), - Reason: err.Error(), - User: reqCtx.User.Username, - Endpoint: endpoint.Slug, - } - } - } - - return nil -} - -// RunPostExecute runs post-execution policies in reverse order. -func (e *PolicyExecutor) RunPostExecute(ctx context.Context, reqCtx *RequestContext, endpoint *Endpoint, result any) error { - // Run endpoint policies in reverse order - for i := len(endpoint.policies) - 1; i >= 0; i-- { - p := endpoint.policies[i] - if err := p.PostExecute(ctx, reqCtx, result); err != nil { - return &PolicyDeniedError{ - Policy: p.Name(), - Reason: err.Error(), - User: reqCtx.User.Username, - Endpoint: endpoint.Slug, - } - } - } - - // Run global policies in reverse order - e.mu.RLock() - policies := e.globalPolicies - e.mu.RUnlock() - - for i := len(policies) - 1; i >= 0; i-- { - p := policies[i] - if err := p.PostExecute(ctx, reqCtx, result); err != nil { - return &PolicyDeniedError{ - Policy: p.Name(), - Reason: err.Error(), - User: reqCtx.User.Username, - Endpoint: endpoint.Slug, - } - } - } - - return nil -} -``` - ---- - -### Step 7: Create RequestProcessor (Extract Class) - -**New File**: `processor.go` - -```go -package syfthubapi - -import ( - "context" - "encoding/json" - "fmt" - "log/slog" - "time" -) - -// RequestProcessor handles the execution of endpoint requests. 
-type RequestProcessor struct { - registry *EndpointRegistry - policyExecutor *PolicyExecutor - authClient *AuthClient - logger *slog.Logger -} - -// ProcessorConfig holds configuration for RequestProcessor. -type ProcessorConfig struct { - Registry *EndpointRegistry - PolicyExecutor *PolicyExecutor - AuthClient *AuthClient - Logger *slog.Logger -} - -// NewRequestProcessor creates a new request processor. -func NewRequestProcessor(cfg *ProcessorConfig) *RequestProcessor { - return &RequestProcessor{ - registry: cfg.Registry, - policyExecutor: cfg.PolicyExecutor, - authClient: cfg.AuthClient, - logger: cfg.Logger, - } -} - -// Process handles an incoming tunnel request. -func (p *RequestProcessor) Process(ctx context.Context, req *TunnelRequest) (*TunnelResponse, error) { - startTime := time.Now() - - p.logger.Debug("processing request", - "correlation_id", req.CorrelationID, - "endpoint", req.Endpoint.Slug, - "endpoint_type", req.Endpoint.Type, - ) - - // Create request context - reqCtx := NewRequestContext() - reqCtx.EndpointSlug = req.Endpoint.Slug - reqCtx.EndpointType = EndpointType(req.Endpoint.Type) - - // Verify token - userCtx, err := p.authClient.VerifyToken(ctx, req.SatelliteToken) - if err != nil { - return p.errorResponse(req, TunnelErrorCodeAuthFailed, err.Error()), nil - } - reqCtx.User = userCtx - - // Get endpoint - endpoint, ok := p.registry.Get(req.Endpoint.Slug) - if !ok { - return p.errorResponse(req, TunnelErrorCodeEndpointNotFound, - fmt.Sprintf("endpoint not found: %s", req.Endpoint.Slug)), nil - } - - if !endpoint.Enabled { - return p.errorResponse(req, TunnelErrorCodeEndpointDisabled, - fmt.Sprintf("endpoint disabled: %s", req.Endpoint.Slug)), nil - } - - // Run pre-execution policies - if err := p.policyExecutor.RunPreExecute(ctx, reqCtx, endpoint); err != nil { - return p.errorResponse(req, TunnelErrorCodePolicyDenied, err.Error()), nil - } - - // Execute handler using invoker pattern - result, err := p.invokeEndpoint(ctx, req, endpoint, 
reqCtx) - if err != nil { - return p.errorResponse(req, TunnelErrorCodeExecutionFailed, err.Error()), nil - } - - // Run post-execution policies - reqCtx.Output = result - if err := p.policyExecutor.RunPostExecute(ctx, reqCtx, endpoint, result); err != nil { - return p.errorResponse(req, TunnelErrorCodePolicyDenied, err.Error()), nil - } - - // Serialize response - payload, err := json.Marshal(result) - if err != nil { - return p.errorResponse(req, TunnelErrorCodeInternalError, - fmt.Sprintf("failed to serialize response: %v", err)), nil - } - - processedAt := time.Now() - return &TunnelResponse{ - Protocol: "syfthub-tunnel/v1", - Type: "endpoint_response", - CorrelationID: req.CorrelationID, - Status: "success", - EndpointSlug: req.Endpoint.Slug, - Payload: payload, - Timing: &TunnelTiming{ - ReceivedAt: startTime, - ProcessedAt: processedAt, - DurationMs: processedAt.Sub(startTime).Milliseconds(), - }, - }, nil -} - -// invokeEndpoint executes the endpoint handler based on type. -func (p *RequestProcessor) invokeEndpoint(ctx context.Context, req *TunnelRequest, endpoint *Endpoint, reqCtx *RequestContext) (any, error) { - endpointType := EndpointType(req.Endpoint.Type) - - switch endpointType { - case EndpointTypeDataSource: - var dsReq DataSourceQueryRequest - if err := json.Unmarshal(req.Payload, &dsReq); err != nil { - return nil, fmt.Errorf("invalid request payload: %w", err) - } - reqCtx.Input = dsReq.GetQuery() - docs, err := endpoint.InvokeDataSource(ctx, dsReq.GetQuery(), reqCtx) - if err != nil { - return nil, err - } - return DataSourceQueryResponse{ - References: DataSourceReferences{Documents: docs}, - }, nil - - case EndpointTypeModel: - var modelReq ModelQueryRequest - if err := json.Unmarshal(req.Payload, &modelReq); err != nil { - return nil, fmt.Errorf("invalid request payload: %w", err) - } - reqCtx.Input = modelReq.Messages - response, err := endpoint.InvokeModel(ctx, modelReq.Messages, reqCtx) - if err != nil { - return nil, err - } - return 
ModelQueryResponse{ - Summary: ModelSummary{ - Message: ModelSummaryMessage{Content: response}, - }, - }, nil - - default: - return nil, fmt.Errorf("unknown endpoint type: %s", req.Endpoint.Type) - } -} - -// errorResponse creates an error tunnel response. -func (p *RequestProcessor) errorResponse(req *TunnelRequest, code TunnelErrorCode, message string) *TunnelResponse { - p.logger.Debug("returning error response", - "correlation_id", req.CorrelationID, - "code", code, - "message", message, - ) - return &TunnelResponse{ - Protocol: "syfthub-tunnel/v1", - Type: "endpoint_response", - CorrelationID: req.CorrelationID, - Status: "error", - EndpointSlug: req.Endpoint.Slug, - Error: &TunnelError{ - Code: code, - Message: message, - }, - } -} -``` - ---- - -### Step 8: Update SyftAPI to Use Extracted Components - -**File Changed**: `api.go` - -**Updated struct**: -```go -type SyftAPI struct { - config *Config - logger *slog.Logger - registry *EndpointRegistry - transport Transport - heartbeatManager HeartbeatManager - fileProvider FileProvider - - // Extracted components - processor *RequestProcessor - policyExecutor *PolicyExecutor - authClient *AuthClient - syncClient *SyncClient - - // Lifecycle - middleware []Middleware - startupHooks []LifecycleHook - shutdownHooks []LifecycleHook - shutdownCh chan struct{} - shutdownWg sync.WaitGroup - - mu sync.RWMutex // For middleware/hooks only -} -``` - -**Updated New()**: -```go -func New(opts ...Option) *SyftAPI { - config := DefaultConfig() - if err := config.LoadFromEnv(); err != nil { - slog.Warn("failed to load config from environment", "error", err) - } - - for _, opt := range opts { - opt(config) - } - - logger := setupLogger(config.LogLevel) - slogLogger := NewSlogLogger(logger) - - registry := NewEndpointRegistry() - authClient := NewAuthClient(config.SyftHubURL, config.APIKey, slogLogger) - syncClient := NewSyncClient(config.SyftHubURL, config.APIKey, slogLogger) - policyExecutor := NewPolicyExecutor(logger) - - 
processor := NewRequestProcessor(&ProcessorConfig{ - Registry: registry, - PolicyExecutor: policyExecutor, - AuthClient: authClient, - Logger: logger, - }) - - return &SyftAPI{ - config: config, - logger: logger, - registry: registry, - authClient: authClient, - syncClient: syncClient, - policyExecutor: policyExecutor, - processor: processor, - shutdownCh: make(chan struct{}), - } -} -``` - -**Delegate methods**: -```go -func (api *SyftAPI) AddPolicy(policy Policy) { - api.policyExecutor.AddGlobalPolicy(policy) -} - -func (api *SyftAPI) handleRequest(ctx context.Context, req *TunnelRequest) (*TunnelResponse, error) { - return api.processor.Process(ctx, req) -} -``` - ---- - -### Step 9: Remove Duplicate Policy Interface - -**File Changed**: `policy/policy.go` - -**Changes**: -```go -package policy - -import ( - "context" - "github.com/openmined/syfthub/sdk/golang/syfthubapi" -) - -// Policy is an alias for the canonical Policy interface in syfthubapi. -type Policy = syfthubapi.Policy - -// Compile-time interface checks -var ( - _ syfthubapi.Policy = (*BasePolicy)(nil) - _ syfthubapi.Policy = (*CompositePolicy)(nil) - _ syfthubapi.Policy = (*NotPolicy)(nil) -) - -// ... rest of file unchanged -``` - ---- - -### Step 10: Delete WorkerPoolExecutor (YAGNI) - -**File Changed**: `filemode/executor.go` - -**Delete lines 228-330** (WorkerPoolExecutor and related types). 
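
After the deletion, a compile-time interface assertion is a cheap guard that the surviving executor still satisfies `syfthubapi.Executor` as the codebase evolves. The sketch below shows the idiom in a self-contained form — the `Executor` and `SubprocessExecutor` shapes here are simplified stand-ins, not the real syfthubapi types:

```go
package main

import "fmt"

// Executor is a simplified stand-in for syfthubapi.Executor.
type Executor interface {
	Execute(payload string) (string, error)
}

// SubprocessExecutor is a stand-in for the surviving filemode executor.
type SubprocessExecutor struct {
	pythonPath string
}

// Execute runs a payload through the configured interpreter (simplified).
func (e *SubprocessExecutor) Execute(payload string) (string, error) {
	return "ran " + payload + " via " + e.pythonPath, nil
}

// Compile-time check: the build breaks if SubprocessExecutor's method set
// ever drifts away from the Executor interface.
var _ Executor = (*SubprocessExecutor)(nil)

func main() {
	var ex Executor = &SubprocessExecutor{pythonPath: ".venv/bin/python"}
	out, err := ex.Execute("job-1")
	fmt.Println(out, err == nil)
}
```

Because the assertion lives at package scope, removing WorkerPoolExecutor cannot silently leave the package without a valid `Executor` implementation — the failure shows up at `go build ./...` rather than at runtime.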
-
-**Update CreateExecutor**:
-```go
-func CreateExecutor(cfg *ExecutorConfig, runtime *RuntimeConfig) (syfthubapi.Executor, error) {
-	venvPython := filepath.Join(cfg.WorkDir, ".venv", "bin", "python")
-	if _, err := os.Stat(venvPython); err == nil {
-		cfg.PythonPath = venvPython
-	}
-
-	if runtime.Mode != "" && runtime.Mode != "subprocess" {
-		cfg.Logger.Warn("unsupported runtime mode, using subprocess", "mode", runtime.Mode)
-	}
-
-	return NewSubprocessExecutor(cfg)
-}
-```
-
----
-
-## Dependency Graph
-
-```
-Step 1 (auth fix) ─┐
-Step 2 (race fix) ─┼─→ Step 6 (PolicyExecutor) ─→ Step 7 (RequestProcessor) ─→ Step 8 (wire SyftAPI)
-Step 3 (panic fix) ─┘
-Step 4 (env error)           ─→ independent
-Step 5 (tests)               ─→ continuous
-Step 9 (Policy iface)        ─→ independent
-Step 10 (delete WorkerPool)  ─→ independent
-```
-
-**Critical Path**: Steps 1, 2 → Step 6 → Step 7 → Step 8
-
----
-
-## Testing Strategy
-
-After each step:
-1. Run `go build ./...` - Compile check
-2. Run `go test ./...` - Unit tests
-3. Run `go test -race ./...` - Race detection
-4. Manual test with example app
-
----
-
-## Rollback Plan
-
-Each step is a separate commit. If issues arise:
-1. `git revert <commit-sha>` for the problematic step
-2. Fix the issue
-3. Re-apply
-
----
-
-## Success Criteria
-
-- [ ] All tests pass
-- [ ] No race conditions detected
-- [ ] Token verification uses real AuthClient
-- [ ] SyftAPI delegates policy execution and request processing to extracted components
-- [ ] No panics in normal code paths
-- [ ] All P0/P1/P2 issues resolved
diff --git a/sdk/python/README.md b/sdk/python/README.md
deleted file mode 100644
index 3a22d42a..00000000
--- a/sdk/python/README.md
+++ /dev/null
@@ -1,203 +0,0 @@
-# SyftHub SDK
-
-Python SDK for interacting with the SyftHub API programmatically.
- -## Installation - -```bash -# Using pip -pip install syfthub-sdk - -# Using uv -uv add syfthub-sdk - -# From source -cd sdk -uv sync -``` - -## Quick Start - -```python -from syfthub_sdk import SyftHubClient - -# Initialize client -client = SyftHubClient(base_url="https://hub.syft.com") - -# Register a new user -user = client.auth.register( - username="john", - email="john@example.com", - password="secret123", - full_name="John Doe" -) - -# Login -user = client.auth.login(username="john", password="secret123") -print(f"Logged in as {user.username}") - -# Get current user -me = client.auth.me() -``` - -## Managing Your Endpoints - -```python -# List your endpoints (with lazy pagination) -for endpoint in client.my_endpoints.list(): - print(f"{endpoint.name} ({endpoint.visibility})") - -# Get just the first page -first_page = client.my_endpoints.list().first_page() - -# Create an endpoint -endpoint = client.my_endpoints.create( - name="My Cool API", - visibility="public", - description="A really cool API", - readme="# My API\n\nThis is my API documentation." 
-) -print(f"Created: {endpoint.slug}") - -# Update an endpoint -endpoint = client.my_endpoints.update( - endpoint_id=endpoint.id, - description="Updated description" -) - -# Delete an endpoint -client.my_endpoints.delete(endpoint_id=endpoint.id) -``` - -## Browsing the Hub - -```python -# Browse public endpoints -for endpoint in client.hub.browse(): - print(f"{endpoint.path}: {endpoint.name}") - -# Get trending endpoints -for endpoint in client.hub.trending(min_stars=10): - print(f"{endpoint.name} - {endpoint.stars_count} stars") - -# Get a specific endpoint by path -endpoint = client.hub.get("alice/cool-api") -print(endpoint.readme) - -# Star/unstar endpoints (requires auth) -client.hub.star("alice/cool-api") -client.hub.unstar("alice/cool-api") - -# Check if you've starred an endpoint -if client.hub.is_starred("alice/cool-api"): - print("You've starred this!") -``` - -## User Profile - -```python -# Update profile -user = client.users.update( - full_name="John D.", - avatar_url="https://example.com/avatar.png" -) - -# Check username availability -if client.users.check_username("newusername"): - print("Username is available!") - -# Change password -client.auth.change_password( - current_password="old123", - new_password="new456" -) -``` - -## Accounting - -```python -# Get account balance -balance = client.accounting.balance() -print(f"Credits: {balance.credits} {balance.currency}") - -# List transactions -for tx in client.accounting.transactions(): - print(f"{tx.created_at}: {tx.amount} - {tx.description}") -``` - -## Token Persistence - -```python -# Get tokens for saving -tokens = client.get_tokens() -if tokens: - # Save to file, database, etc. 
- save_tokens(tokens.access_token, tokens.refresh_token) - -# Later, restore session -from syfthub_sdk import AuthTokens - -tokens = AuthTokens( - access_token=load_access_token(), - refresh_token=load_refresh_token() -) -client.set_tokens(tokens) -``` - -## Environment Variables - -| Variable | Description | -|----------|-------------| -| `SYFTHUB_URL` | SyftHub API base URL | - -## Error Handling - -```python -from syfthub_sdk import ( - SyftHubError, - AuthenticationError, - AuthorizationError, - NotFoundError, - ValidationError, - ConfigurationError, -) - -try: - client.auth.login(username="john", password="wrong") -except AuthenticationError as e: - print(f"Login failed: {e}") -except SyftHubError as e: - print(f"API error [{e.status_code}]: {e.message}") -``` - -## Context Manager - -```python -with SyftHubClient(base_url="https://hub.syft.com") as client: - client.auth.login(username="john", password="secret123") - # ... do work ... -# Client is automatically closed -``` - -## Pagination - -All list methods return a `PageIterator` for lazy pagination: - -```python -# Iterate through all items (fetches pages as needed) -for endpoint in client.my_endpoints.list(): - print(endpoint.name) - -# Get just the first page -first_page = client.my_endpoints.list().first_page() - -# Get all items as a list -all_items = client.my_endpoints.list().all() - -# Get first N items -top_10 = client.my_endpoints.list().take(10) -``` - -## License - -MIT diff --git a/sdk/python/pyproject.toml b/sdk/python/pyproject.toml index 42866a0a..8d0af970 100644 --- a/sdk/python/pyproject.toml +++ b/sdk/python/pyproject.toml @@ -2,7 +2,6 @@ name = "syfthub-sdk" version = "0.1.1" description = "Python SDK for interacting with SyftHub API" -readme = "README.md" license = { text = "Apache-2.0" } requires-python = ">=3.10" authors = [{ name = "SyftHub Team" }] diff --git a/sdk/python/src/syfthub_sdk/accounting.py b/sdk/python/src/syfthub_sdk/accounting.py index 3af2e9a1..dc63377c 100644 --- 
a/sdk/python/src/syfthub_sdk/accounting.py +++ b/sdk/python/src/syfthub_sdk/accounting.py @@ -150,14 +150,19 @@ def _request( *, json: dict[str, Any] | None = None, params: dict[str, Any] | None = None, + token: str | None = None, ) -> dict[str, Any] | list[Any]: - """Make an authenticated request to the accounting service. + """Make a request to the accounting service. + + When `token` is provided, a per-request Bearer header overrides the + client's Basic auth for that call (used for delegated transactions). Args: method: HTTP method (GET, POST, PUT, DELETE, etc.) path: API path (e.g., "/user", "/transactions") json: JSON body for POST/PUT requests params: Query parameters + token: Optional Bearer token overriding Basic auth for this call Returns: Parsed JSON response @@ -168,9 +173,12 @@ def _request( APIError: On other errors """ client = self._get_client() + headers = {"Authorization": f"Bearer {token}"} if token else None try: - response = client.request(method, path, json=json, params=params) + response = client.request( + method, path, json=json, params=params, headers=headers + ) _handle_response_error(response) if response.status_code == 204: @@ -181,47 +189,6 @@ def _request( except httpx.RequestError as e: raise APIError(f"Accounting request failed: {e}") from e - def _request_with_token( - self, - method: str, - path: str, - token: str, - *, - json: dict[str, Any] | None = None, - ) -> dict[str, Any] | list[Any]: - """Make a request using Bearer token auth (for delegated transactions). 
- - Args: - method: HTTP method - path: API path - token: Bearer token for authentication - json: JSON body - - Returns: - Parsed JSON response - """ - try: - # Create a separate client without Basic auth - with httpx.Client( - base_url=self._url, - timeout=self._timeout, - ) as client: - response = client.request( - method, - path, - json=json, - headers={"Authorization": f"Bearer {token}"}, - ) - _handle_response_error(response) - - if response.status_code == 204: - return {} - - return response.json() # type: ignore[no-any-return] - - except httpx.RequestError as e: - raise APIError(f"Accounting request failed: {e}") from e - # ========================================================================= # User Operations # ========================================================================= @@ -554,14 +521,14 @@ def create_delegated_transaction( if amount <= 0: raise ValidationError("Amount must be greater than 0") - response = self._request_with_token( + response = self._request( "POST", "/transactions", - token, json={ "senderEmail": sender_email, "amount": amount, }, + token=token, ) data = response if isinstance(response, dict) else {} return Transaction.model_validate(data) diff --git a/sdk/python/src/syfthub_sdk/auth.py b/sdk/python/src/syfthub_sdk/auth.py index dfcd65d1..409108ac 100644 --- a/sdk/python/src/syfthub_sdk/auth.py +++ b/sdk/python/src/syfthub_sdk/auth.py @@ -3,6 +3,7 @@ from __future__ import annotations import concurrent.futures +from collections.abc import Callable from typing import TYPE_CHECKING from syfthub_sdk.models import ( @@ -404,25 +405,14 @@ def get_satellite_token(self, audience: str) -> SatelliteTokenResponse: data = response if isinstance(response, dict) else {} return SatelliteTokenResponse.model_validate(data) - def get_satellite_tokens(self, audiences: list[str]) -> dict[str, str]: - """Get satellite tokens for multiple audiences in parallel. - - This is useful when making requests to endpoints owned by different users. 
- Tokens are cached and reused where possible. - - Args: - audiences: List of audience identifiers (usernames) - - Returns: - Dict mapping audience to satellite token - - Raises: - AuthenticationError: If not authenticated + def _parallel_fetch_tokens( + self, + audiences: list[str], + fetch_one: Callable[[str], SatelliteTokenResponse], + ) -> dict[str, str]: + """Fetch tokens for multiple audiences in parallel. - Example: - # Get tokens for multiple endpoint owners - tokens = client.auth.get_satellite_tokens(["alice", "bob"]) - print(f"Got {len(tokens)} tokens") + Failures are silently skipped — the aggregator handles missing tokens. """ unique_audiences = list(set(audiences)) token_map: dict[str, str] = {} @@ -430,28 +420,45 @@ def get_satellite_tokens(self, audiences: list[str]) -> dict[str, str]: if not unique_audiences: return token_map - def fetch_token(aud: str) -> tuple[str, str | None]: - """Fetch a single token, returning None on failure.""" + def fetch(aud: str) -> tuple[str, str | None]: try: - response = self.get_satellite_token(aud) - return (aud, response.target_token) + return (aud, fetch_one(aud).target_token) except Exception: - # Failed tokens are silently skipped - the aggregator will handle missing tokens return (aud, None) - # Fetch tokens in parallel using ThreadPoolExecutor with concurrent.futures.ThreadPoolExecutor( max_workers=min(len(unique_audiences), 10) ) as executor: - results = list(executor.map(fetch_token, unique_audiences)) + results = list(executor.map(fetch, unique_audiences)) - # Collect successful results for aud, token in results: if token is not None: token_map[aud] = token return token_map + def get_satellite_tokens(self, audiences: list[str]) -> dict[str, str]: + """Get satellite tokens for multiple audiences in parallel. + + This is useful when making requests to endpoints owned by different users. + Tokens are cached and reused where possible. 
+ + Args: + audiences: List of audience identifiers (usernames) + + Returns: + Dict mapping audience to satellite token + + Raises: + AuthenticationError: If not authenticated + + Example: + # Get tokens for multiple endpoint owners + tokens = client.auth.get_satellite_tokens(["alice", "bob"]) + print(f"Got {len(tokens)} tokens") + """ + return self._parallel_fetch_tokens(audiences, self.get_satellite_token) + def get_guest_satellite_token(self, audience: str) -> SatelliteTokenResponse: """Get a guest satellite token for a specific audience without authentication. @@ -480,29 +487,7 @@ def get_guest_satellite_tokens(self, audiences: list[str]) -> dict[str, str]: Returns: Dict mapping audience to satellite token """ - unique_audiences = list(set(audiences)) - token_map: dict[str, str] = {} - - if not unique_audiences: - return token_map - - def fetch_token(aud: str) -> tuple[str, str | None]: - try: - response = self.get_guest_satellite_token(aud) - return (aud, response.target_token) - except Exception: - return (aud, None) - - with concurrent.futures.ThreadPoolExecutor( - max_workers=min(len(unique_audiences), 10) - ) as executor: - results = list(executor.map(fetch_token, unique_audiences)) - - for aud, token in results: - if token is not None: - token_map[aud] = token - - return token_map + return self._parallel_fetch_tokens(audiences, self.get_guest_satellite_token) def get_peer_token(self, target_usernames: list[str]) -> PeerTokenResponse: """Get a peer token for NATS communication with tunneling spaces. 
diff --git a/sdk/python/src/syfthub_sdk/chat.py b/sdk/python/src/syfthub_sdk/chat.py index ab8f2973..6e1c0677 100644 --- a/sdk/python/src/syfthub_sdk/chat.py +++ b/sdk/python/src/syfthub_sdk/chat.py @@ -242,6 +242,10 @@ def __init__( # Separate client for aggregator with longer timeout (LLM can be slow) self._agg_client = httpx.Client(timeout=120.0) + def close(self) -> None: + """Close the aggregator HTTP client.""" + self._agg_client.close() + @staticmethod def _type_matches(actual_type: str, expected_type: str) -> bool: """Check if an endpoint type matches the expected type. diff --git a/sdk/python/src/syfthub_sdk/client.py b/sdk/python/src/syfthub_sdk/client.py index 0e8b8eb7..92ba1577 100644 --- a/sdk/python/src/syfthub_sdk/client.py +++ b/sdk/python/src/syfthub_sdk/client.py @@ -341,6 +341,10 @@ def close(self) -> None: self._http.close() if self._accounting is not None: self._accounting.close() + if self._chat is not None: + self._chat.close() + if self._syftai is not None: + self._syftai.close() def __enter__(self) -> Self: """Enter context manager.""" diff --git a/sdk/python/src/syfthub_sdk/syftai.py b/sdk/python/src/syfthub_sdk/syftai.py index 8e9a7460..620c38c8 100644 --- a/sdk/python/src/syfthub_sdk/syftai.py +++ b/sdk/python/src/syfthub_sdk/syftai.py @@ -98,6 +98,10 @@ def __init__( # Client for SyftAI-Space with reasonable timeout self._client = httpx.Client(timeout=60.0) + def close(self) -> None: + """Close the SyftAI-Space HTTP client.""" + self._client.close() + def _build_headers( self, tenant_name: str | None = None, @@ -110,6 +114,54 @@ def _build_headers( headers["X-Tenant-Name"] = tenant_name return headers + @staticmethod + def _endpoint_query_url(endpoint: EndpointRef) -> str: + return f"{endpoint.url.rstrip('/')}/api/v1/endpoints/{endpoint.slug}/query" + + @staticmethod + def _extract_error_message(response: httpx.Response) -> str: + """Extract a human-readable error message from a response body.""" + try: + error_data = response.json() + 
return str( + error_data.get("detail", error_data.get("message", str(error_data))) + ) + except Exception: + return response.text or f"HTTP {response.status_code}" + + def _post_endpoint( + self, + endpoint: EndpointRef, + body: dict[str, object], + *, + error_cls: type[RetrievalError | GenerationError], + error_prefix: str, + **error_kwargs: str, + ) -> httpx.Response: + """POST to an endpoint, mapping connection/HTTP errors to error_cls.""" + try: + response = self._client.post( + self._endpoint_query_url(endpoint), + json=body, + headers=self._build_headers(endpoint.tenant_name), + ) + except httpx.RequestError as e: + raise error_cls( + f"Failed to connect to {error_prefix} '{endpoint.slug}': {e}", + detail=str(e), + **error_kwargs, + ) from e + + if response.status_code >= 400: + message = self._extract_error_message(response) + raise error_cls( + f"{error_prefix.capitalize()} query failed: {message}", + detail=response.text, + **error_kwargs, + ) + + return response + def query_data_source( self, endpoint: EndpointRef, @@ -146,8 +198,6 @@ def query_data_source( for doc in docs: print(f"[{doc.score:.2f}] {doc.content[:100]}...") """ - url = f"{endpoint.url.rstrip('/')}/api/v1/endpoints/{endpoint.slug}/query" - request_body = { "user_email": user_email, "messages": query, # SyftAI-Space expects "messages" for query text @@ -155,33 +205,13 @@ def query_data_source( "similarity_threshold": similarity_threshold, } - try: - response = self._client.post( - url, - json=request_body, - headers=self._build_headers(endpoint.tenant_name), - ) - except httpx.RequestError as e: - raise RetrievalError( - f"Failed to connect to data source '{endpoint.slug}': {e}", - source_path=endpoint.slug, - detail=str(e), - ) from e - - if response.status_code >= 400: - try: - error_data = response.json() - message = error_data.get( - "detail", error_data.get("message", str(error_data)) - ) - except Exception: - message = response.text or f"HTTP {response.status_code}" - - raise 
RetrievalError(
-                f"Data source query failed: {message}",
-                source_path=endpoint.slug,
-                detail=response.text,
-            )
+        response = self._post_endpoint(
+            endpoint,
+            request_body,
+            error_cls=RetrievalError,
+            error_prefix="data source",
+            source_path=endpoint.slug,
+        )

         data = response.json()
         documents = []
@@ -235,8 +265,6 @@ def query_model(
         )
         print(response)
         """
-        url = f"{endpoint.url.rstrip('/')}/api/v1/endpoints/{endpoint.slug}/query"
-
         request_body = {
             "user_email": user_email,
             "messages": [
@@ -247,33 +275,13 @@
             "stream": False,
         }

-        try:
-            response = self._client.post(
-                url,
-                json=request_body,
-                headers=self._build_headers(endpoint.tenant_name),
-            )
-        except httpx.RequestError as e:
-            raise GenerationError(
-                f"Failed to connect to model '{endpoint.slug}': {e}",
-                model_slug=endpoint.slug,
-                detail=str(e),
-            ) from e
-
-        if response.status_code >= 400:
-            try:
-                error_data = response.json()
-                message = error_data.get(
-                    "detail", error_data.get("message", str(error_data))
-                )
-            except Exception:
-                message = response.text or f"HTTP {response.status_code}"
-
-            raise GenerationError(
-                f"Model query failed: {message}",
-                model_slug=endpoint.slug,
-                detail=response.text,
-            )
+        response = self._post_endpoint(
+            endpoint,
+            request_body,
+            error_cls=GenerationError,
+            error_prefix="model",
+            model_slug=endpoint.slug,
+        )

         data = response.json()

@@ -317,8 +325,6 @@ def query_model_stream(
         ):
             print(chunk, end="", flush=True)
         """
-        url = f"{endpoint.url.rstrip('/')}/api/v1/endpoints/{endpoint.slug}/query"
-
         request_body = {
             "user_email": user_email,
             "messages": [
@@ -332,7 +338,7 @@
         try:
             with self._client.stream(
                 "POST",
-                url,
+                self._endpoint_query_url(endpoint),
                 json=request_body,
                 headers={
                     **self._build_headers(endpoint.tenant_name),
@@ -341,13 +347,7 @@
             ) as response:
                 if response.status_code >= 400:
                     response.read()
-                    try:
-                        error_data = json.loads(response.text)
-                        message = error_data.get(
-                            "detail", error_data.get("message", str(error_data))
-                        )
-                    except Exception:
-                        message = response.text or f"HTTP {response.status_code}"
+                    message = self._extract_error_message(response)

                     raise GenerationError(
                         f"Model stream failed: {message}",
diff --git a/sdk/python/uv.lock b/sdk/python/uv.lock
index 8d0cb631..46aad4e8 100644
--- a/sdk/python/uv.lock
+++ b/sdk/python/uv.lock
@@ -522,7 +522,7 @@ wheels = [

 [[package]]
 name = "pytest"
-version = "8.4.2"
+version = "9.0.3"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "colorama", marker = "sys_platform == 'win32'" },
@@ -533,9 +533,9 @@ dependencies = [
     { name = "pygments" },
     { name = "tomli", marker = "python_full_version < '3.11'" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/a3/5c/00a0e072241553e1a7496d638deababa67c5058571567b92a7eaa258397c/pytest-8.4.2.tar.gz", hash = "sha256:86c0d0b93306b961d58d62a4db4879f27fe25513d4b969df351abdddb3c30e01", size = 1519618, upload-time = "2025-09-04T14:34:22.711Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/7d/0d/549bd94f1a0a402dc8cf64563a117c0f3765662e2e668477624baeec44d5/pytest-9.0.3.tar.gz", hash = "sha256:b86ada508af81d19edeb213c681b1d48246c1a91d304c6c81a427674c17eb91c", size = 1572165, upload-time = "2026-04-07T17:16:18.027Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/a8/a4/20da314d277121d6534b3a980b29035dcd51e6744bd79075a6ce8fa4eb8d/pytest-8.4.2-py3-none-any.whl", hash = "sha256:872f880de3fc3a5bdc88a11b39c9710c3497a547cfa9320bc3c5e62fbf272e79", size = 365750, upload-time = "2025-09-04T14:34:20.226Z" },
+    { url = "https://files.pythonhosted.org/packages/d4/24/a372aaf5c9b7208e7112038812994107bc65a84cd00e0354a88c2c77a617/pytest-9.0.3-py3-none-any.whl", hash = "sha256:2c5efc453d45394fdd706ade797c0a81091eccd1d6e4bccfcd476e2b8e0ab5d9", size = 375249, upload-time = "2026-04-07T17:16:16.13Z" },
 ]

 [[package]]
diff --git a/sdk/typescript/README.md b/sdk/typescript/README.md
deleted file mode 100644
index edca1adf..00000000
--- a/sdk/typescript/README.md
+++ /dev/null
@@ -1,276 +0,0 @@
-# SyftHub TypeScript SDK
-
-TypeScript SDK for interacting with the SyftHub API programmatically.
-
-## Installation
-
-```bash
-# Using npm
-npm install @syfthub/sdk
-
-# Using yarn
-yarn add @syfthub/sdk
-
-# Using pnpm
-pnpm add @syfthub/sdk
-```
-
-## Quick Start
-
-```typescript
-import { SyftHubClient } from '@syfthub/sdk';
-
-// Initialize client
-const client = new SyftHubClient({ baseUrl: 'https://hub.syft.com' });
-
-// Register a new user
-const user = await client.auth.register({
-  username: 'john',
-  email: 'john@example.com',
-  password: 'secret123',
-  fullName: 'John Doe',
-});
-
-// Login
-const loggedIn = await client.auth.login('john', 'secret123');
-console.log(`Logged in as ${loggedIn.username}`);
-
-// Get current user
-const me = await client.auth.me();
-```
-
-## Managing Your Endpoints
-
-```typescript
-import { EndpointType, Visibility } from '@syfthub/sdk';
-
-// List your endpoints (with lazy pagination)
-for await (const endpoint of client.myEndpoints.list()) {
-  console.log(`${endpoint.name} (${endpoint.visibility})`);
-}
-
-// Get just the first page
-const firstPage = await client.myEndpoints.list().firstPage();
-
-// Create an endpoint
-const endpoint = await client.myEndpoints.create({
-  name: 'My Cool API',
-  type: EndpointType.MODEL,
-  visibility: Visibility.PUBLIC,
-  description: 'A really cool API',
-  readme: '# My API\n\nThis is my API documentation.',
-});
-console.log(`Created: ${endpoint.slug}`);
-
-// Update an endpoint
-const updated = await client.myEndpoints.update('john/my-cool-api', {
-  description: 'Updated description',
-});
-
-// Delete an endpoint
-await client.myEndpoints.delete('john/my-cool-api');
-```
-
-## Browsing the Hub
-
-```typescript
-// Browse public endpoints
-for await (const endpoint of client.hub.browse()) {
-  console.log(`${endpoint.ownerUsername}/${endpoint.slug}: ${endpoint.name}`);
-}
-
-// Get trending endpoints
-for await (const endpoint of client.hub.trending({ minStars: 10 })) {
-  console.log(`${endpoint.name} - ${endpoint.starsCount} stars`);
-}
-
-// Get a specific endpoint by path
-const endpoint = await client.hub.get('alice/cool-api');
-console.log(endpoint.readme);
-
-// Star/unstar endpoints (requires auth)
-await client.hub.star('alice/cool-api');
-await client.hub.unstar('alice/cool-api');
-
-// Check if you've starred an endpoint
-if (await client.hub.isStarred('alice/cool-api')) {
-  console.log("You've starred this!");
-}
-```
-
-## User Profile
-
-```typescript
-// Update profile
-const user = await client.users.update({
-  fullName: 'John D.',
-  avatarUrl: 'https://example.com/avatar.png',
-});
-
-// Check username availability
-if (await client.users.checkUsername('newusername')) {
-  console.log('Username is available!');
-}
-
-// Change password
-await client.auth.changePassword('old123', 'new456');
-```
-
-## Accounting
-
-```typescript
-// Get account balance
-const balance = await client.accounting.balance();
-console.log(`Credits: ${balance.credits} ${balance.currency}`);
-
-// List transactions
-for await (const tx of client.accounting.transactions()) {
-  console.log(`${tx.createdAt}: ${tx.amount} - ${tx.description}`);
-}
-```
-
-## Token Persistence
-
-```typescript
-// Get tokens for saving
-const tokens = client.getTokens();
-if (tokens) {
-  // Save to localStorage, database, etc.
-  localStorage.setItem('syfthub_tokens', JSON.stringify(tokens));
-}
-
-// Later, restore session
-const saved = localStorage.getItem('syfthub_tokens');
-if (saved) {
-  const tokens = JSON.parse(saved);
-  client.setTokens(tokens);
-}
-
-// Check if authenticated
-if (client.isAuthenticated) {
-  console.log('Session restored!');
-}
-```
-
-## Environment Variables
-
-| Variable | Description |
-|----------|-------------|
-| `SYFTHUB_URL` | SyftHub API base URL |
-
-## Error Handling
-
-```typescript
-import {
-  SyftHubError,
-  AuthenticationError,
-  AuthorizationError,
-  NotFoundError,
-  ValidationError,
-  NetworkError,
-} from '@syfthub/sdk';
-
-try {
-  await client.auth.login('john', 'wrong');
-} catch (error) {
-  if (error instanceof AuthenticationError) {
-    console.log(`Login failed: ${error.message}`);
-  } else if (error instanceof NotFoundError) {
-    console.log('User not found');
-  } else if (error instanceof ValidationError) {
-    console.log(`Validation error: ${error.message}`);
-    console.log('Field errors:', error.errors);
-  } else if (error instanceof NetworkError) {
-    console.log(`Network error: ${error.message}`);
-  } else if (error instanceof SyftHubError) {
-    console.log(`API error: ${error.message}`);
-  }
-}
-```
-
-## Pagination
-
-All list methods return a `PageIterator` for lazy async pagination:
-
-```typescript
-// Iterate through all items (fetches pages as needed)
-for await (const endpoint of client.myEndpoints.list()) {
-  console.log(endpoint.name);
-}
-
-// Get just the first page
-const firstPage = await client.myEndpoints.list().firstPage();
-
-// Get all items as an array (loads all into memory)
-const allItems = await client.myEndpoints.list().all();
-
-// Get first N items
-const top10 = await client.myEndpoints.list().take(10);
-```
-
-## TypeScript Support
-
-This SDK is written in TypeScript and provides full type safety:
-
-```typescript
-import {
-  // Client
-  SyftHubClient,
-  SyftHubClientOptions,
-
-  // Enums
-  Visibility,
-  EndpointType,
-  UserRole,
-
-  // Types
-  User,
-  Endpoint,
-  EndpointPublic,
-  Policy,
-  Connection,
-  AuthTokens,
-
-  // Input types
-  UserRegisterInput,
-  EndpointCreateInput,
-  EndpointUpdateInput,
-
-  // Errors
-  SyftHubError,
-  AuthenticationError,
-  ValidationError,
-
-  // Utilities
-  PageIterator,
-  getEndpointPublicPath,
-} from '@syfthub/sdk';
-
-// All types are properly inferred
-const endpoint: Endpoint = await client.myEndpoints.create({
-  name: 'My API',
-  type: EndpointType.MODEL,
-});
-```
-
-## Comparison with Python SDK
-
-| Python | TypeScript |
-|--------|------------|
-| `client.auth.login(username, password)` | `client.auth.login(username, password)` |
-| `client.my_endpoints.list()` | `client.myEndpoints.list()` |
-| `for ep in client.hub.browse()` | `for await (const ep of client.hub.browse())` |
-| `client.get_tokens()` | `client.getTokens()` |
-| `client.set_tokens(tokens)` | `client.setTokens(tokens)` |
-| `client.is_authenticated` | `client.isAuthenticated` |
-
-The TypeScript SDK follows JavaScript/TypeScript conventions (camelCase) while providing the same functionality as the Python SDK.
-
-## Requirements
-
-- Node.js 18+ (for native `fetch` support)
-- Or any modern browser
-
-## License
-
-MIT
diff --git a/sdk/typescript/src/http.ts b/sdk/typescript/src/http.ts
index 7edf4a01..8cdf51b7 100644
--- a/sdk/typescript/src/http.ts
+++ b/sdk/typescript/src/http.ts
@@ -7,6 +7,7 @@ import {
   InvalidAccountingPasswordError,
   NetworkError,
   NotFoundError,
+  SyftHubError,
   UserAlreadyExistsError,
   ValidationError,
 } from './errors.js';
@@ -471,6 +472,3 @@ export class HTTPClient {
     await this.refreshPromise;
   }
 }
-
-// Import SyftHubError for type checking
-import { SyftHubError } from './errors.js';
diff --git a/sdk/typescript/src/resources/chat.ts b/sdk/typescript/src/resources/chat.ts
index fb6c6f56..6ec70d73 100644
--- a/sdk/typescript/src/resources/chat.ts
+++ b/sdk/typescript/src/resources/chat.ts
@@ -37,6 +37,7 @@ import type {
   TokenUsage,
 } from '../models/chat.js';
 import { SyftHubError } from '../errors.js';
+import { readSSEEvents } from '../utils.js';
 import type { HubResource } from './hub.js';
 import type { AuthResource } from './auth.js';
 import { EndpointType } from '../models/index.js';
@@ -567,51 +568,17 @@
       throw new AggregatorError('No response body from aggregator');
     }

-    const reader = response.body.getReader();
-    const decoder = new TextDecoder();
-    let buffer = '';
-    let currentEvent: string | null = null;
-    let currentData = '';
-
-    try {
-      while (true) {
-        const { done, value } = await reader.read();
-        if (done) break;
-
-        buffer += decoder.decode(value, { stream: true });
-        const lines = buffer.split('\n');
-        buffer = lines.pop() ?? '';
-
-        for (const line of lines) {
-          const trimmedLine = line.trim();
-
-          if (!trimmedLine) {
-            // Empty line = end of event
-            if (currentEvent && currentData) {
-              try {
-                const data = JSON.parse(currentData) as Record<string, unknown>;
-                const event = this.parseSSEEvent(currentEvent, data);
-                if (event) {
-                  yield event;
-                }
-              } catch {
-                yield { type: 'error', message: `Failed to parse SSE data: ${currentData}` };
-              }
-            }
-            currentEvent = null;
-            currentData = '';
-            continue;
-          }
-
-          if (trimmedLine.startsWith('event:')) {
-            currentEvent = trimmedLine.slice(6).trim();
-          } else if (trimmedLine.startsWith('data:')) {
-            currentData = trimmedLine.slice(5).trim();
-          }
+    for await (const { event: eventName, data: dataStr } of readSSEEvents(response)) {
+      if (eventName === 'message') continue; // chat protocol always names events
+      try {
+        const data = JSON.parse(dataStr) as Record<string, unknown>;
+        const event = this.parseSSEEvent(eventName, data);
+        if (event) {
+          yield event;
         }
+      } catch {
+        yield { type: 'error', message: `Failed to parse SSE data: ${dataStr}` };
       }
-    } finally {
-      reader.releaseLock();
     }
   }
diff --git a/sdk/typescript/src/resources/syftai.ts b/sdk/typescript/src/resources/syftai.ts
index f7cbccef..cfe5f414 100644
--- a/sdk/typescript/src/resources/syftai.ts
+++ b/sdk/typescript/src/resources/syftai.ts
@@ -25,6 +25,7 @@
 import type { Document, QueryDataSourceOptions, QueryModelOptions } from '../models/chat.js';
 import { SyftHubError } from '../errors.js';
+import { readSSEEvents } from '../utils.js';

 /**
  * Error thrown when data source retrieval fails.
@@ -253,55 +254,27 @@ export class SyftAIResource {
       throw new GenerationError('No response body from model', endpoint.slug);
     }

-    const reader = response.body.getReader();
-    const decoder = new TextDecoder();
-    let buffer = '';
+    for await (const { data: dataStr } of readSSEEvents(response)) {
+      if (dataStr === '[DONE]') return;

-    try {
-      while (true) {
-        const { done, value } = await reader.read();
-        if (done) break;
-
-        buffer += decoder.decode(value, { stream: true });
-        const lines = buffer.split('\n');
-        buffer = lines.pop() ?? '';
-
-        for (const line of lines) {
-          const trimmedLine = line.trim();
-
-          if (!trimmedLine || trimmedLine.startsWith('event:')) {
-            continue;
-          }
-
-          if (trimmedLine.startsWith('data:')) {
-            const dataStr = trimmedLine.slice(5).trim();
-            if (dataStr === '[DONE]') {
-              return;
-            }
-
-            try {
-              const data = JSON.parse(dataStr) as Record<string, unknown>;
-
-              // Extract content from various response formats
-              if (typeof data['content'] === 'string') {
-                yield data['content'];
-              } else if (Array.isArray(data['choices'])) {
-                // OpenAI-style response
-                for (const choice of data['choices'] as Record<string, unknown>[]) {
-                  const delta = choice['delta'] as Record<string, unknown> | undefined;
-                  if (delta && typeof delta['content'] === 'string') {
-                    yield delta['content'];
-                  }
-                }
-              }
-            } catch {
-              // Skip malformed data
+      try {
+        const data = JSON.parse(dataStr) as Record<string, unknown>;
+
+        // Extract content from various response formats
+        if (typeof data['content'] === 'string') {
+          yield data['content'];
+        } else if (Array.isArray(data['choices'])) {
+          // OpenAI-style response
+          for (const choice of data['choices'] as Record<string, unknown>[]) {
+            const delta = choice['delta'] as Record<string, unknown> | undefined;
+            if (delta && typeof delta['content'] === 'string') {
+              yield delta['content'];
             }
           }
         }
+      } catch {
+        // Skip malformed data
       }
-    } finally {
-      reader.releaseLock();
     }
   }
 }
diff --git a/sdk/typescript/src/utils.ts b/sdk/typescript/src/utils.ts
index 1423df32..ccd207d8 100644
--- a/sdk/typescript/src/utils.ts
+++ b/sdk/typescript/src/utils.ts
@@ -1,3 +1,9 @@
+// Per-key caches. Every request runs these over every key; the key-set is
+// small (tens to low hundreds across a process lifetime) so an unbounded Map
+// is fine and saves repeated regex work on hot paths.
+const snakeToCamelCache = new Map<string, string>();
+const camelToSnakeCache = new Map<string, string>();
+
 /**
  * Convert a snake_case string to camelCase.
  *
@@ -6,7 +12,11 @@
  * snakeToCamel('full_name') // 'fullName'
  */
 export function snakeToCamel(str: string): string {
-  return str.replace(/_([a-z])/g, (_, letter: string) => letter.toUpperCase());
+  const cached = snakeToCamelCache.get(str);
+  if (cached !== undefined) return cached;
+  const result = str.replace(/_([a-z])/g, (_, letter: string) => letter.toUpperCase());
+  snakeToCamelCache.set(str, result);
+  return result;
 }

 /**
@@ -17,7 +27,11 @@ export function snakeToCamel(str: string): string {
  * camelToSnake('fullName') // 'full_name'
  */
 export function camelToSnake(str: string): string {
-  return str.replace(/[A-Z]/g, (letter) => `_${letter.toLowerCase()}`);
+  const cached = camelToSnakeCache.get(str);
+  if (cached !== undefined) return cached;
+  const result = str.replace(/[A-Z]/g, (letter) => `_${letter.toLowerCase()}`);
+  camelToSnakeCache.set(str, result);
+  return result;
 }

 /**
@@ -109,3 +123,79 @@ export function buildSearchParams(params: Record<string, unknown>): URLSearchParams {
   return searchParams;
 }
+
+/**
+ * Parse a Server-Sent Events stream into event/data pairs.
+ *
+ * - Yields `{event, data}` on blank-line boundaries (SSE framing) OR after any
+ *   `data:` line when no preceding `event:` has been seen (tolerates servers
+ *   that emit only `data:` lines — fall back to `"message"`).
+ * - Does NOT JSON.parse; callers parse their own schema.
+ * - Flushes any pending event when the stream ends.
+ */
+export async function* readSSEEvents(
+  response: Response
+): AsyncGenerator<{ event: string; data: string }> {
+  if (!response.body) return;
+
+  const reader = response.body.getReader();
+  const decoder = new TextDecoder();
+  let buffer = '';
+  let currentEvent: string | null = null;
+  let currentData = '';
+
+  const flush = function* (): Generator<{ event: string; data: string }> {
+    if (currentData) {
+      yield { event: currentEvent ?? 'message', data: currentData };
+    }
+    currentEvent = null;
+    currentData = '';
+  };
+
+  try {
+    while (true) {
+      const { done, value } = await reader.read();
+      if (done) break;
+
+      buffer += decoder.decode(value, { stream: true });
+      const lines = buffer.split('\n');
+      buffer = lines.pop() ?? '';
+
+      for (const line of lines) {
+        const trimmed = line.trim();
+
+        if (!trimmed) {
+          yield* flush();
+          continue;
+        }
+
+        if (trimmed.startsWith('event:')) {
+          currentEvent = trimmed.slice(6).trim();
+        } else if (trimmed.startsWith('data:')) {
+          // If we already have buffered data without a blank-line terminator,
+          // emit it now so data-only streams (no event: header) still flow.
+          if (currentData && currentEvent === null) {
+            yield* flush();
+          }
+          currentData = trimmed.slice(5).trim();
+        }
+      }
+    }
+
+    // Process any trailing line still in the buffer.
+    const trailing = buffer.trim();
+    if (trailing) {
+      if (trailing.startsWith('event:')) {
+        currentEvent = trailing.slice(6).trim();
+      } else if (trailing.startsWith('data:')) {
+        if (currentData && currentEvent === null) {
+          yield* flush();
+        }
+        currentData = trailing.slice(5).trim();
+      }
+    }
+    yield* flush();
+  } finally {
+    reader.releaseLock();
+  }
+}
diff --git a/skills/README.md b/skills/README.md
deleted file mode 100644
index 2f6f503d..00000000
--- a/skills/README.md
+++ /dev/null
@@ -1,38 +0,0 @@
-# SyftHub Skills
-
-This directory contains Claude Code skills for working with SyftHub.
-
-## Available Skills
-
-| Skill | Description |
-|-------|-------------|
-| [syfthub-cli](./syfthub-cli/) | CLI commands for authentication, endpoint discovery, RAG queries, and configuration |
-
-## Installation
-
-### Option 1: Copy to Claude Code skills directory
-
-```bash
-# Clone or navigate to the syfthub repo
-cp -r skills/syfthub-cli ~/.claude/skills/
-```
-
-### Option 2: Symlink (for development)
-
-```bash
-ln -s "$(pwd)/skills/syfthub-cli" ~/.claude/skills/syfthub-cli
-```
-
-### Option 3: One-liner install
-
-```bash
-curl -fsSL https://raw.githubusercontent.com/OpenMined/syfthub/main/skills/syfthub-cli/SKILL.md -o ~/.claude/skills/syfthub-cli/SKILL.md --create-dirs
-```
-
-## Verify Installation
-
-After installation, the skill will automatically trigger when you ask Claude Code about:
-- SyftHub CLI commands (`syft login`, `syft ls`, `syft query`, etc.)
-- Browsing or listing AI endpoints
-- RAG queries with SyftHub
-- Managing aggregator configurations
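For reviewers: the core of this patch is the shared `readSSEEvents` helper that replaces the three hand-rolled SSE loops. The sketch below is an illustrative, self-contained re-implementation of its framing logic (the function name `readSSE` and the sample stream are invented for the demo, and the mid-stream flush for data-only streams is omitted); it is not the SDK export.

```typescript
// Simplified sketch of the SSE framing implemented by readSSEEvents in
// sdk/typescript/src/utils.ts. Hypothetical demo code, not the SDK export.

async function* readSSE(
  response: Response
): AsyncGenerator<{ event: string; data: string }> {
  if (!response.body) return;
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  let event: string | null = null;
  let data = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop() ?? ''; // keep any partial line for the next chunk
    for (const line of lines) {
      const trimmed = line.trim();
      if (!trimmed) {
        // Blank line terminates an SSE event; default the name to "message".
        if (data) yield { event: event ?? 'message', data };
        event = null;
        data = '';
      } else if (trimmed.startsWith('event:')) {
        event = trimmed.slice(6).trim();
      } else if (trimmed.startsWith('data:')) {
        data = trimmed.slice(5).trim();
      }
    }
  }
  if (data) yield { event: event ?? 'message', data }; // flush at end-of-stream
}

// A Response built from a string exposes its body as a ReadableStream (Node 18+).
const sse = 'event: chunk\ndata: {"content":"hi"}\n\ndata: [DONE]\n\n';
const events: Array<{ event: string; data: string }> = [];
for await (const ev of readSSE(new Response(sse))) {
  events.push(ev);
}
// events → [{ event: 'chunk', data: '{"content":"hi"}' },
//           { event: 'message', data: '[DONE]' }]
```

This shows why the consolidated helper can serve both call sites: `chat.ts` keys on the event name, while `syftai.ts` ignores names and only watches `data` for `[DONE]`.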