feat: add Pi coding agent integration guide #2286
Merged
9 commits:
- 39d3234 feat: add Pi coding agent integration guide (analogpvt)
- a02feb3 feat: add Pi icon and update guide screenshots (analogpvt)
- da2e984 feat: update Hermes and Kilo Code to official icons (analogpvt)
- 8e6a20a Merge branch 'main' into feat/pi-integration-guide (analogpvt)
- c25496d feat: update Pi guide to latest models (analogpvt)
- 55deb8c revert: remove Hermes and Kilo Code icon changes (analogpvt)
- 3c7cbb3 fix: remove deprecated pnpm build-config keys (analogpvt)
- 01fe31c fix: restore pnpm-workspace.yaml to original (analogpvt)
- d1fce1f Merge branch 'main' into feat/pi-integration-guide (smakosh)
---
title: Pi Integration
description: Use any model with Pi coding agent through LLM Gateway — GPT-5.5, Gemini, Claude, DeepSeek, and 200+ others in your terminal.
image: guides/pi/config.png
icon: Terminal
---

import { Step, Steps } from "fumadocs-ui/components/steps";
import { Callout } from "fumadocs-ui/components/callout";

[Pi](https://pi.dev) is a minimal terminal-based coding agent that gives an AI full access to read, write, edit, and run shell commands in your project. By pointing Pi at LLM Gateway, you can use any of our 200+ models — GPT-5.5, Gemini 3.1 Pro, Claude Opus 4.7, DeepSeek V4, and more — with full cost tracking and caching.
## Prerequisites

- An LLM Gateway account with an API key
- Pi installed (`curl -fsSL https://pi.dev/install.sh | bash`)
- Basic terminal familiarity

## Setup

Pi uses a `models.json` configuration file to define providers and models. We'll add LLM Gateway as a custom provider.
<Steps>
<Step>
### Get Your API Key

1. Log in to your [LLM Gateway dashboard](https://llmgateway.io/dashboard)
2. Navigate to the **API Keys** section
3. Create a new API key and copy it

</Step>
<Step>
### Configure Pi

Open (or create) the Pi models configuration file at `~/.pi/agent/models.json` and add LLM Gateway as a provider:

```json
{
  "providers": {
    "llmgateway": {
      "baseUrl": "https://api.llmgateway.io/v1",
      "api": "openai-completions",
      "apiKey": "llmgtwy_your_api_key_here",
      "models": [
        { "id": "gpt-5.5", "name": "GPT-5.5" },
        { "id": "claude-opus-4-7", "name": "Claude Opus 4.7" },
        { "id": "gemini-3.1-pro", "name": "Gemini 3.1 Pro" },
        { "id": "deepseek-v4", "name": "DeepSeek V4", "reasoning": true }
      ]
    }
  }
}
```

Replace `llmgtwy_your_api_key_here` with your actual API key from Step 1.

![Pi models.json configuration](/guides/pi/config.png)

<Callout type="info">
  Pi reloads `models.json` when you open the `/model` menu — no restart needed
  after editing.
</Callout>
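If you prefer to script this step, here is a minimal sketch that writes the same provider block and backs up any existing config. The `PI_CONFIG_DIR` variable is our own addition for sandboxed testing, not something Pi defines; Pi itself reads `~/.pi/agent/models.json`.

```shell
#!/usr/bin/env sh
# Sketch: write the LLM Gateway provider config from the shell.
# PI_CONFIG_DIR is a hypothetical override for testing; Pi reads ~/.pi/agent.
CONFIG_DIR="${PI_CONFIG_DIR:-$HOME/.pi/agent}"
mkdir -p "$CONFIG_DIR"
# Keep a backup if a config already exists
if [ -f "$CONFIG_DIR/models.json" ]; then
  cp "$CONFIG_DIR/models.json" "$CONFIG_DIR/models.json.bak"
fi
cat > "$CONFIG_DIR/models.json" <<'EOF'
{
  "providers": {
    "llmgateway": {
      "baseUrl": "https://api.llmgateway.io/v1",
      "api": "openai-completions",
      "apiKey": "llmgtwy_your_api_key_here",
      "models": [{ "id": "gpt-5.5", "name": "GPT-5.5" }]
    }
  }
}
EOF
echo "wrote $CONFIG_DIR/models.json"
```

Edit the heredoc to add more models or paste in your real key, exactly as in the example above.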
</Step>

<Step>
### Select Your Model

1. Run `pi` in any project directory
2. Type `/model` to open the model selector
3. Select your LLM Gateway model from the list

All requests now route through LLM Gateway with full cost tracking.

</Step>
<Step>
### Test the Integration

Ask Pi to do something in your project to verify everything works:

```
> hello
```

![Pi hello response](/guides/pi/hello.png)

You should see the response streaming from your chosen model. Check your [LLM Gateway dashboard](https://llmgateway.io/dashboard) to confirm the request appears in your usage logs.

</Step>
</Steps>
## Adding More Models

You can add any model from the [LLM Gateway models page](https://llmgateway.io/models) to your `models.json`. Just add entries to the `models` array:

```json
{
  "providers": {
    "llmgateway": {
      "baseUrl": "https://api.llmgateway.io/v1",
      "api": "openai-completions",
      "apiKey": "llmgtwy_your_api_key_here",
      "models": [
        { "id": "gpt-5.5", "name": "GPT-5.5" },
        { "id": "gpt-5.5-mini", "name": "GPT-5.5 Mini" },
        { "id": "claude-opus-4-7", "name": "Claude Opus 4.7" },
        { "id": "claude-sonnet-4-6", "name": "Claude Sonnet 4.6" },
        { "id": "gemini-3.1-pro", "name": "Gemini 3.1 Pro" },
        { "id": "gemini-3.1-flash", "name": "Gemini 3.1 Flash" },
        { "id": "deepseek-v4", "name": "DeepSeek V4", "reasoning": true },
        { "id": "deepseek-v4-mini", "name": "DeepSeek V4 Mini", "reasoning": true }
      ]
    }
  }
}
```
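If you would rather not hand-edit the JSON, one option is a small script that appends an entry for you. This is a sketch under two assumptions: `python3` is on your PATH, and the `PI_CONFIG_FILE` variable is our own test-only override (Pi itself reads `~/.pi/agent/models.json`). If the file does not exist yet, it starts from the provider skeleton used throughout this guide.

```shell
#!/usr/bin/env sh
# Sketch: append a model entry to the llmgateway provider's models array.
# Assumes python3 is available; PI_CONFIG_FILE is a hypothetical test override.
CONFIG="${PI_CONFIG_FILE:-$HOME/.pi/agent/models.json}"
python3 - "$CONFIG" <<'EOF'
import json, os, sys

path = sys.argv[1]
try:
    with open(path) as f:
        cfg = json.load(f)
except FileNotFoundError:
    # Start from the provider skeleton used throughout this guide
    cfg = {"providers": {"llmgateway": {
        "baseUrl": "https://api.llmgateway.io/v1",
        "api": "openai-completions",
        "apiKey": "llmgtwy_your_api_key_here",
        "models": []}}}

# Swap in any id from the LLM Gateway models page
entry = {"id": "gpt-5.5-mini", "name": "GPT-5.5 Mini"}

models = cfg["providers"]["llmgateway"]["models"]
if not any(m["id"] == entry["id"] for m in models):
    models.append(entry)

parent = os.path.dirname(path)
if parent:
    os.makedirs(parent, exist_ok=True)
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
print(f"{path}: {len(models)} model(s)")
EOF
```

The duplicate check keeps the script safe to re-run; remember that model IDs are case-sensitive, so copy them exactly from the models page.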
## Using Environment Variables for the API Key

Instead of hardcoding your key, you can reference an environment variable:

```json
{
  "providers": {
    "llmgateway": {
      "baseUrl": "https://api.llmgateway.io/v1",
      "api": "openai-completions",
      "apiKey": "LLM_GATEWAY_API_KEY",
      "models": [{ "id": "gpt-5.5", "name": "GPT-5.5" }]
    }
  }
}
```

Then set the variable in your shell profile:

```bash
export LLM_GATEWAY_API_KEY=llmgtwy_your_api_key_here
```
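A quick sanity check before launching `pi` confirms the variable is actually exported in your current shell; the echo messages below are our own illustration, not Pi output.

```shell
# Sanity check: confirm the key is exported in the current shell
export LLM_GATEWAY_API_KEY=llmgtwy_your_api_key_here
if [ -n "$LLM_GATEWAY_API_KEY" ]; then
  echo "LLM_GATEWAY_API_KEY is set"
else
  echo "LLM_GATEWAY_API_KEY is missing" >&2
fi
```

If the variable shows up as missing in a fresh terminal, make sure the `export` line is in the profile file your shell actually sources (for example `~/.bashrc` or `~/.zshrc`).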
## Troubleshooting

### Authentication Errors

- Verify your API key is correct in `~/.pi/agent/models.json`
- Check that the base URL is set to `https://api.llmgateway.io/v1`
- Ensure your LLM Gateway account has sufficient credits

### Model Not Found

- Verify the model ID exists on the [models page](https://llmgateway.io/models)
- Model IDs are case-sensitive — copy them exactly as shown

### Connection Issues

- Check your internet connection
- Ensure `api` is set to `"openai-completions"` (not `"openai-responses"`)
- Monitor your usage in the LLM Gateway dashboard
<Callout type="info">
  Need help? Join our [Discord community](https://llmgateway.io/discord) for
  support and troubleshooting assistance.
</Callout>

## Benefits of Using LLM Gateway with Pi

- **Any Model**: Use GPT-5.5, Claude Opus 4.7, Gemini 3.1 Pro, DeepSeek V4, or 200+ others
- **Cost Tracking**: Every Pi request appears in your dashboard with token counts and costs
- **Caching**: Repeated requests hit cache automatically, saving money
- **One Key**: Manage all providers through a single API key
- **No Vendor Lock-in**: Switch models by changing one line in your config
---
id: pi
slug: pi
title: Pi Coding Agent Integration
description: Use any model with Pi coding agent through LLM Gateway — GPT-5.5, Gemini 3.1 Pro, Claude Opus 4.7, DeepSeek V4, and 200+ others in your terminal.
date: 2026-05-13
---

[Pi](https://pi.dev) is a minimal terminal-based coding agent that gives an AI full access to read, write, edit, and run shell commands in your project. By pointing Pi at LLM Gateway, you can use any of our 200+ models with full cost tracking and caching.
## Quick Start

Configure Pi to use LLM Gateway by editing `~/.pi/agent/models.json`:

```json
{
  "providers": {
    "llmgateway": {
      "baseUrl": "https://api.llmgateway.io/v1",
      "api": "openai-completions",
      "apiKey": "llmgtwy_your_api_key_here",
      "models": [
        { "id": "gpt-5.5", "name": "GPT-5.5" },
        { "id": "claude-opus-4-7", "name": "Claude Opus 4.7" },
        { "id": "gemini-3.1-pro", "name": "Gemini 3.1 Pro" },
        { "id": "deepseek-v4", "name": "DeepSeek V4", "reasoning": true }
      ]
    }
  }
}
```

Then run `pi` in any project directory and type `/model` to select your LLM Gateway model.
## Setup Steps

1. **Get Your API Key** — Log in to your [LLM Gateway dashboard](https://llmgateway.io/dashboard) and create a new API key
2. **Edit models.json** — Add the LLM Gateway provider config shown above to `~/.pi/agent/models.json`
3. **Select Model** — Run `pi`, type `/model`, and pick your model
4. **Start Coding** — All requests route through LLM Gateway with full cost tracking
## Adding More Models

Add any model from the [models page](https://llmgateway.io/models) to the `models` array in your config:

```json
{ "id": "gpt-5.5-mini", "name": "GPT-5.5 Mini" },
{ "id": "claude-sonnet-4-6", "name": "Claude Sonnet 4.6" },
{ "id": "gemini-3.1-flash", "name": "Gemini 3.1 Flash" },
{ "id": "deepseek-v4-mini", "name": "DeepSeek V4 Mini", "reasoning": true }
```
## Using Environment Variables

Reference an env var instead of hardcoding your key:

```json
"apiKey": "LLM_GATEWAY_API_KEY"
```

```bash
export LLM_GATEWAY_API_KEY=llmgtwy_your_api_key_here
```
## Troubleshooting

- **Auth errors**: Verify API key and base URL (`https://api.llmgateway.io/v1`)
- **Model not found**: Copy model IDs exactly from the [models page](https://llmgateway.io/models)
- **Connection issues**: Ensure `api` is set to `"openai-completions"`

Need help? Join our [Discord community](https://llmgateway.io/discord).