feat: add MiniMax as first-class LLM provider #344
Open
octo-patch wants to merge 1 commit into pezzolabs:main from
Conversation
Add MiniMax as a fully integrated LLM provider alongside OpenAI, with support for the M2.7, M2.7-highspeed, M2.5, and M2.5-highspeed models.

Changes:
- Add MiniMax to Provider/PromptService enums and provider mappings
- Add MiniMax chat completion settings with temperature clamping (0.01-1.0)
- Add MiniMax to console provider selector (available, not Coming Soon)
- Add MiniMax to API key management UI
- Add MiniMax proxy handler/router for OpenAI-compatible API forwarding
- Add MiniMax cost calculation in server report builder
- Add PezzoMiniMax client wrapper in @pezzo/client SDK
- Add MiniMax logo asset
- Add 28 tests (13 type + 9 server + 6 proxy)
- Update README with supported LLM providers table

Signed-off-by: octopus <[email protected]>
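The temperature clamping mentioned above (0.01-1.0) could be implemented roughly as follows; this is a minimal sketch, and the helper name and placement are assumptions rather than code taken from this PR's diff.

```typescript
// Hypothetical helper illustrating the 0.01-1.0 temperature clamping
// described in the commit message; only the range comes from the PR,
// the function name and location are assumptions.
export function clampMiniMaxTemperature(temperature: number): number {
  const MIN_TEMPERATURE = 0.01;
  const MAX_TEMPERATURE = 1.0;
  return Math.min(MAX_TEMPERATURE, Math.max(MIN_TEMPERATURE, temperature));
}
```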
Summary
Adds MiniMax as a fully integrated LLM provider in Pezzo, on par with OpenAI. MiniMax offers an OpenAI-compatible API with models that have large context windows (M2.7 with a 1M-token context, M2.5 with a 204K-token context).
Changes
- libs/types: Add MiniMax to the Provider enum and MiniMaxChatCompletion to the PromptService enum, with provider details and prompt mappings
- apps/console: Add MiniMax to the provider selector (available, not Coming Soon) and to the API key management UI
- apps/proxy: Add MiniMaxV1Handler and a /minimax/v1/* router for proxying requests to https://api.minimax.io/v1, with observability and caching support
- apps/server: Add MiniMax cost calculation in the report builder with per-model pricing (M2.7, M2.7-highspeed, M2.5, M2.5-highspeed); see the sketch below
- libs/client: Add a PezzoMiniMax class wrapping the OpenAI SDK with the MiniMax base URL, temperature clamping, and full observability integration

MiniMax Models Supported

- M2.7 (1M token context)
- M2.7-highspeed
- M2.5 (204K token context)
- M2.5-highspeed
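For the per-model cost calculation mentioned in the apps/server item, a rough sketch of the shape it could take; the type and function names here are hypothetical, and the actual per-token rates used in the PR are not reproduced.

```typescript
// Sketch of a per-model cost calculation for the report builder.
// Model names come from the PR; rate values are supplied by the caller
// because the real pricing from the PR is not reproduced here.
type MiniMaxModel = "M2.7" | "M2.7-highspeed" | "M2.5" | "M2.5-highspeed";

interface ModelRate {
  promptPer1kTokens: number;
  completionPer1kTokens: number;
}

export function calculateMiniMaxCost(
  rates: Record<MiniMaxModel, ModelRate>,
  model: MiniMaxModel,
  promptTokens: number,
  completionTokens: number
): number {
  const rate = rates[model];
  return (
    (promptTokens / 1000) * rate.promptPer1kTokens +
    (completionTokens / 1000) * rate.completionPer1kTokens
  );
}
```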
Key Implementation Details
MiniMax exposes an OpenAI-compatible API (https://api.minimax.io/v1), so the PezzoMiniMax client wraps the OpenAI SDK with a custom base URL.
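A minimal sketch of that wrapping, assuming the openai Node SDK's baseURL constructor option; the real PezzoMiniMax class also adds temperature clamping and observability hooks that are omitted here, and the constructor shape is an assumption.

```typescript
// Minimal sketch of wrapping the OpenAI SDK with the MiniMax base URL.
// Only the base URL and the "wrap the OpenAI SDK" approach come from the
// PR description; everything else is illustrative.
import OpenAI from "openai";

export class PezzoMiniMax extends OpenAI {
  constructor(options: { apiKey: string }) {
    super({
      apiKey: options.apiKey,
      baseURL: "https://api.minimax.io/v1", // MiniMax's OpenAI-compatible endpoint
    });
  }
}
```

A client constructed this way can call chat.completions.create(...) the same way an OpenAI client does, which is what keeps the proxy forwarding thin.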
Test Plan

npx nx test types && npx nx test server && npx nx test proxy

19 files changed, 1218 additions