From 400cd67f4764e37255970e1e1c986ef601b98743 Mon Sep 17 00:00:00 2001
From: OnChainAIIntel
Date: Fri, 17 Apr 2026 21:30:14 +0200
Subject: [PATCH] Add PQS (Prompt Quality Score) to LLMOps

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index 300bf46..b0ae120 100644
--- a/README.md
+++ b/README.md
@@ -229,6 +229,7 @@ An awesome & curated list of the best LLMOps tools for developers.
 | [Opik](https://github.com/comet-ml/opik) | Confidently evaluate, test, and ship LLM applications with a suite of observability tools to calibrate language model outputs across your dev and production lifecycle. | ![GitHub Badge](https://img.shields.io/github/stars/comet-ml/opik.svg?style=flat-square) |
 | [Parea AI](https://www.parea.ai/) | Platform and SDK for AI Engineers providing tools for LLM evaluation, observability, and a version-controlled enhanced prompt playground. | ![GitHub Badge](https://img.shields.io/github/stars/parea-ai/parea-sdk-py?style=flat-square) |
 | [Pezzo 🕹️](https://github.com/pezzolabs/pezzo) | Pezzo is the open-source LLMOps platform built for developers and teams. In just two lines of code, you can seamlessly troubleshoot your AI operations, collaborate and manage your prompts in one place, and instantly deploy changes to any environment. | ![GitHub Badge](https://img.shields.io/github/stars/pezzolabs/pezzo.svg?style=flat-square) |
+| [PQS (Prompt Quality Score)](https://github.com/OnChainAIIntel/pqs-action) | Pre-inference prompt scoring. Grades any prompt on 8 dimensions (clarity, specificity, context, constraints, output format, role definition, examples, CoT structure) using the PEEM, RAGAS, MT-Bench, G-Eval, and ROUGE frameworks. A GitHub Action gates prompts in CI pipelines; also available as an MCP server, Python SDK, and REST API. | ![GitHub Badge](https://img.shields.io/github/stars/OnChainAIIntel/pqs-action.svg?style=flat-square) |
 | [PraisonAI](https://github.com/MervinPraison/PraisonAI) | Production-ready Multi-AI Agents framework with self-reflection. Fastest agent instantiation (3.77μs), 100+ LLM support via LiteLLM, MCP integration, agentic workflows (route/parallel/loop/repeat), built-in memory, Python & JS SDKs. | ![GitHub Badge](https://img.shields.io/github/stars/MervinPraison/PraisonAI.svg?style=flat-square) |
 | [PromptDX](https://github.com/puzzlet-ai/promptdx) | A declarative, extensible, and composable approach for developing LLM prompts using Markdown and JSX. | ![GitHub Badge](https://img.shields.io/github/stars/puzzlet-ai/promptdx.svg?style=flat-square) |
 | [PromptHub](https://www.prompthub.us) | Full stack prompt management tool designed to be usable by technical and non-technical team members. Test, version, collaborate, deploy, and monitor, all from one place. | |