Add PQS (Prompt Quality Score) to LLMOps#392

Open
OnChainAIIntel wants to merge 1 commit into tensorchord:main from OnChainAIIntel:add-pqs-prompt-quality-score

Conversation

@OnChainAIIntel

What this adds

PQS (Prompt Quality Score) — a pre-inference prompt scoring system, added to the LLMOps section.

Why it fits

PQS is a pre-inference LLMOps check. It sits in the production pipeline and gates prompt quality before any model call is made, saving inference costs on low-quality prompts and keeping quality regressions out of production.

Closest neighbor in this list: Promptfoo (tests prompt outputs against assertions). PQS is complementary — it scores prompt inputs on 8 structured dimensions before inference runs.

What PQS does

  • Scores any prompt on 8 dimensions: clarity, specificity, context, constraints, output format, role definition, examples, CoT structure
  • Built on PEEM, RAGAS, MT-Bench, G-Eval, and ROUGE frameworks
  • Published on GitHub Marketplace as "PQS Check"
  • Alphabetical placement in the table (under P, between Portkey and the PromptDX area)
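To make the gating idea concrete, here is a minimal sketch of what a pre-inference prompt gate over the 8 dimensions above could look like. The heuristics, function names, and threshold are illustrative assumptions for this PR discussion, not the published PQS implementation:

```python
# Hypothetical pre-inference prompt gate in the spirit of PQS.
# Each dimension is reduced to a cheap textual heuristic; the real
# scorer would be far more sophisticated. All names here are assumptions.

def score_prompt(prompt: str) -> dict:
    """Score a prompt on 8 structured dimensions, each as a boolean check."""
    low = prompt.lower()
    checks = {
        # clarity: long enough to carry a complete instruction
        "clarity": len(prompt.split()) >= 5,
        # specificity: avoids vague filler like "something"
        "specificity": "something" not in low,
        # context: supplies background material
        "context": "context:" in low,
        # constraints: states limits on the answer
        "constraints": any(w in low for w in ("must", "only", "at most")),
        # output format: names the expected shape of the answer
        "output_format": any(w in low for w in ("json", "markdown", "list", "table")),
        # role definition: assigns the model a persona
        "role": low.startswith("you are"),
        # examples: includes few-shot demonstrations
        "examples": "example" in low,
        # CoT structure: asks for step-by-step reasoning
        "cot": "step by step" in low,
    }
    score = sum(checks.values()) / len(checks)
    return {"score": score, "dimensions": checks}

def gate(prompt: str, threshold: float = 0.5) -> bool:
    """Return True only if the prompt scores well enough to send to the model."""
    return score_prompt(prompt)["score"] >= threshold
```

A prompt that fails the gate never reaches the model, which is where the inference-cost savings come from; the per-dimension booleans also give the author actionable feedback on what to fix.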

Ecosystem

MIT licensed. Happy to rework description or move to a different section if a better fit exists.

