Select any text or image and transform it with AI, scripts, shortcuts, and more.
Zero app switching.
Website · Docs · Download · Extensions
Select any text or image anywhere, press ⌥C, and run AI prompts, shell scripts, or connectors like GitHub and Linear without app switching. One hotkey, no config, everything runs locally.
No cloud. No telemetry. No accounts.
- Select text or copy an image anywhere on your Mac
- Press ⌥C (Option+C)
- Cai detects the content type and shows relevant actions
- Pick an action with arrow keys or ⌘1–9
- Hit return to finish. Result is auto-copied to your clipboard — just ⌘V to paste
Examples:
- Take a screenshot → Create GitHub issue
- Select a recipe → Ask AI: "Extract ingredients for 2 people"
- Select "serendipity" → Define, Explain, Translate, Search
- Select "Let's meet Tuesday at 3pm at Starbucks" → Create calendar event, Open in Maps
- Select an email in Mail → Reply, Summarize, Translate
→ Read the full How It Works guide
- Smart content detection — recognizes what you copied (text, image, URL, JSON, meeting, address) and shows the right actions
- Built-in AI — Apple Intelligence on macOS 26+, or in-process MLX inference on Apple Silicon. No server, no cloud, no setup
- GitHub & Linear — create issues from any selected text, with AI-generated title and body plus duplicate detection
- Custom actions — save reusable AI prompts, URL templates, and shell commands as one-click actions
- Image to Text — on-device OCR via Apple Vision framework
- Bring your own LLM — works with LM Studio, Ollama, any OpenAI-compatible server, or any model from HuggingFace mlx-community
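For the bring-your-own-LLM path, any server that speaks the OpenAI chat-completions API works. A minimal sketch of such a request, assuming LM Studio's default local port (1234); the model name and prompt are placeholders, not values from Cai:

```shell
# Assumption: LM Studio's default endpoint (Ollama's OpenAI-compatible one is :11434/v1).
BASE_URL="http://localhost:1234/v1"
BODY='{"model":"local-model","messages":[{"role":"user","content":"Summarize the selected text"}]}'

# Sanity-check that the request body is valid JSON before sending:
echo "$BODY" | python3 -c 'import json,sys; json.load(sys.stdin); print("request body OK")'

# With a server running, send it (uncomment):
# curl -s "$BASE_URL/chat/completions" -H "Content-Type: application/json" -d "$BODY"
```

Point any OpenAI-compatible client at `$BASE_URL` the same way; only the base URL and model name change between LM Studio, Ollama, and hosted providers.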
Also includes:
- Custom output destinations (Mail, Notes, webhooks, AppleScript)
- Follow-up questions
- Context Snippets (pass per-app context to the LLM)
- Clipboard history (last 100, search and pin)
- Keyboard-first (arrow keys, ⌘1–9)
- Community extensions
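Shell-command actions like the ones above are easy to reason about. A hypothetical example, assuming the selected text arrives on the command's stdin (see the Custom Actions docs for the actual contract):

```shell
# Hypothetical custom shell action: word-count the current selection.
# Assumption (not stated in this README): the selection is piped to stdin.
count_words() { wc -w | tr -d '[:space:]'; }

echo "Select any text or image and transform it" | count_words   # prints 8
```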
→ See all features in the docs
brew tap cai-layer/cai && brew install --cask cai

Or install manually:
- Download the .dmg from the latest release
- Open the DMG and drag Cai.app to your Applications folder
- Open the app and grant Accessibility permission when prompted
- On macOS 26+, Cai uses Apple Intelligence automatically. Otherwise, the built-in MLX model downloads on first launch — or skip if you already use LM Studio / Ollama
→ Full installation guide · LLM setup
git clone https://github.com/cai-layer/cai.git
cd cai/Cai
open Cai.xcodeproj

In Xcode: select the Cai scheme and My Mac as destination, then Product → Run (⌘R).
Note: The app requires Accessibility permission and runs without App Sandbox (required for global hotkey and CGEvent posting).
→ Check the full changelog
Full documentation is at getcai.app/docs:
- How It Works — content detection, smart actions, follow-ups
- Keyboard Shortcuts — every key and what it does
- LLM Setup — Apple Intelligence, MLX, LM Studio, Ollama, cloud providers
- Choosing a Model — model picker guide and quantization explainer
- Ask AI — free-form prompts on selected text
- Custom Actions — save prompts, URLs, and shell commands
- Custom Destinations — webhooks, AppleScript, deeplinks, shell
- Connectors — GitHub and Linear integration
- Context Snippets — per-app context for smarter actions
- Community Extensions — install and create shared actions
- Troubleshooting — common issues and fixes
- macOS 14.0 (Sonoma) or later
- Apple Silicon (M1 or later) for the built-in AI engine
- 8 GB RAM minimum, 16 GB recommended for larger models
- Accessibility permission (for global hotkey ⌥C)
- SwiftUI + AppKit — native macOS, no Electron
- MLX-Swift — in-process LLM inference on Apple Silicon, no subprocess or server
- No App Sandbox — global hotkey requires CGEvent posting outside the sandbox
- MCP via ~200-line JSON-RPC client (Beta) — GitHub and Linear connectors with zero external MCP dependencies
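The connector transport is plain JSON-RPC 2.0, which is why a small hand-rolled client suffices. A sketch of the message framing (not Cai's actual code; `tools/list` is shown as an illustrative MCP-style method):

```shell
# Per the JSON-RPC 2.0 spec, a request carries: jsonrpc, id, method, params.
REQ='{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'

echo "$REQ" | python3 -c '
import json, sys
msg = json.load(sys.stdin)
assert msg["jsonrpc"] == "2.0" and "id" in msg and "method" in msg
print("valid JSON-RPC 2.0 request:", msg["method"])
'
```

Because every message is just a JSON object with those fields, a client only needs JSON encoding and request/response id matching — no external protocol library.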

