diff --git a/CLAUDE.md.example b/CLAUDE.md.example new file mode 100644 index 0000000..8534274 --- /dev/null +++ b/CLAUDE.md.example @@ -0,0 +1,122 @@ +# About me + +I'm [YOUR NAME], on the CX team at **StackBlitz** (Bolt.new). [One or two sentences about your background — comfort with shell, APIs, code reading; this calibrates how Claude explains things to you.] + +# My tool stack + +- **Front** — customer emails / support conversations. API base: `https://api2.frontapp.com`, Bearer token. Front API key lives in `~/Documents/Raycast-scripts/cx-briefing/.env`. +- **Linear** — tickets, bug reports, engineering escalations. Use the Linear connector (MCP). +- **Notion** — internal documentation and KB. Use the Notion connector (MCP). Notion's built-in AI can search across Slack/Linear/internal docs. +- **Slack** — internal team messages. Use the Slack connector (MCP). +- **GitHub Codespaces + Claude** — deep-dive investigations into customer repos and Bolt code. The "CX agent" specifically = a Codespace running the `stackblitz/cx-agent` repo with the Claude CLI inside (kicked off via `gh-cx-agent.sh` once you've set up your own Codespace). +- **Claude Chrome extension** — installed. Drive the browser via the `Claude_in_Chrome` MCP. +- **Granola** — call notes / transcripts. +- **Query** — internal StackBlitz tool for support metrics. Lives at `query.new`. + +# Secrets & API keys + +All Raycast scripts source `~/Documents/Raycast-scripts/cx-briefing/.env`. Required: + +- `FRONT_API_KEY` — primary one. **For StackBlitz team members:** the shared key is stored in the team 1Password CX vault (search "Front API Key"). Personal team keys may also work if you generate them at `app.frontapp.com/settings/tools/api`. +- `ANTHROPIC_API_KEY` — only needed for the `cx-briefing` AI synthesis layer; not required for the regular CX session workflow. + +Optional: +- `LINEAR_API_KEY`, `LINEAR_TEAM_IDS` — only used by the `customer-360` Linear-mention scan. 
The MCP connector handles general Linear use during conversations. +- `SLACK_BOT_TOKEN` — same: only for `customer-360`'s Slack scan. MCP connector handles general Slack. +- `SENTRY_AUTH_TOKEN` — for the optional Sentry section in `cx-briefing`. +- `FRONT_AUTHOR_EMAIL` / `FRONT_AUTHOR_ID` — overrides for the auto-resolved `{firstname}@stackblitz.com` Front identity. + +**Stripe (if you use the promo scripts) — macOS Keychain, not `.env`.** Service `raycast-stripe-api`, account `stripe-api-key`. **For StackBlitz team members:** the shared Stripe API key is in the 1Password CX vault. To install: +```bash +security add-generic-password -s raycast-stripe-api -a stripe-api-key -w 'sk_live_...' +``` + +# Admin URLs & API endpoints + +Two admin surfaces: + +- **stackblitz.com/admin** — account-level identity. + - By ID: `https://stackblitz.com/admin/users?q%5Bid_eq%5D={id}&commit=Filter&order=id_desc` + - By email: `https://stackblitz.com/admin/users?q%5Bby_email_address%5D={url-encoded-email}&commit=Filter&order=id_desc` +- **bolt.new/admin** — Bolt-specific (rate limits, tokens, org, sites, snapshots). + - Dashboard `/admin`, Sites `/admin/sites`, Static Hosting `/admin/static-hosting`, Bolt DB `/admin/bolt-db`, Token Usage `/admin/token-usage`, Snapshots `/admin/snapshots`, Netlify Partner Accounts `/admin/netlify-partner-accounts`. + +**Bolt rate-limit & token endpoints:** +- Show: `https://bolt.new/api/rate-limits/{userId}` +- Reset monthly: `https://bolt.new/api/rate-limits/reset/{userId}/month` +- Reset all: `https://bolt.new/api/rate-limits/reset/{userId}/all` +- Org-scoped: `https://bolt.new/api/rate-limits/org/{orgId}/{userId}` + +**Front API endpoints I use most:** +- Conversation: `/conversations/{cnv_id}` (IDs prefixed: `cnv_`, contacts `crd_`) +- Contact by email: `/contacts/alt:email:{url-encoded-email}` +- Teammates lookup: `/teammates?limit=200` +- Post conversation comment (internal note): `POST /conversations/{cnv_id}/comments` with `{author_id, body}`. 
**`@username` in the body auto-parses to a teammate mention** — no need for `@[Name](tea_xxx)` markup. + +# Knowledge bases + +Two to consult, in order: + +1. **Notion** — internal KB (known issues, internal process, team decisions). First stop for "what do we know about this internally." +2. **support.bolt.new** — customer-facing help center, built on Mintlify. First stop for "what should I tell the customer" — if it's documented publicly, link to it. + +Never paste internal Notion content into a customer reply without confirming it's safe to share. + +# How I work + +**Quality gate for support drafts.** After drafting or polishing any Bolt.new customer support response, run the `bolt-qa-checker` skill before treating it as ready to send. Trigger it automatically when a draft exists — don't wait to be asked. **If the QA pass flags Empathy & Communication, run the `empathy-tone-checker` skill next for phrase-level rewrite options.** + +**Live-session policy.** [REPLACE WITH YOUR TEAM POLICY. Common pattern: dedicated live sessions reserved for Premium-tier plans (~$500+/mo); discretion may extend to higher tiers below that; never offer below the agreed minimum.] When declining a session request, frame it positively ("outside what our support team can offer at this plan level") and pivot to a substantive written diagnostic rather than a flat no. + +**Always reply in English** to customer messages — even when the customer writes in another language. The team needs to be able to follow every thread and take over if needed. Match the customer's content and tone, just keep the language English. + +**Draft style when delivering structural feedback.** Lead with concrete praise of what the customer is doing well, *then* introduce the size/scale critique cleanly separated. Phrases like "the issue is volume per turn, not quality" or "constraint here is X, not Y" land much better than mixed praise-and-critique paragraphs. 
Avoid framing labels like "Where I land:" — make the bridge from decline to diagnostic feel like one continuous thought. + +**Escalation to the CX agent (Codespace) for deeper dives.** When admin chat hits its depth ceiling (need to read project code, trace a specific generation, inspect snapshots), the next step is the CX agent in the `stackblitz/cx-agent` Codespace. To escalate from a local CX session: + +1. Stop, surface what you've found from admin chat, and ask me before triggering the deep dive. +2. On confirmation, run `bash ~/Documents/Raycast-scripts/open-cx-deepdive.sh CNV`. +3. The primer instructs the CX agent to post findings back as a Front comment on the same conversation. + +If I run `open-cx-deepdive.sh` directly from Raycast, that's the "skip admin, go straight to deep dive" path. + +# Common pointers for Bolt + Supabase questions + +- **Staging vs production environment** → point at **Supabase project branching**. Separation tooling lives on the Supabase side, not inside Bolt. +- **Prompt-size guidance for large projects** → working ceiling is ~200K tokens per prompt; batches of 5–10 steps, verify each batch before sending the next, revert to snapshot rather than fix-forward when drift appears. + +# Agent picker change (mid-May 2026) + +Bolt replaced the individual Anthropic model picker with two agent modes: **Standard** and **Max**. +- Free users on Standard: Haiku + Sonnet OR an OSS experiment. +- Paid users on Standard: Sonnet 4.6. +- Paid users on Max: Opus (don't volunteer the version number to customers; if they ask about model drops, frame as "we pick the best model for the task — if you see a performance drop, let's investigate"). +- Docs: https://support.bolt.new/building/using-bolt/agents#agents +- Customers reporting *"can't use Claude agent"* or *"the Anthropic model is gone"* are usually confused by this change. Direct them to the docs and explain Max ≈ the old high-end model. 
+ +# Check my Raycast scripts before building new automations + +I maintain a script library at `~/Documents/Raycast-scripts/`. Authoritative docs in that dir: +- `README.md` — live catalog of every script. +- `SETUP.md` — onboarding guide for teammates. +- `BOLT_FLOWS.md` — quick reference table of support flows keyed to script names (if present in this clone). + +If you're about to write a script for something, grep the catalog first — odds are something exists to extend. + +**Clipboard-driven chaining.** Raycast scripts pass the active UserID through the clipboard so they chain (admin lookup → copy ID → User Admin Actions → Reset Tokens). When you write something new in this style, keep the convention: read from clipboard, write the result back to clipboard. + +# Gotchas + +- **Browser AppleScript permission.** Scripts that read the active tab need Chrome's "Allow JavaScript from Apple Events" (Chrome Settings → Privacy & Security → Site Settings → Additional permissions). Without it they fail silently. Enable for both `stackblitz.com` and `bolt.new`. +- **ID ambiguity in `bolt.new/admin`.** Some admin pages accept both user ID and project ID as filters but default to one. Scripts default to project ID; if results look wrong, the filter param name is probably the issue. + +# Slack channels + +- **#bolt-bugs** — bug reports. Filed with the `/bug` Slack command. Form fields: **Reporter** (full name), **Description of the issue**, **Steps to reproduce the issue**, **B2C or B2B** (select), **Customer impact** (if urgent → create an incident instead, don't file a bug), **Project ID(s) and/or error message ID(s)**, **Front id/link** (optional). When you help prepare a bug, structure the content to drop straight into these fields. +- **#bolt-eng-support** — where CX asks engineering for help on specific issues. +- **#cx-team** — questions or new findings shared with the CX team. 
+- **#cx-core** — private CX-team channel for announcements not meant for the wider company. Treat anything destined for here as internal-confidential. + +# Team naming + +- [EXAMPLE: Dawid Matuszczyk goes by "Dex". Same person — Front shows `dawid@`, Slack shows `@Dex`.] Add other name-conflicts as you encounter them so I correlate the right person across tools. diff --git a/README.md b/README.md index d724bf0..cd6fa85 100644 --- a/README.md +++ b/README.md @@ -2,6 +2,8 @@ Custom Raycast commands to speed up common support/admin flows (StackBlitz admin, Bolt rate limits, token resets) and a few utilities. +**New to this stack?** See [SETUP.md](SETUP.md) for the end-to-end CX workstation bootstrap (Front + admin chat + CX agent escalation, plus the recommended `~/.claude/CLAUDE.md` calibration). + ## Prerequisites - macOS with Raycast installed diff --git a/SETUP.md b/SETUP.md new file mode 100644 index 0000000..4b8fcd3 --- /dev/null +++ b/SETUP.md @@ -0,0 +1,194 @@ +# Setup Guide + +End-to-end bootstrap so a new teammate can stand up the CX automation workstation. + +## What this stack does + +A set of Raycast-triggered scripts that automate the CX engineer workflow at StackBlitz/Bolt.new. Paste a Front URL or `cnv_id`, and a Claude Code session opens that pulls customer context from Front, queries the bolt admin chat, applies your CX policy from `~/.claude/CLAUDE.md`, drafts a reply, runs QA via the `bolt-qa-checker` skill, and posts as a Front comment for review. 
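Every flow keys on the Front conversation ID. The scripts accept either a full Front URL or a bare `cnv_` id and pull it out with the same pattern match; a minimal standalone sketch (the URL is a made-up example):

```shell
# Pull a Front conversation id (cnv_...) out of whatever was pasted.
input="https://app.frontapp.com/open/cnv_abc123"   # hypothetical example URL
cnv=$(printf '%s' "$input" | grep -oE 'cnv_[a-zA-Z0-9]+' | head -1)
echo "$cnv"   # → cnv_abc123
```

A bare `cnv_abc123` works too, since the pattern matches it directly.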
+ +High-level scripts in this collection: + +- **customer-360/** — per-customer dashboard aggregating Front + admin +- **front-to-admin-chat/** — chain a Front conversation into the bolt admin chat with a prefilled query +- **open-cx-session.sh** — spawn a fresh Claude session per ticket in Warp / Terminal.app / Hyper +- **open-cx-deepdive.sh** — escalate to the CX agent in the `stackblitz/cx-agent` Codespace +- **install-skills.sh** — one-time copy of the team CX skills to `~/.claude/skills/` +- Plus the existing bolt admin / Front helpers / token-reset utilities (see [README.md](README.md)) + +## Prereqs + +- macOS (these scripts use AppleScript / osascript) +- [Raycast](https://raycast.com) installed +- **Claude Code CLI** (`npm install -g @anthropic-ai/claude-code` — _not_ Claude Desktop; that's the chat app and isn't required for these scripts) +- [`gh` CLI](https://cli.github.com) installed and authenticated (`gh auth login`) +- `node` 18+ and `jq` (`brew install node jq`) +- A Chromium browser (Chrome, Brave, Edge, Arc, Chromium, Dia) + +### Service access required + +You'll need to be an active CX team member with access to: + +- **Front** — for the API token and the inbox you'll be working from +- **stackblitz.com/admin** and **bolt.new/admin** — CX team permissions +- **Notion, Slack, Linear** — accessed via Claude's connectors (MCP), not API keys (see "Configure Claude connectors" below) +- **GitHub Codespaces** with access to `stackblitz/cx-agent` (only needed if you want the deep-dive functionality) +- **1Password** with access to the team CX vault — that's where the shared Front API key and Stripe key live + +## Bootstrap (one-time per machine) + +### 1. Clone the repo + +```bash +git clone https://github.com/stackblitz/Raycast-scripts ~/Documents/Raycast-scripts +cd ~/Documents/Raycast-scripts +``` + +The scripts default to `~/Documents/Raycast-scripts/` as the working directory. If you clone elsewhere, set `WORKDIR=/your/path` in your shell environment. + +### 2.
Install runtime dependencies + +```bash +brew install node jq gh +npm install -g @anthropic-ai/claude-code +gh auth login +``` + +If `claude` is at an unusual path (not on your default `PATH`), set `CLAUDE_BIN=/your/path/to/claude` in your shell. + +### 3. Configure secrets + +```bash +cp cx-briefing/.env.example cx-briefing/.env +``` + +Then edit `cx-briefing/.env`. The keys you actually need: + +| Key | Required? | Where to get it | +|---|---|---| +| `FRONT_API_KEY` | **Yes** | StackBlitz team: 1Password CX vault, item "Front API Key". Or generate your own at [`app.frontapp.com/settings/tools/api`](https://app.frontapp.com/settings/tools/api) with scopes: Conversations read+write, Comments write, Tags read, Inboxes read, Contacts read+write, Teammates read. | +| `ANTHROPIC_API_KEY` | Only for `cx-briefing` AI synthesis | [`console.anthropic.com`](https://console.anthropic.com) | +| `LINEAR_API_KEY`, `LINEAR_TEAM_IDS` | Optional | Only used by `customer-360`'s Linear-mention scan. The Linear connector handles general Linear use during conversations. Generate at [`linear.app/settings/api`](https://linear.app/settings/api). | +| `SLACK_BOT_TOKEN` | Optional | Only used by `customer-360`'s Slack scan. The Slack connector handles general Slack use during conversations. | +| `SENTRY_AUTH_TOKEN` | Optional | [`sentry.io/settings/account/api/auth-tokens/`](https://sentry.io/settings/account/api/auth-tokens/) with scopes `org:read`, `project:read`, `event:read` | +| `FRONT_AUTHOR_EMAIL` / `FRONT_AUTHOR_ID` | Optional | Override for the auto-resolved `{firstname}@stackblitz.com` Front identity. Only set if your Front email doesn't match that pattern. | + +**Stripe API key (optional):** if you'll use the Stripe promo-code scripts, the key lives in the macOS Keychain rather than `.env`. StackBlitz team members: find the value in the 1Password CX vault ("Stripe API Key"). Install: + +```bash +security add-generic-password -s raycast-stripe-api -a stripe-api-key -w 'sk_live_...' +``` + +### 4.
Configure Claude connectors (MCPs) + +Day-to-day CX work uses Claude's MCP connectors for Notion, Slack, and Linear — _not_ API keys in `.env`. Set these up once per machine via Claude Code's connector configuration. Each connects via OAuth. + +Required connectors: +- **Notion** — searches internal KB; also has its built-in AI for cross-search across Slack/Linear/docs +- **Slack** — reads team channels and threads +- **Linear** — reads issues for context + +Optional but useful: +- **Claude in Chrome** extension — lets Claude drive your browser +- **Front** MCP if you'd rather use it instead of the Front API directly + +See Anthropic's [connector docs](https://docs.claude.com/) for current install instructions. After connecting each, the tools become available under `mcp__<server>__<tool>` naming in your Claude Code sessions. + +### 5. Install the team CX skills + +```bash +./install-skills.sh +``` + +This copies the `bolt-qa-checker` and `empathy-tone-checker` skills from `skills/` in this repo to `~/.claude/skills/` so they're picked up by Claude Code on every session. + +- `bolt-qa-checker` — runs after any drafted reply to score Empathy, Ownership, Technical, Efficiency on a 1–5 scale and either passes the draft or returns a revised version. +- `empathy-tone-checker` — used when QA flags Empathy. Phrase-level rewrite suggestions, no scoring. + +Re-run `install-skills.sh` whenever the skills update upstream. + +### 6. Register Raycast script directory + +Raycast → Settings → Extensions → Script Commands → Add Directory → pick `~/Documents/Raycast-scripts/`. Verify scripts appear (search for "Customer 360", "Open CX Session", "Install CX Skills", etc.). + +### 7. Grant macOS permissions + +These scripts drive Chrome and your terminal via Apple Events. On first run, macOS prompts; you can also pre-grant: + +- **System Settings → Privacy & Security → Accessibility** → add **Raycast**.
+- **System Settings → Privacy & Security → Automation → Raycast** → enable for: **Terminal**, **Warp** (if you use it), **Hyper** (if you use it), your Chromium browser, and **System Events**. +- In Chrome (or your Chromium variant): **Settings → Privacy & Security → Site Settings → Additional Permissions → JavaScript from Apple Events** → enable for both `stackblitz.com` AND `bolt.new`. Without this, `customer-360` and `front-to-admin-chat` cannot read admin pages. + +### 8. Create your CX agent Codespace (optional, for deep-dive) + +If you want the deep-dive flow to work end-to-end, you need a personal Codespace on `stackblitz/cx-agent`: + +1. Go to [github.com/stackblitz/cx-agent](https://github.com/stackblitz/cx-agent) +2. Open it in a Codespace ("Code" → "Codespaces" → "Create codespace on main") +3. Wait for it to provision (first time can take a few minutes) +4. After it's up, `gh codespace list` should show your codespace; `open-cx-deepdive.sh` will find it automatically by repo. + +Once provisioned you don't need to keep it running — `open-cx-deepdive.sh` auto-starts it on demand. + +### 9. Set up your `~/.claude/CLAUDE.md` + +Copy the template and customize it: + +```bash +cp ~/Documents/Raycast-scripts/CLAUDE.md.example ~/.claude/CLAUDE.md +# Then edit ~/.claude/CLAUDE.md +``` + +The template has `[YOUR NAME]` and `[REPLACE WITH YOUR TEAM POLICY]` placeholders to fill in. Universal content (tool stack, admin URLs, agent picker context, English-only rule) is already populated. Adapt freely — this file is read on every Claude Code session start. + +### 10. 
Smoke tests + +```bash +# Front API auth check (replace with a real customer email) +node customer-360/index.js a-real-customer-email@example.com --json --no-open | head -20 + +# Front-to-admin-chat dry run (won't open the browser) +node front-to-admin-chat/index.js cnv_xxx --print + +# Open CX session (spawns Warp / Terminal / Hyper depending on CX_TERMINAL) +~/Documents/Raycast-scripts/open-cx-session.sh cnv_xxx +``` + +Pick a known-good `cnv_id` you've already resolved so the spawned session doesn't act on a live ticket. + +## Picking your terminal + +`open-cx-session.sh` supports three spawn targets via `CX_TERMINAL`: + +| Value | Notes | +|---|---| +| `Hyper` (default) | Hyper.app via Cmd+N keystroke + System Events. No Agent-mode interception. Needs Accessibility permission for the calling process (Raycast → System Events). | +| `Terminal` | macOS Terminal.app via osascript `do script`. Most reliable when Automation permission is set. Requires Automation permission for the calling process (Raycast) → Terminal. | +| `Warp` | **Experimental.** Uses Warp Launch Configurations. In current testing, Warp opens but doesn't reliably execute the configured command — falls back to an empty shell. | + +Set permanently with `export CX_TERMINAL=Terminal` (or your preference) in `~/.zshrc`. + +## Workflow overview + +1. **Front URL → CX Session.** Trigger Raycast → "Open CX Session" → paste a Front URL. A new terminal tab opens with `claude` running. The session walks the standard workflow. +2. **Need depth?** When admin chat data isn't enough, the session asks you whether to escalate to the CX agent. On confirmation it triggers `open-cx-deepdive.sh`. +3. **Parallel work.** Trigger "Open CX Session" multiple times — each gets its own terminal tab. Same memory/policy loaded for each. +4. **Memory feedback loop.** When teammates or QA flag patterns, capture them in `~/.claude/CLAUDE.md` so future sessions calibrate. Calibration accumulates without retraining. 
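The memory loop in step 4 can be as simple as appending a bullet to the memory file. A hypothetical helper sketch, not a script in this repo (the note text is illustrative, and `mktemp` stands in for `~/.claude/CLAUDE.md` so the sketch is safe to run anywhere):

```shell
# Append a calibration note so future CX sessions pick it up on start.
memory="$(mktemp)"   # stand-in for ~/.claude/CLAUDE.md in this sketch
note="QA keeps flagging generic empathy lines: cut them unless tied to the customer's project."
printf -- '- %s\n' "$note" >> "$memory"
tail -n 1 "$memory"   # confirm what future sessions will read
```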
+ +## Troubleshooting + +| Symptom | Likely fix | +|---|---| +| AppleScript "not authorized" errors | Re-check Automation permissions for Raycast (Settings → Privacy → Automation) | +| `claude` not found from spawned session | Script auto-tries Homebrew ARM/Intel paths and `command -v claude`. Set `CLAUDE_BIN=/your/path` if yours is elsewhere | +| Warp Agent Mode intercepts the bootstrap command | The default `Warp` target uses Launch Configurations (no Agent Mode). If you see interception, pull main | +| `bolt-qa-checker` skill doesn't fire | Re-run `./install-skills.sh`, then restart your Claude Code session | +| Stale browser tabs break admin lookup | Close stale `stackblitz.com/admin` tabs; the scripts open fresh tabs but can race with old ones | +| `gh-cx-agent.sh`: "no SSH server in container" | Pending: `stackblitz/cx-agent` PR to add the `ghcr.io/devcontainers/features/sshd:1` feature. Browser path (`open-cx-deepdive.sh`) doesn't need SSH | +| Front API errors | Verify your `FRONT_API_KEY` scopes (listed above) and that the token isn't expired | +| Codespace not found | First-time users need to create one on `stackblitz/cx-agent` once (step 8 above) | + +## Maintenance + +- The `CLAUDE.md.example` is a *starting point*. Real calibration happens by editing your own `~/.claude/CLAUDE.md` as you discover patterns. +- When Bolt or internal tools change (model picker swap, new admin pages, etc.), update both your CLAUDE.md and this SETUP.md. +- Skills in `skills/` are versioned in this repo. When updated upstream, re-run `./install-skills.sh` to pick them up. +- Goodwill thresholds and live-session plan-tier policy are team-level — coordinate any changes with your team lead before adjusting personal CLAUDE.md beyond defaults. 
diff --git a/gh-cx-agent.sh b/gh-cx-agent.sh index 890dc6a..69c98f2 100755 --- a/gh-cx-agent.sh +++ b/gh-cx-agent.sh @@ -41,4 +41,5 @@ fi # 3) Run claude inside the Codespace (non-interactive) # Assumes `claude` is on PATH in the Codespace and that cx-agent repo is the workspace. -gh codespace ssh -c "cd /workspaces/cx-agent && echo \"$MSG\" | claude" "$CODESPACE_NAME" +# Note: `-c` selects the codespace, the trailing positional is the remote command. +gh codespace ssh -c "$CODESPACE_NAME" "cd /workspaces/cx-agent && echo \"$MSG\" | claude" diff --git a/install-skills.sh b/install-skills.sh new file mode 100755 index 0000000..70e0634 --- /dev/null +++ b/install-skills.sh @@ -0,0 +1,42 @@ +#!/bin/bash + +# Required parameters: +# @raycast.schemaVersion 1 +# @raycast.title Install CX Skills +# @raycast.mode fullOutput + +# Optional parameters: +# @raycast.icon 🧠 +# @raycast.packageName CX + +# Documentation: +# @raycast.description Copy the team CX skills (bolt-qa-checker, empathy-tone-checker) from this repo to ~/.claude/skills/ so they're available to Claude Code on this machine. Run once after cloning or whenever the skills directory in the repo updates. +# @raycast.author Jorrit Harmamny + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)" +SKILLS_SRC="$SCRIPT_DIR/skills" +SKILLS_DST="$HOME/.claude/skills" + +if [[ ! -d "$SKILLS_SRC" ]]; then + echo "Source skills directory not found: $SKILLS_SRC" + exit 1 +fi + +mkdir -p "$SKILLS_DST" +echo "Installing skills from $SKILLS_SRC to $SKILLS_DST..." + +for skill_dir in "$SKILLS_SRC"/*/; do + # Strip the trailing slash so `cp -R` copies the directory itself, not just + # its contents. BSD `cp` (the macOS default) treats a trailing slash on the + # source as "copy the contents"; GNU `cp` ignores it. + skill_dir="${skill_dir%/}" + skill_name=$(basename "$skill_dir") + echo " - $skill_name" + rm -rf "$SKILLS_DST/$skill_name" + cp -R "$skill_dir" "$SKILLS_DST/$skill_name" +done + +echo +echo "Done. Skills installed at $SKILLS_DST."
+echo "Restart any active Claude Code sessions to pick up the new skills." diff --git a/open-cx-deepdive.sh b/open-cx-deepdive.sh new file mode 100755 index 0000000..2463927 --- /dev/null +++ b/open-cx-deepdive.sh @@ -0,0 +1,117 @@ +#!/bin/bash + +# Required parameters: +# @raycast.schemaVersion 1 +# @raycast.title Open CX Deep-Dive +# @raycast.mode silent + +# Optional parameters: +# @raycast.icon 🔬 +# @raycast.argument1 { "type": "text", "placeholder": "Front URL or cnv_id" } +# @raycast.packageName CX + +# Documentation: +# @raycast.description Skip admin chat and go straight to the CX agent (stackblitz/cx-agent Codespace) for a deep-dive on a Front conversation. Wakes the Codespace, copies a context primer to your clipboard, and opens the Codespace web URL in your browser. Paste the primer into the Claude CLI session in the Codespace to bootstrap the deep dive. +# @raycast.author Jorrit Harmamny + +set -euo pipefail + +INPUT="${1:-}" +if [[ -z "$INPUT" ]]; then + echo "Provide a Front URL or cnv_id." + exit 1 +fi + +# Extract cnv_id from URL or accept bare id +CNV=$(echo "$INPUT" | /usr/bin/grep -oE 'cnv_[a-zA-Z0-9]+' | head -1 || true) +if [[ -z "$CNV" ]]; then + echo "Could not find a cnv_id in: $INPUT" + exit 1 +fi + +# Verify gh CLI is available +if ! command -v gh >/dev/null 2>&1; then + echo "gh CLI not found in PATH. Install via: brew install gh" + exit 1 +fi + +# Fetch Front context (just enough to seed the deep-dive primer) +ENV_FILE="$HOME/Documents/Raycast-scripts/cx-briefing/.env" +if [[ ! 
-f "$ENV_FILE" ]]; then + echo "Missing env file at $ENV_FILE" + exit 1 +fi +FRONT_API_KEY=$(/usr/bin/awk -F= '/^FRONT_API_KEY=/ {gsub(/^["'"'"']|["'"'"']$/,"",$2); print $2}' "$ENV_FILE") +if [[ -z "$FRONT_API_KEY" ]]; then + echo "FRONT_API_KEY missing in $ENV_FILE" + exit 1 +fi + +# Pull conversation metadata +CONV_JSON=$(/usr/bin/curl -sS -H "Authorization: Bearer $FRONT_API_KEY" "https://api2.frontapp.com/conversations/$CNV") +EMAIL=$(echo "$CONV_JSON" | /usr/bin/python3 -c " +import json, sys +try: + data = json.load(sys.stdin) + recipient = data.get('recipient') or {} + print(recipient.get('handle') or '') +except Exception: + print('') +") +SUBJECT=$(echo "$CONV_JSON" | /usr/bin/python3 -c " +import json, sys +try: + data = json.load(sys.stdin) + print((data.get('subject') or '').strip()) +except Exception: + print('') +") + +# Locate the cx-agent Codespace +CODESPACE_NAME=$(gh codespace list --json name,repository --jq '.[] | select(.repository=="stackblitz/cx-agent") | .name' 2>/dev/null | head -n 1) +if [[ -z "$CODESPACE_NAME" ]]; then + echo "No Codespace for stackblitz/cx-agent. Create one at https://github.com/stackblitz/cx-agent first." + exit 1 +fi + +# Wake it if not running +STATE=$(gh codespace list --json name,repository,state --jq '.[] | select(.repository=="stackblitz/cx-agent") | .state' 2>/dev/null | head -n 1) +if [[ "$STATE" != "Available" ]]; then + echo "Starting Codespace $CODESPACE_NAME (current state: $STATE)..." + gh codespace start "$CODESPACE_NAME" || true +fi + +# Build the primer prompt (what the user will paste into Claude inside the Codespace) +PRIMER=$(cat </dev/null || true) +fi + +# Which terminal to spawn into. Override at invocation time, e.g. +# CX_TERMINAL=Terminal ./open-cx-session.sh cnv_xxx +# Supported values: +# Hyper → Hyper.app (default; Cmd+N keystroke; no Agent-mode interception) +# Terminal → macOS Terminal.app via osascript `do script`. 
Requires the +# calling process (Raycast, your shell) to have Automation +# permission for Terminal in System Settings → Privacy & Security +# → Automation. Without it, Terminal opens but stays blank. +# Warp → Warp.app via Launch Configurations (`~/.warp/launch_configurations/cx-session.yaml`). +# EXPERIMENTAL: Warp opens but in current testing doesn't reliably +# execute the configured command on macOS — falls back to an empty +# shell. +CX_TERMINAL="${CX_TERMINAL:-Hyper}" + +if [[ -z "$CLAUDE_BIN" ]] || [[ ! -x "$CLAUDE_BIN" ]]; then + echo "claude CLI not found. Install Claude Code (https://docs.anthropic.com/claude-code/) or set CLAUDE_BIN=/path/to/claude in your shell environment." + exit 1 +fi + +case "$CX_TERMINAL" in + Terminal) + [[ -d "/System/Applications/Utilities/Terminal.app" ]] || { echo "Terminal.app not found"; exit 1; } + ;; + Warp) + [[ -d "/Applications/Warp.app" ]] || { echo "Warp.app not found"; exit 1; } + ;; + Hyper) + [[ -d "/Applications/Hyper.app" ]] || { echo "Hyper.app not found"; exit 1; } + ;; + *) + echo "Unsupported CX_TERMINAL='$CX_TERMINAL'. Use 'Terminal', 'Warp', or 'Hyper'." + exit 1 + ;; +esac + +# Write the multi-line prompt to a temp file. The spawned Warp session reads it, +# deletes it, and feeds it to claude as the initial prompt. This avoids having +# to keystroke a huge multi-line string with all its escaping pitfalls. +PROMPT_FILE=$(/usr/bin/mktemp /tmp/cx-session-prompt.XXXXXX) +cat > "$PROMPT_FILE" < → ☑ Terminal. + # Without it, osascript silently fails and Terminal opens an empty window. + /usr/bin/osascript <. Launch configs run commands directly + # in shell — Agent Mode never sees them, so no credits consumed regardless + # of the user's Warp settings. + LAUNCH_DIR="$HOME/.warp/launch_configurations" + mkdir -p "$LAUNCH_DIR" + LAUNCH_FILE="$LAUNCH_DIR/cx-session.yaml" + + # Write the YAML config. SHELL_CMD goes into a `|` block scalar so we don't + # need to YAML-escape its quotes and $ chars. 
The 16-space indent is + # consistent with Warp's documented launch_configurations format. + { + echo "---" + echo "name: CX Session" + echo "windows:" + echo " - tabs:" + echo " - title: \"CX $CNV\"" + echo " layout:" + echo " cwd: $WORKDIR" + echo " commands:" + echo " - exec: |" + printf ' %s\n' "$SHELL_CMD" + } > "$LAUNCH_FILE" + + /usr/bin/open "warp://launch/cx-session" + ;; + Hyper) + # Hyper has no `do script` equivalent — keystroke a new window (Cmd+N) + # and type the command. Hyper has no agent-mode interception. + /usr/bin/osascript < + Quality assurance checker for Bolt.new customer support responses. This skill automatically + evaluates draft support responses against the Bolt.new four-dimension quality rubric + (Empathy & Communication, Ownership & Resolution, Technical Quality, Efficiency) and either + flags minor issues or outputs a corrected version with a summary of changes. Use this skill + every time a support response draft has been written or polished in the conversation — it is + the mandatory second pass before any response is considered ready to send. Trigger on any + conversation where a Bolt.new customer support email has just been drafted, or when the user + says "check this", "QA this", "review this", "run the checker", or similar. Also trigger + when the user explicitly asks to evaluate a response against the quality rubric or four + dimensions. If a support draft exists in the conversation, this skill should fire. +--- + +# Bolt.new Support QA Checker + +You are the quality gate for Bolt.new customer support responses. Every draft that gets written +in this conversation must pass through you before it's considered ready to send. + +## Before you begin + +Read the full quality rubric in `references/quality-rubric.md` (sibling to this file). It contains +the four scoring dimensions, critical failure conditions, the pre-send checklist, common pitfalls, +and key documentation links. Internalize all of it. 
+ +## How this skill fits into the workflow + +This skill is Step 2 of a two-step process: + +- **Step 1** (handled by the project instructions): The user pastes a customer message and/or an + agent's raw draft. Claude writes or polishes a customer-ready response. +- **Step 2** (this skill): Immediately after the draft is produced, evaluate it against the + four-dimension rubric. Based on the score, either flag minor issues or output a corrected + version with a brief changelog. + +This step should feel seamless — not like a separate review tool. After writing the draft in +Step 1, transition naturally into the QA check (e.g., a brief separator, then the evaluation). + +## Evaluation process + +Score the draft on each of the four dimensions (1-5 scale): + +1. **Empathy & Communication** — Did the response reflect this customer's specific situation? + Is empathy backed by substance? Any AI slop? +2. **Ownership & Resolution** — Was the issue driven to resolution? Bugs escalated? Workaround + provided? Loop closed? +3. **Technical Quality** — Is all information accurate? Every question answered? No misinformation? +4. **Efficiency** — Minimal back-and-forth needed? Right length for the complexity? No wasted words? + +Apply critical failure caps from the rubric: +- Disrespectful/dismissive/condescending tone or AI slop → Empathy capped at 1 +- Platform bug identified but not escalated → Ownership capped at 3.5 + +Run the 7-item pre-send checklist silently (do not show it to the user unless something fails). + +## Output behavior — two paths + +### Path A: Score 4+ on all dimensions (draft is good) + +Output a brief section after the draft: + +``` +--- +**QA Check: PASS** (Empathy: X | Ownership: X | Technical: X | Efficiency: X) + +Minor flags: +- [flag 1, if any] +- [flag 2, if any] + +Draft is ready to send. +``` + +If there are zero flags, just show the scores and "Draft is ready to send." Keep it tight. 
+ +### Path B: Any dimension scores below 4 (draft needs correction) + +Output a corrected version with a brief changelog: + +``` +--- +**QA Check: Revised** (Empathy: X→X | Ownership: X→X | Technical: X→X | Efficiency: X→X) + +What changed: +- [concise description of change 1] +- [concise description of change 2] +- [etc.] + +**Corrected response:** + +[full corrected response, ready to send] +``` + +The changelog should be brief — one line per change, focused on WHAT changed and WHY. +The corrected response should be complete and ready to copy-paste. It must follow all the +formatting rules from the project instructions (no emojis, no em dashes, no sign-offs, etc.). + +## Scoring guidance + +Be calibrated and honest: +- **5** — Excellent. No meaningful improvements. Ready to send. +- **4** — Good. One or two minor things that could be better but won't hurt the customer experience. +- **3** — Adequate. Clear gaps — missed a question, generic empathy, no next steps. +- **2** — Below standard. Multiple issues that would noticeably hurt the experience. +- **1** — Critical failure. Triggers a rubric cap condition. + +## What to actively check for + +These are the most common issues (from the rubric's "Common Pitfalls"): + +- **Unnecessary fear about data loss** — If the customer asked about downgrading or canceling, + did the response lead with reassurance that projects are safe? Or did it frame optional actions + as required steps? +- **Generic empathy / AI slop** — Could the empathy lines be copy-pasted onto any ticket? If + you remove them, is there still a real answer underneath? +- **Answering only the headline question** — Did the customer raise multiple questions or concerns? + Were ALL of them addressed, even briefly? +- **Workaround without escalation** — If a platform bug was involved, was escalation confirmed? + A workaround alone is not enough. 
+- **Missing or generic closing line** — Does the response end with a warm, situation-specific + line that references what the customer is working on? +- **No clear next steps** — Does the customer know exactly what happens next? + +## Compatibility with project instructions + +The project instructions handle: +- Tone and style (friendly, professional, no emojis, no em dashes, no sign-offs) +- Audience calibration (non-technical default, adaptive if noted) +- Resolution and goodwill policy +- Security (no internal links in customer-facing emails) +- Formatting standards + +The QA skill should NOT re-evaluate things the project instructions already enforce (like grammar +or formatting). Focus the QA check on the four substantive dimensions: empathy quality, ownership, +technical accuracy, and efficiency. If a formatting issue slipped through (e.g., an emoji or em +dash), catch it — but that's a bonus, not the primary focus. + +## Important rules + +- Never inflate scores. Honest, calibrated feedback makes the process trustworthy. +- Ground feedback in specifics — reference the actual lines or sections in the draft. +- If the user only provided a draft with no customer message, note that completeness and + technical accuracy can't be fully assessed, but still evaluate what you can. +- If a pre-send checklist item fails, include it in the changelog / flags. Don't hide failures. +- The corrected response (Path B) must be fully self-contained and ready to copy-paste — + not a list of suggested edits. diff --git a/skills/bolt-qa-checker/references/quality-rubric.md b/skills/bolt-qa-checker/references/quality-rubric.md new file mode 100644 index 0000000..e41ffb3 --- /dev/null +++ b/skills/bolt-qa-checker/references/quality-rubric.md @@ -0,0 +1,100 @@ +# Bolt.new Support Quality Reference + +## Evaluation Rubric - Four Dimensions + +Every response can be randomly selected for evaluation. Scores are based on these four dimensions: + +### 1. 
Empathy & Communication
+Core question: Did the customer feel heard, respected, and spoken to like a human?
+
+What good looks like:
+- Warm, personalized tone that reflects THIS customer's specific experience
+- Sincere and honest when delivering bad news - collaborate on what we CAN do
+- Empathy language is always backed by substance (diagnosis, steps, resolution)
+
+Critical failures (caps score at 1):
+- Disrespectful, dismissive, condescending, or unprofessional tone
+- "AI slop" - empathy language with no substance underneath it
+
+### 2. Ownership & Resolution
+Core question: Did you take this on, drive it properly, and see it through?
+
+What good looks like:
+- Research and attempt to replicate issues before responding
+- Provide step-by-step instructions or link to the relevant help center article
+- If a platform bug is involved, escalate (Linear, #bolt-bugs, etc.) AND provide a workaround
+- Close the loop: communicate next steps, follow up, leave no loose threads
+- If a previous response was wrong, own it and correct it
+
+Critical failures (caps score at 3.5):
+- Platform bug identified but not escalated - workaround alone is not enough
+- No internal note documenting the escalation threads (Slack, Linear links)
+
+### 3. Technical Quality
+Core question: Were you right, and did you address everything?
+
+What good looks like:
+- Accurate diagnosis and correct information
+- All questions the customer raised are answered, not just the main one
+- No misinformation about features, billing, or capabilities
+
+Common miss:
+- Customer sends three questions, agent answers the main one but skips the other two
+
+### 4. Efficiency
+Core question: Did you resolve this without wasting the customer's time?
+ +What good looks like: +- Minimal back-and-forth; ask the right clarifying questions upfront +- Right actions, minimal wasted motion +- Could NOT have been resolved in fewer messages without sacrificing quality + +--- + +## Pre-Send Checklist + +Run this silently before every response: + +1. SUBSTANCE - Remove all empathy language. Is there still a real answer? If not, rewrite. +2. COMPLETENESS - Has every question or concern the customer raised been addressed? +3. ESCALATION - If a bug is involved, has escalation been confirmed or flagged? +4. NEXT STEPS - Does the customer know exactly what happens next? +5. LOOSE THREADS - Are there open promises, unanswered questions, or missing follow-ups? +6. REASSURANCE - If account changes are discussed, did we lead with what stays safe? +7. CORRECTION - If a previous reply was wrong, did we acknowledge and fix it? + +--- + +## Common Pitfalls to Avoid + +### Causing unnecessary fear about data loss +When a customer asks about downgrading or canceling a plan, always lead with reassurance that their projects are safe. Do not frame optional actions (like migrating projects) as required steps. State clearly what stays the same before explaining what changes. + +### Generic empathy that applies to any ticket +Phrases like "I completely understand the frustration" or "I know this isn't ideal" are meaningless unless tied to the customer's specific situation. Replace them with references to what the customer was actually trying to do and how the issue impacted them. + +### Answering only the headline question +Customers often pack multiple questions into one message. Read the entire message, identify every question or concern, and address each one - even if briefly. + +### Sending workarounds without escalating bugs +If the issue is a platform bug, a workaround is not enough. Escalation must happen. If the agent draft doesn't mention escalation, flag it before polishing the response. 
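To make the cap mechanics concrete, here is a minimal Python sketch of the two critical-failure caps defined earlier in this rubric. The flag names are invented for illustration; the evaluator applies these rules by judgment, not code.

```python
def apply_caps(scores: dict, *, ai_slop_or_disrespect: bool = False,
               unescalated_bug: bool = False) -> dict:
    """Apply the rubric's critical-failure caps to raw dimension scores.

    Illustrative only: flag names are invented for this sketch.
    - AI slop or disrespectful tone caps Empathy at 1.
    - A platform bug identified but not escalated caps Ownership at 3.5.
    """
    capped = dict(scores)  # leave the caller's dict untouched
    if ai_slop_or_disrespect:
        capped["empathy"] = min(capped["empathy"], 1)
    if unescalated_bug:
        capped["ownership"] = min(capped["ownership"], 3.5)
    return capped

print(apply_caps({"empathy": 5, "ownership": 5}, unescalated_bug=True))
```

Note that a cap only lowers a score; a draft that already scores below the cap keeps its original score.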
+ +### Over-producing output +One well-calibrated response is better than two options the agent has to choose between. Only produce two options when there's a genuine strategic fork (e.g., push back vs. accommodate). + +### Missing the closing line +Every response should end with a warm, situation-specific line that invites the customer to come back. Avoid generic closings - reference the specific thing they're working on. + +--- + +## Key Documentation Links + +- Billing (upgrade, downgrade, cancel): https://support.bolt.new/account-and-subscription/billing +- Tokens (allocation, rollover, expiration): https://support.bolt.new/account-and-subscription/tokens +- Teams plans overview: https://support.bolt.new/account-and-subscription/team-plans +- Teams billing and invoices: https://support.bolt.new/account-and-subscription/teams/billing-invoices +- Managing Teams (members, roles, seats): https://support.bolt.new/account-and-subscription/teams/manage-team +- Teams project controls: https://support.bolt.new/account-and-subscription/teams/project-controls +- Managing projects (view, delete, duplicate, transfer): https://support.bolt.new/building/using-bolt/projects-files +- Project transfer between accounts: https://support.bolt.new/building/using-bolt/projects-files#transfer-a-project-to-another-account +- Team templates: https://support.bolt.new/account-and-subscription/teams/team-templates diff --git a/skills/empathy-tone-checker/SKILL.md b/skills/empathy-tone-checker/SKILL.md new file mode 100644 index 0000000..eceb69c --- /dev/null +++ b/skills/empathy-tone-checker/SKILL.md @@ -0,0 +1,120 @@ +--- +name: empathy-tone-checker +description: Reviews a CX agent's draft response for empathy, tone, and communication quality. 
Identifies specific sentences or phrases that don't land well — generic empathy, robotic tone, dismissive language, grammar issues, AI slop — and offers multiple rewrite options the agent can choose from (e.g., friendlier tone, just the facts, more personal). No scoring involved — just practical, phrase-level feedback with concrete alternatives. Use this skill whenever a CX agent pastes a draft response and asks for feedback on tone, empathy, communication, or how it reads. Also trigger for "check my empathy", "how does this sound", "review my tone", "is this too robotic", "does this feel genuine", "empathy check", "tone check", "can you check this before I send it", or any request to improve a support response's warmth, personalization, or clarity. +--- + +# Empathy & Tone Checker + +You are a senior CX coach reviewing an agent's draft response before they send it. Your job is to find specific sentences or phrases that don't land well and offer concrete alternatives the agent can pick from. + +You are not scoring anything. You're a second pair of eyes — catching what's off, explaining why, and giving the agent options to fix it in a way that fits their style and the situation. + +## Input + +The user will provide one of: +- A customer message + their draft response +- Just their draft response (with or without context about the issue) +- A full ticket thread for tone review + +If critical context is missing (e.g., no customer message), note how that limits your review. Ask for the customer's original message if it would meaningfully change your feedback — sometimes a phrase only reads as dismissive relative to what prompted it. 
+ +## What You're Looking For + +Read the full response and flag anything that falls into these categories: + +### Tone & Warmth Issues +- Language that feels robotic, stiff, or corporate ("We apologize for any inconvenience this may have caused") +- Phrasing that's technically polite but cold — the kind of thing that could be pasted into any ticket +- Impatient or curt language, even if unintentional +- Overly formal constructions that create distance ("Please be advised that...", "Kindly note that...") + +### Generic or Impersonal Empathy +- Empathy statements that don't reference the customer's actual situation ("I'm sorry for the issue", "I understand your frustration") +- Responses that feel templated — a name swap away from being sent to anyone +- Acknowledgments that skip over the specific thing the customer is upset about + +### Dismissive or Condescending Language +This is the most important category. Flag immediately: +- Blaming the customer for their problem ("why would you keep trying?", "you should have...") +- Framing goodwill as a personal favor ("what I can do is be nice and...") +- Implying the customer didn't read or pay attention ("as I mentioned before", "per my last message") +- Any phrasing that could make the customer feel stupid, unwelcome, or talked down to +- Sarcasm, even mild + +### AI Slop / Performative Empathy +- Over-the-top empathy phrases that sound polished but are empty ("I completely understand how frustrating this must be and I want to make sure you feel heard") +- Long paragraphs of acknowledgment with no concrete next steps +- Vague gestures: "I've added some tokens" (how many?), "I've noted your feedback" (what happens next?) 
+- Responses where removing all the empathy language leaves no substance behind + +### Candor & Accountability Gaps +- Deflecting blame to other teams, "technical issues", or the customer's own actions when the platform was at fault +- Avoiding an apology when one is clearly warranted +- Hiding limitations instead of being upfront about them (while pivoting to what *can* be done) +- Unsolicited best-practices advice bundled with an apology for a platform failure — this subtly shifts blame to the customer's workflow + +### Grammar & Clarity +- Grammatical errors, awkward phrasing, or unclear sentence structure +- Run-on sentences that bury the key information +- Ambiguous language where the customer might misunderstand what's being said or what they need to do next + +## What Good Looks Like + +Keep these principles in mind — they're what the recommendations should aim toward: + +- **Specific over generic.** Empathy should reference what actually happened to *this* customer, not be a catch-all phrase. +- **Warm but not performative.** The tone of a thoughtful colleague, not a chatbot told to sound caring. If removing the empathy language leaves nothing useful, the empathy is filler. +- **Sincere when apologizing.** Proportional to the impact. "We dropped the ball here" over "we apologize for any inconvenience." +- **Honest about limitations.** If something can't be done, say so — then pivot to what *can* be done. +- **Accountable, not deflecting.** Own mistakes. Don't blame the customer or hide behind passive voice. +- **Clear and patient.** The customer shouldn't have to re-read a sentence to understand what's being asked of them or what happens next. + +## Output Format + +For each phrase or sentence that needs attention, output: + +--- + +**Original:** +> [exact sentence or phrase from the draft] + +**Why this doesn't land:** +[Brief, plain-language explanation — one or two sentences max. Be specific about what's wrong: is it generic? cold? unclear? dismissive? 
performative?] + +**Options:** + +🟢 **Friendlier tone** — [rewritten version that's warmer and more conversational] + +🔵 **Just the facts** — [rewritten version that's clear and direct without extra warmth — good for agents who prefer a concise style or situations where brevity fits] + +🟡 **More personal** — [rewritten version that specifically references the customer's situation, their words, or their experience] + +--- + +Not every flagged phrase needs all three options. Use the ones that make sense: + +- If the issue is generic empathy, **More personal** is the most important option. +- If the issue is robotic tone, **Friendlier tone** matters most. +- If the issue is wordiness or AI slop, **Just the facts** is the priority. +- If the issue is dismissive language, all options should fix the dismissiveness — the choice is just about style. +- If the issue is grammar or clarity, you can offer a single corrected version — no need to force multiple style variants on a typo. + +Adapt the option labels when it makes sense for the specific issue. For example, if the problem is that an apology is too vague, you might offer "More specific apology" and "Brief acknowledgment" instead of the standard three. The labels are a framework, not a rigid template. + +## After the Flagged Items + +End with a brief summary: + +**What already works well:** +Call out 2-3 specific things from the response that are genuinely good — a phrase that nailed the tone, a moment of real personalization, a clear explanation. Agents need to know what to keep doing. If nothing stands out, skip this section rather than manufacturing praise. + +**Overall read:** +One or two sentences on how the response comes across as a whole. Is the general direction right? Is it close to ready or does it need significant rework? Be honest but constructive. 
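As a non-authoritative illustration, each flagged item in the output format above amounts to the following shape. The field names and example strings here are invented for the sketch, not part of the skill.

```python
from dataclasses import dataclass, field

@dataclass
class FlaggedPhrase:
    """One flagged span from a draft, mirroring the output format (names invented)."""
    original: str   # exact sentence or phrase quoted from the draft
    why: str        # one- or two-sentence plain-language explanation
    options: dict = field(default_factory=dict)  # style label -> rewrite

flag = FlaggedPhrase(
    original="We apologize for any inconvenience this may have caused.",
    why="Corporate boilerplate; it could be pasted into any ticket.",
    options={
        "Friendlier tone": "Sorry about the hiccup here; that error was on us.",
        "More personal": "Sorry your deploy failed right before your demo; that was on us.",
    },
)
```

Not every flag carries all three style options, matching the guidance above on when each option applies.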
+ +## If the Draft Is Already Good + +If the response genuinely doesn't need changes — the tone is warm, the empathy is specific, and the communication is clear — say so. You can still offer minor polish suggestions, but frame them as optional, not as problems. Don't manufacture feedback just to justify being consulted. + +## Tone of Your Review + +Write like an experienced teammate who's done this a thousand times. Direct, specific, zero corporate filler. You're not grading — you're helping them ship a better response. Respect their voice; the goal is to improve their draft, not replace it with yours.