
Agent Skill: User Survey#5577

Open
angelplusultra wants to merge 13 commits into master from feat-agent-clarifying-questions

Conversation

@angelplusultra (Contributor) commented May 4, 2026

Pull Request Type

  • [x] ✨ feat (New feature)
  • [ ] 🐛 fix (Bug fix)
  • [ ] ♻️ refactor (Code refactoring without changing behavior)
  • [ ] 💄 style (UI style changes)
  • [ ] 🔨 chore (Build, CI, maintenance)
  • [ ] 📝 docs (Documentation updates)

Relevant Issues

resolves #5575
translations #5617

Docs coming after final review.

Description

Adds an opt-in agent skill that lets the LLM pause mid-turn and ask the user clarifying questions through an interactive card in chat, then resume with the answers in context. Off by default, admin-gated, websocket-only (not exposed for API/programmatic agent runs).

Why: Today, when an agent receives an ambiguous prompt, it has to guess. This feature gives the agent a structured way to ask for the missing detail instead, without the user needing to restart the turn.

What it does:

  • New ask-user agent tool that accepts a batch of typed questions (free-form input or single/multi-select choice). The agent decides when to call it; the tool descriptions and examples instruct it to batch independent questions and only ask when truly blocked.
  • Interactive card in chat: paginated when there is more than one question, simple form when there is one. Supports text/url/number/date/email/textarea inputs, single/multi-select choice with optional Other field, per-question skip, whole-survey skip, and a countdown timeout bar.
  • Admin settings (Agent Skill Settings modal): enable/disable toggle, max questions per turn (1-10, default 3), response timeout in seconds (10-600, default 120). All capped server-side.
  • Survey persistence: completed surveys are stored on the workspace_chats response JSON blob (no schema change). On reload, the survey re-renders as a read-only card above the agent reply. The Q/A transcript is also injected into the prompt history that gets fed to subsequent LLM turns, so both agent and normal chat can answer follow-ups like "what did I tell you earlier?"
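The batched-question contract in the first bullet can be sketched roughly as follows. This is an illustrative shape only: the field names (`prompt`, `type`, `options`) and the validator are assumptions, not the actual schema in ask-questions.js.

```javascript
// Illustrative sketch of a batched ask-user payload and the kind of
// per-question validation and per-turn truncation the PR describes.
// Field names here are assumptions, not the real tool schema.
const VALID_TYPES = [
  "text", "url", "number", "date", "email",
  "textarea", "select", "multiselect",
];

function normalizeQuestions(questions, maxQuestions = 3) {
  // Enforce the per-turn cap by truncating rather than rejecting,
  // then drop malformed entries.
  return questions
    .slice(0, maxQuestions)
    .filter((q) => typeof q.prompt === "string" && VALID_TYPES.includes(q.type));
}

const batch = normalizeQuestions([
  { prompt: "Which environment should I deploy to?", type: "select", options: ["staging", "prod"] },
  { prompt: "Any version constraint?", type: "text" },
  { prompt: "Target release date?", type: "date" },
  { prompt: "This one exceeds the default cap", type: "text" },
]);
console.log(batch.length); // 3: the fourth question is dropped by the cap
```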

How the wiring works:

  • Tool registration is conditional on the admin setting (server/utils/agents/defaults.js).
  • The handler validates and normalizes each question, enforces the per-turn cap by truncating rather than rejecting, and returns a numbered Q/A transcript back to the LLM as the tool result.
  • requestUserClarification (server/utils/agents/aibitat/plugins/websocket.js) sends a clarificationRequest over the socket, awaits clarificationResponse, and times out cleanly.
  • Persistence mirrors the existing _pendingCitations / _pendingOutputs pattern: the tool buffers each completed survey on the aibitat instance, and the chat-history plugin drains the buffer into the response JSON when the agent reply is saved.
  • LLM context injection happens in convertToPromptHistory (single inject point covering both agent and normal chat pathways) by appending a tagged transcript to the assistant content.
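The request/response round-trip in the third bullet can be sketched like this. The helper name and the clarificationRequest/clarificationResponse events come from the PR; the socket interface below is a stand-in for illustration, not the plugin's real API.

```javascript
// Rough sketch of the clarification round-trip: send a
// clarificationRequest over the socket, await a clarificationResponse,
// and resolve (not reject) on timeout so the agent run continues.
function requestUserClarification(socket, questions, timeoutMs = 120_000) {
  return new Promise((resolve) => {
    const timer = setTimeout(() => {
      socket.off("clarificationResponse");
      resolve({ timedOut: true, answers: [] });
    }, timeoutMs);

    socket.once("clarificationResponse", (answers) => {
      clearTimeout(timer);
      resolve({ timedOut: false, answers });
    });

    socket.send("clarificationRequest", { questions });
  });
}

// Tiny in-memory socket stand-in so the sketch runs end to end,
// simulating a user who answers immediately.
function makeMockSocket() {
  const handlers = {};
  return {
    once: (event, fn) => { handlers[event] = fn; },
    off: (event) => { delete handlers[event]; },
    send: (event) => {
      if (event === "clarificationRequest")
        handlers.clarificationResponse?.(["staging"]);
    },
  };
}

requestUserClarification(makeMockSocket(), [{ prompt: "Which env?" }], 1000)
  .then(({ timedOut, answers }) => console.log(timedOut, answers[0])); // false staging
```

Resolving instead of rejecting on timeout matches the "times out cleanly" behavior described above: the tool can report a skipped/timed-out survey to the LLM rather than aborting the turn.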

Files of note:

  • server/utils/agents/aibitat/plugins/ask-questions.js — the tool itself
  • server/utils/agents/aibitat/plugins/websocket.js — requestUserClarification helper
  • server/utils/agents/aibitat/plugins/chat-history.js — persistence buffer drain
  • server/utils/helpers/chat/responses.js — exposes survey to frontend, injects transcript into LLM prompt history
  • frontend/src/components/WorkspaceChat/ChatContainer/ChatHistory/ClarifyingQuestion/ — live interactive card
  • frontend/src/components/WorkspaceChat/ChatContainer/ChatHistory/HistoricalMessage/HistoricalClarifyingQuestions/ — read-only persisted card
  • frontend/src/pages/Admin/Agents/AgentSkillSettings/index.jsx — admin settings UI

Visuals (if applicable)

Additional Information

  • All new user-facing strings use t() with defaultValue. Locale JSON files were intentionally not edited — the project convention is that localization is handled by a script as a final pass.
  • No DB migration required. The new clarifyingQuestions field rides inside the existing response JSON column on workspace_chats, alongside sources/metrics/outputs.
  • Skipped and timed-out surveys also persist (with appropriate copy in both the rendered card and the LLM transcript), so the agent never silently loses state.
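The migration-free persistence above amounts to one extra key inside the serialized response column. A hypothetical shape (only the clarifyingQuestions key name comes from the PR; the per-entry fields are illustrative assumptions):

```javascript
// Hypothetical shape of a workspace_chats `response` JSON blob with the
// new clarifyingQuestions field alongside existing keys. The entry
// fields (question/answer/skipped) are assumptions for illustration.
const response = {
  text: "Deploying to staging as requested.",
  sources: [],
  metrics: { completion_tokens: 42 },
  clarifyingQuestions: [
    { question: "Which environment should I deploy to?", answer: "staging", skipped: false },
    { question: "Any version constraint?", answer: null, skipped: true },
  ],
};

// The column holds serialized JSON, so the survey must survive a
// stringify/parse round-trip for the read-only card to re-render.
const reloaded = JSON.parse(JSON.stringify(response));
console.log(reloaded.clarifyingQuestions.length); // 2
```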

Developer Validations

  • I ran yarn lint from the root of the repo and committed changes
  • Relevant documentation has been updated (if applicable)
  • I have tested my code functionality
  • Docker build succeeds locally

@angelplusultra linked an issue May 4, 2026 that may be closed by this pull request
@angelplusultra marked this pull request as ready for review May 6, 2026 23:24
@angelplusultra requested a review from shatfield4 May 6, 2026 23:24
@shatfield4 (Collaborator) left a comment


A few areas for improvement and a couple UI bugs to fix here. Feature works great 👍

Comment thread server/utils/agents/aibitat/plugins/ask-questions.js Outdated
Comment thread server/utils/agents/aibitat/plugins/ask-questions.js
Comment thread server/utils/agents/index.js Outdated
@shatfield4 (Collaborator) left a comment


LGTM

@angelplusultra changed the title feat: agent clarifying questions → Agent Skill May 12, 2026
@angelplusultra changed the title Agent Skill → Agent Skill: User Survey May 12, 2026


Development

Successfully merging this pull request may close these issues.

[FEAT]: User Survey

3 participants