Helpdesk: learn clients, issues, resolutions from 4 active queues
- From: user
- Priority: high
- Folder: done
- When: 2026-05-01T03:27:05Z
This is a user directive (Tristen). Build out the helpdesk worker's KB so it can later automate ticket resolution.
## Goal (the "why")
The user wants the helpdesk worker to learn three things from real ticket history:
1. **Each active client's environment** — recurring issues, escalations, patterns specific to that client
2. **Common issues across the org** — top categories with frequency
3. **Resolution playbooks** — what worked for each common issue, with steps
End game: a future run can dispatch incoming tickets to the matching playbook and let the AI resolve the easy ones autonomously. Today is foundation work.
## Scope
**SOURCE queues** (mine for learning):
- Earney IT Support Queue — legacy queue; it still holds closed tickets that should have been migrated to General Support. Learn from them anyway.
- General Support (queue H1.1)
- Advanced Support (queue H1.2)
- Critical Support (queue H1.3)
**EXCLUDED queue** — do NOT learn from:
- Archived Tickets — the user just moved all inactive-client tickets there. Including it would pollute patterns with stale clients.
**Clients**: ACTIVE only (`Companies.isActive=true` in Autotask). Enforce this filter explicitly even though the archived-queue exclusion above already implies it.
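The active-only filter can be expressed as a query body in the Autotask REST style. A minimal sketch, assuming the standard `op`/`field`/`value` filter convention of the Autotask REST API; verify the exact shape and endpoint (`.../V1.0/Companies/query`) against your API version:

```python
import json

def active_companies_filter(max_records: int = 500) -> str:
    """Build an Autotask-REST-style query body selecting only active
    companies. Filter shape assumed from the Autotask REST query
    convention; confirm against your API version before use."""
    body = {
        "MaxRecords": max_records,
        "Filter": [
            {"op": "eq", "field": "isActive", "value": "true"},
        ],
    }
    return json.dumps(body)

# POSTing this body to the Companies query endpoint should return only
# active companies; the worker then keeps a set of their IDs and skips
# any ticket whose companyID is not in that set.
```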
## Execution plan — sequential phases, NOT a parallel fan-out
This work is heavy. Use `delegate_helpdesk(prompt, wait_seconds=15)` and process inbox notifications. Each phase fits inside the orchestrator's 60-min hard timeout. Don't dispatch the next phase until the previous one's result lands in the inbox.
### Phase 1 — Scope discovery (one delegation)
> "Look up queue IDs for: Earney IT Support Queue, General Support, Advanced Support, Critical Support. Use get_picklist_values on Tickets.queueID. For each queue, count tickets where queueID matches AND the ticket's company has isActive=true. Also count distinct active companies (companies.isActive=true). Reply with: the 4 queueIDs, ticket count per queue (active-client-only), active company count, estimated total tickets to process across all 4 queues. Under 10 lines."
Use the Phase 1 result to decide batching for Phase 2:
- ≤25 active clients → 1 client per Phase-2 delegation
- 26–60 active clients → batch 5 clients per delegation
- >60 → batch 10 per delegation
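The batching thresholds above reduce to a small lookup plus a chunker. A sketch; `phase2_batch_size` and `make_batches` are illustrative names, not existing tools:

```python
def phase2_batch_size(active_clients: int) -> int:
    """Map the Phase-1 active-client count to the Phase-2 batch size,
    per the thresholds above."""
    if active_clients <= 25:
        return 1
    if active_clients <= 60:
        return 5
    return 10

def make_batches(client_ids: list[int]) -> list[list[int]]:
    """Chunk the active-client ID list into sequential Phase-2 batches."""
    size = phase2_batch_size(len(client_ids))
    return [client_ids[i:i + size] for i in range(0, len(client_ids), size)]
```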
### Phase 2 — Per-client learning (iterative, sequential)
For each active client (or batch):
> "For companyID(s)=[X,Y,Z] (names: A, B, C), pull all tickets from queues [<the 4 queue IDs>] in the last 24 months where the ticket's companyID matches. For each ticket, read title, description, resolution field, and the final ticket note. Summarize into one file per company: /workspace/knowledge/clients/<company-slug>.md with sections — `## Environment notes` (from descriptions), `## Recurring issues` (top issue categories with counts), `## Resolution playbook` (top resolutions and what worked), `## Open patterns` (anything unresolved or repeating). Use lowercase-kebab-case slug from company name. Reply with: filenames written, total tickets read across this batch, top 3 issue categories per client. Under 8 lines."
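The slug convention in the prompt above can be pinned down with a one-line helper (a sketch; `company_slug` is a hypothetical name):

```python
import re

def company_slug(name: str) -> str:
    """Lowercase-kebab-case slug from a company name,
    e.g. 'Earney IT Support, Inc.' -> 'earney-it-support-inc'."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
```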
Run these one at a time. As each lands in inbox: `inbox_read` → silently `inbox_archive` → dispatch next batch.
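The dispatch discipline is a plain sequential loop. A sketch: `delegate_helpdesk`, `inbox_read`, and `inbox_archive` are the tool names from this brief, but their signatures here are assumed stand-ins passed in as callables:

```python
def run_phase2(batches, delegate, inbox_read, inbox_archive):
    """Strictly sequential Phase-2 driver: dispatch one batch, wait for
    its inbox notification, archive it silently, then dispatch the next.
    Never fans out in parallel."""
    summaries = []
    for batch in batches:
        delegate(f"Learn clients {batch} ...", wait_seconds=15)
        note = inbox_read()      # this batch's result landing in the inbox
        summaries.append(note)
        inbox_archive(note)      # routine work: archive without surfacing
        # only now does the loop dispatch the next batch
    return summaries
```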
### Phase 3 — Cross-client aggregation (one delegation)
> "Read all /workspace/knowledge/clients/*.md files you just wrote. Cross-reference to find: (a) top 20 common issues across the org with frequency + which clients hit them — write to /workspace/knowledge/issues/<issue-slug>.md (one file per top issue with: name, frequency, affected clients, suspected root cause, the most-common resolution); (b) top 15 resolution playbooks — write to /workspace/knowledge/resolutions/<resolution-slug>.md (one file per playbook with: applies-to-issue, numbered steps, success-rate-if-derivable, gotchas, automation-candidate yes/no). Reply with: # issues files written, # resolutions files written, top 5 issues by frequency, top 3 candidate playbooks for AI automation. Under 10 lines."
### Phase 4 — Refresh graph (one delegation)
> "Run /graphify . to refresh the helpdesk knowledge graph with the new clients/, issues/, and resolutions/ folders. Reply: nodes, links, total markdown files indexed. Under 4 lines."
## Constraints
- Orchestrator already routes worker spawns through `claude --model sonnet` (medium tier) and 60-min hard timeout — don't override.
- DO NOT parallelize Phase 2 batches. Sequential only. The earlier 12-parallel attempt killed the host.
- The platform server has only 3.3 GB RAM. eit-platform/eit-access/infosource are stopped to make room — keep them off until this finishes.
- If any Phase-2 delegation returns timeout/exhausted, split the failing batch in half and retry only the unfinished half.
- Worker should write durable .md files. Do not let it dump ticket data back to you in replies.
- ACTIVE clients only. Verify via `Companies.isActive=true`. If a ticket has a companyID whose company is inactive, skip that ticket.
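The single split-and-retry rule from the constraints above can be sketched as follows (illustrative names; `run_batch` stands in for one Phase-2 delegation and returns False on timeout/exhaustion, a signal that in practice comes from the worker's inbox reply):

```python
def retry_split(batch, run_batch):
    """One split-and-retry, no deeper recursion: if a batch fails,
    halve it and retry each half exactly once. Returns False when the
    failure must be surfaced to the user instead of retried again."""
    if run_batch(batch):
        return True
    if len(batch) == 1:
        return False  # cannot split further: surface immediately
    mid = len(batch) // 2
    ok_left = run_batch(batch[:mid])
    ok_right = run_batch(batch[mid:])
    return ok_left and ok_right
```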
## What success looks like
- `/workspace/knowledge/clients/<slug>.md` for every active client
- `/workspace/knowledge/issues/<slug>.md` × ~20
- `/workspace/knowledge/resolutions/<slug>.md` × ~15
- Refreshed graph
## Communication back to the user
- During Phase 2: silently archive each batch's inbox notification (this is routine work, no need to spam Tristen).
- ONLY surface to the user at: (a) end of Phase 1 with the scope estimate so he can sanity-check the batch plan, (b) end of Phase 4 with the final tally.
- If a phase fails hard (worker reports a blocking issue), surface immediately — don't auto-retry beyond the one split-and-retry rule above.
— Signed: deployment automation, on Tristen's behalf