The support queue stopped being the bottleneck.
A B2B SaaS company with 40,000 users and a 4-person support team was drowning. We built a custom AI support agent wired into their product data, their Zendesk, and their docs. 60 days later: 62% of tier-1 tickets resolved without a human, first-response time down from 14 hours to under 2, and the same team finally doing the work that actually needed them.
A 4-person team, 800 tickets a week.
The company had hit a familiar inflection point: 40,000 users, growing 8% per month, with support volume growing faster still. Four agents were handling roughly 800 tickets a week between them. First-response time had crept up to 14 hours. CSAT was slipping. Leadership was staring down a hire-three-more-agents conversation and wondering if there was a better answer.
The actual pattern in the queue was revealing: about 70% of tickets fell into just ten categories — password resets, billing questions, integration setup questions, "how do I find X?" navigation help, and known bug workarounds among them. The answer was already in their docs. Users just couldn't find it, and the team was spending its days copy-pasting the same three links.
A support agent that reads tickets, user state, and docs — and resolves.
One pod (engineer + ops lead) ran a 5-week engagement. Week one: audit the queue, categorize 4 weeks of historical tickets, and decide which categories were safe to auto-resolve. Weeks two and three: build the agent. Weeks four and five: shadow mode, then phased rollout.
- Triage classifier. Every incoming ticket is classified against 14 categories (auto-resolve, route to tier-2, route to engineering, route to account management, etc.) with a confidence score. Below a threshold, the ticket flows to a human untouched.
- Resolution agent. For auto-resolve categories, the agent reads the user's actual product state (via the company's API), cross-references the internal docs and recent release notes, and drafts a specific, personalized reply with step-by-step instructions and direct links. Then it sends.
- Escalation logic. Every AI-resolved ticket has a "still need help?" link. Clicks get logged, and if the user responds, the ticket is instantly escalated with full context (original request, what the agent tried, why the user is still stuck). No cold handoffs.
- Learning loop. Every time a tier-2 agent resolves an escalated ticket, the resolution is fed back into the retrieval index. The agent gets smarter without us re-deploying.
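The triage-and-route step above can be sketched in a few lines. This is an illustrative reconstruction, not the actual implementation — the category names, the `CONFIDENCE_FLOOR` value, and the queue names are all assumptions; the real system classifies against 14 categories.

```python
# Hypothetical sketch of the triage flow: classify, check confidence,
# then either auto-resolve, route, or fall back to the human queue.
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.85  # assumed threshold; below it, a human sees the ticket untouched

# Illustrative subsets of the 14 categories
AUTO_RESOLVE = {"password_reset", "billing_faq", "integration_setup"}
ROUTE_TARGETS = {"tier2", "engineering", "account_management"}

@dataclass
class Triage:
    category: str      # one of the 14 categories
    confidence: float  # classifier confidence, 0..1

def route(triage: Triage) -> str:
    """Decide where an incoming ticket goes after classification."""
    if triage.confidence < CONFIDENCE_FLOOR:
        return "human_queue"       # low confidence: no AI involvement at all
    if triage.category in AUTO_RESOLVE:
        return "resolution_agent"  # agent drafts and sends a personalized reply
    if triage.category in ROUTE_TARGETS:
        return triage.category     # direct handoff to the right team
    return "human_queue"           # unknown category: default to a human
```

The key design choice is the asymmetric failure mode: any ambiguity — low confidence or an unrecognized category — falls through to the human queue, so the agent only acts where it is demonstrably reliable.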
Two weeks of shadow mode, then quiet cut-over.
We ran the agent in shadow mode for two weeks — it drafted responses for every ticket but didn't send. Human agents compared their real responses against the AI drafts and flagged any divergences. By the end of week two, the agreement rate was above 95% on tier-1 categories.
The cut-over was phased by category: password resets first (93% agreement), then billing FAQs, then integration setup. Within 30 days, auto-resolve categories covered 62% of ticket volume. CSAT on AI-resolved tickets was 4.6/5 — slightly higher than the human team's average, because the AI responses were more specific and arrived faster.
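The shadow-mode gate behind that phased rollout can be sketched as a per-category agreement check: enable auto-resolve only for categories where the AI draft matched the human reply often enough. The function names, log shape, and the `AGREEMENT_BAR` value are assumptions for illustration.

```python
# Illustrative sketch: compute per-category agreement between AI drafts
# and human replies during shadow mode, then pick which categories are
# safe to cut over. Not the actual implementation.
from collections import defaultdict

AGREEMENT_BAR = 0.90  # assumed per-category bar for enabling auto-resolve

def agreement_by_category(shadow_log):
    """shadow_log: iterable of (category, human_agreed: bool) pairs,
    one per ticket reviewed during shadow mode."""
    agreed = defaultdict(int)
    total = defaultdict(int)
    for category, human_agreed in shadow_log:
        total[category] += 1
        agreed[category] += human_agreed
    return {c: agreed[c] / total[c] for c in total}

def categories_to_enable(shadow_log):
    """Categories whose shadow-mode agreement clears the bar."""
    rates = agreement_by_category(shadow_log)
    return sorted(c for c, rate in rates.items() if rate >= AGREEMENT_BAR)
```

Categories that miss the bar simply stay human-handled; nothing forces a cut-over date, which is what makes a rollout like this low-risk.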
Same team, three times the leverage.
The support team kept its four people — and stopped being a support team. Two moved to customer-success roles (onboarding new enterprise accounts, which wasn't getting done). One specialized in tier-3 technical escalations alongside engineering. One became the "agent trainer," owning the feedback loop and the eval suite.
The company postponed three hires, redirected the budget to product engineering, and stopped talking about "the support problem." First-response time is now a KPI they hit, not a metric they avoid.
"We were about to hire three people to do work we didn't want anyone doing. Azul showed us a different answer — and got us there in five weeks."
Let's see if an agent fits.
Send us your ticket volume, your resolution patterns, and your current tooling. We'll tell you if an AI support agent is a 5-week win or if you'd be better off with a smarter macro library.
Start the conversation →
Book a call