Every week a new article claims AI will triple your sales. The reality we see working with teams is more mundane: used well, AI saves hours of mechanical work, improves call preparation and raises message quality. Used badly, it simply accelerates bad content.
The question is not whether to use AI in sales. It is where it truly makes sense and where you are adding technology just to look modern without creating value.
"If the AI output needs 50% editing, the prompt is wrong. If it needs no editing at all, you are delegating judgement that should still be yours."
The map: what to automate and what to leave alone
There is a distinction that clarifies everything: process tasks versus judgement tasks.
Process tasks are repetitive, follow a pattern, and quality depends on available information more than experience. Those are the best AI candidates.
Judgement tasks require reading ambiguous signals, managing emotion, reading between the lines, and deciding when to push and when to hold back. Those stay with you.
Process tasks worth automating:
- Account research before a first call
- Call summary and next steps
- Post-meeting follow-up email draft
- CRM enrichment (role, company, buying signals)
- Initial lead scoring against defined criteria
- Internal sales-team FAQ answers

Judgement tasks that stay with you:
- The first cold message to a prospect
- Objection handling live on a call
- Pricing and terms conversation
- The decision on whether an opportunity deserves time
- The close: any moment where the relationship is on the line
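Lead scoring against defined criteria, one of the process tasks above, can start as nothing more than a weighted rule sheet. A minimal sketch, where every criterion, weight and field name is a hypothetical placeholder you would replace with your own ICP:

```python
# Illustrative lead-scoring sketch. All criteria, weights and field names
# below are hypothetical examples, not a recommended model.

def score_lead(lead: dict) -> int:
    """Score a lead 0-100 against simple, explicit criteria."""
    score = 0
    if lead.get("employees", 0) >= 50:
        score += 30   # company size fits the target profile
    if lead.get("industry") in {"saas", "fintech"}:
        score += 25   # target industries
    if lead.get("role") in {"VP Sales", "Head of Sales", "CRO"}:
        score += 25   # reached a decision-maker
    if lead.get("recent_funding"):
        score += 20   # buying signal
    return score

lead = {"employees": 120, "industry": "saas",
        "role": "CRO", "recent_funding": True}
print(score_lead(lead))  # 100
```

The value of writing the criteria down like this is less the automation than the forced explicitness: the team has to agree on what a good lead actually looks like before any tool scores one.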
Tools that make real sense
You do not need a 12-tool stack. Three or four tools, configured well, cover 80% of the value:
For research and enrichment
Clay is one of the strongest options for consolidating account information: LinkedIn data, recent news, tech stack, funding rounds, and a contextual summary ready before a call. Initial setup takes time, but the outcome is 30 minutes of research turned into 30 seconds. At ARQ we work with Enginy on projects where automation and data enrichment need a deeper implementation.
For call summaries
Fireflies or Otter transcribe and summarise automatically. The value is not just the summary. It is that the team stops taking notes during the call and can be fully present. Connected to the CRM, the summary lands in the deal without manual effort.
For call coaching
Gong or Chorus go beyond transcription: they analyse patterns across the team's calls, identify which questions generate the most engagement, and compare the listen-to-talk ratio between rep and prospect. They add real value once you have enough call volume for patterns to be statistically meaningful — below around 20 calls a week per team, the signal is too thin.
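The listen-to-talk ratio these tools report is conceptually simple. A toy sketch, using word counts over a made-up transcript as a stand-in for the diarised audio durations a real tool would measure:

```python
# Toy version of the listen-to-talk metric. Real tools measure speaking
# time from diarised audio; word counts here are a rough stand-in.

def talk_share(transcript: list[tuple[str, str]], speaker: str) -> float:
    """Fraction of total words spoken by `speaker`."""
    spoken = sum(len(text.split()) for who, text in transcript if who == speaker)
    total = sum(len(text.split()) for _, text in transcript)
    return spoken / total if total else 0.0

call = [
    ("rep", "Thanks for taking the time today"),
    ("prospect", "Sure, we are evaluating options for the sales team right now"),
    ("rep", "What triggered that evaluation"),
    ("prospect", "Our current process breaks down past ten reps and reporting is manual"),
]
print(round(talk_share(call, "rep"), 2))  # 0.3
```

A rep talking 30% of a discovery call is usually healthier than one talking 70%, which is exactly the kind of pattern that only becomes visible with enough call volume.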
For communication drafts
ChatGPT or Claude are useful for follow-up email drafts, offer structure, or RFP responses. The prompt matters: the more specific account and conversation context you feed it, the better the draft. The salesperson reviews, adjusts tone and sends. Not the other way around.
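Feeding specific context is easier when the prompt is a template rather than something typed fresh each time. A sketch of what that could look like; the fields and wording are hypothetical, the point is that account and call details go in, and the salesperson still edits what comes out:

```python
# Illustrative prompt template for a follow-up email draft. Field names
# and instructions are examples only -- adapt them to your own process.

FOLLOW_UP_PROMPT = """\
Draft a follow-up email after a first sales call.

Account: {account}
Contact: {contact} ({role})
What they said matters most: {pain}
Agreed next step: {next_step}

Keep it under 120 words, plain tone, no buzzwords.
End by confirming the next step."""

def build_prompt(account, contact, role, pain, next_step):
    return FOLLOW_UP_PROMPT.format(
        account=account, contact=contact, role=role,
        pain=pain, next_step=next_step,
    )

print(build_prompt("Acme", "Jane Doe", "VP Sales",
                   "manual reporting", "demo next Tuesday"))
```

Pasting this into ChatGPT or Claude produces a draft anchored in the actual conversation instead of a generic template, which is most of the difference between a usable draft and one that needs rewriting from scratch.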
The adoption problem
The most common mistake when implementing AI in a sales team is doing too much, too fast, with too many tools. The result: nobody really uses them, or they use them badly, and three months later the whole thing gets written off as an experiment that failed.
The pattern that works:
- Choose one concrete use case. Not "improve sales with AI". Something specific, like "cut research time before a first call".
- Roll it out with one salesperson first. The early adopter. Measure time before and after. When they have real data, the rest of the team listens.
- Define the process, not just the tool. "Before every first call I run this prompt with this information and review the output for five minutes." Without an explicit process, the tool gets abandoned.
- Expand only when the first use case is habit. Not before. The urge to add more tools before the first ones have stuck is the main reason stacks go unused.
"AI does not turn weak salespeople into strong ones. It gives strong salespeople more time to do the work that actually differentiates them."
Where the real risk sits
The problem is not that AI is bad. It is that delegating judgement without noticing is easy.
If you use AI to generate outbound messaging and that message fails, you have two options: improve the prompt, or question whether the ICP (ideal customer profile) and value proposition are right. Most teams improve the prompt. The second analysis is the one that matters.
AI is very good at producing content that sounds reasonable. It is not good at deciding whether the strategy itself is right. That part is still yours.
We work with commercial teams to identify where AI actually makes sense in their specific process and to configure the stack without overbuilding it. No 12-tool demos, just the context of how your team works today.
Let's talk →