Hiring an outreach agency or consultant is a significant commitment, typically $5,000-15,000/month, with a 3-6 month ramp before you see reliable results. The difference between a partner who understands your business and one running a generic playbook is the gap between a program that grinds along at a 2% reply rate and one that reaches a top-quartile 7-8%. That gap is not marginal: it separates a program that generates pipeline from one that burns domains and budget for three months before you pull the plug.
These ten questions are designed to separate vendors who can adapt their process to your business from those who will plug you into the same machine they use for everyone else. The questions get progressively more specific. If a vendor can answer the first five confidently but stumbles on questions six through ten, they probably have solid infrastructure but lack the strategic depth to run a program tailored to your model.
1. How would you describe our business model in outreach terms, and how does that change your approach?
What you're testing: Do they understand that different business models require different outreach architectures, or do they treat every client the same?
What a good answer sounds like: They should immediately identify the characteristics that matter for outreach: your deal size, sales cycle length, number of decision-makers involved, and how accessible your buyers are via digital channels. They should be able to articulate how those factors change the sequence length, channel mix, personalization depth, and daily volume of the program they'd build for you.
Red flag: "We use the same proven framework for all our clients. It works across industries." A framework that works the same way for a $99/month SaaS tool and a $50,000 custom manufacturing order is a framework that's wrong for at least one of them.
2. What does "conversion" mean for a company like ours, and how does that affect how you size the program?
What you're testing: Do they understand that "book a demo" isn't a universal conversion event?
What a good answer sounds like: For a SaaS company, conversion is a demo or free trial. For a service business, it's a strategy call or audit. For a project-based company, it's a scoping conversation. For an industrial sale, it's getting on the vendor shortlist. Each of these has a different conversion rate, which changes the funnel math: how many prospects you need to contact to produce a given number of outcomes.
Red flag: They quote a meeting booking rate (like "we average 2% booking rate") without asking what a "meeting" means in your context. A 2% rate is excellent for SaaS demos but meaningless if your conversion event is an RFQ submission that takes three months of relationship building.
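The funnel math a good vendor describes can be sketched by working backwards from the target outcome. A minimal sketch in Python; every stage rate below is an illustrative assumption, not a benchmark:

```python
import math

# Work backwards from a target number of conversion events to the list
# size a program needs. All rates here are illustrative assumptions.

def prospects_needed(target_conversions: int,
                     deliverability: float,
                     reply_rate: float,
                     positive_reply_rate: float,
                     conversion_rate: float) -> int:
    """Required prospects = target / product of the stage rates."""
    overall_rate = (deliverability * reply_rate
                    * positive_reply_rate * conversion_rate)
    return math.ceil(target_conversions / overall_rate)

# Example: 10 booked meetings under assumed stage rates.
print(prospects_needed(
    target_conversions=10,
    deliverability=0.95,        # share of messages landing in the inbox
    reply_rate=0.05,            # replies per delivered message
    positive_reply_rate=0.40,   # share of replies that are positive
    conversion_rate=0.50,       # positive replies that become meetings
))  # → 1053
```

The point of making the math explicit is that each stage rate becomes a number the vendor has to defend for your conversion event, not someone else's.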
3. What signals would you use to trigger outreach to our prospects, and where would you source them?
What you're testing: Can they identify the specific business events that create buying windows for your product or service, or will they just blast a static list?
What a good answer sounds like: They should name signals specific to your business. If you're a marketing agency, they should mention CMO hires, unfilled marketing job postings, funding rounds, and campaign launches. If you sell software development services, they should talk about engineering job postings open for 60+ days, product roadmap announcements, or competitors shipping features your prospect lacks. If you're in manufacturing, they should mention regulatory changes, supply chain disruptions, and new product development announcements.
Red flag: "We'll pull a list from Apollo based on your ICP and start sequencing." A list is not a signal. A list tells you who could buy. A signal tells you who might be ready to buy right now.
4. Walk me through the sequence you'd build for us. Not a generic one, but one designed for how our buyers actually make decisions.
What you're testing: Can they design a sequence that matches your business model, or will they default to a templated pattern?
What a good answer sounds like: The sequence should reflect your deal structure. A 5-touch, 20-day email-heavy sequence is appropriate for low-friction products. A 7-10 touch sequence over 30-45 days with LinkedIn as a co-primary channel makes sense for service businesses. A 12-15 touch sequence over 60-90 days with phone as the dominant follow-up channel is right for complex technical sales. They should explain why they'd choose those parameters, not just recite them.
Red flag: A one-size-fits-all sequence diagram that doesn't change based on who you sell to.
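The parameter ranges above can be expressed as explicit per-archetype blueprints rather than one shared template. A sketch, with hypothetical archetype names:

```python
# Hypothetical per-archetype sequence blueprints using the ranges
# discussed above. Archetype names and structure are illustrative.

SEQUENCE_BLUEPRINTS = {
    "low_friction_product": {
        "touches": (5, 5),
        "duration_days": (20, 20),
        "channels": ["email"],
    },
    "service_business": {
        "touches": (7, 10),
        "duration_days": (30, 45),
        "channels": ["email", "linkedin"],   # LinkedIn as co-primary
    },
    "complex_technical_sale": {
        "touches": (12, 15),
        "duration_days": (60, 90),
        "channels": ["email", "phone"],      # phone-dominant follow-up
    },
}

def blueprint_for(archetype: str) -> dict:
    """Fail loudly instead of defaulting to a one-size-fits-all sequence."""
    if archetype not in SEQUENCE_BLUEPRINTS:
        raise ValueError(f"no blueprint defined for archetype: {archetype}")
    return SEQUENCE_BLUEPRINTS[archetype]
```

A vendor who can fill in a table like this for your archetype, and explain each cell, is designing; one who can't is templating.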
5. How will you personalize at scale, and what's the minimum level of personalization you'll accept before a message goes out?
What you're testing: Do they have a real methodology for personalization, or is "personalized" just a marketing term for mail-merge?
What a good answer sounds like: They should define personalization in layers. Level 1 is basic mail-merge (name, company). Level 2 references something specific to the prospect's company or role. Level 3 reflects a recent event or publicly stated priority. They should tell you which level is appropriate for your archetype and why, and they should have guardrails against messages going out with missing or bad personalization tokens.
Red flag: "We use AI to personalize every message." AI-generated personalization without review produces the most damaging off-brand messages a program can send. The question is whether AI drafts get reviewed before sending, and who reviews them.
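A token guardrail like the one described might be sketched as follows. The `{{field}}` merge syntax, field names, and level rules are all assumptions for illustration:

```python
import re

# Block any draft that still contains an unresolved merge token or that
# falls below the minimum personalization level for the campaign.

TOKEN_PATTERN = re.compile(r"\{\{\s*\w+\s*\}\}")   # e.g. {{first_name}}

def personalization_level(fields_used: set) -> int:
    """Map the fields a draft actually uses onto the levels above."""
    if {"recent_event", "stated_priority"} & fields_used:
        return 3                                   # recent event / stated priority
    if {"company_detail", "role_detail"} & fields_used:
        return 2                                   # company- or role-specific
    if {"first_name", "company"} & fields_used:
        return 1                                   # basic mail-merge
    return 0

def ready_to_send(message: str, fields_used: set, minimum_level: int) -> bool:
    if TOKEN_PATTERN.search(message):              # unresolved token: never send
        return False
    return personalization_level(fields_used) >= minimum_level

print(ready_to_send("Hi {{first_name}}, quick question...", {"first_name"}, 1))  # → False
```

The specifics will differ by tooling; what matters is that some automated check stands between the draft and the send button.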
6. What's your domain, deliverability, and sender reputation strategy?
What you're testing: Do they handle the infrastructure professionally, or are they putting your primary domain at risk?
What a good answer sounds like: They should explain their domain strategy (dedicated secondary domains, never your primary domain), DNS setup (SPF, DKIM, DMARC; they should know what these are without prompting), warmup process (4-8 weeks, with a specific schedule), and monitoring thresholds (bounce rates below 5%, spam complaints below 0.3%). They should also explain how they isolate your sending reputation from other clients.
Red flag: Any plan that involves sending from your primary domain, or any dismissiveness around warmup timelines. Cold sending from an unwarmed domain is the fastest way to end up in spam folders permanently.
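For orientation, the setup described above is published as DNS TXT records on the secondary sending domain. The records below are illustrative placeholders (domain, selector, provider, and key values are all assumptions), not values to copy:

```
; SPF: authorize only your email service provider's servers to send
outreach-example.com.                        TXT  "v=spf1 include:_spf.example-esp.com ~all"

; DKIM: public key published under the provider's selector
selector1._domainkey.outreach-example.com.   TXT  "v=DKIM1; k=rsa; p=MIGfMA0G..."

; DMARC: start in monitoring mode (p=none), tighten once reports are clean
_dmarc.outreach-example.com.                 TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"
```

A vendor who handles infrastructure professionally should be able to walk you through each of these records for your domains without notes.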
7. How will you measure what's working, and what reporting will we get?
What you're testing: Can they give you operational visibility, or is the program a black box?
What a good answer sounds like: They should track deliverability metrics (bounce rate, spam complaints, inbox placement) in real time and have automatic pause triggers when thresholds are breached. Campaign performance should be reported weekly at minimum, with open rates, reply rates, positive reply rates, and meeting booking rates broken out by sequence, audience segment, and message variant. They should be able to explain what "good" looks like for your specific archetype, not just cite industry averages.
Red flag: Monthly summary reports with only top-line metrics. If you can't see performance by variant and segment, you can't improve the program.
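The automatic pause triggers they describe amount to a simple threshold check. A sketch using the limits from question 6 (the metric names are assumptions):

```python
# Pause a campaign when deliverability metrics breach the thresholds
# discussed in question 6: bounce rate >= 5% or spam complaints >= 0.3%.

BOUNCE_RATE_MAX = 0.05
SPAM_COMPLAINT_MAX = 0.003

def should_pause(sent: int, bounced: int, spam_complaints: int) -> bool:
    if sent == 0:
        return False                      # nothing sent yet, nothing to judge
    bounce_rate = bounced / sent
    complaint_rate = spam_complaints / sent
    return (bounce_rate >= BOUNCE_RATE_MAX
            or complaint_rate >= SPAM_COMPLAINT_MAX)

print(should_pause(sent=1000, bounced=60, spam_complaints=1))  # 6% bounce → True
```

The useful question for the vendor is not whether they have a dashboard, but whether a breach like this pauses sending automatically or waits for a human to notice.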
8. How do you handle compliance, and what's our exposure in the geographies we sell into?
What you're testing: Do they understand the legal frame, or will you discover a problem after a regulator shows up?
What a good answer sounds like: Every vendor should know CAN-SPAM basics (unsubscribe link, physical address, honest subject lines). But if you sell into Europe, they should proactively raise GDPR and the legitimate interest test, including the documentation requirements. If you're in financial services, healthcare, or legal, they should know the industry-specific restrictions without being prompted. If they propose SMS or phone outreach, they should immediately discuss TCPA consent requirements and the risks.
Red flag: "We handle all that. Don't worry about compliance." Compliance is not a box they check for you. It's a shared responsibility, and a vendor who dismisses it either doesn't understand the exposure or is willing to accept risks they haven't told you about. Fines under GDPR can reach 4% of global annual revenue or €20 million, whichever is higher. TCPA violations run $500 per message, rising to $1,500 per message for willful violations.
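To make the TCPA exposure concrete, the per-message fines compound quickly. A back-of-the-envelope sketch with an assumed volume:

```python
# Statutory TCPA damages: $500 per violating message, up to $1,500 per
# message if willful. The 2,000-message volume is an assumed example.

messages_without_consent = 2_000

low_exposure = messages_without_consent * 500
high_exposure = messages_without_consent * 1_500
print(f"${low_exposure:,} to ${high_exposure:,}")  # → $1,000,000 to $3,000,000
```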
9. What does the handoff look like when a prospect responds, and who owns what at each stage?
What you're testing: Is the program designed to integrate with how your team actually works?
What a good answer sounds like: They should describe a clear handoff protocol: who qualifies the reply, what qualifies as a "positive response," how fast the handoff happens (same day is the floor), and what information gets passed along (context from the sequence, the prospect's specific response, suggested next action). They should have a shared source of truth (CRM integration or shared tracker) so nothing falls through the cracks.
Red flag: Vague answers like "we'll route qualified leads to your team." "Qualified" needs a definition. "Route" needs a mechanism. "Your team" needs a specific person.
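One way to force those definitions is to agree on a concrete handoff record up front. A hypothetical payload; every field name and value here is illustrative:

```python
# Hypothetical handoff payload a vendor passes to your team when a
# reply is classified as positive. All fields are illustrative.

handoff = {
    "prospect": {"name": "Jane Doe", "company": "Acme Corp"},
    "classification": "positive_reply",    # per an agreed written definition
    "sequence_context": "touch 4 of 7, case-study follow-up",
    "prospect_response": "Can you send pricing for the mid tier?",
    "suggested_next_action": "reply with pricing, offer a 20-minute call",
    "owner": "jane.smith@yourco.example",  # a specific person, not "your team"
    "sla": "same business day",
}
```

If the vendor can't name what would go in each of these fields for your team, the handoff isn't designed yet.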
10. Can you show me results from a client whose business model is similar to ours? Not just the same industry, but the same deal structure.
What you're testing: Do they have real experience with your model, or are they extrapolating from SaaS wins?
What a good answer sounds like: They should be able to share specific metrics from a comparable engagement: reply rates, meeting booking rates, pipeline generated, and time to first result, all from a client with a similar deal size, sales cycle, and buyer profile. The metrics should be consistent with realistic benchmarks for your archetype, not inflated to impress. A 2% meeting booking rate for a service engagement is good. A vendor claiming 8% for the same model is either cherry-picking or misreporting.
Red flag: "We've worked with hundreds of companies across every industry." Volume of clients is not evidence of relevance. If they can't produce a case study from a business that sells the way you sell, their experience may not transfer to your program, even if they're excellent at what they do for SaaS companies.
What the Answers Tell You
A vendor who gets defensive, gives vague answers, or redirects to their technology platform instead of their strategy is telling you something important: they haven't thought about your business at the level these questions require. That doesn't make them incompetent. It may just mean they're built for a different type of client.
The best outcome from this conversation isn't finding the vendor with the smoothest answers. It's finding the one who asks you questions back, about your buyers, your sales cycle, your past outreach experience, what worked and what didn't. The vendor who's curious about your business before pitching their solution is the one most likely to build a program that actually fits.