Competitive Analysis Methodology

A repeatable, objective framework for evaluating competitors. Designed to produce insights that directly inform product decisions, positioning, and roadmap priorities — not generic SWOTs that sit in slide decks.


1. Competitor Tiers

Not all competitors are equal. We categorise them into three tiers based on proximity to our value proposition.

Tier 1: Direct Competitors

Solving strategy-to-execution for similar buyers. These are companies a prospect might evaluate alongside Zontally.

  • Workboard
  • Quantive (formerly Gtmhub)
  • Cascade
  • Perdoo

Analysis cadence: Deep review quarterly. Light monitoring monthly (product releases, pricing changes, funding rounds, key hires).

Tier 2: Adjacent Competitors

Overlap on one of our three promises but not all three. A prospect might believe one of these tools solves their problem — until they realise it doesn't connect strategy to execution.

OKR & Goal Management:

  • Lattice
  • 15Five
  • Betterworks

Employee Engagement:

  • CultureAmp
  • Officevibe
  • Peakon (Workday)

Execution & Project Management:

  • Monday.com
  • Asana
  • Smartsheet

Analysis cadence: Deep review semi-annually. Monitor for major product pivots or acquisitions.

Tier 3: The Real Competitor

What our buyer actually uses today. This isn't a company — it's a behaviour pattern.

  • PowerPoint for strategy decks
  • SharePoint for document storage
  • Excel for tracking and reporting
  • Email for status chasing
  • Meetings for accountability
  • Hope as a strategy execution methodology

This is the competitor we face in every deal. Understanding the status quo — and the pain of maintaining it — is more important than any product comparison.

Analysis cadence: Ongoing. Every customer conversation tells us about the status quo.


2. Analysis Dimensions

We evaluate every competitor across 7 dimensions, chosen because they map directly to our value proposition, product principles, and what our buyers care about.

Dimension Definitions

D1: Strategy-to-Execution Depth

What we're assessing: How completely does the product connect C-suite strategy to frontline work? Can you trace from a strategic priority through objectives, team goals, and individual work items in a single system?

Why it matters: This is our core differentiator. Most tools do one layer well (OKRs, or project management, or engagement). We connect all three. We need to know how close any competitor gets to this.

What "5" looks like: Full traceability from company strategy → objectives → key results → team goals → individual work. All in one system, navigable in both directions (top-down and bottom-up).

What "1" looks like: No concept of strategy. It's a task manager or goal tracker with no strategic context.


D2: Execution Visibility

What we're assessing: Can leaders see real-time execution status without chasing updates? Is progress data-driven or self-reported? Does it answer "where are we?" without a meeting?

Why it matters: This is the Monday Morning Dashboard test (Theme 2). Our ICP's biggest pain is assembling the execution picture from scattered sources.

What "5" looks like: Real-time executive dashboard. Data-driven status (not self-reported RAG). Drill-down from strategy to detail. Alerts on degradation. A leader opens it Monday morning and has the picture in 60 seconds.

What "1" looks like: Status is manually entered by individuals. No dashboard. To know where things stand, you have to ask someone.


D3: Employee Experience

What we're assessing: Do individual contributors see their impact? Is the product engaging for the people doing the work, or is it admin overhead that only serves management?

Why it matters: Theme 3 — "I Can See My Impact." A product that only serves leaders will struggle with adoption. If ICs view it as a reporting burden, usage dies.

What "5" looks like: ICs have a personal view showing how their work connects to strategy. The product makes them feel their contribution matters. They choose to use it, not because they're told to.

What "1" looks like: The product is for managers and admins only. ICs enter data but get nothing back. It feels like surveillance.


D4: AI & Intelligence

What we're assessing: How sophisticated is the AI capability? Is it reporting (backward-looking), insight (current-state), recommendation (forward-looking), or adaptive (learning)?

Why it matters: Theme 5 — our Digital Leadership Team. AI is our long-term moat. We need to know what competitors offer today and where they're heading.

Maturity sub-levels:

| Level | Description |
|---|---|
| L0 | No AI capability |
| L1 | Conversational — chat Q&A, basic search |
| L2 | Contextual — proactive insights, pattern detection |
| L3 | Recommendation — trade-off analysis, suggested actions |
| L4 | Adaptive — predictive, learning from execution data, evolving |

What "5" looks like: Proactive AI that predicts risk, recommends actions, learns from execution patterns, and delivers through persona-based agents (like our Digital Employees).

What "1" looks like: No AI. Maybe basic reporting or charts, but no intelligence.


D5: Time-to-Value

What we're assessing: How fast can a new customer go from signup to first meaningful insight? What's the implementation burden? Is it self-serve or does it need a consultant?

Why it matters: Principle #6 — time-to-value beats feature richness. We're pre-revenue. If our competitors require 6-week implementations, that's a gap we can exploit. If they have PLG onboarding that works in an hour, that's a threat.

What "5" looks like: Self-serve signup. Guided onboarding. First meaningful insight within hours, not weeks. Templates and sensible defaults. No consultant required.

What "1" looks like: Enterprise-only sales process. 6-12 week implementation. Requires dedicated CSM. Customer doesn't see value for months.


D6: Platform vs Feature

What we're assessing: Is the product a configurable, extensible platform or a fixed-function tool? Can customers adapt it to their way of working, or do they adapt to the tool?

Why it matters: Principle #8 — platform compounds, features don't. A platform play creates compounding value over time. A feature tool is easier to replicate and easier to replace.

What "5" looks like: Declarative data model. Configurable workflows, forms, and views. API-first. Extension framework. Customer can model their unique execution approach without custom development.

What "1" looks like: Fixed screens, fixed fields, fixed workflows. Take it or leave it. "Our way or the highway."


D7: ICP Fit

What we're assessing: Who is the product really built for? What persona buys it, uses it, and champions it? How closely does this align with our target buyer (Chiefs of Staff, Ops Directors, Transformation leads at 500-2,000 person companies)?

Why it matters: Ensures we're comparing like-for-like and spotting positioning gaps. A tool built for HR teams looks similar on a feature list but sells to a completely different buyer.

What "5" looks like: Built for the same buyer we're targeting. Same company size. Same pain point. Same buying process. Direct competition.

What "1" looks like: Completely different buyer. Different industry, company size, or functional area. Not actually competitive despite surface similarities.


3. Scoring Model

Each dimension is scored 1-5:

| Score | Label | Definition |
|---|---|---|
| 1 | Non-existent | Capability doesn't exist or is fundamentally broken |
| 2 | Basic | Exists but not a strength. Minimal functionality |
| 3 | Competent | Does the job adequately. Not differentiated |
| 4 | Strong | Genuine strength. Well-executed. Competitive advantage |
| 5 | Market-leading | Best-in-class. Hard to beat. Defines the category |

Scoring Rules

  • Evidence-based: Every score must include evidence (product screenshots, documentation, user reviews, analyst reports). No guessing.
  • Capability, not marketing: Score what the product does, not what the website says. Many competitors market capabilities they haven't built.
  • Current state: Score the product as it exists today, not the roadmap. Note roadmap direction separately.
  • From our ICP's perspective: Score based on what matters to a Chief of Staff at a 500-2,000 person company, not what matters to an enterprise CHRO or a startup founder.
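The scoring rules above are enforceable if scores are captured as structured data rather than slides. A minimal sketch of what that record could look like — the `DimensionScore` shape and field names are illustrative, not an existing schema:

```python
from dataclasses import dataclass

# The seven analysis dimensions defined in section 2.
DIMENSIONS = {"D1", "D2", "D3", "D4", "D5", "D6", "D7"}

@dataclass
class DimensionScore:
    dimension: str  # e.g. "D1"
    score: int      # 1-5, per the scoring model
    evidence: str   # screenshots, docs, reviews -- required, no guessing

    def validate(self) -> None:
        # Enforce the 1-5 scale and the evidence-based rule.
        if self.dimension not in DIMENSIONS:
            raise ValueError(f"unknown dimension {self.dimension!r}")
        if not 1 <= self.score <= 5:
            raise ValueError(f"score must be 1-5, got {self.score}")
        if not self.evidence.strip():
            raise ValueError(f"{self.dimension}: every score must include evidence")
```

A score with no evidence fails validation rather than silently entering the matrix, which keeps the "no guessing" rule mechanical instead of aspirational.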

4. Analysis Structure Per Competitor

Each competitor analysis follows this structure:

Section A: Company Profile

  • Company name, founded, HQ, funding stage and total raised
  • Estimated headcount and revenue stage
  • Target market and stated positioning
  • Recent news (funding rounds, acquisitions, leadership changes, pivots)

Section B: Product Capability Assessment

  • Score across all 7 dimensions with evidence for each
  • Radar chart visualisation of scores
  • Key strengths — what they do genuinely well (be honest and respectful)
  • Key weaknesses — where they fall short relative to our value proposition
  • Recent product direction — what are they investing in? Where are they heading?

Section C: Go-to-Market Assessment

  • Pricing model: Per-seat, per-team, platform fee, usage-based? Entry price point?
  • Sales motion: Product-led growth, sales-led, hybrid? Free tier or trial?
  • ICP and buyer persona: Who actually buys? Who champions? Who uses?
  • Content and community strategy: Thought leadership, community, events?
  • Partner ecosystem: Consultants, integrators, technology partners?

Section D: Zontally Implications

The most important section — this is where analysis becomes action.

  • Where we win head-to-head: Specific scenarios where Zontally is the better choice and why
  • Where we lose and need to accept it: Areas where the competitor is stronger and we should not try to compete (for now)
  • Where we lose and need to close the gap: Areas where the competitor is stronger and it threatens our positioning
  • Positioning guidance: Exact language for how we talk about ourselves when this competitor comes up in a conversation. One paragraph a salesperson can use.
  • Product implications: Does this analysis change any roadmap priorities or maturity targets?

5. Landscape Summary

In addition to individual competitor analyses, we maintain a Competitive Landscape Summary showing all competitors on a single comparison matrix.

Comparison Matrix

All competitors scored on all 7 dimensions in a single table. Sortable by any dimension.
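If the matrix is held as data, "sortable by any dimension" is a one-liner. A sketch — the competitor names and scores below are placeholders for illustration, not actual assessments:

```python
# Illustrative comparison matrix: placeholder scores, NOT real assessments.
matrix = {
    "Competitor A": {"D1": 4, "D2": 3, "D3": 2, "D4": 1, "D5": 2, "D6": 3, "D7": 4},
    "Competitor B": {"D1": 2, "D2": 4, "D3": 3, "D4": 2, "D5": 4, "D6": 2, "D7": 3},
    "Competitor C": {"D1": 3, "D2": 2, "D3": 4, "D4": 2, "D5": 3, "D6": 4, "D7": 2},
}

def sort_by(matrix, dimension):
    """Return (name, scores) pairs ranked high-to-low on one dimension."""
    return sorted(matrix.items(), key=lambda kv: kv[1][dimension], reverse=True)

# Strongest on strategy-to-execution depth (D1) first:
ranked = sort_by(matrix, "D1")
```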

Positioning Map

2D plot showing competitors on the two dimensions that matter most:

  • X-axis: Strategy-to-Execution Depth (D1)
  • Y-axis: AI & Intelligence (D4)

This visualises the whitespace we're targeting — the upper-right quadrant where deep strategy-execution connection meets sophisticated AI intelligence.
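The map reduces each competitor to a (D1, D4) coordinate, so quadrant placement can be computed directly. A minimal sketch; the 3.5 midpoint splitting the 1-5 scale is an assumed convention, not part of the methodology:

```python
def quadrant(d1_score, d4_score, midpoint=3.5):
    """Place a competitor in a quadrant of the positioning map.

    X-axis: D1 (Strategy-to-Execution Depth), Y-axis: D4 (AI & Intelligence).
    The 3.5 midpoint on the 1-5 scale is an assumption for illustration.
    """
    horizontal = "right" if d1_score > midpoint else "left"
    vertical = "upper" if d4_score > midpoint else "lower"
    return f"{vertical}-{horizontal}"
```

The whitespace described above is whatever lands in `"upper-right"`: deep strategy-execution connection plus sophisticated AI.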

Whitespace Analysis

Where does no competitor score above 3? These are opportunities:

  • Dimensions where the market is underserved
  • Combinations of dimensions that no single competitor delivers
  • Buyer personas that no competitor specifically targets
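The first check — dimensions where no competitor scores above 3 — falls straight out of the comparison matrix. A sketch with placeholder scores (not real assessments):

```python
DIMENSIONS = ["D1", "D2", "D3", "D4", "D5", "D6", "D7"]

# Placeholder scores for illustration only -- not real assessments.
matrix = {
    "Competitor A": {"D1": 4, "D2": 3, "D3": 2, "D4": 2, "D5": 2, "D6": 3, "D7": 4},
    "Competitor B": {"D1": 2, "D2": 4, "D3": 3, "D4": 3, "D5": 4, "D6": 2, "D7": 3},
}

def whitespace(matrix, dimensions):
    """Dimensions where no competitor scores above 3 (Competent)."""
    return [d for d in dimensions
            if max(scores[d] for scores in matrix.values()) <= 3]
```

With the placeholder data above, D3, D4, and D6 come back as underserved; on real scores the output is the list of market gaps worth a closer look.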

6. How This Feeds Product Decisions

This framework isn't academic. Every analysis must answer at least one of these questions:

| Question | How the framework answers it |
|---|---|
| "Should we build X?" | Check if competitors already do it well. If CultureAmp dominates pulse surveys at score 5, we target "good enough" (score 3), not best-in-class |
| "How do we position against Y?" | Section D of each analysis includes exact messaging guidance |
| "Where's the whitespace?" | The landscape summary shows dimensions where no competitor scores above 3 |
| "Are we falling behind?" | Monthly monitoring catches competitive moves before they become threats |
| "How good is good enough?" | Competitor scores set the benchmark for our component maturity targets |
| "What's our moat?" | Tracking D1 + D4 scores over time shows whether our differentiation is holding |

7. Process

New Competitor Analysis

  1. CPO identifies competitor for analysis (based on prospect mentions, market monitoring, or strategic review)
  2. Research phase: product trial/demo, documentation review, user reviews (G2, Gartner), analyst reports, pricing pages, investor materials
  3. Score across 7 dimensions with evidence
  4. Write Zontally Implications section
  5. Review with CEO and CCO — validate positioning guidance
  6. Publish to competitive-analysis/ in the product management repo
  7. Brief CMO on positioning implications for content and messaging

Monitoring (Ongoing)

  • Monthly: Tier 1 competitors — check for product releases, pricing changes, funding rounds, key hires
  • Quarterly: Tier 1 deep review — rescore dimensions, update implications
  • Semi-annually: Tier 2 deep review
  • Triggered: Any time a competitor is mentioned in a prospect conversation, capture and log

Sources

  • Product demos and free trials (first-hand wherever possible)
  • G2, Gartner Peer Insights, TrustRadius reviews
  • Competitor documentation, help centres, and changelogs
  • Analyst reports (Gartner, Forrester for enterprise tools)
  • LinkedIn and social media (hiring patterns reveal investment areas)
  • Competitor investor materials and blog posts
  • Customer and prospect feedback ("we also looked at X because...")

8. Templates

Competitor Analysis Document Template

# [Competitor Name] — Competitive Analysis

## Company Profile
- Founded:
- HQ:
- Funding:
- Headcount (est):
- Revenue stage:
- Target market:
- Positioning:

## Product Capability Assessment

| Dimension | Score (1-5) | Evidence |
|---|---|---|
| D1: Strategy-to-Execution Depth | | |
| D2: Execution Visibility | | |
| D3: Employee Experience | | |
| D4: AI & Intelligence | | |
| D5: Time-to-Value | | |
| D6: Platform vs Feature | | |
| D7: ICP Fit | | |

### Key Strengths

### Key Weaknesses

### Recent Product Direction

## Go-to-Market Assessment
### Pricing
### Sales Motion
### ICP & Buyer Persona
### Content & Community

## Zontally Implications
### Where We Win
### Where We Lose (Accept)
### Where We Lose (Close the Gap)
### Positioning Guidance
### Product Implications

Methodology established February 2026. Maintained by the Zontally CPO. Review cadence: annually (the methodology itself); individual analyses per the cadences above.

