A repeatable, objective framework for evaluating competitors. Designed to produce insights that directly inform product decisions, positioning, and roadmap priorities — not generic SWOTs that sit in slide decks.
Not all competitors are equal. We categorise into three tiers based on proximity to our value proposition.
Solving strategy-to-execution for similar buyers. These are companies a prospect might evaluate alongside Zontally.
Analysis cadence: Deep review quarterly. Light monitoring monthly (product releases, pricing changes, funding rounds, key hires).
Overlap on one of our three promises but not all three. A prospect might believe one of these tools solves their problem — until they realise it doesn't connect strategy to execution.
OKR & Goal Management:
Employee Engagement:
Execution & Project Management:
Analysis cadence: Deep review semi-annually. Monitor for major product pivots or acquisitions.
What our buyer actually uses today. This isn't a company — it's a behaviour pattern.
This is the competitor we face in every deal. Understanding the status quo — and the pain of maintaining it — is more important than any product comparison.
Analysis cadence: Ongoing. Every customer conversation tells us about the status quo.
We evaluate every competitor across 7 dimensions, chosen because they map directly to our value proposition, our product principles, and what our buyers care about.
What we're assessing: How completely does the product connect C-suite strategy to frontline work? Can you trace from a strategic priority through objectives, team goals, and individual work items in a single system?
Why it matters: This is our core differentiator. Most tools do one layer well (OKRs, or project management, or engagement). We connect all three. We need to know how close any competitor gets to this.
What "5" looks like: Full traceability from company strategy → objectives → key results → team goals → individual work. All in one system, navigable in both directions (top-down and bottom-up).
What "1" looks like: No concept of strategy. It's a task manager or goal tracker with no strategic context.
What we're assessing: Can leaders see real-time execution status without chasing updates? Is progress data-driven or self-reported? Does it answer "where are we?" without a meeting?
Why it matters: This is the Monday Morning Dashboard test (Theme 2). Our ICP's biggest pain is assembling the execution picture from scattered sources.
What "5" looks like: Real-time executive dashboard. Data-driven status (not self-reported RAG). Drill-down from strategy to detail. Alerts on degradation. A leader opens it Monday morning and has the picture in 60 seconds.
What "1" looks like: Status is manually entered by individuals. No dashboard. To know where things stand, you have to ask someone.
What we're assessing: Do individual contributors see their impact? Is the product engaging for the people doing the work, or is it admin overhead that only serves management?
Why it matters: Theme 3 — "I Can See My Impact." A product that only serves leaders will struggle with adoption. If ICs view it as a reporting burden, usage dies.
What "5" looks like: ICs have a personal view showing how their work connects to strategy. The product makes them feel their contribution matters. They choose to use it, not because they're told to.
What "1" looks like: The product is for managers and admins only. ICs enter data but get nothing back. It feels like surveillance.
What we're assessing: How sophisticated is the AI capability? Is it reporting (backward-looking), insight (current-state), recommendation (forward-looking), or adaptive (learning)?
Why it matters: Theme 5 — our Digital Leadership Team. AI is our long-term moat. We need to know what competitors offer today and where they're heading.
Maturity sub-levels:
| Level | Description |
|---|---|
| L0 | No AI capability |
| L1 | Conversational — chat Q&A, basic search |
| L2 | Contextual — proactive insights, pattern detection |
| L3 | Recommendation — trade-off analysis, suggested actions |
| L4 | Adaptive — predictive, learning from execution data, evolving |
What "5" looks like: Proactive AI that predicts risk, recommends actions, learns from execution patterns, and delivers through persona-based agents (like our Digital Employees).
What "1" looks like: No AI. Maybe basic reporting or charts, but no intelligence.
What we're assessing: How fast can a new customer go from signup to first meaningful insight? What's the implementation burden? Is it self-serve or does it need a consultant?
Why it matters: Principle #6 — time-to-value beats feature richness. We're pre-revenue. If our competitors require 6-week implementations, that's a gap we can exploit. If they have PLG onboarding that works in an hour, that's a threat.
What "5" looks like: Self-serve signup. Guided onboarding. First meaningful insight within hours, not weeks. Templates and sensible defaults. No consultant required.
What "1" looks like: Enterprise-only sales process. 6-12 week implementation. Requires dedicated CSM. Customer doesn't see value for months.
What we're assessing: Is the product a configurable, extensible platform or a fixed-function tool? Can customers adapt it to their way of working, or do they adapt to the tool?
Why it matters: Principle #8 — platform compounds, features don't. A platform play creates compounding value over time. A feature tool is easier to replicate and easier to replace.
What "5" looks like: Declarative data model. Configurable workflows, forms, and views. API-first. Extension framework. Customer can model their unique execution approach without custom development.
What "1" looks like: Fixed screens, fixed fields, fixed workflows. Take it or leave it. "Our way or the highway."
What we're assessing: Who is the product really built for? What persona buys it, uses it, and champions it? How closely does this align with our target buyer (Chiefs of Staff, Ops Directors, Transformation leads at 500-2,000 person companies)?
Why it matters: Ensures we're comparing like-for-like and spotting positioning gaps. A tool built for HR teams looks similar on a feature list but sells to a completely different buyer.
What "5" looks like: Built for the same buyer we're targeting. Same company size. Same pain point. Same buying process. Direct competition.
What "1" looks like: Completely different buyer. Different industry, company size, or functional area. Not actually competitive despite surface similarities.
Each dimension is scored 1-5:
| Score | Label | Definition |
|---|---|---|
| 1 | Non-existent | Capability doesn't exist or is fundamentally broken |
| 2 | Basic | Exists but not a strength. Minimal functionality |
| 3 | Competent | Does the job adequately. Not differentiated |
| 4 | Strong | Genuine strength. Well-executed. Competitive advantage |
| 5 | Market-leading | Best-in-class. Hard to beat. Defines the category |
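The rubric above can be sketched as a small data model that enforces its own rules: scores stay in the 1-5 range and every score carries evidence, matching the Evidence column in the assessment template. This is an illustrative sketch, not tooling we ship; the class and field names are assumptions.

```python
from dataclasses import dataclass

# Score labels, taken directly from the rubric table above.
RUBRIC = {
    1: "Non-existent",
    2: "Basic",
    3: "Competent",
    4: "Strong",
    5: "Market-leading",
}


@dataclass
class DimensionScore:
    """One scored dimension for one competitor (names are illustrative)."""

    dimension: str  # e.g. "D4: AI & Intelligence"
    score: int      # must be 1-5 per the rubric
    evidence: str   # no score without supporting evidence

    def __post_init__(self) -> None:
        if self.score not in RUBRIC:
            raise ValueError(f"score must be 1-5, got {self.score}")
        if not self.evidence.strip():
            raise ValueError("every score needs evidence")

    @property
    def label(self) -> str:
        return RUBRIC[self.score]


# Example: a genuine strength, backed by evidence.
print(DimensionScore("D4: AI & Intelligence", 4, "Shipped agent features").label)  # -> Strong
```

Keeping evidence mandatory at the data level mirrors the framework's intent: a score without evidence is an opinion, not an analysis.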
Each competitor analysis follows this structure:
The most important section — this is where analysis becomes action.
In addition to individual competitor analyses, we maintain a Competitive Landscape Summary showing all competitors on a single comparison matrix.
All competitors scored on all 7 dimensions in a single table. Sortable by any dimension.
2D plot showing competitors on the two dimensions that matter most:
This visualises the whitespace we're targeting — the upper-right quadrant where deep strategy-execution connection meets sophisticated AI intelligence.
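Quadrant placement on that plot can be sketched without a charting library. The axes here are assumed to be D1 (strategy-to-execution depth) and D4 (AI & intelligence), consistent with the moat-tracking note later in this document; the midpoint of 3 and the quadrant labels are illustrative assumptions.

```python
def quadrant(d1_score: int, d4_score: int, midpoint: int = 3) -> str:
    """Place a competitor in one of four quadrants of the assumed D1 x D4 plot.

    Scores above the (assumed) midpoint of 3 count as strong on that axis.
    The upper-right quadrant is the whitespace the landscape summary targets.
    """
    depth = "deep" if d1_score > midpoint else "shallow"
    intel = "intelligent" if d4_score > midpoint else "static"
    return f"{depth} & {intel}"


# A competitor strong on both axes sits in the target quadrant:
print(quadrant(4, 5))  # -> deep & intelligent
# A task manager with no AI sits in the opposite corner:
print(quadrant(1, 1))  # -> shallow & static
```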
Where does no competitor score above 3? These dimensions are opportunities.
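The whitespace check is mechanical once the comparison matrix exists: keep every dimension whose best score across all competitors is 3 or below. A minimal sketch, with illustrative competitor names and scores:

```python
# Hypothetical comparison matrix: competitor -> {dimension: score}.
# Names and numbers are made up for illustration only.
scores = {
    "Competitor A": {"D1": 4, "D2": 3, "D3": 2, "D4": 2, "D5": 3, "D6": 4, "D7": 5},
    "Competitor B": {"D1": 2, "D2": 4, "D3": 5, "D4": 1, "D5": 4, "D6": 2, "D7": 3},
    "Competitor C": {"D1": 3, "D2": 2, "D3": 3, "D4": 2, "D5": 2, "D6": 3, "D7": 4},
}


def find_whitespace(matrix: dict[str, dict[str, int]], threshold: int = 3) -> list[str]:
    """Return dimensions where no competitor scores above the threshold."""
    dimensions = next(iter(matrix.values())).keys()
    return [
        d for d in dimensions
        if all(row[d] <= threshold for row in matrix.values())
    ]


print(find_whitespace(scores))  # -> ['D4']  (no one scores above 2 on AI here)
```

In this made-up matrix, D4 is the only whitespace dimension: the best score any competitor achieves is 2.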
This framework isn't academic. Every analysis must answer at least one of these questions:
| Question | How the framework answers it |
|---|---|
| "Should we build X?" | Check if competitors already do it well. If CultureAmp dominates pulse surveys at score 5, we target "good enough" (score 3), not best-in-class |
| "How do we position against Y?" | Section D of each analysis includes exact messaging guidance |
| "Where's the whitespace?" | The landscape summary shows dimensions where no competitor scores above 3 |
| "Are we falling behind?" | Monthly monitoring catches competitive moves before they become threats |
| "How good is good enough?" | Competitor scores set the benchmark for our component maturity targets |
| "What's our moat?" | Tracking D1 + D4 scores over time shows whether our differentiation is holding |
Template (stored at `competitive-analysis/` in the product management repo):

# [Competitor Name] — Competitive Analysis

## Company Profile
- Founded:
- HQ:
- Funding:
- Headcount (est):
- Revenue stage:
- Target market:
- Positioning:

## Product Capability Assessment

| Dimension | Score (1-5) | Evidence |
|---|---|---|
| D1: Strategy-to-Execution Depth | | |
| D2: Execution Visibility | | |
| D3: Employee Experience | | |
| D4: AI & Intelligence | | |
| D5: Time-to-Value | | |
| D6: Platform vs Feature | | |
| D7: ICP Fit | | |

### Key Strengths

### Key Weaknesses

### Recent Product Direction

## Go-to-Market Assessment

### Pricing

### Sales Motion

### ICP & Buyer Persona

### Content & Community

## Zontally Implications

### Where We Win

### Where We Lose (Accept)

### Where We Lose (Close the Gap)

### Positioning Guidance

### Product Implications
Methodology established February 2026. Maintained by ZontallyCPO. Review cadence: Annually (methodology itself). Analyses per cadence above.