JTBD Methods: ODI and the Switch Interview, Step by Step
ODI is a quantitative, survey-driven process that treats customer needs as measurable performance metrics and outputs statistical opportunity scores. The
Switch Interview is a qualitative, narrative-driven process that reconstructs real switching decisions and outputs causal force diagrams.
What follows is the canonical step-by-step process for each, drawn from the primary sources prescribed by each school.
Outcome-Driven Innovation (ODI)
As prescribed by Tony Ulwick / Strategyn. Primary sources: "Jobs to Be Done: Theory to Practice" (2016), "What Customers Want" (2005), strategyn.com, and Ulwick's Medium publication at jobs-to-be-done.com. The 2016 book presents the definitive six-step formulation containing 84 documented sub-steps.
Step 1. Define the market around the job-to-be-done
What happens. The team redefines its market not by product category, technology, or demographic, but as a group of people and the core functional job they are trying to get done. This reframing is foundational: it produces a stable unit of analysis that persists even as products and technologies change.
Input. Company's existing market assumptions; initial hypotheses about who the customer is and what they are trying to accomplish.
Output. A market definition statement in the form: [job executor] + [core functional job-to-be-done] — e.g., "parents trying to pass on life lessons to their children" or "surgeons trying to repair a torn rotator cuff."
Key sub-processes:
- 1a. Identify the job executor. The person who actually performs the job. In B2B markets, Ulwick distinguishes three customer types: the job executor (primary user), the product lifecycle support team (installers, maintainers), and the buyer (purchase decision-maker with financial goals). Each has distinct needs; the job executor is the primary focus.
- 1b. Define the core functional job. State the job as a single, solution-free sentence at the right level of abstraction — neither so narrow it implies a solution ("use a circular saw") nor so broad it loses meaning. Example: "cut a piece of wood in a straight line." The job must be stable over time and technology-agnostic.
Step 2. Uncover the customer's desired outcomes (qualitative research)
What happens. Through qualitative interviews, the team creates a job map of the core functional job and captures the complete set of desired outcome statements — the metrics customers use to measure success at each step of the job.
Input. The defined job-to-be-done and identified job executor.
Output. (1) A completed Universal Job Map with 10–20 job steps; (2) a comprehensive set of 50–150 desired outcome statements tied to those job steps; (3) related jobs, emotional/social jobs, and consumption chain jobs identified.
Key sub-processes:
2a. Create the Universal Job Map. Introduced by Ulwick and Bettencourt in the 2008 HBR article "The Customer-Centered Innovation Map," the job map deconstructs any core functional job into up to eight universal process steps:
- Define — Determine objectives and plan the approach
- Locate — Gather items and information needed to do the job
- Prepare — Set up the environment and organize materials
- Confirm — Verify readiness before proceeding
- Execute — Carry out the core task
- Monitor — Assess whether execution is on track
- Modify — Make adjustments when monitoring reveals problems
- Conclude — Finish, clean up, store outputs
The map must capture what the customer is trying to accomplish — not what they currently do. It must be solution-free and universally applicable regardless of which product the customer uses. Most jobs use some or all of the eight universal steps and decompose into 10–20 specific sub-steps.
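To make the structure concrete, here is a minimal sketch of a job map as a data structure, using the circular-saw job from Step 1. The sub-steps are illustrative assumptions, not Ulwick's published decomposition.

```python
# A job map sketched as an ordered mapping of the eight universal steps to
# job-specific sub-steps, for "cut a piece of wood in a straight line".
# The sub-steps below are illustrative, not from Ulwick's published case.
job_map: dict[str, list[str]] = {
    "Define":   ["determine the cut length and angle"],
    "Locate":   ["gather the workpiece, measuring tape, and saw"],
    "Prepare":  ["mark the cut line", "secure the workpiece"],
    "Confirm":  ["verify the mark against the measurement"],
    "Execute":  ["make the cut along the marked line"],
    "Monitor":  ["watch that the cut stays on track"],
    "Modify":   ["adjust the blade path if the cut drifts"],
    "Conclude": ["smooth the cut edge", "clear away sawdust"],
}

print(sum(len(subs) for subs in job_map.values()), "sub-steps")  # 10
```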
2b. Capture desired outcome statements. With the job map as a scaffold, the team interviews job executors and captures their success metrics for each job step, typically 5–12 outcomes per step. Customers rarely articulate needs in metric form naturally; the interviewer reformulates raw language into the canonical syntax.
The precise outcome statement syntax is:
Direction of improvement + unit of measure (metric) + object of control + contextual clarifier
Or formally: Verb (minimize/maximize) + metric + object of control + contextual clarifier
Examples from Ulwick's published work:
- "Minimize the time it takes to get the songs in the desired order for listening"
- "Minimize the likelihood that the cut goes off track"
- "Minimize the likelihood of failing to identify a health issue that could be improved with nutrition"
The direction of improvement is almost always "minimize" (minimize time, minimize likelihood, minimize the number of). "Maximize" is used rarely. Common metrics include time it takes, likelihood of/that, number of, ability to, frequency with which. The object of control names the specific thing the customer wants to achieve or avoid. The contextual clarifier specifies when, where, or under what conditions.
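The four-part syntax can be shown as code. This is a minimal sketch; the class and field names are this sketch's own, used only to show how the components assemble into a canonical statement.

```python
from dataclasses import dataclass

@dataclass
class OutcomeStatement:
    direction: str          # almost always "minimize"; rarely "maximize"
    metric: str             # e.g. "the time it takes to", "the likelihood that"
    object_of_control: str  # the specific thing to achieve or avoid
    clarifier: str          # when, where, or under what conditions

    def render(self) -> str:
        # direction + metric + object of control + contextual clarifier
        return f"{self.direction} {self.metric} {self.object_of_control} {self.clarifier}"

# Recomposing one of Ulwick's published examples from its parts:
print(OutcomeStatement(
    direction="minimize",
    metric="the time it takes to",
    object_of_control="get the songs in the desired order",
    clarifier="for listening",
).render())
# -> minimize the time it takes to get the songs in the desired order for listening
```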
Ulwick's ten characteristics of a well-formed outcome statement: stable over time, reveals how customers measure value, enables evaluation of competing solutions, guides product design (actionable), measurable, controllable, devoid of solutions, tied to the job-to-be-done, structured for reliable quantitative prioritization, and able to unify the organization.
2c. Capture additional need types. Beyond the core functional job outcomes, the team captures: related jobs (5–20 additional functional jobs the user may want to accomplish), emotional and social jobs (personal feelings and social perceptions desired), consumption chain jobs (purchasing, learning, interfacing, maintaining, upgrading, disposing), and buyer's financial desired outcomes (the purchase decision-maker's financial goals and constraints).
Step 3. Quantify which outcomes are underserved and overserved (quantitative research)
What happens. A statistically valid survey is fielded in which each desired outcome is rated for importance and satisfaction. The Opportunity Algorithm is applied to calculate an opportunity score for every outcome, revealing which needs are underserved, overserved, or appropriately served.
Input. Complete set of 50–150+ desired outcome statements; defined target population of job executors.
Output. Opportunity scores for every outcome; an Opportunity Landscape (scatter plot) visualizing the market's need structure; identification of underserved and overserved outcomes.
Key sub-processes:
3a. Design and field the survey. The instrument contains all outcome statements plus screening and profiling questions. Sample size is typically 180–3,000 respondents (must be statistically representative). Each outcome is rated on two questions:
- Importance: "When [executing this job], how important is it to you to [outcome statement]?" (1–5 scale: 1 = not at all important, 5 = extremely important)
- Satisfaction: "When using [your current solution], how satisfied are you with your ability to [outcome statement]?" (1–5 scale: 1 = not at all satisfied, 5 = extremely satisfied)
3b. Convert to scores using the top-two-box method. The Importance Score is the percentage of respondents who answered 4 or 5, normalized to a 0–10 scale (e.g., if 31% rated 4 or 5, Importance = 3.1). The Satisfaction Score uses the same calculation on the satisfaction question.
3c. Apply the Opportunity Algorithm. The exact formula:
Opportunity Score = Importance + MAX(Importance − Satisfaction, 0)
This gives double weight to importance relative to satisfaction. The MAX function ensures the score never falls below the importance value — high satisfaction cannot "cancel out" high importance. The theoretical range is 0–20.
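A minimal sketch of 3b and 3c together, assuming the raw 1–5 ratings arrive as plain lists. Only the formula itself is Ulwick's; the helper names are this sketch's.

```python
def top_two_box(ratings: list[int]) -> float:
    """3b: share of respondents rating 4 or 5, expressed on a 0-10 scale."""
    return round(10 * sum(1 for r in ratings if r >= 4) / len(ratings), 1)

def opportunity_score(importance: float, satisfaction: float) -> float:
    """3c: Ulwick's Opportunity Algorithm, Imp + max(Imp - Sat, 0)."""
    return importance + max(importance - satisfaction, 0)

# Hypothetical ratings for one outcome across five respondents:
imp = top_two_box([5, 4, 5, 3, 4])   # 4 of 5 rated 4+ -> 8.0
sat = top_two_box([2, 3, 2, 4, 1])   # 1 of 5 rated 4+ -> 2.0
print(opportunity_score(imp, sat))   # 8.0 + max(8.0 - 2.0, 0) = 14.0
```

Against the thresholds below, that hypothetical score of 14.0 would register as a high opportunity.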
Interpretation thresholds:
- > 15: Extreme opportunity — critical unmet need
- 12–15: High opportunity — strong innovation target
- 10–12: Moderate opportunity — worth pursuing in large markets
- < 10: Low opportunity — appropriately served or overserved
3d. Construct the Opportunity Landscape. A scatter plot with Satisfaction on the x-axis (0–10) and Importance on the y-axis (0–10). Each dot is one outcome statement. A diagonal separates underserved outcomes (upper-left, above the line) from overserved outcomes (lower-right, below the line). This visualization reveals the market's overall structure at a glance.
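The landscape is straightforward to sketch in code. Below is a minimal matplotlib version with three hypothetical outcomes; everything about the data is invented for illustration.

```python
import matplotlib.pyplot as plt

# Hypothetical (satisfaction, importance) pairs, one per outcome statement
outcomes = {
    "minimize time to sequence songs": (2.0, 8.0),
    "minimize likelihood the cut goes off track": (6.5, 9.0),
    "minimize setup effort": (8.5, 4.0),
}

fig, ax = plt.subplots()
for label, (sat, imp) in outcomes.items():
    ax.scatter(sat, imp)
    ax.annotate(label, (sat, imp), fontsize=7)

ax.plot([0, 10], [0, 10], linestyle="--")  # importance = satisfaction diagonal
ax.set(xlim=(0, 10), ylim=(0, 10),
       xlabel="Satisfaction (0-10)", ylabel="Importance (0-10)",
       title="Opportunity Landscape (underserved above the diagonal)")
plt.show()
```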
Step 4. Discover hidden segments of opportunity (outcome-based segmentation)
What happens. The quantitative data is segmented not by demographics or behavior, but by patterns of unmet needs — revealing groups of customers who share similar profiles of underserved and overserved outcomes.
Input. Complete dataset of importance and satisfaction ratings across all respondents and all outcomes.
Output. Statistically distinct customer segments, each with a unique set of unmet needs; segment-specific Opportunity Landscapes.
Key sub-processes:
4a. Factor analysis is run on the dataset to determine which desired outcomes best explain variance in responses.
4b. Cluster analysis groups respondents who rate outcomes similarly — customers with similar importance-satisfaction profiles are clustered together. This is Strategyn's proprietary Outcome-Based Segmentation methodology. The segments discovered are based on unmet needs, not demographics, though demographic profiling questions in the survey can subsequently describe each segment.
4c. Generate segment-level Opportunity Landscapes. Each segment gets its own scatter plot. Different segments frequently reveal radically different opportunity profiles. Ulwick cites the Bosch circular saw case: one segment (~33% of users, primarily finish carpenters) had 14 high-opportunity unmet outcomes, while other segments were largely overserved.
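Strategyn's exact segmentation method is proprietary, so the sketch below shows only the generic factor-then-cluster pipeline the text describes, run on synthetic data; the component and cluster counts are arbitrary assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Synthetic stand-in for the survey: 300 respondents x 100 outcomes, each
# cell a respondent's (importance - satisfaction) gap on the 0-10 scales.
gaps = rng.uniform(-5, 10, size=(300, 100))

# 4a: reduce the outcome ratings to the factors that explain the variance.
factors = FactorAnalysis(n_components=5, random_state=0).fit_transform(gaps)

# 4b: cluster respondents with similar profiles into candidate segments.
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(factors)

# 4c: each segment then gets its own Opportunity Landscape, computed only
# over that segment's rows of the original ratings.
for seg in range(4):
    print(f"segment {seg}: {(segments == seg).sum()} respondents")
```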
Step 5. Formulate the growth strategy
What happens. Using the segmented Opportunity Landscape, the team selects which segments to target and which growth strategy to employ, using the JTBD Growth Strategy Matrix.
Input. Segment-specific Opportunity Landscapes showing underserved and overserved outcome patterns.
Output. A chosen growth strategy per target segment; clear innovation direction.
Key sub-process — the five growth strategies:
The matrix maps strategies along two axes: gets the job done better ↔ worse and costs more ↔ less.
- Differentiated strategy — gets the job done significantly better at a higher price. Targets underserved segments willing to pay more. Examples: Dyson, Nespresso, iPhone.
- Dominant strategy — gets the job done significantly better AND cheaper. Targets the entire market. Rare and transformative. Examples: Google Search, Netflix streaming, UberX.
- Disruptive strategy — gets the job done adequately at much lower cost. Targets overserved customers paying for performance they don't need. Examples: Google Docs, TurboTax, Dollar Shave Club.
- Discrete strategy — gets the job done worse AND costs more. Serves restricted customers with no better alternative. Examples: airport concessions, stadium food.
- Sustaining strategy — incremental improvement, slightly better and/or slightly cheaper. Incumbent defense; poor strategy for market entry.
Strategic logic: If a market is primarily underserved → differentiated or dominant. If primarily overserved → disruptive. If mixed → different strategies for different segments.
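That conditional logic is simple enough to state as code. A minimal sketch, assuming the inputs are the shares of outcomes falling above and below the landscape diagonal; the 0.5 cutoffs are this sketch's assumptions, not published thresholds.

```python
def pick_strategy(underserved_share: float, overserved_share: float) -> str:
    """Sketch of the strategic logic above; cutoffs are assumptions."""
    if underserved_share > 0.5:
        return "differentiated or dominant"
    if overserved_share > 0.5:
        return "disruptive"
    return "mixed market: choose a strategy per segment"

print(pick_strategy(underserved_share=0.7, overserved_share=0.1))
# -> differentiated or dominant
```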
Step 6. Formulate the product strategy — ideate, evaluate, and position
What happens. With target segments and their ranked unmet outcomes identified, the team devises solutions, evaluates them against customer-defined metrics, and develops positioning and pricing.
Input. Target segment(s) with prioritized unmet desired outcomes; chosen growth strategy.
Output. Validated product/service concept(s) ready for development; positioning strategy; pricing framework.
Key sub-processes:
6a. Outcome-driven ideation. Ideation does not begin with brainstorming — it begins with the ranked list of underserved outcomes. For platform-level solutions, ideate at the job-step level (how to get the entire job done on a single platform). For feature-level solutions, ideate at the outcome level (how to address specific unmet outcomes). For business model, define revenue model and cost structure after the platform is decided.
6b. Concept evaluation. Product concepts are assessed against the complete set of customer-defined desired outcomes. Customers rate the degree to which a concept would satisfy each specific outcome. A product that gets the job done 20% better or more than existing solutions is very likely to win in the marketplace — this is Ulwick's published threshold for predictable success.
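A minimal sketch of that evaluation arithmetic, assuming each concept is rated 0–10 against the same outcome set. The unweighted average is a simplification of this sketch's own; weighting each outcome by its opportunity score would be a natural refinement.

```python
def job_done_score(ratings: dict[str, float]) -> float:
    """Unweighted average of customer ratings (0-10) across all outcomes."""
    return sum(ratings.values()) / len(ratings)

# Hypothetical ratings of a new concept vs. the incumbent solution:
concept   = {"speed": 8.1, "accuracy": 7.4, "setup": 6.9}
incumbent = {"speed": 6.0, "accuracy": 6.5, "setup": 5.5}

improvement = (job_done_score(concept) - job_done_score(incumbent)) / job_done_score(incumbent)
print(f"{improvement:.0%} better than the incumbent")  # -> 24% better
print("clears the 20% threshold" if improvement >= 0.20 else "keep iterating")
```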
6c. Competitive analysis. Existing products are rated against the same outcome metrics, generating a value-migration graph that reveals precisely where competitors fall short.
6d. Positioning. Communicate how the product helps customers execute the functional job better than competing solutions, specifically addressing their underserved outcomes, and how it satisfies associated emotional and social jobs.
6e. Pricing. Aligned with the growth strategy: premium for differentiated, low-cost for disruptive, better-and-cheaper for dominant.
The Switch Interview methodology
As prescribed by Bob Moesta and Chris Spiek / The Re-Wired Group. Primary sources: "Demand-Side Sales 101" (Moesta, 2020), "Jobs-to-be-Done Handbook" (Moesta and Spiek, 2014), therewiredgroup.com, JTBD Radio / jobstobedone.org, and Moesta's 2018 and 2023 Business of Software workshop presentations.
Step 1. Frame the learning questions
What happens. The team defines the specific research question and the type of switching decision to study. The question is framed from both the supply side (what the business needs to learn) and the demand side (what progress the customer was trying to make).
Input. A business challenge, product question, or hypothesis about why customers buy, switch, or leave.
Output. A clearly framed research question; parameters for the "experiment" — specifically, which single decision type to study.
Key details: Pick one decision type per study batch: signing up, upgrading, switching from a competitor, or canceling. Do not mix types — the force patterns will not cluster cleanly. Consider including both "hirers" (people who chose your product) and "firers" (people who left it) to reveal both pull and push.
Step 2. Screen, recruit, and schedule
What happens. Identify and recruit recent switchers — people who made a real switching decision recently enough to recall the details.
Input. Framed research question; customer or prospect database.
Output. Scheduled interviews with 10–12 carefully selected participants.
Key details:
- The "switching" criterion is non-negotiable. Only interview people who recently made an actual purchase or switch — not people who are considering it, not long-time habitual users. Moesta's prescribed window is within the last 90 days (ideally 30–90 days). Novice interviewers should stay under six months; anyone beyond 1.5 years will struggle to recall detail.
- Qualification checklist (from Moesta's 2018 workshop): the purchase was for themselves (not a gift), they "struggled" to make the decision (many choices, hard to justify, or took a long time), and they have used the product at least once.
- Interview count: 10 interviews is the Re-Wired Group's canonical number. Moesta has written: "If you conduct over ten interviews, the intent gets lost. You have so much information you end up relying on words instead of intent." For novices, 15–20 is acceptable. Never 100 — "it's not science."
- Intentional variation in recruitment. Based on Dr. Genichi Taguchi's Robust Design of Experiments (RDOE): deliberately vary demographic and contextual factors (gender, company size, reason for purchase) across the ten participants to induce meaningful variation. Do not recruit only your ideal customers.
- Session duration: 30–60 minutes per interview, typically ~45 minutes.
Step 3. Conduct the JTBD individual interview (the Switch Interview)
What happens. A documentary-style conversational interview reconstructs the complete timeline of the switching decision, from the first thought that something needed to change through purchase and first use.
Input. A recruited recent switcher; the timeline framework; recording equipment; two interviewers.
Output. A recorded narrative covering the entire switching journey, with timeline events, forces, emotions, tradeoffs, and context captured.
This step has several structured sub-phases:
3a. Set the documentary tone (~2 minutes). Make the conversation feel casual and exploratory, not interrogative. Moesta's typical opening: "I don't have a long list of questions. I just want to hear your story." Secure the interviewee's permission for the interviewer to interrupt ("It's just curiosity, not me being rude"). Ask for first names of key people (boss, spouse, colleague) early — using them later unlocks deeper recall.
3b. Use the "game on / game off" technique. Developed by Spiek: a phone sits on the table — screen-up means the interview is live ("game on"), screen-down means the interviewers are pausing to discuss technique or coach each other ("game off"). This allows real-time calibration without losing the interviewee's trust.
3c. Reconstruct the timeline. The interview follows Moesta's six-stage buying timeline — the core structural framework of the method. The interviewer typically starts at the purchase moment and works backward:
- First thought — The initial realization that something needs to change; a "struggling moment" creates mental space for alternatives. Key probe: "When did you first realize something wasn't working?"
- Passive looking — Casual, unfocused awareness of alternatives; noticing without investing effort. Can last months or years. Key probe: "Were you noticing anything in the background?"
- Active looking — Focused research and comparison; requires a catalyst or "time wall" (deadline) to transition from passive. Key probe: "What did you Google? What alternatives did you compare?"
- Deciding — Making tradeoffs; the moment of commitment. Key probe: "What were you weighing? What did you have to give up?"
- Purchasing/first use — The "big hire"; expectations meet reality. Key probe: "Walk me through the first time you used it."
- Ongoing use — Continued habitual use; the "little hires"; reveals satisfaction and emerging new struggles. Key probe: "How has your use changed? What's still frustrating?"
3d. Apply core probing techniques. Moesta prescribes specific interview techniques, documented in Chapter 4 of Demand-Side Sales 101; the seven below are the core of the set:
- Details, details, details — Ask specific, seemingly trivial questions ("What was the weather?", "Who was with you?", "What device were you on?") to jog deeper memory recall.
- Context creates meaning — When behavior seems irrational, probe for more context. "The irrational becomes rational with context."
- Contrast creates value — Use bracketing to force articulation of criteria: "Why virtual instead of in-person? Why not just drive there?"
- Unpack vague words — "Fast compared to what?" "What does 'easy' mean to you?" Every vague word hides a specific comparison.
- Energy matters — Listen for emphasis, sighs, pauses, intonation; when emotion surfaces, probe deeper.
- Play dumb — Frame challenging questions as confusion: "Hold on, I'm confused…" This prevents defensiveness.
- Three layers of language — Drive conversation from the pablum layer (generic platitudes) through the fantasy layer (solutions and wishes) down to the causal layer (what actually happened and why).
3e. Map the four forces in real time. During the interview, tag statements to the Four Forces of Progress:
| Force | Direction | Description |
|---|---|---|
| Push of the current situation | Toward change | Frustration, pain, or dysfunction with the status quo — the "struggling moment" |
| Pull of the new solution | Toward change | Attraction of the new solution; imagining a better future |
| Anxiety of the new solution | Against change | Fear, uncertainty, switching costs — "Will this actually work?" |
| Habit of the current situation | Against change | Comfort with the status quo, inertia, learned behaviors that must be abandoned |
The switching equation: Progress happens when (Push + Pull) > (Anxiety + Habit). Moesta's key insight: "The money is made on the anxiety side of the equation. Reducing anxieties gets them to what you're selling faster" — most companies over-invest in increasing pull (adding features) while neglecting anxiety and habit.
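The switching equation reduces to a one-line comparison. A minimal sketch follows, with the caveat that Moesta's force analysis is qualitative; the numeric scoring here is an assumption made purely to illustrate the inequality.

```python
from dataclasses import dataclass

@dataclass
class Forces:
    push: float     # F1: frustration with the current situation
    pull: float     # F2: attraction of the new solution
    anxiety: float  # F3: fear and uncertainty about the new solution
    habit: float    # F4: comfort and inertia of the status quo

    def progress_happens(self) -> bool:
        # Progress happens when (Push + Pull) > (Anxiety + Habit)
        return (self.push + self.pull) > (self.anxiety + self.habit)

story = Forces(push=7, pull=6, anxiety=8, habit=6)
print(story.progress_happens())
# False: real demand, blocked by anxiety. Per Moesta, the highest-leverage
# move here is reducing anxiety, not adding pull.
```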
3f. Capture three dimensions of motivation. Each force and timeline stage is analyzed across three dimensions: functional (practical concerns such as speed, effort, and efficiency), emotional (internal feelings, fears, frustrations), and social (how the decision affects how others perceive the buyer).
Step 4. Debrief the interview
What happens. Immediately after each interview — within 15 minutes — the interviewing pair begins a debrief while recall is fresh.
Input. Interview recording and notes; the interviewers' fresh observations.
Output. A structured debrief document covering forces, timeline events, key language, and interpretive notes.
Key details: Moesta prescribes spending 30–45 minutes debriefing each interview, with the two interviewers actively debating what they heard. The debrief document captures: the core job (the progress the person was trying to make), the specific trigger event that moved them from passive to active looking, the dominant force (which of F1–F4 was strongest), hiring criteria (what tipped the final choice), firing criteria (what almost stopped them or would cause them to leave), the primary competitor being switched from (including "do nothing"), the strongest emotional moment, key tradeoffs made, timeline events mapped to the six stages, and open questions for subsequent interviews.
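For teams that want the debrief captured consistently, here is one possible shape for that document as a record type; the field names are this sketch's own, mirroring the checklist above.

```python
from dataclasses import dataclass, field

@dataclass
class Debrief:
    core_job: str                  # the progress the person was trying to make
    trigger_event: str             # what moved them from passive to active looking
    dominant_force: str            # strongest of F1-F4
    hiring_criteria: list[str]     # what tipped the final choice
    firing_criteria: list[str]     # what almost stopped them or would drive them away
    switched_from: str             # the primary competitor, including "do nothing"
    emotional_peak: str            # the strongest emotional moment
    tradeoffs: list[str]           # what they gave up to make the switch
    timeline: dict[str, str]       # six stages -> key events and durations
    open_questions: list[str] = field(default_factory=list)  # for later interviews
```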
Step 5. Analyze interviews using the cluster method
What happens. After all interviews are complete, the team groups stories by similarity to identify distinct jobs.
Input. All debrief documents from 10+ interviews.
Output. 3–5 distinct clusters (jobs), each representing a group of people with similar contexts, struggling moments, and desired outcomes.
Key details: Moesta and Spiek describe three analysis methods (from the 2018 workshop slides): the contrast method (compare interviews to find differences), the dimensionalize-and-map method (plot interviews along key dimensions), and the cluster method (group similar stories together). Pattern detection should begin after interview three.
The team looks for stories where people share similar contexts, struggling moments, and desired outcomes. The target granularity — borrowing Moesta's language — is "the jagged place in the middle": broad enough to group large segments of people with similar circumstances, tight enough that people from genuinely different circumstances cannot be forced into the same job. You want 3–5 jobs, not one and not one hundred. Eight elements are analyzed for each cluster: context, struggling moments, pushes and pulls, anxieties and habits, desired outcomes, hiring and firing criteria, key tradeoffs, and basic quality requirements that cannot be violated.
Step 6. Detail each JTBD
What happens. For each cluster, the team creates a complete "job specification" with all elements fully articulated.
Input. Clustered interview data grouped by job.
Output. Complete JTBD specifications including force diagrams, job stories, timeline maps, competitive set maps, and hiring/firing criteria for each job.
Key deliverables per cluster:
- JTBD statement in the canonical job story format: "When [situation + trigger], I want [desired progress], so I can [expected outcome]."
- What the job is "more and less about" — a contrastive description
- Forces of Progress diagram — a visual four-quadrant map populated with specific customer language for each force
- Hiring and firing criteria — what must be present, and what disqualifies
- Key tradeoffs — what customers are willing to give up
- Design requirements — essential qualities that cannot be violated
- Competitive set map — the actual alternatives the customer considered, including "do nothing" and non-obvious competitors
- Timeline map — the six-stage buying timeline filled in with specific events, durations, and emotional high/low points
Step 7. Identify opportunities and implications
What happens. The team translates JTBD findings into actionable business decisions across product, marketing, sales, and strategy.
Input. Detailed JTBD specifications for all clusters.
Output. Prioritized opportunities; communication strategies; strategic implications for product, marketing, and sales.
Key translations:
- Job stories → positioning language and messaging
- Trigger events → marketing targeting and advertising placement (reach people in the struggling moment)
- Anxiety inventory → onboarding design and risk-reduction strategies (the highest-leverage intervention, per Moesta)
- Competitive set → sales battle cards (often includes non-obvious competitors)
- Hiring criteria → feature prioritization and sales demo scripts
- Struggling moments → content marketing and demand generation
- Timeline stages → sales funnel alignment
The most actionable clusters have strong push and pull but high anxiety — real demand behind a solvable barrier.
Step 8. Conduct the JTBD workshop
What happens. A two-day report-out workshop with the broader team to share findings, align understanding, ideate solutions, and build a 90-day action plan.
Input. Complete JTBD analysis and opportunity mapping from steps 5–7.
Output. Organizational alignment around the jobs discovered; a 90-day action plan; defined projects and quick-win adjustments.
Key details (from the 2018 workshop slides): The workshop includes an executive overview, review of each job found, ideation sessions for how to fulfill the jobs discovered, 90-day planning, definition of new projects, and identification of quick tweaks. Optional follow-ons include office hours for key team members, a project integration and planning session, and a second 90-day strategic plan.
Canonical sprint schedule: The Re-Wired Group runs the full process as a one-week sprint: Monday and Tuesday are devoted to conducting 5 interviews per day with debriefs after each. Wednesday and Thursday are analysis days (clustering, detailing, opportunity framing). Friday is synthesis (finalizing the JTBD frame, project ideation, communication plan, 90-day plan draft).
How the two processes compare at a glance
| Dimension | ODI (Ulwick / Strategyn) | Switch Interview (Moesta / Spiek) |
|---|---|---|
| Epistemology | Positivist — measurable metrics, statistical validation | Interpretivist — narrative reconstruction, causal inference |
| Primary method | Quantitative survey (n = 180–3,000) | Qualitative interviews (n = 10–12) |
| Unit of analysis | Desired outcome statement (a performance metric) | Switching story (a causal narrative) |
| How "job" is defined | Core functional job decomposed into measurable process steps | Progress a person is trying to make in a specific circumstance |
| Key analytical tool | Opportunity Algorithm: Imp + MAX(Imp − Sat, 0) | Four Forces of Progress: (Push + Pull) vs. (Anxiety + Habit) |
| Segmentation | Factor + cluster analysis on outcome ratings | Narrative clustering of switching stories |
| Primary output | Ranked opportunity scores; Opportunity Landscape | Force diagrams; job stories; timeline maps |
| Strategy selection | 5-strategy Growth Matrix (differentiated, dominant, disruptive, discrete, sustaining) | Opportunity prioritization based on force strength and addressable anxiety |
| Timeline | Weeks to months (qualitative + survey + analysis) | One-week sprint (interviews Mon–Tue, analysis Wed–Thu, synthesis Fri) |
| Where it excels | Large markets needing statistical confidence; prioritizing among many needs | Understanding causation behind switching; messaging and positioning; early-stage products |
Conclusion
These two methodologies are not competitors — they are complementary lenses on the same underlying theory. ODI answers "which needs matter most and to whom" with quantitative precision. It produces statistically validated opportunity scores that can direct R&D investment across large organizations with high confidence. Its power lies in the rigorous outcome statement syntax, the Opportunity Algorithm, and outcome-based segmentation — tools that transform fuzzy customer input into measurable innovation targets.
The Switch Interview answers "why do people actually change" with causal depth. It produces rich narrative artifacts — force diagrams, timeline maps, job stories — that reveal the emotional and contextual triggers behind real decisions. Its power lies in timeline reconstruction and the four-forces framework, which expose the anxieties and habits that quantitative surveys cannot detect.
For practitioners building a diagram comparing these processes, the critical structural difference is this:
ODI flows from definition → qualitative mapping → quantitative measurement → statistical segmentation → strategy selection → solution design — a linear funnel from broad to narrow.
The Switch Interview flows from question framing → recruitment → narrative reconstruction → debrief → pattern clustering → job specification → opportunity translation → organizational alignment — an iterative spiral from individual stories to shared meaning.
ODI's logic is deductive; the Switch Interview's logic is abductive.
Both begin with the customer's job. They simply listen for different things.