Time In Status
27th Mar 2026

Make the case for Time In Status

Where does the time actually go?

Every sprint, someone asks: "Why is this late?" And every sprint, the answer is a shrug—"it got stuck in review," "we hit some blockers," "it took longer than expected."

You feel the delays. You know something's off. But without data, "reviews are slow" is just a feeling—easily dismissed in standups and forgotten by the retro.

The questions most teams can't answer:

  • How long do issues actually sit in each status?
  • Is "In Review" your silent bottleneck—or is it QA?
  • Are you losing 70% of your potential throughput to wait time you've never measured?

Time in Status reveals where work waits, spots bottlenecks before they pile up, and gives you the data to fix what's actually broken—not what you think is broken. Use these ready-to-send templates to get buy-in from your team, leaders, and security stakeholders.

Ready-to-send templates

Getting started with Time in Status often means communicating its value to different stakeholders. Whether you're presenting to your team, leadership, or security, this section provides ready-to-use templates and talking points to help you clearly articulate how Time in Status turns invisible process problems into visible, fixable improvements.

Make the case to leadership

From: you@company.com
To: name@company.com
Subject: We're losing throughput to bottlenecks we've never measured

Hey [Name],

What if I told you we might be shipping half of what we could—with the
same team, same skills, same sprint length?

I've been looking into why sprint commitments keep getting missed, and
the pattern is clear: we don't actually know where time goes in our
process. We know stories are "in progress" or "in review"—but not
how long they sit there, or whether that's normal.

Here's what the math says:

A team with 3-day code reviews ships ~260 stories/year.
Cut that to same-day reviews → ~520 stories/year.
Same people. Same skills. Double the output.

That's not a theory—it's Little's Law. And the hidden costs go deeper:

- 37.9 days/year lost to context switching per team
- 24.3 days/year lost to merge conflict rework
- $37,275/year per team in hidden costs from wait time alone

To match a faster team's output, we'd need to hire an entire second
team. That's a $500k+ decision vs. a process change.

But here's the thing—we can't fix what we can't see. I found a Jira
app that shows exactly where work gets stuck, stage by stage:

Time in Status for Jira

It shows time spent in each status right on the board card during
standup. No clicking into issues, no asking around. The Sprint Report
automatically ranks our top bottleneck stages. And it has an AI coach
(Noesis) that analyzes our specific patterns and suggests what to fix.

I'd like to install the free trial and see what it finds.
Any reason not to?

[Your name]
 

Talking points for the leadership conversation

If they ask "How is this different from just looking at the board?" The board shows what's in progress right now. Time in Status shows how long it's been there—and how that compares to similar work. A card in "In Review" for 3 hours is fine. The same card for 3 days is a $500k/year problem hiding in plain sight.

If they ask "Won't this slow people down with more reporting?" Zero reporting overhead. Time in Status runs automatically—durations show up on board cards, in the issue panel, and in sprint reports. The team doesn't log anything. They just see the data.

If they ask "What's the ROI?" At $75/hour loaded developer cost, 3-day review waits create 497 hours of lost developer time per team per year. That's $37,275 in hidden costs—before counting the missed deadlines and scope cuts. The app pays for itself if it saves one bottleneck conversation per sprint.
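If leadership wants to check the math themselves, the throughput and ROI figures above reduce to a few lines of arithmetic. This is an illustrative sketch: the hourly rate, lost-hours figure, and stories-per-year numbers are the assumptions quoted in the email, not measured data from your team.

```python
# Back-of-the-envelope check of the figures quoted in the leadership email.
# All inputs are the email's assumptions, not measured data.

HOURLY_RATE = 75           # loaded developer cost, $/hour (assumed)
LOST_HOURS_PER_YEAR = 497  # dev hours lost per team per year to 3-day review waits (assumed)

# Little's Law: throughput = WIP / cycle time. If review wait dominates
# cycle time, cutting 3-day reviews to same-day roughly halves cycle
# time, which roughly doubles throughput at the same WIP.
stories_with_3day_reviews = 260                          # stories/year (assumed)
stories_with_sameday_reviews = stories_with_3day_reviews * 2

hidden_cost = HOURLY_RATE * LOST_HOURS_PER_YEAR

print(stories_with_sameday_reviews)   # 520
print(f"${hidden_cost:,}/year")       # $37,275/year
```

Running it confirms the email's claims are internally consistent: 497 hours at $75/hour is exactly $37,275, and halving cycle time doubles 260 stories/year to 520.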

Make the case to your team

From: you@company.com
To: your-team@company.com
Subject: What if standup actually showed us where work is stuck?

Hey Team,

Quick question: when someone asks "why did this take so long?" in the
retro—do we actually have the answer?

We all feel when things are slow. But we're debating opinions instead
of analyzing facts. Is the bottleneck in review? In QA? Is it a
one-off, or is it the same stage every sprint?

I found a Jira app that shows us exactly where time goes:

Time in Status for Jira

Here's what it does:

- Shows "3d 2h" right on the board card during standup — no clicking,
  no asking around
- Breaks down every issue's journey: how long in each status, where
  the wait time is, whether this issue is healthy or an outlier
- Sprint Report ranks our top bottleneck stages so retros focus on
  what matters
- AI coach (Noesis) analyzes patterns across sprints and suggests
  specific experiments to try

Important: this is NOT a surveillance tool. It measures process, not
people. The question isn't "who's slow?"—it's "what's blocking flow?"

Think of it like debugging. When a deploy fails, we look at the logs.
When our process fails, we should look at the data. This gives us
the logs.

I'd like to run it for a sprint or two and see what it reveals.
What do you think?

Best,
[Your Name]
 

Talking points for the team conversation

If someone asks "Is this going to be used to monitor us?" No. Time in Status measures the process, not individuals. "Your review took 3 days" is surveillance. "Our review stage averaged 2.8 days—what's causing the queue?" is process improvement. The framing matters, and we'll use it the right way.

If someone asks "Do we need another tool?" Time in Status isn't another tool to manage. It's built into Jira—it shows up on the board, on the issue, and in sprint reports. There's nothing to log and no dashboards to maintain. It simply surfaces data that's currently hidden, so we can improve our flow.

If someone asks "What if the data makes us look bad?" That's the point—not to look bad, but to stop guessing. If reviews are slow, the data helps us make the case for dedicated review time, smaller PRs, or better team agreements. It gives us evidence instead of opinions.

Make the case to security

From: you@company.com
To: it@company.com
Subject: Time in Status for Jira — Fortified Security, Atlassian Infrastructure

Hi [Name],

Our team wants to install Time in Status by Smart Guess, a Jira app
that shows where work gets stuck in our process so we can identify
and fix bottlenecks:

Time in Status for Jira

We understand security is a top priority—rightfully so. Here's why
Time in Status meets strict security standards:

1. Atlassian Cloud Fortified: Time in Status is part of the Atlassian
   Fortified program (achieved July 2024), meaning it meets rigorous
   security, reliability, and support standards. It's been thoroughly
   validated for enterprise-grade protection and performance.

2. Cloud Security Participant: Part of the Atlassian Marketplace Cloud
   Security Participant Program, with a $5,000 USD Bugcrowd reward
   pool since 2022. Ongoing security testing through Atlassian's Bug
   Bounty Program proactively identifies and resolves potential
   vulnerabilities.

3. All Data Stays on Atlassian: Smart Guess does not store customer
   data outside of Atlassian's platform. All persistent application
   data resides in Atlassian Forge Storage. User names and avatars
   are retrieved on-demand via API and never persisted.

4. Built on Atlassian Forge: Uses Atlassian's latest cloud platform—
   hosted on AWS—with all runtime, storage, and communication
   infrastructure managed by Atlassian. No local servers or data
   centers. Meets strict security standards now and in the future.

5. AI Data Handling: Only aggregated workflow metrics (cycle time,
   throughput, WIP limits, issue keys, state transitions) are
   transmitted to the AI coach. No summaries, descriptions, or
   personal information is shared.

6. Security Practices:
   - Two-factor authentication mandatory for all staff
   - Snyk Code (SAST) and Snyk Open Source scanning in CI/CD
   - Critical vulnerabilities resolved within 2 weeks
   - Full disk encryption on all employee devices
   - External storage devices prohibited
   - Employee credentials disabled same-day upon termination

7. Incident Response: Six-step process (investigate, notify Atlassian,
   contain, remediate, notify customers, post-incident review).
   Security contact: security@smartguess.is

8. Permissions and Policies: Full details at:
   - Information Security: https://smartguess.is/policy/information-security/
   - Incident Management: https://smartguess.is/policy/incident-management/

Since it runs entirely on the Atlassian platform with Forge, it
inherits the same security posture as our Jira instance. Should I go
ahead with the trial, or is there anything specific you'd like me to
check first?

Best,
[Your Name]
 

Talking points for the security conversation

If they ask about data residency: All data stays on the Atlassian platform. Smart Guess maintains no local servers or data centers. Everything runs on Atlassian Forge, hosted on AWS.

If they ask about AI/LLM data: Only aggregated workflow metrics are sent to the AI coach—cycle time, throughput, WIP limits, issue keys, and state transitions. No issue summaries, descriptions, comments, or personal information is shared.

If they ask about vulnerability management: Smart Guess uses Snyk Code (SAST) and Snyk Open Source for continuous security scanning. Bug-fix timeframes: Critical within 2 weeks, High within 4 weeks. They participate in the Atlassian Bug Bounty Program with a $5,000 Bugcrowd reward pool.

If they ask about compliance certifications: Smart Guess relies on Atlassian's platform-level certifications for infrastructure security. Their own Atlassian Cloud Fortified badge validates enterprise-grade security, reliability, and support standards.

Why Time in Status? The evidence.

If your stakeholders want more context, share these articles that break down the real cost of invisible process problems:

  • The hidden queue: what 3-day reviews actually cost
    The math behind why 3-day reviews cost you an entire second team's worth of output—and $37,275/year per team in hidden costs.

  • The 3-Day Code Review Problem (And Why You Can't Fix What You Can't See)
    Real developer stories from Reddit about how review wait times vary wildly between teams—and why nothing changes without visibility.

  • How to Debug Your Process with Time in Status for Jira
    A practical walkthrough from install to first insight—board cards, sprint reports, and AI-powered analysis.

  • 5 Common Mistakes When Tracking Time in Status
    What goes wrong when teams collect time data without context, and how to avoid turning metrics into surveillance.
