EDPA — Evidence-Driven Proportional Allocation
Deriving capacity from delivery evidence
1. Summary
A person declares capacity for a period. The system identifies work items they demonstrably contributed to. Capacity is ex post proportionally split among relevant items based on Job Size and contribution level.
The result is a report that is a byproduct of delivery, not a separate administrative activity.
The model provides two complementary views of the same data:
- Per-person view: how a person's capacity is distributed among their items → reports, timesheets
- Per-item view: how work on an item is distributed among people → cost allocation, audit per deliverable
2. Terminology
| Term | Definition | Configuration |
|---|---|---|
| Iteration | Delivery cycle. Stories are planned, delivered, and closed. | 1w (AI-native) / 2w (classic) |
| Planning Interval | Planning cycle. Features are planned, coordinated, and evaluated. | 5w (4+1 IP) / 10w (8+2 IP) |
| IP Iteration | Innovation & Planning iteration at the end of PI. | Last iteration in PI |
| Job Size (JS) | Relative size estimate of a work item. Modified Fibonacci. | 1, 2, 3, 5, 8, 13, 20 |
| WSJF | Prioritization score: (BV + TC + RR) / JS | Independent per level |
| CW | Contribution Weight — degree of a person's involvement on an item. Independent per person. | 0.15–1.0 |
| Evidence Score | Raw sum of activity signals from GitHub. Detection layer. | Automatic |
| RS | Relevance Signal — normalized signal derived from Evidence Score. | Automatic |
| Derived Hours | Hours the model derives from capacity and evidence; the primary output. | After Iteration Close |
3. Model architecture
3.1 Three separate layers
| Layer | Purpose | Where it lives |
|---|---|---|
| Operational Metadata | Live delivery data | GitHub Issues + Projects |
| Capacity Registry | People capacity, roles, FTE | YAML / JSON in repo |
| Evidence & Reporting | Frozen snapshots, reports, signatures | /snapshots, /reports, /signed |
3.2 Source of truth
GitHub IS source of truth for: issue hierarchy, ownership, status, Job Size, WSJF inputs, review trail, delivery audit trail.
GitHub IS NOT source of truth for: hourly capacity, FTE, derived hours, signature states. These live in the evidence layer.
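The capacity registry in the evidence layer might look like the following. This is a hypothetical sketch: only `planning_factor` (under `teams:`, see §5.1) and `availability: confirmed` are named in the document; every other field name is an illustrative assumption.

```yaml
# Hypothetical capacity.yaml shape — field names other than
# planning_factor, teams:, and availability are assumptions.
teams:
  platform:
    planning_factor: 0.8       # team-level property, default 0.8
people:
  - id: alice
    team: platform
    fte: 1.0
    capacity_hours: 40         # per iteration (1w AI-native cadence)
    availability: confirmed    # set at Iteration Planning
```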
4. Work item hierarchy
Each level has its own independent Job Size and WSJF. Feature WSJF is not computed from Stories below it.
| Level | JS max | GitHub mapping |
|---|---|---|
| Epic | 20 | Issue Type: Epic |
| Feature | 13 | Issue Type: Feature |
| Story (2/10) | 8 | Issue Type: Story |
| Story (1/5) | 5 | Issue Type: Story |
| Task | — | Issue Type: Task |
Items above the JS limit must be broken down; granularity guardrails enforce this.
5. Model: Evidence-Driven Proportional Allocation
5.1 Iteration Planning Protocol
Before EDPA derives hours (ex-post), the team must plan the iteration (ex-ante). Planning requires confirmed capacity as input.
- Confirm Capacity — Each team member confirms availability. This is a commitment, not an estimate. External collaborators negotiate allocation explicitly. Result: `availability: confirmed` in `capacity.yaml`.
- Calculate Planning Capacity — `Planning_Capacity = Σ Capacity[P, I] × planning_factor`. The `planning_factor` is a team-level property (configured per team in `capacity.yaml` under `teams:`). Default: 0.8.
- Select Work — Pull stories from the prioritized backlog (WSJF order) until `Σ JobSize` approaches historical velocity × `planning_factor`. Do not plan to 100%.
- Buffer — The remaining ~20% absorbs support, maintenance, incidents, and unplanned work. If buffer items generate evidence, EDPA allocates them normally.
- Edge case — If no unplanned work occurs, all capacity goes to planned items. The guarantee `Σ = Capacity` holds regardless.
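The planning steps above can be sketched as follows. This is a minimal sketch under stated assumptions: the backlog item shape (`js`, `wsjf` keys) and the velocity figures are illustrative, not part of the specification.

```python
# Hedged sketch of the §5.1 planning math; data shapes are assumptions.

PLANNING_FACTOR = 0.8  # team-level default from capacity.yaml

def planning_capacity(confirmed_hours: list[float]) -> float:
    """Planning_Capacity = Σ Capacity[P, I] × planning_factor."""
    return sum(confirmed_hours) * PLANNING_FACTOR

def select_work(backlog: list[dict], velocity: float) -> list[dict]:
    """Pull stories in WSJF order until Σ JobSize approaches
    historical velocity × planning_factor; the rest stays as buffer."""
    budget = velocity * PLANNING_FACTOR
    planned, total = [], 0
    for story in sorted(backlog, key=lambda s: s["wsjf"], reverse=True):
        if total + story["js"] > budget:
            break
        planned.append(story)
        total += story["js"]
    return planned
```

Note that `select_work` stops before exceeding the budget rather than planning to 100%, which leaves the ~20% buffer described above.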
5.2 Inputs
| Input | Source | Example |
|---|---|---|
| Capacity[P, I] | Confirmed at Iteration Planning | 40h |
| RelevantItems[P, I] | Automatic from GitHub evidence | 6 items across 3 levels |
| JobSize[item] | Custom field on issue | Fibonacci 1–20 |
| CW[P, item] | From evidence / manual override | 0.15–1.0 |
| RS[P, item] | Normalized from Evidence Score | 0.25–1.0 |
5.3 Evidence detection
| GitHub signal | Evidence Score | Typical CW |
|---|---|---|
| Issue assignee | +4 | 1.0 (owner) |
| Explicit /contribute command | +3 | explicit |
| PR author referencing item | +2 | 0.6 (key) |
| Commit author with ref in message | +1 | 0.25 (reviewer) |
| PR reviewer | +1 | 0.25 (reviewer) |
| Issue / PR comment | +0.5 | 0.15 (consulted) |
- Threshold: Evidence Score ≥ 1.0
- Heuristic: strongest signal → default CW
- Override: `/contribute @person weight:0.6`
- Commit count is not converted to time
- Per-role corrections validated by Monte Carlo simulation (1,000 scenarios)
| Role | Original CW | Calibrated CW | Bias |
|---|---|---|---|
| Business Owner | 1.0 | 1.15 | +0.15 |
| Product Manager | 0.6 | 0.65 | +0.05 |
| Architect | 0.6 | 0.65 | +0.05 |
| Developer | 1.0 | 1.0 | 0.00 |
5.4 Calculation — two variants
5.5 Mathematical guarantee
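The guarantee can be shown directly from the pieces the document defines elsewhere: per-item `Score = JS × CW × RS` (§10.3) and proportional splitting of confirmed capacity (§1). A minimal sketch, assuming a simple item shape (`id`, `js`, `cw`, `rs` keys are illustrative):

```python
# Hedged sketch of EDPA per-person allocation. The document defines
# Score = JS × CW × RS and guarantees Σ derived hours = capacity;
# the field names below are illustrative assumptions.

def derive_hours(capacity: float, items: list[dict]) -> dict[str, float]:
    """Split one person's confirmed capacity proportionally by item score."""
    scores = {it["id"]: it["js"] * it["cw"] * it["rs"] for it in items}
    total = sum(scores.values())
    if total == 0:
        return {}
    # Proportional split: the shares sum to 1, so hours sum back
    # to capacity by construction — the Σ = Capacity guarantee.
    return {k: capacity * s / total for k, s in scores.items()}

items = [
    {"id": "S-101", "js": 5, "cw": 1.0,  "rs": 1.0},
    {"id": "S-102", "js": 3, "cw": 0.6,  "rs": 0.5},
    {"id": "F-007", "js": 8, "cw": 0.25, "rs": 0.25},
]
hours = derive_hours(40.0, items)
```

Because each item's hours are `capacity × score / Σ score`, the sum telescopes back to `capacity` regardless of how many items there are or how skewed the scores are.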
6. Dual-view CW: two questions, one dataset
CW = 0.25 for a reviewer can mean two things. These are two different questions — the model answers both from the same data:
- Per-person: how is person P's capacity distributed among their items?
- Per-item: how is work on item X distributed among people?
| View | Question | Output | Guarantee |
|---|---|---|---|
| Per-person | How many hours did P spend on what? | Report, audit | Σ = capacity |
| Per-item | How many people and hours did item X cost? | Cost allocation | Σ shares = 100% |
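Both views can be computed from one dataset of derived hours, which demonstrates the two guarantees in the table. A sketch, assuming a `{person: {item: hours}}` shape for the derived-hours output (the shape itself is an assumption):

```python
# Hedged sketch: both EDPA views from one {person: {item: hours}} dataset.
# Names and values are illustrative.

hours = {
    "alice": {"S-101": 31.25, "S-102": 8.75},
    "bob":   {"S-101": 10.0,  "S-102": 30.0},
}

# Per-person view: Σ over items = that person's capacity.
per_person = {p: sum(items.values()) for p, items in hours.items()}

# Per-item view: pivot, then convert hours to shares per item.
per_item: dict[str, dict[str, float]] = {}
for p, items in hours.items():
    for item, h in items.items():
        per_item.setdefault(item, {})[p] = h

# Σ shares = 100% per item, by construction.
shares = {
    item: {p: h / sum(ph.values()) for p, h in ph.items()}
    for item, ph in per_item.items()
}
```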
7. Cadence configuration
Classic cadence (2w Iteration / 10w PI):

| Cycle | Duration | 1.0 FTE | 0.5 | 0.25 |
|---|---|---|---|---|
| Iteration | 2 weeks | 80h | 40h | 20h |
| PI | 10 weeks | 400h | 200h | 100h |

AI-native cadence (1w Iteration / 5w PI):

| Cycle | Duration | 1.0 FTE | 0.5 | 0.25 |
|---|---|---|---|---|
| Iteration | 1 week | 40h | 20h | 10h |
| PI | 5 weeks | 200h | 100h | 50h |
8. Learning loop
- CW: After 2–3 iterations, evaluate the heuristic. Did the PM's CW come out too low, or the Architect's too high?
- Job Size: A reference Story "3" is not a Feature "3". Scales are independent per level.
- AI: Time is reported for delivery, not minutes of coding. AI assistance shows up as higher velocity, not as smaller reports.
9. GitHub implementation
9.1 Custom fields
| Field | Type | Values |
|---|---|---|
| Issue Type | Issue type | Initiative, Epic, Feature, Story, Task, Bug |
| Job Size | Number | Fibonacci 1–20 |
| BV / TC / RR | Number | Fibonacci 1–20 |
| WSJF Score | Number | Auto (Action) |
| Planning Interval | Iteration | 5 or 10 weeks |
| Iteration | Iteration | 1 or 2 weeks |
| Team | Single select | Team values |
| Primary Owner | Assignee | Accountable owner |
9.2 GitHub Actions
| # | Action | Trigger | Function |
|---|---|---|---|
| 1 | WSJF Calculator | Field change | Auto WSJF calculation |
| 2 | Contributor Detector | PR merge / review | Contributor detection + evidence |
| 3 | Iteration Close | Manual dispatch | Snapshot + reports (MD/JSON/XLSX) + per-item |
| 4 | PI Close | Manual dispatch | Iteration aggregation |
| 5 | Velocity Tracker | Iter/PI close | Velocity JSON + dashboard |
9.3 Branch naming & DoR
CI check blocks PRs without reference to an issue (S-XXX, F-XXX, E-XXX). DoR: Issue Type, Parent, Job Size, BV+TC+RR, Owner.
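The reference check can be sketched as a simple pattern test. The exact implementation is not specified in the document; the regex and function below are assumptions that match the S-XXX / F-XXX / E-XXX convention named above.

```python
# Hedged sketch of the §9.3 CI check — blocks PRs with no work-item
# reference (S-XXX, F-XXX, E-XXX). The regex is an assumption.
import re

ITEM_REF = re.compile(r"\b[SFE]-\d+\b")

def has_item_ref(text: str) -> bool:
    """True if a branch name, PR title, or commit message references an item."""
    return bool(ITEM_REF.search(text))
```

In practice this would run as a required status check over the branch name and PR body, failing the check (and thus blocking merge) when no reference is found.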
10. Reports and audit
10.1 Pipeline
10.2 Freeze rule
After Iteration Close: snapshot is frozen. Evidence is not overwritten in-place. Every correction is a new revision. Essential for audit defense.
10.3 Audit principle
Provability rests on 5 pillars:
- GitHub delivery evidence
- Capacity registry (YAML)
- Frozen snapshot (reproducible input)
- Reproducible calculation (Score = JS × CW × RS)
- Signed output (BankID, law 21/2020 Sb.)
11. Assumptions and risks
- All items are closed (undelivered items are moved)
- Capacity confirmed at Iteration Planning
- Branch naming enforced (CI check)
- Job Size consistent per level
- CW is calibrated after the first iterations
| Risk | Mitigation |
|---|---|
| Auditor rejects | Methodology + snapshots + BankID |
| CW mismatch | Override + calibration |
| Commit without S-/F-/E-XXX | CI check blocks PR |
| PM/Arch without commits | Comments + /contribute |
| 0 items for a person | Process escalation |
12. Comparison with alternatives
Fixed Split
Pre-defined buckets (e.g., 60% Dev, 20% Arch, 20% QA). Hours are split by fixed role ratios, regardless of actual contribution. Simple, but inaccurate.
EDPA v1.0.0
Evidence-Driven Proportional Allocation. Hours are derived automatically from GitHub delivery evidence (commits, PRs, reviews). Mathematical guarantee: Σ = Capacity.
Manual timesheets
Each team member fills in hours by hand. Subjective, administratively burdensome, offers no per-item view, and is unauditable without additional evidence.
| Property | Fixed Split | EDPA v1.0.0 | Manual timesheets |
|---|---|---|---|
| Fixed buckets | Yes | No | No |
| Empty levels | Problem | Do not exist | N/A |
| Per-person view | Yes | Yes (primary) | Yes |
| Per-item view | No | Yes (dual-view) | No |
| Cross-functional | Limited | Full | Full |
| Automation | Medium | High | None |
| Math. guarantee | Complex | Native | No |
13. Implementation plan
| Phase | Time | Contents |
|---|---|---|
| Day 1 | 6h | GitHub org, Projects setup, custom fields |
| Week 1 | 3 days | Actions 1–2 (WSJF + Contributor Detector) |
| Week 2 | 2 days | Actions 3–5 (Iteration Close + PI Close + Velocity) |
| Iteration 1 | 1–2w | Pilot operation, first reports, CW calibration |
| Retro PI 1 | 1 day | Cadence, CW accuracy, velocity, dual-view validation |