Your team adopted AI coding assistants six months ago. PRs are shipping faster. Lines of code per developer are up. But is the code actually better — or are you just shipping more of it?
Most teams can't answer this question because their existing metrics don't distinguish between human-written and AI-generated code. autter gives you that visibility.
## The metrics gap
Traditional engineering metrics were designed for a world where every line of code had a human author. They measure throughput (PRs merged, lines changed), velocity (cycle time, lead time), and stability (change failure rate, MTTR). These metrics still matter — but they're incomplete in the AI era.
Consider: your team's PR throughput doubled after adopting Copilot. Great. But your change failure rate also increased by 40%. Is the trade-off worth it? Which PRs are causing the failures? Are they correlated with the percentage of AI-generated code in the diff?
Without instrumentation at the merge layer, you're flying blind.
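To make that correlation question concrete, here's a minimal sketch of the arithmetic involved. The `prs` array and its `ai_ratio` / `caused_failure` fields are hypothetical sample data, and the point-biserial correlation shown is just one reasonable way to relate AI ratio to failures — not autter's internal implementation:

```javascript
// Sketch: correlate per-PR AI-authored ratio with change failures.
// `prs` is hypothetical sample data, not a real autter response.
function aiFailureCorrelation(prs) {
  const n = prs.length;
  const meanRatio = prs.reduce((sum, p) => sum + p.ai_ratio, 0) / n;
  const failRate = prs.filter((p) => p.caused_failure).length / n;
  let cov = 0;
  let varRatio = 0;
  for (const p of prs) {
    const dr = p.ai_ratio - meanRatio;
    const df = (p.caused_failure ? 1 : 0) - failRate;
    cov += dr * df;
    varRatio += dr * dr;
  }
  // Sum of squared deviations for a 0/1 variable is n * p * (1 - p)
  const varFail = n * failRate * (1 - failRate);
  return cov / Math.sqrt(varRatio * varFail); // point-biserial correlation
}

const prs = [
  { ai_ratio: 0.8, caused_failure: true },
  { ai_ratio: 0.7, caused_failure: true },
  { ai_ratio: 0.3, caused_failure: false },
  { ai_ratio: 0.2, caused_failure: false },
];
console.log(aiFailureCorrelation(prs).toFixed(2)); // prints 0.98
```

A value near 1 means failures cluster in high-AI PRs; a value near 0 means the AI ratio isn't the story.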
## What autter tracks
autter captures metrics at the point where code quality and team behaviour intersect — the pull request review.
### PR composition analysis
For every pull request, autter tracks the ratio of AI-generated to human-written code:
```javascript
// Example: autter analytics API
const prAnalysis = await autter.analytics.getPRComposition({
  repo: "acme/backend",
  pr: 1842,
});

// {
//   total_lines_changed: 347,
//   ai_authored_lines: 198,
//   human_authored_lines: 149,
//   ai_ratio: 0.57,
//   issues_found: 4,
//   issues_by_source: { ai: 3, human: 1 },
//   review_cycles: 1,
//   time_to_merge: "4.2h"
// }
```

### Team-level quality dashboard
autter aggregates PR-level data into team and organisation dashboards:
| Metric | What it tells you |
|---|---|
| AI-authored ratio | What percentage of merged code is AI-generated? |
| Issue density by source | Do AI-authored lines have more issues per KLOC than human-authored? |
| Review cycle correlation | Do high-AI PRs require more review cycles? |
| Revert rate by source | Are AI-authored changes reverted more often? |
| Time to first flag | How quickly does autter catch issues vs. human reviewers? |
| Convention compliance | Are AI tools following your team's established patterns? |
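As an illustration, the issue-density row can be derived from per-PR composition data shaped like the `getPRComposition` response shown earlier. The aggregation below is a sketch of the arithmetic with invented sample PRs, not autter's implementation:

```javascript
// Sketch: aggregate per-PR composition data into issues per KLOC by source.
// Field names follow the getPRComposition response above; the two sample
// PRs are invented for illustration.
function issueDensityPerKloc(prs) {
  const totals = { ai: { lines: 0, issues: 0 }, human: { lines: 0, issues: 0 } };
  for (const pr of prs) {
    totals.ai.lines += pr.ai_authored_lines;
    totals.ai.issues += pr.issues_by_source.ai;
    totals.human.lines += pr.human_authored_lines;
    totals.human.issues += pr.issues_by_source.human;
  }
  return {
    ai: (totals.ai.issues / totals.ai.lines) * 1000,
    human: (totals.human.issues / totals.human.lines) * 1000,
  };
}

const density = issueDensityPerKloc([
  { ai_authored_lines: 198, human_authored_lines: 149, issues_by_source: { ai: 3, human: 1 } },
  { ai_authored_lines: 500, human_authored_lines: 300, issues_by_source: { ai: 7, human: 2 } },
]);
// density.ai ≈ 14.3 issues/KLOC vs. density.human ≈ 6.7 issues/KLOC
```

Normalising by lines of code, rather than by PR count, is what makes AI and human contributions comparable when AI authors a larger share of each diff.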
### Trend analysis
The most valuable metrics aren't snapshots — they're trends. autter tracks how your quality indicators change over time and correlates them with events:
- New team member onboarding — does AI-authored issue density spike when new developers join?
- Tool changes — did switching from Copilot to Cursor change your AI code quality?
- Rule updates — did adding a new autter rule reduce a specific class of issues?
- Sprint pressure — does quality degrade near deadlines? By how much?
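Under the hood, a trend like this is just per-PR data bucketed by time period. The sketch below assumes hypothetical `period` and `ai_issues` fields to show the shape of the computation — autter computes these aggregates for you:

```javascript
// Sketch: a quality trend is per-PR data bucketed by time period.
// The `period` and `ai_issues` fields are hypothetical sample data.
function aiIssueDensityTrend(prs) {
  const buckets = new Map();
  for (const pr of prs) {
    const b = buckets.get(pr.period) ?? { lines: 0, issues: 0 };
    b.lines += pr.ai_authored_lines;
    b.issues += pr.ai_issues;
    buckets.set(pr.period, b);
  }
  return [...buckets].map(([period, b]) => ({
    period,
    density: (b.issues / b.lines) * 1000, // issues per KLOC of AI-authored code
  }));
}

const trend = aiIssueDensityTrend([
  { period: "sprint-14", ai_authored_lines: 1200, ai_issues: 9 },
  { period: "sprint-15", ai_authored_lines: 1500, ai_issues: 21 },
]);
// A sprint-over-sprint jump in density is the kind of shift worth
// correlating with onboarding, tool changes, or deadline pressure.
```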
## Actionable insights, not vanity metrics
Data without action is overhead. autter surfaces specific, actionable recommendations based on your metrics:
### Bottleneck detection
"Reviews from the platform team are averaging 3.2 days. 67% of their review time is spent on convention enforcement that autter could automate. Consider enabling auto-merge for convention-only findings."
### Quality regression alerts
"Test coverage in `src/payments/` dropped 8% over the last two sprints. 12 PRs merged without new tests — 9 of them were >70% AI-authored. Suggested action: enable the `require-test-coverage` rule for this module."
### AI tool effectiveness
"PRs using Cursor have 1.4x fewer autter findings than PRs using Copilot for the same codebase. The difference is concentrated in naming conventions and import patterns."
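A comparison like that "1.4x" figure is a grouped ratio: average findings per PR, split by assistant. The sketch below assumes a hypothetical per-PR `tool` attribution field and `findings` count, purely to show the shape of the calculation:

```javascript
// Sketch: tool-effectiveness comparison as average findings per PR,
// grouped by assistant. The `tool` and `findings` fields are hypothetical.
function findingsPerPrByTool(prs) {
  const byTool = new Map();
  for (const pr of prs) {
    const t = byTool.get(pr.tool) ?? { prs: 0, findings: 0 };
    t.prs += 1;
    t.findings += pr.findings;
    byTool.set(pr.tool, t);
  }
  return Object.fromEntries(
    [...byTool].map(([tool, t]) => [tool, t.findings / t.prs])
  );
}

const avg = findingsPerPrByTool([
  { tool: "cursor", findings: 2 },
  { tool: "cursor", findings: 3 },
  { tool: "copilot", findings: 4 },
  { tool: "copilot", findings: 3 },
]);
// avg.copilot / avg.cursor gives the per-tool ratio for this sample
```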
## Integration with existing tools
autter's analytics complement — not replace — your existing engineering intelligence stack:
| Tool | What it measures | What autter adds |
|---|---|---|
| GitHub Insights | PR throughput, contributor activity | AI vs. human attribution, quality-per-line metrics |
| LinearB / Sleuth | DORA metrics, cycle time | AI-correlated change failure rate |
| SonarQube | Static analysis coverage, code smells | Contextual quality scoring, convention compliance |
| Datadog / Grafana | Production error rates | Correlation between merge-time findings and runtime failures |
## Export and API access
All autter metrics are available via API and can be exported to your BI tools:
```bash
# Export team metrics as CSV
npx autter metrics export \
  --team backend \
  --period 90d \
  --format csv \
  --output ./reports/q1-quality.csv

# Query via API
curl -H "Authorization: Bearer $AUTTER_TOKEN" \
  "https://api.autter.dev/v1/metrics/team/backend?period=30d"
```

## Getting started
The analytics dashboard is available to all autter users. Connect your repository and data starts flowing from your next pull request — no additional configuration needed.
```bash
# View a quick summary in your terminal
npx autter metrics --team backend --period 30d
```

You can't improve what you can't measure. And in the AI coding era, what you need to measure has changed.
