A structured, public, versioned framework for scoring the structural properties of news events.
GRIN v1.0 — last updated May 2026
Audit our worldview
VibesWire applies a structured analytical framework called GRIN to every story we cover. This page documents how that framework works, what editorial commitments it encodes, how we apply it consistently using AI tooling, how we update it over time, and which intellectual traditions it draws from.
The goal is total transparency. A reader who disagrees with a VibesScore should be able to point at the framework, identify the specific dimension or weighting they take issue with, and make a substantive argument about it. That is a different stance from the standard newsroom claim of unbiased reporting. We are biased, in specific and stated ways, and we have made the bias structural rather than gestalt — encoded once in a public framework rather than reinvented per article by individual writers. You can audit our worldview. You cannot audit the New York Times' editorial gestalt.
Why GRIN exists
The dominant frameworks for analyzing news are inadequate. Left-right bias scoring (AllSides, Ad Fontes, Media Bias/Fact Check) categorizes outlets along a single ideological axis, which is a category error: ideology is not the most important property of a story. Who benefits, who pays, what gets concentrated, what gets distributed, what gets created, what gets destroyed — these are the questions that determine whether an event is good or bad for the systems people actually live inside. Ideological position is downstream.
Sentiment analysis is similarly thin. A story can be neutral in tone and devastating in consequence, or alarmist in tone and structurally trivial. Tone is a feature of writing; structure is a feature of the world.
GRIN is built to score the structural properties of events. It asks five questions of every story:
Does this generate new capability, or just redistribute existing value?
Does this build resilience, or create fragility?
What are the friction costs and extraction overhead?
Is the resistance to change here healthy filtering, or incumbent protection?
How aggressively is value being concentrated away from the many to the few?
Five questions, five 0–5 scores, one composite VibesScore (0–100), one verdict label (Extractive / Mixed / Generative).
The five dimensions
G
Generativity
Does this create new capability or just redistribute existing value?
Generativity scores the degree to which an event produces something that did not exist before — new productive capacity, new institutional capability, new technology, new market structure, new public good. A merger that consolidates two existing businesses into one is low-generativity even if it is profitable. A research breakthrough that opens a new field is high-generativity even if it has no immediate commercial application. The hard cases are events that appear generative but actually redistribute — the question is always whether the pie grew or just changed hands.
Scale: 0 = nothing generative happens. 5 = substantial new capability is created.
R
Resilience
Does this build systemic resilience or create fragility?
Resilience scores the degree to which an event hardens or weakens the systems people depend on — supply chains, institutions, infrastructure, social trust, financial stability. Concentration usually reduces resilience because it creates single points of failure. Redundancy, distribution, and optionality usually increase it. The dimension is concerned with what happens when something goes wrong, and whether the system has buffers, alternatives, and fail-safes when it does.
Scale: 0 = the event creates serious new fragility. 5 = the event substantially hardens the systems it touches.
Ge
Generative Efficiency
What are the friction costs and extraction overhead?
Generative Efficiency scores the operational and transactional friction the event introduces or removes. High friction — long approval cycles, regulatory capture, rent-seeking intermediaries, deadweight loss — is bad. Low friction is good, unless the friction was serving a legitimate protective function (in which case the dimension interacts with R and N). The dimension is not anti-regulation; it is anti-deadweight. A regulation that prevents a real harm is not friction. A regulation that exists because of historical lobbying and prevents nothing is friction.
Scale: 0 = maximum friction or extraction overhead. 5 = meaningful friction reduction with no protective loss.
N
Novelty Openness
Is resistance to change healthy filtering or incumbent protection?
Novelty Openness scores the degree to which the event preserves or attacks the system's capacity to adapt. Some resistance to novelty is healthy — it filters out fads and prevents premature commitment. Some resistance is purely incumbent protection dressed in safety language. The question is whether the resistance to change is information-gated (we will let new things in once we know they are safe) or status-gated (we will not let new things in because they threaten current beneficiaries).
Scale: 0 = pure incumbent protection. 5 = genuine, honest filtering of new approaches.
X
Extractive
Inverted
How aggressively is value being concentrated away from the many to the few?
Extractive is the verdict overlay. It does not measure capability, durability, friction, or openness. It measures who ends up with the gains and who absorbs the costs. A high score means the event concentrates value upward; a low score means value flows widely or stays distributed.
Scale: 0 = purely generative, value flows broadly. 5 = totally extractive, gains concentrate while costs distribute.
Polarity note. Extractive is the only inverted dimension. For G, R, Ge, and N, higher scores are better. For Extractive, higher scores are worse. This inversion is intentional — Extractive is the verdict, not a virtue.
The VibesScore (0–100)
The four positive dimensions and the inverted Extractive score combine into a single composite:
VibesScore = ((G + R + Ge + N + (5 − Extractive)) / 25) × 100
A perfect score of 100 requires 5/5 on all four positive dimensions and 0/5 on Extractive — a maximally generative, resilient, efficient, open, non-extractive event. A score of 0 requires the inverse. Most real-world events score between 20 and 70.
The verdict label is derived from VibesScore plus the standalone Extractive score:
Extractive: VibesScore ≤ 35, or Extractive ≥ 4
Mixed: 35 < VibesScore < 65
Generative: VibesScore ≥ 65 and Extractive ≤ 2
The Extractive label can override the score if Extractive is ≥ 4 even when other dimensions are positive. This reflects an editorial commitment: events that concentrate value upward are extractive even if they have other generative properties. The verdict is about who ends up holding the gains.
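The label rules can be written as a short decision function (a Python sketch; the thresholds are the published ones, while the function name and the override-first ordering of checks are our reading of the rules):

```python
def verdict(score: float, extractive: int) -> str:
    """Derive the verdict label from the VibesScore and the standalone
    Extractive score. The Extractive override fires first, so a high
    composite cannot rescue a heavily extractive event."""
    if extractive >= 4 or score <= 35:
        return "Extractive"
    if score >= 65 and extractive <= 2:
        return "Generative"
    return "Mixed"  # includes high composites with Extractive == 3

print(verdict(24, 4))  # Extractive
print(verdict(70, 1))  # Generative
print(verdict(50, 2))  # Mixed
```

The Mixed fallback also catches the edge case the table leaves implicit: a VibesScore of 65 or more with an Extractive score of exactly 3 is neither Generative (which requires ≤ 2) nor overridden (which requires ≥ 4).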
What GRIN scores — and what it doesn’t
GRIN scores the structural properties of events. It does not score:
The truth of the underlying reporting. GRIN assumes the source article is factually accurate. Fact-checking is a separate function we perform editorially and flag in the piece.
The ideological alignment of actors. A policy from a left-wing government and an identical policy from a right-wing government will score the same.
The moral character of individuals. Cohen's GameStop bid scores 24/100 because of the structural properties of the proposed transaction, not because of any judgment about Cohen as a person.
Predicted outcomes. GRIN scores what the event currently is and the trajectory it implies. It does not score speculative futures beyond the explicit “20-year trajectory” prompt that frames every analysis.
Aesthetic or narrative quality of the news coverage itself. GRIN evaluates the event, not the writing.
Editorial commitments encoded in the framework
GRIN is not neutral. It encodes specific editorial commitments. Stating them explicitly is the price of asking readers to take the scoring seriously:
Concentration of value is presumptively bad. The framework treats extractive concentration as the central diagnostic. A different framework could prioritize property rights, labor share, or environmental cost. We chose extraction because it most cleanly captures who-gains-vs-who-pays across diverse story types.
Resilience matters more than efficiency in mature systems. GRIN scores Resilience and Generative Efficiency as peers, but in scoring conflicts we tend to favor R when systems are at scale. A small efficiency gain that introduces a fragile chokepoint scores worse than a small efficiency loss that preserves redundancy.
Novelty is presumptively positive but not automatically so. GRIN's N dimension explicitly distinguishes healthy filtering from incumbent protection. We do not treat all change as good or all stability as bad.
Twenty-year time horizons matter. Every analysis explicitly considers what happens if the patterns in the story compound for two decades. This biases the framework against short-term gains that produce long-term fragility.
The status substrate is real and analytically important. GRIN treats narrative, attention, and brand value as economic forces that can be extracted from and concentrated, not as epiphenomena of the “real” economy. A meme-stock acquisition bid is analyzable through the same lens as a leveraged buyout.
A reader who disagrees with any of these commitments has identified a precise location for disagreement. That is the point.
The role of AI
GRIN is applied by an LLM (currently Claude, from Anthropic). This is deliberate, and it has implications worth being explicit about.
What the AI does. Applies the GRIN framework to a source article, produces dimensional scores, generates the extraction graph and key stats, drafts the prose components of the analysis (verdict, extraction map, 20-year trajectory, what-to-watch), and selects appropriate visualizations from the chart palette.
What the AI does not do. Decide the framework. The dimensions, scoring guides, polarity, weighting formula, and worldview are authored by VibesWire's editorial team and encoded in a system prompt that is itself versioned and (as of v2.0) public.
Why this matters. The standard concern about AI in journalism is that the model has its own biases that contaminate output. The standard counter-concern is that AI eliminates the human judgment that makes journalism valuable. GRIN's design splits the difference: editorial judgment is concentrated in framework design, where it is explicit and contestable; framework application is delegated to a system that applies it consistently across thousands of stories without per-article human bias injection.
Locus of bias. The bias is upstream. It lives in our choice of dimensions, our scoring guides, our example mappings. It does not live in per-article execution. This shifts where readers can contest us — they argue about the framework, not about whether we treated story X more harshly than story Y.
Consistency caveat. LLM application is approximately consistent, not exactly consistent. The same article scored multiple times will produce a small distribution of scores rather than identical numbers. We score each article once at temperature 0 with deterministic seeds, and we publish quarterly variance audits as part of the Methodology Log. If you re-run the framework on a piece and get a slightly different score, that is the noise floor speaking, and the magnitude is documented.
What we are not claiming. That AI removes human bias. Bias does not disappear when you use a model; it moves to where the model was specified. We are claiming that moving bias upstream into a stated framework is an improvement in epistemic transparency over the alternative of having it implicitly distributed across writer-by-writer editorial judgment.
The methodology log
GRIN is versioned. Major versions reflect philosophical shifts (added or removed dimensions, polarity changes, formula revisions). Minor versions reflect refinements to scoring guides, example mappings, or visualization selection logic. Patch versions reflect bug fixes in prompt engineering with no scoring impact.
Cadence. We publish a Methodology Log entry every quarter, even if no changes were made. A “no changes this quarter” entry is itself a published commitment.
What each entry contains. Version number and effective date; summary of changes; reasoning for each change (what pattern in our scoring or in reader feedback motivated it); calibration delta (how the new version scores a held-out sample of 50 prior articles compared to the old version); backfill policy for the change.
Per-article version stamps. Every published article notes the GRIN version it was scored under. We do not silently re-score historical pieces. If a dimension is added or weights shift, you can see which articles were scored before and after the change.
Calibration set. We maintain a public set of 50 historical articles with their original scores under each version. Anyone can reproduce our calibration deltas independently or argue that our version-to-version drift is larger than we claim.
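As an illustration of the kind of calibration delta a reader could reproduce (the helper name and sample data below are hypothetical; the Methodology Log defines the actual procedure):

```python
def calibration_delta(old_scores: dict[str, float],
                      new_scores: dict[str, float]) -> dict[str, float]:
    """Compare two framework versions on the same held-out article set.

    Each dict maps article id -> VibesScore under that version. Returns
    the mean signed shift and the mean absolute shift across the set.
    """
    shared = old_scores.keys() & new_scores.keys()
    diffs = [new_scores[a] - old_scores[a] for a in shared]
    n = len(diffs)
    return {
        "mean_shift": sum(diffs) / n,
        "mean_abs_shift": sum(abs(d) for d in diffs) / n,
    }

# Hypothetical three-article held-out sample:
old = {"a1": 40.0, "a2": 60.0, "a3": 24.0}
new = {"a1": 44.0, "a2": 58.0, "a3": 24.0}
print(calibration_delta(old, new))
```

Reporting both the signed and absolute shift matters: a near-zero mean shift can hide large offsetting per-article movements that only the absolute figure reveals.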
What we will not change without strong evidence:
The five-dimension structure (G, R, Ge, N, Extractive)
The 0–5 scoring scale
The polarity inversion of Extractive
The 20-year trajectory framing
The commitment to publishing per-article version stamps
These are the framework's structural commitments. Changing them constitutes a major version increment with public reasoning.
Feedback and adversarial review
The systemic-bias amplification risk in any consistent framework is real: blind spots scale with application. We address this through three mechanisms, all of them public:
Reader submissions. A standing form for “things GRIN missed” — patterns, dimensions, or considerations our framework fails to capture. Submissions are anonymized and the patterns are summarized in each quarter's Methodology Log.
Commissioned adversarial review. Each year we hire a critic with explicit hostility to the framework's politics — someone who would build a different framework if asked — to audit a sample of our scoring and identify systematic gaps. The critic's report is published in full alongside our response. We pay critics regardless of whether their findings flatter us.
Cross-framework testing. Quarterly, we score a sample of articles through deliberately different rubrics (a property-rights-first frame, a labor-first frame, a national-security frame) and publish the divergences. This is not because we think those frames are correct; it is because they make the editorial choices encoded in GRIN visible by contrast.
The point of all three mechanisms is the same: the framework owner is incentivized not to find the framework's gaps. Outsourcing gap detection to people with different incentives is the only credible solution.
Glossary
Extractive (verdict). A label applied to events where value is concentrated upward and costs are distributed downward.
Extraction graph. A node-and-edge diagram showing how value, cost, or harm flows through a story. Sources, intermediate flows, named actors, transfers, and outcomes are visualized as a causal chain.
Extraction map. The prose version of the extraction graph, used in older articles and as a fallback when the graph cannot be rendered.
Generative. Producing new capability, capacity, or value rather than redistributing existing value.
Key stats. The 3–4 headline numeric takeaways that anchor the scale of a story. Each is a number with a short uppercase label.
Twenty-year trajectory. The explicit framing question every GRIN analysis asks: what does this story imply if its patterns compound for two decades?
VibesScore. The composite 0–100 score derived from the five dimensional scores via the published formula.
What to watch. The 4–7 follow-on indicators each piece identifies — concrete signals that would change the analysis or confirm the trajectory.
Intellectual genealogy and related reading
GRIN did not appear from nothing. It draws on three traditions, and readers who want to engage seriously with the framework should engage with its sources.
Editorial spirit — extraction and value-flow analysis
Matt Stoller, BIG and Goliath. Stoller's work is the contemporary anchor for anti-monopoly extraction analysis applied to current events. Goliath is the long-form historical case; BIG is the weekly newsletter. Stoller is mono-thematic where GRIN is multi-dimensional, but the moral structure is shared: who is concentrating, who is being hollowed out, what does the political economy look like underneath the official story.
Ben Hunt, Epsilon Theory. Hunt's analytical posture — that news is a narrative event whose economic consequences depend on how the story propagates rather than on the underlying facts — is the closest contemporary sibling to GRIN's analytical worldview. The “Things Fall Apart” essays, the “common knowledge game” frame, and the “Narrative Machine” pieces are essential reading. Hunt does not score, but he sees what GRIN scores.
Doomberg. Energy and macro analysis with structural framing. The analytical move — what is actually happening underneath the official narrative, who benefits from the framing, who absorbs the consequences — is GRIN's move applied to commodity markets and energy policy.
Oren Cass, American Compass, The Once and Future Worker. Cass's productive-vs-extractive distinction in economic policy is a direct cognate of GRIN's Extractive dimension applied at the policy level. His framework is more politically conservative than GRIN's tends to be, which is exactly why it is useful — it shows that the productive/extractive lens is not inherently a left or right project.
Ben Thompson, Stratechery. The discipline of applying coherent analytical frameworks (Aggregation Theory, the smiling curve, conservation of attractive profits) to specific business events is the methodological model GRIN imitates at a different scale. Thompson is mono-dimensional and tech-focused; GRIN is multi-dimensional and cross-domain. The practice of “apply a coherent lens to today's news” is the same.
Eurasia Group, Top Risks annual report. The structural template for multi-dimensional risk scoring with explicit weights and probabilities. Eurasia scores entities and risks; GRIN scores stories. The form is identical.
MSCI ESG Ratings, Sustainalytics methodology documents. Multi-dimensional scoring of companies along environmental, social, and governance axes. Their methodology revision practices — versioned, public, with calibration disclosures — are the model for GRIN's Methodology Log.
V-Dem (Varieties of Democracy) project. The most rigorous multi-dimensional scoring system in political science. V-Dem versions its indices, publishes methodology updates, maintains held-out calibration sets, and treats framework drift as a first-class methodological problem. Their public methodology documents are the model for the kind of transparency GRIN is reaching toward.
Reporters Without Borders, World Press Freedom Index. A multi-dimensional country-level rubric applied annually with explicit dimensions and public methodology. Same lineage as V-Dem; different topic.
Heritage Foundation, Index of Economic Freedom. A different politics, same structural form: multi-dimensional rubric, public methodology, annual update cycle, held-out calibration. Worth reading for the form even if you reject the politics.
Moody’s and S&P methodology documents. The bond-rating agencies are the original multi-dimensional rubric scorers and have spent a century working out how to handle methodology revisions, calibration drift, and per-rated-entity version stamps. Their published methodology revision protocols are the operational model for GRIN's Methodology Log.
Academic ancestors — news values and structural news analysis
Daniel Hallin, The Uncensored War (1986). Hallin's “three spheres” model — sphere of consensus, sphere of legitimate controversy, sphere of deviance — is the foundational academic framework for analyzing how news coverage structurally treats different topics. GRIN's editorial commitment to scoring structural properties rather than tone owes Hallin a direct debt.
Galtung & Ruge, “The Structure of Foreign News” (1965). The original multi-dimensional academic framework for analyzing news content. Galtung and Ruge identified factors — frequency, threshold, unambiguity, meaningfulness, consonance — that determine which events become news. Their framework is descriptive of news selection rather than normative about news evaluation, but the multi-dimensional rubric form is the genealogical ancestor.
Thorstein Veblen, The Theory of the Leisure Class (1899). Veblen's analysis of conspicuous consumption and pecuniary emulation is the foundational text for understanding why mature economies converge on status competition rather than productive expansion. GRIN's Extractive dimension scores phenomena that Veblen described 125 years ago.
René Girard, Things Hidden Since the Foundation of the World. Mimetic theory — that desire is mediated through models rather than directly oriented toward objects — is the philosophical substrate underneath any framework that takes status, narrative, and concentration seriously. GRIN's analysis of meme-stock dynamics and narrative-as-asset would be impossible without Girardian foundations.
Methodological inspiration — calibration, transparency, and self-correction
Philip Tetlock, Superforecasting and the Good Judgment Project. Tetlock's work on prediction calibration is the methodological template for treating analytical frameworks as falsifiable. The practices of public scoring, calibration tracking, and bias correction are the source for GRIN's commitment to a held-out calibration set.
Ray Dalio, Principles and Bridgewater’s Daily Observations. The institutional practice of writing down a worldview, applying it consistently, and updating it through explicit feedback loops. Bridgewater's culture of “principled disagreement” is the model for our adversarial review commitment.
See GRIN applied to the day’s news.
Every article on VibesWire is scored, dated, and version-stamped.