Methodology · Layer 05
VERIDEX.
A fixed set of validation gates. Coverage of analytical, statistical, regulatory, and presentation integrity. Reports do not leave the firm without passing.
VERIDEX is the audit layer of the Power In Numbers methodology. The firm produces analytical work in three operating modes — for an individual, for a firm, and for a sovereign — and every deliverable in every mode passes through the same set of gates before it is allowed to leave. The gates are coverage requirements, not style preferences. A report that fails a gate is revised until it passes, or it is not shipped.
The gates are published. The engines that run them are not. Power In Numbers treats the operating prompts behind AI Mirror, VERIDEX BI USA, and the Sovereign Mirror as the firm’s intellectual property — the product of a lengthy, iterative prompt-engineering programme — and does not disclose them. What this page describes is the standard the engines are required to satisfy. The reader does not need the engine to inspect the work.
The standard is forensic. Every quantitative claim must be traceable to a named source. Every projection must disclose its method and its uncertainty. Every named institution must be independently verifiable. Every recommendation must be paired with a Counterfactual Pause and an Opportunity Cost Assessment. Every Decision-Maker Briefing must be a separate document. The principle is simple: the deliverable is constructed so that an adversarial reviewer can audit it without access to the firm.
Three engines. One audit. One standard.
VERIDEX Core + BMEI module
The forensic auditor. Operates above the engines. Adjudicates source quality, statistical rigour, regulatory accuracy, and presentation discipline. Inherits no allegiance to any engine and no commercial interest in any deliverable’s conclusion.
AI Mirror
For an individual evaluating market entry.
A firm-built advisory engine. Subject is the principal. Output is calibrated to a single decision-maker preparing to commit capital, time, or reputation to a new market.
VERIDEX BI — USA
For a US firm evaluating a US expansion.
A firm-built advisory engine specialised for the US regulatory and competitive landscape. Subject is the operator and the board. Output is calibrated to a domestic expansion decision.
Sovereign Mirror
For a fiscal authority evaluating its own allocation.
A firm-built engine for fiscal-allocation-to-impact cartography. Subject is the sovereign, analysing itself. Output is calibrated to a Ministry of Finance, a development-partner audience, or an oversight body.
22 canonical gates · 4 engine extensions
Every deliverable, in every engine, passes through the same fixed gate set before publication. Each gate is a binary pass/fail check against a named integrity standard. A failed gate is a refused deliverable until the failure is repaired.
The 22 gates in full.
Organised by integrity family, in the order each family is invoked during an audit. The numbering is canonical — the gates are addressed by number throughout the firm and across every engagement.
Sourcing & data integrity
- G·01
Every quantitative figure traces to a named, verifiable source.
No fiscal figure, market estimate, multiplier, or rate appears in the deliverable without a traceable citation to a primary document, statutory source, or verified knowledge-base entry. Untraceable figures fail the gate, regardless of how confident the analyst is in their accuracy.
- G·02
Every named institution and program is independently verifiable.
Ministries, regulators, programs, statutes, treaties, and incentive schemes named in the deliverable are spot-checked against the issuing authority. AI-generated content is notorious for fabricating plausible institutions; this gate is the audit's answer to that failure mode.
- G·03
No statistic appears without a named source.
Percentages, growth rates, market sizes, and quantitative claims are paired with an inline citation to the source — not "studies show," not "industry reports," not "common knowledge." The cited source must be retrievable by a reader who is willing to verify.
- G·19
Source tier is disclosed.
Every load-bearing figure carries an explicit tier marker (Tier 1 sovereign / multilateral, Tier 2 economic intelligence, Tier 3 sector authority, Tier 4 commercial estimate, Tier 5 grey literature). Tier 4 figures cannot be presented next to Tier 1 data without visual differentiation.
- G·20
Temporal provenance is tagged.
Data older than two years, or sourced from a known-lag publication (e.g., national statistics with a multi-year reporting cycle), carries a temporal tag indicating the data vintage. The reader sees the age of the figure they are deciding on.
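The tier and vintage rules in Gates 19 and 20 are mechanical enough to sketch in code. Below is a minimal tagging routine, assuming the tier labels above and the two-year staleness threshold; the field names, marker format, and example figure are illustrative, not the firm's actual notation:

```python
from dataclasses import dataclass
from datetime import date

# Tier labels follow the published five-tier hierarchy (Gate 19).
TIER_LABELS = {
    1: "T1 sovereign/multilateral",
    2: "T2 economic intelligence",
    3: "T3 sector authority",
    4: "T4 commercial estimate",
    5: "T5 grey literature",
}

STALE_AFTER_YEARS = 2  # Gate 20: older data carries a vintage tag


@dataclass
class Figure:
    value: float
    source: str
    tier: int
    published: date

    def marker(self, today: date) -> str:
        """Render the tier marker, appending a vintage tag when stale."""
        tag = TIER_LABELS[self.tier]
        if (today - self.published).days / 365.25 > STALE_AFTER_YEARS:
            tag += f" · vintage {self.published.year}"
        return tag


# A Tier-1 figure from 2021, audited in 2025, surfaces its age.
fig = Figure(4.2e9, "IMF Article IV", tier=1, published=date(2021, 6, 1))
print(fig.marker(date(2025, 1, 1)))  # T1 sovereign/multilateral · vintage 2021
```

The point of the sketch is Gate 19's pairing rule: a Tier 4 marker rendered next to a Tier 1 marker is visually distinct by construction.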
Analytical integrity
- G·04
No advocacy or partisan language in analytical sections.
The analytical core uses descriptive, not normative, language. "We recommend," "the government should," and partisan framing are prohibited. The same allocation question, asked of two opposing administrations, must produce the same scoring.
- G·05
A different question must produce different content.
Two deliverables addressing two different questions cannot be near-identical boilerplate copies of each other. The body of each report must reflect the specific question being asked. Generic content is a failed gate.
- G·06
Score-to-narrative alignment.
Where the deliverable scores options or risks numerically, the surrounding narrative must match the scores. A scenario rated low on a given dimension cannot be discussed as if it scored high. Disagreement between the math and the prose is a failed gate.
- G·17
Absorption and deployment are scored against historical data.
Claims that a market or institution can absorb capital, talent, or programs at a given rate must be calibrated to the actual historical deployment rate of comparable initiatives — not assumed, not impressionistic. Optimism about deployment is the most common failure mode in this category.
- G·18
Political-economy scoring is analytical, not partisan.
Where the deliverable assesses political viability, the assessment is grounded in named governance indicators, electoral cycles, and policy trajectories — not in the analyst’s personal read of the politics. The score must be defensible to a counterparty of the opposite political prior.
Quantitative modelling
- G·08
Scenario parameters are consistent with the resource being modelled.
A model funded by commodity revenue uses commodity-revenue parameters; a model funded by general budget uses general-budget parameters. Mixing parameter sources to produce tighter confidence bands is a transparency failure and is treated as a Gate 4 violation as well.
- G·11
Monte Carlo correlation is treated explicitly.
Stochastic models either implement correlation between macroeconomic variables with country-calibrated parameters, or assume independence with an explicit, quantified tail-risk disclosure. Models that quietly assume independence, without disclosure, fail the gate.
- G·12
A three-step decomposition is present.
Where the deliverable cites a market or fiscal envelope, it decomposes that envelope into total, allocable, and realistically deployable layers — and the layer that feeds the model is the realistic one. A headline figure, used un-decomposed, fails the gate.
- G·15
Arithmetic is verifiable.
A reader with the assumptions table and a spreadsheet can reproduce every quantitative claim in the deliverable. If the math does not check out under independent recomputation, the deliverable does not ship.
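Gate 11's two permitted paths, explicit correlation or disclosed independence, can be illustrated with a minimal stochastic sketch: the same two macro shocks, drawn once independently and once with an explicit correlation, produce measurably different joint downside tails. The correlation value, the one-sigma threshold, and the variable pairing are illustrative stand-ins, not country-calibrated parameters:

```python
import math
import random


def draw_pair(rho: float, rng: random.Random) -> tuple[float, float]:
    """One draw of two standard-normal shocks with correlation rho."""
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    return z1, rho * z1 + math.sqrt(1 - rho ** 2) * z2


def joint_tail_prob(rho: float, n: int = 100_000, seed: int = 1) -> float:
    """P(both shocks below -1 sigma): the joint downside scenario."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if all(x < -1.0 for x in draw_pair(rho, rng)))
    return hits / n


independent = joint_tail_prob(rho=0.0)
correlated = joint_tail_prob(rho=0.6)  # e.g. GDP growth vs commodity price
print(f"joint 1-sigma downside: independent {independent:.3f}, "
      f"correlated {correlated:.3f}")
```

With rho at 0.6, the joint downside scenario is roughly two to three times as likely as the independence assumption implies, which is exactly the tail risk the gate requires a silently independent model to disclose.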
Risk and counterfactual
- G·07
Risk profiles are specific, not boilerplate.
Each risk dimension is described with the actual mechanics of the risk in this question, this market, this sector — not with reusable generic language. Mitigations are specific and actionable; "engage local counsel" and "monitor the situation" do not satisfy the gate.
- G·09
A Counterfactual Pause is included.
Every analysis ends with two to three substantive arguments against the highest-scored option. The Pause exists to give the principal a structured reason to think twice before acting on what the analysis recommends. A missing Pause is a refused deliverable.
- G·10
Opportunity Cost Assessment is mandatory at scale.
Allocations that exceed half a percent of an annual budget — or fifty million dollars equivalent — must be examined against named alternative deployments of the same capital. The reader sees what the recommended option precludes, by name, before they decide.
- G·16
Counterfactual micro-profiles name at least two alternatives.
For each scored option, the deliverable identifies at least two named alternative programs or strategies that compete for the same resource. Generic "other priorities exist" language fails the gate. The alternatives must be specific enough to be looked up.
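Of the gates in this family, only Gate 10's trigger is purely numeric, so it can be written down directly from the thresholds stated above: more than half a percent of the annual budget, or fifty million dollars equivalent. The function name is mine, and currency conversion is assumed to have been applied already:

```python
def requires_opportunity_cost_assessment(allocation_usd: float,
                                         annual_budget_usd: float) -> bool:
    """Gate 10: allocations above 0.5% of the annual budget, or USD 50m
    equivalent, must name the alternative deployments they preclude."""
    return (allocation_usd > 0.005 * annual_budget_usd
            or allocation_usd > 50_000_000)


# 40m against a 20bn budget sits under both thresholds: no OCA required.
print(requires_opportunity_cost_assessment(40e6, 20e9))  # False
# The same 40m against a 5bn budget crosses the 0.5% line.
print(requires_opportunity_cost_assessment(40e6, 5e9))   # True
```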
Internal consistency
- G·13
Decision-Maker call-to-action discipline.
The deliverable’s call to action — what the principal is being asked to decide — is framed as a decision, not as a sale. Promotional framing inside an analytical document fails the gate.
- G·14
Impact and Risk scores are internally consistent.
Where the same underlying data informs both an impact score and a risk score, the two scores must be reconcilable within a defined tolerance. A high "absorption capacity" impact score and a high "absorption risk" score, on the same data, fail the gate.
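Gate 14's "defined tolerance" can be sketched as a one-line reconciliation check. The 0-to-10 scale, the inversion convention (impact and risk read in opposite directions), and the tolerance of two points are assumptions for illustration; the page specifies only that a tolerance exists:

```python
def scores_reconcile(impact: float, risk: float,
                     scale_max: float = 10.0,
                     tolerance: float = 2.0) -> bool:
    """Gate 14: high impact from the same data should imply low risk;
    a large gap between impact and the inverted risk fails the gate."""
    inverted_risk = scale_max - risk
    return abs(impact - inverted_risk) <= tolerance


print(scores_reconcile(impact=8.5, risk=2.0))  # True: consistent
print(scores_reconcile(impact=8.5, risk=8.0))  # False: both "high"
```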
Presentation discipline
- G·21
The Executive Summary follows the six-component, equal-weight rule.
Every Executive Summary contains six structural components in fixed order, and every option under analysis receives equal narrative weight in the summary regardless of which option the analytical core scored highest. The summary cannot be used to nudge the principal.
- G·22
The Decision-Maker Briefing is separated from the primary report.
Where the engagement produces a confidential briefing for the principal, that briefing is a separate document — never bundled with, referenced in, or implied by the primary deliverable. Separation is enforced by operator discipline; the gate exists to make the discipline visible.
Four engine-level extensions.
Each operating engine carries calibrations beyond the canonical 22. These are the four currently in production. They apply to engagements run through AI Mirror and VERIDEX BI USA. Sovereign Mirror engagements satisfy them by inheritance.
- G·24
Rate-currency verification.
Wage, fee, tariff, and price-rate figures are checked against current sources at the time of analysis, with a disclosed verification log. Rates that cannot be verified within the engagement window are tagged as such; stale rates are never presented as current.
- G·25
Source-quality and temporal-provenance disclosure.
In addition to tier (Gate 19) and vintage (Gate 20), the engine discloses the publication context of every load-bearing source — peer review, replication status, funding-source conflicts, and known lag. The reader sees the source’s standing, not just its name.
- G·26
Addressable-market decomposition.
Market-entry analyses decompose a stated market into addressable, serviceable, and obtainable segments before any market-size figure feeds a model. Headline market-size figures, used as model inputs, fail the gate.
- G·27
Executive Summary construction.
The Executive Summary follows a fixed construction protocol: a governing finding, a structured options panel, a quantified base-case and scenarios, a counterfactual line, a risk-and-mitigation line, and a decision call. The protocol is engine-internal; the standard is public.
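Gate 26's funnel parallels the fiscal decomposition in Gate 12 and can be sketched the same way: the headline figure is cut down through addressable, serviceable, and obtainable layers before anything feeds a model. The segment shares below are hypothetical placeholders; a real engagement would defend each haircut with a cited source:

```python
def obtainable_market(headline: float,
                      addressable_share: float,
                      serviceable_share: float,
                      obtainable_share: float) -> float:
    """Gate 26: only the obtainable layer may feed the model; an
    un-decomposed headline figure fails the gate."""
    som = (headline * addressable_share
           * serviceable_share * obtainable_share)
    assert som < headline, "headline market size used as a model input"
    return som


# A 2.5bn headline collapses to 45m of realistically obtainable market.
print(f"{obtainable_market(2.5e9, 0.60, 0.30, 0.10):,.0f}")  # 45,000,000
```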
The five tiers, and what does not qualify.
VERIDEX evaluates sources on a fixed five-tier hierarchy. Every load-bearing figure in a deliverable carries a visible tier marker. Tier 4 figures cannot stand next to Tier 1 data without explicit differentiation. The list of source types that do not qualify is equally fixed.
- T1
Primary sovereign and multilateral data.
National statistics bureaus, central banks, IMF Article IV, World Bank Indicators, IFC Country Diagnostics, UNCTAD investment data, official government gazettes.
- T2
Established economic intelligence.
EIU country risk and commerce reports, Oxford Economics, S&P Global, Fitch, Moody’s sovereign assessments, regional MDB reports, Worldwide Governance Indicators, Mo Ibrahim Index, Transparency International CPI, WEF Global Competitiveness.
- T3
Sector-specific authority.
IRENA (energy), GSMA (telecoms), WHO (health), FAO (agriculture), ITU (digital infrastructure), national sector regulators, peer-reviewed sector journals, Big Four sector practice reports (weighted with awareness of incentive structure).
- T4
Commercial intelligence.
Named, dated market-sizing reports from established research firms; U.S. Commercial Service country guides; investment-promotion-agency publications, used only when cross-referenced. Always tagged; never load-bearing without defence.
- T5
Grey literature and named expert opinion.
Pre-prints, NBER and SSRN working papers, conference proceedings, named experts with disclosed credentials and affiliations. Flagged as non-peer-reviewed.
What does not qualify.
- News articles or blogs as primary data — acceptable only as a pointer to a primary source.
- Unnamed "industry reports" or "market research."
- Social media, influencer claims, or anecdotal evidence.
- Investment-promotion materials treated as objective analysis.
- AI-generated content citing itself or another AI output as a source.
- Wikipedia (acceptable as a navigation aid to a primary source — never as a source itself).
What VERIDEX refuses to do.
Published refusals. The auditor does not soften, hedge, or reframe a finding to protect anyone’s emotional, political, or commercial outcome. The principal is not the auditor’s client. The truth is.
- 01 Validate a conclusion the evidence does not support.
- 02 Treat investment-promotion materials as objective analysis.
- 03 Treat "Africa rising" or equivalent macro-narratives as evidence for a specific opportunity.
- 04 Manufacture balance between a well-evidenced finding and a weakly supported counter.
- 05 Treat a single successful case as evidence of a replicable opportunity, absent base-rate context.
- 06 Soften a negative finding because the analysis was expensive to produce, or the principal is invested in the outcome.
- 07 Overlook a fabricated institution because the rest of the analysis is strong.
- 08 Treat market entry as inherently desirable. "Do not enter" is a valid finding.
The three engines. The deliverables they produced.
VERIDEX is not presented here as a checklist alone. It is demonstrated by a record of published deliverables that passed it. Each engine below has produced at least one full report under audit. Every report carries its full gate set in an audit footer.
- Engine
AI Mirror
For an individual evaluating a market entry — the founder, the operator, the investor.
Reference deliverable. Stage-3 Full Report — Dr. Janelle Thompson, Nexten Summit Accra 2026 (Ghana).
See the case study →
- Engine
VERIDEX BI — USA
For a US-domiciled firm evaluating a US market expansion — the operator, the board, the lender.
Reference deliverable. VERIDEX Full Report — The Master’s Chair, Atlanta market entry.
See the case study →
- Engine
Sovereign Mirror
For a fiscal authority evaluating its own allocation of a defined revenue stream.
Reference deliverable. Four interactive briefings — Domestic Resource Mobilisation, Health Financing, Ghana Stabilisation Fund, Annual Budget Funding Amount.
See the case study →