AI News
12 Jan 2026
Tony Blair Institute AI incubator investigation reveals plans to sell decision tools to governments
What the Tony Blair Institute AI incubator investigation found
A push to build, not just advise
TBI created an in-house AI Incubator in spring 2024. Staff say leaders want to rival major contractors by building products on top of large language models. They do not plan to compete with ChatGPT or Claude directly. Instead, they want tools that help ministers and officials make decisions. A recent job ad promised to “reinvent government leadership” with applied AI. Insiders say top earners in the unit can make up to £370,000. Critics inside the organisation call the strategy risky for a non-profit. Several sources said the Incubator is seen as central to Tony Blair’s legacy and the institute’s future identity.
Money, partners, and influence
The institute has relied on major donations from Oracle founder Larry Ellison. According to sources, Ellison first planned to fund the Incubator, then pulled back, forcing TBI to seek funding elsewhere. Past reporting said Ellison’s money fostered a pro-AI sales culture inside the institute. The institute declined to comment on the details raised by sources.
Unfinished products and internal doubts
Former staff describe slideware and early prototypes rather than market-ready tools. Two examples keep coming up:
- A “delivery dashboard” that uses AI to advise political leaders on choices.
- A “policy black box” that ingests documents and returns recommendations.
Risks highlighted by the Tony Blair Institute AI incubator investigation
Black-box decision-making
Baroness Beeban Kidron warns that AI-driven policy advice can erode democratic accountability. If models shape choices without clear reasoning, trust falls. Elected leaders must own outcomes and be transparent about evidence and trade-offs. Opaque tools make that harder.
Weak business case
Analyst Rachel Coldicutt notes that even top AI labs are still figuring out revenue. Building software is expensive. Without scale, a think tank may struggle to cover costs. Consulting slides can be profitable; product development seldom is, especially in government markets with long sales cycles and strict procurement rules.
Reputation and conflict questions
The institute sits between politics, philanthropy, and tech. That mix invites scrutiny:
- Funding sources tied to Big Tech can shape agendas.
- Moving staff between government and the institute raises conflict-of-interest risks.
- Selling to authoritarian markets poses human rights and export control concerns.
Why this pivot matters for public services
Chasing Palantir’s niche
Palantir has won big UK contracts by stitching together sensitive data and offering decision tools. TBI appears to be chasing a similar niche, promising insider knowledge of how government works. But access and insight are not enough. Public buyers need security, uptime, compliance, audit trails, and clear liability. Meeting those standards is hard for a young product team.
Data, accountability, and procurement
Government AI must answer basic questions:
- Where does the model get its training data?
- How do we log prompts, decisions, and outcomes for audits?
- Who owns the IP and who is liable if things go wrong?
- How do we ensure human oversight and avoid bias?
What to watch next
- Funding stability: Does the institute secure long-term backing that is not tied to a single tech donor?
- Product maturity: Do pilots move beyond demos to live deployments with measurable outcomes?
- Governance: Are transparency standards, red-team tests, and ethics reviews published?
- Procurement integrity: Are conflicts managed when staff move between Whitehall and the institute?
- Export safeguards: Are sales to restrictive regimes screened for rights risks?
Paths that could rebuild trust
Make it transparent by design
If TBI continues, it should publish model cards, testing results, risk registers, and impact reports. It should document human-in-the-loop controls and show how it addresses bias and error. It should allow independent audits and commit to clear incident reporting.
Partner, don’t just build
Working with established vendors can cut risk and cost. The institute could focus on policy expertise, workflow design, and evaluation, while relying on proven platforms for hosting, security, and compliance. That hybrid approach matches what many public buyers already expect.
Keep policy advice separate from product sales
To avoid conflicts, the institute should draw a bright line between consulting that shapes public policy and any product pitches. Firewalls, disclosure, and cooling-off periods for staff moving into government roles would help. The Tony Blair Institute AI incubator investigation raises a clear test: can a political think tank become a credible, accountable AI supplier for governments? The answer will depend on funding that is stable, products that work in the open, and governance that earns public trust.
(Source: https://democracyforsale.substack.com/p/blair-bids-to-build-own-ai-tools-rival-palantir-tbi)