
AI News

12 Jan 2026


Tony Blair Institute AI incubator investigation exposed

Tony Blair Institute AI incubator investigation reveals plans to sell decision tools to governments

The Tony Blair Institute AI incubator investigation shows a costly push to build government-facing AI tools, pitched as a rival to Palantir. Insiders describe big salaries, unfinished products, and key funders stepping back. Critics warn about opaque “black box” decision-making and question whether a think tank can become a tech vendor without clear guardrails.

Tony Blair believes AI can supercharge government. His institute has shifted from advice and policy papers to building software. Sources say a new AI Incubator aims to sell tools to governments, including Gulf states. This move follows job cuts, a reported 2024 loss, and questions about how the project will make money or protect public trust.

What the Tony Blair Institute AI incubator investigation found

A push to build, not just advise

TBI created an in-house AI Incubator in spring 2024. Staff say leaders want to rival major contractors by building products on top of large language models. They do not plan to compete with ChatGPT or Claude directly. Instead, they want tools that help ministers and officials make decisions. A recent job ad promised to “reinvent government leadership” with applied AI. Insiders say top earners in the unit can make up to £370,000. Critics inside the organisation call the strategy risky for a non-profit. Several sources said the Incubator is seen as central to Tony Blair’s legacy and the institute’s future identity.

Money, partners, and influence

The institute has relied on major donations from Oracle founder Larry Ellison. According to sources, Ellison first planned to fund the Incubator, then pulled back, forcing TBI to find money from elsewhere. Past reporting said Ellison’s funding fostered a pro-AI sales culture inside the institute. The institute declined to comment on the details raised by sources.

Unfinished products and internal doubts

Former staff describe slideware and early prototypes rather than market-ready tools. Two examples keep coming up:
  • A “delivery dashboard” that uses AI to advise political leaders on choices.
  • A “policy black box” that ingests documents and returns recommendations.
Critics ask why governments would buy these tools from a think tank instead of a large vendor. “What stops OpenAI or others?” one former employee asked. Some staff urge partnering with established tech firms instead of trying to build and own the IP from scratch.

Risks highlighted by the Tony Blair Institute AI incubator investigation

Black-box decision-making

Baroness Beeban Kidron warns that AI-driven policy advice can erode democratic accountability. If models shape choices without clear reasoning, trust falls. Elected leaders must own outcomes and be transparent about evidence and trade-offs. Opaque tools make that harder.

Weak business case

Analyst Rachel Coldicutt notes that even top AI labs are still figuring out revenue. Building software is expensive. Without scale, a think tank may struggle to cover costs. Consulting slides can be profitable; product development seldom is, especially in government markets with long sales cycles and strict procurement rules.

Reputation and conflict questions

The institute sits between politics, philanthropy, and tech. That mix invites scrutiny:
  • Funding sources tied to Big Tech can shape agendas.
  • Moving staff between government and the institute raises conflict-of-interest risks.
  • Selling to authoritarian markets poses human rights and export control concerns.

Why this pivot matters for public services

Chasing Palantir’s niche

Palantir has won big UK contracts by stitching together sensitive data and offering decision tools. TBI appears to chase a similar niche, promising insider knowledge of how government works. But access and insight are not enough. Public buyers need security, uptime, compliance, audit trails, and clear liability. Meeting those standards is hard for a young product team.

Data, accountability, and procurement

Government AI must answer basic questions:
  • Where does the model get its training data?
  • How do we log prompts, decisions, and outcomes for audits?
  • Who owns the IP and who is liable if things go wrong?
  • How do we ensure human oversight and avoid bias?
Without strong answers, officials risk buying tools they cannot explain or defend. That risk grows when a vendor also shapes policy advice behind the scenes.

What to watch next

  • Funding stability: Does the institute secure long-term backing that is not tied to a single tech donor?
  • Product maturity: Do pilots move beyond demos to live deployments with measurable outcomes?
  • Governance: Are transparency standards, red-team tests, and ethics reviews published?
  • Procurement integrity: Are conflicts managed when staff move between Whitehall and the institute?
  • Export safeguards: Are sales to restrictive regimes screened for rights risks?

Paths that could rebuild trust

Make it transparent by design

If TBI continues, it should publish model cards, testing results, risk registers, and impact reports. It should document human-in-the-loop controls and show how it addresses bias and error. It should allow independent audits and commit to clear incident reporting.

Partner, don’t just build

Working with established vendors can cut risk and cost. The institute could focus on policy expertise, workflow design, and evaluation, while relying on proven platforms for hosting, security, and compliance. That hybrid approach matches what many public buyers already expect.

Keep policy advice separate from product sales

To avoid conflicts, the institute should draw a bright line between consulting that shapes public policy and any product pitches. Firewalls, disclosure, and cooling-off periods for staff moving into government roles would help. The Tony Blair Institute AI incubator investigation raises a clear test: can a political think tank become a credible, accountable AI supplier for governments? The answer will depend on funding that is stable, products that work in the open, and governance that earns public trust.

(Source: https://democracyforsale.substack.com/p/blair-bids-to-build-own-ai-tools-rival-palantir-tbi)


FAQ

Q: What did the Tony Blair Institute AI incubator investigation reveal?
A: The investigation by Democracy for Sale and Lighthouse Reports found that the Tony Blair Institute created an in-house AI Incubator aimed at building government-facing AI tools to “rival Palantir”, with insiders reporting large salaries, unfinished products and funders stepping back. It also documented job cuts, a reported £3.2m loss in 2024, and internal concern about opaque “black box” decision-making and the think tank’s shift from advisory work to product development.

Q: When was the AI Incubator launched and what is its aim?
A: The Incubator was launched in spring 2024 and aims to build products on top of large language models to help ministers and officials make decisions rather than compete directly with ChatGPT or Claude. Insiders say leadership hopes the unit will transform the think tank into a tech provider and sell tools to government clients, particularly Gulf states.

Q: Who funded the Tony Blair Institute and what happened to planned funding for the incubator?
A: The institute has relied heavily on donations from Oracle founder Larry Ellison, who has pledged and donated £257 million to TBI, and sources say he was originally set to fund the AI Incubator but later decided not to. That withdrawal reportedly forced TBI to reallocate money from elsewhere in the organisation to support the project.

Q: What kinds of products has the Incubator been developing and are they complete?
A: Former staff describe unfinished prototypes and “slideware” rather than market-ready products, with examples including a “delivery dashboard” to advise political leaders and a “policy black box” that ingests policies and returns recommendations. Sources said these tools exist mainly as early prototypes and PowerPoint concepts rather than deployed systems.

Q: What are the main concerns about using such AI tools for government decision-making?
A: Critics warn that “black-box” AI tools can undermine democratic accountability by making decisions without transparent reasoning, making it harder for elected leaders to own policy outcomes. The investigation also highlights worries about conflicts of interest, sales to authoritarian regimes, and whether a think tank can meet the security, compliance and procurement standards governments require.

Q: Why do some insiders compare TBI’s ambitions to Palantir?
A: Insiders and former staff say TBI wants to “rival Palantir” by building decision tools for governments, echoing Palantir’s niche of stitching together sensitive data and winning government contracts. That comparison reflects internal ambition to own IP and become a tech vendor rather than a traditional consultancy, according to sources.

Q: What governance or transparency steps did the investigation suggest to rebuild trust?
A: The piece recommends publishing model cards, testing results, risk registers and impact reports, documenting human-in-the-loop controls and allowing independent audits to increase transparency and accountability. It also suggests clear incident reporting and drawing bright lines between policy advice and product sales to manage conflicts of interest.

Q: What should watchdogs and officials monitor next after the Tony Blair Institute AI incubator investigation?
A: They should watch funding stability, whether pilots move beyond demos to live deployments, the publication of governance and ethics standards, procurement integrity when staff move between government and the institute, and safeguards for exports to restrictive regimes. Tracking these areas will indicate whether the incubator matures into a credible and accountable supplier for governments.
