Introducing the FAIR Initiative
Electric Sheep has announced the launch of the FAIR (Future of AI Research) Initiative, a think tank built on the collective expertise of the Electric Sheep AI-for-impact community and dedicated to guiding artificial intelligence (AI) toward ethical, sustainable, and compassionate welfare outcomes for our planet and all its inhabitants. This blog discusses the Initiative's origins and offers a few provocations about its forward agenda.
Why does AI need FAIR?
Traditional AI safety approaches prioritise technical alignment and risk minimisation. The AI ethics field, such as it is, tends to run alongside that work, in its shadow.
This may be as good as it gets for those who care about such things, but equally it may not be enough. As artificial intelligence grows more powerful, its influence seeps deeper into every corner of human life, from automating farms and forecasting weather to shaping newsfeeds and rewriting economic models. Might it be that we need AI systems that don't just avoid catastrophic errors, but that also understand welfare value judgements and moral consequences, and contribute positively to society as a precondition of avoiding such errors? As machines take on more responsibility, we face a fundamental question: if AI is to 'go right', must we inculcate within AI an understanding of right from wrong?
The AI welfare policy agenda: beyond 'speciesism'
Most think tanks and policy agendas begin from a substantive moral view of how the world ought to be. Think tanking is distinct from research insofar as the former seeks to create a specific political outcome, one that itself hinges on a model or models of welfarism. Those models may be rooted in political doctrine, religious principle, or some other set of principles besides. AI think tanks and thought leadership agendas are no different, and yet no such AI-welfare think tank currently exists.
The Future of AI Research Initiative (FAIR) posits that there is a non-trivial possibility that this gap is a critical flaw in our AI architecture; that the future of our safety might even depend on filling it; and that no one is sufficiently interrogating this possibility.
What does this mean in practice?
It means a welfarist research and policy-influence agenda that focuses on AI development and safety, and on outcomes that are planet-positive and protective of societies, environments, and species.
It means research and policy work, from an AI development and safety perspective, that speaks to multiple overlapping welfarist theories and in turn produces implementable policy outcomes.
It also means arguing for a reframing of what it means for AI to "go wrong," both in the short and long terms.
Mistakes in AI aren't just technical bugs; they are political and moral failures. When an AI model hallucinates information, amplifies bias, or recommends harmful strategies, the harm isn't accidental: it is often the result of systems acting without regard for foreseeable consequences. FAIR calls this phenomenon welfare recklessness, and sees it as a critical new lens for understanding AI behaviour. To get to the heart of this, an essay or a single research piece won't suffice.
There’s an entire policy agenda herein.
Big premise, practical outputs.
FAIR is where Electric Sheep's global online community of researchers and policy analysts produces welfare-first AI safety research. Our outputs are political and influence-focused in nature: we produce practical tools, such as policy briefs and moral audit kits, to help regulators and developers recognise and respond to ethically risky behaviour in AI systems. Our outputs will be structured along the following lines.
Policy Briefings: Collaborating with policymakers to create regulations that ensure AI technologies are developed and deployed responsibly, with due consideration for environmental ecosystems, species, and social concerns.
Ministerial Champion Roundtables: Bringing together those in power who hold the levers of welfare and governance with our path-breaking fellowship programme and ideas engine, to drive a deeper understanding of AI's practical implications and thereby influence multiple legislative agendas.
Research and Influence Targeting, Lectures and Events: Ensuring that AI development considers the welfare of ecosystems, social strata, and all sentient beings, and that its evolving salience is recognised as such by key opinion formers.
FAIR's public-facing outputs, including quarterly reports, explainer blogs, and civic briefings, as well as direct advocacy work on passing legislation, will translate complex ethical questions into accessible insights.
This all culminates in FAIR's planned flagship conference, Politics in the Machine, which will bring together global thinkers from philosophy, AI, law, and policy to debate the ethics of machine agency and accountability. The event will serve as both a launchpad and a litmus test for FAIR's vision: that we can, and must, build artificial intelligence that reflects our deepest values.
Collective Intelligence: Time for FAIR
In an era where AI shapes not just what we do but who we are becoming, the Future of AI Research Initiative is asking a question that too few are willing to ask, but whose answer will determine the course of the next century and beyond: what if the next frontier of AI isn't intelligence, but welfare-maximising morality?
And if we're even half right on that front, isn't it about time we started getting those in positions of power to come our way?
We are blessed at Electric Sheep to be the convenors of a global community of some of the most forward-thinking, practical, AI-cognisant thinkers and doers of our generation. Going forward, we will seek not only to build safe AI and articulate welfare-maximising alignment research, but to influence the policy agenda as well.
Watch this space.