A new coalition including academics, faith leaders, and public figures has launched a Pro‑Human AI Declaration, advocating for stricter safety standards and accountability as public concern grows over rapid AI development and governance.

An eclectic group of academics, business figures, faith leaders and politicians has publicly endorsed a new Pro‑Human AI Declaration that urges tougher safety measures and greater accountability for companies developing advanced artificial intelligence. According to the announcement from the Future of Life Institute, signatories include Richard Branson, economist Daron Acemoglu and former Trump adviser Steve Bannon, while backers range from the American Federation of Teachers to the Congress of Christian Leaders and the Progressive Democrats of America.

The declaration opens with a stark assertion: "Artificial intelligence should serve humanity, not the reverse." It frames a vision in which "trustworthy and controllable AI tools amplify rather than diminish human potential, empower people, enhance human dignity, protect individual liberty, strengthen families and communities, preserve self‑governance and help create unprecedented health and prosperity." The document sets out core principles including human oversight of AI, measures to prevent dominant AI monopolies, protections for children, preservation of individual agency and legal accountability for unsafe systems.

Organisers deliberately excluded representatives from the tech industry when assembling the signatories, a departure from some earlier safety petitions that involved company figures. The move appears intended to present civil society, labour and moral authorities as the primary voice pressing for limits on corporate discretion in AI deployment. A parallel poll released with the declaration found strong public support for the approach, with about four in five US voters saying humans should remain in charge of AI and that companies should face greater accountability.

This latest campaign sits alongside a string of philanthropic and advocacy efforts aimed at counterbalancing commercially driven AI development. A coalition of foundations led by MacArthur and Omidyar this year announced the $500 million, five‑year Humanity AI initiative to fund projects that steer AI toward democratic, educational and civic goals, while grassroots campaigns such as Protect What’s Human have pressed for commonsense regulatory safeguards, transparency and independent oversight. Together these initiatives reflect a growing ecosystem of actors seeking to shape AI policy outside the industry’s boardrooms.

The Pro‑Human declaration follows earlier interventions from the Future of Life Institute, which in 2023 coordinated a widely publicised call for a six‑month pause on the training of systems more capable than GPT‑4 and later mobilised signatories around a ban on superintelligent AI until safety can be demonstrated. Those prior campaigns drew thousands of signatures from researchers, business leaders and public figures, but did not deter rapid corporate development; some past signatories subsequently launched AI startups of their own. The new declaration therefore appears to adopt a different tactic by emphasising a broad alliance of non‑industry institutions and public opinion.

Polling and survey data released by advocacy groups underscore widespread public unease with the pace and governance of advanced AI. A national survey of U.S. adults conducted for the Future of Life Institute found that roughly three‑quarters favour strong regulatory oversight, comparable to pharmaceutical standards, and that a substantial majority would delay development of superhuman AI until it can be shown safe and controllable. Advocates point to this gap between corporate momentum and public appetite as justification for urgent legislative and regulatory remedies.

Despite broad agreement among civil society actors on the need for stronger rules, observers note tensions within the wider coalition over means and ends. Some signatories press for outright moratoria on certain classes of research, while others favour phased regulation, public funding for alternatives that prioritise social goods, or legal liability frameworks to hold companies to account. According to reporting on earlier open letters, the movement also spans diverse political perspectives, making durable policy consensus one of its most immediate challenges.

Source Reference Map

Inspired by headline at: [1]

Sources by paragraph:
  • Paragraph 1: [2],[1]
  • Paragraph 2: [2],[1]
  • Paragraph 3: [2],[6]
  • Paragraph 4: [4],[5]
  • Paragraph 5: [6],[7]
  • Paragraph 6: [6],[3]
  • Paragraph 7: [3],[2]

Source: Noah Wire Services

Verification / Sources

  • https://uk.finance.yahoo.com/news/unlikely-band-political-academic-leaders-142336376.html - Please view link - unable to access data
  • https://www.yahoo.com/news/articles/unlikely-band-political-academic-leaders-142336436.html - An unusual coalition of academics, businesses, religious leaders, and political figures have raised concerns about artificial intelligence by signing a new 'pro-human' declaration. Backed by the Future of Life Institute, the Pro-Human AI Declaration calls for renewed focus on AI safety and stricter regulation for companies controlling it. Signatories include billionaire entrepreneur Richard Branson, Nobel-Prize-winning economist Daron Acemoglu, and former Trump administration adviser Steve Bannon. Organisations supporting the declaration include the American Federation of Teachers, the Congress of Christian Leaders, and the Progressive Democrats of America. The declaration emphasises that AI should serve humanity, not the reverse, and outlines key tenets such as human control over AI, prevention of AI monopolies, protection of children from the technology, preservation of human agency and liberty, and corporate accountability for defects and inadequate safety controls. A new poll published alongside the declaration found that 80% of US voters supported keeping humans in charge of AI, as well as greater accountability for AI companies. Organisers deliberately excluded industry representatives, who have previously been involved in similar petitions for improved AI safety. Previous efforts by the Future of Life Institute to enforce AI safety included a 2023 attempt to introduce a six-month moratorium on the development of AI systems and a petition last year to ban the development of superintelligent AI systems until safety is proven. Neither effort was heeded by the tech industry, with some signatories of the 2023 letter going on to launch their own AI startups.
  • https://time.com/7327409/ai-agi-superintelligent-open-letter/ - An open letter coordinated by the Future of Life Institute calls for a prohibition on the development of superintelligent AI until it can be proven safe and controllable. The letter, endorsed by over 700 prominent figures—including Nobel laureates, tech pioneers, celebrities, scientists, policymakers, and religious leaders—warns of the urgent risk posed by advanced AI. Signatories include Apple co-founder Steve Wozniak, AI trailblazers, political figures like Steve Bannon, and public personalities such as Prince Harry, Meghan Markle, and Joseph Gordon-Levitt. The letter defines superintelligence as AI that surpasses human performance on all useful tasks and warns that such technology could emerge in one to two years. The campaign aims to raise public awareness and press for regulation, highlighting concerns over a small number of wealthy tech companies driving AI progress against broader public skepticism. A recent poll showed 64% of Americans favour delaying superintelligence until it is safe, while only 5% support rapid development. The letter underscores a broad consensus that AI should aid humanity rather than replace it, warning that uncontrolled superintelligence development could have irreversible consequences.
  • https://apnews.com/article/1038b76f0ae4ef3d94095120815a65d0 - A coalition of ten philanthropic foundations has launched an initiative called Humanity AI, committing $500 million over five years to counterbalance the influence of profit-driven AI developers and ensure AI technology serves human needs. Spearheaded by the MacArthur Foundation and Omidyar Network, the coalition includes organisations like the Mozilla Foundation, Ford Foundation, and Mellon Foundation. They aim to guide AI's evolution toward public good rather than corporate efficiency. Humanity AI will focus its grants on five areas: advancing democracy, strengthening education, protecting artists, improving work, and defending personal security. The effort comes amid rising concerns about AI's impact on jobs, misinformation, privacy, and its energy demands. Leaders such as Mozilla’s Nabiha Syed and MacArthur Foundation’s John Palfrey emphasised that flourishing—not just efficiency—should be at the heart of AI development. Unlike past efforts driven by government or private enterprise, this coalition seeks to elevate civil society’s role in shaping AI’s path. Initial grantees include the National Black Tech Ecosystem Association, AI Now Institute, and a Howard University Law initiative focused on civil rights. A collaborative fund managed by Rockefeller Philanthropy Advisors will facilitate future grants.
  • https://protectwhatshuman.org/ - Protect What’s Human is a campaign advocating for commonsense regulation to keep AI safe and under human control. The initiative highlights concerns that AI designed to replace people threatens jobs, families, and way of life. The campaign calls for regulating AI with the same commonsense safety standards applied to other powerful technologies. It emphasises that true innovation should empower humans, not replace them, and that AI systems should operate openly with explainable models, visible data sources, clear accountability, and independent oversight. The campaign also stresses that responsibility remains exclusively human, rejecting any future that shifts moral responsibility to software. The manifesto commits to protecting human dignity in an automated world, ensuring technology amplifies equality, preserving freedom at the core of digital progress, and treating AI as a tool, never an authority.
  • https://futureoflife.org/recent-news/americans-want-regulation-or-prohibition-of-superhuman-ai/ - A national survey of 2,000 U.S. adults reveals that three-quarters want strong regulations on AI development, preferring oversight akin to pharmaceuticals rather than industry 'self-regulation'. The findings show widespread discontent at the current trajectory of advanced AI development, with only 5% in support of the status quo of fast, unregulated development. Almost two-thirds (64%) feel that superhuman AI should not be developed until it is proven safe and controllable, or should never be developed. The survey highlights a clear disconnect between the stated mission of leading AI companies and the wishes of the American public. The public’s preference for strong, independent oversight signals a clear mandate for policymakers and industry leaders to advance cautiously and responsibly, educate broadly, and involve trusted scientific institutions in governance.
  • https://en.wikipedia.org/wiki/Pause_Giant_AI_Experiments%3A_An_Open_Letter - Pause Giant AI Experiments: An Open Letter is a letter published by the Future of Life Institute in March 2023, calling for all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. The letter cites risks such as AI-generated propaganda, extreme automation of jobs, human obsolescence, and a society-wide loss of control. It received more than 30,000 signatures, including academic AI researchers and industry CEOs such as Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak, and Yuval Noah Harari.

Noah Fact Check Pro

The draft above was created using the information available at the time the story first emerged. We've since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.

Freshness check

Score: 8

Notes: The article was published on March 5, 2026, and reports on a recent event, indicating high freshness. However, similar narratives have appeared in other reputable outlets, such as The Independent, published on the same day, which may suggest some overlap in reporting. (independent.co.uk)

Quotes check

Score: 7

Notes: The article includes direct quotes from the Pro-Human AI Declaration. While these quotes are consistent with the declaration's publicly available text, they cannot be independently verified as originating from the specific signatories mentioned. (humanstatement.org)

Source reliability

Score: 9

Notes: The article is published on Yahoo Finance, a major news organisation, which generally indicates high reliability. However, the presence of similar reports in other reputable outlets suggests that the content may be summarised or aggregated from a common source, potentially affecting its originality. (independent.co.uk)

Plausibility check

Score: 8

Notes: The claims about the Pro-Human AI Declaration and its signatories are plausible and align with information from other reputable sources. However, the article's reliance on a single source for direct quotes raises concerns about the independence of the information presented. (humanstatement.org)

Overall assessment

Verdict (FAIL, OPEN, PASS): PASS

Confidence (LOW, MEDIUM, HIGH): MEDIUM

Summary: The article provides a timely and plausible account of the Pro-Human AI Declaration and its signatories. However, the reliance on a single source for direct quotes and the presence of similar reports in other reputable outlets suggest potential issues with originality and source independence. These factors warrant a medium level of confidence in the article's accuracy and reliability.