Gladstone AI Kicks Off "Safety Forward" Briefing Series with AI National Security Policy Guide for Congress
The guide is built from the first-ever AI National Security Action Plan, authored by Gladstone, and calls on Congress to explore licensing, liability, and whistleblower protections
WASHINGTON, June 6, 2024 /PRNewswire/ -- Gladstone AI, the company behind the first U.S. government-commissioned action plan for advanced AI safety and security, and the first company to deliver a GPT-4-powered application to the U.S. Air Force, today announced the launch of its "Safety Forward" briefing series. Along with the announcement, Gladstone has released the first installment of the series: "The National Security Policy Guide for Congress." This guide, distilled from the comprehensive 280-page AI National Security Action Plan commissioned by the U.S. State Department, highlights the alignment of proposed legislative frameworks with Gladstone AI's recommendations, while underscoring the urgent need for Congress to strengthen licensing, liability, and whistleblower protections to secure AI development and national security interests.
The "Safety Forward" briefing series is designed to provide ongoing, objective insights into national security and AI policy. This first installment focuses on ensuring that legislative measures keep pace with technological advancements to protect national security and U.S. competitive interests.
To access the guide, please visit gladstone.ai/safety-forward.
To implement safety-forward AI policy, we need clarity on requirements for companies in the AI supply chain, capacity to respond to the evolving technical landscape, and consequences for tech companies that fail to meet obligations. Key recommendations include:
- A Safety-First Development Paradigm: Gladstone AI argues that the current practice of training powerful AI models with unknown safety properties and then evaluating them after development is backward. The company advocates for a "Safety Forward" approach, where developers demonstrate with high confidence the safety and security properties of their models before development begins.
- Licensing and Liability Frameworks: Gladstone AI calls for a tiered licensing system for AI model developers, hardware providers, and data center infrastructure providers, establishing clear requirements and consequences for non-compliance. A comprehensive liability framework is also recommended to address accidents, reckless development practices, and deliberate misuse of AI systems.
- Whistleblower Protection: The guide highlights the importance of whistleblower protection given the concerns raised by researchers from leading AI labs, who have shared often sobering assessments of the safety and security situation within their organizations. Gladstone AI urges Congress to establish strong whistleblower protections to ensure that individuals who raise critical safety concerns are not penalized.
Jeremie Harris, CEO and Co-Founder of Gladstone AI, stated:
"There's a wide disconnect between the public messaging of top AI company executives and the security and safety concerns expressed by the researchers who work for them. Our investigation was the first to collect and surface reports from whistleblowers at these labs. We're proud to have helped pave the way for greater transparency in frontier AI development. As these concerns continue to be surfaced, it's incumbent on us all to ensure whistleblower reports are translated into good policy. That's why protecting these researchers is essential."
Edouard Harris, CTO and Co-Founder of Gladstone AI, added:
"As AI startup founders ourselves, we started our investigation with some skepticism towards regulating the frontier AI sector. The lab insiders we spoke to changed that view. Their sobering reports made it clear that frontier lab safety and security practices are critically inadequate. These practices need to be improved, radically and urgently, as a national security imperative. How to execute those improvements was a central theme of our State Department-commissioned Action Plan, and is the subject of this AI National Security Policy Guide for Congress."
Steve Bunnell, former Senior Advisor, U.S. Department of Homeland Security, stated:
"The Gladstone team was instrumental in bringing DHS leadership up to speed on advanced AI at the highest levels and in particular on the national security dimensions of the technology. They helped prepare us for the generative AI wave well before the release of ChatGPT, and long before anyone was paying attention. They've been far ahead of this issue, and are unique in their depth of understanding on the policy, technical, and national security components of the AI problem set."
Gladstone AI's guide recommends that Congress hold open hearings to delve deeper into licensing and liability regimes to guard against harms, explicitly addressing the limitations of current AI model evaluations. Additionally, Gladstone joins security-concerned experts in urging Congress to consider the insights of whistleblowers from top AI labs, whose perspectives are critical in crafting effective legislation.
To access the guide, please visit gladstone.ai/safety-forward. For a primer for policymakers, visit gladstone.ai/ai-primer.
About Gladstone AI: Gladstone AI was founded by national security experts and Silicon Valley AI executives dedicated to advancing U.S. security interests amidst the rapid development of advanced AI. As an inaugural member of the Department of Commerce's AI Safety Institute Consortium (AISIC) and a trusted DOD partner, Gladstone AI has achieved several "firsts" in supporting the federal government. The company offers AI training programs, AI-powered products, and consulting services tailored to national security needs. Co-founders Jeremie Harris and Ed Harris have collaborated with national security agencies and top AI research labs like OpenAI and DeepMind, briefing senior officials and global AI policy decision-makers. Gladstone AI operates as a for-profit company with primarily government revenue, intentionally avoiding outside funding from venture capital or donors.
SOURCE Gladstone AI