By Ben Winters, EPIC Counsel
This week, Politico’s Alex Thompson detailed the close relationship between the Biden administration’s Office of Science and Technology Policy and Schmidt Futures, the philanthropic initiative of former Google CEO Eric Schmidt. The links between Schmidt Futures and Biden’s OSTP add to the already significant influence that Schmidt, who is heavily invested in AI companies, wields in crafting AI policy for the United States.
Politico’s piece explained how the organization indirectly paid the salary of now-OSTP Chief of Staff Marc Aidinoff while he was working for the agency, and how it had similar arrangements with other OSTP staffers. Another OSTP staff member was urged by OSTP’s legal team to withdraw from a Schmidt Futures-funded fellowship. Politico also reported that “Two other OSTP officials continued to work part-time at the Broad Institute of Harvard and MIT in Cambridge, Mass., a leading biotech facility that had been headed by [former OSTP Director Eric] Lander and where Schmidt chairs the board.”
The U.S. Government has not yet enacted legislation or dedicated substantial funding to protect individuals from the threats that automated decision-making systems and AI pose to privacy rights, or from the discriminatory impacts these systems encode and exacerbate. Despite some agency steps toward protecting individuals, such as actions taken by the Federal Trade Commission and the Equal Employment Opportunity Commission, a pair of largely unfulfilled Executive Orders and piecemeal enforcement have ensured that federal policy prioritizes AI development over the protection of rights. EPIC detailed this funding disparity in recent comments to the National Institute of Standards and Technology. EPIC advocates for building oversight, testing, and regulatory capacity to prevent, control, and remediate AI harms.
The federal government has, however, created several influential advisory boards to help inform policy and funding decisions. One of these bodies was the National Security Commission on Artificial Intelligence, or NSCAI. The NSCAI was charged with “review[ing] advances in artificial intelligence, related machine learning developments, and associated technologies” and making policy recommendations to Congress and the President. The NSCAI, like the Defense Innovation Board, was chaired by Eric Schmidt and included executives of large tech companies, such as Google, Oracle, and Amazon, that regularly vie for defense contracts. These companies stand to profit from the recommendations the Commission made.
Although squarely covered by the Freedom of Information Act and the Federal Advisory Committee Act, which requires advisory committees to hold open public meetings, the NSCAI initially refused to comply with its transparency obligations. EPIC filed suit after the Commission failed to act on EPIC’s open government requests throughout its first year of operation. As a result of EPIC’s case, a court ordered the Commission to open its meetings and records to the public. Through the Commission’s compliance with EPIC’s FOIA request, EPIC received several presentations from outside groups that informed parts of the NSCAI’s report: one that framed U.S. AI policy in terms of direct competition with China and argued that the U.S. was lagging in AI adoption, and another discussing “How . . . psychology and AI . . . [can be combined to] accomplish goals. For example: Video interviews, Behavior tracking/monitoring.” There is substantial evidence that this type of technology cannot currently be accurate or free of bias.
The NSCAI’s recommendations focused on expanding AI research and development to increase American AI output and compete with China as a global superpower, a point that Schmidt has repeatedly emphasized in his own remarks. The NSCAI paid lip service to democratic values, privacy, transparency, and accountability but failed to make specific recommendations to Congress to protect individuals from the potential harms of continued AI proliferation. In comments responding to the NSCAI’s draft of its influential final report, EPIC urged that “Unless express, binding limits on the use of AI are established now, the technology will quickly outpace our collective ability to regulate it . . . The Commission cannot simply kick the can down the road, particularly when governments, civil society, and private sector actors have already laid extensive groundwork for the regulation of AI.”
Screenshot of a Twitter post from the NSCAI on September 22, 2021: “NSCAI’s work inspired hundreds of bills–some already law. We are optimistic that more of our recommendations will pass Congress, reach the President’s desk, & have a lasting impact on the future of #AI”
The NSCAI called for a substantial increase in congressional funding for AI research and deployment, including hundreds of millions of dollars in the 2022 National Defense Authorization Act (NDAA) alone. In 2021, the federal government spent an estimated $6 billion on AI-related research and development projects. For 2022, the Biden administration requested $1.7 billion in civilian AI research and development investments. But this funding is not accompanied by accountability mechanisms to prevent gratuitous public investment in private AI vendors or the prioritization of AI development for development’s sake. Meanwhile, an Executive Order simply requiring the federal government to publish information about how it is using AI tools remains unfulfilled and unprioritized. Instead of committing time and money to building transparency, accountability, oversight, and the capacity to enforce civil rights laws, the federal government has largely focused its resources on investing in and rolling out more untested AI systems. Without clear guardrails, the U.S. will not back itself into “democratic” AI that respects the rule of law or equity just because it is built in America or by Americans, as the NSCAI’s report would suggest. As the NSCAI boasted on Twitter, many of its recommendations influenced legislative decisions almost immediately.
Schmidt, NSCAI Co-Chair Robert Work, and NSCAI Executive Director Ylli Bajraktari have now formed a privatized version of the AI Commission, the Special Competitive Studies Project (SCSP), funded by the Eric & Wendy Schmidt Fund for Strategic Innovation. Notably, the SCSP recently published a blog post characterizing protective AI regulation as a product of “FOMO” and “virtue signaling.”
Eric Schmidt is a venture capitalist, government advisor, and former CEO of Google (now Alphabet). When Schmidt joined the NSCAI, he submitted a 38-page financial conflict of interest disclosure (which was withheld in full in response to FOIA requests), far longer than the disclosures of his fellow commissioners. Relevant to his role in influencing federal U.S. AI policy, Schmidt:
- Is a substantial and founding investor in an Alphabet (Google’s parent company) spinoff called Sandbox AQ (AI and Quantum technology), which is, in Schmidt’s words, “developing commercially viable, quantum technologies using a combination of today’s high-performance computing power and emerging quantum platforms”;
- Was an early and substantial investor in Rebellion Defense, a defense contractor that builds AI to sell to the military and has been granted several government contracts;
- Holds a 20% stake in D.E. Shaw, a $60 billion hedge fund;
- Is an investor in Abacus.AI;
- Is chair of the “Reimagine New York Commission,” a body focused on the role of technology in New York’s COVID-19 economic and social recovery;
- Is a founder and board member of Civis Analytics;
- Was a member of the President’s Council of Advisors on Science and Technology from 2009 to 2017;
- Created Schmidt Futures;
- And is chair of the board of the Broad Institute of MIT and Harvard, a biotech facility.
EPIC believes AI policy should be guided by the protection of civil rights and civil liberties and that the United States must devote funding and resources to building oversight capacity. The current approach instead reflects the preferences of Schmidt and others who are instrumental in guiding policy while directly benefiting from it. Congress and federal agencies must allocate additional funding and resources to AI accountability so that policy development does not rely on outside groups with clear conflicts of interest.