Artificial intelligence has sparked passionate debate, often couched in dramatic language about the technology's potential dangers. Every new technology, especially one as transformative and groundbreaking as AI, warrants caution to prevent misuse.
Recent reports suggest that well-funded companies fuel AI-related concerns through significant financial support to shape public policies that favor their bottom line.
Sources revealed that the Open Philanthropy Foundation funds AI fellows strategically placed in vital policy-shaping positions throughout Washington, D.C., including congressional offices, federal agencies and prominent think tanks. Most of the foundation’s funding is attributed to Dustin Moskovitz, CEO of Asana and co-founder of Facebook, along with his wife, Cari Tuna.
The contest aimed to surface novel considerations that could affect our views on timelines and risks around the rise of transformative AI systems.
While we don't agree with every argument in these entries, we thought they all made valuable contributions to a collective debate.
— Open Philanthropy (@open_phil) September 30, 2023
Open Philanthropy, through its Horizon Institute, directly sponsors tech fellows in critical roles within organizations such as the Department of Defense, the Department of Homeland Security and the State Department. Their reach extends to the House Science Committee and Senate Commerce Committee, which play crucial roles in crafting AI-related regulations.
Horizon Institute fellows are also contributing to AI policy efforts at the Rand Corporation and Georgetown University’s Center for Security and Emerging Technology, and have embedded themselves in the staffs of influential politicians.
Recognizing the extent of their influence sheds light on how AI policy decisions are being shaped and by whom, ultimately impacting our technological future.
Open Philanthropy has used its influence to highlight long-term threats that artificial intelligence could pose to humanity’s survival. However, critics within the AI sphere argue that such concerns divert attention from immediate, realistic AI issues, and that they lead policymakers toward regulations that may harm the nation while benefiting major tech corporations.
AI researcher Deborah Raji has criticized Open Philanthropy for sensationalizing fears, diverting attention from issues such as AI’s impact on personal privacy, misinformation and copyright protections. Raji emphasizes that focusing on sensational scenarios may lead to inappropriate solutions or policies.
For instance, Senator Richard Blumenthal and Senator Josh Hawley are developing a framework for government-required licenses for companies working on advanced AI. Raji contends that Open Philanthropy’s advocacy for such a licensing regime would further strengthen the position of a few large, well-funded firms in its network, potentially entrenching existing monopolies.
Mike Levine, a spokesman for Open Philanthropy, says that the foundation operates separately from Horizon. He stated that Horizon initiated the fellowship program as consultants to Open Philanthropy until they could establish their own entity to fund fellows directly. He dismissed concerns about Open Philanthropy’s involvement in fellow selection, training and placement.
Horizon’s ability to finance fellows in federal government roles rests on legislation passed decades ago: the Intergovernmental Personnel Act of 1970, which allows nonprofits to cover the salaries of fellows working on Capitol Hill or in federal agencies.
However, Tim Stretton, director of the congressional oversight initiative at the Project On Government Oversight, questions the appropriateness of congressional fellows working on issues where their funding organization has specific policy interests.
Stretton emphasized that fellows should not draft legislation or educate lawmakers on topics from which their backers stand to gain, raising concerns about conflicts of interest.
Notable companies associated with Open Philanthropy include OpenAI, Anthropic and DeepMind, all of which were among the signatories of a May letter warning of the “risk of extinction from AI.” The Center for AI Safety, which coordinated the letter, has also received a $5.2 million grant from Open Philanthropy.
Critics contend that Open Philanthropy’s approach reflects a concerning pattern reminiscent of authoritarian tactics. The strategy they employ revolves around emphasizing and sensationalizing fears surrounding AI. Using these fears as a pretext, they advocate for increased government control over AI regulations and initiatives.
This approach raises significant concerns as it could curb personal liberties and hinder innovation and development within the AI sector. It is essential to address the ethical and safety implications of AI. However, utilizing fear to advance regulatory measures may inadvertently stifle progress, limit competition and concentrate power among a few well-funded entities.
The concern is that this approach may inadvertently concentrate influence and control in the hands of a select few, potentially at the expense of the broader public’s interests and freedoms. That prospect underscores the importance of critically examining the motivations and consequences of AI policy advocacy.