
Collaborative Models for Understanding Influence Operations: Lessons From Defense Research

by JACOB SHAPIRO, MICHELLE NEDASHKOVSKAYA, and JAN OLEDAN, on Jun 29, 2020 9:00:00 AM

This article was originally published by the Carnegie Endowment for International Peace.

Introduction

Social media has proven itself an essential tool for catalyzing political activism and social change in the United States and around the world.1 Yet the very features that make it so useful to those seeking to advance the greater good—scalability, mobility, and low costs of entry—also make it prone to manipulation by malign actors who use it to conduct influence operations and spread divisive rhetoric. These bad actors looking to sway public opinion include both fringe groups and well-funded, highly staffed government institutions, and they have been prolific in recent years.2 This new type of statecraft is but one example of the wide-ranging impacts social media has on society, a topic that demands greater research and long-term institutional investment.

A key barrier to expanding the knowledge base on influence operations is getting credible, reproducible, scientific research done with highly sensitive data. This challenge can in principle be addressed either by company staff (insourcing) or by outside academics (outsourcing), but both routes have limitations.

Despite the expressed desire of and clear incentives for social networks and internet platforms to support better research, insourcing such research is difficult. Common challenges include: getting personnel and budgets for research that lacks an obvious pathway to product, managing data privacy concerns, establishing the credibility of studies, grappling with the difficulty of starting projects that might reflect poorly on platforms, and overcoming the limitations on work that crosses platform boundaries because of a widespread reluctance among social media companies to share data with competitors. Recruiting top-tier talent can also be a challenge, given that many researchers are reluctant to work at such companies for fear they will not be able to publish or that their independence will be questioned.3 Because of such obstacles, companies are doing little internal research on long-term, noncommercial issues, at least relative to the massive volumes of data produced every second of every day.4

Outsourcing research by sharing data with academics introduces a different set of challenges.

Technology companies face a wide range of legal and reputational risks when they allow outsiders to work with individual user account data. Previous attempts to circumvent these obstacles by outsourcing research have also fallen well short of expectations. To cite one example, in 2018, Facebook and a consortium of foundations along with scholars from Harvard, Stanford, and other universities established Social Science One. The organization’s Election Research Commission followed a model in which a group of scholars worked with Facebook to identify data that could be safely anonymized and then released to researchers—a task that turned out to be much harder to operationalize than anticipated.5 Efforts to establish data-sharing institutions analogous to the U.S. government’s Federal Statistical Research Data Centers (RDCs) have not yet borne fruit for the wider academic community (the Stanford Internet Observatory, for example).6 And while ad hoc collaborations have solved some of the challenges discussed above, they are typically based on personal relationships that do not scale.7

Both researchers and industry actors (a shorthand term for social media companies and internet platforms) need new models for thinking about how to build durable organizations that can draw on a wide range of expertise, pivot to new problems, remain stable enough to attract top talent, and work flexibly with a plethora of highly sensitive information. To maximize knowledge development, these organizations need to address incentive-related issues at multiple stages in the research process—from conceptualizing to resourcing to data sharing to publishing. Such organizations also need to enable cross-platform work in ways that no existing system does.

The long-running relationship between academic researchers and the U.S. defense community provides several institutional models that can serve as inspiration for developing new solutions to fill this gap. These rich forms of ongoing collaboration almost all sit somewhere between the Platonic open science ideal on the one hand and proprietary in-house research on the other. They involve compromises and the clever use of contractual, financial, and reputational mechanisms to connect skilled researchers with hard problems while giving them the resources necessary to make progress. These models could shape how industry actors think about and seek to catalyze research on issues ranging from combating malign influence operations to mitigating the psychological costs of distraction.

Taken as a whole, the defense research ecosystem has largely solved a range of tasks that need to be addressed in industry-academia collaboration (or at least their defense equivalents). Three stand out as especially critical in light of recent experiences:

  • Task 1: maintaining credibility and engaging external researchers;
  • Task 2: safeguarding data while enabling cross-platform research; and
  • Task 3: overcoming structural barriers, including:
    • insufficient decision cycles at the top,
    • reluctance to study certain topics,
    • product-oriented culture,
    • and short-term focus.8

Throughout this paper, these three tasks serve as a framework for evaluating different initiatives and the issues they do, or do not, solve. The remainder of the paper explores lessons from the rich history of defense-academia collaboration, providing examples that could inform thinking about how to better support research on online influence operations, among other topics related to emerging technologies. The first section briefly discusses the limitations of existing models for addressing the three tasks outlined above, covering both the challenges of insourcing and the difficulties of outsourcing. Next, the paper examines formal and informal models of collaboration from other fields, primarily defense and intelligence but also labor economics. These models are familiar to some but are not well-known in the wider community working on influence operations. The final section concludes with a discussion of the benefits of intermediate organizations and five concrete principles for setting them up.

Limitations of Existing Models

In recent years, there have been several efforts to better understand how social media can be used for malign purposes. The major social media platforms have all established internal research groups, several think tanks have created research arms, and academics have developed a number of models in parallel. Collectively, these institutions have cultivated a wide range of valuable knowledge, but they have also fallen short in important ways.

CHALLENGES OF INSOURCING

One seemingly clear solution to the challenge of studying the impact of social media on society is for firms to undertake the work themselves by insourcing. Unfortunately, while social media and internet platforms obviously must play a critical role in these investigations, they face specific constraints that hamper their ability to drive scientific research on multiyear time frames.9

Industry actors face real challenges producing credible research (see task 1 above), as evidenced by the lukewarm public reaction to large platforms’ significant steps to increase their internal capacities for combating online influence efforts. Facebook, for example, has hired hundreds of staff and invested in a range of new technologies to help target what it calls “coordinated inauthentic behavior” across its products.10 Google has been supporting more effective journalism through its Google News Initiative while working on proactive strategies for keeping disinformation out of search results.11 And Twitter has been shutting down accounts associated with online manipulation and publishing extensive data to contribute to research on state-sponsored influence operations.12 Despite these recent initiatives,13 social media companies continue to come under fire for their failure to quell inauthentic behavior.14

Why is that the case? Part of the answer is surely the fallout from recent scandals, which have shaken public trust in industry actors, exacerbating systemic obstacles to credible research.15 Trust in Facebook, for example, reportedly declined by 66 percent in the wake of the major 2018 privacy scandal involving the use of the company’s data by Cambridge Analytica.16 Since then, the platform has also come under fire for releasing a Facebook Research app that granted the company expansive access to users’ mobile devices.17 Such scandals have dramatically impacted public perceptions of industry actors, thereby magnifying concerns regarding the independence and ethics of internal research.18 They also draw attention to companies’ history of incomplete disclosures, which raises questions about their willingness to publish findings that would reflect poorly on their impact or policies.19 That history also elevates concerns about file-drawer bias—the distortion of overall knowledge when null results are less likely to be published.20 Because companies have strong financial incentives to suppress unfavorable research findings, purely insourced research will likely always be met with suspicion.

The core of task 2, safeguarding data, is relatively straightforward for purely internal research. But the second half of this task (enabling cross-platform research) makes maintaining security much harder.21 By definition, cross-platform work cannot be purely insourced. This poses a real challenge because sophisticated social media manipulators operate across many platforms at once, so the ability of any single company to identify and target them is inherently limited.

Moreover, fully understanding the structure and impacts of influence operations will likely require working with information from smaller platforms that typically have limited analytical resources. Tracking the profiles and activities of bad actors across platforms is important for understanding how influence campaigns are managed, but researchers cannot reliably assess their impact if they systematically lack exposure to data from smaller platforms. Yet many such firms lack the resources to make the kinds of investments in identifying online manipulation that larger companies have made. Insourcing alone simply cannot suffice, at least not without dramatic improvements in cross-platform information sharing.

Finally, industry actors have not been able to overcome the range of structural barriers that task 3 entails. Even when relevant actors intend to do so, many of their efforts have been stymied or have taken far longer than expected because they require senior leaders’ attention for coordination across different divisions. Projects expected to yield findings unfavorable to firms (and therefore unpopular with some senior executives) are put off or under-resourced. And getting research plans approved is often complicated by the frequent internal reorganizations endemic to most social media platforms. Most of these companies have grown amid a period of extremely rapid change, leading to a culture of frequent personnel turnover and reorganization.22 This makes executing multiyear research projects extremely difficult, as each leadership change or restructuring necessitates another round of coalition building for such projects.23 High turnover also creates obstacles related to talent recruitment. People with the skills to execute the most challenging basic research typically want to work in places where they can focus on one problem set for a long period of time.

In some ways, the most significant structural barrier relates to the short-term, product-focused culture prevalent at most social media companies.24 The connection between such research inquiries and companies’ products is often unclear at the start of basic research projects (indeed the outcome is by definition unknown—otherwise it would not be research), so they are a hard sell at many companies. That uncertainty also makes it hard to defend personnel and budgetary allocations when unexpected product-related issues come up (if, for example, a major software release has unanticipated consequences for other products and requires reallocating engineering resources). These challenges are exacerbated when projects require coordination across multiple divisions. In addition to a strong product-oriented culture, most social media platforms also tend to focus on the short term, especially on results for the next quarter, half-year, or year. Though characteristic of many dynamic, for-profit companies, this feature makes it hard to execute projects that last several years, as many social science projects do.

Given the myriad challenges associated with the lack of trust in technology companies, data privacy concerns, limitations on cross-platform research, and structural barriers, insourcing alone cannot close the research gap. This leaves several unanswered questions that are important to democracy: What effect (if any) do influence operations have on audience decisionmaking? Do existing countermeasures against influence operations work? And could platforms’ content moderation policies inadvertently stifle political speech by U.S. citizens? Addressing such questions will require innovative approaches that go beyond insourcing.

DIFFICULTIES OF OUTSOURCING

Outsourcing poses a different set of challenges. Industry actors working with outside organizations have achieved some notable successes. For example, the Digital Forensics Research Laboratory, which is supported by Facebook information sharing, produces some of the richest publicly available data sources on troll tactics, techniques, and procedures. Unfortunately, more often than not, data privacy concerns stymie such forms of collaboration.

The Social Science One initiative exemplifies these challenges.25 Social Science One was specifically designed to credibly engage outside researchers (task 1) through a three-stage process. First, a distinguished academic commission, the so-called Election Research Commission, would serve as a trusted third party, understanding the needs of the academic community while enjoying full access to the company’s proprietary data so that it could identify data that would serve scholarly goals. Then the board and companies (initially Facebook) would work to anonymize the data to preserve user privacy. Lastly, outside academics whose research plans passed a peer review process would gain data access and the ability to publish their findings without the company’s prior approval, a measure enacted to support research independence.26 Impartiality concerns would also be allayed by funding the initiative with support from an ideologically diverse group of charities. This approach was designed to enable high-credibility research while protecting user privacy as well as firms’ closely guarded trade secrets.

While Social Science One made admirable strides toward surmounting the issues inherent to in-house approaches—and while it supported important data releases and promising studies—it also faced a host of unanticipated challenges that ultimately prevented it from achieving many of its stated goals. Most critically, the group’s leadership underestimated the difficulty of addressing privacy concerns associated with releasing the initiative’s first data set.27 Facebook was unable to give researchers the promised fine-grained data because large-scale anonymization proved technically and legally more challenging than anticipated. The limited data that Social Science One could provide paled in comparison to what had been promised, leading some funders to pull support from the initiative.28

Ultimately, operational hurdles prevented Social Science One from addressing task 2 and task 3. It was not able to safeguard data in the manner needed to get the most out of the academic researchers. It also did not address many of the long-term structural barriers that go beyond data access (nor was it intended to do so, except by serving as a demonstration of one possible approach). Social Science One required a large, bespoke, one-off effort at data anonymization that advanced the state of the art on that topic, but it did not create a set of durable organizational processes.

Of course, the litany of data-sharing problems that can stymie research extends well beyond anonymization. Even highly competent companies can have horribly messy data. Firms often have not standardized data across products, compounding the challenges of sharing and making sense of it.29 And documentation is usually sparse and incomplete compared to what academic researchers need. Resolving those problems within firms before data is shared can require costly engineering efforts as well as collaboration across business units. These additional complications mean that researchers need to develop deep, long-term, collaborative relationships to be able to address the third task. Otherwise they will have a hard time knowing what kinds of data they can, or cannot, expect to have for their research, or what to make of the data that is released.

The failures of outsourcing up to this point should not stop the search for institutional frameworks that would allow for data sharing by industry actors. But the most prominent efforts that firms and academic researchers have undertaken are corner solutions, either purely insourced or focused on pushing data out in an open science framework, and both approaches will likely continue to falter in important ways. That state of affairs should push industry and the research community toward innovative institutional solutions.

Models from the Defense and Security Fields

The long history of cooperation between academic researchers and the U.S. government in several areas, including defense and security, provides a rich set of examples for how to address the three key tasks identified in the introduction:

  • Task 1: maintaining credibility and engaging external researchers;
  • Task 2: safeguarding data while enabling cross-platform research; and
  • Task 3: overcoming structural barriers.

This section first highlights the role of informal mechanisms in fostering productive relationships between academic and defense institutions. It then reviews a number of formal organizations that have facilitated research projects that could neither be fully insourced by the government nor outsourced to academic institutions through competitive research grants.

INFORMAL MECHANISMS

Personal interactions between researchers and practitioners have long been a key part of the research base underpinning U.S. defense policy. Such relationships provide scholars with the deep knowledge of often highly sensitive national security matters necessary to identify which questions can be answered with the academic’s craft.

Foundational work on nuclear strategy by Nobel laureates Thomas Schelling and Albert Wohlstetter, for example, was deeply informed through engagement with the operational nuclear weapons community while at the RAND Corporation.30 Similarly, critical work by Scott Sagan on the inherent organizational limits to nuclear weapons safety drew on his time as a Council on Foreign Relations (CFR) International Affairs Fellow working for the Joint Chiefs of Staff and later as a consultant in the same office.31 New research by Caitlin Talmadge on the potential for nuclear escalation in conventional wars draws on dozens of interviews conducted with current and former officials, many of whom were willing to speak on the basis of interpersonal ties formed during Talmadge’s time as a CFR Stanton Nuclear Security Fellow, as well as her interlocutors’ tours as government fellows at think tanks and universities in Washington, DC.

To be sure, there are supporting institutions that nurture such long-term interpersonal ties. The most notable on the academic side is the CFR International Affairs Fellowship (IAF), which has funded approximately 600 fellows since 1967. IAF places mid-career foreign policy professionals, including academics, in new roles to expose the public sector to scholars and vice versa. Some of the leading academic research on the long-term implications of drones and artificial intelligence, for example, would not have happened absent connections and background knowledge that Michael C. Horowitz developed during his IAF stint.32 On the military/defense side, the Army War College Military Education Level 1 (MEL-1) Fellows program is one of many funded educational opportunities through which active-duty personnel can build relationships with the research community. MEL-1 fellows spend a year at universities such as Harvard, Princeton, and Stanford or think tanks such as the Brookings Institution. By giving researchers and their colleagues in the defense policy community opportunities to build trust and establish shared knowledge, these programs help lay the groundwork for data sharing and for overcoming structural barriers (task 3).

Informal relationships are also at the core of the last decade’s flourishing body of research on insurgencies and irregular warfare. One of the authors of this paper, Shapiro, has been part of that effort as co-director of the Empirical Studies of Conflict Project (ESOC),34 a multi-university consortium based out of Princeton University.35 Since 2009, ESOC has supported more than one hundred research papers in various ways, including: working to declassify and build data on combat incidents in Afghanistan, Iraq, Mexico, and the Philippines;36 collaborating with U.S. Defense Department agencies to release internal documents from terrorist organizations and co-authoring research on them with scholars from the RAND Corporation;37 developing qualitative data on program implementation in Iraq;38 and working with multiple state-level bureaucracies to build information on contracting and development program execution in India.39

The ESOC experience highlights a number of principles for how academics can effectively engage with industry actors on highly sensitive topics.40 First and foremost, regular, ongoing conversations are the best way to identify relevant data and research possibilities.41 Researchers should get out there and understand their potential partners’ problems before they ask for data. Such cooperative engagement helps researchers make more refined, precise requests—a critical consideration when partners have other responsibilities beyond supporting research, as they always do—and such back-and-forth conversations also create a set of shared concerns that help make partners more willing to spend time answering the kinds of unanticipated questions that come up any time one digs into new kinds of data.

Second, researchers should be flexible and expect projects to involve extensive iterations and phases of discovery. Many of the projects ESOC supported entailed multiple cycles through the process of identifying a question of interest to both sides, running initial analysis, figuring out that the data have various quirks and inconsistencies that must be understood to make sense of the results, providing initial results to the operational partner, asking follow-up questions, and revising the initial question. Building the kind of trust-based relationship that enables such iteration requires spending time in the operators’ shoes. Asking operational partners to simply throw data over the wall rarely leads to good studies, whether those partners are government organizations or technology firms.

Third, researchers should be sensitive to the operational environment facing their partners.42 As a practical matter, that means working to align cadences, questions, and interests. The key to aligning cadences is recognizing that large parts of scholars’ early research process—scoping out new problems and organizing existing knowledge on a topic—can be exceptionally helpful to those building out tools on short time frames. By sharing what is learned in the preparations to do the work—for example, what the research literature does or does not say about a given policy’s likely impact—researchers can earn substantial good will. When combined with breaking down big research questions into scientifically meaningful components that are relevant today, such informal consulting can help generate momentum for the data sharing needed to address larger questions.

Aligning on questions and interests requires translating basic research questions into terms that are meaningful to partners, whether they are staff officers in the military or product managers at technology companies. At a fundamental level, informal cooperation means horse-trading data access, support, and information for helping a given firm with research-driven insights. If scholars cannot make the answers meaningful, they have nothing to trade.

Finally, any informal collaboration requires patience with the logistics of securing data. There are often a wide range of legal questions surrounding access to data, whether through declassification of national security data or the release of sensitive company information. These can often be finessed—in the defense space that often means forgoing release of some potentially useful fields—but doing so requires a clear-eyed understanding of the rules.

Even when data can be released, many partners will not have key pieces of data in accessible formats. Production systems are rarely engineered for long-term storage, retrieval, and aggregation. Awareness of these limitations is important: researchers who make infeasible requests give managers a good excuse to shut down potential studies. Once data are released, high-reliability research still requires a detailed understanding of the data-generating process. This can be a problem. Often the documentation needed for studies is not part of the operational data. Partners need to be highly motivated to share it because doing so often requires digging through old records, especially for studies that go back more than a year.

Interpersonal ties between the defense and academic communities have been key to furthering research on sensitive issues. Through iterative collaboration, researchers and practitioners have developed a rich understanding of each other’s limitations while generating innovative ideas. Over time, industry-academia collaboration could likewise build trusted relationships to stimulate research. To do so, the communities could draw upon lessons from programs that have successfully fostered trust and informal ties in defense research.

ENABLING INSTITUTIONS

While developing trusted relationships is an art, there are institutions that have turned it into more of a science. The Laboratory for Analytic Sciences (LAS) at North Carolina State University, for example, has enabled wide-ranging research collaboration between faculty and the intelligence community on sensitive security issues since its founding.43 Government personnel spend multiyear tours at LAS where they work with university faculty and industry partners to address specific problems facing their home organizations. LAS has overcome several challenges in the process, such as surmounting structural and cultural differences across organizations, accommodating different approaches and jargon when problem solving, and fostering mutual trust so personnel work toward shared goals.44

LAS has been largely successful at developing a structured approach to encouraging collaboration among actors with vastly different institutional backgrounds. It employs several distinct annual workflows that offer predictability to faculty and students, along with a consistent approach to building teams that include academics, industry actors, and government officials. Over time, LAS has learned the characteristics of effective project leadership for its specific setting and has developed a pipeline to bring such people onto both the government and academic sides.45 Because such extreme interdisciplinarity is not the norm for researchers or intelligence community personnel coming to work at LAS, the organization developed a dedicated collaboration team. That team advises leadership and facilitates research by suggesting process improvements to increase teamwork capacity across smaller units within the organization.46

The Defense Advanced Research Projects Agency (DARPA) is also a key enabler of informal ties because of how it staffs its R&D efforts. Established in 1958 within the U.S. Department of Defense to take on “high-risk, high-reward” R&D challenges, DARPA has solved a range of complex technical problems while creating connections among academics, government organizations, and industry.47 DARPA’s innovation model combines ambitious goals with a system of temporary project teams composed of interdisciplinary experts from industry, universities, and federal government organizations.48

DARPA projects are run by program managers, individuals recruited from outside the agency and tasked with developing, pitching, and then executing projects. DARPA program managers are term-limited to between three and five years, with an informal norm that no one does more than two tours in a career.49 That structure encourages entry by people with innovative ideas and then pushes them out to other sectors with the knowledge and connections gained during their tenure.50 Program managers join “to get something done, not build a career.”51 And if projects are not working, program managers can terminate them and reallocate funds to a new or existing project that demonstrates more promise.52

DARPA’s innovative, risk-taking culture is sustained by unusual hiring and contracting flexibility compared to other Department of Defense agencies.53 Importantly, these include the ability to hire experts from outside the government on short-term contracts.54 Overall, DARPA’s time-limited appointments and capacity to directly hire people with commercial or academic backgrounds into senior government positions encourages an exchange of ideas and perspectives that otherwise would never happen—a net benefit for all parties.

LAS and DARPA illustrate the broader principle that tackling research topics that cross organizational boundaries fosters long-term interdisciplinary relationships. They have solved many of the contracting and human-resources challenges involved in engaging external researchers and have overcome a range of structural barriers to insourcing research in the defense and intelligence communities.

FORMAL MECHANISMS

There are two key formal mechanisms for collaboration between the federal government and academia when it comes to economic and defense issues involving sensitive data: RDCs and Federally Funded Research and Development Centers (FFRDCs).55 This section looks at both types of organizations for inspiration, focusing in particular on how they address the three key tasks identified in the introduction.

Federal Statistical Research Data Centers

RDCs enable individual researchers to work with highly sensitive, government-collected information, including individual-level survey data and firm-level responses to Bureau of Labor Statistics surveys.56 These secure facilities are managed by the Census Bureau at twenty-nine different locations across the United States, including at universities, Federal Reserve Bank branches, and a few other research institutions such as the National Bureau of Economic Research. To ensure data security, RDC researchers are required to obtain Census Bureau Special Sworn Status by passing a background check and swearing to protect respondent confidentiality for life.57 These centers partner with more than fifty different research organizations, including universities, nonprofit research institutions, and government agencies. Thanks to their stringent security requirements, RDCs enable a wide variety of research using sensitive datasets such as firm-level surveys with identifying information.

RDC data have been used to study a wide range of topics over the years, including the effect of fraudulent financial reporting on employees, the potential impact of adding citizenship questions to the 2020 census, and the ways in which having children shapes employment dynamics for U.S. women.58 These studies would be impossible without academic access to sensitive individual and firm data.

RDCs work for two reasons. First, by controlling the computing infrastructure on which analysis is conducted, the RDCs can ensure data do not leak. This way, firms are comfortable sharing sensitive data on Bureau of Labor Statistics surveys even though they know researchers may eventually work with the data. Second, researchers can plan projects before they go to RDCs because extensive data dictionaries are available, even when data themselves cannot be made public.

That institutional structure makes it possible to engage external researchers (task 1) and safeguard data (task 2). Even risk-averse scholars engage with confidence on projects that rely on Census Bureau and Bureau of Labor Statistics data at the RDCs because they can plan projects before accessing the data, and once cleared, they know the access will be sustained.59 Data sent to the RDCs is safeguarded through the combination of controlled systems and vetting.

RDCs or similar structures would not, however, address the full range of issues in task 3. The analytical work is done by scholars with no contractual obligation to the RDCs, so these institutions can only support research of inherent interest to academic researchers. Because there are a wide range of issues that are critical for society and industry, but not necessarily of academic interest, a different solution is required when it comes to political influence operations and other aspects of industry’s impact on society.

FFRDCs and UARCs

The history of FFRDCs starkly illustrates how trusted intermediary institutions can meet dynamic and rapidly evolving research needs. FFRDCs “provide federal agencies with R&D capabilities that cannot be effectively met by the federal government or the private sector alone.”60 They are operated on a not-for-profit basis by contractors (including universities and other not-for-profit organizations such as the RAND Corporation and the MITRE Corporation, both of which administer FFRDCs), typically through five-year renewable contracts.61 A given FFRDC is subject to special rules on procurement, contracting, and function, complying with the provisions governing the particular federal agency that sponsors it.62 In particular, these entities help meet long-term federal research and development requirements with minimal commercial conflicts of interest, while providing a home for highly specialized personnel.63 Industry actors should draw from the FFRDC experience—in particular the combination of clear sponsor guidelines, strong prohibitions against other commercial work, and time-limited yet flexible contracts—for institutional models that could address key research needs.

The management and governance of FFRDCs are formalized in Section 35.017 of the Federal Acquisition Regulation.64 Sponsor agencies (such as the Department of Defense) are responsible for oversight of their FFRDCs and must conduct annual audits, performance assessments, and reviews before renewing agreements.65 Currently, each sponsor agency has its own management processes, and there is no fixed interagency procedure. Annual research plans must be created and approved by the sponsor agency and the corresponding FFRDC before workloads are delegated and hours assigned for tasks.66

Similar to FFRDCs, university-affiliated research centers (UARCs) also provide specialized research and expertise to their respective sponsor agencies.67 While UARCs are not formally defined in federal law, the Department of Defense has codified procedures and rules on their management and operations.68 UARCs must be university-affiliated and operate out of university or college campuses, have education as a core part of their mission, and remain flexible in terms of competing for public or private contracts.69 Currently, there are fourteen Department of Defense–sponsored UARCs.70

Both types of institutions can be traced back to World War II, when scientists, engineers, and academics mobilized to support the war effort. Early in the Cold War, such examples of collaboration were formalized in FFRDCs whose mission was to address national security research requirements. The FFRDCs took on niche activities that could not be accomplished directly by the government or by for-profit firms.71

The Department of Defense was the first agency to use FFRDCs, as it saw the need for “policy guidance for operations and strategic planning” and “unbiased technical guidance and expertise for major systems developments.”72 As the federal government research agenda has changed over the decades, so has the role and mission of FFRDCs. There have been 123 FFRDCs since 1948, though only forty-two are currently active and sponsored by thirteen federal agencies, according to the National Science Foundation master list.73

That flexibility is enabled by rules governing FFRDCs, which provide them with the ability to engage with new challenges as they arise and to shut down when problem sets change.74 Under the Federal Acquisition Regulation, sponsor agencies are required to reevaluate their FFRDC agreements at least every five years, providing a measure of stability while enabling adaptations to the changing security environment.

FFRDCs address a wide range of issues beyond traditional national security concerns, including terrorist threats, cybersecurity issues, protection of U.S. information technology infrastructure, healthcare, environmental issues, and civil infrastructure modernization.75 Sometimes these institutions respond to emergency issues. FFRDCs affiliated with the Department of Homeland Security and the Department of Energy, for example, rapidly mobilized technology and explosives experts to assess threats and weaknesses in airport security systems following an attempted airplane bombing on December 25, 2009, and proposed a number of specific improvements.76 As the misinformation challenge posed by COVID-19, the disease caused by the new coronavirus, has so clearly highlighted, established institutions capable of expanding research capacity on emergent issues could provide significant value to technology companies. Creating such structures will be key to getting ahead of such problems in the future.

Another key structural element of FFRDCs has been their recruitment of academics through sabbatical periods, consulting arrangements, and competing on the academic job market for core permanent staff. These practices foster the cross-pollination of ideas between government and academia. In RAND’s early days, such cross-pollination arguably laid the foundation for the United States’ Cold War–era nuclear weapons strategy. The Institute for Defense Analyses (IDA) and other FFRDCs continue these practices to the present day. As with the CFR IAF program, defense-academia connections help the defense community address task 3 beyond the lifetime of any given project.

FFRDC CONTRACTING

The key to how FFRDCs have historically addressed all three key tasks lies in the guidelines of the sponsoring agency agreements. Clear but minimal requirements for these are set out in the admirably terse Federal Acquisition Regulation Section 35.017-1, which outlines the five necessary sections of a sponsoring agency agreement. These nine short sentences have enabled the creation of more than one hundred bespoke research institutions in the last forty years:

  1. A statement of the purpose and mission of the FFRDC;
  2. Provisions for the orderly termination or nonrenewal of the agreement, disposal of assets, and settlement of liabilities. The responsibility for capitalization of an FFRDC must be defined in such a manner that ownership of assets may be readily and equitably determined upon termination of the FFRDC’s relationship with its sponsor(s);
  3. A provision for the identification of retained earnings (reserves) and the development of a plan for their use and disposition;
  4. A prohibition against the FFRDC competing with any non-FFRDC concern in response to a federal agency request for proposal for other than the operation of an FFRDC. This prohibition is not required to be applied to any parent organization or other subsidiary of the parent organization in its non-FFRDC operations. Requests for information, qualifications, or capabilities can be answered unless otherwise restricted by the sponsor;
  5. A delineation of whether or not the FFRDC may accept work from other than the sponsor(s). If non-sponsor work can be accepted, a delineation of the procedures to be followed, along with any limitations as to the non-sponsors from which work can be accepted (other federal agencies, state or local governments, nonprofit or profit organizations, et cetera).77

Note how these regulations combine strict prohibitions on competing with commercial firms for federal government work with flexibility of purpose and method. The language does not place strict requirements on what kinds of data the FFRDCs can use, how their studies will be reviewed, what their specific personnel rules should be, or how they should be organized internally. The Federal Acquisition Regulation guidelines effectively allow each FFRDC and its sponsoring agencies to craft an institution that can address their specific problems. Many have built mechanisms for prioritizing research in collaboration with the sponsoring agency while also allowing entrepreneurial activity by researchers at the FFRDC. Those researchers can work collaboratively with potential government sponsors to define projects and execute them, effectively enabling personnel at supported agencies to flexibly access the research capacity at the FFRDCs.78

The Department of Defense, for example, incorporates guidelines for work performance within its sponsor agreements with FFRDCs. Broadly, proposed projects go through several stages to determine whether they are appropriate and align with a given FFRDC’s core competencies.79

Of course, the FFRDC structure is no panacea. FFRDCs have historically faced challenges and criticism from their customers, Congress, and academia. Challenges include the need to prioritize and defer projects due to congressionally imposed limits on annual working hours and staffing levels, as well as infrastructure modernization issues and concerns over competition with other military-funded projects.80 Historically, RAND and other FFRDCs faced criticism for their involvement in the Vietnam War, consistent with widespread anti-war sentiment at universities and among the general public.81 The study that aroused the most public ire showed that U.S. forces were succeeding in demoralizing enemy forces through repeated attacks, including large-scale aerial bombardment.82 That study was allegedly used to justify continuing combat operations, leading to protests against university associations with the military at many campuses. Some of those incidents escalated to physical assaults on facilities, including at least one bombing with casualties.83

Interestingly, that study was just one of a series seeking to better understand enemy motivation and morale. At least one of the studies in the series yielded opposing findings, suggesting that U.S. combat operations were ineffective in reducing enemy motivation and morale.84 That study’s publication and the controversy it caused within the government actually highlight the important role FFRDCs have played in bringing forward evidence that contradicts the preferences of senior policymakers.

Overall, FFRDCs’ contract arrangements have enabled research on sensitive topics with secret data while still allowing for an important measure of independence and the publication of heterodox findings. That combination could be helpful for making progress on the broad issue of influence operations, where the core data involve significant privacy concerns and the companies holding them need some control over research, yet society as a whole could benefit from findings that are uncomfortable for the platforms.

IDA AND LESSONS FOR CROSS-PLATFORM DATA SHARING

The IDA merits special mention because of its ability to work with government and private sector partners on projects involving proprietary commercial data without the protections of the national security classification system. The origins of the IDA lie with the Weapons System Evaluation Group (WSEG). The WSEG was established in December 1948 as a high-level advisory group to serve the Joint Chiefs of Staff and the Secretary of Defense.85 It was intended to combine military and civilian expertise to achieve three primary objectives: to bring scientific, technical, and operational military expertise to bear in evaluating weapons systems; to employ advanced techniques of scientific analysis and research; and to approach its tasks from an impartial perspective that transcended the various branches of the armed forces.

Eventually, as demands on WSEG continued to strain its small staff and limited resources, Department of Defense authorities considering the contractual alternatives available for WSEG realized university sponsorship could lend greater scientific prestige to the enterprise while attracting more civilian research analysts and enabling closer ties to the academic community.86 Toward this end, the Secretary of Defense and Joint Chiefs of Staff convened a group of leading universities to sponsor a nonprofit corporation to assist WSEG in addressing some of the country’s most pressing security issues. The organization, formally incorporated as the IDA, was established in 1956 by five university members.87 Cooperation between academia and government was key to establishing the new institution.

The IDA has dealt with proprietary information from multiple companies competing for multimillion- and sometimes multibillion-dollar contracts. That it can do so across multiple competing firms (for example, Lockheed Martin and Boeing) on multibillion-dollar procurement contracts is a testament to the power of its institutional model and the strict noncompete provisions discussed above. Personnel who work on technology assessments get to know their industry counterparts very well and have the authority to see full cost structures. Yet the IDA has remained a trusted interlocutor for cost-effectiveness analysis. Firms are comfortable sharing proprietary information with the IDA because they know the organization has strong security protections in place and that its contracts prevent it from monetizing any information it receives. All the incentives are aligned for protecting proprietary information. Critically, much of this information sharing happens without the legal protections of the national security classification system. It is enabled by the combination of clear contractual incentives and a long-run culture of treating proprietary data with great care.

Benefits of Intermediate Organizations

Lessons learned from the FFRDC experience can inform society’s widely shared objective of addressing influence operations on social media platforms. Democratic governments around the world are interested in better understanding the nature of influence operations to protect their societies and defend the integrity of their elections. Similarly, technology companies want to address this phenomenon because they face reputational and business risks for failing to mitigate malign influence operations.

A key challenge in addressing influence operations is building the stable, objective, independent R&D capacity necessary to tackle the problem. The long-running relationship between researchers and the U.S. federal government provides many lessons for how to do this. While formal institutions for government-academia data sharing have played a role, and while informal relationships have enabled a large body of excellent work over the years—some of which guided strategy for how to avoid the literal end of the world through nuclear conflict at the height of the Cold War—organizations that sat between the government and academia also played a central role.

HOW FFRDCS FIT IN

The structure of FFRDCs—the long-term commitment to the sponsor, a lack of commercial conflicts of interest, and the retention of high-skilled expertise—allows them to address all three critical tasks for tech-academia collaboration. They routinely work with classified and proprietary data (task 2). Their reputations and internal peer-review processes provide credibility even when full soup-to-nuts replication data cannot be made public in the tradition of open science (task 1).88 And, importantly, the close relationship between FFRDCs and their sponsoring agencies has not prevented them from publishing heterodox findings (task 3).

Over the years, FFRDCs have done research on a wide range of sensitive topics. Recent examples include the inherent challenges of assessing progress in counterinsurgency campaigns (citing mistakes made by U.S. forces in Afghanistan and Iraq) and the many failures of the federal government’s so-called war on drugs.89 Similarly, JASON, a program run out of MITRE though not technically an FFRDC, provided a highly controversial outside assessment of the Department of Energy’s Lifetime Extension Program for deployed nuclear weapons in 2009.90

When it comes to maintaining credibility and engaging external researchers (task 1), FFRDCs and related organizations have long provided a venue for developing skilled researchers and connections. And they have worked out standards regarding research integrity that create credibility. For example, a measure of independence from government manipulation of studies is baked into sponsoring agency agreements and internal standards, which typically specify mechanisms for peer review and limit the government’s ability to restrict publication. Such rules create contractual protections for researcher independence. While tacit government influence over FFRDC work is always present because researchers at FFRDCs have long-term relationships to maintain with officials at their sponsoring agencies, contractual independence has enabled these institutions to tackle a wide range of sensitive topics and regularly publish controversial findings.

As for safeguarding data while enabling cross-platform research (task 2), IDA’s success in managing data security and trust issues between multiple industry actors stands out. It demonstrates that properly structured contracts, combined with institutional prohibitions on other kinds of for-profit work, can address many concerns, even without the protections of the national security classification system.

Finally, the codified research initiation mechanisms at various FFRDCs often helped with overcoming institutional barriers (task 3) by allowing mid-level personnel in government to get work done without getting senior leaders’ approval, exempting them from the usually lengthy processes related to drafting proposals, getting authorization, and securing funding. Ironically, having structured processes for setting the research agenda ultimately enabled a nimble and dynamic research approach that cut through the bureaucratic obstacles typical of secure government initiatives. The entire procedure—from identifying an area of concern for R&D efforts to ultimately dispersing funds—was greatly simplified.

Certain FFRDC models also incorporated an element of competition to ensure efficiency.

In the Department of Defense approach, for example, offices could invite multiple FFRDCs to bid on the opportunity to pursue a given research objective. Allowing the requesting office to compare the capabilities of each and identify the best-equipped organization for the job served to keep quality up and costs down without the full set of lengthy contracting procedures required for open procurement.

Overall, FFRDCs and related institutions allowed the government to benefit from the contributions of top scientific talent. Early on, many were drawn by the organizations’ prestige. To this day, FFRDCs routinely attract scientists who do not want to work directly for the government, at least not for their entire careers.

Making the FFRDC model work required support in terms of federal government contracting rules, willing sponsors, and, most importantly, a strong demand for research over decades. There are more than seventy years of history behind the FFRDC ecosystem. The growth of these entities in the early post–World War II period was driven by the conviction that the government needed help from people who would not work for it directly but would work for one of these institutions. That growth was enabled by contracting rules in two respects. First, the rules prohibited FFRDCs from competing on for-profit contracts, which meant they posed no threat to companies in terms of their material interests. Second, the government retained ultimate release control for the most sensitive studies, limiting reputational and security risks.

To be sure, the FFRDC model and other intermediate solutions fall well short of the open-science ideal. They do not expand the set of researchers in a democratic manner (in the sense of allowing a diverse set of scholars to self-select into working with data). And peer review for classified work has to happen in a constrained manner, conducted primarily by others inside the intermediate institutions. But it is exactly because they compromise on some of the open-science ideals that they can overcome the kind of data-sharing constraints that currently stymie a wide range of potentially beneficial work.

FROM TASKS TO PRINCIPLES

This study began by identifying three key unmet tasks: maintaining credibility and engaging external researchers, safeguarding data while enabling cross-platform research, and overcoming structural barriers. In the defense community, a range of institutions have solved some or all of these issues. In particular, the basic principles laid out in the Federal Acquisition Regulation for FFRDCs have supported a rich ecosystem of organizations that have addressed a huge variety of scientific problems over seven decades.

Industry leaders and academics interested in countering influence operations should draw inspiration from that history and begin thinking about the principles that would allow a similar sort of ecosystem to flourish and address their problems. Such principles could also be useful in building organizations to tackle the wider set of practical research and development challenges around social media’s impact on society.

So what principles should guide the development of intermediate organizations that would bring similar capabilities to the challenge of countering malicious influence operations as FFRDCs brought to the defense community? Five stand out: longevity, collaborative prioritization, professional staff, noncompete provisions, and peer review.

  1. Longevity: Any new intermediate organization must have sustained funding for a number of years that is not subject to changes in sponsor priorities. This could be accomplished, for example, through the creation of a trust by several social media firms.
  2. Collaborative prioritization: Each organization should have a sponsoring agreement by which firms would articulate what topics they would want a new organization to work on and establish an annual process for allocating that organization’s resources.
  3. Professional staff: Core personnel should be drawn from a combination of research and tech communities, along the lines of the LAS model, with senior staff hired on multiyear contracts to provide the stability top-notch researchers will demand.
  4. No competition: Organizations should be nonprofits or B-corporations in structure, the by-laws of which enshrine strong privacy protections and restrict them from bidding on work for companies they are sharing data with or those firms’ competitors.
  5. Peer review: The ability to withstand the criticism of other scholars is the cornerstone of scientific credibility. These organizations need a set of standards for peer review, as well as prepublication advisory review and comment by the technology companies. The latter will ensure companies have a chance to provide feedback and reduce their concerns about erroneous findings. In this setting, peer review could be modeled on the Government Accountability Office’s protocols, which mandate that the organization solicit agency comments before publication and include responses to those comments in its reports.91 Properly institutionalizing research independence would go a long way toward enhancing the credibility of new organizations while also addressing recruiting challenges.

Work could begin today on model contracts and agreements to instantiate those principles. Having them would make it much easier for industry to support institutions that can carry out the vast volume of critical research that sits in the gray world between what companies can do themselves and what could be done by academics given data-sharing constraints.

If the long experience of developing the FFRDC and UARC ecosystem is any guide, a single brilliant model that solves all the problems inhibiting tech-academia collaboration on influence operations is unlikely to emerge. Instead, there will be a process of innovation in which the right set of principles enables many different solutions to be tried. Some will fail, but others will succeed and help address the problems posed by malign information operations. That kind of entrepreneurial process is one technology companies should be able to get behind.

Conclusion

While industry actors are making substantial efforts to address complex challenges like those posed by online influence operations, more must be done with an eye toward the future. Social media companies face obstacles to user trust and safety that will endure and likely intensify over time. This is perhaps more evident at the time of this writing than ever before, as misinformation surrounding the coronavirus pandemic continues to spread, threatening public health in countries around the world.

Nongovernmental organizations and academic institutions have responded quickly to some aspects of what United Nations officials have termed an “infodemic.”92 Fact-checking organizations in dozens of countries soon began debunking false claims, and a number of organizations, including ESOC, began collecting data on disinformation narratives.93 The major social media platforms took action to remove known misinformation, albeit with varying levels of success, sometimes spurred on by research showing their efforts were not yet highly effective.94 And researchers moved quickly to understand who was spreading coronavirus-related misinformation on some platforms. Encouraging as these ad hoc efforts may be, they surely fell short of what could have been done had there been established research institutions that could turn to the problem with deep data access, technical capacity, and personnel skilled in working with social media data.

Efforts to build institutions equipped with the needed research capacity and cross-disciplinary expertise to tackle misinformation across platforms could draw heavily on lessons from the long history of defense-academia collaborations. The diverse organizational ecosystem around defense-relevant R&D demonstrates that there are ways to meet the three critical tasks of this endeavor. Establishing knowledge-sharing institutions will be critical to addressing current problems facing technology companies—such as nation-state-led information operations and the increasing use of synthetic videos—and will build readiness to get ahead of future crises.

Please see the original article for cited material.
