
Artificial Intelligence and Autonomous Systems Legal Update (1Q19)

by Gibson Dunn, on Apr 24, 2019 4:15:11 PM

This article has been transferred from its original source. To read the original article in full, please click here.

Written by Gibson Dunn Staff, as featured on Gibson Dunn.

I.    U.S. National Policy on AI Begins to Take Shape
Under increasing pressure from the U.S. technology industry and policy organizations to present a substantive federal strategy on AI, in the past several months the Trump administration and congressional lawmakers have taken public actions to prioritize AI and automated systems.  Most notably, these pronouncements include President Trump’s “Maintaining American Leadership in Artificial Intelligence” Executive Order[1] and the creation of AI.gov.[2]  While it may be too early to assess the impact of these executive branch efforts, other executive agencies appear to have responded to the call for action.  For example, in February, the Department of Defense (“DOD”) detailed its AI strategy, and on March 6-7, the Pentagon’s research arm, the Defense Advanced Research Projects Agency (“DARPA”), hosted an Artificial Intelligence Colloquium to publicly discuss AI.[3]  The clear interest asserted by the Trump administration and growing traction within executive agencies should provide encouragement to stakeholders that the federal government is willing to prioritize AI, although the extent to which it will provide government expenditures to support its vision remains unclear.

A.    PRESIDENT TRUMP’S EXECUTIVE ORDER

On February 11, 2019, President Trump signed an executive order (“EO”), titled “Maintaining American Leadership in Artificial Intelligence.”[4]  The purpose of the EO was to spur the development and regulation of artificial intelligence, machine learning and deep learning, and to fortify the United States’ global position by directing federal agencies to prioritize investments in AI.[5]  Many observers interpreted the EO as a response to China’s recent efforts to claim a leadership position in AI research and development,[6] and particularly noted that many other countries preceded the United States in announcing national AI strategies.[7]  In an apparent response to these concerns, the Trump administration warned in rolling out the initiative that “as the pace of AI innovation increases around the world, we cannot sit idly by and presume that our leadership is guaranteed.”[8]

To secure U.S. leadership, the EO prioritizes five key areas:

(1) Investing in AI Research and Development (“R&D”): encouraging federal agencies to prioritize AI investments in their “R&D missions” to encourage “sustained investment in AI R&D in collaboration with industry, academia, international partners and allies, and other non-federal entities to generate technological breakthroughs in AI and related technologies and to rapidly transition those breakthroughs into capabilities that contribute to our economic and national security.”[9]

(2) Unleashing AI Resources: making federal data and models more accessible to the AI research community by “improv[ing] data and model inventory documentation to enable discovery and usability” and “prioritiz[ing] improvements to access and quality of AI data and models based on the AI research community’s user feedback.”[10]

(3) Setting AI Governance Standards: aiming to foster public trust in AI by using federal agencies to develop and maintain approaches for safe and trustworthy creation and adoption of new AI technologies (for example, the EO calls on the National Institute of Standards and Technology (“NIST”) to lead the development of appropriate technical standards).[11]

(4) Building the AI Workforce: asking federal agencies to prioritize fellowship and training programs to prepare for changes relating to AI technologies and promoting Science, Technology, Engineering and Mathematics education.[12]

(5) International Engagement and Protecting the United States’ AI Advantage: calling on agencies to collaborate with other nations but also to protect the nation’s economic security interest against competitors and adversaries.[13]

AI developers will need to pay close attention to the executive branch’s response to standards setting.  The primary concern driving standards is safety, and the AI Initiative echoes this with a high-level directive to regulatory agencies to establish guidance for AI development and use across technologies and industrial sectors, highlighting the need to establish “appropriate technical standards and reduce barriers to the safe testing and deployment of AI technologies”[14] and to “foster public trust and confidence in AI technologies.”[15]  However, the AI Initiative is otherwise vague about how the program plans to ensure that responsible development and use of AI remain central throughout the process, and about the extent to which AI policy researchers and stakeholders (such as academic institutions and nonprofits) will be invited to participate.  The EO announces that NIST will take the lead in standards setting: the Director of NIST shall “issue a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies,” with participation from relevant agencies as the Secretary of Commerce shall determine.[16]  The plan is intended to include “Federal priority needs for standardization of AI systems development and deployment,” the identification of “standards development entities in which Federal agencies should seek membership with the goal of establishing or supporting United States technical leadership roles,” and “opportunities for and challenges to United States leadership in standardization related to AI technologies.”[17]

Observers have criticized the EO for its lack of actual funding commitments, precatory language, and failure to address immigration issues for AI firms looking to retain foreign students and hire AI specialists.[18]  For example, unlike the Chinese government’s commitment of $150 billion for AI prioritization, the EO adds no specific expenditures, merely encouraging certain offices to “budget” for AI research and development.[19]  To begin to close this gap, on April 11, 2019, Congressmen Dan Lipinski (IL-3) and Tom Reed (NY-23) introduced the Growing Artificial Intelligence Through Research (GrAITR) Act to establish a coordinated federal initiative aimed at accelerating AI research and development for U.S. economic and national security.  The GrAITR Act (H.R. 2202) would create a strategic plan to invest $1.6 billion over 10 years in research, development, and application of AI across the private sector, academia and government agencies, including NIST, the National Science Foundation, and the Department of Energy (DOE)—aiming to help the United States catch up to other countries, including the UK, which are “already cultivating workforces to create and use AI-enabled devices.”  The bill has been referred to the House Committee on Science, Space, and Technology.[19a]

In April 2019, Dr. Lynne Parker, assistant director for artificial intelligence at the White House Office of Science and Technology Policy, noted that regulatory authority will be left to agencies to adjust to their sectors, but with high-level guidance from the Office of Management and Budget (“OMB”) on creating a balanced regulatory environment, and agency-level implementation plans.  Dr. Parker said that a draft version of OMB’s guidance likely would come out in early summer.[20]

For more details, please see our recent update President Trump Issues Executive Order on “Maintaining American Leadership in Artificial Intelligence.”

B.    AI.GOV LAUNCH

On March 19, 2019, the White House launched ai.gov as a platform to share AI initiatives from the Trump administration and federal agencies.[21]  These initiatives track along the key points of the AI EO, and ai.gov is intended to function as an ongoing press release.  Presently, the website includes five key domains for AI development: the Executive Order on AI, AI for American Innovation, AI for American Industry, AI for the American Worker, and AI with American Values.[22]

These initiatives highlight a number of federal government efforts under the Trump administration (and some launched during the Obama administration).  Highlights include the White House’s chartering of a Select Committee on AI under the National Science and Technology Council, the Department of Energy’s efforts to develop supercomputers, the Department of Transportation’s efforts to integrate automated driving systems, and the Food and Drug Administration’s efforts to assess AI implementation in medical research.[23]

C.    U.S. SENATORS INTRODUCE “ALGORITHMIC ACCOUNTABILITY ACT” TO ADDRESS BIAS

On April 10, 2019, a number of Senate Democrats introduced the Algorithmic Accountability Act, which “requires companies to study and fix flawed computer algorithms that result in inaccurate, unfair, biased or discriminatory decisions impacting Americans.”[24]  The bill stands to be the United States Congress’s first serious foray into the regulation of AI, and the first legislative attempt in the United States to impose regulation on AI systems in general, as opposed to regulating a specific activity such as autonomous vehicles.  While observers have noted congressional reticence to regulate AI in past years, the bill hints at a dramatic shift in Washington’s stance amid growing public awareness of AI’s potential to create bias or harm certain groups.[25]

The bill casts a wide net, such that many technology companies would find common practices to fall within the purview of the Act.  The Act would not only regulate AI systems but also any “automated decision system,” which is broadly defined as any “computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers.”[26]  This could conceivably include crude decision tree algorithms, as the sketch below illustrates.  For processes within the definition, companies would be required to audit for bias and discrimination and take corrective action to resolve any issues identified.  The bill would allow regulators to take a closer look at any “[h]igh-risk automated decision system”—those that involve “privacy or security of personal information of consumers[,]” “sensitive aspects of [consumers’] lives, such as their work performance, economic situation, health, personal preferences, interests, behavior, location, or movements[,]” “a significant number of consumers regarding race [and several other sensitive topics],” or a system that “systematically monitors a large, publicly accessible physical place[.]”[27]  For these “high-risk” topics, regulators would be permitted to conduct an “impact assessment” and examine a host of proprietary aspects relating to the system.[28]  Additional regulations will be needed to give these key terms meaning but, for now, the bill is a harbinger for AI regulation that identifies key areas of concern for lawmakers.
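To see how low that bar sits, consider the minimal Python sketch below (the function name and thresholds are invented for illustration): even a hand-written, three-branch decision tree that routes consumer loan applications is a computational process that facilitates a decision impacting consumers, and so arguably falls within the bill’s broad definition.

```python
# Hypothetical illustration of how broadly "automated decision system"
# sweeps: a crude, hand-written decision tree with invented thresholds.
# No machine learning is involved, yet it is a computational process
# that facilitates a decision impacting a consumer.

def triage_loan_application(income: float, credit_score: int) -> str:
    """Crude decision tree routing a consumer's loan application."""
    if credit_score < 600:
        return "decline"
    if income < 30_000:
        return "manual review"
    return "approve"

print(triage_loan_application(income=45_000, credit_score=700))  # approve
```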

The bill has some teeth—it would give the Federal Trade Commission the authority to enforce and regulate these audit procedures and requirements—but does not provide for a private right of action or enforcement by state attorneys general.[29]  While the political viability of the bill is questionable, Senate Republicans have also recently renewed their scrutiny of technology companies for alleged political bias.[30]  At a minimum, companies operating in this space should certainly anticipate further congressional action on this subject in the near future, and proactively consider how their own “high-risk” systems may raise concerns related to bias.  In addition, companies may also wish to consider whether and how they ensure that their voice is heard and considered in future legislative efforts.

D.    HOUSE SUBCOMMITTEE HEARS TESTIMONY ABOUT HOW AI CAN COMBAT FINANCIAL CRIME

The EO’s promised availability of governmental data may also prove beneficial for those in certain AI industries that are looking to expand their datasets beyond private data.[31]  This may be particularly relevant for agencies that have already expressed interest in data collection to ensure AI safety (e.g., in the context of the regulation of autonomous vehicles by the National Highway Traffic Safety Administration (“NHTSA”)).  Some AI businesses are now making their requests for data access known.  On March 13, 2019, the National Security, International Development and Monetary Policy Subcommittee heard testimony from Gary Shiffman, founder and CEO of an AI security firm, who urged the government to implement AI to combat financial crimes, money laundering, trafficking and terrorism, noting that the government plays an important, and perhaps necessary, role in building such AI systems by providing training data sets.[32]  In due course, companies whose products require access to public datasets may well be able to take advantage of emerging partnerships between the federal government and private sector.

E.    DOD AND DARPA DETAIL AI EFFORTS

On February 12, 2019, the DOD unveiled its AI strategy, which builds on the recent EO.[33]  The DOD’s chief information officer explained that “[t]he [executive order] is paramount for our country to remain a leader in AI, and it will not only increase the prosperity of our nation, but also enhance our national security. . . .”[34]  To that end, the DOD announced that it will adopt AI to maintain its strategic position.[35]  To operationalize that goal, the DOD will rely on the Joint Artificial Intelligence Center and has highlighted a key role for academic and industry partners.[36]

In early 2019, DARPA launched a major project called Guaranteeing AI Robustness against Deception (“GARD”), aimed at studying adversarial machine learning.  Adversarial machine learning, an area of growing interest for government machine-learning researchers, involves experimentally feeding input into an algorithm to reveal the information on which it has been trained, or distorting input in a way that causes the system to misbehave.  With a growing number of military systems—including sensing and weapons systems—harnessing machine learning, there is huge potential for these techniques to be used both defensively and offensively.  Hava Siegelmann, Director of the GARD program, told MIT Technology Review recently that the goal of this project was to develop AI models that are robust in the face of a wide range of adversarial attacks, rather than simply able to defend against specific ones.[37]
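As a concrete illustration of the second category of attack, distorting input so the system misbehaves, the sketch below applies the Fast Gradient Sign Method (a standard evasion technique, named here by way of example rather than drawn from any DARPA or GARD material) to a toy logistic-regression classifier.

```python
# Minimal sketch of an evasion-style adversarial attack (FGSM) on a toy
# logistic-regression classifier. Purely illustrative; not drawn from
# any DARPA or GARD material.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" model: weight vector and bias of a binary classifier.
w = rng.normal(size=20)
b = 0.1

def predict_proba(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A benign input the model classifies as class 1.
x = rng.normal(size=20)
if predict_proba(x) < 0.5:
    x = -x  # flip so the clean input is class 1

# Fast Gradient Sign Method: for a linear model, the gradient of the
# class-1 score with respect to the input is simply w, so stepping
# against sign(w) pushes the prediction toward class 0 while changing
# each input coordinate by at most epsilon.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)

print(f"clean input       -> P(class 1) = {predict_proba(x):.3f}")
print(f"adversarial input -> P(class 1) = {predict_proba(x_adv):.3f}")
```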

II.    Recent Bias Concerns for AI
As noted above, the recently introduced Algorithmic Accountability Act would require that companies audit any automated decision-making for bias and discrimination.  A number of similar developments at the national, state and international levels evidence the growing concern with this subject matter, and companies that currently use or are considering using AI to automate decision-making processes should track these developments closely.  We are closely monitoring the trends and developments in these areas and stand ready to assist companies’ efforts to anticipate and navigate likely future requirements and concerns to avoid improper bias and discrimination.

A.    THE AI NOW INSTITUTE AT NEW YORK UNIVERSITY PUBLISHES NEW REPORT, “DISCRIMINATING SYSTEMS: GENDER, RACE, AND POWER IN AI”

The AI Now Institute, which examines the social implications of artificial intelligence, recently published a report that examines the scope and scale of the gender and racial diversity crisis in the AI sector and discusses how the use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of re-evaluation.  The report includes recommendations for improving workplace diversity (such as publishing harassment and discrimination transparency reports, changing hiring practices to maximize diversity, and being transparent around hiring, compensation, and promotion practices) and recommendations for addressing bias and discrimination in AI systems (such as implementing rigorous testing across the lifecycle of AI systems).[38]

B.    SEVERAL GOVERNMENT AGENCIES SEEK TO ROOT OUT BIAS IN ARTIFICIAL INTELLIGENCE SYSTEMS

In companion bills SB-5527 and HB-1655, introduced on January 23, 2019, Washington State lawmakers drafted a comprehensive piece of legislation aimed at governing the use of automated decision systems by state agencies, including the use of automated decision-making in the triggering of automated weapon systems.[39]  In addition to addressing the fact that eliminating algorithmic bias requires consideration of fairness, accountability, and transparency, the bills also include a private right of action.[40]  According to the bills’ sponsors, automated decision systems are rapidly being adopted to make or assist in core decisions in a variety of government and business functions, including criminal justice, health care, education, employment, public benefits, insurance, and commerce,[41] and are often unregulated and deployed without public knowledge.[42]  Under the new law, in using an automated decision system, an agency would be prohibited from discriminating against an individual, or treating an individual less favorably than another, on the basis of one or more factors such as race, national origin, sex, or age.[43]  Currently, the bills remain in committee.[44]

In the UK, the world’s first Centre for Data Ethics and Innovation will partner with the UK Cabinet Office’s Race Disparity Unit to explore potential for bias in algorithms in crime and justice, financial services, recruitment and local government.[45]  The UK government explained that this investigation was necessary because of the risk that human bias will be reflected in the recommendations used in the algorithms.[46]

C.    ARTIFICIAL INTELLIGENCE ETHICS IN POLICING

Police departments often use predictive algorithms for various functions, such as to help identify suspects.  While such technologies can be useful, there is increasing awareness of the risks of bias and inaccuracy.[47]

In a paper released on February 13, researchers at the AI Now Institute, a research center that studies the social impact of artificial intelligence, found that police across the United States may be training crime-predicting AIs on falsified “dirty” data,[48] calling into question the validity of predictive policing systems and other criminal risk-assessment tools that use training sets consisting of historical data.[49]

In some cases, police departments had a culture of purposely manipulating or falsifying data under intense political pressure to bring down official crime rates.  In New York, for example, in order to artificially deflate crime statistics, precinct commanders regularly asked victims at crime scenes not to file complaints.  In predictive policing systems that rely on machine learning to forecast crime, those corrupted data points become legitimate predictors, creating “a type of tech-washing where people who use these systems assume that they are somehow more neutral or objective, but in actual fact they have ingrained a form of unconstitutionality or illegality.”[50]
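A toy sketch, with invented numbers not taken from the AI Now paper, makes the mechanism concrete: if patrols are allocated in proportion to recorded incidents, a precinct that suppresses complaints appears low-crime, and the manipulated statistic quietly becomes a “legitimate” predictor.

```python
# Toy illustration of "dirty data" feeding a naive predictive-policing
# model. All numbers are invented; this is not from the AI Now paper.

# Ground truth: both precincts actually experience the same crime level.
true_incidents = {"precinct_A": 100, "precinct_B": 100}

# Precinct A deflates its official numbers, e.g. by discouraging
# victims from filing complaints.
recorded_incidents = {"precinct_A": 40, "precinct_B": 100}

# A naive "predictive" model: allocate patrols in proportion to the
# recorded counts. The manipulation propagates straight into the output.
total_recorded = sum(recorded_incidents.values())
total_true = sum(true_incidents.values())

for precinct, recorded in recorded_incidents.items():
    predicted_share = recorded / total_recorded
    true_share = true_incidents[precinct] / total_true
    print(f"{precinct}: model allocates {predicted_share:.0%} of patrols "
          f"(true share of crime: {true_share:.0%})")
```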

III.    Autonomous Vehicles
The autonomous vehicle (“AV”) industry continues to expand at a rapid pace, with incremental developments towards full autonomy.  At this juncture, most of the major automotive manufacturers are actively exploring AV programs and conducting extensive on-road testing.  As lawmakers across jurisdictions grapple with emerging risks and the challenge of building legal frameworks and rules within existing, disparate regulatory ecosystems, common challenges are beginning to emerge that have the potential to shape not only the global automotive industry over the coming years, but also broader strategies and policies relating to infrastructure, data management and safety.

A.    LEGISLATIVE ACTIVITY AT FEDERAL LEVEL

As we reported in our Artificial Intelligence and Autonomous Systems Legal Update (3Q18), there was a flurry of legislative activity in Congress in 2017 and early 2018 towards a national regulatory framework.  The U.S. House of Representatives passed the Safely Ensuring Lives Future Deployment and Research In Vehicle Evolution (SELF DRIVE) Act[51] by voice vote in September 2017, but its companion bill (the American Vision for Safer Transportation through Advancement of Revolutionary Technologies (AV START) Act),[52] stalled in the Senate as a result of holds from Democratic senators who expressed concerns that the proposed legislation remains immature and underdeveloped in that it “indefinitely” preempts state and local safety regulations even in the absence of federal standards.[53]  So far, there have been no attempts to reintroduce the bill in the new congressional session, and even if efforts to reintroduce it are ultimately successful, the measure may not be enough to assuage safety concerns as long as it lacks an enforceable federal safety framework.

Therefore, AVs continue to operate under a complex patchwork of state and local rules, with federal oversight limited to the U.S. Department of Transportation’s (“DoT”) informal guidance. As we reported in our Artificial Intelligence and Autonomous Systems Legal Update (4Q18), the DoT’s NHTSA released its road map on the design, testing and deployment of driverless vehicles: “Preparing for the Future of Transportation: Automated Vehicles 3.0” (commonly referred to as “AV 3.0”) on October 3, 2018.[54]  However, while AV 3.0 reinforces that federal officials are eager to take the wheel on safety standards and that any state laws on automated vehicle design and performance will be preempted, the thread running throughout is the commitment to voluntary, consensus-based technical standards and the removal of unnecessary barriers to the innovation of AV technologies.

B.    LEGISLATIVE ACTIVITY AT STATE AND LOCAL LEVELS

Recognizing that AVs and vehicles with semi-autonomous components are already being tested and deployed on roads amid legislative gridlock at the federal level, thirty states and the District of Columbia have enacted autonomous vehicle legislation, while governors in at least eleven states have issued executive orders on self-driving vehicles.[55]  In 2019 alone, 75 new bills in 20 states have ‘pending’ status.[56]  Currently, ten states authorize testing, while fourteen states and the District of Columbia authorize full deployment.  Sixteen states now allow testing or deployment without a human operator in the vehicle, although some limit it to certain defined conditions.[57]  Increasingly, there are concerns that states may be racing to cement their positions as leaders in AV testing in the absence of a federal regulatory framework by introducing increasingly permissive bills that allow testing without human safety drivers.[58]

Some states are explicitly tying bills to federal guidelines in anticipation of congressional action.  On April 2, 2019, D.C. lawmakers proposed the Autonomous Vehicles Testing Program Amendment Act of 2019, which would set up a review and permitting process for autonomous vehicle testing within the District Department of Transportation.  Companies seeking to test self-driving cars in the city would have to provide an array of information to officials—for each vehicle they plan to test—including safety operators in the test vehicles, testing locations, insurance, and safety strategies.[59]  Crucially, it would require testing companies to certify that their vehicles comply with federal safety policies; share with officials data on trips and any crash or cybersecurity incidents; and train operators on safety.[60]

Moreover, cities—which largely control the test sites—are creating an additional layer of rules for AVs, ranging from informal agreements to structured contracts between cities and companies, as well as zoning laws.[61]  Given the fast pace of developments and the tangle of applicable rules, it is essential that companies operating in this space stay abreast of legal developments in the states as well as the cities in which they are developing or testing autonomous vehicles, while understanding that any new federal regulations may ultimately preempt states’ authority to determine, for example, safety policies or how they handle their passengers’ data.  We will continue to carefully monitor significant developments in this space.

C.    INCREASING FOCUS ON CONNECTIVITY AND INFRASTRUCTURE IN AV DEVELOPMENT

AVs rely on several interconnected technologies, including sensors and computer vision (e.g. radars, cameras and lasers), deep learning and other machine intelligence technologies, robotics and navigation (e.g. GPS).  As lawmakers debate how to integrate AVs into existing infrastructure, a key emerging regulatory challenge is “connectivity.”  While AV technology resides largely onboard the vehicle itself, and sensor systems are rapidly evolving to meet the demands of AV operations, fully autonomous vehicles nonetheless require sufficient network infrastructure to communicate efficiently with their surroundings (i.e. to communicate with infrastructure, such as traffic lights and signage, and vehicle-to-vehicle, collectively known as Vehicle-to-Everything communication, or “V2X”).[62]
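The schematic sketch below illustrates the basic V2X idea: infrastructure broadcasts its state, and an approaching vehicle folds that message into its planning. The message fields and planning logic are invented for illustration and do not follow any actual V2X message format; DSRC/ITS-G5 and C-V2X each define their own.

```python
# Purely schematic sketch of vehicle-to-infrastructure communication.
# Field names and planning logic are invented for illustration and do
# not follow any real V2X message standard.
from dataclasses import dataclass

@dataclass
class SignalPhaseMessage:
    """A traffic light's broadcast of its current state."""
    intersection_id: str
    phase: str                # "red", "yellow", or "green"
    seconds_remaining: float  # time until the phase changes

ASSUMED_RED_DURATION_S = 30.0  # toy assumption for the non-green case

def plan_speed(speed_mps: float, distance_m: float,
               msg: SignalPhaseMessage) -> float:
    """Pick a target speed given a broadcast signal-phase message."""
    time_to_light = distance_m / max(speed_mps, 0.1)
    if msg.phase == "green" and time_to_light <= msg.seconds_remaining:
        return speed_mps  # the vehicle will clear the light; hold speed
    # Otherwise slow down to arrive once the light is green again.
    return distance_m / (msg.seconds_remaining + ASSUMED_RED_DURATION_S)

msg = SignalPhaseMessage("intersection-42", "green", seconds_remaining=4.0)
print(f"target speed: {plan_speed(15.0, 90.0, msg):.1f} m/s")
```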

At present, there are two competing technical standards for V2X on the European market: ITS-G5 Wi-Fi standard and the alternative “C-V2X” standard (“Cellular Vehicle-to-Everything”).  C-V2X is designed to work with 5G wireless technology but is incompatible with Wi-Fi.  There is presently neither regulatory nor industry consensus on this topic.  A group of automakers, the 5G Automotive Association, now counts more than 100 members who argue that C-V2X is preferable to Wi-Fi in terms of security, reliability, range and reaction time.[63]  However, in April 2019, the European Commission proposed a legal act to regulate so-called “Cooperative-Intelligent Transport Systems (C-ITS),” backing the ITS-G5 Wi-Fi standard.[64]

By contrast, in the United States, the AV 3.0 guidelines acknowledged that private sector companies were already researching and testing C-V2X technology alongside the Dedicated Short-Range Communication (“DSRC”)-based deployments, but also cautioned that while V2X is an important complementary technology that is expected to enhance the benefits of automation at all levels, “it should not be and realistically cannot be a precondition to the deployment of automated vehicles” and that DoT “does not promote any particular technology over another.”[65]  This approach appears to be in line with the DoT’s overarching desire to remain “technologically neutral” to avoid interfering with innovation.  Nonetheless, in December 2018, the DoT announced that it was seeking public comment on V2X communications,[66] noting that “there have been developments in core aspects of the communication technologies needed for V2X, which have raised questions about how the Department can best ensure that the safety and mobility benefits of connected vehicles are achieved without interfering with the rapid technological innovations occurring in both the automotive and telecommunications industries,” including in both C-V2X and “5G” communications, which “may, or may not, offer both advantages and disadvantages over DSRC.”[67]

Meanwhile, AVs built in China—which has set a goal of 10% of vehicles reaching Level 4/5 autonomy by 2030—will support the C-V2X standard, and will likely be developed in an ecosystem of infrastructure, technical standards and regulatory requirements distinct from those of their European counterparts.[68]  In addition to setting a national DSRC standard, China plans to cover 90% of the country with C-V2X sensors by 2020.[69]  In 2017, the Chinese government called for more than 100 domestic standards for AVs and other internet-connected vehicles.  Instead of GPS, AVs will support China’s BeiDou GNSS standard, which requires different receiver chips to communicate with Chinese satellites.  Major Chinese cities also enforce license plate jurisdiction and have roads and lanes dedicated to specific vehicle types, allowing for more effective geo-fencing of AV testing and operating areas.  AV companies will have to engage with forthcoming standards and development plans from China’s Ministry of Industry and Information Technology, its AV-coordinating commission (the “Internet of Vehicles Development Commission”), and quasi-private industry groups.[70]

Given the lack of international (or even national) consensus and the potential burden of developing and installing different systems in vehicles for domestic markets and for export, companies operating in the AV space should remain alert to developments in this rapidly evolving landscape of technical standards and infrastructure.

IV.    Ethics and Data Privacy
The rapidly expanding uses for artificial intelligence, both personal and professional, raise a number of issues for governments worldwide and also for companies attempting to navigate an evolving ethics landscape, including threats to data privacy as well as calls for transparency and accountability.

A.    GOVERNMENT REGULATION OF ARTIFICIAL INTELLIGENCE

The United States continues to be a key player and dominant force in the development of artificial intelligence, and the U.S. government continues to identify AI as a key concern when it comes to cybersecurity and data privacy.  For example, the Office of the Director of National Intelligence recently highlighted, in its 2019 “National Intelligence Strategy” report, that U.S. adversaries benefit from AI-created military and intelligence capabilities, and emphasized that such capabilities pose significant threats to U.S. interests.[71]  But despite this key role in the development of emerging technologies, and the threats faced by the United States, there has been little by way of public guidance or regulation of AI, at least at the federal level.[72]

In contrast, the European Union (“EU”) has recently issued guidance on ethical considerations in the use of AI.  Following the implementation of its General Data Protection Regulation (“GDPR”) in 2018, the EU recently released a report from its “High-Level Expert Group on Artificial Intelligence”: the EU “Ethics Guidelines for Trustworthy AI” (“Guidelines”).[73]  The Guidelines lay out seven ethical principles “that must be respected in the development, deployment, and use of AI systems”:

(1) Human Agency and Oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.

(2) Robustness and Safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.

(3) Privacy and Data Governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.

(4) Transparency: The traceability of AI systems should be ensured.

(5) Diversity, Non-Discrimination and Fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.

(6) Societal and Environmental Well-Being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.

(7) Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

In addition to laying out these principles, the Guidelines highlight the importance of implementing a “large-scale pilot with partners” and of “building international consensus for human-centric AI.”[74]  Specifically, the Commission will launch a pilot phase of guideline implementation in Summer 2019, working with “like-minded partners such as Japan, Canada or Singapore.”[75]  The EU also intends to “continue to play an active role in international discussions and initiatives including the G7 and G20.”[76]

While the Guidelines do not appear to create any binding regulation on stakeholders in the EU, their further development and evolution will likely shape the final version of future regulation throughout the EU.  Therefore, the Summer 2019 pilot program, as well as any further international work between the EU and other partners, merits continued attention.

B.    DARPA PRIORITIZES ETHICS IN AI DEVELOPMENT

DARPA hosted an Artificial Intelligence Colloquium on March 6-7, 2019 in Alexandria, Virginia, to increase awareness of DARPA’s expansive AI R&D efforts.[77]  In the weeks after the colloquium, several news sources reported on DARPA’s AI research and technology.  In an interview discussing DARPA’s AI-infused drones that would be used to map combatants and civilians in the field, the agency discussed how ethics is informing its development and implementation of AI systems.[78]  DARPA highlighted that it met with ethicists before advancing technical development of the technology.[79]

C.    UN URGES BAN ON AUTONOMOUS WEAPONS THAT KILL

The United Nations Secretary-General António Guterres has urged restrictions on the development of lethal autonomous weapons systems, or LAWS,[80] arguing that machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.[81]  Subsequently, Japan pledged that it will not develop fully automated weapons systems.[82]  A group of member states—including the UK, United States, Russia, Israel and Australia—are reportedly opposed to a preemptive ban in the absence of any international agreement on the characteristics of autonomous weapons.[83]
