Corey Recvlohe

Designing AI Policy

Artificial Intelligence presents an immense challenge for policy designers. Across contemporary culture, the specter of intelligent machines haunts the imaginations of job seekers, scientists, and industrialists. Contained within simple tweets or New Yorker exposés are anxieties over who will manage and deploy the leverage provided by big data sets and self-learning neural nets; AI destroying Earth's biomass is the new nuclear winter scenario, just beating out superbugs and alien invasions. Far closer than those distant fates, however, are practical day-to-day issues: labor market dislocation, demographic discrimination, weaponization, and many other critical areas.

This essay focuses on several sectors of human activity that will see significant impact from industrial-scale Artificial Intelligence, and on the difficulties policymakers will face as these new technologies seep into public and private life. Everything from self-driving shipping trucks to Chicago crime heat-maps to the targeting of terrorists with pilotless drones presents immediate questions with unclear answers. Additionally, to what extent is our fixation on doomsday scenarios hurting or helping public engagement with substantive policy questions? This analysis looks at several of these factors and their roles within the fields of policy design and AI.

Smart Labor

Of the many interstate highways across the nation, several corridors, from Oregon to Washington and from Florida up to Appalachia, support infrastructure for the world's first self-driving trucking routes. Based on a series of parameters including freight volume, traffic, and other factors, specific locations across the South and Northwest are primary targets for AI's first foray into large-scale autonomous driving. Development of these corridors will hasten demand for widening the scope of autonomous vehicles of all kinds, including shipping boats, cargo planes, and last-mile package delivery. This achievement comes at a price, however: it risks putting millions of blue-collar workers out of a job. That prospect creates a daunting challenge for policymakers. How can millions of workers be retrained to take advantage of new opportunities? And are those new opportunities themselves at risk of being eaten up by self-learning bots that burrow into white-collar work?
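To make the selection process concrete, corridor choice can be imagined as a weighted scoring problem over normalized features. The sketch below is purely illustrative; the corridor names, feature values, and weights are invented for this essay, not drawn from any real deployment study.

```python
# Hypothetical sketch: ranking candidate corridors for an autonomous
# trucking pilot. Corridor names, features, and weights are invented
# for illustration, not drawn from any real deployment study.

CORRIDORS = {
    "Pacific Northwest (Oregon-Washington)": {"freight_volume": 0.9, "traffic": 0.6, "weather_risk": 0.4},
    "Florida-Appalachia":                    {"freight_volume": 0.8, "traffic": 0.4, "weather_risk": 0.3},
    "Southern cross-state":                  {"freight_volume": 0.7, "traffic": 0.3, "weather_risk": 0.5},
}

# Higher freight volume argues for a corridor; congestion and weather count against it.
WEIGHTS = {"freight_volume": 1.0, "traffic": -0.5, "weather_risk": -0.7}

def corridor_score(features: dict) -> float:
    """Weighted sum of normalized corridor features (each in [0, 1])."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

# Rank corridors from most to least promising.
for name, features in sorted(CORRIDORS.items(), key=lambda kv: corridor_score(kv[1]), reverse=True):
    print(f"{corridor_score(features):+.2f}  {name}")
```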

Even journalism is not safe from artificially intelligent automation. The UK's Daily Mirror recently stated that it is integrating AI into its reporting process. Named Krzana, this mech-journo app scans the vast expanse of social media, local news, and blogs to spot stories bubbling up from the virtual ether. It may not be a direct replacement yet, but there is a distinct sense that these writers are training their successors. The threat is less immediate than the one facing truckers, but the feeling that AI will soon be writing articles is hard to shake. Perhaps journalists can be retrained to write code.
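In spirit, a tool like this watches many streams and flags topics whose mention rate suddenly outpaces their baseline. The sketch below is a generic, hypothetical illustration of that idea only, not Krzana's actual method; the function name and thresholds are invented.

```python
# Generic, hypothetical sketch of story discovery: flag terms whose
# recent mention rate spikes above a longer-run baseline. An
# illustration of the idea only, not Krzana's actual method.
from collections import Counter

def emerging_topics(recent_posts, baseline_posts, spike_ratio=3.0, min_mentions=5):
    """Return (term, count) pairs whose recent rate exceeds spike_ratio x baseline."""
    recent = Counter(w for post in recent_posts for w in post.lower().split())
    baseline = Counter(w for post in baseline_posts for w in post.lower().split())
    flagged = []
    for term, count in recent.items():
        recent_rate = count / max(len(recent_posts), 1)
        base_rate = baseline[term] / max(len(baseline_posts), 1)
        # A brand-new term has base_rate 0; the floor keeps the ratio finite.
        if count >= min_mentions and recent_rate >= spike_ratio * max(base_rate, 1e-6):
            flagged.append((term, count))
    return sorted(flagged, key=lambda item: item[1], reverse=True)
```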

Where this becomes interesting for policy design is in figuring out how to approach autonomous labor while still looking out for the interests and safety of everyday folks. As smart robots enter more and more fields, it is pivotal that regulators apply a critical eye to the world of machines, possibly with the aid of tools that themselves leverage AI and high-resolution data in the crafting of policy. Whatever the case may be, the time has arrived to seriously consider how smart robots affect work, no matter how simple or complex.

Discriminatory Data

Data, by design, are distinctions between discrete points in a model, whether those are objects stored in a database or the cumulative calculations that quantify those objects. But when discussing discrimination in the context of AI, we are talking about something different from technical definitions or professional jargon; in the realm of intelligent applications, data necessarily reflects the focus desired by the writers of code. Data becomes the digital history of exchange between user and service, generator and allocator, sensor and server. Policy questions about data for intelligent machines steadily confront the ever-present reality of how data are used and for what purposes, and of who negotiates the logic of design required to derive actionable use from smart machines.

Notable trends point toward building comprehensive data models, particularly for AI software that attempts to identify persons through facial recognition. This has become an acute issue for African Americans, who might be misidentified as retail shoplifters by AI trained on historical recordings of past violators. Even rare false positives can harm these groups in pervasive ways. It is imperative, as a matter of policy, that the chain from the encoders of intelligent machines, through the data captured from sensors, to the decisions derived from algorithmic processes be strictly and transparently cataloged, so that miscarriages of justice can be adjudicated when and where they occur.
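A minimal sketch of what such cataloging enables, assuming the system logs who was flagged and whether the flag was correct: computing false-positive rates per demographic group to surface disparate error. The records, group labels, and numbers here are hypothetical.

```python
# Hypothetical audit sketch: per-group false-positive rates for a
# facial-recognition shoplifter-matching system. Records are invented.

def false_positive_rates(records):
    """records: iterable of (group, flagged, actual_offender) tuples."""
    innocents, wrongly_flagged = {}, {}
    for group, flagged, actual_offender in records:
        if not actual_offender:  # only innocent people can be false positives
            innocents[group] = innocents.get(group, 0) + 1
            if flagged:
                wrongly_flagged[group] = wrongly_flagged.get(group, 0) + 1
    return {g: wrongly_flagged.get(g, 0) / n for g, n in innocents.items()}
```

The arithmetic shows why "rare" is cold comfort: a one-percent false-positive rate applied to 100,000 innocent shoppers wrongly flags 1,000 people, and if that rate runs at three percent for one group, the harm concentrates there.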

By recognizing that AI is capable of mistakes and is, in its current form, subject to the whims of human manipulation, we must find ways to make up for the failings that contribute to continued errors. Recognizing those errors, understanding how they rise to the surface, and working to counteract maleficence is not only essential but required if industrial AI is ever to gain wide societal acceptance.

Machine Learning War

On the battlefield, AI is incrementally becoming an essential part of how militaries engage with threats. Intelligent machines open the door to a Third Revolution in military affairs, transforming the theater-space of combat to include new forms of autonomous weaponry, full-spectrum reconnaissance, and predictive capabilities, giving campaigns and missions precision that far outpaces human-centered decision making. As of this article's publication, nearly one-third of all flying US military vehicles incorporate remote-control systems, whether for landing and takeoff, targeting, or navigation. Human officers still decide when to fire ammunition or release bombing payloads, but innovations in machine vision have already met the threshold for instantaneously identifying objects based on narrow classifications. The question becomes: do we keep people in the decision loop, or will machines gradually perform tasks once deemed the province of ethical actors alone?
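What keeping a person in the loop might look like, in the simplest terms: the classifier proposes, but nothing proceeds without explicit human confirmation. The sketch below is hypothetical; the threshold, types, and names are invented, and no fielded system is being described.

```python
# Hypothetical human-in-the-loop gate: machine vision proposes a target,
# but no engagement proceeds without explicit human confirmation.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Detection:
    label: str          # classifier output, e.g. "vehicle"
    confidence: float   # model confidence in [0, 1]

def authorize_engagement(detection: Detection,
                         human_confirms: Callable[[Detection], bool],
                         min_confidence: float = 0.95) -> bool:
    """The machine classifies; a human operator always decides."""
    if detection.confidence < min_confidence:
        return False                  # too uncertain to even surface
    return human_confirms(detection)  # the person stays in the loop
```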

One aspect central to the use of AI in warfare is civilian casualties. Many of those who advocate banning autonomous weapons are concerned that collateral damage would far outweigh any benefit from neutralizing threats. Efforts to ban military AI have accordingly focused on establishing international treaties prohibiting the development of smart systems used for killing. But it is naive to think that, as commercially available sensors and open-source algorithms spread, nations with strategic interests will refrain from developing intelligent weapons.

Policy decision-makers must be attentive to ethical problems and also to hard realities. Ever since the sharpening of rocks, weapons development has been a tactical advantage never taken for granted. While democratic societies may feel compelled to restrict technological innovation in this area, it is clear that not all interested parties will follow suit. It is crucial that developed countries take the lead in creating standards that military institutions will adopt across the globe, instilling cooperation that builds on the conventions established through the laws of war.


AI requires a cross-disciplinary approach to problem-solving. Without stakeholders, citizens, and professionals alike embracing cooperation and guidance, algorithmic applications of artificial intelligence will careen down a path of black markets. China has already dedicated itself to competing in AI design and implementation by 2030, training scientists and professionals and building hardware sufficient to support datacenters the size of small metropolitan cities. It is incumbent on countries that value leadership and practicality, especially the US, to innovate not only on technical frontiers but on ethical and moral planes as well. Delivering the promise of AI while mitigating its potentially catastrophic outcomes challenges not only individual sovereign states but all governments; achieving these goals demands rigorous, nuanced, fair, and institutional processes that create consistent and uniform capacities for dealing with the myriad problems that may arise from the advent of thinking machines.

By analyzing the different vectors of friction that can emerge from the implementation and administration of AI, we can begin to frame how to guide development in ways that are advantageous for those interacting with deployed autonomous vehicles, robotic soldiers, and data drawn from networks of tiny, ubiquitous sensors. Through robust and comprehensive measures of this kind, accountability can be evaluated by courts and operators alike, contributing to improvements and negotiating consent without contention.

With every passing day, the reality of ever-present semi-conscious machines draws closer. Think of the phone that listens for voice-to-text commands, the Amazon Echo that waits at your beck and call, or the Tesla sedan driving itself to your destination; it is not unreasonable to expect these smart systems to become even more integrated than they already are. The implications for finding meaning through work, for the enforcement of law and justice, and for the ancient custom of warfare are immense and total. Finding our path through this mechanical maze requires new forms of thought, perhaps even thinking that engages an analog organic mind rather than cold metal soldered between silicon chips.