Schedule

9.00-9.30 Registration and Coffee
9.30-10.00 Welcome and Introduction

Panel: Remote Sensing
Chair: Kathrin Friedrich

10.00-11.00 Keynote
Jennifer Gabrys, Goldsmiths, University of London
“Instrumenting the Planet: Sensing and Actuating Remote Environments and Smart Cities”

11.00-11.30 Coffee Break

11.30-13.00
Isabell Schrickel, Leuphana University CCP | CGSC
“Climate Mediation – Shifting Scales of Atmospheric Intervention”
Luci Eldridge, Royal College of Art, London
“Glimpsing Mars from the Centre of the Image: Terrain Models and Rover Driving”
Carolin Höfler, Cologne University of Applied Sciences
“The Void: Feedback Spaces and Scale Differences Between Vision and Haptics”

13.00-14.00 Lunch

Chair: Moritz Queisner

14.00-15.00 Keynote
Jutta Weber, University of Paderborn
“Social Network Analysis in Data-Driven Warfare”

15.00-15.30 Coffee Break

15.30-17.30
Lucy Suchman, Lancaster University
“Situational Awareness and Meaningful Human Control”
Antoine Bousquet, Birkbeck, University of London
“A Ghost in the War Machine: Human Autonomy within Contemporary Military Architectures of Control”
Kate Chandler, Georgetown University
“Death by PowerPoint”
Nina Franz / Moritz Queisner, Image Knowledge Gestaltung, Berlin
“Co-operative Killing. Controlling humans and machines in remote warfare”

19.00 Conference Dinner

Chair: Nina Franz

10.00-11.00 Keynote
Timothy Lenoir, University of California, Davis
“Into Deep: The AI Explosion, Machine Learning and the Closeness of Remote Control”

11.00-11.30 Coffee Break

11.30-13.00
Matthias Bruhn, HfG Karlsruhe, Image Knowledge Gestaltung, Berlin
“Artificial Proximity”
Matteo Pasquinelli, HfG Karlsruhe
“Neural networks as control paradigm: Frank Rosenblatt and the birth of learning machines”
Panel respondents:
Lucy Suchman, Lancaster University
Antoine Bousquet, Birkbeck, University of London

13.00-14.30 Closing and Lunch

Abstracts

Instrumenting the Planet: Sensing and Actuating Remote Environments and Smart Cities

The drive to instrument the planet, to make the earth programmable not primarily from outer space but from within the contours of earthly space, has translated into a situation where there are now more “things” connected to the Internet than there are people. Sensors are connected and intelligent devices that typically translate chemical and mechanical stimuli such as light, temperature, gas concentration, speed, and vibration into electrical resistance, which in turn generates voltage signals and data. By sensing environmental conditions as well as detecting changes in environmental patterns, sensors generate remote stores of data that, through algorithmic parsing and processing, are meant to activate responses, whether automated or human-based, so that a more seamless, intelligent, efficient, and potentially profitable set of processes may unfold, especially within the contours of the smart city. Yet what are the implications of wiring up environments in these ways, and how does the sensor-actuator logic implicit in these technologies program not only environments but also the sorts of citizens and collectives that might concretize through these processes? I take up these questions through a discussion of material from Program Earth and the Citizen Sense research project to examine the distinct environments, exchanges, and individuals that take hold through these sensorized projects.

Glimpsing Mars from the Centre of the Image: Terrain Models and Rover Driving

This paper analyses the use of 3D imaging in Mars exploration and rover driving from an arts and humanities perspective. 3D images reconstruct both vision and landscape in order to help scientists and engineers get closer to a feeling of ‘being there’ on Mars. Data from NASA’s Curiosity rover will be the central focus; at human eye-level, the rover’s Navigation and Mast Cameras provide visions analogous to our own. 3D models are constructed from stereoscopic data captured by the rover at a specific time; as such, the models place Curiosity at the centre of the image whilst also highlighting the portions of the landscape as yet unseen by the camera’s lens.
Driving rovers on Mars requires a certain level of understanding of how to read these models; engineers encounter this otherworldly terrain through active engagement with the image, through prolonged and intensified looking. ‘Glitch’ theory and Maurice Merleau-Ponty’s studies on perception are used to frame primary research at the Jet Propulsion Laboratory (CA), foregrounding the importance of embodied perception and the necessity of making Curiosity’s blind spots evident. This paper will argue that despite endeavouring to reconstruct a sense of ‘being there’ on Mars, the space of the 3D image can only ever allow us to ‘glimpse’ Mars; the glitch obscures a reconstruction of landscape that is very much framed by technology.
The Void: Feedback Spaces and Scale Differences Between Vision and Haptics

The lecture deals with physical-digital 3D environments that are perceived, constructed, and navigated with the aid of mobile virtual reality technologies. Dense entanglements of virtual realities, interactive real-time effects, and physical settings, as currently being tested in the leisure industry, in architecture, and in forensics, promise to suspend the discrepancies between virtual and physical experience. In this regard, the built space in which the VR user is located is reproduced as a digital field of action, intensified by additional sensory information. Immersed in these surroundings, visitors to theme parks are meant to delve into fictional worlds of imagination and space with all their senses and to launch themselves fully and actively into the middle of the action. In architecture, the close entanglement of built and technological surroundings aims at acting directly in and on space and at building correlations between digital drafts and material-analog structures. In forensics, by contrast, the interplay of analog 1:1 model rooms and digital simulations is meant to reconstruct past crime scenes and sequences of events.
But what happens if sight and touch are separated and then connected again digitally? What if the physical space is extended by virtual environments while the user loses sight of the genuine tactile sense of space? Which classifications develop if the depicted space matches the physically real dimensions of the space but differs from it in its qualities? With regard to these questions, the lecture pursues the phenomenon of real-virtual 3D spaces in three areas: the problem of setting and dissolving limits between image, space, and body; the issue of how the user coordinates information between control and loss of control; and, resulting from this, the concept of real-time distributed control.

Social Network Analysis in Data-Driven Warfare

Social network analysis (SNA) – which builds, among other things, on sociometrics, statistics, and graph theory to study social structures – is increasingly used in the military discourses and practices of the 21st century. I will analyze specific enactments of SNA in the contemporary US military, where it is used as a remote-controlled and performative methodology of computational targeting, ‘counterterrorism’, and ‘counterinsurgency’. SNA, like any social technology, formats its object of research: social worlds rendered as nodes and ties. What forms of the social do these data-intensive methods and social technologies co-constitute? How are military tracking and targeting methods formatting both their objects of investigation and military logics in general?

Situational Awareness and Meaningful Human Control

This talk examines two tropes in contemporary discourses of war fighting – one longstanding, the other newly emerging – that are deeply implicated in configurations of remote control. ‘Situational awareness,’ established in US military doctrine as a prerequisite for lawful killing, requires an understanding of one’s circumstances that is adequate to the distinction between combatants and non-combatants, and to the identification of what constitutes an imminent threat. Citing the ‘fog of war’ articulated by Prussian military analyst Carl von Clausewitz in 1832, situational awareness at once calls for clarity in moments of combat and acknowledges its continued elusiveness. In the context of contemporary debates regarding the regulation of automated weapon systems, ‘meaningful human control’ has recently been introduced as a requirement by the NGO Article 36 and is now accepted by a wide range of actors internationally. Both of these tropes aspire to control over the actions of complex human-machine systems in the context of war fighting: in this talk I consider how they might be articulated and mobilised in the service of a less violent world.

A Ghost in the War Machine: Human Autonomy within Contemporary Military Architectures of Control

The dramatic advances in the technologies of remote control realised over the last few decades appear to have granted their human operators unprecedented powers to exercise their will over vast distances. In the military sphere, the emergence of “hunter-killer” drones in the context of the War on Terror has notably summoned visions of ubiquitous surveillance and global projections of precise lethality. Yet beyond any critical assessment of the fantasies of planetary control fuelled by the advent of such technologies, insufficient attention has been paid to the ways in which the human organism is being concurrently assimilated within sociotechnical architectures of control. Bound by ever tighter cybernetic feedback loops of command and control, the entanglement of man and machine is obliterating classical conceptions of human agency and responsibility in war. Through an examination of specific military assemblages of perception and remote targeting, this paper will underline the increasingly evanescent character of human autonomy in the operation of the contemporary war machine.

Death by PowerPoint

Recent studies tie drone technologies to the War on Terror, highlighting ethical, political, and legal concerns raised by remote warfare. Yet the figure of “the drone” may be misleading, obfuscating the bureaucratic practices that organize killings carried out by unmanned aircraft; these implicate numerous governmental agencies, military industry, and legal oversight. This analysis proposes that drone warfare is as much a bureaucratic structure as it is a technology. To explore what is at stake in this argument, I ask how remote warfare can also be interrogated through PowerPoint, a format used to analyze, evaluate, and brief officials on targeted killings. The account studies “The Drone Papers,” Pentagon documents leaked in 2015 by The Intercept, and closely reads three PowerPoint presentations that are among the source materials. I use Lisa Gitelman’s (2014) study of the potentially analogous Pentagon Papers to examine how PowerPoint frames accounts of “The Drone Papers” and the debates that ensue. Leaked electronic media differ from the paper copies that are the basis of the Pentagon Papers, and I note the failure of the PowerPoint slides as evidence. What is significant about “The Drone Papers,” I suggest, is how their form and content as PowerPoint slides are symptomatic of the bureaucracy of drone killing and the detached violence unmanned aircraft supposedly enact. The presentations organize and normalize what is put forth as a continuous cycle linking information and killing, “Find, Fix, Finish, Exploit, Analyze,” represented as the F3EA cycle by a familiar PowerPoint graphic. I consider what kinds of responses might be available to critics of remote warfare if one took seriously how these practices are implicated in banal, everyday media and the ways the remoteness of drones might be much more familiar than expected.

Co-operative Killing. Controlling humans and machines in remote warfare

Today’s remotely controlled military operations are defined as processes of “cooperation” between automated and partially autonomous technologies and the “human operator”. This puts the human capability for action and decision-making into a precarious relationship with the efficiency of non-human systems. In contrast to positions that argue for the recognition of non-human actors, agency in these contexts is always already understood as human-technological co-agency within a given system, revealing the epistemological roots of “man-machine coupling” in the historical paradigm of cybernetics: the human actor is regarded as an “element” or “component” of the operative system.
Against the backdrop of the military understanding of operation, the paper argues that the notions of “operative” and “operational” result in a problematic misconception of human-machine relations. Based on a case study of the so-called Ground Control Station for unmanned aerial vehicles, the paper investigates the practices of remote operation. The control interface of the GCS is thereby revealed to be a scene of constant re-definition of roles, of re-negotiation and contested responsibilities. The study is based on an exchange with drone operators that took place in February 2017 at Maxwell Air Force Base in Montgomery, Alabama.

Into Deep: The AI Explosion, Machine Learning and the Closeness of Remote Control

This paper will discuss the recent massive takeoff of AI and the uptake of deep learning techniques in several areas. AI no longer represents just a stronghold of academic activity: roughly 1650 companies worldwide raised $5B in startup funding in 2016, with Google leading all scientific publications in AI with 218 papers. The massive investment in AI has implications of singular proportions. I want to examine some key commercial developments and implementations of the new AI/Deep Learning techniques by companies such as Google, Amazon, and Microsoft as well as commercial/military developments of brain-machine interfaces, affective computing and neuromarketing making use of the new AI that point in the direction of unprecedented prospects of remote control.

Climate Mediation – Shifting Scales of Atmospheric Intervention

The escalating explicitness and inversion of the notion of environment is one of the most exciting cultural, technological, and scientific endeavors of the 20th and 21st centuries. This inversion is also in many ways a precondition for the large-scale development of the practices and technologies of remote control discussed during the workshop. The scientific exploration of the atmosphere made great contributions to this environmental inversion, at the latest with the discovery of planetary-scale anthropogenic climate change. However, attempts to investigate the history of this exploration from an interventionist perspective are still rare. I use the opportunity of the workshop to discuss both historical and contemporary targets of atmospheric intervention through the lens of remote control, because this allows us to embed some of the recent debates on climate change policies in a broader context of control thinking and to highlight some of the problems and asymmetries involved. I compare the more laboratory-inspired atmospheric experiments of the mid-20th century, which sought to control spatially distant atmospheric environments, with recent visions of climate engineering or geoengineering and the temporal scales and complexities they involve. This comparison will hopefully contribute to an analysis of the changing scope of atmospheric intervention and the shifting scales of remote control.

Neural networks as control paradigm: Frank Rosenblatt and the birth of learning machines

The first operative neural network, the Perceptron, conceived by Frank Rosenblatt in 1957 at the Cornell Aeronautical Laboratory, was designed for pattern recognition of simple shapes such as letters, yet with the ambition of automating the recognition of radar signals and voice messages. The paper illustrates the relation between neural networks and feedback loops in cybernetics, showing how neural networks essentially apply a control feedback loop to each node of decision and computation in their structure. Neural networks theoretically describe a matrix of control of infinitesimal resolution (that is, ‘intelligent’ control) compared to the cybernetic apparatuses of the same age. Nonetheless, due to the so-called ‘winter of Artificial Intelligence’ and skepticism about their performance, their paradigm would rise only in the late 1980s. Besides remote and automated control, neural network design aimed specifically at building machines capable of adaptive and self-learning control.

Artificial Proximity

Scopic instruments suggest a certain ‘closeness’, in terms of a physical proximity, to their objects of observation, due to the magnification and containment of the visual field. The paper aims (a) to give historical examples of technical devices and images utilizing this effect and (b) to discuss the implications of limited sight in view of recent developments in computer-aided sports simulcast.