ArtWare. Re-enacting Cybernetic Art
By Dr Stefan Höltgen
1. Methodology of the re-enactment
The second part of the seminar applied the re-implementation of historical computer graphics as a method of analytical media historiography, building on the previously acquired theories of information aesthetics and media archaeology. To legitimise the approach, the concept of re-enactment was derived from three sources: R. G. Collingwood’s (1947) theory of “history as re-enactment”, Ian Bogost’s (2012:85-112) “carpentry”, and Andreas Fickers’ (2015) “Hands-on! Plädoyer für eine experimentelle Medienarchäologie” [“Hands-on! A Plea for an Experimental Media Archaeology”]. In brief, the following considerations emerged from the discussion of these concepts:
- According to Collingwood, historical processes lack tangibility for historians, who must transfer them into the present. His “new thinking” of historical processes implies not only an evaluation but also an actualisation of the historeme. This a-historical moment can also be found in operative media, which are radically present in their media condition (Ernst 2012:113) – even when they are historical technologies or store and display bygone content.
- Through re-enactment, theory gains a non-discursive form alongside written records and thereby avoids the problematic moments of discourse (negotiability, subjectivity, …). The experiment and the demonstration represent distinct forms of non-discursive theory building; they perform technological temporality and structures and thus state something not only about the content of the experiment, but also about the experimental setup and the media employed.
- Following Fickers, experimental media archaeology (as re-enactment) permits no statements about the bygone use and social meanings of media technology and content, because it (see the first point) always takes place in the present of experimentation.
In the light of the above, the tool used for the re-enactment becomes an ‘epistemic thing’ (Rheinberger 2001:18-24): throughout the experiment it has to be kept in mind as a constitutive element, and it simultaneously has to operate both as a ‘Werkzeug’ (tool) and as a ‘Zeug’ (equipment) (Heidegger 1967:68-83) that evokes the user’s knowledge about it during use.
2. BBC BASIC
For the implementation of the graphics, a likewise ‘historical’ programming language is used: BBC BASIC (www.bbcbasic.co.uk), developed by the company Acorn for the British school computer BBC Micro in 1981. BBC BASIC is one of the many dialects of the programming language BASIC, which had been developed in 1964 at Dartmouth College for students of the arts and humanities and was largely abstracted from the concepts of computer science and mathematics. The imperative style of BASIC, which stays close to assembler, facilitates learning to program (for instance by trial and error), especially for autodidacts. BBC BASIC is still being developed for various platforms today and supports historical programming paradigms (imperative, unstructured) as well as modern concepts (structured, procedural, object-oriented).
The first programming block covered the BASIC instruction repertoire, the fundamental structure of BASIC programs, and those elements that are pivotal for programming the graphics algorithms: loops, conditional branching, and mathematical functions (particularly pseudo-random numbers, trigonometric functions, and iteration). The second programming block introduced the graphics functions of BBC BASIC: first pseudo-graphics programming (through character-set elements and their positioning on the display), then pixel graphics (points, lines, geometric objects, as well as absolute and relative positioning). The related mathematical epistemes (the display as a Cartesian coordinate system or as a Gaussian vector space, the im/possibility of generating random numbers in deterministic machines) were tested experimentally along the way.
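The two stages of graphics programming can be illustrated in a few lines of BBC BASIC (a minimal sketch; the coordinates, the plotted character, and the PLOT codes are chosen for illustration only):

```bbcbasic
rem pseudo graphics: position a character-set element on the text grid
print tab(10,5);"*"
rem pixel graphics: a single point (plot code 69 = absolute point)
plot 69,640,512
rem an absolute line, then a relative line (plot code 1)
move 100,100 : draw 500,300
plot 1,100,0
```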
3. Heuristic re-enactment
But how can cybernetic artworks be reprogrammed in BBC BASIC if their algorithmic foundations are unknown? Even basic knowledge of a programming language permits heuristic approaches to solutions. Through mere observation of the graphics – measuring, counting, and tracing back the portrayed objects (the “simple signs” [cf. Nake 1974:59], as information aesthetics calls them) – algorithms can be developed that enable a re-enactment.
3.1 A first step
Fig. 1: A. Michael Noll “Vertical-Horizontal Number Three” (1965)
For the first experiment, an artwork was chosen that had been implemented by several different artists and that combines randomly positioned horizontal and vertical lines. A. Michael Noll’s “Vertical-Horizontal Number Three” served as the starting object:
- First, the students were asked to estimate the number of lines the graphic comprises.
- The question of whether the lines possess a beginning and an end was answered.
- The connection between the lines was considered.
- The number of drawing passes was estimated.
- The aspect ratio of the picture was measured.
Once the construction principle (the third point) was discovered (a line drawn from coordinate [x0/y0] to [x1/y0], then to [x1/y1], then to [x2/y1], and so forth), a first program was written. Its output revealed that too many iterations had taken place and that the picture was drawn too close to the edge of the screen. The next step therefore reduced the number of iterations and repositioned the output by means of the ORIGIN command. Once the result was satisfactory, the program was modified once again:
- A key-controlled infinite loop was implemented to draw an arbitrary number of variants of the picture.
- The drawing of the individual lines was decelerated with the WAIT command to facilitate the evaluation of the automatic design process (and thereby of the random numbers, if applicable).
As a result, the following code emerged:
for i=1 to 50
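Only the loop header of the course’s listing is reproduced here. A minimal sketch of such a program – with the iteration count, coordinate ranges, and ORIGIN values as assumptions – might read:

```bbcbasic
origin 200,200 : rem reposition the output away from the screen edge
x=rnd(600) : y=rnd(600)
move x,y
for i=1 to 50
  nx=rnd(600) : ny=rnd(600)
  draw nx,y  : rem horizontal segment: new x, old y
  draw nx,ny : rem vertical segment: new x, new y
  x=nx : y=ny
  wait 10    : rem decelerate the drawing (cf. the WAIT command above)
next i
```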
This code produced the graphic shown in fig. 2 in one of its iterations.
Fig. 2: An output of the re-enactment of “Vertical-Horizontal Number Three”
3.2 The boundaries of the cybernetic re-enactment
The second re-enactment took Georg Nees’ picture “Locken” [“Curls”] as its object. Once more, the algorithm was derived with the aid of heuristic methods: the question of which ‘simple signs’ constitute the foundation of the graphic (circles) was answered, the number and sizes of the circles were determined, and a simple program was written using the CIRCLE command.
Fig. 3: Georg Nees: “Locken” (1971)
In doing so, the first hurdle of a purely digital re-enactment arose. While Noll’s graphic could easily be reproduced on the display, it became apparent that reproducing Nees’ picture required a special effect that could only be achieved in the ‘hard copy’ of a plotter: the dark borders of the picture apparently result from the collision of the drawing pen with the border of the drawing surface. This effect could at best, and only with considerable effort, be imitated. A participant of the course suggested reducing the size of the output window at the end of the program run to such an extent that the circles at the top and right borders would be cut off.
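One way to approximate this suggestion in BBC BASIC is a graphics viewport (VDU 24) – though a viewport only clips drawing done after it is set; it cannot retroactively cut off circles already on screen, and it does not reproduce the dark condensation of the plotted original. The viewport bounds and counts here are assumptions:

```bbcbasic
rem define a graphics viewport: left;bottom;right;top
vdu 24,100;100;1000;800;
rem circles crossing the viewport edge are now cut off at its border
for i=1 to 50
  circle rnd(1100),rnd(900),60
next i
```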
During the inspection of “Locken”, the question emerged whether the picture might have been generated in multiple passes. Based on their programming experience, the course soon arrived at the conjecture that Nees had probably drawn the circles of different sizes onto the paper one after the other, in four iterations. Consequently, the BBC BASIC program was drafted in such a way that the four iterations (one per circle size) were carried out successively:
for i=1 to 100
for i=1 to 100
for i=1 to 100
for i=1 to 100
The first implementation showed that the number of circles had been underestimated. Furthermore, the repetitive programming style, which ran four very similar algorithms in succession, proved immensely time-consuming. Elements that were used multiple times were therefore moved into subroutines (e.g. the random-number generator for the centre of the circle). As a result, the program could be ‘slimmed down’ and came to resemble the procedural programming (in ALGOL) preferred by Nees.
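The ‘slimmed down’ structure can be sketched as follows; the number of circles per pass, the radii, and the procedure name are assumptions:

```bbcbasic
rem four passes, one circle size per pass
for r=20 to 80 step 20
  for i=1 to 200
    proccentre : rem subroutine: random centre coordinates
    circle x%,y%,r
  next i
next r
end
def proccentre
  x%=rnd(1280) : y%=rnd(1024)
endproc
```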
Fig. 4: An output of the re-enactment of “Locken” and panel design
3.3 Digital Tools
For the third re-enactment, black and white gave way to coloured graphics. For this purpose, Herbert W. Franke’s picture “Quadrate” [“Squares”] was selected. On the basis of the insights gained while re-enacting “Locken”, a three-stage formation process was assumed from the outset during the heuristic approach to recreating the original: the squares, varying in size and colour, were probably drawn in three consecutive program parts.
Fig. 5: Herbert W. Franke “Quadrate” (1970)
To determine the colours, a digital colour sensor was applied to the display output of “Quadrate”. The RGB colours thus ascertained were assigned to different colour pens. In addition, the sizes of the squares were measured in proportion to one another (1:2:4). Finally, the course attempted to estimate the number of squares on the screen area as well as its measurements.
Here, as in the first elaboration, the first iteration showed that the number of squares had been underestimated. In addition, the four big (black) squares usually came out in such a way that they exceeded the borders of the assumed image size. Rather than correcting this algorithmically, the program was run several times until all four big squares happened to lie inside the frame. (Cf. fig. 6; this procedure undoubtedly deviates from Franke’s, since his drawing was created on paper – which made such experiments too material- and time-consuming – and, because of the paper size, could not have shown a comparable overshoot.)
Fig. 6: An output of the re-enactment of “Quadrate”
While editing the program, the colour definitions were placed at the beginning, imitating the assignment of plotter colour pens. Apart from that, as with the re-enactment of Noll’s artwork, a key-controlled restart of the program was implemented so that new variants of the picture could be generated without great effort:
colour 1,247,114,204: rem magenta
colour 2,252,188,117: rem orange
colour 3,84,72,74: rem grey
for i=1 to 4
for i=1 to 100
for i=1 to 100
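Taken together, the fragments suggest a program along these lines; the side lengths, the screen bounds, and the use of GCOL with the redefined palette entries are assumptions:

```bbcbasic
colour 1,247,114,204 : rem magenta
colour 2,252,188,117 : rem orange
colour 3,84,72,74    : rem grey
s=60 : rem smallest square side; the three sizes relate as 1:2:4
gcol 0 : rem four big squares in black (rerun until all stay in frame)
for i=1 to 4 : rectangle fill rnd(1280),rnd(1024),4*s,4*s : next i
for i=1 to 100 : gcol rnd(3) : rectangle fill rnd(1280),rnd(1024),2*s,2*s : next i
for i=1 to 100 : gcol rnd(3) : rectangle fill rnd(1280),rnd(1024),s,s : next i
```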
4. Algorithmic re-enactments
One agenda of cybernetic art in the late 1960s was to help the computer to a more positive reputation. Artists of the first generation tried to shift society’s perception of computer technology away from its predominant presence in fields like the military, science, and the economy – and the accompanying impression of cold rationality – towards the exact opposite: the precision, speed, and endurance that computers evince when executing algorithms were to be transferred, as tools, to a new art form. The rationalistic claim associated with computers was integrated into the theory of this art: information aesthetics (see the first part of this article).
It was in this period that the term algorithm was first widely discussed. According to Jasia Reichardt (2008:72) and others, it is not the pictures but the algorithms behind them that constitute the artworks. The claim of information aesthetics and cybernetics that a formal linguistic expression may contain a concealed virtuosity was perhaps the most provocative thought they imposed on an art world defined by scepticism towards technology. (Nees reported a dispute with the artist Heinz Trökes in 1965 at the TH Stuttgart, in which the latter criticised computer art for lacking “Duktus”, to which Nees responded that this could be programmed as soon as one knew what exactly the “Duktus” is [Nees 2006:XIII].)
The didactic impetus inherent in cybernetic art also becomes apparent in the way its authors present their works – particularly when it comes to explaining how they function.
Algorithms in diverse manifestations are used to elucidate the composition and genesis of the computer graphics. Occasionally (for instance in Mohr’s case), they serve as veritable captions. The second re-enactment part of the course was dedicated to such captions, with the aim of reconstructing the artworks with their aid. In doing so, the adequacy of the algorithmic description was to be evaluated en passant – not least by the similarity between original and re-enactment.
4.1 “unambiguously in English”
Alan M. Turing stated in 1953 that a computable problem (that is, a problem which can be solved by a computer) does not require any specific language in order to be formulated unambiguously:
“If one can explain quite unambiguously in English, with the aid of mathematical symbols if required, how a calculation is to be done, then it is always possible to programme any digital computer to do that calculation, provided the storage capacity is adequate.” […] “problem is reduced to explaining ‘unambiguously in English’” (Turing 1953:289)
Algorithms can therefore also be phrased in natural languages. Only when they are to be processed by a computer do they require a technical-mathematical description in a programming language. Manfred Mohr appears to pursue exactly this thought: on his homepage and in his publications he discloses the development process of the programs behind his pictures in the form of short verbal descriptions. About his “Computer Generated Random Number Collages”, created in 1969, he states:
“About the algorithm: Around a central line, random numbers determine the position, height, width, and existence of the rectangular white lines. This is a visual music collage, bringing to mind rhythm and frequencies.” (https://www.emohr.com/sc69-73/vfile_random69.html)
This text was to serve as the basis of the first “algorithm-oriented” re-enactment. The course participants were asked to translate the text into a hand-drawn graphic. The resulting drawings resembled the pictures (cf. fig. 7) but differed in central details. Without yet having compared them to the original, the drawings were used as the foundation for drafting a computer program. It soon became apparent that important information (e.g. the number, positioning, and colour of the “rectangles”) was missing:
plot 0,0:draw 0,1024
for i=1 to 5
if d=1 then x=-x
if d=2 then x=x
rectangle fill x,y,w,h
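Read together, the fragments suggest a structure like the following; the ORIGIN, the ranges of the random numbers, and the handling of the left/right decision are assumptions:

```bbcbasic
origin 640,0
move 0,0 : draw 0,1024 : rem the central vertical line
for i=1 to 5
  rem random numbers determine position, height and width of each rectangle
  y=rnd(1024) : w=rnd(200) : h=rnd(40) : x=rnd(300)
  d=rnd(2)
  if d=1 then x=-x-w : rem d decides the side of the central line
  rectangle fill x,y,w,h
next i
```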
Fig. 7: Manfred Mohr: Computer Generated Random Number Collage Number 1 (1969)
With this information filled in by guesswork, computer graphics arose that only vaguely resembled the original (fig. 8). The re-enactment thus partially failed – and the question of whether Mohr’s descriptions actually contain all the information necessary to generate the corresponding pictures had to be answered in the negative. They are “algorithms” in name only.
Fig. 8: Output of an attempted re-enactment
4.2 Flowcharts
The “translation gap” between the natural-language description, the formal-language encoding, and the iconic drawing of an artwork may have contributed to the failure of this first attempt. Perhaps an algorithm that uses the same sign system (in Peirce’s sense) as the picture would be more adequate? This approach was pursued by transferring a flowchart by Frieder Nake into BBC BASIC. A process of interpretation could thereby be omitted, because the flowchart is characterized by an unambiguity achieved through its precisely defined graphical elements.
Fig. 9: Flowchart of the program poly1 (polygonal chain)
This example originates from Frieder Nake’s book “Ästhetik als Informationsverarbeitung” [Nake 1974:198]. After a short introduction to the representational conventions of flowcharts, the course participants were asked to gloss the diagram in the margin with BASIC-like pseudo code. This allowed them to transfer the temporal structure of the flowchart into the symbolic structure of the code. In doing so it became clear that misunderstandings, let alone missing information, no longer constituted a problem. The resulting BBC BASIC program drew graphics that closely resembled the example image in Nake’s book (1974:199):
move x(i-1),y(i-1):draw x(i),y(i)
if i<n then goto loop
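Completed along the lines of the flowchart, the program might look as follows (here with a FOR loop in place of the printed GOTO; n and the coordinate range are assumptions):

```bbcbasic
n=10
dim x(n),y(n)
x(0)=rnd(1000) : y(0)=rnd(1000)
for i=1 to n
  x(i)=rnd(1000) : y(i)=rnd(1000)
  move x(i-1),y(i-1) : draw x(i),y(i)
next i
```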
Fig. 10: “Zufälliger Polygonzug, 1963. 10 x 10 cm” [“Random polygonal chain”] (above), output of the re-enactment (below)
4.3 Code Translation
In the last experiment, the course participants were confronted with a problem that has long been known in informatics: some computer programs have to be adapted to current systems, either because the original hardware on which the program ran can no longer be used, or because the age of the code (or of the programming language) poses a security risk or can no longer be properly maintained. If the source code is available, a manual translation into the target language is comparatively easy to carry out; otherwise, the object code has to be decompiled first.
As an example of this process, a source code excerpt from Georg Nees’ dissertation “Generative Computergrafik” (1969) was chosen. The program codes printed in the book are written in a variant of the programming language ALGOL. ALGOL was one of the main influences that led to the development of BASIC (Thomas Kurtz in: Biancuzzi/Warden 2009:80), so the program structures already look quite familiar (the line numbers are those of the printed listing):
1 'BEGIN' 'COMMENT' SCHACHTELUNG.,
2 'REAL' LI, RE, UN, OB, H.,
3 H.=.5., OPEN(0,0).,
5 UN.= -90., OB.=90.,
7 LEER(0,OB)., LINE(LI,OB).,
8 LINE(LI,UN)., LINE(RE,UN).,
9 LINE(RE,OB)., LINE(0,OB).,
10 LINE(LI,0)., LINE(0,UN).,
11 LINE(RE,0)., LINE(0,OB).,
14 'IF' RE 'GREATER' 1.0 'THEN' 'GOTO' ANF.,
15 CLOSE
16 'END' SCHACHTELUNG.,
Once again, the translation into BBC BASIC was achieved by glossing the original code. Given the evident kinship of the two programming languages, some transpositions already suggested themselves during recoding – for example, dispensing with distinct data types considerably simplifies the programming. The resulting BBC BASIC program therefore also represents a kind of diachronic (programming) linguistics:
move 0,OB:draw LI,OB
draw LI,UN:draw RE,UN
draw RE,OB:draw 0,OB
draw LI,0:draw 0,UN
draw RE,0:draw 0,OB
if RE>1 then goto ANF
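A complete version of the translation might look as follows; since the printed ALGOL listing omits lines 4, 6, 12, and 13, the symmetric start values for LI and RE, the scaling step, and the ORIGIN are assumptions:

```bbcbasic
H=0.5
LI=-90 : RE=90 : UN=-90 : OB=90
origin 640,512 : rem centre the figure on the screen
repeat
  move 0,OB : draw LI,OB
  draw LI,UN : draw RE,UN
  draw RE,OB : draw 0,OB
  draw LI,0 : draw 0,UN
  draw RE,0 : draw 0,OB
  rem assumed scaling step: shrink the nested figure by the factor H
  LI=LI*H : RE=RE*H : UN=UN*H : OB=OB*H
until RE<=1
```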
The validity could easily be verified by comparing the original graphic from Nees’ book with the output of the re-enactment:
Fig. 11: G. Nees’ “Schachtelung (Bild 3)” + re-enactment
This concluded the practical part of the course. Further re-enactments would surely yield new and deeper insights into cybernetic art, but they would also require advanced knowledge of programming in BBC BASIC: the works of Frieder Nake, Manfred Mohr, or Vera Molnar, for instance, suggest complex algorithmic structures.
Likewise, it would be reasonable to re-enact works of fine art (for instance objects by Mohr) using 3D printing, or graphics from analog computers (like the early works of Herbert W. Franke), though this would require the preparation of further techno-mathematical knowledge. Ultimately, it would be worthwhile to follow the further development of cybernetic art in the direction of computer graphics and animation. The course discussed these points by means of presentations.
5. Conclusion: From cybernetic to artificial art
Fig. 12: image from the “Arnolfini” collection, drawn by AARON (1983)
AARON, introduced in one of the presentations, established a point of culmination for the seminar. AARON is an artificial intelligence developed by Harold Cohen; in the early 1980s it started to generate computer art. AARON escalates the question of the algorithm because its artworks can no longer be read off by mere inspection of the program code that constitutes the basis of the AI. The code rather establishes a structural condition of the possibility of data-driven graphics. Here, cybernetic art suspends the subjective momentum on the producer’s side (the human artist as constructor of the algorithms). From here, according to the discussion, one direct line leads to computer-generated imagery (CGI), in which computer graphics become part of applied art, and another leads to newer discourses such as trainable neural networks, which have recently also been concerned with the production of art.
Fig. 13: Portrait of Edmond Belamy, 2018, created by GAN (Generative Adversarial Network)
As the picture shows, it is no longer the precision of the computer tools that makes the artwork stand out, but rather the exact opposite: a kind of artistic “imprecision” which comes very close to what was described as “Duktus” in the Nees debate. The alleged last refuges of cybernetic art become apparent when neural networks strive not for “art” but for photo-realistic synthesis: fig. 14 shows the portrait of a woman who has never existed.
While this appears quite striking, it is offset by the disturbing artefact on the right margin of the picture. The algorithm had started to generate an additional face; the “marginal conditions” of the format, however, seem to have caused a faulty rendering – just as Nees’ circles in “Locken” condensed into a dark edge where they collided with the material picture margin, the data of the AI algorithm caused the face to be compressed, as it were. This does not necessarily pose a problem, since neural networks do not need human subjects to evaluate their output: human critics would be far too slow to regulate the computerised learning process. Instead, two computer processes compete against one another, one as artist and the other as critic – in a learning feedback process controlled solely by an algorithm.
Fig. 14: “These Persons do not exist” (28.2.2019, 12:57 Uhr)
“The creation or coding of an automatic artist or critic is not at the top of the list of tasks, but will sooner or later be attempted nevertheless.” (Nake 1974:5 – own transl.)
translated by: Chiara Rochlitz
Fig. 1: “Vertical-Horizontal Number Three” (1965, A. Michael Noll), Source: https://protect-eu.mimecast.com/s/yiZGCnxp2h75mpPImEOKe?domain=collections.vam.ac.uk (16.03.2019)
Fig. 2: Source: Stefan Höltgen
Fig. 3: “Locken” (1971, Georg Nees), Source: https://protect-eu.mimecast.com/s/03elCoVq3sr4vpghoqn1p?domain=dada.compart-bremen.de (16.03.2019)
Fig. 4: Source: Stefan Höltgen
Fig. 5: “Quadrate” (1970, Herbert W. Franke), Source: https://protect-eu.mimecast.com/s/5RemCp804FnDAEVc7Y8Tk?domain=collections.vam.ac.uk (16.03.2019)
Fig. 6: Source: Stefan Höltgen
Fig. 7: “Computer Generated Random Number Collage Number 1” (1969, Manfred Mohr), Source: https://protect-eu.mimecast.com/s/2uDYCq7vgC8WX6Esvtop1?domain=emohr.com (15.03.2019)
Fig. 8: Source: Stefan Höltgen
Fig. 9: “Flussdiagramm des Programms poly1 (Polygonzug)” (Nake 1974:198)
Fig. 10: Nake 1974:199
Fig. 11: Nees 1969:99
Fig. 12: Harold Cohen/AARON, “Arnolfini” (1983), Source: https://protect-eu.mimecast.com/s/UMxfCr8wjF8923ksLk94f?domain=dam-gallery.de
Fig. 13: Portrait of Edmond Belamy, 2018, created by GAN (Generative Adversarial Network), Source: https://protect-eu.mimecast.com/s/d8AiCvlAnT7XA0YIE5Q92?domain=christies.com (15.03.2019)
Fig. 14: Source: https://protect-eu.mimecast.com/s/_GgLCwVBosGpyZquXt8hi?domain=thispersondoesnotexist.com (28.02.2019)
Biancuzzi, F./Warden, S. (2009): Masterminds of Programming. Beijing et al.: O’Reilly.
Bogost, Ian (2012): Alien Phenomenology, or What It’s Like to Be a Thing. Minneapolis/London: Univ. of Minnesota Press.
Collingwood, Robin G. (1947): “History as Re-Enactment”. In: id.: The Idea of History. Oxford: Oxford University Press.
Ernst, Wolfgang (2012): Chronopoetik. Zeitweisen und Zeitgaben technischer Medien. Berlin: Kadmos.
Fickers, Andreas (2015): “Hands-on! Plädoyer für eine experimentelle Medienarchäologie”.
Heidegger, Martin (1967): Sein und Zeit. Tübingen: Max Niemeyer Verlag.
Mohr, Manfred (2014): Der Algorithmus des Manfred Mohr. Texte 1963-1979. Berlin: Spector.
Nake, Frieder (1974): Ästhetik als Informationsverarbeitung. Grundlagen und Anwendungen der Informatik im Bereich ästhetischer Produktion und Kritik. Wien/New York: Springer.
Nees, Georg (2006/1969): “Visuelle Performanz. Einführung in den Neudruck des Buches Generative Computergraphik”. In: id.: Generative Computergrafik. Ed. by Hans-Christian von Herrmann and Christoph Hoffmann. Kaleidoskopien, vol. 6, S. IX-XXI.
Reichardt, Jasia (2008): “In the Beginning …”. In: Brown, Paul (Ed.): White Heat Cold Logic. British Computer Art 1960-1980. Cambridge/London: MIT Press, S. 71-82.
Rheinberger, Hans-Jörg (2001): Experimentalsysteme und epistemische Dinge. Eine Geschichte der Proteinsynthese im Reagenzglas. Göttingen: Wallstein.
Turing, Alan M. (1953): “Digital Computers Applied to Games”. In: Bowden, B. V. (Ed.): Faster Than Thought. London: Pitman.