Category Archives: Site News

Media at Madeira

The Sea, Pastel de Natas, and one too many Ponchas

By Roxana Pomplun (BA Media and Communications, Year 3)

The view when we stepped out of the airport was already stunning. Madeira: wide blue sea, sunshine, and the Desertas Islands in the distance. We couldn’t believe how beautiful it was, and the excruciatingly early start that morning, plus a four-hour flight, were forgotten at once. For now, we couldn’t wait to see Funchal.

Beautiful Funchal

During the cab ride from the airport, we had more overwhelming views. Driving high above sea level in the hills, we could see the gorgeous town of Funchal with all its orange rooftops and palm trees, built in ascending order from the sea up into the hills. Once in Funchal, driving up and down the steep streets had us squeaking a little, as it was a tiny bit scary, but it was so much fun too.

“We” were six girls from the BA Media and Communications, our programme leader Maria, and Chris, a lecturer on the film and TV degrees. Though these trips are open to all students from the department, only our programme and year took part. The group was a very good size, however, and we all got to spend a lot of time together. We stayed for almost four days but quite frankly could have stayed longer, as Madeira has so much to offer and we weren’t even close to seeing all of it. For Funchal, a relatively small city, it was enough time, so we got to explore a lot there.

Part of the trip was to meet people from the University of Madeira, and on the first day we took part in a PhD researcher’s project, Fragments of Laura. It is a transmedia project that provides a storytelling experience across multiple platforms and formats and is in development for travellers visiting Funchal. Apart from the project itself, I found it particularly interesting to observe how PhD students and researchers work on their projects, how they run a case study and collect the data they will use in their theses. The researchers were (like basically all the Portuguese people we met) very nice and relaxed.

After finishing the tour we had a chat with the local researchers and they gave us advice on what to do and try in Funchal. They highly recommended Poncha, a traditional drink native to Madeira, made with aguardente de cana, honey, sugar, and orange or lemon juice, with different fruit juices depending on the version. Traditionally, though, lemon juice is used (the Fisherman’s Poncha). Clearly, we had to try it later, and we did so while watching the sunset by the sea in a bar that seemed to be known mostly to locals, which gave the whole evening a very nice, hassle-free flair.

The next morning we met researchers of the Madeira Interactive Technologies Institute (MITI) for a tour of their research projects. After struggling a bit to find the right building, as the whole university and research complex is massive, we finally met our guides at the correct entrance. First, they introduced us to their new International Master of Interactive Media Design, which is also interesting for students of our School of Design. It combines technological aspects with design and engages with current debates in digital media.

They presented the interfaculty projects of their master’s and PhD students, all of which were interactive and designed for cognitive research purposes, particularly within psychology. The students responsible for the projects enthusiastically presented their research and results and gave us the chance to participate and interact with their work.

After our tour of the MITI we walked back to the centre of Funchal, strolling around until we found a nice restaurant. There we tried local dishes, like a traditional soup, traditional bread, and loads of smaller Portuguese dishes that we shared, especially seafood. After this we were so full that we could only sit by the sea and enjoy the view (and yet managed to eat another Pastel de Nata, the Portuguese custard tart – there’s always room for dessert).

After sunbathing and eating more food the next day, we decided to take the cable car up the hill to visit the botanical gardens. The view from the cable car was spectacular, and it was interesting to watch ourselves go up from sunshine by the sea into the clouds on the hills. At the top we realised there were two things to see: the botanical gardens and the Jardim Tropical Monte Palace. We didn’t get to see the palace, but we still had a superb time.

It was beautiful on the hills, but our favourite place was the old town of Funchal: loads of unique little restaurants and bars, pretty buildings, and narrow streets. The Rua de Santa Maria was the most gorgeous street, with lots of street art, like beautifully painted doors (so very Instagrammable).

After having an excellent dinner, we went on a little bar crawl (alongside one or two Ponchas) and experienced Funchal’s nightlife, with many people being out on the streets, enjoying their drinks outside, and simply having a good time. Everyone was extremely friendly and chatty and altogether Funchal had an amazing atmosphere, even at night.

On our final day we appreciated some more good food and our last bits of the warming Madeira sun, before leaving for the airport. Later in the evening we arrived in London – tired but happy, for we had such a wonderful time on our brief vacation.

ArtWare II

ArtWare. Re-enacting Cybernetic Art

(Part 2)

By Dr Stefan Höltgen

1. Methodology of the re-enactment

The second part of the seminar was aimed at applying the re-implementation of historical computer graphics as a method of analytical media historiography, based on the previously acquired theories of information aesthetics and media archaeology. To legitimise the approach, the concept of re-enactment was derived from three different sources: R. G. Collingwood’s (1947) theory “History as Re-Enactment”, Ian Bogost’s (2012:85-112) “Carpentry”, and Andreas Fickers’ (2015) “Hands-on! Plädoyer für eine experimentelle Medienarchäologie”. Briefly summarised, the following considerations emerged from the discussion of these concepts:

  • According to Collingwood, historical processes lack tangibility for historians, who therefore tend to transfer them into the present. His “new thinking” of historical processes involves not only an evaluation but also an actualisation of the historem. This a-historical moment can also be found in operative media, which are radically present in their media condition (Ernst 2012:113) – even if they are historical technologies or store and display bygone content.
  • Theory gains a new, non-discursive system alongside written records and thereby avoids the problematic moments of discourse (negotiability, subjectivity, …). The experiment and the demonstration represent distinct forms of non-discursive theory building and perform technological temporality and structures that say something not only about the content of the experiment, but also about the experimental setup and the media used.
  • In contrast to Fickers’ position, experimental media archaeology (as re-enactment) permits no statements at all about the bygone uses and social meanings of media technology and content, because it (see point 1) always takes place in the present of the experiment.

In the light of the above, the tool that is to be used for the re-enactment becomes an ‘epistemic thing’ (Rheinberger 2001:18-24): in the process of the experiment it continuously has to be kept in mind as a constitutive element, and it simultaneously has to operate as a ‘Werkzeug’ (tool) as well as a ‘Zeug’ (equipment) (Heidegger 1967:68-83) that evokes the user’s knowledge about it during its use.

2. BBC BASIC

For the implementation of the graphics, a likewise ‘historical’ programming language is used: BBC BASIC (www.bbcbasic.co.uk), developed by the company Acorn for the British school computer BBC Micro in 1981. BBC BASIC is one of the many dialects of the programming language BASIC, which had already been developed in 1964 at Dartmouth College for students of the arts and humanities and was, to a large extent, abstracted away from concepts of informatics and mathematics. The imperative style of BASIC, which is closely related to assembler, makes learning to program easier (for instance through trial and error), especially for autodidacts. BBC BASIC is still being developed for various platforms to this day and allows programming in historical paradigms (imperative, unstructured) as well as the use of modern concepts (structured, procedural, object-oriented).
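To illustrate the two poles mentioned here, consider a minimal sketch (my own illustration, not one of the course listings) of the same counting task written twice, once in the unstructured, line-numbered style of early BASIC dialects:

10 i=1
20 print i
30 i=i+1
40 if i<6 then goto 20
50 end

and once with the structured for loop that BBC BASIC also provides:

for i=1 to 5
print i
next i
end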

The first programming block covered the BASIC instruction repertoire, the fundamental structure of BASIC programs, and those elements that are pivotal for programming the graphics algorithms: loops, conditional branching, and mathematical functions (particularly pseudo-random numbers, trigonometric functions, and iteration). The second programming block introduced the graphics functions of BBC BASIC: first pseudo-graphics programming (through character-set elements and their positioning on the display), then pixel graphics (dots, lines, geometric objects, as well as absolute and relative positioning). The related mathematical epistemes (the display as a Cartesian coordinate system or as a Gaussian vector space, the im/possibility of random number generation in deterministic machines) were thereby tested experimentally.
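As a rough illustration of these building blocks, the following lines are a sketch of my own in the spirit of the exercises (not one of the course listings): a loop combines pseudo-random numbers with the pixel-graphics commands move and draw to scatter short random line segments across the coordinate system of the display.

rem scatter 20 short random line segments across the display
clg
origin 100,100
for i=1 to 20
x=rnd(600):y=rnd(600)
move x,y
draw x+rnd(100),y+rnd(100)
rem slow the drawing down so the process can be observed
wait 10
next i
end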

3. Heuristic re-enactment

But how can cybernetic artworks be reprogrammed in BBC BASIC if their algorithmic foundations are unknown? Even basic knowledge of a programming language permits heuristic approaches to a solution. Through mere observation of the graphics, through measuring, counting, and tracing back the portrayed objects (the “simple signs” [cf. Nake 1974:59], as they are called in information aesthetics), algorithms can be developed that enable a re-enactment.

3.1 A first step

Fig. 1: A. Michael Noll “Vertical-Horizontal Number Three” (1965)

For the first experiment an artwork was used that has been implemented by several different artists and that combines randomly positioned horizontal and vertical lines. A. Michael Noll’s “Vertical-Horizontal Number Three” served as the starting object:

  1. Firstly, the students were encouraged to guess the number of lines that the graphic comprises.
  2. The question of whether the lines possess a beginning and an end was answered.
  3. The connection between the lines was deliberated.
  4. The number of drawing passes was estimated.
  5. The aspect ratio of the picture was measured.

Once the construction principle (point 3) was discovered (draw from coordinate [x0/y0] to [x1/y0] to [x1/y1] to [x2/y1], and so forth), a first program was written. The output revealed that too many iterations had taken place and that the picture was drawn too close to the edge of the screen. The following step therefore consisted of reducing the number of iterations, and the output was repositioned by means of the origin command. Once the result was satisfying, the program was modified once again:

  • A key-controlled infinite loop was implemented to draw an arbitrary number of variants of the picture.
  • The drawing of the individual lines was slowed down using the WAIT command, to make it easier to follow the automatic design process (and thereby the random numbers, if applicable).

As a result, the following code emerged:

rem re-enactment of "Vertical-Horizontal Number Three"
origin 100,100
rem random starting point
x1=rnd(300)
y1=rnd(800)
move x1,y1
for i=1 to 50
x2=rnd(300)
y2=rnd(800)
rem vertical segment, then horizontal segment, slowed down by wait
draw x1,y2
wait 1
draw x2,y2
wait 1
x1=x2:y1=y2
next i
rem the key-controlled restart described above (a$=get$ : run, as in the later Quadrate listing) would follow here

This code produced the graphic shown in fig. 2 in one of its iterations.

Fig. 2: An output of the re-enactment of “Vertical-Horizontal Number Three”

3.2 The boundaries of the cybernetic re-enactment

The second re-enactment was carried out on Georg Nees’ picture “Locken”. Once more, the algorithm was reconstructed with the aid of heuristic methods: the question of which ‘simple signs’ constitute the foundation of the graphic (circles) was answered, the quantity and size of the circles were determined, and, with the help of the circle command, a simple program was written.

Fig. 3: Georg Nees: “Locken” (1971)

In doing so, the first hurdle of a purely digital re-enactment arose. While Noll’s graphic could easily be reproduced on the display, it became apparent that a special effect in Nees’ picture could only be achieved by the ‘hard copy’ of a plotter: apparently, the dark borders of the picture were the result of the drawing pen colliding with the border of the drawing surface. This effect could be imitated at best with considerable effort. A participant of the course suggested reducing the size of the output window at the end of the program run to such an extent that the circles at the top and right borders would be cut off.

During the inspection of “Locken”, the question emerged whether the picture might have been generated in multiple iterations. Drawing on their programming experience, the course soon arrived at the conjecture that Nees probably had the differently sized circles drawn onto the paper one after the other in four iterations. Consequently, the BBC BASIC program was drafted in such a way that the four iterations (one per circle size) were carried out successively.

for i=1 to 100
gosub xy
circle x,y,50
next i
for i=1 to 100
gosub xy
circle x,y,75
next i
for i=1 to 100
gosub xy
circle x,y,100
next i
for i=1 to 100
gosub xy
circle x,y,150
next i
end
(xy)
x=rnd(350)
y=rnd(850)
return

The first implementation revealed that the number of circles had been underestimated. Furthermore, the repetitive programming style, which ran four very similar algorithms one after the other, proved to be immensely time-consuming. Elements that were used multiple times were therefore moved into subroutines (e.g. the random number generator for the centre of the circle). As a result, the program could be ‘slimmed down’ and came to resemble the procedural programming (in ALGOL) preferred by Nees.
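The ‘slimmed-down’ version is not reproduced here; a plausible reconstruction (an assumption on my part, following the conventions of the listings above) would replace the four near-identical blocks with a single outer loop over the four radii while keeping the (xy) subroutine for the random circle centres:

rem hypothetical slimmed-down variant: one outer loop over the four radii
dim r(4)
r(1)=50:r(2)=75:r(3)=100:r(4)=150
for j=1 to 4
for i=1 to 100
gosub xy
circle x,y,r(j)
next i
next j
end
(xy)
x=rnd(350)
y=rnd(850)
return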

Fig. 4: An output of the re-enactment of “Locken” and panel design

3.3 Digital Tools

For the third re-enactment, black and white were exchanged for coloured graphics. For this purpose Herbert W. Franke’s picture “Quadrate” was selected. Based on the insights acquired while re-enacting “Locken”, a three-stage formation process was assumed from the outset during the heuristic approach to recreating the original: the squares, varying in size and colour, were probably drawn in three consecutive program parts.

Fig. 5: Herbert W. Franke “Quadrate” (1970)

To determine the colours, a digital colour picker was applied to the display output of “Quadrate”. The RGB colours ascertained in this way were assigned to different colour pens. In addition, the sizes of the squares were measured in proportion to one another (1:2:4). Finally, the course attempted to estimate the number of squares on the screen area as well as the dimensions of the picture.

Here, as in the first exercise, the first iteration showed that the quantity had been underestimated. In addition, the four big (black) squares were frequently drawn in such a way that they exceeded the borders of the assumed image area. To avoid correcting this algorithmically, the program was simply run several times until all four big squares happened to be positioned inside the frame. (Cf. fig. 6; this procedure undoubtedly deviated from Franke’s, since his drawing was created on paper and was therefore too material and time-consuming for such experiments, and, owing to the paper size, could not have featured a similar overshoot.)

Fig. 6: An output of the re-enactment of “Quadrate”

When editing the program, the colour definitions were placed at the beginning, imitating the assignment of plotter colour pens. Apart from that, as with the re-enactment of Noll’s artwork, a key-controlled restart of the program was implemented so that new variants of the picture could be generated without great effort.

clg
colour 1,247,114,204: rem magenta
colour 2,252,188,117: rem orange
colour 3,84,72,74: rem grey
for i=1 to 4
gosub xy
gcol 3
rectangle x+100,y+100,100,100
next i
for i=1 to 100
gosub xy
gcol 2
rectangle x+100,y+100,25,25
next i
for i=1 to 100
gosub xy
gcol 1
rectangle x+100,y+100,6,6
next i
a$=get$
run
(xy)
x=rnd(600)
y=rnd(600)
return

4. Algorithmic re-enactments

One agenda of cybernetic art in the late 1960s was to help give the computer a more positive reputation. Artists of the first generation tried to shift society’s association of computer technology away from its predominant presence in fields like the military, science, and the economy, and the accompanying impression of cold rationality, towards the exact opposite: the precision, speed, and endurance that computers show when executing algorithms were to be harnessed as tools for a new art form. The rationalistic demand associated with computers was integrated into the theory of this art: information aesthetics (see the first part of this article).

It was precisely in this period that the term ‘algorithm’ was widely discussed for the first time. According to Jasia Reichardt (2008:72) and others, it is not the pictures but rather the algorithms behind them that constitute the artworks. The claim of information aesthetics and cybernetics that a formal linguistic expression may contain a concealed virtuosity was perhaps the most provocative thought they imposed on an art world defined by scepticism towards technology. (Nees reported a dispute with the artist Heinz Trökes in 1965 at the TH Stuttgart, in which the latter criticised that computer art lacked “Duktus” (the artist’s characteristic stroke), to which Nees responded that this could be programmed as soon as one knew what exactly the “Duktus” is. [Nees 2006:XIII].)

The didactic impetus inherent in cybernetic art also becomes apparent in the way the authors present their works – particularly when it comes to explaining how they function.

Algorithms in diverse forms are used to elucidate the composition and emergence of the computer graphics. Occasionally, they serve (as with Mohr [2014]) as proper captions. The second re-enactment part of the course was dedicated to such captions, with the aim of reconstructing the artworks with their help. In doing so, the adequacy of the algorithmic descriptions was to be evaluated en passant, gauged not least by the similarity between original and re-enactment.

4.1 “unambiguously in English”

Alan M. Turing stated in 1953 that a computable problem (that is, a problem that can be solved by a computer) does not require a specific language in order to be formulated unambiguously.

“If one can explain quite unambiguously in English, with the aid of mathematical symbols if required, how a calculation is to be done, then it is always possible to programme any digital computer to do that calculation, provided the storage capacity is adequate.” […] “problem is reduced to explaining ‘unambiguously in English’” (Turing 1953:289)

Algorithms can therefore also be phrased in natural languages. Only when they are meant to be processed by a computer do they require a technical and mathematical description in a programming language. Manfred Mohr appears to pursue exactly this thought, because on his homepage and in his publications he discloses the development process of the programs behind his pictures in the form of keyword-like descriptions. He states the following about his “Computer Generated Random Number Collages” created in 1969:

“About the algorithm: Around a central line, random numbers determine the position, height, width, and existence of the rectangular white lines. This is a visual music collage, bringing to mind rhythm and frequencies.” (https://www.emohr.com/sc69-73/vfile_random69.html)

This text was to serve as the basis of the first “algorithm-oriented” re-enactment. For this purpose, the course participants were asked to translate the text into a hand-drawn graphic. The drawings that emerged resembled the pictures (cf. fig. 7) but differed in central details. Without yet comparing them to the original, the drawings were used as the foundation for drafting a computer program. It soon became apparent that important information (e.g. the number, positioning, and colour of the “rectangles”) was missing.

mode 1
origin 640,0
rem central vertical line
plot 0,0:draw 0,1024
for i=1 to 5
    y=rnd(1024)
    x=rnd(640)
    rem d decides on which side of the central line the rectangle lies
    d=rnd(2)
    if d=1 then x=-x
    if d=2 then x=x
    h=rnd(205)
    w=rnd(640)-x
    rectangle fill x,y,w,h
next i

Fig. 7: Manfred Mohr: Computer Generated Random Number Collage Number 1 (1969)

With this missing information filled in by guesswork, computer graphics arose that only vaguely resembled the original (fig. 8). The re-enactment thus partially failed, and the question of whether Mohr’s descriptions actually contain all the information necessary to generate the corresponding pictures had to be answered in the negative. They are “algorithms” in name only.

Fig. 8: Output of an attempted re-enactment

4.2 cybernetic/diagrammatic

The “translation gap” between the description in natural language, the encoding in formal language, and the iconic drawing of an artwork may have contributed to the failure of this first attempt. Perhaps an algorithm that uses the same sign system (in Peirce’s sense) as the picture would be more adequate? This approach was pursued by transferring a flowchart by Frieder Nake into BBC BASIC. A process of interpretation could thereby be omitted, because the flowchart is characterised by its unambiguity, achieved through precisely defined graphical elements.

Fig. 9: Flowchart of the program poly1 (polygonal chain)

This example originates from Frieder Nake’s book “Ästhetik als Informationsverarbeitung” [Nake 1974:198]. After a short introduction to the notation of flowcharts, the course participants were encouraged to gloss the diagram in the margin with BASIC-like pseudo code. This allowed them to transfer the temporal structure of the chart into the symbolic structure of the code. In doing so it became clear that neither misunderstandings nor missing information constituted a problem any longer. The BBC BASIC program then drew graphics that closely resembled the example image in Nake’s book (1974:199):

clg
dim x(100),y(100)
rem f1 = number of points, f2/f3 = first point and the ranges for the random coordinates
input f1
input f2
input f3
n=f1
x(1)=f2:y(1)=f3
i=2
(loop)
x(i)=rnd(f2):y(i)=rnd(f3)
move x(i-1),y(i-1):draw x(i),y(i)
i=i+1
if i<n then goto loop
end

Fig. 10: “Zufälliger Polygonzug, 1963. 10 x 10 cm” (above), output of the re-enactment (below)

4.3 Code Translation

In the last experiment the course participants were confronted with a problem that has been known for quite some time in informatics: some computer programs have to be adapted to current systems, either because the original hardware on which the program ran can no longer be used, or because the age of the code (or of the programming language) poses a security risk or can no longer be properly maintained. If the source code is available, a manual translation into the target language is relatively easy; otherwise, the object code has to be translated by the machine itself.


As an example of this process, a source code excerpt from Georg Nees’ dissertation “Generative Computergrafik” from 1969 was chosen. The program codes printed in the book are written in a variant of the programming language ALGOL. ALGOL was one of the main influences that led to the development of BASIC (Thomas Kurtz in: Biancuzzi/Warden 2009:80), so the program structures already look quite familiar:

1 'BEGIN' 'COMMENT' SCHACHTELUNG.,
2 'REAL' LI, RE, UN, OB, H.,
3 H.=.5., OPEN(0,0).,
4 LI.=-130.,RE.=130.,
5 UN.= -90., OB.=90.,
6 ANF..
7 LEER(0,OB)., LINE(LI,OB).,
8 LINE(LI,UN)., LINE(RE,UN).,
9 LINE(RE,OB)., LINE(0,OB).,
10 LINE(LI,0)., LINE(0,UN).,
11 LINE(RE,0)., LINE(0,OB).,
12 LI.=LI*H.,RE.=RE*H.,
13 UN.=UN*H.,OB.=OB*H.,
14 'IF' RE 'GREATER' 1.0 'THEN' 'GOTO' ANF.,
15 CLOSE
16 'END' SCHACHTELUNG.,

Once again, the translation into BBC BASIC was achieved by glossing the original code. Given the apparent relation between the two programming languages, some transpositions already suggested themselves during the recoding – for example, dispensing with distinct data types considerably simplifies the programming. The resulting BBC BASIC program therefore also represents a kind of diachronic (programming) linguistics.

rem LI/RE/UN/OB = left/right/bottom/top (German: links/rechts/unten/oben), H = scaling factor
origin 500,500
H=0.5:clg
LI=-130:RE=130
UN=-90:OB=90
(ANF)
move 0,OB:draw LI,OB
draw LI,UN:draw RE,UN
draw RE,OB:draw 0,OB
draw LI,0:draw 0,UN
draw RE,0:draw 0,OB
LI=LI*H:RE=RE*H
UN=UN*H:OB=OB*H
if RE>1 then goto ANF
end

The validity could easily be verified by comparing the original graphic from Nees’ book with the output of the re-enactment:

Fig. 11: G. Nees’ “Schachtelung (Bild 3)” + re-enactment

This concluded the practical part of the course. Further re-enactments would surely yield new and deeper insights into cybernetic art, but would also require more advanced knowledge of programming in BBC BASIC: the works of Frieder Nake, Manfred Mohr, or Vera Molnar, for instance, suggest complex algorithmic structures.

Likewise, it seems reasonable to re-enact works of fine art (for instance Mohr’s objects) using 3D printing, or graphics made on analog computers (like the early works of Herbert W. Franke), though this would require preparing further techno-mathematical knowledge. Ultimately, it would be instructive to follow the later development of cybernetic art towards computer graphics and animation. The course discussed these points by means of presentations.

5. Conclusion: From cybernetic to artificial art

Fig. 12: image from the “Arnolfini” collection, drawn by AARON (1983)

The artist AARON, introduced in one of the presentations, formed a culmination point for the seminar. AARON is an artificial intelligence developed by Harold Cohen; in the early 1980s it started to generate computer art. AARON escalates the question of the algorithm, because its artworks are not easily legible from a mere inspection of the program code that constitutes the basis of the AI. Rather, the code establishes a structural condition of possibility for data-driven graphics. Hence, this art suspends the subjective moment on the producer’s side (the human artist as constructor of the algorithms). From here, according to the discussion, one direct line leads to computer-generated imagery (CGI), in which computer graphics become part of applied art, and another leads to newer developments such as trainable neural networks, which have also been concerned with the production of art more recently.

Fig. 13: Portrait of Edmond Belamy, 2018, created by GAN (Generative Adversarial Network)

As the picture shows, it is no longer the precision of the computer tools that makes the artwork stand out, but rather the exact opposite: a kind of artistic “imprecision” that comes very close to what was described as “Duktus” in the Nees debate. The alleged last refuges of cybernetic art become apparent when neural networks strive not for “art” but for photo-realistic synthesis: fig. 14 shows the artificial portrait of a woman who has never existed.

While this circumstance appears quite striking, it is put into perspective by the disturbing artefact at the right-hand margin of the picture. The algorithm had started to generate an additional face; the “marginal conditions” of the format, however, seem to have caused a faulty rendering – similar to Nees’ image “Locken”, whose circles condensed into a dark edge as soon as they collided with the material picture margin, the data of the AI algorithm caused the face to be compressed, as it were. This need not be a problem, since the neural networks do not require human subjects to evaluate their output; human critics would be far too slow to regulate the computerised learning process. Instead, two different computer processes compete against one another, one as artist and the other as critic – in a learning feedback process controlled solely by an algorithm.

Fig. 14: “These Persons do not exist” (28.02.2019, 12:57)

“The creation or coding of an automatic artist or critic is not at the top of the list of tasks, but will sooner or later be attempted nevertheless.” (Nake 1974:5 – own transl.)

*

translated by: Chiara Rochlitz

Figures

Fig. 1: “Vertical-Horizontal Number Three” (1965, A. Michael Noll), Source: https://protect-eu.mimecast.com/s/yiZGCnxp2h75mpPImEOKe?domain=collections.vam.ac.uk (16.03.2019)

Fig. 2: Source: Stefan Höltgen

Fig. 3: “Locken” (1971, Georg Nees), Source: https://protect-eu.mimecast.com/s/03elCoVq3sr4vpghoqn1p?domain=dada.compart-bremen.de (16.03.2019)

Fig. 4: Source: Stefan Höltgen

Fig. 5: “Quadrate” (1970, Herbert W. Franke), Source: https://protect-eu.mimecast.com/s/5RemCp804FnDAEVc7Y8Tk?domain=collections.vam.ac.uk (16.03.2019)

Fig. 6: Source: Stefan Höltgen

Fig. 7: “Computer Generated Random Number Collage Number 1” (1969, Manfred Mohr), https://protect-eu.mimecast.com/s/2uDYCq7vgC8WX6Esvtop1?domain=emohr.com (15.03.2019)

Fig. 8: Source: Stefan Höltgen

Fig. 9: “Flussdiagramm des Programms poly1 (Polygonzug)” (Nake 1974:198)

Fig. 10: Nake 1974:199

Fig. 11: Nees 1969:99

Fig. 12: Harold Cohen/AARON “Arnolfini” (1983), https://protect-eu.mimecast.com/s/UMxfCr8wjF8923ksLk94f?domain=dam-gallery.de

Fig. 13: Portrait of Edmond Belamy, 2018, created by GAN (Generative Adversarial Network), https://protect-eu.mimecast.com/s/d8AiCvlAnT7XA0YIE5Q92?domain=christies.com (15.03.2019)

Fig. 14: https://protect-eu.mimecast.com/s/_GgLCwVBosGpyZquXt8hi?domain=thispersondoesnotexist.com (28.02.2019)

Bibliography:

Collingwood, Robin G. (1947): “History as Re-Enactment”. In: idem: The Idea of History. Oxford: Oxford University Press.

Biancuzzi, F./Warden, S. (2009): Masterminds of Programming. Beijing et al.: O’Reilly.

Bogost, Ian (2012): Alien Phenomenology. Or What it’s like to be a Thing. Minneapolis/London: Univ. of Minnesota Press.

Fickers, Andreas (2015): “Hands-on! Plädoyer für eine experimentelle Medienarchäologie”.

Rheinberger, Hans-Jörg (2001): Experimentalsysteme und epistemische Dinge. Eine Geschichte der Proteinsynthese im Reagenzglas. Göttingen: Wallstein.

Ernst, Wolfgang (2012): Chronopoetik. Zeitweisen und Zeitgaben technischer Medien. Berlin: Kadmos.

Heidegger, Martin (1967): Sein und Zeit. Tübingen: Max Niemeyer Verlag.

Nake, Frieder (1974): Ästhetik als Informationsverarbeitung. Grundlagen und Anwendungen der Informatik im Bereich ästhetischer Produktion und Kritik. Wien/New York: Springer.

Nees, Georg (2006/1969): Visuelle Performanz. Einführung in den Neudruck des Buches Generative Computergraphik. In: idem: Generative Computergrafik. Edited by Hans-Christian von Herrmann and Christoph Hoffmann. Kaleidoskopien vol. 6, pp. IX-XXI.

Mohr, Manfred (2014): Der Algorithmus des Manfred Mohr: Texte 1963-1979. Berlin: Spector.

Turing, A. (1953): Digital Computers Applied to Games (Original)

Reichardt, J. (2008): In the Beginning … In: Brown, P. (Ed.): White Heat Cold Logic. British Computer Art 1960-1980. Cambridge/London: MIT Press, pp. 71-82.

ArtWare I

By Dr. Stefan Höltgen

Art history and art studies provide various methods for researching historical artworks. They approach them ‘from the outside’ with a historical analysis of motifs and discourses, in order to describe them beyond their purely visual aspects – for example, the political, aesthetic, or other topics hidden behind specific pictorial elements. Furthermore, the study of art also approaches the materiality of art ‘from within’: What materials and colours were used? Can the process of creation be reconstructed? Can any indications of special production techniques be found? For this purpose, non-hermeneutic methods and technologies are used, comparable to their use in forensics, archaeometry, chemistry, or the material sciences.

In addition, recent archaeological research (into very early cultures) has established a third method, broadly known as experimental archaeology. It comprises the attempt to determine the process of creation of an individual artwork and its conditions by recreating the artwork with the presumed methods and materials (cf. Coles 1979). This kind of access to historical and theoretical knowledge was recently reformulated as “carpentry” (Bogost 2012):

“Making things is hard. Whether it’s a cabinet, a software program, or a motorcycle, simply getting something to work at the most basic level is nearly impossible. […] Carpentry might offer a more rigorous kind of philosophical creativity, precisely because it rejects the correlationist agenda by definition, refusing to address only the human reader’s ability to pass eyeballs over words and intellect over notions they contain. Sure, written matter is subject to the material constraints of the page, the printing press, the publishing company, and related matters, but those factors exert minimal force on the content of a written philosophy. While a few exceptions exist […], philosophical works generally do not perpetrate their philosophical positions through their form as books. The carpenter, by contrast, must contend with the material resistance of his or her chosen form, making the object itself become the philosophy.” (Bogost 2012:92f.)

Based on this methodological triad of the historical analysis of art, I held a seminar on the ‘re-enactment’ of historical cybernetic art during the winter term 2018/2019 at the University of Greenwich. This period of art can be distinguished from others by the fact that computers – for the first time in history – played a significant part in the creation of the works. Alongside this new material constitution came the simultaneous emergence of ‘computer art’ and of a new theory: cybernetics, accompanied by information theory and information aesthetics. Since the early 1960s, artists have tried not only to utilise computers and their peripherals (monitors, plotters, and speakers) as art tools, but also to redefine the resulting art, as well as the processes of production and perception, with the help of cybernetics:

“The remarkable step taken here is the step from aesthetics as a rigorously rational analytic aesthetics to a generative method. […] The interpretation that we traditionally expect from an aesthetics gets changed into construction. The effort to rigorously define measures in order to evaluate certain characteristics of the work (of art), in the case of the model of Information Aesthetics is shifted to the opposite effort of algorithmically generating such works. Scientific and engineering methods break into the realm of the humanities – a provocation!” (Nake 2012:70)

The seminar started with the acquisition of knowledge about the theoretical field of cybernetics and its mathematical method (information theory). How can entropy (in Shannon’s definition) be used for ‘artificial communication’? What proportion of the informativity of a message is provided by statistics? But also: How are redundancy and complexity related to one another when they take on the role of the message exchanged between artist and recipient? We gathered the mathematical definitions by Max Bense (1971), George David Birkhoff (1932), Helmar Frank (1968), and Herbert W. Franke (1977) and discussed the aforementioned questions on the basis of their contributions.
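For reference, the two formulas at the heart of these questions can be stated briefly (standard textbook forms, recalled here for convenience rather than quoted from the seminar readings): Shannon’s entropy of a source with symbol probabilities p_i, and Birkhoff’s aesthetic measure relating order O and complexity C:

H = - SUM_i p_i * log2(p_i)   (Shannon entropy, in bits per symbol)
M = O / C                     (Birkhoff’s aesthetic measure)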

The media-archaeological challenge was to direct attention to the technical and formal a priori of the previously discussed historical theories: the setup and functioning of graphics hardware (especially displays, printers, and the architecture of graphics hardware) as well as the semi-technical (Kittler 2001) examination of computer graphics and algorithms formed the basis:

“Computer images are the output of computer graphics. Computer graphics are software programs that, when run on the appropriate hardware, provide something to see and not just to read. […] Simplified accordingly, a computer image is a two-dimensional additive mixture of three base colors shown in the frame, or parergon, of the monitor housing. Sometimes the computer image as such is less apparent, as in the graphic interface of the newfangled operating systems, sometimes rather more, as in ‘images’ in the literal sense of the word.” (Kittler 2001:31)

Two excursions were arranged to exemplify this perspective ‘live’ on hardware and software:
The exhibitions on mathematics and information technology in the Science Museum were examined from a media-archaeological point of view. To demonstrate that media technology fundamentally resists human categorisation when it is not projected onto the perspective of application (or onto the perspective of its effects, aesthetics, or economics), both exhibitions were analysed with regard to the media-technological criteria of differentiation ‘analogue/digital’, ‘information/material/energy’, and ‘operativity/materiality’. The question of which dispositive and discursive factors influenced the functional structure and design of the exhibitions and the selection of exhibits was also examined, as were the methods used by museum didactics to compensate for the state of dysfunctional media technology.

After the discussion of real historical hardware, the museumisation of software could be exemplified in the special exhibition ‘Games’ at the Victoria and Albert Museum. The overemphasis on a semi-material status of software (paratexts, production notes, accompanying texts, videos, interviews, etc.) compared with its necessary operativity was clearly recognisable in this exhibition, with its definite focus on the design of computer games. The prevailing criterion for the differentiation of computer games – their interactivity – was evident in only a few exhibits, and those tended to emphasise the artistic ambitions of their developers rather than the actual act of gaming.

Frieder Nake’s theory of the “second image” (2008) emphasises the duality of computer images: surface and subface. This theory can be generalised to technical media: their surfaces (output and interfaces) form a gateway for recipients and users, whereas the subfaces (the hardware with its specific technologies and temporality, as well as the code) constitute an invisible layer, accessible at best to developers. What kind of knowledge could be gained from turning the analytical view to the subfaces of historical media and from working with them in the sense of experimental archaeology? The answer to this question will be given in the second part of the seminar.

(Translation from German: Chiara Rochlitz)

Cited Works:

Bogost, Ian (2012): Alien Phenomenology, or What It’s Like to Be a Thing. University of Minnesota Press.
Coles, John (1979): Experimental archaeology. London: Academic Press.
Nake, Frieder (2012): Information Aesthetics: An heroic experiment. In: Journal of Mathematics and the Arts, 6:2-3, 65-75, DOI: 10.1080/17513472.2012.679458 (http://dx.doi.org/10.1080/17513472.2012.679458)
Bense, Max (1971): The projects of generative aesthetics. In: Reichardt, Jasia (ed.): Cybernetics, Art, and Ideas. London: Studio Vista, pp. 57–60.
Birkhoff, George David (1932): A mathematical theory of aesthetics. In: The Rice Inst. Pamphlet. 19, pp. 189–342.
Frank, Helmar (1968): Informationsästhetik und erste Anwendung auf die mime pure. Quickborn: Schnelle.
Franke, Herbert W. (1977): A Cybernetic Approach to Aesthetics. In: Leonardo, Vol. 10, No. 3 (Summer, 1977), pp. 203–206.
Kittler, Friedrich (2001): Computer Graphics: A Semi-Technical Introduction. In: Grey Room 02, Winter 2001, pp. 30–45.
Nake, Frieder (2008): Surface, Interface, Subface. Three Cases of Interaction and One Concept. In: Seifert, Uwe / Kim, Jin Hyun / Moore, Anthony (Eds.): Paradoxes of Interactivity. Perspectives for Media Theory, Human-Computer Interaction, and Artistic Investigations. Bielefeld: Transcript, pp. 92–109.

Greenwich in St Petersburg

Over December 2018 and January 2019, Dr Maria Korolkova of Greenwich Media Studies visited ITMO University in St Petersburg as a British Council Fellow for teaching, research, and collaboration. Among other projects during her time in St Petersburg, Dr Korolkova gave a public lecture at one of St Petersburg’s bars – a trend specific to this city of culture.

More research and exchange collaborations are under negotiation, so watch this space for further announcements.

Maths and Videogames in Media

By Roxana Pomplun, third year BA Media and Communications student

Media at Greenwich is not just about media and, broadly speaking, the humanities. Sometimes you need to remember your maths and coding. And this is when it gets exciting! Hands-on Archaeology of Early Computer Graphic Arts – this is the topic of a unique seminar series held here at Greenwich by Dr Stefan Höltgen from the Humboldt University of Berlin. The seminars cover fundamentals of computer art theory and technologies, as well as programming workshops in BBC BASIC. Part of the schedule was also a trip to South Kensington, where we visited the Science Museum and the Victoria and Albert Museum. At the Science Museum we went to the exhibitions Mathematics: the Winton Gallery and Information Age in the form of a guided tour conducted by Dr Höltgen.

Throughout the collection of early mathematical machines, he pointed out specific examples, ranging from the ‘Moniac’, a machine that modelled the British economy using the flow of water to represent money, through an astronomical telescope and regulator clock (a nod to the Greenwich Observatory, obviously), to a differential analyser that was used for secret military research during the Second World War. Within the Information Age exhibition he referred a lot to Alan Turing’s work, like the Pilot ACE, which embodied Turing’s idea of a universal machine able to perform any logical task (Turing wrote the specification for the computer many years before a prototype was built). Dr Höltgen further showed us examples of early microprocessors and microchips that revolutionised digital technologies. Eventually, we had a joint discussion on how everything we had just seen relates to media technologies and the subjects of our studies, before we continued our trip and headed over to the V&A.

This time we all strolled through the exhibition by ourselves, at our own pace. The videogames exhibition offers a range of topics related to video games and shows not only pieces about production and design but also considers topics such as race, sexual objectification, political criticism, online communities, fan art, etc. – to put it in a nutshell: there’s basically something for everyone. There were many aspects that made me relate to and think of various experiences. When looking at the stages of character design I remembered the work of a friend who studies animation at Greenwich. I was particularly happy when I saw examples of how professionals work with Unity – a game engine that I used myself in the Creative Coding module in year 1 to create visuals. Since we live in an era where almost everything has become interactive, you can also interact with parts of the exhibition, either by requesting further information about a topic or by simply playing video games (there was a whole section dedicated to letting visitors actually play). I really enjoyed this exhibition and would highly recommend it to anyone interested in gaming or digital arts (hurry, though, it’s only on until February 24th).

The trip was a great experience and I was happy to take part in such an opportunity, as in the whole seminar series. The seminar series was initially organised for the MA Media and Creative Cultures programme but is also open to other media students within the Greenwich School of Design. Hence, as a third-year BA Media and Communications student, I appreciated being given the chance to participate in such extracurricular sessions, particularly because I have personally set my research focus on the computer science side of media technologies. But enough about me: as a student of this wide-ranging programme, I would recommend that every media and communications student take part in additional events and courses like this. Make the most of your time as a student here and enhance your general experience by opening up to different areas within this extensive industry, because you’ll always learn something from it, and varied experiences are essential to excel in your career.


Visiting the Azzedine Alaïa Show

 


From sculptor to couturier, Alaïa’s nonconformist nature led to him being known as the ‘rebellious outsider’ of high couture. His success rose in the 1980s, and from then until his death in November 2017 he disregarded fashion week deadlines and stayed true to his own personal style.

Throughout the visit, one thing was clear: his works were designed as a second skin for the female figure. The material of each piece was draped along the curves of a woman’s body, and it was not difficult to see why he was so unique; Alaïa thought with his hands. He played with the concept of motion, using materials such as metallic cloths, lace, and mesh to emphasise the fluid movement of a skirt as the model sashayed along the catwalk. One piece stood out: a sturdy, unyielding crop top paired with a skirt designed to mimic movement. The metal-like material used to make the skirt was sewn side by side with mesh material. Vertical strips of metal. Mesh. Metal. Mesh. And so on. The skirt enabled the model to walk comfortably as it adjusted to her stride. Practical AND aesthetically mesmerising.

I found another collection interesting. Within this collection, the dresses showed an obvious influence from the Chinese cheongsam. The collars, short capped sleeves, and body fit reminded me of the dresses worn by Chinese women at traditional festivities such as Chinese New Year, as well as at weddings. Another collection showcased a dress with detailed floral lace designs, which looked Arabic in style; however, the structure of the piece mirrored the Indian sari, formed by a crop top and a long, ankle-length skirt. All in all, Alaïa masterfully created designs that are elegant and sophisticated, characterised by patterned fabric and clean lines – with none of the glaring slogans, logos, or neon colours so often seen in today’s fashion trends.

By Tiffany Kwok, MA Media and Creative Cultures