This two-part essay offers an investigation of the ‘depths’ of the algorithmic workings of digital visualisation systems. The focal point of this study is digital “memory”, as explored in relation to the Algebras of “Logic”, the notions of process paradigm, pre-representational grids and emergence. In this way, the algorithmic “unconscious” of digital visualisation technology – i.e. the inherent paradoxes and unknowns, the uncontrollable data flaws, the hidden inconsistencies, the exchanges and the potentialities of the system – is unveiled. New possibilities and ways of creating more than mere visual ‘phenomena’ emerge.

Process and digital trace

Digital technology has introduced deep changes not only to the domains of representation and knowledge organisation, but also to the expansion of human cognition and perception. As Rejane Cantoni argues: “…digital technologies will introduce changes that range from the installment of new models of representation and the organization of knowledge to our own transformation or cognitive expansion”. [1]

Essentially, with the advent of digital technology, process has not only become a transient multi-medium, but is also considered a kind of ‘paradigm’ through which we map ourselves and the world. The advantages and the limitations of such a ‘paradigm’ are best outlined by Lucas Evers and Susanne Jaschsko in their paper entitled “Process as Paradigm” (presented at the 16th International Symposium on Electronic Art, Ruhr, 2010), in which the field between certainty and unpredictability is critically explored.

“Process – non-linear and non-deterministic – has become one of the major paradigms in contemporary art & culture… This paradigm reflects the fact we have ever better means, concepts and technology to observe reality. As a result we have both a deeper understanding but simultaneously realise reality is far more difficult to master. Reality is an all-entangling process that holds many uncertainties – of which we are part…” [2]

Objects ‘give way’ to systems and their matter-energy information exchanges that are non-linear and non-deterministic. As coined by Manuel Castells and discussed in Lev Manovich’s writings, such a shift reflects the characteristic transition from Modernism to ‘informationalism’. This shift is manifested in the broader art and science contexts, i.e. the ‘death of the object’ in the art of the sixties and the rise of quantum physics. [3]

Such a ‘paradigm shift’ alters our cognition and perception, particularly in terms of the ways in which we define the relationship between form and in-formation, as well as our interaction with them. As we delve into the various data ‘currents’ – whose ‘ebbs and flows’ characterise not only the workings of digital visualisation systems, but also our environmental context in general – we encounter a dynamic endogenous transitionality that has various states and degrees of emergence, resulting in the loss of what we define as essential/primary or ‘good’ form. Diverse types of dynamically interacting in-between realities are activated, revealing various degrees of dimensionality and interstitiality that can be creatively explored.

Following Manovich’s argument on whether “software and computer networks redefine the very concept of form”, we may reach a similar conclusion to Manovich: indeed, “the new forms are often variable, emergent, distributed and not directly observable”. Even in the case of digital “primitives” [4], where various behaviours and dynamics are stored, we encounter open states of potentiality, as exemplified, for instance, in the work of the award-winning architect Greg Lynn. [5]

It becomes clear that an in-depth, systematic investigation of digital visualisation systems surpasses a limiting focus on imaging. Instances of data ‘currents’ can be visually “captured” as digital traces that carry intrinsic types of in-formation. These “traces” are a product of the complex workings, inherent flaws and inconsistent data exchanges of digital visualisation systems. Consequently, these digital traces escape not only any single frame of reference that could provide a proper and full description of them, but also the outworn principles and aesthetics of mere image production and processing, as well as the customary modes of simulation.

The theoretical and practical investigation of those digital traces enables us to unravel the workings of digital visualisation systems, and even the depths of the algorithm. An investigation like this is not limited to the observation of surface effects, but expands into the complexity and the half/by-products of the dynamic data interchange between the diverse computational levels and models of computer infrastructure.

Digital traces reveal a multitude of virtuality states that are implicit in various interacting potentiality fields. In extension to this condition, it may be argued that in the context of digital technology the notion of potentiality can be interpreted as the ‘gap’, e.g. the lack of consistency, transparency and/or immediacy, that exists between user and machine, as well as between software and hardware, and between the various computational levels of the digital visualisation system. Although this ‘gap’ is supposed to have been remedied through the use of the appropriate programming languages, it still dominates the software engineering discourse.

An investigation of what may be defined as an intrinsic structure and geometry of the system prior to any kind of (re)presentational production poses an interesting challenge. It can be argued that pre-representational ‘grids’ may be operating in the highly unstable depths of digital visualisation systems.

Transformation matrix and memory

The mathematical matrix is defined as the traditional means of making the world observable and a solution for controlling and ultimately eliminating disorder, entropy, asymmetry, and inconsistency. Ian Charles Braid implements a mathematical matrix as a regulatory mediator between the binary and the graphic levels of digital 3D modelling systems, in order to avoid the direct conversion of binary arithmetic into digital volumes, as this has been proved to be highly problematic. In this way, Braid creates a numerical model that consists of transformation matrices mediating between the drafting/modelling system and the binary model of the digital visualisation system.

The problem of imprecise visualisation arose from the imprecise conversions between the diverse levels of the computer infrastructure. In particular, as Braid explained, “no easy correspondence between Boolean operations on volumes and Boolean operations on bits is to be found”. [6] That lack of correspondence came to light when Braid attempted to solve the problems of his first scheme. Those problems included the impractical visualisation of solids through a large number of bits and the inconsistent transformation and placement of those solids. [7]
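
To make Braid’s remark concrete, the sketch below contrasts a volume-level Boolean intersection with a bit-level one. It is purely illustrative and not Braid’s code: the one-dimensional ‘solids’, the grid resolution and all the names are assumptions introduced here. The exact intersection of two intervals is non-empty, yet the bitwise AND of their sampled bitmasks is empty, because the overlap is thinner than a grid cell.

```python
# A minimal, hypothetical sketch (not Braid's code) of the missing correspondence
# between Boolean operations on volumes and Boolean operations on bits.
# A 1-D "solid" is stored either exactly as an interval (volume level)
# or approximately as a bitmask over a sampled grid (bit level).

RESOLUTION = 10  # assumed grid resolution: cells per unit length

def interval_intersection(a, b):
    """Exact Boolean intersection of two intervals (volume-level operation)."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo < hi else None

def to_bits(interval):
    """Approximate an interval as a bitmask over the sampled grid (bit level)."""
    return [1 if interval[0] <= (i + 0.5) / RESOLUTION <= interval[1] else 0
            for i in range(RESOLUTION)]

a, b = (0.0, 0.52), (0.48, 1.0)

exact = interval_intersection(a, b)                        # (0.48, 0.52): a thin but real overlap
bitwise = [x & y for x, y in zip(to_bits(a), to_bits(b))]  # every bit is 0: the overlap vanishes

print(exact)    # (0.48, 0.52)
print(bitwise)  # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```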

Braid created special algorithms for applying Boolean set operations to solids, instead of directly visualising how those operations were applied to bits. He implemented that solution by using a “transformation matrix” to describe a solid metrically. [8] The matrix described the hierarchical combinations of solids, enabling the creation of 3D models through constructive solid geometry. [9] In that way, the boundaries of a solid were visualised on the drafting system, after their geometrical elements were specified on the numerical model.
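
As a rough illustration of this mediating role, the sketch below pairs each primitive in a small constructive solid geometry tree with a 4×4 homogeneous transformation matrix: the numerical model is specified first, and only then would its boundaries be passed to a drafting level for display. The node classes, the `translation` helper and the example model are assumptions for illustration, not Braid’s actual data model.

```python
# A minimal sketch of a transformation matrix mediating between a numerical
# model and a graphic level: each node of a constructive solid geometry (CSG)
# tree carries either a Boolean operation or a primitive placed by a transform.

import numpy as np

def translation(dx, dy, dz):
    """Build a 4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [dx, dy, dz]
    return m

class Primitive:
    def __init__(self, name, transform):
        self.name = name
        self.transform = transform  # places the primitive metrically in space

class Boolean:
    def __init__(self, op, left, right):
        self.op = op        # 'union', 'difference' or 'intersection'
        self.left = left
        self.right = right

# A block with a cylindrical hole: a difference of two placed primitives.
model = Boolean(
    'difference',
    Primitive('block',    translation(0.0, 0.0, 0.0)),
    Primitive('cylinder', translation(0.5, 0.5, 0.0)),
)

def describe(node, depth=0):
    """Walk the tree: the numerical model is fully specified before anything is drawn."""
    pad = '  ' * depth
    if isinstance(node, Primitive):
        print(f"{pad}{node.name} placed at {node.transform[:3, 3]}")
    else:
        print(f"{pad}{node.op}")
        describe(node.left, depth + 1)
        describe(node.right, depth + 1)

describe(model)
```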

Conversion errors, however, cannot be eliminated in the majority of CAD and modelling applications. Exact computation or regularisation through the use of programmable constraints remains unattainable, despite the implementation of the transformation matrix. [10] As Christoph M. Hoffmann concludes, “the interplay of symbolic and arithmetic computation is a critical dimension in solid modelling”. [11] As a result of such inherently problematic computation and imprecise conversion, the visualisation outcomes can be contradictory and disproportionate to their input. [12]
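
The following sketch illustrates, in a purely hypothetical way, the interplay Hoffmann points to: symbolically, three rotations by 120 degrees compose to the identity, but floating-point arithmetic leaves a residue, so an exact equality test on the resulting coordinates fails. The function name and the example point are assumptions introduced here.

```python
# A minimal sketch of why exact geometric computation is hard to guarantee:
# rotating a point by 120 degrees three times should return it to its start,
# but floating-point arithmetic leaves a small residue, so a naive exact
# comparison fails. Purely illustrative.

import math

def rotate(point, angle):
    """Rotate a 2-D point about the origin by `angle` radians."""
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y)

p = (1.0, 0.0)
q = p
for _ in range(3):
    q = rotate(q, 2 * math.pi / 3)  # three thirds of a full turn

print(q)       # e.g. (0.9999999999999997, ...): close to p, but not identical
print(q == p)  # False: symbolically an identity, arithmetically a near-miss
```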

As can be seen through investigating the case of Braid’s transformation matrix, the computational workings of digital visualisation systems are by no means tamed and predictable. Even if the operation of certain applications is based on what seems to be a more direct type of real-world data processing, the operation principles and foundations of digital visualisation systems remain essentially similar. We are dealing with highly complex systems that have various high-abstraction levels containing a number of unknown behaviours as well as varying degrees of instability and emergence.

The mathematical matrix is not simply meant to tame the contrasting dynamics and inconsistent conversions between the syntax/semiotic-based high abstraction levels and the lower level computations of the system. The situation is far more complex. The lower levels of the system retain their built-in variability, irrespective of external interactions and input, while the interchangeable states of reduction and excess, the machine constraints, the randomness, and the incompatibilities between the diverse computational levels dominate the system even more.

As a result of such complexity, not only a paradoxical “oscillation” between binary, semiotic and geometrical types of “spatiality” occurs, but also an interplay between valid and invalid ones emerges. Such an ‘oscillation’ affects boundary definition and conditions and causes states of elliptical or excessive and multiple dimensionality to occur by default.

The workings of the interaction between the various computational levels of a digital visualisation system can be investigated in more depth in relation to artificial ‘memory’ and Sigmund Freud’s ‘A Note upon the Mystic Writing Pad’ (1925). Freud’s Mystic Writing Pad is not only relevant to media theory due to the ways it deals with the essential operations of memory, but also to the notion of digital trace and drawing. As Rosalind Krauss describes, Freud’s Mystic Writing Pad operates like a “Wunderblock”: as its top layer registers new impressions from the world, these are simultaneously transferred onto the underlying waxen slab. It is only during this periodic procedure that both layers are re/de-touchable.

If the top layer of the Mystic Writing Pad bears an almost ‘perspectival’ depth due to the fact that it is constantly ‘over-painted’ by new impressions, the second layer permanently retains a non-hierarchical superimposition of personal experience. [13] The first layer resembles the function of our short-term or ‘working’ memory, while the second layer plays the role of our long-term memory. The latter is the ‘sediment’ of our experience that is processed as knowledge and turns into preconception.

The interaction between short-term and long-term memory determines the assimilation and reworking of new and past information. As explained by Francis Crick and Christof Koch, the interpretation of the visual world and the creation of viewer-centred representations that lead to “cognitive” representations are the products of such interaction. [14]

Artificial memory, which functions more like an ‘enhanced’ long-term memory, is not meant to deteriorate. It can be adaptable up to a certain degree, but it lacks the processes and the true development of human memory. Each piece of information is stored and classified according to its format, size and date of insertion or modification. Normally, each cluster maintains its independence from the rest. The user may access it at any time as well as copy it ad infinitum. Simply by clicking the ‘undo’ or ‘redo’ options, information is perfectly restored, time is regained and mistakes are ‘perfectly’ corrected.
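
A minimal sketch of this kind of ‘perfect’ restoration is given below, assuming a simple history stack rather than any real application’s architecture: every committed state is kept verbatim, so ‘undo’ and ‘redo’ are lossless look-ups rather than anything resembling the reworking of human memory. The class and method names are hypothetical.

```python
# A minimal, hypothetical sketch of artificial memory as a lossless history stack.

class ArtificialMemory:
    def __init__(self, initial_state):
        self.history = [initial_state]  # every stored state is retained verbatim
        self.position = 0               # index of the current state

    def commit(self, new_state):
        """Store a new state; any 'redo' branch beyond this point is discarded."""
        self.history = self.history[: self.position + 1] + [new_state]
        self.position += 1

    def undo(self):
        """Step back to the previous state, restored exactly as it was stored."""
        self.position = max(0, self.position - 1)
        return self.history[self.position]

    def redo(self):
        """Step forward again; the 'erased' state returns intact."""
        self.position = min(len(self.history) - 1, self.position + 1)
        return self.history[self.position]

memory = ArtificialMemory("blank canvas")
memory.commit("first stroke")
memory.commit("second stroke")
print(memory.undo())  # "first stroke"  - time is 'regained'
print(memory.redo())  # "second stroke" - the mistake is 'perfectly' corrected
```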

As the artist and educator Leo Duff describes: “The use of new media gives a new dimension to the use of memory in drawing, particularly long term memory. It will ‘remember’ and store images for us. What would have previously been rubbed-out can be cut and saved in case we decide to reintroduce it to the drawing at a later point. Does this make the all-important editorial process (whether automatic or carefully considered) of drawing easier or more complicated?” [15]

Artificial memory records past and present inter-dependencies and links (as contrasted with real relations) between various elements, and affects the course of design and creative practice in various ways: particular stages of the creation process may be suspended, shortened, omitted, juxtaposed, etc. [16]

What tends to be forgotten is that artificial memory is a fragment from a larger complex system that is characterised by varying degrees of instability; from an assemblage of dynamically and unpredictably interacting computational layers, data flows and matrices. We are able to grasp the increasing degree of complexity, inconsistency and instability of digital visualisation systems, particularly when working with architectural 3D modelling and VR, rather than simple imaging applications.

The element of the unexpected arises not simply from the interaction between user input and stored information, but also, from the flaws of the system that obstruct the regularising function of the transformational matrix and cause artificial memory to degenerate. In particular, the hierarchy and regularity of the arithmetic order of the transformation matrix, as well as the robustness of the system, become corrupted by noise, data saturation etc.

The transformation matrix numerically describes a digital solid as a data structure that is composed of vertices, edges, and faces. As Braid explains, only “vertices and edges, unlike face equations, are explicitly stored for each object” and created anew in each operation and transformation. Braid also explains that “every face that is generated from the intersection [of the original faces] lies on an original face”. As a result, the faces of a solid “show traces of its construction” and therefore maintain the characteristics of the original solids, including their colour, orientation, and triangulation. Furthermore, as Braid reduces the amount of stored information for each solid, the “connections between edges and faces do not matter”. [17]

Consequently, the faces of a solid are not explicitly defined and the connections between vertices, edges and faces may be loose. [18] In this case, the orientation of the faces may be too ambiguous and the geometrical elements of a solid may be unconnected. Such a solid is defined as ‘invalid’, because its structure, characteristics and behaviours are flawed. This problematic condition becomes more apparent and increases even when a simple transformation, such as rotation, is applied to the solid. [19]
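
The sketch below illustrates this fragility under stated assumptions; it is not Braid’s data structure. Only the vertices are stored explicitly and a face is merely a loose list of vertex indices, so its planarity has to be checked rather than being guaranteed. After a chain of ordinary floating-point rotations, the four corners of an exactly planar face drift minutely off one plane, and an exact validity test declares the solid ‘invalid’ even though, symbolically, nothing should have changed.

```python
# A minimal, hypothetical sketch of how a simple transformation can invalidate
# a loosely connected solid: vertices are explicit, the face is only implied,
# and planarity is checked via the scalar triple product of its edge vectors.

import math

def rotate_x(v, a):
    """Rotate a 3-D point about the x axis by `a` radians."""
    x, y, z = v
    return (x, math.cos(a) * y - math.sin(a) * z, math.sin(a) * y + math.cos(a) * z)

def rotate_y(v, a):
    """Rotate a 3-D point about the y axis by `a` radians."""
    x, y, z = v
    return (math.cos(a) * x + math.sin(a) * z, y, -math.sin(a) * x + math.cos(a) * z)

def coplanarity_defect(p0, p1, p2, p3):
    """Scalar triple product of the edge vectors from p0: zero iff the points are coplanar."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    w = [p3[i] - p0[i] for i in range(3)]
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

# Explicitly stored vertices of a square face; the face itself is only implied.
face = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]

print(coplanarity_defect(*face))   # 0.0 - valid before any transformation

for _ in range(100):               # a chain of ordinary rotations
    face = [rotate_y(rotate_x(v, 0.1), 0.2) for v in face]

defect = coplanarity_defect(*face)
print(defect)                      # typically a tiny non-zero residue, e.g. around 1e-16
print(defect == 0.0)               # False - the face fails an exact planarity test
```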

The existence of the algorithmic paradox and the unexpected comes to the fore, as the inconsistent and unexpected outcomes described above derive from the use of a seemingly consistent system. The creation of a digital 3D model can be understood as a non-linear passage between algorithmic incidents of de/re-generation. When working with such unstable systems, a point is reached where we cannot completely unveil the creation process we have followed. Any attempt to do so takes us back to a diffused condition, instead of a ‘ground zero’, an origin. There is a lack of hierarchy, stability and robustness. A coherent and consistent whole is thus proved to be arbitrary. This condition gives rise to what Gilles Deleuze defines as “leakages”, synonymous with “indistinctions” and “in-between spaces”. [20]

The intervention of paradox is the symptom of the arbitrary and flawed nature of technology itself. The realisation of the wider complexity of co-ordinates into which a plan falls, and which causes inconsistencies between cause and effect, draws attention to the elements of paradox and emergence in technology, and provides the challenge for creatively ‘delineating’ those complexities and in-between states. [21]


References:

[1] - Cantoni, Rejane, “Intelligent Environments: Research and Experiments in Interactive Cinema”, in ISEA2010 RUHR Conference Proceedings, eds., Judith Funke, Stefan Riekeles, Andreas Broeckmann, Hartware MedienKunstVerein, Revolver Verlag, Berlin, 2010, p. 431.

[2] - Evers, Lucas & Susanne Jaschsko, “Process as paradigm”, in ISEA2010 RUHR Conference Proceedings, eds., Judith Funke, Stefan Riekeles, Andreas Broeckmann, Hartware MedienKunstVerein, Revolver Verlag, Berlin, 2010, p. 419.

[3] - Lev Manovich’s official Web Site, “The Poetics of Augmented Space”, (2005), http://www.manovich.net/DOCS/Augmented_2005.doc (accessed September 17, 2010) and Lev Manovich, “Abstraction and Complexity”, NeMe, article no. 94, (2005), http://www.neme.org/main/94/abstraction-and-complexity (accessed November 8, 2010).

[4] - Lev Manovich’s official Web Site, “The shape of information”, (2005), http://www.manovich.net/DOCS/IA_Domus_3.doc, (accessed September 17, 2010).

[5] - Fratzeskou, Eugenia, “Operative Transformations (Part 2)”, in Digimag, Issue 67, September 2011, http://www.digicult.it/digimag/article.asp?id=2146, (accessed November 8, 2011).

[6] - Braid, Ian Charles, Designing with Volumes, Cambridge, Cantab Press, 1974, p. 29, (113, 28).

[7] - Ibid, pp. 27-29, 35 (35, 38).

[8] - Ibid, p. 30.

[9] - Ibid, pp. 28-31.

[10] - Mantyla, Martti, An Introduction to Solid Modelling, Computer Science Press, Rockville, MD, 1988, p. 111; Hoffmann, Christoph M., “Robustness in Geometric Computations”, Journal of Computer and Information Science and Engineering, 1(2), June 2001, pp. 143-155, http://www.cs.purdue.edu/homes/cmh/distribution/PubsRobust.html (accessed March 12, 2004), pp. 3, 6; Hoffmann, Christoph M., Solid & Geometric Modelling, Morgan Kaufmann Publications, 1989, pp. 5, 39 (33).

[11] - Ibid, p. 8.

[12] - Fratzeskou, Eugenia, Visualising Boolean Set Operations: Real & Virtual Boundaries in Contemporary Site-Specific Art, LAP – Lambert Academic Publishing, 2009, pp. 73, 75-77.

[13] - Krauss, Rosalind, The Optical Unconscious, MIT Press (October Book), 1998 (1993), pp. 54-57.

[14] - Crick, Francis and Christof Koch, “The Problem of Consciousness”, in Searchlight: Consciousness at the Millennium, ed. L. Rinder, Thames & Hudson, London, 1999, pp. 141-143. For a creative approach on the relationship between digital memory and diagramming see Fratzeskou, Eugenia, “Primary Solids” (March 2002), in TRACEY Drawing & Visualisation Research Journal, Loughborough University, UK, http://www.lboro.ac.uk/departments/sota/tracey/journal/dat/fratzeskou.html (accessed November 8, 2011).

[15] - Duff, Leo, ed., Editorial introduction to “Mapping and Memory” Issue (May 2001), in TRACEY Drawing & Visualisation Research Journal, Loughborough University, UK, http://www.lboro.ac.uk/departments/sota/tracey/journal/mam1.html (accessed November 15, 2001).

[16] - Fratzeskou, Eugenia, Operative Intersections: Between Site-Specific Drawing and Spatial Digital Diagramming, LAP – Lambert Academic Publishing, 2010, pp. 21-23.

[17] - Braid, Ian Charles, Designing with Volumes, Cambridge, Cantab Press, 1974, pp. 37-38, 116.

[18] - Fratzeskou, Eugenia, Visualising Boolean Set Operations: Real & Virtual Boundaries in Contemporary Site-Specific Art, LAP – Lambert Academic Publishing, 2009, pp. 75-77.

[19] - Ibid, pp. 72-77 and Fratzeskou, Eugenia, “Inventing New Modes of Digital Visualisation in Contemporary Art”, in “Transactions” Special Issue, Leonardo 41, No. 4 (2008), p. 422.

[20] - Gilles Deleuze in Rajchman, John, Constructions, Writing Architecture Series, MIT Press, 1998, pp. 68, 71, 69-72.

[21] - Fratzeskou, Eugenia, New Types of Drawing in Fine Art: The Role of Fluidity in the Creation Process, LAP – Lambert Academic Publishing, 2010, pp. 30-33.
