Clockwork automata, but also motor automata, in short, automata of movement, made way for a new computer and cybernetic race, automata of computation and thought, automata with control and feedback. The configuration of power was also inverted, […] power was diluted in an information network.1
As depicted in the movie Elysium, in the near-future scenario of 2154, data will not simply be processed by machines or by brains but rather exchanged across brains by means of machines.2 Elysium is a self-sustainable, pollution-free space habitat that lives off the underclass work of a derelict planet earth, overpopulated and deranged, with a dying human species. In this scenario, machines cannot think and rather seem to be simply instrumental to human-oriented intentions (exposing a traditional moral puzzle of the battle of good versus evil, which is ultimately ascribed to voluntary decisions). Whilst appearing to be mere channels of governance, machines constitute the computational infrastructure of Elysium, whose operations are precisely neutral: unable to understand the cause of things and thus devoid of will (since the AI probe parole officer cannot interpret Max’s allusive comments and jokes, it says to him: “Do you want to speak to a human?”). The neutrality of this algorithmic architecture, however, also reveals the effective power of instrumental reason. Here machines do not simply rebel against the human (as in the movie I, Robot, for instance), but more importantly they process (i.e., select, exchange, store, activate) any form of information that can be destructive or creative, beneficial or detrimental to the human race. This form of neutralized automation, whereby robots do exactly what humans tell them to do, comfortably reveals a sort of unilateralization of thought, an asymmetry in power between humans and machines through which instrumental reason is realized. It is only through the rebooting of the information matrix, requiring a radical change in the initial conditions of its algorithmic inputs, that the entire techno-organic elite of Elysium can be eliminated.
And yet it is precisely the emphasis on the ultimate possibility of changing the initial conditions of the automated architecture of Elysium that exposes instrumental reason to an interesting equivalence of thought and automation, revealing that algorithmic processing, in spite of much critique, is open to revision.
Instead of reducing this near-future possibility to the mere fantasy of a Promethean thought free from finitude and death, I suggest that algorithmic automation exposes the potentiality of instrumental reason to go beyond its premises, that is, beyond determinism or the repetition of its initial conditions. In particular, this article offers a conception of thought that, whilst challenging the framework of representation based on discrete acts of cognition and perception, at the same time points to a dynamic form of automation based on the possibility that discreteness and finitude are conditioned by randomness and infinity.
I address Gilles Deleuze’s critique of the “image of thought”, according to which pure thought is always already based either on a voluntary and common act of thought, the “I think” (Descartes’s method), or on “re-cognition” (the Kantian method of the transcendental subject).3 By pushing representation towards its own limits, Deleuze’s critique aims to create new categories of thought defined not by pre-formed subjects or objects but by contingent encounters. According to Deleuze, these encounters force thought to think anew through a mode of learning that aims not to prove ideals but to reveal that the being of the sensible, or the ontological condition of affection, is the primary motor of thought; indeed, it is thought (thought qua affect). This condition cannot be met by the bare empirical form of sensorimotor perception, as if the senses themselves were equipped with the capacity to generate thought. On the contrary, thought emerges from the intensity of an encounter afforded by the being of the sensible and not by the mere fact of sensing. Intensity is not a capacity of the individual but entails a process of individuation across scales and dimensions in which singularities, or differential tendencies, explain thought in terms of heterogeneity. Deleuze discusses this ontogenetic principle of thought as matter that is affected by external forces that impinge upon the body. In other words, thought is precipitated by the capacity to be affected and to affect, which includes the “destratification” of thought from what is directly sensed and re-cognized (i.e., thinking what is already known).
The condition of the being of the sensible is thus determined by the uncertainties of events, changing the rules of thought. In other words, to think is a matter of contingency in which affects and percepts are activated by the capacities of a body to feel. Deleuze’s rejection of the image of thought is thus also opposed to both cognitivism and computationalism, for which thinking simply amounts to the repetition of discrete states or rule-based processing that can run on any form of hardware.4 Recent articulations of such views include, for instance, Andy Clark’s theory of extended cognition, which maintains that thinking results from the input-output relations of neural networks and that cognitive faculties can therefore be extended to and through machines.5 Here, the Cartesian subject “I think”, which Deleuze criticizes because it is based on the universal common sense that everybody (and, in this case, everything) thinks, is not only preserved but amplified by mechanical (analog and digital) devices that act as external plugs of the now enlarged neural networks of cognition.
Nevertheless, whilst the view of extended cognition appears to reify rather than challenge a notion of thought based on a universal mechanical repetition of initial conditions, it seems important at the same time to explore complexity in computation. This article aims to problematize the fusion of affect and thought as well as the too-swift opposition between automation and affect, or between the being of the intelligible and the being of the sensible. Here the assumption is that only the sensible, defined by the affective encounter, can expose the productive and heterogeneous dynamics of thought beneath representation, whereas the intelligible is always already trapped in eternal forms (Platonic ideas), common sense (the Cartesian universalism of the I think) or the comfort of re-cognition (the Kantian transcendental subject, ante-posing the subject to the transcendental condition). In this article I will attempt to argue instead that dynamism (i.e., openness to revision) is at the heart of the intelligible and can be evinced from within the formal logic of computation. My aim is to address automation as the dynamic computational architecture of the intelligible by discussing one of the fundamental problems in computation: the problem of the incomputable, or randomness. This problem, I want to point out, reveals that the repeatable condition of discrete rules is not immune from the infinite varieties of information, or randomness, that these rules encounter in online, distributed and parallel computational systems, in which the possibility of changing initial conditions is actualized. As depicted in the movie Elysium, the instrumental reason of its automated network infrastructure is offered the possibility not simply to execute rules but to change its initial conditions through an irreversible rebooting in which the being of the intelligible is revised.
In other words, automation exposes the inevitable randomness intrinsic to the being of the intelligible, and to automation itself. From this standpoint, instead of claiming that the being of the intelligible is unable to change, my effort here is to expose the dynamics of the intelligible: randomness in computation. To challenge the representational framework of thought (the Platonic ideal form, the Cartesian I think and the Kantian transcendental subject), I want to suggest, it is important to rearticulate the being of the intelligible and thus reevaluate the relation between the sensible and thought, between affect and reason.
In particular, I will turn to Alfred N. Whitehead’s notion of prehension, as this includes both the distinct but necessary dynamic activities of the sensible and the intelligible involved in the physical and conceptual selection and evaluation of data.6 For Whitehead, prehensions explain the function of reason in terms of the concreteness of abstract relations, but also of the non-reversible and non-equal relation between physical and conceptual prehensions. In other words, I want to suggest that the theory of prehension contributes to revisiting the notion of affective thought as including the non-reversible and yet dynamic conditions of the being of the sensible and of the intelligible. Similarly, the theory of prehension will help us to redefine the computational view of cognition in terms of open-ended rules, that is, rules that are open to revision or rescripting not only because they are responsive to the physical environment which they seek to simulate, but more importantly because their discrete operations become infected and changed by informational randomness. The apparent opposition between affect and computation is here dissolved to reveal that dynamic automation is central to the capitalization of intelligible functions.
Gilles Deleuze laments that the presupposition of thinking (i.e., thought as Being or beings) rests upon what he calls the image of thought, defined by the general credo of Cogitatio natura universalis, according to which “conceptual philosophical thought has as its implicit presupposition a pre-philosophical and natural image of thought, borrowed from the pure element of common sense.”7 From this standpoint, everybody thinks and everybody has a desire to know. Thinking therefore is implicitly based on a pre-philosophical conception of thought, on the very essence of thought as pure thought. Deleuze rejects the Platonic form of reason (the synthesis of all thoughts), the Cartesian implicit naturalization of thinking, and the Kantian transcendental model of recognition.8 The image of thought here constitutes an ideal orthodoxy, which is rooted in form, common sense and the transcendental model. The world of representation is thus defined by “identity with regard to concepts, opposition with regard to the determination of concepts, analogy with regard to judgement, resemblance with regard to objects.”9
Deleuze, however, insists that to think is to open up a crack within the crust of thought, revealing that the conditions of thought are primarily the destruction of an image and “the genesis of the act of thinking in thought itself.”10 Instead of being an act of recognition, to think is “a fundamental encounter” defined not by what can be sensed, but by the emergence of a sensibility that disarticulates (de-territorializes, de-stratifies) a given sense. This is an aesthetic encounter, but it is not determined by the expression of qualities, which are, according to Deleuze, the point of view of recognition based on the empirical exercise of the senses. The aesthetic encounter rather involves the activities of the imperceptible that are confronted by a limit. Beyond or beneath recognition, there advance the activities of the imperceptible force precipitating a change within thought. To think thus involves what can be sensed but yet remains imperceptible.11 To the question: what are the conditions for thought to think? Deleuze irrevocably responds: intensity.12 This is difference in itself, the germ of the differential relation, giving rise to more difference. The contingency of the encounter thus coincides with the eruption of intensity breaking open and re-establishing connections “which travers[e] the fragments of a dissolved self as it does the borders of a fractured I.”13 To think is not to re-cognize what we already know but to be confronted with problems rather than propositions, with imperceptible activities rather than perceptual qualities, with sense-making and not with logos. Ideas are not pre-formed thoughts but instead pose problems to thought that cannot be solved through pre-established rules or axiomatic truths.
As opposed to the automata of thought, which now dominate contemporary culture, Deleuze describes pure thought in terms of intensity because the force of contingency is registered in terms of affection, exposing the being of the sensible as the primary condition for thought. Intensity is therefore the motor of the sensible and the being of the sensible operates by means of affective encounters. According to Deleuze, affect coincides with the recording of change from one state to another as felt by a body.14 According to this view, the being of the intelligible has a secondary role and is rather dependent on the being of the sensible whose material dynamics somehow guarantee that thought cannot remain the same. In what follows, I precisely aim to tackle how the relation between measuring and the un-measurable, between discrete units and continuous form has characterized debates in the context of computational culture.
Central to these debates is a specific attention to the “timing of affect,” which includes both the temporality of the event and a passive and active synthesis of time affording a body the capacity to be affected and to affect. The timing of affect thus points out that thought is at once pre-temporal and durational, and that it emerges from the imperceptible and anticipatory effects emanating from the eventful character of an encounter, where the future bends over the sheets of the past. Deleuze describes this process through the action of the “dark precursor”;15 what is not yet known is not what is non-accessible to thought, but rather what advances from a virtual field of potentialities entering the field of possibilities and probabilities. The timing of affect specifically addresses this bending of temporalities as being primarily a movement taking up the whole body, suspending intentionality and emotion. The timing of affect, it has been argued, coincides with a pre- or a-cognitive time, with a primary movement of thought occurring before and fundamentally constituting any form of emotion and cognition. As opposed to theories of representation, the notion of affect offers us a pre-cognitive approach to perception and thought, emphasizing the primacy of what is immediately (i.e., non-mediated by ideas, the mind or subjective will) felt by a body. In this context, the timing of affect is crucial to the extent that what is immediately felt is recorded by a body as a plenitude of energetic potentials and not in terms of perception, emotion and cognition. The notion of affect has also contributed to extending the understanding of power away from ideological critique, revealing the affective dimension of power, which involves not simply the manipulation of emotions but, more importantly, the capacity of power to activate potential responses that catalyze decision-making in the present according to what may happen in the future.
This mode of power is said to operate aesthetically as it modulates the capacities of a body to feel and be directed towards action.16 At the same time however, the timing of affect has also become the ultimate moment of resistance against the automation of thought that is now said to operate through interactive feedback.
In particular, algorithmic automation now refers to the capacity of automation to include temporality as a variable or probability within its quantitative calculations. For instance, smartphone applications are based on interactive algorithms that multiply their functions by establishing connections between users and what the software affords them to do. Such affordances are not simply already scripted in the program but are instigated by a set of possibilities that aim to incite and anticipate users’ behavior. Algorithmic automation is thus able to quantify human responses through the feedback generated between users and machines, or between real-time data retrieved from the user’s environment and sets of data inscribed in databases. Moreover, it has been argued that the extended impact of search engines such as Amazon’s and especially Google’s, developed through interactive algorithms, has led this new form of aesthetic power to act directly on neuroperceptual capacities, modulating responses and anticipating choice.17
At the same time, however, it has been suggested that within interactive systems, qualitative responses such as changes in skin conductivity, or the presence of “activation potentials” in the brain, are necessarily analog because they can only be represented by a continuous modulation of variables. Affective responses are therefore not directly translatable within algorithmic automation, insofar as computation necessarily operates through the digital determination of values (i.e., quantification).18 On this view, which takes lived and analog qualities to be superior to computational modes of quantification, the algorithmic automation of affects is said to reduce the being of the sensible, and thus the potentialities of feeling, to intelligible procedures, involving a discretization of continuous processes. From the standpoint of affect theories, the reduction of the sensible can only occur by means of approximation and not through the exact measurement of feelings. In other words, interactive automation fails to calculate affective responses or the intensity of affective thought.
The tension between the infinite potentialities of affective synthesis and the pre-set probabilities of automatic calculation, I want to argue, has reinforced rather than challenged the opposition between dynamics and mechanics, or thought versus representation. Much debate about digital culture, aesthetics and politics is grounded in problematic assumptions about notions of order and chaos in information theory. Nevertheless, recent information theory points out that the question of order and chaos within computation rather involves an understanding of randomness in terms of infinite or non-compressible quantities. The next section addresses this question to suggest that automated thought is not opposed to the pre-individuated activities of affect.
Automation involves the breaking down of continuous processes into discrete components whose functions can be constantly reiterated without error. In short, automation means that initial conditions can be reproduced ad infinitum. The form of automation that concerns us here was born with the Turing machine, an absolute mechanism of iteration based on step-by-step procedures. As already discussed elsewhere, nothing is more opposed to pre-cognitive thought, or the being of the sensible, than this discrete-based machine of universal calculation. The Turing architecture of pre-arranged units that can be interchangeably substituted along a sequence is effectively the opposite of a differential continuum, of intensive encounters and affect.
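The discrete, step-by-step architecture invoked here can be made concrete with a minimal sketch. The simulator and rule table below are illustrative inventions rather than Turing’s original formalism: a machine is nothing but a finite transition table iterated over a tape of discrete symbols, and the same initial conditions always reproduce the same result.

```python
def run_turing_machine(tape, rules, state='start', max_steps=1000):
    """Minimal Turing machine: `rules` maps (state, symbol) to
    (next_state, write_symbol, head_move). '_' is the blank symbol."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == 'halt':
            return ''.join(cells[i] for i in sorted(cells)).strip('_')
        symbol = cells.get(head, '_')
        state, cells[head], move = rules[(state, symbol)]
        head += move
    raise RuntimeError('step bound exceeded')

# A toy rule table: invert every bit, then halt at the first blank.
invert = {
    ('start', '0'): ('start', '1', +1),
    ('start', '1'): ('start', '0', +1),
    ('start', '_'): ('halt', '_', 0),
}

print(run_turing_machine('0110', invert))  # -> 1001
```

Rerunning the machine on the same tape yields the same output every time: iteration without error, exactly the reproducibility of initial conditions described above.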
Nevertheless, I want to point out that since the 1960s the nature of automation has undergone dramatic changes as a result of the development of computational capacities for storing and processing data across a network infrastructure of online, parallel and interactive systems. Whereas previous automated machines were limited by the amount of feedback data they could collect and interpret, algorithmic forms of automation now analyze a vast number of sensory inputs, compare them with networked data sets, and decide which output to give. Algorithmic automation is now designed to analyze and compare options, run possible scenarios or outcomes, and perform basic reasoning through problem-solving steps not contained within the machine’s programmed memory. For instance, expert systems now use reasoning to draw conclusions through search techniques, pattern matching, and web data extraction. From global networks of mobile telephony to smart banking and air traffic control, these complex automated systems have come to dominate our everyday culture.
Much debate about algorithmic automation as a form of digital simulation based on the Turing discrete computational machine (a series of discrete steps to accomplish a task) suggests that algorithmic automation is yet another example of the Laplacian view of the universe, defined by determinist causality.19 Instead, I want to draw attention to the role that randomness plays in computational theory, suggesting that the calculation of infinities has now turned incomputable functions, that is, real numbers, into probabilities, which are at once discrete and infinite. In other words, whereas algorithmic automation has been understood as being fundamentally a Turing discrete universal machine that repeats the initial conditions of a process, the increasing volume of incomputable data (or randomness) within online, distributed and interactive computations is now revealing that infinity is central to computational processing and that probability no longer corresponds to a finite state.
In order to appreciate the role of incomputable algorithms in computation, it is necessary to refer here to the logician Kurt Gödel, who challenged the axiomatic method of pure reason by proving the existence of undecidable propositions within logic. In 1931, Gödel took issue with the mathematician David Hilbert’s metamathematical program. He demonstrated that there could not be a complete axiomatic method, nor a pure mathematical formula, according to which the reality of things could be proved to be true or false.20 Gödel’s “incompleteness theorems” showed that there are propositions that are true but cannot be verified by a complete axiomatic method. Such propositions were therefore ultimately deemed to be undecidable: they could not be proved by the axiomatic method upon which they were hypothesized. In Gödel’s view, the problem of incompleteness in the attempt to demonstrate the absolute validity of pure reason affirmed instead that no a priori decision, and thus no finite set of rules, could be used to determine the state of things before things could run their course.
Not too long after, the mathematician Alan Turing also encountered Gödel’s incompleteness problem whilst attempting to formalize the concepts of algorithm and computation through his famous thought experiment, now known as the Turing machine. In particular, the Turing machine demonstrated that problems that could be decided according to the axiomatic method were computable problems.21 Conversely, those propositions that could not be decided through the axiomatic method would remain incomputable. According to Turing, there could not be a complete computational method in which the manipulation of symbols and the rules governing their use would realize Leibniz’s dream of a mathesis universalis.22
For Turing, the incomputable determined the limit of computation: no finite set of rules could predict in advance whether or not the computation of data would halt at a given moment, or whether it would reach a zero or one state, as established by initial conditions. This halting problem meant that no finite axiom could constitute the model by which future events could be predicted. Hence, the limit of computation was determined by the existence of those infinite real numbers that could not be counted through the axiomatic method posited at the beginning of the computation. In other words, these numbers were composed of infinitely many elements and could not be enumerated by the natural numbers (e.g., 1, 2, 3). From this standpoint, insofar as any axiomatic method was incomplete, so too were the rules of computation. As Turing pointed out, it was mathematically impossible to calculate in advance any particular finite state of computation or its completion.23
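The logic of Turing’s argument can be sketched as a short diagonalization. The code below is a toy rendering of the proof, not of Turing’s machine formalism, and all names in it are invented for the example: whatever verdict a purported halting decider returns, one can construct a program that behaves in the opposite way, so no finite rule can decide halting in advance.

```python
def make_paradox(halts):
    """Build a program that contradicts any purported halting decider.
    (Looping forever is simulated by returning the sentinel 'loops'.)"""
    def paradox():
        if halts(paradox):        # decider says: paradox halts...
            return 'loops'        # ...so paradox loops forever
        else:                     # decider says: paradox loops...
            return 'halts'        # ...so paradox halts immediately
    return paradox

# Whatever a decider claims, the constructed program behaves otherwise:
for claimed in (True, False):
    paradox = make_paradox(lambda prog: claimed)
    actually_halts = (paradox() == 'halts')
    assert actually_halts != claimed   # the decider is always wrong
```

Since both possible answers are refuted, no finite procedure `halts` can exist, which is the sense in which the halting problem marks the limit of the axiomatic method within computation.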
In recent computational theory, a new mode of calculating non-denumerable infinities (i.e., incomputable data) has come to the fore. The algorithmic information theorist Gregory Chaitin calls these probabilities imbued with infinities Omega. Omega is the halting probability of a universal prefix-free, self-delimiting Turing machine,24 which Chaitin demonstrates to be a computably enumerable probability whose information content is nonetheless infinite. In other words, Omega defines the limit of a computable, increasing, converging sequence of rational numbers. At the same time, however, it is also algorithmically random: its binary expansion is an algorithmic random sequence, which is incomputable (or non-compressible into a rational number).25 From this standpoint, the Laplacian mechanical universe of computation dissipates through the self-delimiting power of computation, in which non-denumerable infinities not only defy the determinism of initial conditions, but also point to another idea of order and chaos, involving the inclusive activities of randomness and discreteness in the calculation of infinities. Omega is at once a probability, a discrete cipher, and an incomputable number, or randomness. According to Chaitin, Omega demonstrates the limits of the mechanical view of the universe, according to which chaos or randomness is an error within the formal logic of calculation. On the contrary, such limits do not describe the failure of intelligibility versus the triumph of the incalculable. They more subtly suggest the possibility of a dynamic realm of intelligibility defined by the capacities of incomputable infinities, or randomness, to infect any computable or discrete set.
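Omega’s status as the limit of a computable, increasing sequence of rationals can be sketched as follows. Because the halting behavior of a real universal prefix-free machine is undecidable, the toy machine below is an invented stand-in with a decidable halting rule; what the sketch preserves is only the shape of Chaitin’s enumeration, summing 2^-|p| over every halting self-delimiting program p.

```python
from fractions import Fraction

def toy_machine_halts(program):
    """Invented self-delimiting machine: valid programs are '1'*k + '0',
    and a program 'halts' iff k is even. (For a real universal machine,
    this predicate is exactly what cannot be decided.)"""
    if not program or program[-1] != '0' or '0' in program[:-1]:
        return None                      # not a valid program
    return (len(program) - 1) % 2 == 0

def omega_lower_bound(max_len):
    """Sum 2**-len(p) over halting programs of length <= max_len."""
    total = Fraction(0)
    for n in range(1, max_len + 1):
        for i in range(2 ** n):
            if toy_machine_halts(format(i, f'0{n}b')):
                total += Fraction(1, 2 ** n)
    return total

# The lower bounds increase and converge (for this toy machine, to 2/3).
print(float(omega_lower_bound(9)))   # -> 0.666015625
```

Each successive bound is a rational number and the sequence only ever increases, yet for a genuine universal machine no algorithm can say how close any bound is to the limit: the probability is computably enumerable but its digits remain incomputable.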
In other words, randomness, or the infinite varieties of infinities, is not simply outside the realm of computation but more radically becomes its unconstrainable condition, to the extent that by becoming partially intelligible, randomness also enters order and irreversibly provokes a continuous revision of its rules. It is precisely this new possibility for the revision or transformation of rules, driven by the inclusion of randomness within computation, that reveals dynamics within automated systems and automated thought. This means that whilst Omega suggests that randomness has become intelligible within computation, incomputables cannot be synthesized by an a priori program, theory or set of procedures that is smaller in size than they are.
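The non-compressibility at stake here can be loosely illustrated with a general-purpose compressor standing in for algorithmic complexity (which is itself incomputable): a patterned sequence collapses into a program, or description, much shorter than itself, while a random sequence admits no description shorter than itself.

```python
import os
import zlib

patterned = b'01' * 500          # 1000 bytes generated by an obvious rule
random_data = os.urandom(1000)   # 1000 bytes of system randomness

print(len(zlib.compress(patterned)))     # a few dozen bytes
print(len(zlib.compress(random_data)))   # ~1000 bytes: incompressible
```

The patterned string is "synthesizable" by a procedure smaller than itself; the random string is not, which is the operative sense of randomness as non-compressible data in the argument above.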
According to Chaitin, these discrete states are themselves composed of infinite real numbers that cannot be contained by finite axioms. What is interesting here is that Chaitin’s Omega is at once intelligible yet unsynthesizable by universals or by the subject. I take this to suggest that computation – qua mechanization of thought – is intrinsically conditioned by incomputable data, or that discrete rules are open to contingency. This is not to be understood simply as an error within the system, or a glitch in the coding structure, but rather as a fundamental condition of computation. This is also to say that, far from dismissing computation, it is in the axiomatic nature of computation that incomputable algorithms emerge to defy the superiority of deductive form and inductive fact.
This is the sense in which the timing of affect involves a radical shift in post-cybernetic culture: inasmuch as automated algorithms are entering all logics of modeling (urban infrastructures, media networks, financial trades, epidemics, work flows, weather systems, etc.), so too are their intrinsic incomputable quantities unraveling the problem of contingency in algorithmic automation. As Chaitin hypothesizes, if the program that is used to calculate infinities is no longer based on finite sets of algorithms but on infinite sets (or Omega complexity), then programmability will become a far cry from the algorithmic optimization of infinities actualized by means of probabilities (i.e., already set results). Programming will instead turn into the calculation of complexity by complexity, chaos by chaos: an immanent doubling of infinity, or the infinity of the infinite. If the age of algorithmic automation has also been defined as the age of complexity, it is because the computational power involved in the search space of incomputables has become superior to the algorithmic procedures of optimizing solutions. Contrary to the Laplacian mechanistic universe of pure reason, Chaitin’s pioneering information theory explains how software programs can include randomness from the start. Thus the incompleteness of axiomatic methods does not define the limit of computation but its starting condition, through which new axioms, codes, and sets of instructions are immanently determined.
From this standpoint, contrary to the view that computation is yet another image of thought, I want to take computation as an instance of intelligible functions that are imbued with randomness – infinite varieties of quantities – in order to argue for the incomputable conditions of automated thought. In particular, I want to suggest that A. N. Whitehead’s notion of speculative reason can contribute to dethroning the intelligible from representational theories of thought whilst maintaining that the function of reason is open to revision.26 Specifically, I draw on Whitehead’s notion of prehension, of physical and conceptual prehensions, to re-articulate the existing tension between the sensible and the intelligible, and between affective thought and computation. As much as physical prehension is said to involve the simple capture of what concretely is and becomes in the world, a conceptual prehension is a pure mental operation, referring to the possibility of how “actuality may be definite.” A conceptual prehension in this sense is the abstract, non-cognitive, and non-physical capture of infinities. If, as Whitehead argues, conceptual prehensions bear “no reference to particular actualities, or to any particular world,” the computational thinking implied by the processing of data by computer software could thus easily be conceived as a form of pure conceptual prehension. This means that the automation of thought does not just correspond to mechanized performativity. On the contrary, the algorithmic prehension of numerical data involves the capture of abstract ideas not yet actualized and yet nevertheless real.
Whitehead’s notion of speculative reason is defined by the asymmetrical activities of physical and conceptual prehensions. It supports my suggestion that computational processing is at once a physical and conceptual mode of registering, evaluating and producing data. It is physical in the actual operations of the hardware-software machine, conceptual in the grasp of incomputables (i.e., of what is not and yet may be). This also means that incomputability coincides with uncertainty and with the timing of futurity, or the ingression of potentiality in actuality, or continuity into discrete units. Incomputables therefore are the real abstraction of all algorithmic codes, what transforms their discrete order and makes them into a mode of thought in itself. Here the timing of affect appears to extend towards the nonlinear time of algorithmic intelligibility in which the initial conditions of automated thought become infected with randomness, prehended as infinities within the discrete order of sequences.
The nonlinear time of the intelligible can be precisely conceived in the Whiteheadian terms of speculative reason.27 Whitehead explains that the function of reason28 is to counteract the linear chain of cause and effect based on the return to initial conditions. Similarly, a speculative view of computation may suggest that algorithmic automation implies that each set and subset of instructions is conditioned by what cannot be calculated, the incomputables that burst within discrete units. This means that a notion of speculative reason is not concerned with the prediction of the future through the data of the past, but with incomputable quantities of data that instead transform initial conditions. This is why a notion of speculative algorithms or automation is not to be confused with intelligible forms that derive from a representational theory of thought. Speculative reason is here used to suggest that randomness – non-compressible data – is the unconditional condition for intelligible operations to function. This means that not only the realm of the sensible but also the realm of the intelligible can be understood in non-representational terms, and yet without unifying their non-equivalent activities into a totalizing frame. If the timing of affect explains the primary capacities of a feeling-thought to grasp and process the real before this becomes cognized, then complexity at the core of computation also points out that intelligible operations are not simply mechanical but are inevitably confronted with infinities, revealing the capacity of conceptual prehensions to add new data (to revise and rescript) to the mechanical chain of cause and effect.
Whitehead’s study of the function of reason sits comfortably with neither the formal nor the practical conception of reason, and suggests instead that reason must be re-articulated according to the activity of final causation, and not merely by the law of the efficient cause.29 The final cause of reason explains how conceptual prehensions are not reflections on material causes, but instead add new ideas to the mere inheritance of past facts. Conceptual prehensions are modes of valuation that open the fact of the past to the pressure of the future. Final cause, therefore, does not simply replace efficient cause, but rather defines a speculative tendency intrinsic to the function of reason. Whitehead explains how decisions and the selection of past data become the point at which novelty is added to the situation of the present. In other words, reason is the speculative calculation that defines the purpose of a theory and a practice: to make here and now different from the time and the space that were there before.
It would however be misleading to equate this notion of final cause or purpose with a teleological explanation of the universe, since for Whitehead the function of reason is “progressive and never final.”30 This means that the purpose of reason is to revise and change its premises rather than being determined by rules returning to its initial conditions. While reason does not stem from matter, it is attached to physical decay, entropy and randomness occurring through the infinite layers of matter.31 For Whitehead, speculative reason implies the asymmetrical and non-unified entanglement of efficient and final cause, and must be conceived as a machine of emphasis upon novelty.32 In particular, reason provides the judgment by which novelty passes into realization, into fact.33
It is suggested here that the being of the intelligible found in computation must be reconceived from the standpoint of speculative reason. To do so, computation must be made to confront its indeterminate condition: the incomputables at the core of its closed formal scheme. This means that just as computation has to be rethought in terms of speculative reason, so too must automation be conceived in terms of algorithmic prehensions: the possibility for automation to become infected with infinities exposes complexities at the core of the intelligible. From this standpoint, the timing of affect also involves the indeterminacy of intelligible complexity that lies beyond the digital ground of axiomatics. It marks the moment at which the limit of computation (and of intelligible systems) unleashes the incompressible nature of information into automation.
It would, however, be wrong to greet with naïve enthusiasm the fact that incomputables are central to the realm of the intelligible. Instead, it is important to address algorithmic automation without overlooking the fact that the computation of infinity is the motor of a new capitalization of intelligible capacities. My insistence that incomputables are not exclusively those non-representable infinities that belong to the being of the sensible, expressed by the affective capacities of a body to produce new thought, but that they also reveal the dynamic nature of the intelligible, is indeed a concern with the ontological and epistemological transformation of thought in view of the algorithmic functions of reason. This concern, however, is not an appeal to an ultimate computational being determining the truth of thought. On the contrary, I have turned to Chaitin’s discovery of Omega because it radically undoes the axiomatic ground of truth by revealing that computation is an incomplete affair, open to the revision of its initial conditions. Since Omega is at once a discrete and an infinite probability, it testifies to the fact that the initial condition of a simulation – based on discrete steps – is and can be infinite. The incomputable algorithms discovered by Chaitin therefore suggest that the complexity of real numbers defies the grounding of reason in finite axiomatics.
Besides the critique of representational thought intrinsic to the debate on the timing of affect (e.g., the superiority of the sensible over the intelligible), the understanding of computation in terms of speculative reason shows that there is another possibility of dethroning thought from pure ideas, subjectivity and cognition: exposing incomputables at the core of automation and the primacy of complexity in algorithmic infrastructure. To conclude, I have suggested that automated modes of thought cannot be subsumed under a totalizing framework of representation. I have argued that automated thought may be conceived in terms of prehensions, whereby to prehend means to select and evaluate data, make decisions and generate new solutions. This involves not only the physical but also the conceptual prehensions of data: the capacity of rule-based processes to add new data and to change the initial conditions of data processing. To conceive of the realm of the intelligible in terms of speculative reason is thus to pose the question of whether automated algorithms can become critical and thus able to prehend their own final cause. Whether this is already the case or not, it is hard to dismiss the possibility that the automation of thought has exceeded representation and has instead revealed that the timing of computation is now driven by uncertainty.
1 Gilles Deleuze, Cinema 2: The Time-Image (London: The Athlone Press, 1989), p. 265.
2 Elysium is an American science fiction action-thriller film written, co-produced and directed by Neill Blomkamp (released on August 9, 2013).
3 Gilles Deleuze, Difference and Repetition (London: The Athlone Press, 1994), p. 129–134.
4 Gilles Deleuze and Felix Guattari, What is Philosophy? (London: Verso, 1994), p. 11, 128 and 138.
5 Andy Clark and David Chalmers, “The Extended Mind,” Analysis 58 (1998): p. 7–19.
6 Alfred N. Whitehead, Process and Reality (New York: The Free Press, 1978), p. 22–24.
7 Deleuze, Difference and Repetition, p. 131.
8 Ibid., p. 133–134.
9 Ibid., p. 137–138. Deleuze describes the “I think” of representation as subtending the source and the unities of all these faculties: “I conceive, I judge, I imagine, I remember and I perceive – as though they were the four branches of Cogito.”
10 Ibid., p. 139.
11 Ibid., p. 143.
12 Deleuze specifies: “[N]ot qualitative opposition within the sensible, but an element which is in itself difference, and creates at once both the quality in the sensible and the transcendent exercise within sensibility. This element is intensity, understood as pure difference in itself, as that which is at once both imperceptible for empirical sensibility which grasps intensity only already covered or mediated by the quality to which it gives rise, and at the same time that which can be perceived only from a point of view of a transcendental sensibility which apprehends it immediately in the encounter.” Ibid., p. 144.
13 Ibid., p. 145.
14 There are different theories of affect, each with a different legacy and, more importantly here, specific ontological implications. In this article, I am not concerned with giving an overview of these theories or with discussing their non-matching ontological implications. Instead, I shall draw specifically on the theory of affect as developed by Gilles Deleuze and his re-articulation of this concept as derived from Baruch Spinoza and Henri Bergson. I want to point out that Deleuze’s re-articulation of the notion of affect involves both the physical capacity to impact on something, i.e. force, and the abstract capacity to undermine what is perceived and cognized as image. In this context, Deleuze’s critique of the image of thought, against the already posed image or representation, can also be unpacked through the notion of affect.
15 “Thunderbolts explode between different intensities, but they are preceded by an invisible imperceptible dark precursor, which determined their path in advance but in reverse, as though intagliated,” Deleuze, Difference and Repetition, p. 119.
16 Massumi rigorously elaborates this theory here. See: Brian Massumi, “The Autonomy of Affect,” Cultural Critique 31 (Autumn 1995): p. 83–109.
17 Compare: Anna Munster, An Aesthesia of Networks: Conjunctive Experience in Art and Technologies (Cambridge, Mass.: MIT, 2013), p. 126–130.
18 Brian Massumi, “On the Superiority of the Analog,” in: idem, Parables for the Virtual: Movement, Affect, Sensation (Durham: Duke University Press, 2002), p. 133–143.
19 Francis Bailly and Giuseppe Longo, “Randomness and Determination in the Interplay between the Continuum and the Discrete,” Mathematical Structures in Computer Science 17.2 (2007): p. 289–307; available in pdf at http://www.di.ens.fr/longo/download.html (retrieved September 15, 2013).
20 David Hilbert, “The New Grounding of Mathematics: First Report,” in: W. B. Ewald, ed., From Kant to Hilbert: A Source Book in the Foundations of Mathematics, Vol. 2 (New York: Oxford University Press, 1996), p. 1115–1133; Rebecca Goldstein, Incompleteness: The Proof and Paradox of Kurt Gödel (New York: W. W. Norton & Company, 2005); Kurt Gödel, “Some Basic Theorems on the Foundations of Mathematics and their Implications,” in: Collected Works, Vol. III, ed. Solomon Feferman et al. (Oxford: Oxford University Press, 1995), p. 304–323.
21 Alan M. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, 2nd Series, 42 (1936): p. 230–265; for further discussion of the intersections between the works of Hilbert, Gödel and Turing, see also: Martin Davis, The Universal Computer: The Road from Leibniz to Turing (New York/London: W. W. Norton & Company, 2000), p. 83–176.
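The limit Turing identified can be made concrete in a short sketch. The following Python fragment is a toy illustration of the diagonal argument, not Turing’s own construction: whatever verdict a candidate halting decider gives about the diagonal program built from it, that program behaves in the opposite way.

```python
def make_diagonal(halts):
    """Given any claimed total halting decider halts(f) for zero-argument
    functions, build a function that defeats it on itself."""
    def diagonal():
        if halts(diagonal):
            # The decider says "diagonal halts" -> loop forever.
            while True:
                pass
        # The decider says "diagonal loops" -> halt immediately.
    return diagonal

# Whatever a candidate decider answers about its own diagonal is wrong:
d_yes = make_diagonal(lambda f: True)   # claimed to halt; would loop forever
d_no = make_diagonal(lambda f: False)   # claimed to loop forever ...
print(d_no())  # ... yet it returns immediately
```

Since the contradiction holds for every candidate decider, no total `halts` can exist; the sketch only exhibits the two trivial candidates.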
22 Mathesis universalis defines a universal science modeled on mathematics and supported by the calculus ratiocinator, a universal calculation described by Leibniz as a universal conceptual language. See: Norbert Wiener, Cybernetics: or Control and Communication in the Animal and the Machine (Cambridge, Mass.: The MIT Press, 1965), p. 12.
23 A clearer explanation of the implications of Gödel’s incompleteness theorem and of Turing’s emphasis on the limit of computation can be found in: Gregory Chaitin, Meta Maths: The Quest for Omega (London: Atlantic Books, 2006), p. 29–32.
24 The definition of the halting probability is based on the existence of prefix-free universal computable functions, defining a programming language with the property that no valid program can be obtained as a proper extension of another valid program. In other words, prefix-free codes are defined as random or uncompressible information. See: ibid., p. 130–131 and p. 57.
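The definition of the halting probability can be illustrated with a toy model. The sketch below uses a hypothetical prefix-free language and a made-up halting rule of my own (not Chaitin’s universal machine): it sums 2^(-|p|) over the halting programs p. For the real Ω the halting predicate is undecidable, which is precisely what makes the number incomputable; here it is decidable by construction, so the sum can actually be evaluated.

```python
from fractions import Fraction

def programs(max_len):
    """A toy prefix-free language: the valid programs are '1', '01',
    '001', ... (n zeros closed by a single '1'), so no valid program
    is a proper extension of another."""
    return ["0" * n + "1" for n in range(max_len)]

def halts(p):
    """Toy halting rule, decidable by construction (unlike the halting
    predicate of a real universal machine): 0^n 1 'halts' iff n is even."""
    return (len(p) - 1) % 2 == 0

def omega_lower_bound(max_len):
    """Partial sum of the halting probability:
    Omega = sum of 2^(-|p|) over all halting programs p."""
    return sum(Fraction(1, 2 ** len(p)) for p in programs(max_len) if halts(p))

print(float(omega_lower_bound(20)))  # converges towards 2/3 from below
```

For this toy machine the limit is the geometric series 1/2 + 1/8 + 1/32 + … = 2/3; for the real Ω no such closed form, and no algorithm, can produce the digits.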
25 Chaitin writes: “Given any finite program, no matter how many billions of bits long, we have an infinite number of bits that the program cannot compute. Given any finite set of axioms, we have an infinite number of truths that are unprovable in that system. Because Ω is irreducible, we can immediately conclude that a theory of everything for all of mathematics cannot exist. An infinite number of bits of Ω constitute mathematical facts (whether each bit is a 0 or a 1) that cannot be derived from any principles simpler than the string of bits itself. Mathematics therefore has infinite complexity,” Gregory Chaitin, “The Limits of Reason,” Scientific American 294/3 (March 2006): p. 74–81, here p. 79.
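The claim that some facts cannot be derived from anything simpler than themselves rests on a simple counting argument, which can be checked mechanically. The fragment below is a generic illustration of that argument, not a passage from Chaitin’s text: there are 2^n binary strings of length n, but only 2^n − 1 binary programs shorter than n, so at every length at least one string is incompressible.

```python
def strings_of_length(n):
    """Number of distinct binary strings of length n."""
    return 2 ** n

def programs_shorter_than(n):
    """Number of binary strings of length 0, 1, ..., n-1 that could
    serve as shorter descriptions: 2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1."""
    return sum(2 ** k for k in range(n))

# At every length, candidate descriptions run out before strings do,
# so at least one n-bit string has no description shorter than itself.
for n in range(1, 16):
    assert programs_shorter_than(n) == strings_of_length(n) - 1
```

The same pigeonhole reasoning, sharpened, shows that the overwhelming majority of strings cannot be compressed by more than a few bits.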
26 Alfred N. Whitehead, The Function of Reason (Boston: Beacon Press, 1929), p. 65–72.
27 “Speculative reason is its own dominant interest, and is not deflected by motives derived from other dominant interests which it may be promoting,” ibid., p. 38.
28 Whitehead explains that there are at least two functions of reason. On the one hand, reason is one of the operations constituting living organisms in general: a factor within the totality of life processes, whose function is to counteract the slow entropic decay of organic entities. On the other hand, however, reason defines only an activity of theoretical insight, autonomous from any physiological process and from general processes in nature. Here, reason is “the operation of theoretical realization,” ibid., p. 9.
29 Whitehead’s efficient cause and final cause can be understood through two modes of prehension, causal efficacy and presentational immediacy, which parallel the distinction between the physical and mental poles of an entity. Efficient causes, therefore, describe the physical chain of continuous causes and effects, whereby the past is inherited by the present. This means that any entity is somehow caused and affected by its inheritable past. It is the mental pole of any actual entity – the conceptual prehensions that do not necessarily involve consciousness – that explains how efficient cause is supplemented by final cause. For Whitehead, a final cause describes how an actual entity is marked by its own sufficient reason: a conceptual prehension of the data that have nested in its unrepeatable eventuation, driven by the final trajectory towards the exhaustion of its own potentials, which involves a counter-current moving against the efficient repetition of the data of the past. See: Alfred N. Whitehead, Process and Reality (New York: The Free Press, 1978). On “efficient cause,” p. 237–238; on “final cause,” p. 241; on “the transition from efficient to final cause,” p. 210.
30 Ibid., p. 9.
31 Similarly, purpose in reason does not have to be attributed exclusively to higher forms of intelligence. For Whitehead, all entities, lower and higher, have purpose. The essence of reason in the lower forms entails a judgment upon flashes of novelty that is defined by conceptual appetition (a conceptual lure, a tendency of thought upward) and not by action (reflexes or sensorimotor responses). However, according to Whitehead, stabilized life has no proper room for reason or counter-agency, since it simply engages in patterns of repetition.
32 From this standpoint, Whitehead attributes reason to higher forms of biological life, where reason substitutes for action. Reason is not a mere organ of response to external stimuli, but rather an organ of emphasis, able to abstract novelty from repetition.
33 Ibid., p. 20.
Affect, or the process by which emotions come to be embodied, is a burgeoning area of interest in both the humanities and the sciences. For »Timing of Affect«, Marie-Luise Angerer, Bernd Bösel, and Michaela Ott have assembled leading scholars to explore the temporal aspects of affect through the perspectives of philosophy, music, film, media, and art, as well as technology and neurology. The contributions address possibilities for affect as a capacity of the body; as an anthropological inscription; as a primary, ontological, conjunctive and disjunctive process; as an interruption of chains of stimulus and response; and as an arena within cultural history for political, media, and psychopharmacological interventions. Showing how these and other temporal aspects of affect are articulated both throughout history and in contemporary society, the editors then explore the implications for the current knowledge structures surrounding affect today.