[Ethan Ham / Anthroptic / 2007]
There is an analogous relationship between technological translations of data from one type to another and synaesthetic responses: the transcoding of electromagnetic telemetry by Dr. Donald Gurnett is a striking and direct example of this type of sonification of non-sound data. It is also, in many ways, a non-significant transfer: the data in question are readings of wave-form encounters. The electromagnetic information produced by the Cassini mission, among others, has a long-recognized analogous relationship to sound, so the transfer from light waves to sound waves should come as no surprise; each is a physical phenomenon, and the transfer between them is less dramatic than the cross-modal sensory transfers familiar from synaesthesia.
Sonification is not a synaesthetic process; it is either an adaptation of one wave-form description to another (light to sound) or a technical translation following an arbitrary series of parameters. Both cases, while they can generate non-trivial effects, are entirely different from the biological phenomenon of cross-modal processing known as synaesthesia. The assumption that dominates discussions of aesthetic synaesthesia in relation to technical apparatus is that an analogous transfer, following a human-determined and controlled paradigm, results in a technical approximation of synaesthesia. For non-digital systems, such as those devised throughout the twentieth century by inventors such as Thomas Wilfred or Mary Hallock-Greenewalt, alternatives to this assumption were not available; generative and expert systems suggest the limits of this assumption.
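The parameter-driven character of sonification can be made concrete in a minimal sketch. The scale factor, sample rate, and tone length below are hypothetical choices, not drawn from Gurnett's actual transcoding process; nothing in the data dictates them, which is precisely the point the argument makes.

```python
import math

def sonify(frequencies_hz, scale=1e-12, sample_rate=8000, seconds=0.25):
    """Map non-audible wave frequencies into the audible range by an
    arbitrary scale factor, then render each as a short sine tone.
    The scale factor is a human-determined parameter: the mapping is
    controlled, not a spontaneous cross-modal event."""
    samples = []
    for f in frequencies_hz:
        audible = f * scale  # e.g. 4.3e14 Hz (visible light) -> 430 Hz
        for n in range(int(sample_rate * seconds)):
            samples.append(math.sin(2 * math.pi * audible * n / sample_rate))
    return samples

# Visible-light-like frequencies (~10^14 Hz) scaled into audible tones.
tones = sonify([4.3e14, 5.5e14, 7.5e14])
```

Changing the scale factor yields an entirely different "sound of light": the output reflects the chosen paradigm, not the data's own cross-modal character.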
A more intriguing potential for transcoding lies within digital technology, and there may be a more fruitful comparison to be made with this model: whatever the initial input into the digital system, once it has been processed (digitized) all inputs become equivalent. While the human sensory apparatus is not a digital or digitizing system, all sensory inputs are translated into the same medium, nerve impulses, and are interpreted into the dynamic reality we experience by different "modules" of the brain. Synaesthesia appears within this organic model as the interpretation of one sense's inputs by a different "module" than typically interprets those inputs. The occasional mis-attribution of digital files and their consequent mis-interpretation through the "wrong" software is thus a logical analogy for synaesthesia within a technical matrix, a technesthesia.
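The misattribution described here can be sketched directly: any byte stream, whatever it originally encoded, can be wrapped and decoded as audio. This is an illustrative toy, not a reconstruction of any particular misattribution event; the payload and the 8-bit PCM framing are assumptions for the example.

```python
import io
import wave

def misread_as_audio(raw_bytes, sample_rate=8000):
    """Decode an arbitrary byte stream as if it were unsigned 8-bit
    PCM audio. The bytes themselves remain intact and unaltered; only
    the interpretative paradigm applied to them is 'wrong'."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)            # mono
        w.setsampwidth(1)            # one byte per sample (8-bit PCM)
        w.setframerate(sample_rate)
        w.writeframes(raw_bytes)     # the file's bytes, untouched
    return buf.getvalue()

# Any file's bytes -- here a stand-in text payload -- become "sound".
wav_bytes = misread_as_audio(b"This is not audio data, but it will play.")
```

Note that the original data survives the transfer byte-for-byte: as the essay argues, the file is intact, and only the decoding paradigm has been confused.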
Organic cross-modal experiences are autonomous, functioning outside of direct conscious control, the associations generated in the processing being specific to the particular individual's synaesthetic experience. These experiences arise spontaneously from the interpretative dimensions of consciousness responsible for taking sensory experience and transforming it into the world we interact with. Recent projects with facial recognition, such as Ethan Ham's Anthroptic, where software meant for facial recognition is tweaked to produce irrational, non-face results, suggest the potential for non-trivial overlaps between digital interpretative systems and our own organic interpretations.
However, to assume a literal overlap between the human synaesthetic experience and the technesthetic is to make a fundamental category mistake. The apparent results of such "errors" do not necessarily resemble the human sensory experience; however, they do reveal something similar: underlying and inaccessible structures of the interpretative apparatus itself. With the human mind, what appears are aspects of the synaesthetic individual's idiosyncratic interpretative apparatus and how that normally inaccessible apparatus functions; with a machine, what is revealed are some of the assumptions ingrained in the design and programming of the software itself.
It might be tempting to identify the technical synaesthetic effect as a "glitch," but doing so would be an error. The mechanical misinterpretation in question is not a result of an error in the coding, but rather a result of misattribution, a confusion of type rather than a mistake in execution. The digital file in question is intact; it is merely being decoded following an incorrect interpretative paradigm, akin to the errors that appear as a result of incompatibilities between different versions of the same program.
Sound-to-image visualizers and image-to-sound translators do not qualify as examples of technesthesia. The audible translation of an image following a predetermined (or user-mappable) set of parameters is nothing more than an attempt to recreate synaesthesia following a set paradigm; the autonomous translation of a QuickTime movie into a sound file is, in contrast, an example of technesthesia. It is autonomous and uncontrollable, and it follows the underlying structure of otherwise normally functioning software. It is a special case of misattribution within a normally functioning system, not the normal function of a program designed to produce some specific transcoding. This abnormality of function in an otherwise normally functioning system is the key to the analogical relationship to synaesthesia: synaesthetes (typically) are not crippled by their condition, nor are these experiences the results of some special intervention against otherwise normal function. A technical analogue to the organic experience must meet the same base-line criteria of exceptional experience within otherwise normal function.
The custom creation of "patches" or codecs to enable these transfers, so long as they do not destabilize normal functions, would, however, meet the base-line criteria for a technesthetic "player" or software program.
Yet it might be more conceptually fruitful not to look for a "visualizer" but rather to follow the model implied by simulations of human abilities such as facial recognition. In such a model, the human capability is modeled as a series of rules, and the results either match our experience or deviate from it: this deviation is the "false positive" such programs can generate. These "false positives" can be thought of not as failures, but rather as the discovery of a technical "fantasy" within the digital system: non-conscious, but redolent of human failings. A technesthetic result might initially appear as such a technical failure, rather than as a positive effect to be explored and examined. It is precisely this rupture from responses mirroring the human that makes technesthesia difficult to differentiate from glitch: it, too, is the result of a mistake in processing, but rather than being a break-down, it is a confusion of type, and as such it is repeatable, stable, and consistent in its effects, the opposite of the prominent features of glitches, their transitory appearance and instability. A glitch may not be repeatable, but technesthesia is.