By understanding the various levels of activity present within these dynamic spaces, one can begin to recognize the different ways these objects can function, not only individually but also together within a space of distributed intelligence. With this, all the required concepts of this framework are now established, and it becomes possible to logically deduce the best way of simulating these spaces.
A motorized window louver, for example, can operate by time of day, where a rotation value is set by a specific time variable; by environmental temperature, where the louver communicates with other coded infrastructures such as temperature sensors to acquire a temperature variable; or by occupancy number, where the number of people within the space can be determined either by coded infrastructure such as proximity sensors, or by coded objects such as the cellphones the occupants are carrying. In all three of these scenarios, the louver operates at the coded infrastructure level but has the option of utilizing different forms of coded processes, as well as different forms of input, to achieve a similar result. The dynamics of these louvers can arise as a pre-defined action or as a function of human and environmental interaction, where the output action can range from simply defining a rotational degree variable to adjusting deployment percentages, opacity, and any other attributes, depending on the typology of the louver.
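As a minimal sketch of the louver logic just described, the example below shows how the same output attribute can be reached through three different input paths. The names (Louver, set_rotation, the update_by_* methods) and the threshold values are hypothetical placeholders, not a reference to any particular implementation.

```python
from datetime import datetime

class Louver:
    """Hypothetical coded-infrastructure object with one output attribute."""

    def __init__(self):
        self.rotation_deg = 0.0  # output: blade rotation in degrees

    def set_rotation(self, degrees: float) -> None:
        self.rotation_deg = max(0.0, min(90.0, degrees))

    # Input path 1: a pre-defined action keyed to the time of day.
    def update_by_time(self, now: datetime) -> None:
        self.set_rotation(90.0 if 10 <= now.hour < 16 else 30.0)

    # Input path 2: a reading passed from other coded infrastructure
    # (e.g. a temperature sensor).
    def update_by_temperature(self, temp_c: float) -> None:
        self.set_rotation(90.0 if temp_c > 24.0 else 15.0)

    # Input path 3: an occupancy count derived from proximity sensors
    # or from coded objects such as the occupants' phones.
    def update_by_occupancy(self, occupants: int) -> None:
        self.set_rotation(min(90.0, occupants * 10.0))

louver = Louver()
louver.update_by_temperature(26.5)
print(louver.rotation_deg)  # 90.0 (same actuator, different input path)
```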
This example is just one possibility, but in reality the typology of these objects can be limitless, ranging from lights, to projections, to mechatronics, to furniture, all of which may react to sound, temperature, ambient lighting, occupancy numbers and identities, or be simply pre-programmed from a pattern or noise. As already discussed, this diversity poses a challenge for developing a singular method for simulating these objects, but fortunately, unlike human crowds, where one must translate from analog behaviors to digital functions, these dynamic objects already operate on a code-based hierarchy.
This digital-to-digital translation makes this a much simpler process than establishing the human systems of the previous chapter (2.3). There is no longer a need to interpret analog behaviors to create a new system; instead, one can directly replicate the digital logic of the software that controls the dynamic objects in the physical world. Fox remarks on this by stating, “The sensors and robotic components are now both affordable and simple enough for the design community to access; and all of the parts can be easily connected to each other. Designing interactive architecture in particular is not inventing so much as understanding what technology exists and extrapolating from it to suit an architectural vision.”[5] What this means is that as long as one has a rough understanding of how these objects operate in the physical world, has considered the different urban typologies that may arise, and understands how these objects might be connected within a larger system through software, then one can simulate these objects by establishing their input, their processing, and their action. In essence, simulating these objects is less about developing a specific algorithmic model and more about developing a methodology to understand and simplify these objects into their fundamental qualities.
5 Fox, Interactive Architecture: Adaptive World, 12.
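The sketch below shows one way this input, processing, and action decomposition could be expressed in code. The CodedObject wrapper and its callback names are assumptions introduced purely for illustration and are not part of any existing framework or library.

```python
from typing import Any, Callable, Dict

class CodedObject:
    """Hypothetical wrapper pairing an input source, a process, and an action."""

    def __init__(self,
                 read_inputs: Callable[[], Dict[str, Any]],
                 process: Callable[[Dict[str, Any]], Dict[str, Any]],
                 act: Callable[[Dict[str, Any]], None]):
        self.read_inputs = read_inputs  # sensors, clocks, or other coded objects
        self.process = process          # direct translation or intermediary logic
        self.act = act                  # writes the result to the object's attributes

    def step(self) -> None:
        self.act(self.process(self.read_inputs()))

# Example: a light whose brightness follows a simulated occupancy count.
state = {"brightness": 0.0}
light = CodedObject(
    read_inputs=lambda: {"occupants": 4},
    process=lambda i: {"brightness": min(1.0, i["occupants"] / 10.0)},
    act=lambda out: state.update(out),
)
light.step()
print(state)  # {'brightness': 0.4}
```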
Therefore, when creating such an object, the steps can be as follows (a short sketch applying them is given after the list):
1. Determine its functionality, whether it is a louver, a light, or a piece of furniture.
2. Assign attributes to the object that define what it can do. A light might have an RGB variable as well as a brightness variable. A louver might have a rotational degree variable, along with deployment percentage, opacity, and any other attributes depending on its typology. A reactive mechatronic sculpture might have a position variable along with lighting RGB and brightness, as well as a sensor that tracks the number of nearby people.
3. Consider its input, whether it is a human intervention such as a touch input from a touchscreen, a proximity sensor, or sound; an environmental intervention such as temperature, daylighting, or air pressure; or a predefined algorithm such as a written message or a generated pattern.
4. Consider its processing in relation to its attributes, whether it is a direct translation, such as simply displaying the temperature on a medium, or whether it utilizes intermediary processes, such as generating a color and a location based on the number of occupants within a set space, or creating various profiles based on user identities within the building.
5. Consider its action/output, whether it is simply an object that can be moved, or whether it lights up from a source, projects onto a wall, moves within a space, deforms from interaction, or does all of these and more.
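As a rough sketch of how these steps might come together, the example below applies them to the reactive mechatronic sculpture mentioned in step 2. Every name (ReactiveSculpture, update, nearby_people) and every numeric mapping is a hypothetical placeholder for whatever the simulated object would actually expose.

```python
import random

class ReactiveSculpture:
    """Hypothetical reactive mechatronic sculpture from step 2."""

    def __init__(self):
        # Steps 1 and 2: functionality and attributes.
        self.position = (0.0, 0.0, 0.0)
        self.rgb = (255, 255, 255)
        self.brightness = 0.0

    def update(self, nearby_people: int) -> None:
        # Step 3: input, an occupancy count from a simulated proximity sensor.
        # Step 4: processing, an intermediary mapping from count to color and
        # intensity rather than a direct translation.
        self.brightness = min(1.0, nearby_people / 20.0)
        self.rgb = (255, max(0, 255 - nearby_people * 10), 100)
        # Step 5: action/output, the sculpture shifts within the space
        # (simplified here to a small random offset).
        x, y, z = self.position
        self.position = (x + random.uniform(-0.1, 0.1),
                         y + random.uniform(-0.1, 0.1),
                         z)

sculpture = ReactiveSculpture()
sculpture.update(nearby_people=8)
print(sculpture.brightness, sculpture.rgb, sculpture.position)
```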
This same methodology can also be utilized to simulate some of the simpler elements mentioned in Chapter 1.1, such as a water fountain. Undisturbed, the fountain has a pre-defined function that determines the state of its water texture within the environment. Upon a touch input from an occupant, however, it utilizes a function to generate ripples, which it outputs at the location of the touch.
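A comparable sketch of the fountain follows, assuming a simulated touch event that carries a two-dimensional location. The Fountain and Ripple names, the idle texture function, and the expansion rate are all illustrative placeholders.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Ripple:
    origin: Tuple[float, float]  # where the touch input landed
    radius: float = 0.0

@dataclass
class Fountain:
    ripples: List[Ripple] = field(default_factory=list)

    def idle(self, time: float) -> float:
        # Pre-defined function driving the undisturbed water texture.
        return 0.05 * (time % 1.0)

    def touch(self, location: Tuple[float, float]) -> None:
        # Input: an occupant's touch; processing: generate a ripple;
        # output: the ripple is placed at the touch location.
        self.ripples.append(Ripple(origin=location))

    def step(self, dt: float) -> None:
        for ripple in self.ripples:
            ripple.radius += dt * 0.5  # ripples expand over time

fountain = Fountain()
print(fountain.idle(0.5))   # undisturbed water state
fountain.touch((1.2, 0.4))
fountain.step(0.1)
print(fountain.ripples)     # [Ripple(origin=(1.2, 0.4), radius=0.05)]
```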