Technical Research | Spatial Functions

Spatial Functions

With the emergence of interactive and dynamic architecture, occupants are no longer the only possible dynamic elements within a space. The human systems model established in the previous chapter (2.3) may have been adequate for visualizing conventional static spaces, but it does not suffice for dynamic spaces, where elements of the space can also change position or level of attraction. These dynamics must therefore be accounted for and translated into a logic that the machine can understand. As such, much like for human behaviors in the last chapter (2.3), the goal of this chapter (2.4) is to unpack the methodologies for simulating these dynamic spaces and how they can function within the virtual space.

As mentioned in Chapter 1.1, dynamic spaces can utilize both simple passive elements such as water and sand features, as well as more complex computational elements such as elevators, motorized louvers, and sensors. While these spaces do not necessarily require computation and mechatronics to be dynamic, the integration of such technologies greatly increases the possibilities of spatial utilities within the space. These systems can incorporate many elements with sensors and motors to create features that respond to natural forces and occupants, or operate on repetitive patterns. These features can range from projections on walls to motorized doors to rising floors and ceilings, or anything else that designers can dream of, all of which have the potential to communicate with one another, and at a multitude of scales ranging from a single system embedded within a room to networks of subsystems that can span entire cities.

The typology of these elements can be very diverse, and is becoming even more so with the addition of data-driven computational systems within infrastructure. The problem with this intrinsic diversity is that it complicates simulating these objects, as the variety of functions makes it challenging to develop a singular system that works for every scenario. Since each object can potentially have a different function, the software that controls it in the physical world would need to be customized and tailored to each specific instance, meaning that this logic would then need to be recreated within the simulation model depending on the functionality and utility of the object. As such, one cannot use the same code for every element, but instead must create a generic model and tailor it to the functionality of the object, the typology of the space, and the specific scenario of the simulation. While this makes it more challenging to find a universal system that works for every scenario, one can overcome this by decomposing the process into parts. Much like the process of establishing the human systems in the last chapter (2.3), one can create a framework where functionalities can be added depending on the typology and functionality of the space and its elements.
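The framework described above can be sketched in code. The following is a minimal, hypothetical illustration (the names, parameters, and behavior function are my own assumptions, not a definitive implementation): a generic dynamic element carries shared state (position, level of attraction), while scenario-specific behaviors are plugged in as functions rather than hard-coded.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch: a generic dynamic element whose logic is composed
# from pluggable behaviors, tailored per object typology and scenario.
@dataclass
class DynamicElement:
    name: str
    position: float = 0.0     # e.g. height of a motorized floor panel
    attraction: float = 0.0   # level of attraction for nearby occupants
    behaviors: List[Callable[["DynamicElement", Dict], None]] = field(
        default_factory=list
    )

    def add_behavior(self, fn: Callable[["DynamicElement", Dict], None]) -> None:
        """Tailor the generic model to a specific typology or scenario."""
        self.behaviors.append(fn)

    def step(self, context: Dict) -> None:
        """Advance one simulation tick; each behavior reads sensor data
        from `context` and updates the element's state."""
        for fn in self.behaviors:
            fn(self, context)

# Example behavior (assumed, for illustration): a motorized floor panel
# that rises while occupants are detected nearby, and lowers otherwise.
def rise_when_occupied(el: DynamicElement, ctx: Dict) -> None:
    if ctx.get("occupants_nearby", 0) > 0:
        el.position = min(el.position + 0.1, 1.0)
    else:
        el.position = max(el.position - 0.1, 0.0)

floor = DynamicElement("floor_panel_A")
floor.add_behavior(rise_when_occupied)
floor.step({"occupants_nearby": 2})
print(floor.position)  # 0.1
```

The same `DynamicElement` could instead be given a repetitive-pattern behavior (e.g. louvers cycling on a timer) without changing the base model, which is the point of separating the generic framework from the tailored functionality.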

A basic understanding of these systems, specifically how they work and what the resulting spaces may entail, is required to create this framework. In Mike Crang and Stephen Graham's article, "Sentient Cities: Ambient Intelligence and the Politics of Urban Space," they describe three different typologies in which an urban environment can become automated through ubiquitous computing systems.[1]

Augmenting space relies on the fact that the existing environment has already been saturated with information. Computing systems can utilize this physical information using sensors and tracking to overlay new digital media on top of the existing structures. This allows users to see both the physical world and the dynamic graphical information of the virtual world. This produces a reactive environment where emphasis is placed on the user's activity.

Enacting space relies on the fact that computation inhabits everything around us, ranging from the things we carry on our bodies, to the cars on the streets, to the infrastructures of our cities. Unlike augmenting space, which emphasizes the user's activity, this approach further utilizes intermediary processes, reallocating agency back to the environment. This allows the computer system to make suggestions through spatial interaction or the display of data.

Transducing space relies on the digitalization and identification of people, where the layering and cross-referencing of identities allow the system to form a technological consciousness through the automation of data without cognitive inputs. This type of space can recognize its occupants and carry out autonomous tasks, but at the cost of user awareness and user agency.

It is evident that these approaches all utilize data in some form. Environments have always been saturated with information in the form of signage and the presence of occupants and objects, but only recently has technology begun to utilize this information by converting it into digital data with various sensors, cameras, and machine vision. This analog-to-digital translation allows us to blur the boundary between the physical and the virtual, which not only provides greater flexibility in data utilization but also allows us to redefine spatial functionalities by creating homogeneous spaces of technological integration. By manipulating this data in different ways, it is possible to influence the distribution of agency within the space, which can in turn affect spatial operation in unforeseen ways.
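The analog-to-digital translation described above can be sketched minimally. In this hypothetical example (the threshold, gain, and function names are assumptions for illustration), raw sensor readings are digitized into occupancy signals, which then feed back into an element's level of attraction in the simulation:

```python
from typing import List

def digitize(reading: float, threshold: float = 0.5) -> int:
    """Convert a raw (analog) sensor value, e.g. from a camera or
    pressure sensor, into a binary occupancy signal."""
    return 1 if reading >= threshold else 0

def update_attraction(attraction: float, signals: List[int],
                      gain: float = 0.2) -> float:
    """More detected occupants -> the element draws more attention;
    the level of attraction is clamped to [0, 1]."""
    return max(0.0, min(1.0, attraction + gain * sum(signals)))

readings = [0.8, 0.3, 0.9]                 # raw analog sensor values
signals = [digitize(r) for r in readings]  # -> [1, 0, 1]
print(update_attraction(0.1, signals))     # 0.5
```

Routing these digital signals differently, toward the user's display, toward environmental suggestions, or toward autonomous action, is one concrete way the distribution of agency within the space can be manipulated.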

1 Mike Crang and Stephen Graham, “Sentient Cities: Ambient Intelligence and the Politics of Urban Space,” Information, Communication & Society 10, no. 6 (2007): 792–794, https://doi.org/10.1080/13691180701750991.