Harman's concept of “tool-being” invites us to refresh our understanding of human experience through the ubiquitous objects in our quotidian landscape, including consumer-level digital products that are black-boxed machines. In part 4 of the artists' writings series on “tool-being,” Longman Luk looks inside a consumer-level 360 panoramic camera, hacking its customized functions in order to test its operational limits. (Editor)
Longman Luk: A spinning 360 camera: FPS vs RPM (3m30s) | viewable on site at Floating Projects 2022.12.27-2023.01.11
Consumer-level 360 cameras typically stitch together images captured by two fisheye lenses. The footage can be reframed into a fixed aspect ratio such as 16:9 by assigning a viewing angle. With the internal microphone, some cameras may even reconstruct a somewhat believable binaural sound field. What happens if I ask the camera to lock onto a specific angle while spinning it with a machine? Under what circumstances will it generate a stable audio/visual image? If it succeeds in highlighting some physical/computational artefacts, can I further exploit them?
Video: 3K, 100 fps
Sound: internal microphone
Spinning rates: 8 levels; exact values uncertain.
Note: Changes of fan speed during the footage are indicated by the container wobbling more or less, the remote controller's beeps, and the shifting pitch of the motor.
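The question of when the spinning camera yields a stable image can be reasoned about as a sampling problem: the view appears frozen whenever the camera completes a whole number of revolutions between consecutive frames (the familiar wagon-wheel effect). A minimal sketch of this arithmetic, assuming the 100 fps of the footage; the specific RPM values below are hypothetical, since the fan's actual speed levels are uncertain:

```python
FPS = 100  # frames per second, as in the recorded footage

def apparent_step_deg(rpm, fps=FPS):
    """Degrees the camera appears to advance between consecutive frames.

    Folded into (-180, 180]: values near 0 look stable (wagon-wheel effect),
    values near 180 look like violent back-and-forth flipping.
    """
    deg_per_frame = rpm * 360.0 / 60.0 / fps  # true rotation per frame
    folded = deg_per_frame % 360.0
    return folded - 360.0 if folded > 180.0 else folded

# At 100 fps, one full revolution per frame corresponds to 6000 rpm,
# so 6000 rpm (and its multiples) alias to an apparently static image.
for rpm in (1500, 3000, 6000, 6060):
    print(rpm, round(apparent_step_deg(rpm), 2))
```

A speed slightly off the aliasing point (6060 rpm in the sketch) would drift slowly, which is one way such physical/computational artefacts could be exploited.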
MY VIDEO MANIFESTO
As a musician who works with spatial audio, I would sometimes pair the Insta360 One R, a consumer-level 360 camera that a friend sent me as a gift, with my ambisonic field recordings. It soon became my most frequently used camera over the past year.
Capturing and presenting panoramic images. In 1787, Robert Barker presented large-scale panoramic paintings in purpose-built buildings. In 1887, John Connon patented his whole-circuit panoramic camera, which records 360 degrees of the horizon on a long roll of film. The human fascination with immersive panoramic experiences has persisted ever since, and technological developments in recent decades have brought 360 cameras to the consumer market. These devices are usually pocket-sized, use two fisheye lenses to cover the whole spherical field, and do not necessarily deliver the most satisfactory image quality compared with normal cameras in a similar price range. I wondered if there were more creative possibilities, so I set out to explore the extended uses of a 360 camera and branched out in two separate directions. The first approach revolves around exploiting the camera's physical limitations by combining it with a spinning motor; the outcome is titled fps/rpm (presented at Micro Narratives 2022: Performative Videography). The second approach addresses the concept of machine versus human perception; the result is titled a soundwalk.
a soundwalk consists of short samples from 52 waypoints along a soundwalk from Central to Wan Chai on Nov 27, 2022. Sequenced in an algorithmic and generative manner, the footage is either replayed in chronological order or shuffled randomly under rhythmic instructions.
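The sequencing described above could be sketched roughly as follows. This is only an illustrative assumption of the logic, not the actual patch: the waypoint names, the seeding, and the idea of alternating passes per rhythmic cycle are all my own placeholders.

```python
import random

# 52 waypoint samples, Central to Wan Chai (names are placeholders)
waypoints = [f"waypoint_{i:02d}" for i in range(1, 53)]

def sequence(clips, shuffle=False, seed=None):
    """Return a playback order: chronological, or randomly shuffled."""
    order = list(clips)
    if shuffle:
        random.Random(seed).shuffle(order)  # seeded for repeatability
    return order

# e.g. one chronological pass followed by one shuffled pass,
# switching whenever the rhythmic instruction says so
playlist = sequence(waypoints) + sequence(waypoints, shuffle=True, seed=7)
print(len(playlist))
```

Either mode preserves the same pool of 52 samples; only the ordering, and hence the listener's sense of the walk's geography, changes.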
The human perceptual system is complex and powerful, but at the cost of being highly selective, as dictated by the rules of evolution. Human ears come with amplifying and attenuating mechanisms to filter out irrelevant information, yet the hearing range is limited to roughly 20 Hz to 20 kHz. Human eyes may be strong at recognizing colour and depth, but the field of view is also narrow. At any moment when we exercise our hearing and vision, then, we are filtering out the majority of our surroundings. In other words, perception begins with reducing information.
Machines, or the 360 camera, on the other hand, capture all directions for reproduction. In theory, a single panoramic image offers a practically countless number of viewing angles to be framed into a screen of normal aspect ratio. In addition, the Ricoh Theta Z1, the particular 360 camera used in this experiment, offers an ambisonic recording function. Its four internal microphones, though of lower quality, reproduce a basic spatial representation of the soundscape. Working in line with the framed video, the audio delivers spatial information and a listening experience at a particular viewing angle. Multiple instances can therefore be generated from each piece of footage.
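The reframing itself amounts to sampling a window of the equirectangular (panoramic) image around a chosen yaw. A simplified sketch of how one viewing angle maps to pixel columns, ignoring pitch and lens projection, which the real stitching software handles; the frame width is a hypothetical stand-in for the footage's actual resolution:

```python
def yaw_to_column(yaw_deg, width):
    """Map a viewing yaw (degrees, 0 = front, wrapping at 360)
    to the centre pixel column of an equirectangular frame."""
    return int((yaw_deg % 360.0) / 360.0 * width)

def crop_columns(yaw_deg, fov_deg, width):
    """Column indices covering a horizontal field of view around the yaw,
    wrapping across the 360-degree seam where the two fisheyes meet."""
    span = int(fov_deg / 360.0 * width)
    centre = yaw_to_column(yaw_deg, width)
    return [(centre - span // 2 + i) % width for i in range(span)]

# A 90-degree window looking just left of "front" on a 3840-px-wide frame
cols = crop_columns(yaw_deg=350, fov_deg=90, width=3840)
print(len(cols), cols[0], cols[-1])
```

Because every yaw in [0, 360) yields a different valid crop, each frame supports the "multiple instances" described above; rotating the ambisonic sound field by the same yaw keeps the audio aligned with the chosen view.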
Is it possible to draw a boundary between soundscapes? There are straightforward cases, such as indoor/outdoor, where an entrance clearly shuts out most exterior sound sources, but most sonic boundaries are dynamic, ambiguous, and unidentifiable. For instance, when you take an escalator from the MTR platform to the concourse, the screen doors, the train, the chimes, and other sounds gradually fade away and quietly disappear. There is a good chance that one realizes the changes only when already approaching the exit and entering another distinct soundscape of traffic and construction, possibly also music and advertisements.
Living in a machinic and mass-mediated environment, we can hardly escape the total cacophony. This leaves us no choice but to listen distractedly. A soundwalk, which literally refers to a walk with one's ears open, asks us to act against this daily habit, which can be unpleasant at times; even so, I believe Hong Kong's urban soundscape has a lot to offer.
(Luk, December 2022)