SOUND RECORDINGS TO MUSICAL JOURNEYS: A DIGITAL PORTFOLIO DISSERTATION OF SEVEN ELECTROACOUSTIC COMPOSITIONS FOR DATA-DRIVEN INSTRUMENTS

by

OLGA OSETH

A DISSERTATION

Presented to the School of Music and Dance and the Graduate School of the University of Oregon in partial fulfillment of the requirements for the degree of Doctor of Musical Arts in Music Performance of Data-driven Instruments

June 2019

DISSERTATION APPROVAL PAGE

Student: Olga Oseth
Title: Sound Recordings to Musical Journeys: A Digital Portfolio Dissertation of Seven Electroacoustic Compositions for Data-driven Instruments

This dissertation has been accepted and approved in partial fulfillment of the requirements for the Doctor of Musical Arts in Performance of Data-driven Instruments degree in the School of Music and Dance by:

Dr. Jeffrey Stolet, Chairperson
Dr. Jeffrey Stolet, Advisor
Dr. Akiko Hatakeyama, Core Member
Dr. Jack Boss, Core Member
Ying Tan, Institutional Representative

and

Janet Woodruff-Borden, Vice Provost and Dean of the Graduate School

Original approval signatures are on file with the University of Oregon Graduate School.

Degree awarded June 2019

© 2019 Olga Oseth
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs (United States) License.

DISSERTATION ABSTRACT

Olga Oseth
Doctor of Musical Arts
School of Music and Dance
June 2019
Title: Sound Recordings to Musical Journeys: A Digital Portfolio Dissertation of Seven Electroacoustic Compositions for Data-driven Instruments

This Digital Portfolio Dissertation presents seven original real-time electroacoustic compositions that employ data-driven instruments, videos of their performances, associated files needed to perform the works, and a text document that describes the seven compositions. Each of the seven compositions began with recorded material that was subsequently transformed into musical experiences and journeys. I used Symbolic Sound's Kyma to execute the sonic transformations. Each of the seven compositions employed different performance interfaces and required different performance techniques. The control data, data mapping strategies, technical configurations, and sound-producing algorithms were all elements of my data-driven instruments. The individual capabilities of the interfaces, embodiment, and my choices of interaction with my interfaces were elements of my performance techniques. In the written document I will discuss how my seven compositions were transformed from sound recordings to musical journeys, focusing on musical, technical, and creative concepts.

CURRICULUM VITAE

NAME OF AUTHOR: Olga Oseth

GRADUATE AND UNDERGRADUATE SCHOOLS ATTENDED:
University of Oregon, Eugene, Oregon
St. Cloud State University, St. Cloud, Minnesota

DEGREES AWARDED:
Doctor of Musical Arts, Data-driven Instrument Performance, 2019, University of Oregon
Master of Music, Intermedia Music Technology, 2015, University of Oregon
Bachelor of Music, Piano Performance, 2012, St. Cloud State University
Bachelor of Arts, New Media and Composition, 2012, St. Cloud State University

AREAS OF SPECIAL INTEREST:
Live, interactive performance with data-driven instruments
Sound design
Electroacoustic music
Composition
Collaborative piano

PROFESSIONAL EXPERIENCE:
Graduate Employee, University of Oregon, Eugene, Oregon, 2015-2019
Teaching Assistant, University of Oregon, Eugene, Oregon, 2013-2015
Lab Monitor/Tutor, St. Cloud State University, St. Cloud, MN, 2009-2012
GRANTS, AWARDS, AND HONORS:
Graduate Teaching Fellowship, Music Technology, 2015-2019
Outstanding Student Employee Award, University of Oregon, 2016
Outstanding Graduate Scholarship Achievement Award, University of Oregon, 2015, 2019
Music Student Meinz Award, St. Cloud State University, 2013
Summa cum Laude, St. Cloud State University, 2012

CONFERENCE PRESENTATIONS:
Kyma International Sound Symposium, 2014, 2016, 2018
Society for Electro-Acoustic Music in the United States, 2015, 2016, 2017, 2018
SEAMUS Vol. 26 CD, 2017
International Computer Music Conference, 2016
Women in Music Technology Conference, 2016

ACKNOWLEDGMENTS

I wish to express my sincere appreciation to my committee, Professors Jeffrey Stolet, Akiko Hatakeyama, Jack Boss, and Ying Tan, for their support, patience, knowledge, and the time each of them spent working with me during my graduate studies. In addition, special thanks are due to Professors Scott Miller, Kristian Twombly, Daniel O'Bryant, Catherine Verrilli, and Larissa Simanenkova, as well as teachers Dave Enyart and Jane Hovey, who taught me so much about music and inspired me to become an educator. An expression of gratitude also goes out to my Eugene family: Ceci Lafayette and pets, Carol Koton, and dear friends. A special thanks to my partner, Croy, for his support, optimistic thinking, and access to particular audio recordings that helped me so much during my doctoral studies. A very special thanks to my family: my Mama, who taught me kindness and to follow my dreams; my Babushka Rimma and Dedushka Ivan, who taught me hard work; and my Uncle Andrei, who taught me courage. This endeavor would not have been possible without your love, sacrifices, encouragement, and support. Thank you!

TABLE OF CONTENTS

Chapter Page
I. INTRODUCTION ................................................................................................... 1 II. CONCEPTUAL BACKGROUND AND THEORETICAL FRAMEWORKS ..... 5 Data-driven Instrument ......................................................................................... 5 Modularity ............................................................................................................. 6 Data Sources ......................................................................................................... 7 Data Communication ............................................................................................ 7 Data Mapping ........................................................................................................ 10 Kyma Sound Synthesis Engine ............................................................................. 11 Performative Space ............................................................................................... 13 Observability ......................................................................................................... 14 III. DISCUSSIONS OF INDIVIDUAL PORTFOLIO COMPOSITIONS ..... 15 III.1 RIDE ON ...................................................................................................... 15 Creative Concept .............................................................................................. 15 Sonic Material .................................................................................................. 17 Performance Interface ...................................................................................... 19 Musical Challenges ..........................................................................................
21 Musical Opportunities ...................................................................................... 22 Data Mapping ................................................................................................... 23 Sound Design ................................................................................................... 25 Formal Structure .............................................................................................. 30 Performative Techniques ................................................................................. 31 viii Additional Comments ...................................................................................... 31 III.2 USCGC HEALY (WAGB-20) ..................................................................... 33 Creative Concept .............................................................................................. 33 Sonic Material .................................................................................................. 37 Performance Interface ...................................................................................... 41 Musical Challenges .......................................................................................... 42 Musical Opportunities ...................................................................................... 43 Data Mapping ................................................................................................... 43 Sound Design ................................................................................................... 46 Formal Structure .............................................................................................. 50 Performative Techniques ................................................................................. 52 Additional Comments ...................................................................................... 53 III.3 IVANA KUPALA ........................................................................................ 55 Creative Concept .............................................................................................. 55 Sonic Material .................................................................................................. 57 Performance Interface ...................................................................................... 58 Musical Challenges .......................................................................................... 61 Musical Opportunities ...................................................................................... 62 Data Mapping ................................................................................................... 62 Sound Design ................................................................................................... 64 Formal Structure .............................................................................................. 66 Performative Techniques ................................................................................. 67 Additional Comments ...................................................................................... 68 ix III.4 WIND IN THE FOREST .............................................................................. 69 Creative Concept .............................................................................................. 69 Sonic Material .................................................................................................. 
71 Performance Interface ...................................................................................... 72 Musical Challenges .......................................................................................... 73 Musical Opportunities ...................................................................................... 74 Data Mapping ................................................................................................... 75 Sound Design ................................................................................................... 77 Formal Structure .............................................................................................. 81 Performative Techniques ................................................................................. 82 Additional Comments ...................................................................................... 83 III.5 IMMIGRATION (GAMETRAK VARIATIONS) ....................................... 85 Creative Concept .............................................................................................. 85 Sonic Material .................................................................................................. 88 Performance Interface ...................................................................................... 89 Musical Challenges .......................................................................................... 90 Musical Opportunities ...................................................................................... 91 Data Mapping ................................................................................................... 91 Sound Design ................................................................................................... 93 Formal Structure .............................................................................................. 96 Performative Techniques ................................................................................. 97 Additional Comments ...................................................................................... 100 III.6 S/V LIST ....................................................................................................... 101 x Creative Concept .............................................................................................. 101 Sonic Material .................................................................................................. 103 Performance Interface ...................................................................................... 104 Musical Challenges .......................................................................................... 105 Musical Opportunities ...................................................................................... 106 Data Mapping ................................................................................................... 106 Sound Design ................................................................................................... 108 Formal Structure .............................................................................................. 113 Performative Techniques ................................................................................. 113 Additional Comments ...................................................................................... 114 III.7 BRIGIT ......................................................................................................... 
115 Creative Concept .............................................................................................. 115 Sonic Material .................................................................................................. 118 Performance Interface ...................................................................................... 119 Musical Challenges .......................................................................................... 120 Musical Opportunities ...................................................................................... 120 Data Mapping ................................................................................................... 121 Sound Design ................................................................................................... 124 Formal Structure .............................................................................................. 128 Performative Techniques ................................................................................. 129 Additional Comments ...................................................................................... 130 SUMMARY ................................................................................................................ 131 BIBLIOGRAPHY ....................................................................................................... 133 xi LIST OF FIGURES Figure Page 1. Basic data flow diagram of the complete instrument for Ride On .................... 15 2. Photo of a skateboard controller powered up ................................................... 20 3. Photo of data streams used in Ride On ............................................................. 20 4. Max patch - data mapping example used in Ride On ....................................... 23 5. Capytalk example to control the amplitude in Ride On .................................... 24 6. Signal flow using a SoundToGlobalController in Ride On .............................. 25 7. Signal flow of sampling synthesis techniques used in Ride On ........................ 26 8. Signal flow of granular synthesis structure used in Ride On ............................ 27 9. Capytalk expression to create pulsating rhythms in Ride On ........................... 27 10. Signal flow of two layered sound structures used in Ride On ........................ 28 11. Signal flow of analysis and resynthesis structure used in Ride On ................. 29 12. Basic data flow diagram of the complete instrument for USCGC Healy (WAGB-20) .................................................................................................. 33 13. Photo of the USCGC Healy (WAGB-20) ship ............................................... 34 14. Photo of groups of walruses in Hanna Shoal .................................................. 36 15. Photo of Healy stuck in the ice ....................................................................... 37 16. Photo of cold, blue waters of the Chukchi Borderlands ................................. 39 17. Photo of jellyfish studies on the Hidden Ocean 2016 Expedition ................. 40 18. Data streams used in USCGC Healy (WAGB-20) ........................................... 42 19. Max patch - data mapping used in USCGC Healy (WAGB-20) ..................... 44 20. Max patch - another data mapping used in USCGC Healy (WAGB-20) ........ 44 xii 21. 
Signal flow in USCGC Healy (WAGB-20) using a SoundToGlobalController object .................................................................................................................... 45 22. Capytalk expressions inside musical parameters ............................................ 45 23. Kyma Timeline File for USCGC Healy (WAGB-20) ..................................... 46 24. Signal flow of Arpeggio sound design, first stage .......................................... 47 25. Signal flow of Arpeggio sound design, second stage ..................................... 48 26. Signal flow of a 3DControl7C PlayIntheMiddleOfCube sound design, first stage .......................................................................................... 49 27. Signal flow of a 3DControl7C PlayIntheMiddleOfCube sound design, second stage ...................................................................................... 49 28. Basic data flow diagram of the complete instrument for Ivana Kupala ......... 55 29. Photo of flowers in the prairies of Eastern Ukraine ........................................ 56 30. Photo of an installation using custom-made flower wreath interface ............. 59 31. Data streams used in the Ivana Kupala composition ...................................... 60 32. Arduino IDE sketch adopted from Daniel Jolliffe .......................................... 61 33. Max patch - data mapping used in Ivana Kupala .......................................... 63 34. Capytalk used for data mapping in Ivana Kupala .......................................... 63 35. Example of data mapping inside Kyma for Ivana Kupala ............................ 64 36. Signal flow of analysis and resynthesis using Tau in Ivana Kupala ............. 65 37. Signal flow of subtractive and sampling synthesis used in Ivana Kupala ..... 65 38. Signal flow of sound structure using granular synthesis used in Ivana Kupala 66 39. Basic data flow diagram of the complete instrument for Wind in the Forest 69 40. Photo of common trees along the Blanton Ridge Trailhead .......................... 70 xiii 41. Waveform representation of sonified wind data ............................................ 72 42. Diagram of data streams used in Wind in the Forest ..................................... 74 43. Photo of Delicode NI mate software .............................................................. 75 44. Data mapping using NI mate software for Wind in the Forest ....................... 76 45. Signal flow of further data mapping inside of Kyma for Wind in the Forest.. 77 46. Custom waveforms created from sonified wind data .................................... 78 47. Signal flow of a synthetic spectrum created out of custom waveforms ........ 78 48. Signal flow of another example of a synthetic spectrum created out of custom waveforms ....................................................................................... 79 49. Signal flow of sampling synthesis with AM in Wind in the Forest ............... 80 50. Basic data flow diagram of the complete instrument for Immigration (Gametrak Variations) ............................................................. 85 51. Photo of data streams used for Immigration (Gametrak Variations) ............ 90 52. Max patch - data mapping used for Immigration (Gametrak Variations) ..... 92 53. Capytalk used in Immigration (Gametrak Variations) .................................. 92 54. Capytalk used for breaching thresholds in Immigration (Gametrak Variations) 93 55. 
Signal flow of analysis and resynthesis techniques used in the first movement, Visa ..................................................................................... 94 56. Signal flow showing six decorrelated Kyma TauPlayers used in the first movement, Visa ............................................................................................. 94 57. Signal flow of sampling synthesis techniques used in the first and second movements ....................................................................................... 95 58. Signal flow showing amplitude of TauPlayers being modulated ................... 95 xiv 59. Signal flow of sampling and subtractive synthesis techniques ...................... 96 60. Gametrak used in the first movement, Visa ................................................... 98 61. Gametrak used in the second movement, Green Card ................................... 98 62. Gametrak used in the third movement, Passport ........................................... 99 63. Basic data flow diagram of the complete instrument for S/V List ................. 101 64. Photo of Shilshole Bay Marina ...................................................................... 102 65. Photo of S/V List ........................................................................................... 103 66. Image of Kyma Control application used in S/V List .................................... 105 67. First type of data mapping used in S/V List ................................................... 107 68. Second type of data mapping used in S/V List ................................................ 107 69. Third type of data mapping used in S/V List ................................................... 108 70. Signal flow of a bell-like sound design used in S/V List ................................ 109 71. Signal flow of a bass drum-like sound design used in S/V List ..................... 109 72. Signal flow of an enhanced audio recording with sound design used in S/V List ........................................................................................................... 110 73. Signal flow of analysis and resynthesis used in S/V List ............................... 111 74. Another example of analysis and resynthesis used in S/V List ...................... 112 75. Signal flow of granular synthesis used in S/V List ......................................... 112 76. Basic data flow diagram of the complete instrument for Brigit ...................... 115 77. Photo of spring-like objects found at a thrift store ......................................... 117 78. Photo of spring-like objects attached to the cookie sheet used in Brigit ........ 118 79. Photo of a hanging metal tray used in Brigit .................................................. 118 80. Photo of contact microphones used in Brigit .................................................. 119 xv 81. Max patch - first layer of data mapping used in Brigit ................................... 121 82. Data mapping within Kyma used in Brigit ..................................................... 123 83. Second example of data mapping within Kyma used in Brigit ...................... 123 84. Second layer, third example of data mapping used in Brigit .......................... 124 85. Signal flow of sampling and subtractive synthesis techniques used in Brigit 125 86. Capytalk expression used for panning in Brigit .............................................. 125 87. 
Signal flow of analysis and resynthesis and granular synthesis techniques used in Brigit ................................................................................ 126 88. Signal flow showing analysis and resynthesis in combination with a custom-made waveform .................................................................................. 127 89. Signal flow of AM and FM techniques used in Brigit .................................... 128

CHAPTER I
INTRODUCTION

This Digital Portfolio Dissertation contains seven original real-time electroacoustic compositions that employ data-driven instruments, videos of their performances, associated files used to perform the works, and a text document that describes the seven compositions. Through the videos one can both hear and see performances of the works and the playing of these data-driven instruments. The associated files provided with this Digital Portfolio Dissertation include custom-made software, sound algorithms, and directions to recreate the seven compositions for further study. The text document describes each composition's creative concept, sonic material, sound design, performance interface, musical challenges, musical opportunities, data mapping, formal structure, and the fundamental techniques used to perform it.

The broad theme that arose from the portfolio of compositions was the transformation of original recorded materials into musical journeys. In the context of this dissertation, the concept of journey means several things. First, a journey can relate to literal transportation through the physical and cultural environments in which we live. This physical sense of journey is apparent in all seven compositions, where travel on a skateboard, an expedition to the sea, a wander through a prairie, a walk in the forest, an immigration pilgrimage, a promenade through the marina, and a treasure hunt through a thrift store serve as extra-musical anchors for the works. Second, a journey is also apparent in the creative and compositional processes that I employed in all seven compositions. All sonic material in the seven compositions within my Digital Portfolio Dissertation began with recorded sounds that underwent transformations. The sound design process alone represents a metaphorical journey. These sonic transformational journeys were supported by my choices of performative actions used to play the compositions and by the formal structure itself. Together, this multi-layered conception of journey influenced and shaped my creative processes and compositional decisions.

The seven compositions contained in this Digital Portfolio Dissertation are: Ride On for a skateboard controller, custom Max/MSP software, and Kyma; USCGC Healy (WAGB-20) for a Leap Motion controller, custom Max/MSP software, and Kyma; Ivana Kupala for a custom-made controller, custom Max/MSP software, and Kyma; Wind in the Forest for a Microsoft Kinect controller, Delicode NI mate software, and Kyma; Immigration (Gametrak Variations) for a Gametrak controller, custom Max/MSP software, and Kyma; S/V List for an iPad, the Kyma Control application, and Kyma; and Brigit for contact microphones on metal, custom Max/MSP software, and Kyma. Each of these compositions is described below.

In Ride On, I recreated associations familiar to me of riding a skateboard through a sonic world (created out of sound recordings made at a skatepark) and observable performative actions.
I used my feet and shifting body weight to operate a skateboard-shaped controller, through which data streams were generated by my performance. I controlled musical parameters in real time through sound-producing algorithms created in the Kyma environment.

In USCGC Healy (WAGB-20), I transformed recordings made on the icebreaker ship in the Arctic into a musical composition. These transformations created my representation of what the sonic world would sound like to the Healy as she traveled through the Arctic, if she were a living organism. I used the location of my hands in space to operate a Leap Motion controller. Using data streams created by this controller, I was able to control amplitude, frequency, and other musical parameters of the sound-producing algorithms in Kyma.

In Ivana Kupala, I referenced my Ukrainian heritage and used recordings of my voice to transform warm memories of making flower wreaths with my family into a musical composition. I used my hands to touch the red ribbons of the wreaths to generate data streams. These data streams controlled sound-producing algorithms within the Kyma environment.

In Wind in the Forest, I used sonified wind data and recordings of percussive objects to recreate my representation of magical creatures coming out for mischief and play in the forest. In this composition I used the location and movement of my body and hands in three-dimensional space to operate a Microsoft Kinect controller. Data streams created by my body's position were sent to Kyma to control musical parameters in real time.

In Immigration (Gametrak Variations), I referred to my journey of becoming an American citizen and used recordings of my voice to create a musical composition that represented my emotions while going through the citizenship process. In the first movement, I used my hands and my whole body to pull the retractable cables of a Gametrak controller placed inside a backpack on my back. In the second movement, I used my hands to pull the retractable cables of a Gametrak controller and wrapped them around three circles of various sizes. In the third movement, I used my hands and body to pull the retractable cables of a Gametrak controller in a way that gave me a wider range of motion. Using the data streams generated by this interface, I controlled the sound-producing algorithms in Kyma.

In S/V List, I transformed original audio recordings made on a sailboat into a musical composition that recreated my overall impression of being on a sailboat at a marina. I used my hands to interact with the screen of an iPad (through the Kyma Control application), articulating positions in 2D space with my fingers. Data streams generated by this interface through my performative actions were sent to the Kyma environment for interaction with musical parameters in real time.

In Brigit, I repurposed old metal objects into a data-driven interface. Through my observable performative actions and the transformation of original audio recordings made while interacting with two metal trays, I turned this idea into a musical composition. I used my hands to strike and touch the metal trays, which created vibrations that were picked up by contact microphones and sent as data streams to Kyma for the control of musical parameters.

CHAPTER II
CONCEPTUAL BACKGROUND AND THEORETICAL FRAMEWORKS

Prior to discussing each composition in detail, I want to clarify certain technical concepts that I employed throughout the course of this dissertation.
In this chapter I will clarify what a data-driven instrument is and discuss modularity, data sources, data communication, data mapping, the Kyma environment, performative space, and observability.

Data-driven Instrument

A data-driven instrument (DDI) includes: a performance interface that the user operates to generate data, a software layer where data streams are transformed, and a sound synthesis engine to which data streams are sent in order to control musical parameters.1 According to Dr. Jeffrey Stolet, "The interface is the part of the data-driven instrument where data is generated or acquired through human operation."2 User interfaces come in many forms and sizes. Some interfaces have buttons, such as those on a TV remote control. Some interfaces have faders, such as rotary volume knobs. Some interfaces have combinations of both buttons and faders. Buttons are good at starting and stopping events, an essential element of music. Buttons output packets of data to indicate their messages. Faders output streams of data that are good for ongoing control. Sometimes these faders take the form of what we often see on audio mixers, but this is not always the case. Some interfaces, such as a Gametrak controller, can have faders with three or more dimensions. Some interfaces are touchable, such as a skateboard controller. Some interfaces are un-touchable (untouched during their operation), such as a Microsoft Kinect controller. Interfaces can have symmetrical or asymmetrical control. For example, an iPad has asymmetrical control because the X-axis dimension is larger than the Y-axis dimension. An example of an interface with symmetrical control would be a Leap Motion controller, which uses the same range for the location of the left and right hands.

1 Jeffrey Stolet, "Twenty-three and a Half Things about Musical Interfaces" (Lecture Notes, Kyma International Sound Symposium Keynote Address, Brussels, Belgium, September 14, 2013).
2 Ibid.

According to Dr. Stolet, "Whereas traditional instruments are driven by energy exerted into their physical systems, data-driven instruments replace energy's function with data streams."3 The performer plays a data-driven instrument by generating data through performative actions. These performative actions, applied to an interface, create data streams that realize musical ideas through musical parameters.4

3 Ibid.
4 Ibid.

Modularity

According to the Merriam-Webster dictionary, modularity derives from the word modular, meaning "constructed with standardized units or dimensions for flexibility and variety in use."5 Modularity is an important concept of a data-driven instrument and its performance. The small pieces of code, the order in which things are connected, the combination of software, the musical elements on which the composition is built, the order in which one sets up for the composition, and the combination of all of these elements together are parts of modularity. Each module of a data-driven instrument, whether software, hardware, or a performance element, is essential to the whole composition. Modularity is a concept of creating a bigger picture out of smaller segments. Consider a chair: it is made of all sorts of parts, such as nails, pieces of wood, a cushion, and other elements. If the chair were assembled in another way, the result would be something other than a chair. The order of putting smaller segments into a bigger picture matters.

5 Merriam-Webster, "Modularity," (accessed January 10, 2019), https://www.merriam-webster.com/dictionary/modularity.
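To make the role of ordering concrete, the following minimal Python sketch (purely illustrative, and not part of any of the actual instruments) applies the same three mapping modules in two different orders and obtains two different results:

```python
# Three small "modules" that each transform a control value.
def scale(value, factor=2.0):
    return value * factor

def offset(value, amount=10):
    return value + amount

def clip(value, low=0, high=127):
    return max(low, min(high, value))

def run_chain(value, modules):
    # Apply each module in the given order.
    for module in modules:
        value = module(value)
    return value

raw = 100
print(run_chain(raw, [scale, offset, clip]))  # scale, offset, then clip -> 127
print(run_chain(raw, [clip, offset, scale]))  # clip, offset, then scale -> 220
```

The modules are identical in both chains; only their order differs, and so does the output.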
Because different orders can produce different results, the selection and order of the modules will vary the output.

Data Sources

Button and fader mechanisms often take the form of sensors. Sensors respond to a variety of stimuli including location, orientation, position, color, light, temperature, movement, touch, air pressure, frequency, amplitude, infrared radiation, and bioelectrical signals. Sensor readings from an interface can be independent, as in my composition Ivana Kupala, where each sensor was located inside a ribbon on a custom-made wreath controller and each sensor sent a separate stream of data. Sensor readings can also be interconnected, such as the X axis and Y axis of a Leap Motion controller working together to create a calculated hand position. Sensor readings may be transmitted to the computer via a Universal Serial Bus (USB) connection, a Bluetooth connection, or a network connection. Sensor readings are accessed through computer software (in some cases software that accompanies the interface), through Max/MSP (Max), or through third-party developer software.

Data Communication

Communication protocols that data-driven instruments can use to transfer information from one device to another include Musical Instrument Digital Interface (MIDI), Open Sound Control (OSC), serial communication, and Bluetooth. According to Andrew Swift, "[MIDI] is a technical standard that describes a communication protocol, digital interface, and electrical connectors that connect a wide variety of electronic musical instruments."6 MIDI is a protocol that allows messages to be sent as digital data from one musical instrument to another musical instrument connected in a MIDI system. MIDI instruments are designed so that they respond to incoming messages according to a particular channel, with a total of sixteen channels.7 Users can specify which channel to use to transfer MIDI messages. According to Andrew Swift, "How a MIDI instrument responds to a MIDI message depends upon what mode it has been set to, with total of five modes."8 I used MIDI communication in six of the seven compositions in my Digital Portfolio Dissertation.

Open Sound Control (OSC) is a communication protocol for networking computers, musical interfaces, and other multimedia devices in order to share control data.9 According to the OSC: Open Sound Control Protocol document, "OSC advantages include inter-operability, accuracy, flexibility, enhanced organization and documentation."10 OSC messages are transported across the internet and within local subnets using UDP/IP and Ethernet. OSC messages consist of an address pattern, a type tag, arguments, and an optional time tag.11 I used OSC communication in S/V List.

6 Andrew Swift, "A Brief Introduction to MIDI," http://www.doc.ic.ac.uk/~nd/surprise_97/journal/vol1/aps2/ (accessed January 9, 2019).
7 Ibid.
8 Ibid.
9 "Introduction to OSC," Opensoundcontrol.org, http://opensoundcontrol.org/introduction-osc (accessed January 9, 2019).
10 University of Padua, OSC: Open Sound Control Protocol, 3, https://elearning.dei.unipd.it/pluginfile.php/59467/mod_page/content/46/9_OSC-protocol.pdf.
11 Ibid.

According to the SparkFun website, "Serial communication is a process of sending data one bit at a time, sequentially over a communication channel or a computer bus."12 Musicians use serial communication when they create custom data-driven interfaces, for communication between an Arduino microcontroller and a computer or other devices.
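On the computer side, reading such a sensor stream can be sketched in a few lines of Python with the pyserial library. This is a generic illustration under assumed settings (the port name, baud rate, and one-integer-per-line message format are placeholders), not the actual configuration used in Ivana Kupala:

```python
import serial  # pyserial library

# Hypothetical port name and baud rate; both depend on the Arduino and the operating system.
port = serial.Serial("/dev/tty.usbmodem14101", baudrate=9600, timeout=1.0)

while True:
    line = port.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue  # nothing arrived before the timeout
    try:
        reading = int(line)  # assume the Arduino prints one integer per line
    except ValueError:
        continue  # ignore malformed or partial lines (for example, startup noise or crosstalk)
    print("sensor value:", reading)
```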
All Arduino microcontrollers have at least one serial port, which communicates either through digital pins 0 and 1 or through the USB port.13 According to the GPS Signal notes, "Serial communication was originally designed to transfer data over relatively large distances through some sort of data cable, including Ethernet and serial ATA."14 At times, when working with electronics, crosstalk can be created. According to the Merriam-Webster dictionary, crosstalk is an "unwanted signal or noise in the communication channel caused by transference of energy from another cable or circuit."15 In my experience working with custom-made electronic devices, crosstalk happens frequently. I used serial communication in Ivana Kupala.

Some interfaces may use Bluetooth as a communication protocol, as was the case in the data communication between the controller and the wireless receiver dongle in Ride On. According to the book Internet of Things: Architectures, Protocols and Standards, "Bluetooth is a wireless technology standard for exchanging data over short distances using radio waves."16

12 "Serial Communication," SparkFun, https://learn.sparkfun.com/tutorials/serial-communication/all (accessed January 9, 2019).
13 Ibid.
14 GPS Signal, 1, https://upload.wikimedia.org/wikiversity/en/7/7c/Serial_Comm.note.20170730.pdf.
15 Merriam-Webster, "Cross Talk," (accessed January 9, 2019), https://www.merriam-webster.com/dictionary/cross talk.
16 Simone Cirani et al., Internet of Things: Architectures, Protocols and Standards (Hoboken, NJ: John Wiley & Sons, 2019), 31.

Data Mapping

The second module of a data-driven instrument relates to mapping in software. This part of the data-driven instrument is where each value generated by the interface is mapped to a new output value. Common data mapping processes include scaling, offsetting, reshaping, inversion, smoothing, quantizing, thinning, and filtering. I worked with data streams in this second stage of the data-driven instrument, deciding how to shape and use them in the sound-producing algorithms. I used Max for the data mapping in five of the seven compositions. Max is a visual programming language optimized for music, sound, and multimedia. In Max, I used objects such as the hi, serial, aka.leapmotion, and adc~ objects17 to receive data streams from the user interfaces. I used each of the data mapping processes referenced above in my compositions. For inversion, scaling, and offsetting of data streams I used the scale object. This object allowed me to map one range of data onto another range. To thin out data streams I used the speedlim and change objects. The speedlim object limits how often messages are permitted to pass. The change object (as seen in the data mapping of my Ride On composition in its Max patch) filters out repetitions of the same number. For interpolation or smoothing of data streams I used the line object. The line object generates intermediate numerical values, within specified times, between one value and another. I could provide the line object with an argument that specified how intermediate values were interpolated. To reshape data, I used the table object. The table object stores an array of numbers.

17 As a convention in this dissertation document, I will italicize the names of all Max objects.

All data sent from the data mapping portion of my instruments are either MIDI or OSC messages. For Wind in the Forest I used Delicode's NI mate for data mapping.
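Whatever the host software, these mapping processes reduce to a little arithmetic and filtering. The following Python sketch is purely illustrative (it is not the Max patches themselves, and the function names are invented); it mirrors the behavior of the scale, change, speedlim, and line objects described above:

```python
import time

def scale(value, in_low, in_high, out_low, out_high):
    """Map a value from one range to another; swapping out_low and out_high inverts it."""
    normalized = (value - in_low) / float(in_high - in_low)
    return out_low + normalized * (out_high - out_low)

def make_change_filter():
    """Like Max's change object: pass a value only if it differs from the previous one."""
    last = [None]
    def changed(value):
        if value == last[0]:
            return None
        last[0] = value
        return value
    return changed

def make_speed_limit(min_interval):
    """Like Max's speedlim object: pass at most one value per min_interval seconds."""
    last_time = [0.0]
    def limited(value):
        now = time.monotonic()
        if now - last_time[0] < min_interval:
            return None
        last_time[0] = now
        return value
    return limited

def line(start, end, steps):
    """Like Max's line object: interpolate linearly from start to end."""
    return [start + (end - start) * i / float(steps) for i in range(steps + 1)]

# Example: scale a raw sensor value of 0-35 to the MIDI range 0-127, inverted.
print(round(scale(20, 0, 35, 127, 0)))  # prints 54
```

Chaining such small operations in different orders is precisely the modularity discussed earlier in this chapter.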
The data mapping used in Wind in the Forest included inversion, scaling, and offsetting. Other data mapping was done within the Kyma environment to further reshape data streams to fit the needs of the compositions.

After shaping data streams in the data mapping portion of the instrument, I routed them to the third module of the DDI: the sound-producing algorithms contained in the Kyma environment. To enable Kyma to receive data I used software called KymaConnect, developed by Delora. According to the Delora website, "KymaConnect is a user agent program for the Apple Macintosh that provides a virtual MIDI patchbay between Max and the Paca [the hardware element of the Kyma system]."18

Kyma Sound Synthesis Engine

The Kyma environment consists of two elements: the hardware, called the Paca (or Pacarana), and the software, called Kyma.19 For the creation of the seven compositions in my Digital Portfolio Dissertation, I used the Paca hardware, which is a sound computation engine with two multi-core processors and four GB of sample RAM. The Paca runs the Kyma 7+ software. For me, Kyma provides a way to organize my sounds into a musical composition. Most importantly, Kyma provides me with a live performance environment. Data streams that are sent into Kyma as messages are used to control musical parameters in real time. In the compositions of my dissertation I used Kyma's control mechanisms to shape musical parameters such as timbre, rhythm, pitch, volume, and the location of sound in real time.

18 "Delora," Delora KymaConnect, last modified 2018 (accessed January 10, 2019), http://www.delora.com/products/kymaconnect/.
19 Carla Scaletti and Kurt Hebel, "Kyma Sound Design Inspiration," Kyma, last modified 2017 (accessed May 18, 2017), http://kyma.symbolicsound.com/.

Working with sound, I often had to alter incoming data streams to accommodate the needs of musical nuances. For example, sometimes a range of values had to be smaller or larger, depending on the range of a parameter, such as frequency. In some cases, I did a more creative kind of mapping. In these sections I included elements of indeterminacy to select from an array of values to control frequency or amplitude. I also defined and adjusted the parametric ranges. For instance, sometimes I did not know the exact pitch I would hear at any given time, which allowed me to make musical decisions in real time based on what I heard.

The Kyma environment uses the Capytalk language. According to Dr. Carla Scaletti, "Capytalk is a real-time parameter-manipulation language that one can use in the parameter fields of Kyma Sounds. Capytalk can generate a single number or a stream of numbers that can be used to control any parameters accessible via hot parameter fields."20 There are many techniques used to generate these numbers. According to Dr. Scaletti, such techniques include "performing arithmetic and logical operations, accessing values from arrays, generating a changing over time data stream and analysis of attributes of sound."21 One can express numbers in Capytalk expressions in many ways. Dr. Scaletti mentions, "A way to express numbers in Capytalk expression is by simple numbers, numbers with event values, through real time Capytalk expression, arithmetic, numbers through sound, numbers through Sound objects and combinations of all of the above."22

20 Carla Scaletti, Kyma X Revealed! (Champaign, IL: Symbolic Sound Corporation, 2004), 126.
21 Ibid.
22 Ibid.
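To illustrate the two kinds of mapping just described, parametric range adjustment and indeterminate selection from an array of values, here is a short explanatory sketch in Python rather than Capytalk (the ranges and pitch values are invented; the Capytalk expressions I actually used appear in the figures of Chapter III):

```python
import random

def to_frequency(control_value, low_hz=110.0, high_hz=880.0):
    """Rescale a 0-127 controller value into a frequency range in hertz."""
    normalized = control_value / 127.0
    return low_hz + normalized * (high_hz - low_hz)

def indeterminate_pitch(trigger, pitch_pool=(220.0, 277.2, 329.6, 440.0)):
    """On each trigger, choose a pitch from a predefined array of values."""
    if trigger:
        return random.choice(pitch_pool)
    return None

print(round(to_frequency(64), 1))   # a controller value near the middle of its range
print(indeterminate_pitch(True))    # one of the four pitches, chosen indeterminately
```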
I also used Kyma as a score for organizing and performing my seven compositions. Software scores can provide performers with information about the order in which musical events occur, the pace at which elements of musical sections occur, the pace of the overall composition, or the audio levels of a given section. I used Kyma's Timeline to organize sound-producing algorithms into a musical structure and to provide me with performance directions.

Performative Space

According to Dr. Stolet, "Performative space is a space where a human performer engages the interface, and where its physical disposition may be radically transformed depending on how the performer chooses to engage it."23 Musicians can engage with music via spatial metaphors, which may require a transformation of literal space.24 Dr. Stolet states, "Whether a musical interface is originally created for music or appropriated from outside of music, conceptualizing how an interface fits into a performance space is essential in order to play music."25 For example, in my composition Brigit, the baking sheets were originally created for baking, and I used them as elements of a musical interface within the performative space. Lighting of the performance space should be considered as well. In some cases (as in USCGC Healy (WAGB-20) and Wind in the Forest) lighting might affect the creation of data streams and interfere with the desired musical results.

23 Jeffrey Stolet, "The Cinematics of Music Performance," article, Eugene, Oregon, 2017.
24 Ibid.
25 Ibid.

Observability

Observability is the quality that allows an audience to observe the performative actions that cause the sounds they hear. Observability provided by the performer offers information about musical time. Observability enhances information about when musical events happen.26 Visual cues observed by the audience can help the audience recognize when a sound or some other musical event will occur in the near future. Visual cues also provide information about when a sound or a musical segment has concluded. As Dr. Stolet indicates, "Visual cues provide understanding about ongoing sound that is controlled by humans."27 Observability provides a fuller experience, enhances that experience with a temporal dimension, and helps the listener to anticipate musical events and understand the music.28

26 Ibid.
27 Ibid.
28 Ibid.

CHAPTER III
DISCUSSIONS OF INDIVIDUAL PORTFOLIO COMPOSITIONS

CHAPTER III.1
RIDE ON

My composition Ride On, for a skateboard controller, custom Max/MSP software, and Kyma, is a real-time performance composition approximately thirteen minutes in duration. The composition was conceptualized as a quadraphonic composition; however, the four audio channels may reasonably be diffused to eight or more speakers. The complete data-driven instrument for Ride On consists of the skateboard interface, which sends data to the Max software; the Max software remaps the data and sends it to sound-producing algorithms contained in Kyma. Data is transmitted from the skateboard to the computer through a Bluetooth connection. This configuration is depicted in Figure 1 below.

Figure 1: Basic data flow diagram of the complete instrument for Ride On

Creative Concept

The creative concept for this composition came from students zooming past me on their skateboards. Within American culture the skateboard is well known.
When I thought of a skateboard or someone riding one, I immediately used three of my five senses, hearing, sight, and touch, to recreate the image in my head. I imagined the dense, grungy noise of wheels against the concrete. The timbre of the sound when a skateboard or a longboard passed me was also distinguishable. The size, diameter, coating, and material of a wheel (steel, clay, or urethane wheels29) added to that distinguishable timbre. As a skateboarder leaned from one side to another while passing or turning, I heard an audible change in the density of the sound. The spatial perception of sound shaped my understanding of where skateboarders were coming from and of which side of the road to jump to in order to avoid getting "run over." A skateboarder could leisurely ride past me, enjoying the feeling of fresh spring air. A skateboarder could frantically zoom past me in a rush to get to their next class. Those leisurely or frantic varieties of skateboard riding affected my perception of the rhythms heard in skateboard sounds. Despite the slight worry of being run over, the rhythms that I perceived in skateboard sounds were soothing and musical to my ear.

The culture of a college campus added to my perceived sight of skateboards and their riders. In my perception, skateboarders were anywhere in the age range from teenagers to adults in their early 30s. This age group seemed active, healthy, and slim, possibly because of the physical exercise they got out of riding skateboards. The fashion choice that I associated with riding a skateboard was formulated from my experience seeing student skateboarders on the college campus. Many skateboarders wore skinny pants, hoodies or a T-shirt, accompanied by a pair of sneakers. In my opinion the choice of outfit was partially a representation of style and partially for comfort, since riding a skateboard requires extensive physical movement. Skateboarders sometimes ride at specifically dedicated skateboard parks to practice their tricks, but they also use the college campus after hours to do tricks on railings, sidewalks, and stairs. Additionally, riders can be observed using their skateboards to commute from one side of campus to another. Separate from their choice of riding, it seemed to me that skateboard riders enjoyed riding skateboards alone or with their friends. Riding a skateboard does not seem like a forced activity to me, but rather a fun pastime.

29 "Skateboard Design: Making Skateboard Wheels | Exploratorium," Exploratorium: The Museum of Science, Art and Human Perception, last modified April 5, 2018 (accessed October 15, 2018), http://www.exploratorium.edu/skateboarding/skatedesignwheel.html.

My knowledge and experience playing with and riding skateboards as a child added to my current perception of skateboards. I knew that the board itself had a sandpaper-like feel to give grip to the shoes. The length of the board affected how one positioned oneself. The siding or frame of the wheels was made out of metal, with a smooth, glassy, cold feel. I knew that a skateboard weighs five to eight pounds and that it is easier to operate with one's feet than with, for example, one's hands or belly. I knew that in order to go faster I needed to use my foot to push off from the ground to get more momentum. A certain element of balance was required to be a better rider. I knew the touch of the wheels as they rolled gently on the concrete.
In my composition Ride On, I recreated those familiar associations of riding a skateboard through a sonic world and observable performative actions.

Sonic Material

The sonic material of Ride On arose from a series of audio recordings I made at Washington Jefferson Skatepark (WJ Skatepark) in Eugene, where many skateboarders ride. WJ Skatepark is located at the crossing of Jefferson and Washington Streets in Eugene, Oregon. According to the City of Eugene, "The skatepark is the largest covered skatepark in the nation with 23,000 feet of riding terrain."30 Skateboarders ride in the park, rain or shine. WJ Skatepark is located under an I-105 traffic bridge, where sounds of transportation can be heard in the park. The terrain is made out of concrete, shaped into various skateboard runs including a ribbon, a mini-snake run, and a blend of varied skate terrain.31 The surfaces of the park (such as various cement sides, walls, and the concrete surface of the bridge) allow environmental sounds to bounce off of them, creating multiple reflections. Those reflections of sound added to my perceived understanding of a large and reverberant space. The multiple reflections contributed to a reverberant atmosphere audible in all of my recordings, which was further enhanced in my composition through sound design.

My friend Rebecca Connor and I met at WJ Skatepark. We were joined by other riders who practiced their riding skills. The recordings included Rebecca riding her skateboard and doing tricks. During our session, Rebecca explained to me how a certain sound was created depending on the position of a skateboard on a surface. For example, if the surface was wet, the skateboard made a squeaky sound. I used these recorded explanations as material for further sound design in my composition. Environmental sounds of the space itself and traffic leaking into my recordings added to the quality of the recorded material. The environmental sound was further transformed in my composition through sound design to create the environmental sound world that I imagined. Sounds of other riders were heard at the skatepark and were further transformed into musical ideas.

30 "Skateparks & Community Involvement," A Dive into Junk (accessed October 15, 2018), https://blogs.uoregon.edu/kmartin8w15gateway/.
31 "WJ Skatepark Urban Plaza," Downtown Day Storage Pilot Service | Eugene, OR Website (accessed October 15, 2018), https://www.eugene-or.gov/1733/WJ-Skatepark-Urban-Plaza.
Performance Interface My idea about the composition and sound recordings made at the skatepark encouraged my need for an interface which was tied in with the notion of skateboarding. After hours of searching on various MIDI forums I came across a Tony Hawk skateboard controller that I used as an interface in my composition, Ride On. Since I had not worked with a skateboard interface, I was not able to predict how I could use and get data streams out of the interface. The interface along one side had a series of buttons, however, there were no ports to connect the device to my laptop. After further research I learned I had to have a wireless receiver dongle that connected to the skateboard controller via Bluetooth and to the computer via USB. After trial and error, I realized that the order of operations 19 for powering up the skateboard interface mattered. First, the dongle had to be connected to the laptop via USB. Second, four batteries had to be inserted into the skateboard controller. Third, the On/Off button had to be pressed until a single blue light appeared (seen in Figure 2). Figure 2: Photo of a skateboard controller powered up If more than a single blue light appeared, that meant the interface (skateboard controller and data streams sent to the wireless dongle receiver) was not properly connected to the dongle. The process of restarting a skateboard controller had to happen over again. Using the hi object in Max, I accessed the data streams of the interface. In my Max patch, RideOn.maxpat, I routed data streams using the route object. Each data stream created by the operation of a skateboard controller was reshaped through data mapping techniques. The total of sixteen data streams was created from operating the skateboard interface. In Ride On, I use six of these sixteen data streams (Figure 3). Figure 3: Photo of data streams used in Ride On 20 Data stream 1 was produced by covering up infrared sensor one (within the Max patch it is represented as routed no. 24) so that the sensor detected a presence of an object (my foot) in front of it. Data stream 2 was produced by covering up infrared sensor two (routed no. 25). Data stream 3 was produced by covering up infrared sensor three (routed no. 26). Data stream 4 was produced by covering up infrared sensor four (routed no.27). Data streams 5+6 (routed no. 28 and 29) I used in combination by bringing the whole skateboard controller into vibration, by either stomping both of my feet on it (as can be seen at 1:43) or by letting it drop to the floor with acceleration (as can be seen at 12:35 or 12:52). Musical Challenges I discovered three challenges working with the skateboard interface. The first challenge was the jittery, unpredictable data streams produced by the skateboard controller. The jitter in data increased when I actively engaged the controller. This jittery data affected my choice of what performative actions I would use. One way I encountered this challenge was by converting data streams into button-like functions to trigger musical events. During the lyrical sections of the composition, I smoothed data streams extensively to solve the jittery data issue. The second challenge was the fact that the data streams were interconnected, making it difficult to rely on them to produce independent results. For instance, intentionally changing one data stream created simultaneous collateral changes in other data streams. Therefore, I interacted with one to two data streams at the same time. 
The third challenge was the lighting of the performance space. Other objects around the skateboard controller could potentially 21 interfere with other data streams due to the nature of the infrared sensors on the interface. To solve this issue, I placed the skateboard controller at center stage, approximately five feet away from objects in all four directions. Placing lights above the performer, instead of on the sensors of the skateboard controller, solved the problem of potential disruption of the control signals by the lighting. Musical Opportunities Working with the skateboard controller, I discovered several opportunities that I further employed in Ride On. The shape and size of the controller indicated to me that I should interact with it similarly to a real skateboard. I took my cultural knowledge of how one interacts with a skateboard and incorporated it into my composition. Throughout the composition I used my feet and shifting body weight to interact with the controller. Despite the skateboard controller’s lack of wheels, I moved ever so slightly around the performative space. This movement happened because the bottom surface of the skateboard controller had fabric padding causing it to be moved by my shifting body weight. Performing with the skateboard controller required tremendous balance and much practice. The audience appreciated my composition more when they observed my performative actions that were similar to those required in riding a real skateboard. The size and shape of the controller, my performative actions, and the sound world that I actualized, guided the audience to perceive the composition through the notions and culture associated with skateboarding. Another opportunity of working with the skateboard controller was its durability. The skateboard interface did not need to be 22 “babied.” The interface could easily hold 124 pounds of weight. Not many interfaces provide an artist with such opportunity. Data Mapping Before I could work with data streams to control sound-producing algorithms within Kyma for Ride On, a number of data mapping techniques had to be used to shape and route data (Figure 4). These data mapping techniques included scaling, offsetting, reshaping, smoothing and filtering. For the continuous controller data streams 24-27, I used the scale object in Max to scale and offset their original ranges of somewhere between 0 and 35 to a new range of 0 to 127. Additionally, in Max, I smoothed data using the line object as well as filtering out repetitions of numbers in jittery data streams of the skateboard controller by using the change object. For data streams that I turned into button-functions (data streams 28 - 29), I used a conditional statement to output on/off messages. I again filtered out repetitions of values by using the change object. Data mapping was also the process I used to convert continuous control data streams to button- like functions turning on and off events within Kyma. Both continuous control data streams and on/off data streams were routed to Kyma as MIDI controller messages using the ctlout object. Figure 4: Max patch - data mapping example used in Ride On 23 In Kyma I further reshaped data streams to control musical parameters. I used scaling, offsetting, inversion, smoothing, and the conversion of continuous control data streams into buttons. 
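Because the Max patch and Capytalk expressions themselves are not reproduced in this document, the following is a minimal Python sketch of the mapping chain just described: scaling and offsetting a raw reading from roughly 0-35 into the 0-127 MIDI controller range, smoothing it, filtering out repeated values, and converting a continuous stream into an on/off button. The smoothing coefficient, the threshold of 64, and the simulated input values are illustrative assumptions, not values taken from RideOn.maxpat.

```python
# Illustrative sketch of the data-mapping chain described above.
# In the piece these stages were handled by the scale, line, and change
# objects in Max and by Capytalk in Kyma; the input range (0-35), smoothing
# coefficient, and threshold below are assumptions made for illustration.

def scale_and_offset(value, in_lo=0, in_hi=35, out_lo=0, out_hi=127):
    """Map a raw sensor value into the 0-127 MIDI controller range."""
    value = max(in_lo, min(in_hi, value))            # clamp out-of-range readings
    span = (value - in_lo) / (in_hi - in_lo)
    return round(out_lo + span * (out_hi - out_lo))

class Smoother:
    """One-pole smoothing of a jittery data stream."""
    def __init__(self, coefficient=0.2):
        self.coefficient = coefficient
        self.state = 0.0
    def process(self, value):
        self.state += self.coefficient * (value - self.state)
        return self.state

class ChangeFilter:
    """Pass a value through only when it differs from the previous one."""
    def __init__(self):
        self.last = None
    def process(self, value):
        if value != self.last:
            self.last = value
            return value
        return None

def to_button(value, threshold=64):
    """Convert a continuous controller value into an on/off message."""
    return 127 if value >= threshold else 0

if __name__ == "__main__":
    smoother, dedup = Smoother(), ChangeFilter()
    for raw in [0, 3, 3, 12, 30, 35, 35, 8]:         # simulated jittery sensor data
        scaled = scale_and_offset(raw)
        smoothed = smoother.process(scaled)
        if (out := dedup.process(round(smoothed))) is not None:
            print(out, to_button(out))
```

The ordering in the sketch follows the description above: filtering repetitions after smoothing keeps the outgoing controller stream sparse without reintroducing jitter.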
I wrote Capytalk expressions directly into sound parameter fields in addition to the use of SoundToGlobalControllers.32 Figure 5 shows how I controlled the amplitude of one of my sounds using high resolution MIDI. The data stream was received from Max, the range of the data stream was inverted, then data was scaled, offset, and smoothed. Figure 5: Capytalk example to control the amplitude in Ride On Another example of a data mapping technique I used within Kyma involved the breaching of a defined threshold. I used the breaching thresholds placed on the Kyma Timeline to control when I progressed from one musical section into another. As one can see in Figure 6, I controlled the WaitUntil Sound object with a Capytalk expression inside of a SoundToGlobalController Sound object. In this context I used a Boolean expression in which two data streams were required to breach a threshold in order for a triggering event to occur. In situations like this, I used the SoundToGlobalController Sound object to control musical parameters (if the sound-production algorithm was complex) because it helped me to see easily what musical parameters were being controlled. 32 As a convention in this dissertation document, I will italicize the names of all Kyma Sound objects. 24 Figure 6: Signal flow using a SoundToGlobalController in Ride On Sound Design The six data streams that I chose to use (after being scaled, smoothed, and filtered in Max) were sent to Kyma to control musical parameters. These data streams were sent throughout the composition and could be used at any moment to control any specific sound parameters. I edited my original audio recordings from the skatepark session. I categorized the resulting edited audio files into attack, crescendo, grace note, melodic, rhythmic, and sustained sounds. Such organization made it easy for me to search through the various audio files during the compositional process. Audio recordings that formed the basis of my sonic world were transformed in Kyma through sound modification processes such as sampling synthesis, granular synthesis, and analysis and resynthesis. About 98% of the sounds that I created for Ride On employed sampling synthesis techniques. I took audio recordings and played them through Kyma’s generator modules such as the Sample, Multisample, and SampleCloud Sound objects. I controlled amplitude and frequency of sounds, as well as when sounds started. I also defined using an indeterminate selection process, which audio recordings would play from a collection of audio recordings placed in the system’s memory. I decided whether a sound recording was to be reversed or if the recording was to be looped. I also decided which part of the 25 recording was to be looped and for how long. Additionally, I applied time variant control signals, such as envelopes, to audio recordings to transform the sound further. I used Capytalk expressions that receive real-time data streams created by my performative actions of operating a skateboard controller to shape sounds in overt and dramatic ways. Figure 7 shows how I used Kyma’s Multisample Sound object to playback three different audio recordings. When this Kyma Sound was active I could control which of the three audio recordings would play and determine when it would play. Other musical parameters such as amplitude and frequency were controlled with additional Capytalk expressions. 
Figure 7: Signal flow of sampling synthesis techniques used in Ride On More examples of sampling synthesis used in Ride On can be found in the second row of the Kyma sound file, RideOnSoundFile.kym. These sounds included elements of granular synthesis and of analysis and resynthesis. To achieve the musical result that I wanted, I often needed to combine different synthesis techniques. The structure of sounds that employed granular synthesis within Ride On was comprised of three elements: three SampleCloud Sound objects that modified audio recordings, a Mixer Sound object that combined the SampleCloud Sound objects, and a MultichannelPan Sound object that spatialized the output of the Mixer (which can be seen in Figure 8). Figure 8: Signal flow of granular synthesis structure used in Ride On Within each Sound object I used my judgment to control musical parameters. For example, in the parameter fields of SampleCloud, I used Capytalk expressions that received real-time data streams to control the amplitude. I also used Capytalk expressions to control which portion of an audio recording would be granulated at any point in time, the density of the SampleCloud as a whole, the duration of each individual grain, the amount of jitter in the grain duration, and pan position. As can be observed in the first row of the Kyma sound file, RideOnSoundFile.kym, I created seventeen different sounds that used granular synthesis. In Ride On, I combined simple sounds into more complex sounds. As a result of these combinations I created rhythms. Through Capytalk expressions in the amplitude field (Figure 9) I created these background, lower-pitched, pulsating rhythms. Figure 9: Capytalk expression to create pulsating rhythms in Ride On Figure 10 shows that I used two layers to create the final sound. Both layers of sound were created out of the same custom waveform (SB8.aif) that was extracted from my skateboard recordings of Rebecca doing skateboard tricks. I generated twenty partials per layer and controlled the frequency of each partial. I further created rhythms out of each newly generated spectrum by multiplying it with an Oscillator Sound object to create rhythmic envelopes. Each spectrum was used in a resynthesis process involving a FilterBank, and was rhythmized, amplified, detuned, and layered against a decorrelated copy of itself. Figure 10: Signal flow of two layered sound structures used in Ride On I used Capytalk expressions that received real-time data streams from the interface to control which partials would sound during a performance, essentially remixing the timbre of the sound in live performance. I used Capytalk expressions to generate random numbers to control the amplitude of each partial. Such transformation made the sound dense in timbre and changed it over time. Layer one was lower, and layer two had more of a melodic, higher-pitched timbre. I combined these two layers, amplified them, and spatialized them. I used analysis and resynthesis in this composition to create dense chords. In Figure 11, one can see how I used a combination of sampling and granular synthesis, as well as analysis and resynthesis techniques, all in one sound. This sound can be heard at 7:51, when I ended a section and prepared the audience for the buildup of sound that was imminent. Using Kyma's Tau algorithm, I analyzed the original recordings of skateboarding. Using Capytalk, I randomly selected where to start in the recordings out of a range of predefined starting points.
I combined two layers of sound and granulated the result. I used Capytalk expressions that received real-time data streams to control the formant and grain duration of the sound. I layered the sound one more time, detuning it, amplifying it, and applying a reverberant effect to it. Figure 11: Signal flow of analysis and resynthesis structure used in Ride On Additional examples of analysis and resynthesis techniques used in sounds of Ride On can be found in the fourth row of RideOnSoundFile.kym. A large portion of the time working on Ride On was devoted to sound design. When I felt satisfied with the variety of sounds and textures, I formulated and constructed the composition. Organization of the sounds into the composition came naturally to me. I organized them in the time domain using Kyma's Timeline. Using Kyma's Timeline, I was able to control how long each section was to be played and when to progress to the next section. I set overall levels of the composition and applied equalization to sounds that required it. I used the Timeline as a score to guide me through the performance of the composition. In the VCS of the Timeline one can see directions that I gave myself for performing the composition. An on-screen clock was displayed while I performed the work so that I could accurately keep track of time during the performance. The color scheme employed on the Timeline alternated between bright and darker colors so that I could see important aspects of the Timeline at a glance while on stage. Formal Structure The primary musical material for Ride On was constructed from the audio recordings of skateboards ridden at the park, environmental sounds of the skatepark, and conversations with riders. Given that a single recording mechanism, a Tascam recorder, was used to make all the recordings, even the recording process played a role in the unification of the composition. I further unified the composition by restricting the audio synthesis techniques I used to sampling, granular synthesis, and analysis and resynthesis. The central musical action in Ride On revolves around the alternation of two contrasting kinds of material: fast, loud, dense, rhythmical, and highly spatialized sections on the one hand, and soft, slow, lyrical sections on the other. A two-note chromatic motive is heard throughout the composition and adds to its unification. The composition's climax is at 12:36 of the video recording, where many layers of sound are heard. The layers of sound increase in dynamics and density up to the final attack sound of the composition. Ride On starts fortissimo and allegro animato, with a strong performative action of me jumping onto the skateboard controller. Similar to the beginning, the ending is also fortissimo and allegro animato, accompanied by my jumping off the skateboard. Performative Techniques The durability of the controller also opened up a wide range of possible performative actions that I incorporated in the composition. I jumped on it, I jumped off it, stomped on it, covered each sensor with my foot one at a time, and covered combinations of sensors. I tilted the controller and let it roll from side to side. I balanced on it and dropped it, without severely damaging the interface. Such a wide variety of interactions with the interface and my whole body brought an element of fun to the physicality of performing Ride On.
The skateboard controller was portable and could be easily set up with the other elements of the data-driven instrument used in Ride On, making it possible for me to perform this composition at various locations around the world. Additional Comments After all technical and practical challenges were addressed, sound design created, and composition organized in a formal structure, it was time for me to consider the performative space. In an actual performance of the piece, the skateboard controller is centrally placed on a rug on an open stage. Lights are adjusted prior to the performance to not interfere with the wireless data streams of the controller. I chose not to dress in traditional, concert attire because of the physicality of the performance. To enhance the idea and atmosphere of the composition I dressed in skinny pants, T-shirt, and sneakers. The choice of my clothing was not image based, but rather it was for practicality. Shoes had to be sneakers to give my feet better grip on the controller. Ride On was physically involved, and I got warm by performing the piece. Through the sound recordings of 31 skateboarding, through the skateboard interface, and in my performance of Ride On, the cultural influence of skateboard riding was shown to play an important role in the creation of this musical journey. 32 CHAPTER III.2 USCGC HEALY (WAGB-20) USCGC Healy (WAGB-20), for a Leap Motion controller, custom Max/MSP software, and Kyma is a real-time performance composition approximately ten minutes in duration. The composition was created as a quadraphonic composition; the four audio channels may be diffused to eight or more speakers if the performer desires. The complete data-driven instrument for USCGC Healy (WAGB-20) is comprised of the Leap Motion interface sending data to the custom Max software that remaps the data and sends it to Kyma’s sound producing algorithms. This configuration is shown in Figure 12 below. Figure 12: Basic data flow diagram of the complete instrument for USCGC Healy (WAGB-20) Creative Concept The creative concept for this composition came from three sources: my visit to the USCGC Healy (WAGB-20) ship, my partner Mr. Croy Carlin’s (a marine technician on research ships) stories and pictures from a research cruise to the Chukchi Borderlands, and video documentation of Bureau of Ocean Energy Management (BOEM) oceanographer, Dr. Kate Segarra. According to the U.S. Coast Guard, “The US Coast 33 Guard ship Healy is USA’s newest and most technologically advanced polar icebreaker”33 (Figure 13)34. Figure 13: Photo of the USCGC Healy (WAGB-20) ship The Healy is designed for research activities and missions to the Polar Regions. According to the U.S. Coast Guard, such missions include ship escort, environmental protection, and search and rescue.35 I had a special opportunity to visit the ship at her home Port of Seattle, Washington on October 22, 2016. I met the Coast Guard crew, marine technicians and science party. I saw research laboratories and science equipment used for deployment. According to the U.S. Coast Guard, The Healy was named after Captain Michael A. Healy, Commanding Officer of BEAR.36 The U.S. Coast Guard states that Captain Healy dedicated twenty years of his life to the service of protecting natural resources of the Alaska region.37 The ship is 420 ft. 
in length and has a fuel capacity of 1,220,915 gallons.38 The maximum speed the ship can travel is seventeen knots, which is about nineteen and a half miles per hour, and the ship can break ice as deep as eight feet.39 33 "USCGC HEALY (WAGB-20)," United States Coast Guard, last modified September 2018, (accessed November 9, 2018), https://www.pacificarea.uscg.mil/Our-Organization/Cutters/cgcHealy/. 34 Photo courtesy of U.S. Coast Guard. 35 Ibid. 36 United States Coast Guard, "USCGC HEALY (WAGB-20)". 37 Ibid. 38 Ibid. In other words, USCGC Healy (WAGB-20) is a large, heavy, slow, but strong ship. My tour of the Healy Coast Guard ship brought back childhood memories. Being aboard the ship, I was told that I had to speak the ship's language to communicate properly with other members. Going through passageways to the decks of the ship, opening watertight hatch doors, and exploring the bridge, galley, and engine room were all familiar to me from my childhood. As a child I visited my Grandfather Ivan and Uncle Andrei on their ships. The visit and memories of these previous visits to ships enhanced my understanding of my partner's stories from the cruise to explore the Chukchi Borderlands. I was inspired to create a composition incorporating my representation of what the sonic world would sound like to this big ship traveling through the Arctic if she were a living organism. On July 2nd, 2016, the United States Coast Guard icebreaker Healy started her 40-day journey to explore the Chukchi Borderlands. According to Dr. Kate Segarra, "The journey was a Hidden Ocean 2016 expedition led by the National Oceanic and Atmospheric Administration (NOAA) from Seward Alaska to Chukchi Sea."40 The expedition was to explore rapidly changing and poorly understood ecosystems.41 One hundred forty-one members aboard the 420-ft.-long icebreaker headed to 77.5° North, to the Chukchi Borderlands. According to Dr. Segarra, the Chukchi Borderlands is located over 700 miles away from the North Pole.42 On their way to the Chukchi Borderlands, the ship went through a large area of open water where a wide variety of sea life was observed. 39 Ibid. 40 Kate Segarra, and BOEM, "Exploring the Chukchi Borderlands with BOEM Oceanographer Kate Segarra," Bureau of Ocean Energy Management, last modified February 8, 2018, (accessed November 9, 2018), https://www.boem.gov/Exploring-the-Chukchi-Borderlands/. 41 Ibid. 42 Croy Carlin, "Science Cruise to Chukchi Borderlands 2016," Interview by author, August 25, 2016. According to Mr. Croy Carlin's stories, such sea life included humpback whales, grey whales, porpoises, ringed seals, sea lions, walruses, and polar bears.43 According to Dr. Segarra, "Hours after crossing the Arctic circle, passing through Hanna Shoal (a region of shallow water in Chukchi Sea), groups of walruses were spotted."44 One such group can be seen in Figure 14.45 Figure 14: Photo of groups of walruses in Hanna Shoal According to my partner, everyone was excited to see these silly-looking creatures, and one curious walrus kept swimming close to the ship to explore.46 Fascinated by my partner's stories and pictures, I realized that I wanted to reference the curious walrus in my composition. In the video documentation of USCGC Healy (WAGB-20), at 8:20-8:40, one can hear a different sonic environment only heard once in this composition.
This sonic environment sounded like little, tiny voices of animals or birds processed with a large amount of reverberation effect, heavily spatialized to imitate my representation of goofy walrus thought processes. 43 Ibid. 44 Segarra, "Exploring the Chukchi Borderlands with BOEM Oceanographer Kate Segarra." 45 Photo curtesy of Croy Carlin. 46 Carlin, "Science Cruise to Chukchi Borderlands 2016." 36 Two days after crossing the Arctic circle the Healy hit thick ice of approximately forty miles across.47 Getting through thick, dense ice was a slow process for the 420-ft. long ship. According to Dr. Segarra the process of breaking ice, called backing and ramming, made the ship slower.48 Mr. Carlin mentioned that, “backing and ramming proceeds at approximately one mile per hour, and sometimes less.”49 I used this idea of icebreaker’s backing and ramming as performative actions towards the end of my composition. In the video documentation (from 8:44-9:27) one can see the back and forth performative actions of my left hand. Such performative action helped to build the intensity to the final moment of the composition. Sonic Material Plowing through ice50 was a gradual process, even for the icebreaker. Mr. Croy Carlin mentioned that during this slow process members of the ship were spotting polar bears and preparing research equipment ready for sea trials.51 Mr. Carlin also mentioned that after several days of plowing through heavy, thick ice, the Healy started having mechanical issues and got stuck in the ice (which can be referenced in Figure 15). Figure 15: Photo of Healy stuck in the ice 47 Segarra, "Exploring the Chukchi Borderlands with BOEM Oceanographer Kate Segarra." 48 Ibid. 49 Carlin, "Science Cruise to Chukchi Borderlands 2016." 50 Ibid. 51 Ibid. 37 My partner was able to create and provide me with audio recordings of ice breaking and the ship moving back and forward. I used those recordings as primary sonic material in my composition USCGC Healy (WAGB-20). I realized when listening to the recordings I was able to imagine the sound world clearly. Audio recordings made of the big ship being stuck in the ice naturally consisted of many layers of sound due to the environment. The prominent sonic layer to me was grumblings of big chunks of ice moving against the metal surface of the ship. This layer was rich in its spectrum containing a wide frequency distribution. Sharp attacks naturally enhanced these grumblings, low sounds. The low sounds had a nice organic shape to them. The second layer of sound that I heard in the audio recordings made when the ship was stuck in the ice were higher pitched, crinkly sounds, perhaps the sound of smaller textures of ice breaking and blending with water. To me these sounds seemed like a sparkly beverage poured over dry ice. The next layer of sound was slightly distinguishable; it was a higher, sustained pitch (around B3) of electronic equipment running on the ship. This layer of sound reminded me of sounds heard in Christina Kubisch’s “Homage with Minimal Disinformation,” where she recorded sounds of electronic equipment heard through her custom-made electromagnetic headphones.52 Another layer of sound I heard in the audio recording was the sound of wind. More precisely, I heard objects flapping in the wind. Perhaps the flapping was of flags or straps on science equipment, but that sound had a faint, rhythmic pattern to it. Because the ship was stuck in the ice for four days, my partner had extra time on his hands. 
Encouraged by my emails to "go on a sonic walk around the ship and find me cool sounds of engines, equipment, anything that makes some sort of sound," he walked around the Healy with my portable digital recorder to make his sonic walk a success. 52 Christina Kubisch, writer, Five Electrical Walks. Christina Kubisch. Important Records, 2007, CD. Among the sounds he recorded were some rhythmic sounds. Some of these rhythmic sounds came from science equipment that made beeps, and others came from various pieces of ship equipment operating in the control and engine rooms. He also recorded a sound that reminded me of a box being dropped. This sound was short and precise, with a nice attack. I transformed the audio recordings taken by my partner aboard the Healy into musical material and used it in the creation of the USCGC Healy (WAGB-20) composition. According to Dr. Segarra, water masses from the Pacific, Atlantic, and Arctic oceans combined in the Chukchi Borderlands.53 The combined waters created beautiful, ever-changing shades of turquoise and sapphire blue (Figure 16).54 Figure 16: Photo of cold, blue waters of the Chukchi Borderlands 53 Segarra, "Exploring the Chukchi Borderlands with BOEM Oceanographer Kate Segarra." 54 Photo courtesy of Croy Carlin. The transition from the water to the sky was further colored with exclusive shades of blue. This color was enhanced with textural patterns of blocks of floating ice. A cold, mysterious, thoughtful, yet soothing sense existed in these never-ending cold waters. Listening to my composition USCGC Healy (WAGB-20), one can hear the musical material as thoughtful, mysterious, and perhaps cold at moments. I imitated the sound of the ship going through these blue waters with wave-like musical swells. I also imitated the breathtaking cold air surrounding crew members of the ship with an emphasis given to the higher frequency range and the sparkly textures in USCGC Healy (WAGB-20). Discovering that living organisms live in the Chukchi Borderlands came as a surprise. According to Dr. Segarra, "The focus of this expedition was to study jellyfish species living in cold, distant waters."55 Mr. Carlin's job was to assist scientists in deploying research equipment into the ocean to explore the biodiversity of the Chukchi Borderlands. According to Dr. Segarra, "A diverse array of jellies lives in the Chukchi Sea."56 They are colorful and round-shaped. Some examples of the jellyfish observed on this trip, which I saw in the video documentation, can be seen in Figure 17. Figure 17: Photo of jellyfish studies on the Hidden Ocean 2016 Expedition From Mr. Carlin's descriptions and Dr. Segarra's visual documentation of this trip, I found these little creatures elegant in the way they moved. The jellyfish moved along sine-wave-like curves. I also observed the smooth transition from one moment of movement to another. The smooth, gradual transition from one sonic element to the next in USCGC Healy (WAGB-20) was my representation of the way jellyfish move in the cold waters below a heavy ship. 55 Segarra, "Exploring the Chukchi Borderlands with BOEM Oceanographer Kate Segarra." 56 Ibid. Performance Interface I chose to work with the Leap Motion controller because I wanted to perform the composition with performative movements of my hands. The Leap Motion controller is a small USB device, which I placed facing up on a flat surface and connected to my computer.
According to the Leap Motion website, "Leap Motion has two monochromatic IR cameras and three infrared LEDs.[…] The LED's generate pattern-less IR light and the camera generates almost 200 frames per second of reflected data."57 I found that this controller was able to observe a hemispherical area of roughly three feet and two inches. The controller was also able to recognize the position of my hands and fingers above its cameras. Using Max, I accessed the data streams of the interface through the external Max object aka.leapmotion, created by Masayuki Akamatsu.58 Although this interface could create many data streams, I chose to use eight data streams, generated by the XYZ positions of my left and right hands, the distance between my two hands, and the number of hands within the performance area. I used the route object to get data from the aka.leapmotion object. Figure 18 shows the data streams used. Data stream 1 was created by the X-position (left to right) of my right hand (Arm1 positive xAxis). Data stream 2 was created by the Y-position (up and down) of my right hand. Data stream 3 was created by the Z-position (forward and back) of my right hand. Data stream 6 was determined by the X-position of my left hand (Arm2 negative xAxis). Data stream 7 was created by the Y-position of my left hand. Data stream 8 was created by the Z-position of the left hand. For data stream 11, I converted continuous control data into triggers. Data stream 12 was generated by the distance between my two hands. 57 "Leap Motion," Leap Motion, last modified 2017, (accessed May 18, 2017), https://www.leapmotion.com/#112. 58 "Akalogue," Aka.objects | Akalogue, 2016, (accessed November 16, 2018), http://akamatsu.org/aka/max/objects/. Figure 18: Data streams used in USCGC Healy (WAGB-20) Musical Challenges I recognized two challenges working with the Leap Motion interface. The first challenge was that the lighting of the room could affect the data streams produced. Room or theater lights needed to be positioned to the side of the Leap Motion controller instead of directly above it. Performance of the Leap Motion was more consistent under dimmer lighting. The lighting situation had to be addressed during the technical rehearsals prior to the performance. The second challenge was the inaccuracy of the interface in recognizing the number of fingers in the performance area. Every time a finger was removed, the count restarted. For me, using performative actions of my fingers was out of the question. The movements of changing fingers would have been small and less visible from the stage to my audience anyway. I decided to go with performative movements of my hands instead. Musical Opportunities The data streams of the Leap Motion were interconnected, and I had to rely on visual feedback of where my hands were in the three-dimensional performance area. When one hand was removed from the performative space above the Leap Motion controller, the second hand could technically control data streams in the other hemisphere. This nuance was initially a musical challenge, since the interface does not identify which hand is present; however, I used this restriction as an opportunity to create a new performative action. For example, at the beginning of the composition I set several thresholds to be breached by this performative action of using one hand. Another opportunity provided by the Leap Motion controller was the compactness and size of the interface, allowing me to travel internationally with it.
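Because the routing in Figure 18 is shown only as a screenshot, the following Python sketch illustrates how the eight data streams described above might be derived from palm positions. In the composition this parsing was done in Max with the aka.leapmotion external and the route object; the palm-position tuples, the ordering of the hands, and the one-hand trigger condition below are hypothetical stand-ins used only for illustration.

```python
# Illustrative sketch of deriving the eight data streams described above
# from palm positions. The data structure, hand ordering, and the trigger
# condition for stream 11 are assumptions, not details of the Max patch.
import math

def derive_streams(palms):
    """palms: list of (x, y, z) tuples, one per detected hand (right hand first)."""
    streams = {}
    if len(palms) > 0:                               # right-hand XYZ -> streams 1-3
        streams[1], streams[2], streams[3] = palms[0]
    if len(palms) > 1:                               # left-hand XYZ -> streams 6-8
        streams[6], streams[7], streams[8] = palms[1]
        dx, dy, dz = (a - b for a, b in zip(palms[0], palms[1]))
        streams[12] = math.sqrt(dx * dx + dy * dy + dz * dz)  # distance between hands
    # Stream 11: continuous data converted into a trigger; here it fires when
    # only one hand is present in the performance area (an assumed condition).
    streams[11] = 1 if len(palms) == 1 else 0
    return streams

if __name__ == "__main__":
    print(derive_streams([(120.0, 250.0, -30.0)]))                        # one hand
    print(derive_streams([(120.0, 250.0, -30.0), (-90.0, 310.0, 10.0)]))  # two hands
```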
Data Mapping Prior to applying the data streams of the Leap Motion controller to musical parameters, I used a combination of data mapping techniques to clean and shape the data streams. Once the data was parsed with the route object, I sent the "palm" data to a new subpatcher, arm, where I separated the data into a data stream for each hand, again using the route object, as can be seen in my Max patch (Figure 19, left). I sent these separate data streams from each hand to a new subpatcher, handsdata (Figure 19, right), where, using the unpack object, I extracted the XYZ positions of each hand. Figure 19: Max patch - data mapping used in USCGC Healy (WAGB-20) Using the comparison object (<=) to separate positive from negative numbers, I was able to open and close the gswitch object so that the passage of data could be controlled (Figure 20). Figure 20: Max patch - another data mapping used in USCGC Healy (WAGB-20) If the data was positive, it was sent to the table object (data streams 1, 3, 6 and 8). Data streams 2, 7, and 12 were sent directly into the table object. Essentially, the table object served as a reshaping tool to output data in a range of 0 to 127. For data stream 11, I converted continuous control data into triggers using the scale object. All eight data streams were sent to Kyma via ctlout objects to control musical parameters. Within Kyma I used SoundToGlobalController Sound objects with Capytalk expressions written inside of them to control sonic parameters, as is shown in Figure 21. Figure 21: Signal flow in USCGC Healy (WAGB-20) using a SoundToGlobalController object This approach helped me understand how many data streams controlled each sound algorithm. In some cases, I used scaling, shaping, and data smoothing techniques prior to applying the data to parametric control. In this example, the XYZ positions controlled the mixing between the eight different audio files. I also converted continuous control data streams into buttons, activated by breaching specified thresholds. Capytalk expressions were written directly inside of musical parameters (Figure 22). Figure 22: Capytalk expressions inside musical parameters As one can see, I set a range of "notes" to be played in progression every time I breached the threshold. I also controlled the amplitude. Every time a new "note" was played, it had a different amplitude value within a specified range. I used this principle of breaching thresholds on Kyma's Timeline as well, to progress through musical sections over the course of the composition. Sound Design I spent a great amount of time creating details in the sound design stage. Audio recordings of ice breaking, the ship's movement, and recordings from my partner Mr. Croy Carlin's sonic walk of the ship were transformed through sound modification processes such as sampling, granular, and subtractive synthesis in Kyma. The sound design process took place in two stages. During the first stage, I spent well over two months designing 85 different sound algorithms in Kyma (which can be found in the file USCGCHealy (WAGB-20)SoundFile.kym). Sounds that I designed in this first stage were beautiful, but their algorithms were complex, sometimes exceeding real-time computational resources or presenting considerable challenges related to the simultaneous control of the abundance of musical parameters on the Kyma Timeline.
In order to use those sounds in a real-time performance, I chose to make audio files, which were less computationally expensive than running the originally designed sound algorithms. The second stage was exporting many of those sounds into audio files, bringing them back into sound algorithms in Kyma and doing further sound design before placing the algorithms on the Kyma Timeline (Figure 23). Figure 23: Kyma Timeline File for USCGC Healy (WAGB-20) 46 An illustration of this two-stage sound design process involves the Kyma sounds Arpeggio and 3DControl7C PlayIntheMiddleOfCube. In Arpeggio, first, I designed the sound using granular, sampling, and subtractive synthesis (Figure 24). Figure 24: Signal flow of Arpeggio sound design, first stage In this first stage, I took an audio recording of a high-pitched, beep-like sound and processed it with the SampleCloud Sound object which enabled me to control many musical parameters. I made three copies of this granulated audio recording and detuned them by changing the frequencies. The grain envelope for each of the SampleCloud Sound objects was a custom-made waveform that I created out of the audio recording of a box-like sound from the ship. The amplitude of each SampleCloud was controlled via an audio recording of the ship breaking ice. Then, I combined three SampleCloud Sound objects and created eight copies of each, in the Replicator Sound object. I applied a reverberation effect, some filtering, and finally spatialized the sound into a quadraphonic system. I created seven different versions of the sound and exported them as audio files. I brought newly recorded audio files back into Kyma (as seen in Figure 25) and created six more copies. I further detuned the frequencies. 47 Figure 25: Signal flow of Arpeggio sound design, second stage Using Capytalk expressions, I specified the selection of different notes within an octave range every time I breached a threshold with the data streams produced by my performative actions. I combined these six copies together, slightly delaying the timings of the attacks of the sounds, creating an arpeggio-like effect. Using subtractive synthesis, adding reverberant effect, and spatializing the sound, I further shaped the musical material. In 3DControl7C PlayIntheMiddleOfCube, first I designed the sound using subtractive synthesis (Figure 26). 48 Figure 26: Signal flow of a 3DControl7C PlayIntheMiddleOfCube sound design, first stage I used a custom-made waveform extracted from a recording of ice breaking in twelve oscillators and independently controlled the amplitude, frequency, and formant parameter fields. I further shaped the spectrum of sound into a narrower frequency range using subtractive synthesis. I spatialized the sound and applied reverberation to it. I created seven different versions of the sound again and exported each version as an audio file. I brought these newly recorded audio files back in to Kyma (as seen in Figure 27). Figure 27: Signal flow of a 3DControl7C PlayIntheMiddleOfCube sound design, second stage 49 I used sampling and granular synthesis. I used the Morph3DSampleCloud Sound object to combine eight different audio recordings using the concept of a virtual cube. Each of the audio recordings was conceptually assigned to the corner of an imaginary cube. To create a well-balanced mix of sounds I specified the amplitude of each audio recordings in this imaginary cube provided by the Morph3DSampleCloud Sound object. 
In addition to controlling spatialization, I also gave myself the ability to morph the sounds through the eight different textures in real-time. The Morph3DSampleCloud Sound object was useful in USCGC Healy (WAGB-20) because it granted me flexibility to freely mix timbres of sound, adding elements of improvisation to the performance. The visual representation of this Sound object, a three-dimensional virtual cube, was convenient to use since the interface did not have any tangible way of operating the faders and buttons functions. Working with the Leap Motion controller I had to rely on learned positions of my arms, as well as data streams updates of the actual virtual cube, to know where the position of my hand was in three-dimensional space to operate the interface. Formal Structure The musical material for USCGC Healy (WAGB-20) was created from the audio recordings of ice breaking, the ship moving, and ambient ship sounds. Similar, to Ride On, the audio recordings for USCGC Healy (WAGB-20) were recorded with the same recording mechanism, a Tascam Recorder, and added to the unification of the piece. My consistent use of synthesis techniques such as sampling, granular, and subtractive 50 synthesis to design the sound world for USCGC Healy (WAGB-20) linked the composition together even more. Stories of the science cruise and my visit to the ship played a big role in the organization of my composition. During the 40 days of this science cruise, crew members experienced many moments of gradual change as the days passed. Everyday tasks that created small changes influenced the overall arch of the gradual change that is only observed from a viewpoint of a 40-day journey. Moments of gradual change were experienced while covering the distance to reach specific science stations and while breaking through the ice. Gradual change was experienced observing the artic environment and deploying science equipment into the water. Fixing mechanical issues of the ship and forming new friendships while waiting for the ship to get through the ice also took gradual change. In USCGC Healy (WAGB-20) the development of musical material is a gradual process. Each section blends smoothly with another, creating a sense of the ship gradually moving through cold waters. The short percussive layers imitate different shapes of ice floating. I sonically imitated the dark waters with melodic, sustained sound. The pace of the composition is slow and thoughtful to represent the slow movement of the ship. The tempo can be thought of as largo with a changing meter from 4/4 to 3/4 to 5/4 (to create a sense of floating). Over the course of the composition dynamics intensify to the final climax of the composition at 9:28. 51 Performative Techniques Each section of the composition has combinations of performative actions that help articulate the musical organization of the composition. My performative actions direct the audience’s attention to how many sonic layers I controlled, articulated the beginning and end of the composition, and heighten the importance of musical moments. In this composition my performative actions are primarily slow and gradual, similar to the way a heavy ship pushes its way through ice. My performative actions are elegant and contain circular trajectories. These trajectories resemble the sine wave shape that I observed in the Chukchi Borderlands jellyfish that move through the waters. During my creative process I was aware of which performative actions I chose to apply in any given context. 
I reasoned that if the smooth musical transitions between different layers were performed with abrupt, edgy performative actions, the sonic result would probably be more sudden and less gradual thus making the overall experience feel unnatural. Dr. Stolet confirms this type of thinking when he writes, “When we experience a live musical performance, a complete experiential field is formed from what we see and what we hear.”59 I used my desire to align the physical motion of my performative actions with the musical outcomes and my knowledge about the Healy’s aquatic objectives to guide metaphorically the types of performative actions that I would use in the piece. Here are several examples. In the initial section (0:08-2:38) I use only my right hand to control all the sonic layers of the composition. With each new pitch change the observed movement evokes the image of pushing forward through the water. Later in the composition the use of both hands to play the sounds corresponds to the expansion of the sonic world heard at 59 Stolet, The Cinematics of Music Performance. 52 around 2:56. Throughout this section the left hand is associated with the sustained high- pitch layer and the right hand is associated with the pitch-based sustained layer. Such movement enhances each musical layer’s timbral progression. From 5:26-5:31, I hold my right hand in position, which emphasizes the sustained sonic world. The percussive layer of sound is enhanced with the performative actions of the right hand. From 8:40-9:28, the last musical motive rises in density, timbre, and amplitude leading to the finale of the composition; it is enhanced by my performative actions. I emphasize the idea of a ship getting through the ice with imitative “backing and ramming” movements. I add to the intensity of the grand finale with the use of identical performative actions in both hands. Near the conclusion of the composition, as sound dissolves into silence, my hands slowly sink down towards my body. My careful consideration of the performative actions in the piece enhance the audience’s perception of the mood, feel, and musical structure of the composition. Additional Comments Because of my choice of data streams to work with, I create a sense of three- dimensional space above the interface. I place the Leap Motion controller on a table in the center stage to engage with it standing up. This set-up allows me to make my performative actions more visible to the audience without relying on more pieces of equipment such as a camera or a projector. The visual three-dimensional feedback of my performative actions created in this space critically assisted the sound world in its musical journey. In the performance of USCGC Healy (WAGB-20) the sound world one 53 hears, accompanied by my performative actions, takes the listener on a journey of a big ship traveling through the ice-cold blue waters of the Chukchi Borderlands. 54 CHAPTER III.3 IVANA KUPALA Ivana Kupala is a composition for a custom-made controller, custom Max/MSP software, and Kyma. Ivana Kupala is a real-time performance composition approximately fourteen minutes in duration. This composition was structured as a quadraphonic composition and can be easily diffused to eight speakers. The complete data-driven instrument for Ivana Kupala is comprised of a custom-made wreath interface that was fabricated using Arduino, Arduino IDE software, sensors, and the custom Max software that remaps the data and sends it to sound-producing algorithms contained in Kyma. 
This instrumental design is shown in Figure 28 below. Figure 28: Basic data flow diagram of the complete instrument for Ivana Kupala Ivana Kupala is the only composition out of the seven in my Digital Portfolio Dissertation to incorporate a custom-built performance interface. I chose to build my own interface because no existing interface served my needs in the realization of my creative idea. Creative Concept The creative concept for this composition derives from my Ukrainian heritage and my memories of a Ukrainian summer holiday. In old, East Slavic customs, flower 55 wreaths called вінок (vinok), are worn by unmarried young Ukrainian women.60 Vinki (plural of vinok) have a significant symbolic value with respect to the choice of flowers and ribbons used in the wreaths. Flowers could be fresh, made out of paper, or waxed. The color scheme of both the flowers and the ribbons has significant meaning. Different regions of the country emphasize slightly different styles, colors and choice of flowers on the wreaths. Flower wreaths play an important role during the celebration of the Ivana Kupala summer holiday, and my composition takes its title from the holiday. Ivana Kupala is celebrated beginning on the night of July 6th and is a summer solstice celebration that includes a number of Slavic rituals. The name Ivana Kupala has two combined meanings: Ivana, referring to John the Baptist, and Kupala, to bathe. This holiday relates to a magical mythical story - believed by many that on the night of Ivana Kupala, ferns bloom. Luck and happiness would follow the one who finds the flower of a fern. I always considered the Ivana Kupala holiday fun because of the traditions and family time, and my memories of the holiday stayed with me. My family used to spend Ivana Kupala at our little cottage, about twenty miles outside of the city, Mariupol. Many beautiful prairies, corn, and sunflower fields surrounded the cottage (as can be seen in Figure 29).61 Figure 29: Photo of flowers in the prairies of Eastern Ukraine 60 "Ukraine Vinok (вінок) Seasonal Wreaths and Their Symbolism," Elder Mountain Dreaming, last modified June 19, 2017, (accessed November 28, 2018), https://eldermountaindreaming.com/2017/06/14/ukraine-vinok-вінок-wreath-symbolism/. 61 Photo courtesy of Croy Carlin. 56 On the day of July 6th my grandmother and mother would go out with me to search for wild flowers in the prairies that we weaved into wreaths. The weather was sunny, with beautiful blue skies. It was hot, and a slightly noticeable breeze was gently swaying the flowers. I remember picking wild, little daisies. We then would spend some time making our flower wreaths and attaching ribbons to them, to be used in the evening in the family celebration around the bonfire. One of the traditions associated with Ivana Kupala is making a wish and jumping over the flames of the bon fire to test one’s bravery. Another tradition is for young girls to make flower wreaths and float them in water, often accompanied with a lit candle. The flowing pattern of the flower wreaths is said to give a maiden a foresight into her relationship fortunes. Although I am not sure how deeply I understood those traditions as a child, the warm, fun feeling of making flower wreaths in the prairies and the Ivana Kupala holiday celebration stayed with me into my adult years. The warm feelings were a source of inspiration for me to want to share my culture with others by creating this musical composition Ivana Kupala. 
The structure of my composition imitates a day in the prairie, where you can find peacefulness, sunshine, flowers and an occasional cloud covering the sun. Sonic Material The sonic material of Ivana Kupala arose from audio recordings that I made from different types of beads on a thread hitting various objects. I also used audio recordings of myself speaking. During my recording sessions, I realized that subconsciously I chose the color of thread to be red, the same color choice as my red ribbons on the custom- made flower wreaths interface. Fascinated by the coincidence of the red thread, these 57 words developed on their own: “I am going to follow this invisible red thread, until I find myself again, until I figure out who I am meant to be.” I recorded myself speaking these words and used the first part of the recording “I am going to follow this invisible red thread, until I find myself again” as sonic material for the Ivana Kupala composition. To augment this material I also used some percussive sounds made from beads. Performance Interface As you can imagine, for young girls, ribbons are lots of fun to play with. I thought that it would be fun to truly “play” them and developed the idea of the performance interface. The original prototype of the flower wreath was supposed to be the size of two basketballs. After testing out my idea, I decided against such a large flower wreath, since it lacked the elegancy that I wanted. I decided that the wreath should be the same size as a wreath worn in the Ukrainian tradition. As a result of my careful considerations, I chose to use red peonies, white roses, and sunflower daisies in my flower wreaths. Peonies in Ukrainian culture are associated with flowering of maturity. Roses are associated with goodwill and prosperity. Daisies were my association with purity. I also had some greenery on the wreaths, which was my association with the beauty of nature and transformation. In addition, I chose my ribbons of the interface to be white and red for two reasons. The first reason was that these two colors symbolized joy, and I consider myself a joyful person. The second reason was that red worked well to accentuate my interactions with the ribbons. I originally made four flower wreaths to be used in a sound installation where people could interact with the flower wreath’s ribbons (as can be seen in Figure 30). 58 Figure 30: Photo of an installation using custom-made flower wreath interface I wanted to preserve the wreaths and used them as a performance interface for my composition, Ivana Kupala. The interface was in the shape of two flower wreaths with sensors embedded inside of the red ribbons (which can be seen in Figure 31). I embedded four sensors in the red ribbons: two force sensitive resistors (FSR) and two flex sensors. The force sensitive resistor varied the resistance based on how hard I pressed on the sensing area, which was half an inch wide.62 I used FSRs as both a button and a continuous controller. The flex sensors that I used were around two and a quarter inches in length and changed the resistance when the sensor was flexed.63 I also used the flex sensors as a continuous controllers and buttons. I chose these two kinds of sensors because they were small enough to fit inside of the ribbons. I used four sensors in the custom-made flower wreath interface because four data streams seemed adequate for the purposes of my composition. 
62 "Force Sensitive Resistor 0.5," SparkFun Electronics, (accessed November 28, 2018), https://www.sparkfun.com/products/9375. 63 Ibid. 59 Figure 31: Data streams used in the Ivana Kupala composition The Arduino Mega microcontroller sent the sensor data via serial communication to the computer using a USB cable. I received four data streams from analog pins zero to three (where sensors were connected to the Arduino microcontroller) inside of the Arduino IDE software on my Arduino microcontroller board. The Arduino IDE sketch that I used was adapted from the work of artist Daniel Jolliffe with contributions from Seejay James and Thomas Oullet Fredericks 64 (which can be seen in Figure 32). 64 “Daniel Jolliffe / Electronic Media Art,” Daniel Jolliffe, last modified November 29, 2010, (accessed January 1, 2016), http://www.danieljolliffe.ca/writing code/writing code.htm. 60 Figure 32: Arduino IDE sketch adopted from Daniel Jolliffe Musical Challenges I experienced some challenges while working with the custom flower wreath interface. However, the opportunities of having a one-of-a-kind interface outweighed these difficulties. One of the challenges was crosstalk between wires. Because of this crosstalk, I always worried if the interface was going to work in the way I intended it to work when the time to set up for the performance came. The unpredictable occurrence of crosstalk tended to demand additional time during technical rehearsal to confirm all data streams were properly flowing. By working with this custom-made interface, I learned that it was best to have two identical copies of the interfaces so as to have a backup for performances. Another challenge was that the ribbons lost their elasticity by placing sensors inside of them due to the weight and shape of the wires attached sensors to the Arduino microcontroller. 61 Musical Opportunities One of the biggest advantages to the flower wreath interface was the complex and detailed set of associations that were created between the interface, the sound world, my physical demeanor, and the performative space. The colors and simplicity of the interface as well as my performative actions were easy for the audience to understand, allowing them to fully experience the sonic atmosphere and absorb the tranquility of the performance. Because I selected the sensors that would be embedded in the wreath interface, I essentially designed how a musical performance would work, thus providing an additional musical opportunity by offering increased artistic and performative freedom. I wanted the physical interactions with my interface to be notably different from the interactions I had with the Leap Motion controller. A primary intention was to formulate a performance style in which I can perform this fifteen-minute composition comfortably. I wanted to imitate the atmosphere of sitting on the ground in the prairie and tracing my fingers along the stems of flowers. Through designing the custom flower wreath interface, I brought my memories about this Ukrainian holiday into the visible and musical worlds for the audience to experience. Data Mapping Serial communication data from the Arduino microcontroller was received inside of my Max patch IvanaKupala.maxpat, which was also adopted from previous work of Danielle Jolliffe (seen in Figure 33). 62 Figure 33: Max patch - data mapping used in Ivana Kupala In Max, I used the line object to smooth out jittery data streams received from the sensors and used the table object to reshape data streams. 
Just like inside of USCGC Healy (WAGB-20), in Ivana Kupala I also scaled the data to be outputted within a range of 0 to 127 before sending to Kyma. Within Kyma, I did additional data mapping to control effectively musical parameters effectively using Capytalk. In Figure 34, one can see continuous control data being sent to control the frequency and formant musical parameters. This continuous control data was further scaled, offset, and smoothed. Figure 34: Capytalk used for data mapping in Ivana Kupala Figure 35 shows another example of data mapping that I did inside of Kyma. Throughout my composition, I transformed various continuous control data streams into buttons to trigger new events. I made these button functions simple so that I could control 63 complex musical algorithms with one button messages as well as using button functions as a way to transition between musical sections. Figure 35: Example of data mapping inside Kyma for Ivana Kupala Sound Design Through sound design, I transformed original audio recordings into a musical composition. I used sound modification processes such as analysis and resynthesis and Kyma’s proprietary Time Alignment Utility (Tau) algorithm. I also used sampling, subtractive, and granular synthesis. Using a combination of different synthesis methods, I was able to create custom layers of sound specific to the Ivana Kupala composition. I analyzed the audio recordings of my voice and modified envelopes of amplitude, frequency, formant, and bandwidth with Kyma’s Tau algorithm. Almost every time I used this method of sound design, I layered more than one TauPlayer object together. I slightly detuned each new spectrum of sound creating a natural sonic environment. A typical arrangement of multiple Tau algorithms is shown in Figure 36. 64 Figure 36: Signal flow of analysis and resynthesis using Tau in Ivana Kupala During the sound design process, I created some sounds by the real-time control of EventValues. I recorded these sounds to disk as audio files and brought them back into Kyma playing them in the composition with mechanisms like the Sample Sound object. I then modified the spectrum of these sounds using filters. Finally, the sound was spatialized (as seen in Figure 37). Figure 37: Signal flow of subtractive and sampling synthesis used in Ivana Kupala I took another approach to sound design in Ivana Kupala using granular synthesis (as shown in Figure 38). First, I resynthesized spectrums of the recordings of percussive sounds using the CloudBank Sound object. Second, I controlled how many partials would play, the overall frequency and amplitude, the number of grains per partial, and the 65 duration of each grain. Third, I combined three slightly detuned layers together with an additional layer of sound. In the second layer, I used sampling and granular synthesis. I granulated audio recordings of percussive sounds. I controlled amplitude, frequency, loop start and end points, total cloud density, pan position, and the duration of each grain. These granular sounds were especially pronounced from 4:37-6:24. Figure 38: Signal flow of sound structure using granular synthesis used in Ivana Kupala Formal Structure In Ivana Kupala the musical material was designed from the audio recordings of beads and my speaking voice. The structure of the composition, Ivana Kupala, enhanced the overall experience of spending a day in the prairie, as it gave a feeling of a peaceful time progression. 
I enjoy music when it develops in a gradual way, such as in compositions like Annea Lockwood’s A Sound Map of The Hudson River,65 and Eliane Radigue’s Kyema.66 Such gradual development of musical material was heard in both USCGC Healy (WAGB-20) and Ivana Kupala. In Ivana Kupala, musical material was 65 Annea Lockwood, writer, A Sound Map of the Hudson River. Lovely Communications, 1989, CD. 66 Eliane Radigue, and Karma-gliṅ-pa, writers, Kyema, Intermediate States. Experimental Intermedia, 1990, CD. 66 warm, lyrical, and present through the whole composition. The composition was mostly structured out of sustained layers of sounds built up one on top of another. A definite, melodic contour was transformed throughout the composition, with moments of contrasting rhythmical motives. The composition can be understood as occurring in four sections (0:00-4:36, 4:37-7:00, 7:00-13:05, 13:06-14:44) with the climax of the composition at 10:53. I used Kyma’s Timeline to organize my sound algorithms in a time domain format. Just like in Ride On, Ivana Kupala’s Timeline had special instructions I needed to see during the performance. The distinct color scheme I chose, helped me visually anticipate each new musical section as it came. Performative Techniques In addition to the musical structure and sonic material, physical motion was used to articulate the peaceful and comfortable environment of a Ukrainian prairie. I used my left and right hand to interact with the interface. I did not use movements that were sudden, jerky, or harsh. My performative actions enhanced the peaceful sonic world. My facial expressions absorbed the sensibilities of the aesthetic of the composition and seemed important. My facial expressions such as smiling can be understood as my sincere response to the ambiance of certain sonic moments that reminded me of my special memories - the sweet, warm times spent with my family on the Ukrainian prairie making wreaths on Ivana Kupala day. 67 Additional Comments Flower wreaths made their way into Ukrainian folk attire as a part of a national dress and are still considered a big part of Ukrainian traditions. Today, this national attire is worn during festivals and holidays. In the previous times flower wreaths were part of the everyday attire. The colors and embroidery of Ukrainian folk attire typically accentuated a geographical region of Ukraine. National attire is different for men and women, however both wear вишива́нки (vyshyvanky), embroidered shirts. In the performance of my composition Ivana Kupala I accentuated my Ukrainian heritage by wearing vyshyvanka and a flower wreath. The flower wreath interface was clipped onto a microphone stand. The microphone stand was raised high enough that a performer could comfortably reach each ribbon while sitting down. The placement of the interface and a performer on stage needed to form a 130-degree angle facing the audience. Such placement was necessary for the performative actions to be observed by the audience. The interface and the national attire worn by a performer were all part of the performative space, enhancing the live performance. As one can see, my Ukrainian heritage and summer solstice holiday played an important role in the creation of a one-of-a-kind, custom-made interface for Ivana Kupala. The flower wreath interface was inspired by my artistic progression of what sonic materials I used. 
Finally, sound recordings transformed through sound design naturally formed into a musical journey that came together through my performance of Ivana Kupala. 68 CHAPTER III.4 WIND IN THE FOREST My composition, Wind in the Forest, for a Microsoft Kinect controller, Delicode NI mate software, and Kyma is a real-time performance composition approximately eight minutes in duration. Similarly to the first three composition Wind in the Forest was conceptualized as a quadraphonic composition; however, the four audio channels may reasonably be diffused to eight or more speakers. The complete data-driven instrument for Wind in the Forest is comprised of the Microsoft Kinect interface connected to Delicode NI mate, which receives the data and maps and routes it to the sound-producing algorithms contained in Kyma. This instrumental arrangement is shown in Figure 39 below. Figure 39: Basic data flow diagram of the complete instrument for Wind in the Forest Creative Concept The creative concept for this composition came from two sources: wind data acquired from the Oceanus ship and trees that I passed by while I was jogging through an Oregon forest. Those trees made me imagine a magical forest with talking trees and magical creatures. On February 27, 2017 the Oregon State University ship Oceanus, was going through post shipyard sea trials near Alameda, CA. Marine technicians were testing science gear. One of the sea trials conducted was testing the ultrasonic wind sensor. 69 Derived, true wind data was gathered (using the Ashtech GPS for the ship’s heading) with this ultrasonic wind sensor. I received a text format of this wind data in seven datasets. These seven datasets carried information about relative wind, relative direction, true wind, true direction, ship’s heading, ship’s speed, and wind direction. Before I could do anything with this data, I studied the data to see if I found anything interesting just by looking at the numbers. Seven datasets of wind data were not clearly marked; therefore, I was unable to identify which of the columns referred to specific information from the sensor readings. The ambiguity of the datasets was inspirational to me, so I used this material as the basis for my Wind in the Forest composition. Another part of the creative concept for Wind in the Forest came from seeing a large variety of trees on my exercise runs along the Blanton Ridge trailhead. The trailhead started at an intersection of Willamette Street and W 52nd Avenue in Eugene, OR (which can be seen in Figure 40).67 Figure 40: Photo of common trees along the Blanton Ridge Trailhead The Pacific Northwest is rich in vegetation. Heavy precipitation and mild temperatures are the main contributing factor to the trees’ height and greenness. I recognized some of the trees – firs, maples, cedars, spruce and pines – on my runs. Many 67 Photo curtesy of the author. 70 trees were covered in green moss, giving them a deep, rich emerald appearance. I found quietness and comfort running through the park’s trails. These natural trails were places where I could brainstorm and find creative ideas. When I run my thoughts drift from one to another, I imagined running through a magical forest. Imagining a forest with magical creatures was easy as I read about them in many science fiction books. Some of the magical creatures that came to my mind were goblins, elves, and dwarfs. On one such run I was especially aware of the sonic signature of the forest as well as how trees were moving in the wind. 
I wanted to incorporate the essence of the movement that I saw in the trees into Wind in the Forest through my performative actions. In my composition, Wind in the Forest, one can hear wind-like material that was my imitation of the forest as a whole. Layers of different percussive timbres were my representation of magical creatures coming out for mischief and play in the forest. Sonic Material The sonic material of Wind in the Forest arose from the seven datasets of sonified wind data gathered along the West Coast. Given that I was not a scientist, analyzing the wind data was not possible and the considerable string of numbers made little sense to me. I was, however, interested in the sonic capabilities that the data could drive. I sonified seven datasets of wind data by using Kyma. To make the data more malleable in the sonic domain I filtered out some of the subaudio frequencies that had initially been present in the data. After sonifying the datasets, I used my musical sensibilities to determine which of the datasets would best suit my musical objectives. Waveform representations created from datasets 2, 6, and 7 are shown in Figure 41. 71 Figure 41: Waveform representation of sonified wind data I based my choices of selecting which datasets to work with on two attributes. First, sonically speaking, I liked the natural rhythms that these three datasets created. Second, visual representation of the sonified wind data contained in these three datasets was captivating to me due to percussive characteristic that lacked any sort of melodic material. From my experience working with recorded audio files, however, I knew that it was compositionally advantageous to work with audio recordings that possessed some melodic contour. Therefore, I augmented these sounds with audio recordings of four percussion instruments: celeste, cowbell, drum, and gong. Through the sound design process, I came to realize that percussive-pitched sounds combined well with the custom- made waveforms derived from sonified wind data. Performance Interface While working on sound design, I simultaneously spent time working with the Microsoft Kinect interface to determine which data streams to use for control of musical parameters. After much contemplation, I chose to work with the Microsoft Kinect interface for Wind in the Forest because I wanted to use my whole body in order to physically imitate trees moving in the wind, to execute the performative actions that would bring the composition to life. Kinect (formerly known as Project Natal) is a 72 gaming interface developed by Microsoft for Xbox 360. According to the “Kinect Hacking” article, “Kinect includes an RGB camera that sends images of 640 X 480 pixels 30 times per second.”68 The depth sensor of the Kinect uses an infrared LED laser and a micromirror array.69 Microsoft Fact sheet states, “the depth sensor consists of infrared laser projection, able to capture video data in three-dimensional space.”70 The Kinect conveniently sends its data to a computer through a USB connection. At least three Kinect models are publicly available, and I selected to use Kinect model 1414. Musical Challenges One of the biggest challenges in Wind in the Forest was the extreme density of the data streams the Kinect interface produced. Using OSC for data transmission consistently crashed the NI mate software running on my i5 core processor driven laptop. 
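To give a rough sense of this density problem, the following sketch (illustrative only; it does not reproduce NI mate's internal behaviour, and the numbers are invented) contrasts a stream of per-frame floating-point values with the much sparser stream that results from quantizing to 7-bit, MIDI-style controller values and suppressing repeats.

# Illustration of data density, not NI mate's implementation: quantize a
# normalized 0.0-1.0 stream to 0-127 and only emit a message when the value changes.
import random

def to_midi_cc(stream, last=None):
    out = []
    for x in stream:
        cc = max(0, min(127, int(round(x * 127))))
        if cc != last:          # change-filtering: nothing is sent while the value holds
            out.append(cc)
            last = cc
    return out

# One of eleven streams, refreshed 30 times in one second, for a nearly still hand:
raw = [0.5 + random.random() * 0.02 for _ in range(30)]
print(len(raw), "raw frames ->", len(to_midi_cc(raw)), "MIDI-style messages")

Because a MIDI continuous controller carries only 128 discrete values and nothing is sent while a value holds steady, the message rate drops sharply for any body position that is not constantly changing.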
While working on Wind in the Forest, I had access to a computer which had an i7 core processor and processing OSC data was not a problem. When I transferred all of the materials to my laptop with an i5 core processor, I realized that I could not run eleven simultaneous data streams through OSC communication. Using MIDI communication resolved the data density issue. Another challenge working with the Kinect interface was, just like working with the Leap Motion interface, the Kinect’s sensitivity to lighting in the room. This sensitivity necessitated that I pay close attention to the ambient and other light that existed during practice, technical rehearsals, and performances. A particular challenge was that the data streams were interconnected. For example, if I was not facing 68 "Kinect Hacking," IDAV: Institute for Data Analysis and Visualization, (accessed December 3, 2018), http://idav.ucdavis.edu/~okreylos/ResDev/Kinect/MainPage.html. 69 Ibid. 70 Ibid. 73 the Kinect controller directly, data streams produced with the left hand influenced the data streams produced with the right hand. Data streams of hands Z position and body Y position also interfered. In my composition, careful consideration was given to the performative actions, so I minimized the impact of one data stream upon another. Musical Opportunities The Kinect interface is capable of producing many different data streams. I chose to work with eleven data streams: the XYZ position of each hand, Body XY, Bow, Lean, and Twist, to control the sound-producing algorithms residing in Kyma. In Figure 42 I label eleven data streams used in the performance space. Figure 42: Diagram of data streams used in Wind in the Forest The opportunity working with the Kinect interface provided was the ability to use a performer’s whole body to control musical parameters. Performative actions such as leaning the body and bowing and twisting the body were large performative movements 74 and were easily observed by the audience. Using the complete human body to execute a musical performance created data streams which resulted in musical outcomes when applied to the control of musical parameters. Data Mapping Wind in the Forest was one out of the two compositions in my Digital Portfolio for which I did not use Max for data mapping. Instead, I used Delicode NI mate software to receive data from the Kinect interface (Figure 43). Figure 43: Photo of Delicode NI mate software NI mate supports the Microsoft Kinect model 1414 interface71 and provides extensive options in terms of data mapping including scaling, offsetting, and smoothing. NI mate also provided multiple algorithms that governed the different ways that it could follow human movement. Among those choices were Skeleton tracking, Controller tracking, and Triggers. I chose to work with Controller tracking because the data streams that it produced created the most accurate representation of my body movements. NI mate software also allowed data streams to be sent from the Kinect controller to other 71 "NI Mate | NI Mate," NI Mate, last modified 2011-2017, (accessed May 18, 2017), https://ni-mate.com/. 75 programs via OSC or MIDI messages. I used the MIDI protocol to execute my musical intensions. I chose to work with MIDI data instead of OSC because MIDI data was less dense in comparison to OSC and would not crash the NI mate software. The NI mate software interface that permits user specification of data mapping and data routing is shown in Figure 44. 
Figure 44: Data mapping using NI mate software for Wind in the Forest Further data mapping was done inside of Kyma to control musical parameters. For example, one of the first Sound objects on the Kyma Timeline was RhythmGlassyHands LeftX. I inverted continuous controller 10 (Left Hand X position) to a data stream called Notes. This data stream served as an offset for the frequency field of the Sample Sound object that it was controlling. In this example I scaled, offset, smoothed, and inverted the data stream Notes to provide the musical result that I desired (Figure 45). 76 Figure 45: Signal flow of further data mapping inside of Kyma for Wind in the Forest In some cases, I converted continuous control data streams coming from the Kinect interface into button-like functions. In these cases, I set a numerical threshold that a continuous controller needed to breach in order for the button function to be executed. Button messages were used to progress from one section to the next on the Timeline during a performance. Sound Design After creating custom-made waveforms, I made my choice about which ones to keep based on my visual inspection of the waveforms. Ultimately, I created seven custom-made waveforms out of datasets 1, 2, and 7. I ended up using five out of the seven custom-made waveforms from datasets 2 and 7. Through experimentation, I discovered that my custom-made waveforms served my musical needs best if they occurred in full oscillations, both above and below the zero axis. The five custom-made waveforms used in the sound design of my musical material are shown in Figure 46. 77 Figure 46: Custom waveforms created from sonified wind data During the sound design process for Wind in the Forest, I took time to find appealing musical timbres. I created synthetic spectrums of sonified wind data and modified them by the use of granular synthesis. In Figure 47, one can see an example of how I created a synthetic spectrum out of a custom-made waveform of sonified wind data. Figure 47: Signal flow of a synthetic spectrum created out of custom waveforms From waveform 3 (shown in Figure 46), I created a spectrum consisting of 512 partials. Using Capytalk I controlled the frequency and amplitude of this spectrum while simultaneously selecting the 20 lowest partials to be resynthesized using the FormantBank Sound object. To make the overall sound more complex, I combined an 78 original signal with a pitch-shifted signal and a delayed version of an original signal. Using subtractive synthesis, I attenuated higher frequencies, compressed the sound and further spatialized it to eight channels. Figure 48 shows an example of how I created a synthetic spectrum of sound from a custom waveform using granular synthesis. Figure 48: Signal flow of another example of a synthetic spectrum created out of custom waveforms I controlled the amplitude and frequency of the created spectrum with Capytalk. I selected the lowest 30 partials and resynthesized the spectrum using the OscillatorBank Sound object. Using granular processing, I modified the synthetically generated output of the OscillatorBank controlling the density and grain duration. Using subtractive synthesis, I emphasized the higher frequency range, applied reverberation, and spatialized the sound. Another type of sound design prominent in Wind in the Forest was created with use of sampling and amplitude modulation. Figure 49 shows an example of this process. 
79 Figure 49: Signal flow of sampling synthesis with AM in Wind in the Forest The Sample Sound object played back an audio recording of a celeste. Using Capytalk expressions, I controlled amplitude, frequency, and the loop end of the audio recording. In the amplitude field of the Sample Sound object, one can see that a part of the Capytalk expressions was a control signal from another Sound object, the AmplitudeFollower Sound object. The AmplitudeFollower took the amplitude envelope of the audio recording of sonified wind data (from dataset six) and used it to control the amplitude of the celeste audio recording. I then combined four copies of this celeste recording each of which were temporally decorrelated and amplitude modulated. I then slightly delayed the (four copies of the celeste recording) sound and created rhythmical patterns using the Chopper Sound object. I further combined this signal with a similarly built signal and spatialized it. Three described methods of sound design (Figure 47, 48 and 49) were used throughout the composition in Wind in the Forest. Through my sound design, I created musical material that represented what my imagined magical forest sounded like to me. 80 Formal Structure The musical material used for Wind in the Forest derives from sonified wind data and audio recordings of percussive instruments. To further unify the composition, I combined the percussive-pitched sounds with custom-made waveforms from sonified wind data. Another unification process that I employed in structuring my composition is applying amplitude modulation to sounds created with granular and sampling synthesis. These sonic modifications created a web of associations between all the sounds in Wind in the Forest. The structure of the Wind in the Forest consists of six sections (0:13-1:34, 1:34-2:40, 2:40-3:42, 3:42-6:01, 6:01-6:52, 6:52-8:00) and was characterized by my choice of performative actions. The climax of the composition was heard in the fourth section at 5:38. Section one served as an introductory section. In this section, I used data streams controlled by my body and hands. I decided that the sound should have a wide variety of layers to be controlled by a variety of data streams. Section two had a thinner variety of layers. I used data streams created with my hands. A sonic duet was happening and later was transformed in section five. Section three was similar to section one, with a sustained low-pitched layer of sound. The sustained low-pitched layer was embellished with distinct, percussive material. Section four functioned as the “development” of the Wind in the Forest composition. In this section, I used data streams created by my body and added data streams generated with the use of my hands towards the end of the section. Section five was a calming section. I used performative actions of my hands and, as a result, created light layers of sound that gently flowed. Finally, section six served as a closing section. I used data streams generated by movement of my body in the sixth section and brought back the performative actions from the beginning of the composition. 81 Performative Techniques Wind in the Forest was full of tempo changes and sudden dynamic changes to depict gusts of wind through the forest. In addition to the sonic characteristics, the physical motion used to perform this composition derived from the idea of the physical motion of trees moving in the wind. 
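Returning briefly to the amplitude-modulation design of Figure 49, the following numpy sketch illustrates the general amplitude-follower idea in miniature. It is not Kyma's AmplitudeFollower, and the two test signals are stand-ins for the actual recordings: the envelope of one signal is extracted and used to shape the amplitude of another.

# Illustrative amplitude follower: rectify one signal, low-pass its level,
# and use that envelope to modulate a second signal.
import numpy as np

def envelope(signal, rate, smooth=0.01):
    coeff = np.exp(-1.0 / (smooth * rate))
    env = np.empty_like(signal)
    level = 0.0
    for i, s in enumerate(np.abs(signal)):
        level = coeff * level + (1.0 - coeff) * s   # one-pole smoothing of the level
        env[i] = level
    return env

rate = 44100
t = np.arange(rate) / rate
wind = np.random.randn(rate) * np.exp(-3 * t)     # stand-in for the sonified wind data
celeste = 0.5 * np.sin(2 * np.pi * 880 * t)       # stand-in for the celeste recording
modulated = celeste * envelope(wind, rate)        # the wind envelope shapes the celeste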
The four distinctive layers of sound existing in Wind in the Forest are sustained, low-pitched granular layer, cushion layer, percussive layer, and the duet layer. The sustained-low granular layer created a sustained timbre and was heard throughout the composition. The cushion layer assisted the sustained, low-pitched granular layer to give the sustained, low-pitched granular layer more musical depth. Through the composition, the sustained, low-pitched and cushion layers were predominately controlled by data streams created with the use of performative actions of my body. The percussive layer used the pitched percussive sounds with similar characteristics with changing rhythms through the composition and was controlled by data streams created with the use of performative actions of my hands. The duet layer was substantially heard in sections three and four; this was a layer of sound where I used sound in the higher and lower frequency ranges. The duet layer also was controlled by data streams caused by performative actions of my body. I also used performative actions to help guide the listener through different sections of the composition. For example, Wind in the Forest started out with a distinctive performative action of me bowing to the audience. The composition ended with the same performative action. When I transitioned from section one to section two, the contrasting use of performative actions of both hands emphasized the contrast in the sonic world. When I transitioned from section two to section three, the contrast in performative actions emphasized change into a new section. 82 The dramatic moment, enhanced by my performative actions, was at 3:41-3:43. This performative action was used to introduce a new, light timbre. At 4:41, the sudden, forward movement caused a wind-like sound. My choice of performative actions at 5:20- 5:38 helped build the climax of the composition. My repeated, sudden, forward-moving performative actions caused another spontaneous sound and the transition from section four to section five at 6:01. Finally, at the end of section six, my performative actions underlined the decaying of the sound world, which was my representation of wind finally calming down. Additional Comments In an actual performance of the piece, the Kinect controller was placed 107 inches away from the performer, giving the performer 75 x 107 inches of performance space to move around. As one can see in the video documentation, I marked the performative space with blue tape to help me see the space during the performance. To stay within the “reach” of the Kinect controller marking the stage with tape was necessary. The lighting had to be adjusted prior to the performance for the Kinect interface to properly function in recognizing the performer’s movements. The performative space had to be cleared from various unused objects interfering with creation of data streams and getting in the way of the performer moving on the stage during the performance. Unlike the performance of Ivana Kupala and Ride On, Wind in the Forest did not require any specific choice of clothing. However, for the Kinect controller to better recognize the position of one’s hands, one should choose clothing that leaves performer’s hands and arms free of fabric. 83 The sonic world that I created out of the sonified wind data and audio recordings of percussive instruments was accompanied by my performative actions in the performance space of Wind in the Forest. 
Listening to and watching my performance of Wind in the Forest, an audience can experience my journey - the journey of transforming a creative concept that developed from working with the sonified data and audio recordings into a musical composition through sound design, musical processes, and performative actions. Through the musical journey perhaps an audience could, like myself, imagine mischievous magical creatures playing in the windy forest. 84 CHAPTER III.5 IMMIGRATION (GAMETRAK VARIATIONS) My composition, Immigration (Gametrak Variations), for a Gametrak controller, custom Max/MSP software, and Kyma is a real-time performance composition approximately twenty-two minutes in duration. This composition is written in three movements and is the longest of the seven compositions in my Digital Portfolio Dissertation. Immigration (Gametrak Variations) was created for four audio channels which can be diffused to eight channels. The complete data-driven instrument for this composition is comprised of the Gametrak interface connected to and sending data to the custom Max software, which maps the data and sends it to sound-producing algorithms contained in Kyma. This configuration is shown in Figure 50 below. Figure 50: Basic data flow diagram of the complete instrument for Immigration (Gametrak Variations) Creative Concept The creative concept for this composition came from my journey moving to America with my mother in 2004, and the process it took for us to become American citizens. In July 2017, ten years after I became an American citizen, my mom finally received her long-awaited letter from the United States Citizenship and Immigration Services (USCIS) to schedule an interview for her citizenship test. In search for an idea for a new composition, I decided this fourteen-year process was an important experience 85 in my life. It was the perfect time for me to express my emotions through creativity, in anticipation of my mother receiving her citizenship. As I formulated musical ideas about Immigration (Gametrak Variations), I simultaneously focused on my choice of the interface, the overall structure of the composition, and the sonic material. I wanted to express this experientially difficult process of going through a long journey of us becoming American citizens through my music. The citizenship process included documents such as the visa, green card, and passport. In my composition, the actual events and inner emotions related to each of the three documents were expressed as three different movements of the composition. In 2003, while still in Ukraine, my mother and I applied for visas to come to the United States. Receiving approval for our visa request took us almost one year. Once we received our visas, we had under a month to pack our belongings and make the move from Ukraine to the United States. Together, we had one suitcase and two backpacks to put our most essential necessities for the trip. I wanted to take all of my books and toys, but it was not possible. I remember January 28th, 2004 like it was yesterday. As a thirteen-year old girl, I was excited about traveling. This journey from my former homeland to my “to be” homeland was my first flight on an airplane. I could not wait to get on the plane and see if America was anything like it was shown in Hollywood movies. I was excited about learning at a new school, making new friends, and having new experiences. I was uncertain about what to expect and what challenges would come my way. 
In the first movement, Visa, I recreated the adventure of flying on a plane for the first time. I also recreated my uncertainty about what was to come and the feeling of excitement as I anticipated new experiences. 86 In the second movement, entitled Green Card, I musically depicted the long waiting process during which many difficulties, challenges, and obstacles were faced. The second stage of the citizenship process was the longest, especially for my mother. An unbelievable amount of time, money, and paperwork were necessary to become permanent residents and to receive green cards. The third movement, Passport, represented the culmination of the composition. The third stage of the citizenship process for my mother was much quicker than the second stage. After filing documents for citizenship, my mother studied for her civics test and then waited to hear back from the USCIS. Finally, in July 2017, she received an invitation for an interview with the USCIS agent. The interview consisted of making a trip to the USCIS Field Office in downtown Minneapolis where my mother was questioned about every aspect of her life. She had to prove to the USCIS officer her knowledge of American culture, civics, writing, and reading in proper English. We were both nervous and scared because she had a strong accent speaking English. Sitting in the waiting area with my sister for an hour was the longest and most stressful hour of my life. Upon completion of the interview, we were not sure if my mom had passed it. We were advised by the USCIS officer to patiently wait for a letter that would determine the outcome. Within a week my mother received another letter from the USCIS inviting her to an Oath Ceremony. The Oath Ceremony was the last formal requirement by the United States in becoming an American citizen. As one could imagine, my family in disbelief read and reread the letter, but it was not until the Oath Ceremony that we truly felt the moment of joy and relief. To illustrate our experiences, the third movement of Immigration (Gametrak Variations), Passport, unfolds structurally quickly. Mixed 87 emotions of disbelief, fear, stress, and long-awaited relief were projected in this last movement. Despite every movement having its own musical form, the focal point of the composition builds up to the end of the third movement to express the long-awaited relief for my family during the process of becoming American citizens. Sonic Material The sonic material of Immigration (Gametrak Variations) arose from a series of audio recordings that I made. I recorded six sentences of me speaking in Russian about the citizenship process. The translations of these six sentences spoken by me were as follows: “What does immigration mean? Being far away from family and friends. Being in a different culture, where for a long time one feels out of place. The struggle to get a job, insurance, medical help, and difficulty finding ingredients for cooking food that you are accustomed to eating. Spending many years waiting to receive legal documents. So why go through it all?” In my composition Immigration (Gametrak Variations), I did not want my spoken words to be the foreground of the composition. I wanted my words to be unrecognizably transformed, as these words were the sonic material for the composition. On the contrary, I used snippets of un-transformed audio recordings at important moments to accentuate the musical nuances. 
I found musical moments in my speech describing my experience of going through the citizenship process. The musical moments were of melodic and rhythmic nature. I paid close attention to the sounds of consonants and vowels in my speech. Using Kyma’s Time Alignment Utility, I was able to further transform the spacing between sounds of my speech into an experiential musical journey by engaging with the flow and intonation of my speaking voice. The voice is such a 88 powerful sonic source that is rich in information, and it can be transformed in a variety of ways using sound synthesis techniques. In my composition, one of the ways I created variations of the sonic materials was by using different combinations of sound synthesis techniques. I used different combinations of sound synthesis to transform audio recordings of my voice into musical timbres and textures. Performance Interface In Immigration (Gametrak Variations) I chose to use the Gametrak controller as a performance interface because of the flexibility it offered in terms of how it could be physically positioned in the performance space. The Gametrak interface is relatively small with a dimension of 6.5” X 7” X 5”. The Gametrak weighs about two and half pounds and can be connected to the computer via USB. Originally the Gametrak was designed to be used in a virtual golf environment with a PlayStation2.72 The Gametrak has two red retractable cables made out of nylon.73 Each of the two cables are attached to the reel inside of the plastic encapsulation. The cables can be drawn out and retract back into the interface’s body via a small guide arm out of which the cable passes. The two cables of the Gametrak controller extend out to about nine feet to articulate X, Y, and Z positions in three-dimensional space (Figure 51). Six possible data streams are created which can be mapped and routed to musical parameters. 72 "Gametrak & Real World Golf PlayStation2," In2Games, Razor Tie Artery Foundation Announce New Joint Venture Recordings | Razor & Tie. 2000-2008, (accessed December 15, 2018), https://web.archive.org/web/20080610084443/http://www.deafgamers.com/05reviews_a/gametrak&rwgolf _ps2.htm. 73 Stolet, "Twenty-Three and a Half Things about Musical Interfaces." 89 Figure 51: Photo of data streams used for Immigration (Gametrak Variations) I wanted to interact with the interface in multiple ways, so I physically placed the Gametrak in three different positions that corresponded to the three movements of the work. In movement one, I placed the Gametrak inside of a backpack, in movement two I placed it on a table, and in movement three I positioned it on the floor. Musical Challenges One of the challenges with working with the Gametrak interface was the interconnectedness of the X, Y and Z axis data streams. Because I did not need to use all six data streams simultaneously, it was easy for me to work around this challenge. Another challenge working with the Gametrak interface was retractable cables sliding back into the interface which made click-like noises. However, because traditional instruments make sounds not related to musical intent, these Gametrak noises seemed to add to the performative ambiance. 90 Musical Opportunities These two minor challenges working with the Gametrak interface were outweighed by the opportunities. The Gametrak interface was small enough, I could easily travel with it and fit the device well inside of a backpack. 
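Although in this piece the Gametrak's data was read through a custom Max patch, the following hypothetical Python sketch shows one way the interface's six data streams could be read, assuming the device enumerates as a standard USB game controller with six axes; the axis ordering and labels are assumptions rather than measurements of the actual hardware.

# Hypothetical sketch: read six Gametrak axes as an ordinary game controller
# and normalize them into 0.0-1.0 control streams.
import pygame

pygame.init()
pygame.joystick.init()
gametrak = pygame.joystick.Joystick(0)   # assumes the Gametrak is the first device

LABELS = ["LX", "LY", "LZ", "RX", "RY", "RZ"]   # assumed ordering of the six streams

while True:
    pygame.event.pump()                                   # refresh the device state
    count = min(6, gametrak.get_numaxes())
    axes = [gametrak.get_axis(i) for i in range(count)]
    streams = {name: (a + 1) / 2 for name, a in zip(LABELS, axes)}   # -1..1 -> 0..1
    print(streams)                                        # stand-in for onward mapping
    pygame.time.wait(33)                                  # roughly 30 updates per second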
The Gametrak interface was also light weight so that I could easily switch the interface’s position between movements. As I mentioned before, the Gametrak controller was a reliable interface in terms of sending data streams it provided. This general dependability was probably fortified through its USB connection with the computer. With some data mapping, the interface functioned as an easy “plug and play” device. Data Mapping Data mapping for Immigration (Gametrak Variations) happened in both Max and Kyma. Each movement used different settings of data mapping inside of Max. To do further data mapping inside of Kyma, I used Capytalk. Because I used different combinations and ranges of data, I chose to structure data mapping separately for each movement inside of Max. During a performance, switching between data mapping structures inside of Max was more efficient than doing the same inside of Kyma. In Figure 52 (Immigration (GametrakVariations).maxpat) one can see how I used the gate object to switch between three different data mappings for the three different movements. I used the route object to route data streams received from the Gametrak controller. The data mapping techniques used were scaling, offsetting, reshaping, and smoothing. I used the scale object to bring data to within desired ranges. I used the table object to reshape data streams. In the first movement I used four data streams, the X and Z axes of both 91 cables (shown in the yellow area of Figure 52). In the second and third movements, I used all six data streams (shown in the blue and green areas of Figure 52). Figure 52: Max patch - data mapping used for Immigration (Gametrak Variations) To control the sound-producing algorithms inside of Kyma, data mapping techniques were employed as well. These techniques included scaling, offsetting, and smoothing of data. In addition, in some instances originating data streams were analyzed and converted into triggers whenever data values breached a specified threshold. For example (Figure 53), to control the TimeIndex I used a combination of scaling, offsetting, and smoothing within one Capytalk expression contained in a SoundToGlobalController Sound object. Figure 53: Capytalk used in Immigration (Gametrak Variations) 92 Another example of data mapping used in Kyma involved the breaching of a threshold. As one can see in Figure 54, I used a Capytalk expression to convert the continuous control data stream into a button-like function. This button message triggered a start of a section when the Capytalk expression was true. The translation of this Capytalk expression translated in English would be, “If continuous control #2 is less than 0.97, then produce a 1,” which would trigger the desired event. Figure 54: Capytalk used for breaching thresholds in Immigration (Gametrak Variations) Sound Design The original audio recordings that formed the basis of the sound world were transformed through analysis and resynthesis and Kyma’s Tau algorithm, with granular and subtractive processing and by sampling synthesis. I wanted to make certain that there would exist a basic unity among the three movements. The way I did this was by beginning the basic sound design process with multiple TauPlayer Sound objects that were playing back spoken Russian while decorrelating the frequency, amplitude, and time-index values before combining them in a mixer. To assure each had its own distinct sonic character, variations to this basic structure were added in each movement. 
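The decorrelate-and-mix idea behind this shared structure can be illustrated with a small numpy sketch. It is not Kyma's TauPlayer resynthesis, and the detuning range, number of copies, and time offsets are arbitrary illustrative values: several copies of the same material are slightly detuned, slightly offset in time, and summed into a single layer.

# Illustrative decorrelation: detune and offset several copies of one source,
# then mix them into a single, thicker layer.
import numpy as np

def detune(signal, ratio):
    """Resample by a small ratio to shift pitch and duration slightly."""
    idx = np.arange(0, len(signal), ratio)
    return np.interp(idx, np.arange(len(signal)), signal)

def decorrelated_mix(voice, copies=4, max_detune=0.01, max_offset=2205):
    rng = np.random.default_rng(0)
    out = np.zeros(len(voice) + max_offset)
    for _ in range(copies):
        layer = detune(voice, 1.0 + rng.uniform(-max_detune, max_detune))
        start = int(rng.integers(0, max_offset))          # slight start-time offset
        n = min(len(layer), len(out) - start)
        out[start:start + n] += layer[:n] / copies        # mix the copies at equal level
    return out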
In the first movement, I used Kyma’s Tau algorithm, where I controlled the time-index, amplitude, and frequency using Capytalk expressions. After combining four TauPlayer Sound objects, I used cross filter synthesis before applying a small amount of reverberation and spatializing the sound (Figure 55). 93 Figure 55: Signal flow of analysis and resynthesis techniques used in the first movement, Visa Another example of the sound design techniques I used in the first movement, Visa, can be seen in Figure 56. Here I combined six TauPlayer Sound objects whose frequencies were slightly decorrelated and whose starting times were slightly varied. I then compressed this complex signal and passed it through a time varying filter, the REResonator Sound object. I finally amplified this more complete processed signal and spatialized it. Figure 56: Signal flow showing six decorrelated Kyma TauPlayers used in the first movement, Visa Because of the computational complexity of some of Kyma’s sound-producing algorithms, I sometimes had to record an algorithms output to disk as an audio file. The resulting audio file was then played back with the Sample Sound object as shown in Figure 57. 94 Figure 57: Signal flow of sampling synthesis techniques used in the first and second movements In movement two, I used the rhythmic characteristics of an existing audio file to modulate the amplitude of the four decorrelated TauPlayer Sound objects in addition to the same basic sound designs strategy. This is shown in Figure 58. Figure 58: Signal flow showing amplitude of TauPlayers being modulated In the third movement, Passport, I worked with sampling, subtractive, and granular synthesis. Figure 59 is a good example of how I used sampling, and subtractive synthesis. I used two different signal paths (both created out of snippets of original audio recordings) and played them back from Sample Sound objects. Inside of each Sample Sound object I controlled the frequency and the looping of the audio recordings. In the upper signal path, I combined four audio samples together and then filtered, pitch shifted, and spatialized this mixture. In the lower signal path, I combined five audio samples together, four of which were delayed versions of one of them, and then amplified and 95 spatialized the resulting sound. What one hears is the combination of the two layers of sound that were created using sampling and subtractive synthesis. Figure 59: Signal flow of sampling and subtractive synthesis techniques Formal Structure Immigration (Gametrak Variations) was united by a common theme of sound modification processes - analysis and resynthesis, granular, subtractive, and sampling synthesis - on the recordings of my voice related to my experiences of immigration. As one can see and hear in the video documentation, the structure of Immigration (Gametrak Variations) was in three movements. My choice of interaction with the interface helped shape these movements. In the first and third movements, I performed standing up while the second one I performed sitting down. The first movement was structured in a way to recreate my feelings and emotions associated with my first flight to America. The first and third movements started out with musical material of a recognizable human voice, albeit one transformed from the original audio recordings. The opening musical figure, a transformed voice, was repeated three times in the beginning of the first and third movements. 
Both the first and third movements had specifically designed moments of dynamic swells and anticipation built into them. The second movement was the longest of the three. The gradual process of musical development in the second movement was similar to that of my two other compositions, USCGC Healy (WAGB-20) and Ivana Kupala. The second movement was developed slowly to emphasize the long waiting period and to portray the hardships experienced by my family in the citizenship process. The third movement emphasized the mixed emotions experienced by my family in the final days before receiving the invitation letter to the Oath Ceremony. The third movement included the climax of the composition at 22:44, where a loud dramatic musical moment was heard for the last time. Performative Techniques In addition to the structure and sonic material of the composition, the physical motion used to perform this composition derived from the emotional connections that I related to the citizenship process. Such emotional connections included traveling, prolonged anticipation, a mixture of stressful emotions, and waiting for relief. In all three movements, I used my hands to interact with the interface, although I used my whole body to support the performative actions carried out by my hands in the first and third movements. In the first movement, I used actions such as pulling the retractable cables of the Gametrak interface out of a worn backpack on my back. I thought of the backpack as a metaphorical signifier that related to my experience of traveling on an airplane for the first time. Because I positioned the interface inside a backpack, I alternated between the use of one hand and both hands (shown in Figure 60). Figure 60: Gametrak used in the first movement, Visa I discovered that installing the Gametrak in a backpack and wearing the backpack limited which data streams could be effectively used. With the Gametrak positioned on my back, I could only use the X and Z data streams of both retractable cables. In the second movement, Green Card, I involved additional physical objects to perform the music. I pulled, wrapped, and unwrapped the retractable Gametrak cables over plastic knitting circles of different sizes that influenced how far the cables were extended. I placed the interface in front of the performer (me) on a table and interacted with it by wrapping retractable cables around three different sizes of knitting circles (Figure 61). Figure 61: Gametrak used in the second movement, Green Card I chose this performance strategy for several reasons. Through gradually pulling retractable cables, I wanted to emphasize the long and dreary process of waiting. Pulling retractable cables out of the interface reminded me of all the memories and thoughts that came from my experiences during the waiting process for the green card. By pulling retractable cables I was, in a way, pulling thoughts and memories from my past. I decided to metaphorically wrap these thoughts and memories around knitting circles to help me better understand the emotional complexity of the challenges that my family had to encounter and endure in order to establish a better future. I chose to interact with the interface by pulling retractable cables and wrapping them around three different-sized knitting circles because I wanted to emphasize the three-part musical form of this movement. In the third movement, Passport, a Gametrak controller was placed on a bench close to the floor (Figure 62).
Figure 62: Gametrak used in the third movement, Passport I chose to interact with the interface standing up for two reasons. Firstly, I brought back this performative action of standing from the first movement to the closing movement to be a repopulation and create a formal structure in this composition. Secondly, I wanted to create a larger performative space to emphasize the grand feeling of relief of becoming an 99 American Citizen. Standing up was an appropriate way to express this special moment of liberation. Additional Comments I created Immigration (Gametrak Variations) to portray the memories that arose from my family’s journey to becoming American citizens. In this composition, original sound recordings of my voice were developed through my creative concept, sound design, and performative space into a musical journey parallel to my story. 100 CHAPTER III.6 S/V LIST My composition, S/V List, for an iPad, the Kyma Control application, and Kyma is a real-time performance composition approximately nine minutes in duration. This composition is quadraphonic; however, the four audio channels may be diffused to eight speakers. The complete data-driven instrument for S/V List is comprised of an iPad interface connected to the Kyma Control application that passes its data to data mapping and sound-producing algorithms contained in Kyma. This configuration is shown in Figure 63 below. Figure 63: Basic data flow diagram of the complete instrument for S/V List Creative Concept The creative concept for this composition came from my impression of being on a sailboat at a marina. Seattle’s location is in close proximity to the waters of Puget Sound and Lake Washington, which opens up many possibilities for sea-loving enthusiasts. Seattle has it all: maritime industries, ferry transportation, seafood restaurants, recreational boating, and liveaboard lifestyle. The thought of having a sailboat in Seattle seemed like a fun experience for my partner and I. My first visit to a marina was in August 2017, when my partner Croy and I went to look at a sailboat to purchase. This trip led to our purchase of a sailboat named the S/V List that we would ultimately dock at the Shilshole Bay Marina. 101 According to The Seattle Times, “Shilshole Bay Marina is one of Seattle’s largest sailboat communities on the West Coast”74 (Figure 64).75 Figure 64: Photo of Shilshole Bay Marina As we approached the marina’s docks, the beauty surrounding Shilshole Bay Marina amazed me. Sailboats, yachts, rust buckets, and fishing vessels were all lined up by their docks. These vessels reflected gracefully in the water. A rocky jetty decorated with a sea serpent sculpture separated the marina from the Puget Sound. On a clear day, above the jetty, the snow-capped Olympic mountains can be spotted. As I heard the sounds of a train, I looked to the opposite side of the marina to notice a Sounder train. The train made its way through the tall, green trees lining the Sunset Hill. I was in awe looking for our sailboat while going down the L dock. The S/V List is a thirty-six feet long and twelve feet wide, sloop-designed, sailboat (Figure 65).76 My composition S/V List was named after our sailboat. Cascade Yachts located in Portland, OR built the S/V List in 1976. When we first saw the sailboat, we were overwhelmed. The sailboat was in rough condition. 
We were confident, 74 "People Rush to Boats in Ballard to Dodge Seattle's Crazy Housing Market - but There's No More Room," Evan Busch, The Seattle Times, last modified May 16, 2018, (accessed December 19, 2018), https://www.seattletimes.com/seattle-news/people-rush-to-boats-in-ballard-to-dodge-seattles-crazy- housing-market-but-theres-no-more-room/. 75 Photo curtesy of the author. 76 Photo curtesy Croy Carlin. 102 however, that it would be a fun adventure to get her all cleaned up, comfortable, and warm. As we worked on cleaning the boat, I noticed many intriguing sounds around the marina. Especially inside the sailboat, there were many captivating sounds. The magical sonic tapestry of the Shilshole Bay Marina and S/V List made a strong impression on me. The experience of being on a sailboat with so many sounds around me inspired me to create a composition. In S/V List I expressed my peaceful, and at times spooky, impressions of staying on a sailboat that ranged from the tranquil to the mysteriously eerie. Figure 65: Photo of S/V List Sonic Material The sonic material of S/V List arose from a series of audio recordings that I made aboard the sailboat at Shilshole Bay Marina. After spending more time on S/V List, I was notably aware of the sounds and noises surrounding me at the marina. Some of the sounds were peaceful – sounds of seagulls, squeaking of the buoys, and water gently splashing against the side of the boat. Some of the sounds were spooky – sounds of wind blowing through the rigging and halyards of sailboats, water splashing loudly in the wind, and the creaking of the wood floor inside of the cabin of the sailboat. Some sounds such as the sounds of boat motors, the S/V List’s water pump, and other ambient marina sounds were simply pleasant sounding to me. I knew that I could transform these 103 intriguing marina sounds into a musical journey. I made audio recordings of sounds that could be heard from both outside and/or inside of the boat. These recordings included the squeaking of buoys against the boat, the sound of seagulls, the gurgling of the boat’s water pump, and the closing of the ship’s cupboards, as well as the sounds of a plane passing by. To me, those sounds had tremendous musical potential and I could imagine them transformed into a musical composition. The recordings of squeaking of buoys and seagulls had a melodic element to them while the recording of cupboards closing had a percussive texture. The recordings of the water pump and the plane going by were my favorite because they were rich in their harmonic spectrum, I suspected that through analysis and resynthesis I could discover new sonic timbres using them as a source. Performance Interface For S/V List, I chose to work with a simple interface; Apple’s iPad operating the Kyma Control application. After using a large variety of physically overt performative actions using a Gametrak in Immigration (Gametrak Variations), I wanted to work with an interface that was smaller and more intimate. The iPad operating the Kyma Control application offered such an opportunity since I was able to operate the interface with my fingers alone. The iPad I used for S/V List was Apple’s fourth generation iPad, Model MD511LL/A, with 32 GB of storage and a Retina display. Apple’s iPad ran iOS version 10.3.3 (14G60). The Kyma Control application (Figure 66) allowed me to send data directly and wirelessly to my computer and Kyma to control musical parameters without going through Max. 
The Kyma Control application provided several layouts that could be employed during a performance: VCS, Grid, Pen, Keyboard, Tonnetz, and Navigate. I chose to work with the Pen layout. I liked the color scheme and the elegant circles that traced the history of finger movements on the iPad. These circles reminded me of the little fish chasing one another near the docks of Shilshole Bay Marina, thus aligning with other elements of the creative concept of S/V List. Figure 66: Image of Kyma Control application used in S/V List Data was transmitted from the iPad to the computer through a wireless network connection. In order for the connection to be stable during the performance, I created a wireless network that was exclusive for the connection between my computer and the iPad. In S/V List, I used sixteen data streams. I worked with the X and Y axes for each of my fingers, although I predominantly chose to use X-axis data in order to separate the X and Y dimensions, which were interconnected in their data streams. Musical Challenges One of the main challenges working with the iPad interface was that data messages sent from the iPad are labeled based on the order in which a finger touches the iPad. The iPad and Kyma Control did not make distinctions between which human digit was given the designation of, say, "Finger 2". Therefore, a "Finger 2" related message could not be sent until a "Finger 1" related message was sent and also continued to be in effect. This led to situations where "Finger 4" related data could not be enacted until three other fingers were touching the performance area of the iPad. As a workaround, I created my composition in such a way that when a new finger touched the iPad, a new layer of sound was added to the sonic world. Musical Opportunities Working with the iPad interface provided special opportunities related to musical performance because basic iPad operation revolves around the use of the hand and fingers. Human fingers in particular are extremely dexterous and provide a superb way to control virtually any hand-held device. This fact is clearly demonstrated by the sheer number of musical instruments where hands and fingers provide the essential method of physical engagement, such as the violin. An additional benefit to the iPad is that it is small, light, and easy to travel with. I should also note that the iPad, together with Kyma Control, took little time to set up for performance. The Kyma Control application sent data directly to Kyma, which eliminated the need to use other software to send and receive data. Data Mapping In S/V List I employed data mapping techniques such as smoothing, scaling, and offsetting, as well as the use of data to trigger musical events. Figure 67 shows the !Finger2X and !Finger2Y data routed to control the center frequencies of the bandpass filters. These data streams are first offset using addition, and then used to scale static, specified values in the Formant1 and Formant2 parameter fields. Figure 67 also shows how the !Finger2Down data packet triggers the production of random values between 0 and 1 and interpolates intermediate values between each new random value. The interpolation between values was necessary to eliminate the clicks that would otherwise result from sudden amplitude changes. Figure 67: First type of data mapping used in S/V List Figure 68 is an example of the second type of data mapping technique I used. Data streams created with the X or Y positions of each finger could control one sound algorithm.
For example, a sound is controlled with the continuous control data stream of a finger’s X position on the iPad. !FaderA was mapped to finger one X position, and controlled six musical parameters. As one can see, data mapping techniques such as scaling, offsetting and smoothing were used. Figure 68: Second type of data mapping used in S/V List Another type of data mapping that I used in S/V List was for triggering musical events (Figure 69). 107 Figure 69: Third type of data mapping used in S/V List Within the Timeline, I organized all my sound algorithms into the composition. I navigated through different sections of the composition by setting up WaitUntil Sound objects. I used this principle of data mapping in all of my compositions. Sound Design I spent approximately 80 percent of my compositional time working on sound design in S/V List. I transformed audio recordings through a combinations of sound modification processes such as sampling, subtractive, and granular synthesis. The first musical event in the composition is reminiscent of a bell. This bell-like sound was transformed by filtering and combining multiple copies of an audio recording of a cupboard closing. Figure 70 shows this process, in which the original audio recording is split into two different signal paths, one sent to a set of two bandpass filters and the other sent to a comb filter. I tuned the two center frequencies of the bandpass filters to the approximate ranges of A6 and A7 and specified my desired bandwidth. I created a repeatable sequence of random pitches and amplitudes through Capytalk expressions, and first added reverb, and then spatialized the composite sound. The Capytalk expressions applied a normal distribution of the random seeded algorithm to achieve its results. One example of an expression used is: (!Trigger nextNormalWithSeed: 0.3 ) + 0.5 108 Figure 70: Signal flow of a bell-like sound design used in S/V List Later in the composition I developed this principle of bell-like sound design to create a less-pitched, denser, lower sounding, bass drum-like sound. As seen in Figure 71, one can see the sound design principle was similar to the one described in Figure 70. I specified a lower frequency range for the two formants and used filter modulation to alter the cutoff frequency. To give the sound more prominence, I added a third layer comprised of a sine wave oscillator at 110 Hz to which I applied subaudio frequency modulation. Figure 71: Signal flow of a bass drum-like sound design used in S/V List I had one sound recording in which one could hear seagulls and the squeaking buoys together. I wanted to maintain the original feel of the sound recording, but at the same time I also wanted to hear what the audio file sounded like transformed. My 109 solution was to enhance the atmosphere of the original audio recording by emphasizing certain frequency bands of the sound using the TwoFormantElement Sound object. One can hear these emphasized frequencies at 0:55 of the video documentation. Figure 72 shows the four Sample Sound objects tuned to different frequencies based on my finger positions in the X-dimension of the iPad. Each of the Sample Sound objects were processed with two TwoFormantElement Sound objects where I controlled the formants, bandwidths, and amplitudes. Finally, these complex signals were combined in the StereoMix4 Sound object and reverb, amplification, and spatialiazation was applied. 
I created many sonic transformations of the original audio recordings using sampling and subtractive synthesis in S/V List. Another sound modification process I worked with in S/V List was analysis and resynthesis. As seen in Figure 73, I used analysis and resynthesis to work with the audio recording of the plane passing overhead. Based on an analysis of the original recording, I created a frequency spectrum comprised of 50 partials. I combined three copies of it with different amplitudes and frequencies and controlled the time index of all three. Using the CloudBankResynthesis Sound object, I resynthesized part of this spectrum with nine grain clouds while randomizing the grain duration, the pan position of each grain, and the frequency by varying amounts across the nine clouds. I dampened the higher frequencies of the spectrum with a low-pass filter and added reverberation. This method of sound design can be heard at 1:07.

Figure 73: Signal flow of analysis and resynthesis used in S/V List

Figure 74 shows another use of analysis and resynthesis. In this instance, using the Morph1dSpectrum Sound object, I morphed between four different spectra in real time while simultaneously controlling the pan position of the resulting sound with a MultiChannelPan Sound object. The morphed spectra originated from my recordings of a water pump and a plane passing overhead. Morphing between the four spectra permitted me to create new timbres. This type of analysis and resynthesis can be heard at 3:08 of the video.

Figure 74: Another example of analysis and resynthesis used in S/V List

I also used granular synthesis to create new timbres (Figure 75). In this example, two similar signal flows were created. Both began with three audio recordings of a water pump (one un-transposed, one transposed higher, and one transposed lower). These recordings were combined, processed through the HarmonicResonator Sound object, and written into RAM so that the signal could be granulated by the SampleCloud Sound object. In real-time performance, I controlled the resonant frequency, decay time, and amplitude of the two HarmonicResonator Sound objects and spatialized each of the SampleCloud Sound objects. The sound design created with this granular processing can be heard at 2:01.

Figure 75: Signal flow of granular synthesis used in S/V List
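To show the general shape of the granulation step, here is a small Python sketch of granular synthesis in the abstract, not of the SampleCloud Sound object itself: short, windowed grains are read from a source buffer at random positions, given random durations and pan positions, and summed into a stereo output. The grain count and ranges are illustrative assumptions.

import numpy as np

def granulate(source, fs, n_grains=200, out_seconds=4.0,
              dur_range=(0.02, 0.08), rng=np.random.default_rng(0)):
    out = np.zeros((int(out_seconds * fs), 2))          # stereo output buffer
    for _ in range(n_grains):
        dur = int(rng.uniform(*dur_range) * fs)          # random grain duration
        start = rng.integers(0, len(source) - dur)       # random read position
        grain = source[start:start + dur] * np.hanning(dur)  # windowed grain
        pan = rng.uniform(0.0, 1.0)                      # random pan per grain
        pos = rng.integers(0, len(out) - dur)            # random placement in time
        out[pos:pos + dur, 0] += grain * (1.0 - pan)
        out[pos:pos + dur, 1] += grain * pan
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out

fs = 44100
source = np.sin(2 * np.pi * 110 * np.arange(fs) / fs)   # stand-in for a water-pump recording
cloud = granulate(source, fs)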
Formal Structure

The primary musical material for S/V List was formed by the recordings of seagulls, an onboard water pump, cupboards closing, the squeaking of buoys against the boat, and the sounds of airplanes passing overhead. The narrow selection of audio recordings provided an important unifying element for the composition. I further unified the composition by using frequency modulation along with sound modification processes such as sampling, subtractive, and granular synthesis, as well as analysis and resynthesis. The central musical theme revolved around mysterious-sounding, bell-like sounds and distinctive rhythmic patterns framed by short silences. The composition can be understood as occurring in three sections (0:10-1:40, 1:40-7:21, 7:21-8:42). The first section functions as an introduction of the bell-like sounds in four layers. The second section is the development of the composition, with a counterpoint of melodic and rhythmic layers leading up to the climax of the composition at 4:38. The third section functions as the work's coda, and the composition ends with a large punctuating sound, with all layers of sound dissipating into silence.

Performative Techniques

My physical motions while interacting with the iPad interface were small and consisted primarily of touching the two-dimensional rectangular screen space with my fingers. At each point that I touched within the multi-touch, two-dimensional space, I sent labeled data packets or data streams to Kyma. In the composition, I placed either one finger or combinations of fingers on the iPad, often sliding them across the iPad's screen and traversing the two-dimensional space.

Additional Comments

In my composition S/V List, the original audio recordings played an important role in creating the peaceful yet mysterious impression that the S/V List sailboat and the Shilshole Bay Marina had on me. Through sound design, I transformed the audio recordings of Shilshole Bay Marina and organized them into a musical composition. Through my performance of S/V List, my choice of interface, and my performative actions, I created a space for my audience to experience a musical journey.

CHAPTER III.7 BRIGIT

My composition Brigit, for contact microphones on metal, custom Max/MSP, and Kyma, is a real-time performance composition approximately eight minutes in duration. Like the other six compositions in my Digital Portfolio Dissertation, Brigit was conceptualized as a quadraphonic composition; the performer may choose to diffuse the composition to eight or more speakers. The complete data-driven instrument for Brigit is comprised of contact microphones attached to metal dishes. The contact microphones sent audio signals to Max, which converted the analog signals to digital. The digital number streams were remapped and sent to sound-producing algorithms contained in Kyma. This configuration is shown in Figure 76 below.

Figure 76: Basic data flow diagram of the complete instrument for Brigit
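One common way to carry remapped number streams like these from one program to another is Open Sound Control over UDP. The following Python sketch assumes that transport; the host address, port, and message names are placeholders rather than values taken from Brigit's configuration.

from pythonosc.udp_client import SimpleUDPClient  # requires the python-osc package

# Hypothetical host and port for the machine running Kyma; both are placeholders.
client = SimpleUDPClient("192.168.1.50", 8000)

def send_control(name, value):
    """Send one remapped control value (0.0 to 1.0) as an OSC message."""
    client.send_message(f"/{name}", float(value))

send_control("lc01", 0.42)   # loudness-derived stream from the hanging tray
send_control("lc02", 0.13)   # loudness-derived stream from the spring tray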
Creative Concept

The creative concept for this composition came from my desire to repurpose old metal objects into a musical idea inspired by the Goddess Brigit. According to the poet and folklore scholar Susa Morgan Black, Brigit is a Goddess and Celtic patron saint of blacksmiths, although Brigit is not considered a blacksmith herself.77 My composition Brigit was named after the Goddess Brigit. Susa Morgan Black states that "[Brigit] inspired the creativity and artistry of blacksmith craft and the creativity of poets."78 Just as she inspired the poets, the Goddess Brigit inspired my creativity in repurposing old metal objects into a new musical idea.

77 "Brigit," Order of Bards and Druids, December 28, 2012, (accessed December 25, 2018), https://www.druidry.org/library/gods-goddesses/brigit.
78 Ibid.

This idea of repurposing old objects also came from noticing garbage pollution and becoming mindful of recycling. According to the Recycling Coalition of Utah, "Nearly one million pounds of materials are wasted per person every year in America."79 One can reduce the amount of wasted materials through recycling, whether by taking objects to one's local recycling center or to thrift stores. All kinds of objects can get a second life without polluting the world. Occasionally, I go to thrift stores such as Goodwill and St. Vincent De Paul's to search for ideas for repurposing items into various art projects. On one such occasion, I looked for objects that could vibrate.

79 "Recycling Coalition of Utah," Recycle The Facts: Plastic, (accessed December 24, 2018), https://utahrecycles.org/get-the-facts/.

I was looking specifically for metal objects because of the inspiration of the Goddess Brigit and because of another reference that I had in mind, that of the gong. Gongs are typically handcrafted and constructed out of bronze or brass. As one strikes a gong, it creates a rich, full sound containing many harmonic and non-harmonic partials. In my search for objects that could vibrate, I was looking for objects to be used with contact microphones. I wanted to experiment with contact microphones because I had not previously employed them as part of a data-driven instrument. During my expeditions to find objects that vibrate, I also found a bag of metallic spring-like objects (Figure 77) that seemed to possess the qualities of a metallic, "Brigitian" harp that could be plucked to produce sound.

Figure 77: Photo of spring-like objects found at a thrift store

The spring-like objects served the needs of the data-driven instrument and performance interfaces I was trying to create. These springs were made out of metal and were of different diameters and lengths. From their shape, I realized such objects could be stretched out and made to vibrate. As I was paying for my purchase, I asked the clerk if she knew what the spring-like objects were meant for. The clerk smiled at me and said, "Oh, these things are used for holding decorative china on the wall." The concept of spring-like objects holding china on the wall made sense, so I immediately thought to myself that I needed more metal objects onto which to attach the metal springs. When I went back into the store, I found several metal trays. While I was picking up one of them, my keys accidentally hit the object, and I liked the sound that the tray made. Another tray simply looked cool; it was in the shape of a cookie sheet, with big holes in the bottom. Excited by my successful trip to the thrift store, I rushed home to experiment with my metal treasures. Some of the springs fit well over the cookie sheet, and when assembled as a single unit they produced different pitches (Figure 78).

Figure 78: Photo of spring-like objects attached to the cookie sheet used in Brigit

The second metal tray was smaller, and the springs did not attach to it well. When flipped vertically and tapped with my fingers, however, the tray made a nice sound, rich in spectrum, that reminded me of a gong. I decided to hang the metal dish in the manner of a suspended gong (Figure 79). I drilled a hole in the top of the metal tray and hung it from a string attached to a microphone stand. In my composition Brigit, I transformed the sounds made by interacting with these metal objects into a musical journey through sound design and my performative actions.

Figure 79: Photo of a hanging metal tray used in Brigit

Sonic Material

The sonic material for Brigit arose from a series of audio recordings that I made by striking and scraping the metal trays and by rolling metal marbles on the trays. By experimenting with these recycled metal objects, I discovered that I was able to extract a large variety of acoustical sounds. I organized the audio recordings into four categories: non-pitched attack sounds, percussive sounds, pitched attack sounds, and semi-sustained sounds.
In the last category, the sounds I recorded by striking and scraping the metal dishes had both melodic and rhythmic components. With twenty-four high-quality audio recordings created by striking and scraping the metal dishes, I had ample material to work with for sound design and the construction of the composition.

Performance Interface

My experimentation with the aforementioned metal trays and springs developed further into a data-driven interface when two contact microphones were attached to the backs of the metal trays. Data was generated from the vibrations picked up by the contact microphones and sent to my computer over a USB connection via a preamplifier/audio interface. The contact microphones I used were small piezo contact microphones (Figure 80) of the kind used for amplifying guitars and violins.

Figure 80: Photo of contact microphones used in Brigit

Because I chose to work with two contact microphones, I used only two data streams in my composition Brigit. The first data stream was created from the vibrations picked up from the springs attached to the metal tray. The second data stream was created from the vibrations picked up from the hanging tray.

Musical Challenges

I recognized four challenges in working with the interface for Brigit. The first challenge was that the data streams created from the vibrations picked up by the contact microphones were jittery, especially the data stream created from the vibrations of the hanging metal tray. The second challenge was that extraneous vibrations in the performance space interfered with the data stream created while interacting with the interface. For example, if someone sat three feet away from the interface and tapped their foot, the vibration in the floor was picked up through the microphone stand, causing the data to be jittery. The third challenge related to the time delay between the moment I touched the metal objects and their sonic activation. A slight delay between the activation of the object and the creation of the data stream that activated sound was noticeable. During my development of this composition I learned to appreciate this delay and to embrace it. To me the sonic result felt like a grace note made acoustically, with the release note sounding electronically. The fourth challenge was amplifying the acoustical characteristics of the interface for the audience. Amplifying the acoustical characteristics of the interface would have required placing another microphone stand on stage, creating a cluttered performative space. In practice, performing on a small stage did not require amplifying the acoustical characteristics of the interface, which resolved this challenge.

Musical Opportunities

Despite the challenges of working with the interface, I found four advantages to it. The first advantage was that I could create a large variety of sonic possibilities and performative actions by plucking, pressing, stopping, and snapping the spring-like objects. The second advantage was that the interface produced three pitches (C, G, and A-flat) when each of the three sets of spring-like objects was plucked. I used combinations of these pitches and enhanced and transformed their sound through sound design. The third advantage was the uniqueness of the performance interface, which I believe was engaging for the audience to observe during the performance. The fourth advantage was the portability of the performance interface, which made travel easy and technical rehearsals faster and simpler.
Data Mapping

To use the data created from the vibrations picked up by the contact microphones, I incorporated two layers of data mapping: one in Max and one in Kyma. The first layer of data mapping was done inside of Max. Figure 81 shows where I used the adc~ Max object to convert analog audio signals into a digital representation.

Figure 81: Max patch - first layer of data mapping used in Brigit

I then used the analyzer~ Max object80 to analyze characteristics of the sounds such as loudness, brightness, and noisiness. Through trial and error, and by working with the custom-made interfaces, I chose to work with the loudness characteristic of the audio signals. Loudness was measured as the spectral energy of the incoming signal. The data extracted from an incoming signal using the analyzer~ object was one type of continuous control data used in Brigit. My more vigorous performative actions caused greater vibrations of the hanging tray, thereby creating a louder signal. It is important to note that tapping closer to the placement of the contact microphone on the metal trays also produced a louder signal. If I tapped less aggressively or farther away from the contact microphone, fewer vibrations were created, making the signal weaker. To trigger sounds, I also converted continuous control data into button-like functions. In this way, despite the intrinsic time delay of the instrument, I was able to control when the onset of a sound would occur. In this first layer of data mapping I used techniques such as reshaping, inverting, scaling, offsetting, and smoothing. Both data streams used in Brigit were reshaped using the abs and table objects. The Max abs object outputs the absolute value of each number sent into it. I further reshaped the data streams using the table object, within which the data streams were inverted, scaled, and offset. I then used the slide object to stabilize the data from the contact microphones. The data streams were then sent to sound-producing algorithms inside of Kyma in order to control musical parameters.

80 "Max/MSP," Tristan Jehan, News & Updates - MIT Media Lab, (accessed December 25, 2018), http://web.media.mit.edu/~tristan/maxmsp.html.

The second layer of data mapping was done within Kyma. I used data mapping techniques similar to those I used in Max, such as scaling, offsetting, and smoothing. I also converted continuous control data streams into button-like functions. Figure 82 shows how I used data stream one (!lc01) to control both the amplitude and frequency of a sound. Using Capytalk within the amplitude parameter field (Scale), I scaled and smoothed the continuous control data. Within the frequency parameter field, I tested whether the continuous data stream breached a specified value in order to convert the continuous stream into a button-like function. Every time the threshold was breached, a new random number was selected to create slight variations in the pitch of the sound.

Figure 82: Data mapping within Kyma used in Brigit

A second example of data mapping inside of Kyma related to the control of the time-index and amplitude parameters. Figure 83 shows how I scaled and offset data stream !lc02 within the time-index parameter of the SpectrumInRamLog Sound object and within the amplitude parameter field of the SumOfSines Sound object.

Figure 83: Second example of data mapping within Kyma used in Brigit
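The following Python sketch approximates this reshaping chain outside of Max and Kyma: an absolute-value stage, a table-style rescaling, a slide-style smoothing step, and a threshold test that turns the continuous stream into a button-like trigger. The constants are illustrative assumptions rather than values from the Brigit patch.

def reshape(sample, scale=1.0, offset=0.0, invert=False):
    """Absolute value, optional inversion, then scale and offset,
    roughly what the abs and table objects did to the loudness stream."""
    value = abs(sample)
    if invert:
        value = 1.0 - value
    return value * scale + offset

def slide(previous, current, factor=8.0):
    """One step of slide-style smoothing: move only a fraction of the way
    toward the new value, which stabilizes a jittery stream."""
    return previous + (current - previous) / factor

def button_from_stream(value, threshold, was_above):
    """Convert a continuous stream into a button-like trigger that fires
    once each time the threshold is crossed from below."""
    is_above = value > threshold
    return (is_above and not was_above), is_above

# Example: process a short, jittery loudness stream.
raw = [0.05, 0.6, 0.9, 0.85, 0.2, 0.1]
smoothed, was_above, history = 0.0, False, []
for sample in raw:
    smoothed = slide(smoothed, reshape(sample))
    fired, was_above = button_from_stream(smoothed, 0.08, was_above)
    history.append((round(smoothed, 3), fired))
print(history)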
Another type of data mapping that I used is seen in Figure 84. Inside of a Capytalk expression, I converted both continuous control data streams into button-like functions. When I organized the sound algorithms on the Kyma Timeline, I needed to control the transitions between sections of the composition. When both conditions were true (!lc02 greater than 0.7 and !lc01 greater than 0.8), the WaitUntil Sound object allowed the timeline of the composition to proceed to the next section.

Figure 84: Second layer, third example of data mapping used in Brigit

Sound Design

The audio recordings that formed the basis of the sonic world of Brigit were typically not presented in their unaltered forms within the composition but were transformed through sound modification processes such as sampling, subtractive, and granular synthesis. Analysis and resynthesis, as well as amplitude and frequency modulation, were also used. Using different combinations of these synthesis techniques, I achieved the sonic results that I envisioned. Figure 85 demonstrates the use of sampling and subtractive synthesis in Brigit (heard at 1:04 of the video documentation).

Figure 85: Signal flow of sampling and subtractive synthesis techniques used in Brigit

Figure 85 represents two signal flows that were constructed in a similar way. Both signal flows started with a non-pitched, rhythmical audio recording. Through a Capytalk expression, I controlled amplitude and frequency with data stream one (the hanging metal dish). As Figure 85 shows, I used a high-shelf filter to select two different frequency ranges, then amplified and spatialized both signal paths. Both signal flows used a slightly adjusted version of the Capytalk expression seen in Figure 86. I specified a changeable rate of panning by choosing a random number generated within a specified range and adding it to the pre-existing rhythmical character of the sound.

Figure 86: Capytalk expression used for panning in Brigit

Figure 87 demonstrates the use of analysis and resynthesis together with granular synthesis in Brigit. I controlled the musical parameters in this sound algorithm, including time-index, frequency, amplitude, panning, grain duration, and the density of grains. I created a spectrum analysis based on this Sound and from it derived five new spectra, each within a different frequency range. I combined the five spectra, controlling the panning and amplitude of each. The newly created, sustained spectrum was granulated through two different SampleCloud Sound objects. In each of the two SampleCloud Sound objects, I controlled the amplitude, duration, and density of grains. The two granulated signal paths were then spatialized to the front and back, and the signal flows were further combined and amplified. The complex sound algorithm described in Figure 87 is heard at 0:55 of the composition.

Figure 87: Signal flow of analysis and resynthesis and granular synthesis techniques used in Brigit

Just as in Wind in the Forest, I used custom-made waveforms created from pitched attack sounds (in combination with analysis and resynthesis) to create new sonic results. As one can see in Figure 88, I used the LiveSpectralAnalysis Sound object to create a spectrum of the pitched attack audio recording. I further modified this analysis with the SpectrumModifier Sound object. Finally, I created a new spectrum comprised of twenty-five frequencies. Using the OscillatorBank Sound object, I resynthesized the spectrum with custom-made waveforms derived from the original audio recording and made twenty copies of it. Using the Chopper Sound object, I rhythmically sliced the newly created spectrum. I spatialized, combined, and amplified the sound with four delayed versions of itself to create a full sound. An example of the sound design created with the process just described can be heard at 4:23.

Figure 88: Signal flow showing analysis and resynthesis in combination with a custom-made waveform
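As a simplified picture of this kind of resynthesis, the Python sketch below sums a bank of sine oscillators from a list of partial frequencies and amplitudes and then gates the result rhythmically, loosely analogous to the OscillatorBank and Chopper stages. The partial data and gate rate are invented for illustration and are not drawn from the analysis used in Brigit.

import numpy as np

def oscillator_bank(partials, duration, fs=44100):
    """Resynthesize a spectrum as a sum of sine oscillators.
    `partials` is a list of (frequency_hz, amplitude) pairs."""
    t = np.arange(int(duration * fs)) / fs
    out = np.zeros_like(t)
    for freq, amp in partials:
        out += amp * np.sin(2 * np.pi * freq * t)
    return out / max(1, len(partials))

def chop(signal, rate_hz, fs=44100, duty=0.5):
    """Rhythmically slice a signal with a square-wave gate."""
    t = np.arange(len(signal)) / fs
    gate = ((t * rate_hz) % 1.0) < duty
    return signal * gate

# Invented five-partial spectrum standing in for the twenty-five-partial analysis.
partials = [(220.0, 1.0), (447.0, 0.6), (681.0, 0.4), (905.0, 0.3), (1130.0, 0.2)]
tone = oscillator_bank(partials, duration=2.0)
sliced = chop(tone, rate_hz=4.0)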
Another sound synthesis technique I used in Brigit was amplitude and frequency modulation. In Figure 89, one can see that I used an audio recording of a pitched attack sound to control the amplitude of the Oscillator Sound objects. I created frequency modulation by using another Oscillator Sound object that generated frequencies in the range of zero to seven hertz. This sound design added a pleasing sonic tail to the attack-like sounds.

Figure 89: Signal flow of AM and FM techniques used in Brigit
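A compact Python illustration of this pairing of amplitude modulation with sub-audio frequency modulation follows. The carrier frequency, modulation depth, and envelope shape are placeholders rather than the settings used in Brigit.

import numpy as np

fs = 44100
t = np.arange(2 * fs) / fs

# Amplitude envelope standing in for the pitched attack recording:
# a sharp attack followed by an exponential decay.
envelope = np.exp(-4.0 * t)

# Sub-audio frequency modulation: the carrier frequency wanders slowly
# around 220 Hz, driven by a 5 Hz modulator (within the zero-to-seven-hertz range cited).
modulator_hz = 5.0
depth_hz = 6.0
instantaneous_freq = 220.0 + depth_hz * np.sin(2 * np.pi * modulator_hz * t)
phase = 2 * np.pi * np.cumsum(instantaneous_freq) / fs
carrier = np.sin(phase)

signal = envelope * carrier  # amplitude modulation applied to the FM carrier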
Formal Structure

In Brigit, the primary musical material was formed from recordings of striking and scraping the metal trays. My choice of audio recordings emphasized the acoustical properties of the data-driven interface and provided an important unifying element for this composition. I further unified the composition by using amplitude and frequency modulation along with sound modification processes such as sampling, subtractive, and granular synthesis. The structure of Brigit revolved around contrasting dynamics and the interweaving of sustained material with short, percussive textures. My composition Brigit can be understood as being in four sections (0:13-1:54, 1:55-5:00, 5:00-7:07, 7:07-8:30). The composition has two dramatic arcs, at 4:53 and 7:42, emphasized by buildups in the tempo and dynamics of the musical material. The first section featured the introduction of the prominent electronic-melodic figure that was repeated throughout the composition. The second section featured the introduction of the acoustical motif made of C, G, and A-flat, which was created by plucking the spring-like objects. The third section served as the development of the composition, in which the acoustical motif was accompanied by rhythmical and highly percussive textures. The fourth section served as a coda, with the composition ending the way it started: with a strong, dense, attack-like sound on C.

Performative Techniques

In addition to generating the sonic material, some of the physical motions used to perform this composition were the same motions I had used to interact with the metal dishes during the audio recording process. I interacted with the hanging metal tray by striking it at the front with my left hand, which was naturally stronger, while my right hand supported the metal tray from the back so that it did not swing back and forth. I also tapped my fingers in a percussive way against the metal tray to produce steady amplitude levels that activated the rhythmical material. The placement of the second metal tray allowed me to use a larger variety of physical motions when interacting with the spring-like objects. I plucked, dampened, scraped, and tapped the spring-like objects. When I plucked the spring-like objects, I did so in ways that let me control the intensity of the vibrations. In doing so, I could create a variety of ranges of continuous control data that allowed me to explore sonic possibilities in a more nuanced manner. At 1:54-2:07 one can hear that I plucked the spring-like objects in a particular way to get the most volume on the pitches C, G, and A-flat. I also used a combination of scraping motions on the spring-like objects and stopping their vibrations as another way of controlling the interface (heard at 2:14-2:27). In this instance, I used the quick damping performative action as a button-like function. This action created the impression that the sound was produced by the placement of my finger on the metal tray; in reality, however, it was the data mapping I chose that enabled me to emphasize the quick damping of the sound. Embracing a minimal approach in my performative actions with the interface was my strategy for highlighting the sonic world of Brigit. As a result, I achieved my goal of creating a sense of variety and development in this composition through my sonic transformations of the original audio recordings.

Additional Comments

In an actual performance of the piece, the metal trays were placed centrally in front of the performer, who interacted with the interface while sitting down. The audio interface was placed on a small stool or on the carpeted floor next to the performer. The hanging metal tray was placed on a microphone stand to the right of the performer. The contact microphone attached to the hanging metal tray was connected to channel one of the audio interface, and the channel-one cable was wrapped around the microphone stand to help prevent the contact microphone from picking up unwanted vibrations. The contact microphone attached to the metal tray with the spring-like objects (placed in front of the performer) was connected to channel two of the audio interface. To keep unnecessary vibrations from other objects from reaching the metal tray on the table, a large towel was placed underneath it as a cushion. For the performative actions to be clearly observed by the audience, the hanging metal tray needed to be placed on the right-hand side of the performer. I was able to extract acoustical sounds from the ways I chose to strike, scrape, tap, and pluck the two interface objects. Through sound design, I transformed these sounds into a musical journey for the audience to experience.

SUMMARY

Because my musical training came primarily from studying the piano, I learned that to create beautiful music I had to incorporate both technical and creative aspects in preparation for public performances. The technical aspects included the assimilation of information from multiple sources about the musical work I wanted to study, the learning of notes, tempos, and dynamics, the analysis of musical structure, and the establishment of kinesthetic memory so that I could play the music without a score. However, technical know-how was not, by itself, adequate. I had to develop my musical sensibilities by attending concerts to learn about musical performance, to hear contrasting interpretations of the same composition by different artists, and to cultivate my ability to make musical judgements related to aesthetic expression. Performing to the highest musical potential with data-driven instruments is no different from developing the musical skills and aesthetics needed to become a well-rounded pianist. Such developed musical judgements are seen in my compositions through carefully crafted technical and creative concepts.
The seven compositions presented in this Digital Portfolio Dissertation arose from different creative concepts and sets of audio recordings that I transformed into musical journeys. Finding creative concepts in the sounds that surrounded me and transforming them into musical expression seemed very natural. Working with each of the seven different data-driven instruments provided different challenges and opportunities for me, similar to working with the wide range of the piano repertoire. Technical aspects such as connecting various elements of software and hardware together, creating a formal structure for each composition, and data mapping techniques were necessary elements for me to be able to realize my creative concepts. 131 Often my instruments influenced how I formulated the performative actions in each composition and how each data-driven instrument would fit into the performative spaces. My creative work with each of the seven data-driven instruments, technical, and musical consideration were important in order for the audience to experience my musical journeys to their full potential, like I did. 132 BIBLIOGRAPHY 74', Cycling. Cycling 74. N/A N/A, 2018. https://cycling74.com/products/max/ (accessed July 18, 2018). Akamatsu, Masayuki. aka.leapmotion. Masayuki Akamatsu. 2016. http://akamatsu.org/aka/max/objects/ (accessed November 16, 2018). Anagnostopoulou, Christina, Miguel Ferrand, and Alan Smaill. Music and Artificial Inteligence. Edinburg: Second International Conference, ICMAI, 2002. Banzi, Massimo. Getting Started with Arduino. Sebastopol, CA: Books O'Reilly Media Inc, 2009. Black, Susa Morgan. The Order of Bards Ovates & Druids. n.d. https://www.druidry.org/library/gods-goddesses/brigit (accessed December 25, 2018). Bush, Evan. People rush to boats in Ballard to dodge Seattle's crazy housing market- but there is no more room. The Seattle Times. May 14, 2018. https://www.seattletimes.com/seattle-news/people-rush-to-boats-in-ballard-to- dodge-seattles-crazy-housing-market-but-theres-no-more-room/ (accessed December 19, 2018). California, University of. Kinect Hacking. NA NA, NA. http://idav.ucdavis.edu/~okreylos/ResDev/Kinect/MainPage.html (accessed 12 3, 2018). Carlin, Croy. Photo. Ukraine. Carlin, Croy, interview by Olga Oseth. Science Cruise to Chukchi Borderlands 2016 (August 25, 2016). Cirani, Simone, Gianluigi Ferrari, Marco Picone, and Luca Veltri. Internet of Things: Architectures, Protocols and Standards. New Jersey: John Wiley & Sons, Inc, 2019. CNMAT. Introduction to OSC. n.d. http://opensoundcontrol.org/introduction-osc (accessed January 9, 2019). Dean, Roger T. The Oxford Handbook of Computer Music. Edited by Roger T. Dean. New York, NY: Oxford University Press, 2009. Dictionary, Merriam-Webster. cross talk. 2019. https://www.merriam- webster.com/dictionary/cross%20talk (accessed January 9, 2019). 133 —. Modular | Definition of Modular by Merriam-Webster. December 27, 2018. https://www.merriam-webster.com/dictionary/modularity (accessed January 2019). Edward, Myers Elliot. A transducer for detecting the position of a mobile unit. Patent GB2373039. 09 11, 2002. Elder Mountain Dreaming. Ukraine vinok seasonal wreaths and their symbolisms. n.d. https://eldermountaindreaming.com/2017/06/14/ukraine-vinok- %D0%B2%D1%96%D0%BD%D0%BE%D0%BA-wreath-symbolism/ (accessed 11 28, 2018). Eugene, City of. WJ Skatepark + Urban Plaza. 2005-2016. https://www.eugene- or.gov/1733/WJ-Skatepark-Urban-Plaza (accessed October 15, 2018). Exploratorium. Exploratorium Skateboard Science. 
April 5, 2018. http://www.exploratorium.edu/skateboarding/skatedesignwheel.html (accessed October 15, 2018). Freed, Adrian, et al. "Musical applications and design techniques for the gametrak tethered spatial position controller." Proceedings of the SMC 2009 - 6th Sound and Music Computing Conference, July 23-25, 2009. GPS Signal. July 30, 2017. https://upload.wikimedia.org/wikiversity/en/7/7c/Serial_Comm.note.20170730.pd f (accessed January 9, 2019). Harmony Systems, Inc. Kyma Connect. 2018. http://www.delora.com/products/kymaconnect/ (accessed January 10, 2019). In2Games. Gametrak & Real World Golf PlayStation2. 2000-2008. https://web.archive.org/web/20080610084443/http://www.deafgamers.com/05revi ews_a/gametrak%26rwgolf_ps2.htm (accessed 12 15, 2018). Inc, Harmony Systems. Delora Kyma Connect. N/A N/A, 2017. http://www.delora.com/products/kymaconnect/ (accessed May 18, 2017 ). Janssen, Dale. technopedia. 2019. https://www.techopedia.com/definition/5107/wireless- local-area-network-wlan (accessed January 9, 2019). Jehan, Tristan. Max/MSP. n.d. http://web.media.mit.edu/~tristan/maxmsp.html (accessed December 25, 2018). Jolliffe, Daniel. Daniel Jolliffe. November 29, 2010. http://www.danieljolliffe.ca/writing+code/writing+code.htm (accessed January 1, 2016). 134 Kubisch, Christina. "Homage with Minimal DIsinformation." Five Electrical Works. 2007. LeapMotionINC. Reach into virtual reality with your bare hands. N/A N/A, 2017. https://www.leapmotion.com/#112 (accessed May 18, 2017). Lockwood, Annea. A Sound Map of The Hudson River. 1989. LTD, DELICODE. Ni Mate. N/A N/A, 2011-2017. https://ni-mate.com/ (accessed May 18, 2017). Martin, Kyle. Issue Overview- Washington Jefferson Skatepark. blogs.uoregon.edu. NA NA, NA. https://blogs.uoregon.edu/kmartin8w15gateway/ (accessed October 15, 2018). Microsoft. ""Project Natal", fact sheet." "Project Natal" 101. NA: Microsoft, June 1, 2009. Miranda, Eduardo R, and Marcelo M Wanderley. New Digital Musical Instruments: Control and Interaction Beyond the Keyboard. Vol. 21. Middleton, WI: A-R Editions, Inc, 2006. OSC: Open Sound Control protocol. n.d. https://elearning.dei.unipd.it/pluginfile.php/59467/mod_page/content/46/9_OSC- protocol.pdf (accessed January 9, 2019). Pittsburgh, University of. Wireless Network Standard. January 9, 2019. https://www.technology.pitt.edu/help-desk/how-to-documents/wireless-network- standard (accessed January 9, 2019). Radigue, Eliane. "Kyema." Trilogie De La Mort. 1998. Recycling Coalition of Utah. n.d. https://utahrecycles.org/get-the-facts/ (accessed December 24, 2018). Roads, Curtis. Composing Electronic Music: A New Aesthetic. New York, NY: Oxford University Press, 2015. —. The computer music tutorial. N/A, MA: Massachusetts Institute of Technology, 1998. Roth, Graham. Bluetooth Wireless Technology. May 22, 2013. http://large.stanford.edu/courses/2012/ph250/roth1/ (accessed January 9, 2019). Scaletti, Carla. Kyma X Revealed! Champaign, IL: Symbolic Sound Corporation, 2004. 135 Scaletti, Carla, and Kurt Hebel. kyma sound design inspiration. Symbolic Sound. N/A N/A, 2017. http://kyma.symbolicsound.com/ (accessed May 18, 2017). Segarra, Kate, and Bureau of Ocean Energy Management BOEM. Exploring the Chukchi Borderlands with BOEM Oceanographer Kate Segarra. Kate Segarra. February 8, 2018. https://www.boem.gov/Exploring-the-Chukchi-Borderlands/ (accessed November 9, 2018). SparkFun Electronics. Force Sensitive Resistor. n.d. https://www.sparkfun.com/products/9375 (accessed 11 28, 2018). SparkFun. Serial Communication. n.d. 
https://learn.sparkfun.com/tutorials/serial- communication/all (accessed January 9, 2019). Stolet, Jeff. Kyma and the SumOfSines Disco Club. N/A, 2011. —. Lecture, "The Cinematics of Music Performance." 2017. Stolet, Jeffrey. Electronic Music Interactive v2. n.d. https://pages.uoregon.edu/emi/32.php (accessed January 9, 2019). —. "“Twenty-three and a Half Things about Musical Interfaces”." Keynote Address. Brussels, Belgium: Kyma International Sound Symposium, September 14, 2013. Swift, Andrew. An introduction to MIDI. n.d. http://www.doc.ic.ac.uk/~nd/surprise_97/journal/vol1/aps2/ (accessed January 9, 2019). U.S. Department of Homeland Security, USDHS. USCGC HEALY (WAGB-20). September N/A, 2018. https://www.pacificarea.uscg.mil/Our- Organization/Cutters/cgcHealy/ (accessed November 9, 2018). 136