================================================================================
================================================================================
Audio Cartography
================================================================================
================================================================================

Copyright:

  Copyright (c) 2017-2018 University of Oregon

  Author:
    Megen Brittell
    Department of Geography
    1251 University of Oregon
    Eugene OR 97403-1251

License:

  Rendered maps released under Creative Commons Attribution-Share Alike 2.0
  Generic (CC-BY-SA 2.0).

  Scripts for map production, presentation, and processing released under the
  GNU General Public License (GPL) version 3 (GPL3).

  "Configuration files for use with FSL FEAT are provided AS-IS for
  non-commercial use as a research courtesy by Dr. Megen Brittell AND ARE
  PROVIDED WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT
  LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
  PARTICULAR PURPOSE. Download and use of these files indicates acceptance of
  the License which governs the FMRIB Software Library (FSL), Release 6.0,
  Copyright 2018, The University of Oxford."

This work was funded in part by the National Science Foundation (NSF) Doctoral
Dissertation Research Improvement (DDRI) Grant #1634086 and the University of
Oregon (UO) Lewis Family Endowment. This work benefited from access to the
University of Oregon high performance computer, Talapas.

================================================================================

The "Audio Cartography" project explored the representation of geospatial data
in an auditory display, focusing on the temporal arrangement of information
within an audio stream. The work involved the design of audio symbology, the
rendering of auditory maps, and the evaluation of those maps through behavioral
and neuroimaging methods.
This collection serves to document and archive the study as part of
dissertation research in the Department of Geography at the University of
Oregon. The collection provides examples of the auditory map design
(audioCartography-maps), scripts to create new instances of the audio maps
(audioCartography-prepare), software to present the auditory maps and record
participant responses in a behavioral evaluation (audioCartography-present),
and scripts to facilitate processing of the resulting data records
(audioCartography-process). An overview of each of these resources is provided
at the bottom of this file, and additional detail is available in a README
file within each respective folder.

Behavioral and fMRI data are available in OpenNeuro:
  https://openneuro.org/datasets/ds001415

Results obtained using these scripts are reported in the author's dissertation:
  https://pqdtopen.proquest.com/pubnum/13420194.html

The remainder of this document describes the three auditory map types and a
behavioral task that supported the empirical evaluation. The three audio map
types differ in the temporal arrangement of information within the audio
stream. The decision to use neuroimaging (functional magnetic resonance
imaging, fMRI) to evaluate responses to the auditory maps influenced both the
map designs and the behavioral task. The auditory maps align sounds with the
quiet interval of a sparse sampling scan sequence to avoid masking by the
acoustic scanner noise during image acquisition.

The "sequential" map type traverses the two-dimensional data space with a
virtual cursor, following an English reading order. The cursor starts in the
northwest (top-left) grid cell and moves across the northern-most (top) row,
producing a single note for the data value in each grid cell that it
traverses. At the end of the row, the cursor returns to the west (left) and
moves one row to the south (down), continuing until it reaches the
southern-most (bottom) row.
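The reading-order traversal can be sketched as follows. This is an
illustration, not the project's actual code; the 8x8 grid size is an inference
from the timing figures (eight rows, and eight 0.5-second cells per
four-second row).

```python
# Sketch of the "sequential" cursor path: an English reading order over the
# raster grid. The 8x8 default is an inference from the timing description,
# not a stated specification.
def reading_order(n_rows=8, n_cols=8):
    """Yield (row, col) grid cells in English reading order."""
    for row in range(n_rows):        # north (top) to south (bottom)
        for col in range(n_cols):    # west (left) to east (right)
            yield (row, col)

cells = list(reading_order())        # starts northwest, ends southeast
```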
The amplitude (volume) of each note encodes the respective data value;
frequency (pitch) and note rate are held constant. Playback of a grid cell
lasts 0.5 seconds, a full row lasts four seconds, and a silent period of three
seconds separates one row from the next. With eight rows, playback of the
entire map takes fifty-six seconds. The rows and columns represent two spatial
dimensions; the listener detects patterns in the amplitude changes within a
single row as it plays, and mentally reconstructs columns based on position
within a row.

The "augmented-sequential" map type follows the same pre-determined scan
pattern as the sequential map type (English reading order), but also
redundantly encodes location (row and column within the raster grid) in the
auditory symbol. Note frequency indicates the north-south (row) position, with
high frequencies in the north and low frequencies in the south. Note rate
encodes the east-west (column) position, with a single long note in the west
and progressively shorter notes to the east. The shorter notes repeat as
needed so that the symbol for a single data value (i.e., a single raster grid
cell) has a constant total duration. The listener again mentally reconstructs
columns, but can also use the explicit location information to track absolute
location within the display.

The "concurrent" map type retains the encoding of location information in
frequency and note rate (see augmented-sequential), but plays multiple notes
simultaneously. Playback of a map using the concurrent map type takes the same
amount of time as playback of a single row of the sequential map types. For
overall consistency, the concurrent map type repeats eight times so that all
maps have a total duration of fifty-six seconds. The listener is free to
attend to any aspect of the audio, with the option to attend to different
aspects across the multiple repetitions of the map.
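As a concrete illustration, the timing and the augmented-sequential encoding
can be sketched as below. Only the 0.5-second cell, four-second row,
three-second gap, and fifty-six-second total come from the description above;
the frequency ladder, the one-extra-note-per-column rate step, and the
function name are assumptions made for this sketch.

```python
# Illustrative sketch of the "augmented-sequential" symbol. Timing constants
# follow the description in this README; the FREQS values and the
# one-note-per-column rate step are assumptions, not the study's parameters.
CELL_DUR = 0.5                      # seconds per grid cell
ROW_DUR = 8 * CELL_DUR              # four seconds per row
ROW_GAP = 3.0                       # silence between rows
MAP_DUR = 8 * (ROW_DUR + ROW_GAP)   # fifty-six seconds in total
FREQS = [880.0, 784.0, 698.5, 659.3, 587.3, 523.3, 493.9, 440.0]  # assumed; north = high

def cell_symbol(row, col, value):
    """Note events (onset_s, freq_hz, dur_s, amplitude) for one grid cell.

    Frequency encodes the north-south (row) position, note count encodes the
    east-west (column) position, amplitude encodes the data value, and the
    total duration of the symbol is constant (CELL_DUR).
    """
    n_notes = col + 1                  # one long note in the west, more eastward (assumed step)
    note_dur = CELL_DUR / n_notes      # shorter notes repeat to fill the cell
    onset = row * (ROW_DUR + ROW_GAP) + col * CELL_DUR
    return [(onset + i * note_dur, FREQS[row], note_dur, value)
            for i in range(n_notes)]
```

With these constants, the final cell of the final row ends at 53 seconds, and
the trailing three-second silence brings the total to fifty-six seconds, as
stated above.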
For example, attending to chords that emerge from the concurrent playback of
notes at different frequencies conveys multiple north-south locations (i.e.,
within a column).

Based on what they heard while listening to a map, participants made a
judgment about the relative magnitude of the data values at two given
locations. The presentation software first plays an audio map, and then
visually presents two response options, depicted as colored squares on a white
background. Participants decided which square corresponded to the location of
the higher data value, either in relative silence ("memory") or while
listening to the map a second time ("active").

--------------------------------------------------------------------------------
README (This file)
--------------------------------------------------------------------------------

File: audioCartography-README.txt

An overview of the contents of this collection.

--------------------------------------------------------------------------------
Maps
--------------------------------------------------------------------------------

Folder: audioCartography-maps

A set of example maps that demonstrate one approach to rendering spatial
information in audio and provide stimuli for evaluation. The maps take four
forms: audio, matrix, tabular, and visual.

--------------------------------------------------------------------------------
Prepare
--------------------------------------------------------------------------------

Folder: audioCartography-prepare

Scripts to systematically generate and render simple spatial data sets for use
in an empirical evaluation.

--------------------------------------------------------------------------------
Present
--------------------------------------------------------------------------------

Folder: audioCartography-present

Software and accompanying resources that implement the presentation of
auditory maps for empirical evaluation in an fMRI scanning setting.
--------------------------------------------------------------------------------
Process
--------------------------------------------------------------------------------

Folder: audioCartography-process

Scripts to facilitate data munging and analysis of the files recorded by the
presentation software and the fMRI scanner.

================================================================================
================================================================================