ROLES OF PHYSICAL AND PERCEIVED COMPLEXITY IN VISUAL AESTHETICS

by

ALEXANDER JOHN BIES

A DISSERTATION

Presented to the Department of Psychology and the Graduate School of the University of Oregon in partial fulfillment of the requirements for the degree of Doctor of Philosophy

June 2017

DISSERTATION APPROVAL PAGE

Student: Alexander John Bies
Title: Roles of Physical and Perceived Complexity in Visual Aesthetics

This dissertation has been accepted and approved in partial fulfillment of the requirements for the Doctor of Philosophy degree in the Department of Psychology by:

Margaret E. Sereno, Chairperson
Nash I. Unsworth, Core Member
Dasa Zeithamova-Dermican, Core Member
Richard P. Taylor, Institutional Representative

and Scott L. Pratt, Dean of the Graduate School

Original approval signatures are on file with the University of Oregon Graduate School. Degree awarded June 2017.

© 2017 Alexander John Bies. "Roles of Physical and Perceived Complexity in Visual Aesthetics" by Alexander John Bies is licensed under CC BY 4.0.

DISSERTATION ABSTRACT

Alexander John Bies, Doctor of Philosophy, Department of Psychology, June 2017
Title: Roles of Physical and Perceived Complexity in Visual Aesthetics

The aesthetic response is a multifaceted and subtle behavior that ranges in magnitude from the sublime to the mundane. Few studies have investigated the subtler, weaker aesthetic responses to mundane scenes. Yet all aesthetic responses rely upon sensory-perceptual processes, which serve as a crucial first step in contemporary models of the aesthetic response. As such, understanding the roles of perceptual processes in aesthetic responses to the mundane provides insight into all aesthetic responses. Variation in the physical properties of aesthetic objects must cause such responses, but to understand the relationship, those physical properties must be quantified. Only then can the mechanism be determined.
Here, I present the theoretical basis and reasons for interest in such a test of mundane aesthetic responses in Chapter I. In Chapter II, I present metrics that quantify the physical properties of natural scenes, using computer-generated images that model the complexity of natural scenes to validate these measurement techniques. The methods presented in Chapter II are adapted to analyze the physical properties of natural scenes in Chapter III, extending the analysis to photographs and clarifying the relationship between the properties fractal dimension and spectral scaling decay rate. A behavioral study is presented in Chapter IV that investigates the extent to which perceptual responses about complexity serve as an intermediary between aesthetic ratings and the physical properties of the images described in Chapters II and III. Chapter V summarizes the results of these studies and explores future directions. This dissertation includes previously published and unpublished coauthored material.
CURRICULUM VITAE

NAME OF AUTHOR: Alexander John Bies

GRADUATE AND UNDERGRADUATE SCHOOLS ATTENDED:
University of Oregon, Eugene
University of Evansville, Evansville

DEGREES AWARDED:
Doctor of Philosophy, Psychology, 2017, University of Oregon
Master of Science, Psychology, 2012, University of Oregon
Bachelor of Science, Psychology and Philosophy, 2008, University of Evansville

AREAS OF SPECIAL INTEREST:
Aesthetics and Complexity Perception
Fractal and Fourier Image Analysis
Structural Equation Modeling

PROFESSIONAL EXPERIENCE:
Dissertation Research Fellow, University of Oregon, 2016-2017
Graduate Teaching Fellow, Department of Psychology, University of Oregon, 2010-2016

GRANTS, AWARDS, AND HONORS:
Dissertation Research Award, Roles of Physical and Perceived Complexity in Visual Aesthetics, American Psychological Association, 2017
College of Arts and Sciences Dissertation Fellowship, Perceiving Beauty through Fractal Complexity, University of Oregon, 2016-2017
NIH Travel Award, NIH, 2009

PUBLICATIONS:

Bies, A. J. (2016). Cockroaches now evading death by getting bitter about sweeteners. The Journal of Undergraduate Neuroscience Education, 15(1), R17-R18.

Bies, A. J., Boydston, C. R., Taylor, R. P., & Sereno, M. E. (2016). Relationship between fractal dimension and spectral scaling decay rate in computer-generated fractals. Symmetry, 8(7), 66.

Juliani, A. W., Bies, A. J., Boydston, C. R., Taylor, R. P., & Sereno, M. E. (2016). Navigation performance in virtual environments varies with the fractal dimension of the landscape. Journal of Environmental Psychology, 47, 155-165.

Bies, A. J., Blanc-Goldhammer, D., Boydston, C. R., Taylor, R. P., & Sereno, M. E. (2016). Aesthetic responses to exact fractals driven by physical complexity. Frontiers in Human Neuroscience, 10, 210. (Invited).

Roach, S. M., San Juan, J. G., Suprak, D. N., Lyda, M., Bies, A. J., & Boydston, C. R. (2015).
Passive hip range of motion is reduced in active subjects with chronic low back pain compared to controls. International Journal of Sports Physical Therapy, 10(1), 13-20.

ACKNOWLEDGMENTS

Thanks to Professors Sereno and Taylor for their mentorship, and to my committee for their guidance and feedback throughout the process. This project would not have been possible without the support of these four and the many other colleagues and friends to whom I have turned for training and constructive criticism. The projects described here were supported in part by a grant from the National Institute on Drug Abuse, R21DA024293, to Dr. Margaret Sereno at the University of Oregon, as well as an American Psychological Association Dissertation Research Award and a University of Oregon College of Arts and Sciences Dissertation Research Fellowship to Alexander Bies.

To My Family, As I Would Not Be Here Without You

TABLE OF CONTENTS

Chapter

I. INTRODUCTION
II. RELATIONSHIP BETWEEN FRACTAL DIMENSION AND SPECTRAL SCALING DECAY RATE IN COMPUTER-GENERATED FRACTALS
  Introduction
  Materials and Methods
    Midpoint Displacement Fractals
      One-Dimensional Midpoint Displacement Fractals
      Two-Dimensional Midpoint Displacement Fractals
    One- and Two-Dimensional Fractal Fourier Noise
    Measurement of the Box Counting Dimension
      Box Counting Analysis of 1-Dimensional Fractals: D(Mountain Edge)
      Box Counting Analysis of xy Slices of 2-Dimensional Fractal Coastlines: D(Coastal Edge)
    Fourier Decomposition and Measurement of β
      Spectral Scaling Analysis of 1-Dimensional Fractals: β(Mountain Edge)
      Spectral Scaling Analysis of 2-Dimensional Fractal Intensity Images: β(Surface)
  Results
    Relationship Between D(Mountain Edge) and β(Mountain Edge) for 1-Dimensional Fractals
    Relation of D(Mountain Edge) and D(Coastal Edge) for 2-Dimensional Fractals
    Relation of β(Mountain Edge) and β(Surface) for 2-Dimensional Fractals
    Relation of β(Mountain Edge) to D for 2-Dimensional Fractals
    Relation of β(Mountain Edge) to D(Coastal Edge) of 2-Dimensional Fractals
    Relation of β(Surface) to D(Coastal Edge) for 2-Dimensional Fractals
  Discussion
    Mathematical Relationships Between Ds and βs
    Distinguishing βs
    A Generalized Equation to Relate Ds and βs
    Importance of the Relationship Between D and β for Current and Future Research
  Bridge to Chapter III
III. RELATIONSHIP BETWEEN FRACTAL DIMENSION OF EDGES AND SPECTRAL SCALING DECAY RATE IN PHOTOGRAPHS OF CLOUDS AND LANDSCAPES
  Introduction
  Materials and Methods
    Image Sets
      Photographs of Clouds from an Internet Sample
      Photographs of Clouds from a Single Camera with a Telephoto Zoom Lens
      Photographs of Landscapes from a Single Camera with a Wide-Angle Zoom Lens
    Image Analysis Techniques
      Power Spectrum Decay Rate
      Fractal Dimension
        Edge Extraction
        Box Counting Analysis
  Results and Discussion
    Photographs of Clouds from a Convenience Sample
      Descriptive Statistics
      Correlation Analysis
      Discussion
    Photographs of Clouds Taken from a Single Camera with a Telephoto Zoom Lens
      Descriptive Statistics
      Correlation Analysis
      Discussion
    Photographs of Landscapes Taken from a Single Camera with a Wide-Angle Zoom Lens
      Descriptive Statistics
      Correlation Analysis
      Discussion
    General Discussion
  Conclusions
  Bridge to Chapter IV
IV. PREDICTING PERCEIVED AESTHETIC VALUE FROM PHYSICAL AND PERCEIVED COMPLEXITY
  Aesthetic Value
  Perceived Complexity
  Physical Complexity
  Testing the Aesthetics Model
  Individual Differences – In Images, Not People
  Method
    Stimuli
      Random Fractals
      Cloud Photographs
      Landscape Photographs
      Image Analyses
    Participants
    Procedure
  Results and Discussion
    Response Times
    Raw Rating Data Screening
    Parceled Data – Descriptive Statistics and Correlation Analyses
    Factor Structure of Participant Ratings
    Tests of Invariance of Ratings Across Sets of Images
    Mediation Analyses
      Random Fractals
      Cloud Photographs
      Landscape Photographs
    Discussion
  General Discussion
V. CONCLUSIONS
REFERENCES CITED

LIST OF FIGURES

Figure

2.1. Plots of a Fractal Terrain with D = 2.5 and Its Intersection with Axial Planes
2.2. Illustration of the Generation of 1-Dimensional Midpoint Displacement Fractals
2.3. Plots of 1-Dimensional Statistical Midpoint Displacement Fractals
2.4. Illustration of the Generation of a 2-Dimensional Midpoint Displacement Fractal
2.5. Plots of 2-Dimensional Midpoint Displacement Fractals
2.6. Generation of 1-Dimensional Fourier Noise Fractals
2.7. Generation of 2-Dimensional Fourier Noise Fractals
2.8.
Edge Extraction Procedure for 2-Dimensional Midpoint Displacement Fractals
2.9. Fourier Decomposition of 2-Dimensional Fractals
2.10. 1-Dimensional Fractal Measurements
2.11. 2-Dimensional Fractal D Measurements
2.12. 2-Dimensional Fractal β Measurements
2.13. Mountain Profiles from 1- and 2-Dimensional Fractal Fourier Noise
2.14. 2-Dimensional Fractal Measurements of Fourier Noise
2.15. Real and Imaginary Frequency Components of 1- and 2-Dimensional Fractal Fourier Noise Plotted in 2- and 3-Dimensional Spaces
3.1. Example Photographs
3.2. Relations of D and β Within Image Sets
4.1. Factor Structure of the Rated Properties Related to Perceived Aesthetic Value and Complexity
4.2. Schematic and Models
4.3. Latent Mediation Models Showing the Effects of D and β on Aesthetic Value, and the Intervening Role of Perceived Complexity

LIST OF TABLES

Table

4.1. Descriptive Statistics for the Parceled Ratings of Random Fractals
4.2. Descriptive Statistics for the Parceled Ratings of Clouds
4.3.
Descriptive Statistics for the Parceled Ratings of Landscapes
4.4. Correlations Among the Measurements of Physical Properties and Parceled Ratings of Random Fractals
4.5. Correlations Among the Measurements of Physical Properties and Parceled Ratings of Clouds
4.6. Correlations Among the Measurements of Physical Properties and Parceled Ratings of Landscapes
4.7. Fit Indices for All Measurement Models and Invariance Tests
4.8. Fit Indices for All Mediation Models

CHAPTER I

INTRODUCTION

Natural scenes can drive intense aesthetic responses. But do scene properties contribute to the aesthetic response directly or by a more nuanced pathway? Aesthetic judgments are nuanced, depending on considerations such as the originality or uniqueness of the object or scene, in addition to individual differences among observers such as personality and past experience. Still, from the time of Plato and Aristotle, philosophers (e.g., Addison & Steele, 1879; Mill, 1869) and scientists (e.g., Berlyne, 1971; Birkhoff, 1933) have entertained the notion that objects may exhibit physical properties, such as symmetry, that we find aesthetically pleasing. Theoretical models presented by Berlyne (1971), Leder, Belke, Oberst, and Augustin (2004), and Redies (2015) suggest that intermediate physiological, emotional, and cognitive processes contribute to the aesthetic response as a series of stages. This notion is consistent with the hierarchical processing that occurs within the visual system (Van Essen, 1979).
The aim of this dissertation is to test one such plausible cognitive mechanism: cognitive processes transform perceived qualities such as complexity into evaluative judgments, using them as weights in the determination of aesthetic value, and those perceptual qualities are psychological instantiations of the physical properties of objects or phenomena in the world. For example, consider the aesthetic response to viewing nature within the context of such a model. Light reflects off of objects in the scene, enters the eye, is transduced at the retina into neural signals, and those signals are relayed to the cerebral cortex. Many neurons work in concert to produce a representation of discrete objects of particular sizes at specific depths. Effortlessly and unconsciously, we recognize the branching patterns of trees, the texture of clouds, and many other natural processes, and something about this induces a sense of pleasure and a recognition of beauty. In this example, the cognitive process of comprehending and responding to the complexity of objects such as trees and clouds may serve as an intermediate step that affects the judged aesthetic value of a scene. Because such cognitive processes are abstract and difficult to define or measure exactly, I will use observable variables that reflect these latent constructs. The variance shared by responses about how "stunning" or "beautiful" an image is can provide a closer approximation of that image's aesthetic value. Similarly, by measuring ratings of properties such as "intricacy" and "simplicity," I can arrive at an approximation of the estimated complexity of a scene, in much the same manner that Attneave (1957) and Cutting and Garvin (1987) showed that such ratings correlate with the physical complexity of abstract, geometric patterns. The extent to which these cognitive processes can be explained by a property such as the physical complexity of the scene can then be assessed through regression analyses.
In Chapter IV, I return to the idea that discrete cognitive processes contribute to evaluative judgments in an ordered fashion, using factor and regression analyses to test specific hypotheses that arise from these theories. But first, in Chapters II and III, I introduce a pair of measurements that quantify patterns' physical complexity for several sets of images. In Chapter II, I introduce the two main experimental techniques used for measuring scaling phenomena: fractal dimension, which describes the complexity of patterns that repeat at smaller scales (Mandelbrot, 1983), and spectral scaling decay rate, which describes the complexity of textures through Fourier analysis. While the measure of fractal dimension introduced in Chapter II quantifies the rate at which fine structure is introduced in these patterns' edges, the spectral scaling decay rate describes how textures repeat at finer scales. This is important because it allows me to determine the extent to which the scaling of textures and edges is interdependent in random fractal images and photographs of nature. These properties of random fractal patterns have been related mathematically (Voss, 1986), but not empirically for a discrete number of scales of measurement. Accurate measurement of the rate at which fine structure is introduced in these phenomena is crucial to support the premise that the physical properties of fractal scaling and spectral scaling in images impact aesthetic responses. As such, I validate these measures in Chapter II. In Chapter III, I repeat these measurements using photographs of two classes of natural objects: clouds and landscapes. I extend the previous chapter's method of measuring fractal dimension to these categories of images, which allows for precise, fast edge extraction in photographs. Together, Chapters II and III serve as the basis for Chapter IV by providing estimates of the physical complexity that may be driving aesthetic responses to natural scenes in humans.
In the following chapter, Chapter II, I lay the foundation for the test of these psychological theories by generating random fractal noises and measuring their physical complexity. This dissertation contains previously co-authored and published material. The study described in Chapter II has been published in Symmetry and was co-authored with C. R. Boydston, R. P. Taylor, and M. E. Sereno. The study described in Chapter III was co-authored with B. H. Lee, R. P. Taylor, and M. E. Sereno.

CHAPTER II

RELATIONSHIP BETWEEN FRACTAL DIMENSION AND SPECTRAL SCALING DECAY RATE IN COMPUTER-GENERATED FRACTALS

Published as Bies, A. J.; Boydston, C. R.; Taylor, R. P.; Sereno, M. E. Relationship between fractal dimension and spectral scaling decay rate in computer-generated fractals. Symmetry 2016, 8(7), 66.

1. Introduction

Researchers from diverse disciplines ranging from physics to psychology have converged on the question of how to quantify the scaling symmetry of natural objects. In one camp, fractals researchers describe objects such as clouds, coastlines, mountain ridgelines, and trees using a scale-invariant power law to measure the rate at which structure appears as the scale of measurement decreases [1,2,3,4,5,6], though debates continue regarding which power laws, if any, best describe natural phenomena [7,8,9,10,11]. The following equation is a common example:

N ~ L^(−D) (1)

where N is the extent to which the fractal fills space as measured at scale L [1,4]. The power law's exponent D is called the fractal dimension. Consider Figure 1a,b, which plots a fractal terrain in x, y, and z space, as a demonstration of how D relates to the object's Euclidean dimension E. The topological dimension of this surface is E = 2 and it is embedded in a space of E = 3. Its fractal dimension, D(Surface), lies in the range between these two Euclidean dimensions: 2 < D(Surface) < 3.
Taking a vertical slice through this terrain (i.e., taking its intersection with the xz or yz plane) creates a fractal "mountain" profile (see Figure 1c) quantified by D(Mountain Edge) = D(Surface) − 1. Similarly, taking a horizontal slice creates a fractal "coastline" (see Figure 1d) with D(Coastal Edge) = D(Mountain Edge). To measure D(Surface), mathematicians and natural scientists typically determine D(Mountain Edge) or D(Coastal Edge) (and then add 1) because the measurements involved are easier and faster to implement than measurements of D(Surface). Vision researchers similarly use a power law to capture the scale-invariant properties of the fractal. However, they typically focus on the power spectrum decay rate (β) of the terrain's intensity image [12,13,14,15,16,17,18,19,20,21]. This intensity image is generated by converting the terrain height into either grayscale variations (high is white, low is black) to create a grayscale map (see Figure 1b) or color variations to create a "heat map". The following power law then characterizes the fractal structure in these maps and has, in particular, proved useful for quantifying the spectral scaling decay rate of grayscale images of natural scenes:

S_V(f) = 1/(c f^β) (2)

where S_V(f) is the spectral density (power), f is the spatial frequency, and β and c are constants. Voss [5] considered the Hurst exponent H, which by definition is related to D as follows:

D = E + 1 − H (3)

where E is the Euclidean topological dimension. H and D lie in the following ranges: 0 < H < 1 and E < D < E + 1. He then derived the relationship between H and β for a fractional Brownian function:

β = 2H + 1 (4)

where 1 < β < 3.
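Eliminating H from Equations (3) and (4) yields Voss's approximation, Equation (5) below. These relations are simple enough to capture in a few lines; the following Python sketch is illustrative (the function names are mine, not from the study):

```python
def d_from_hurst(e: int, h: float) -> float:
    """Eq. (3): D = E + 1 - H, with 0 < H < 1 so that E < D < E + 1."""
    return e + 1 - h

def beta_from_hurst(h: float) -> float:
    """Eq. (4): beta = 2H + 1, so that 1 < beta < 3."""
    return 2 * h + 1

def d_from_beta(e: int, beta: float) -> float:
    """Eq. (5): D = E + (3 - beta) / 2, obtained by eliminating H from (3) and (4)."""
    return e + (3 - beta) / 2
```

For the terrain of Figure 1 (E = 2, D = 2.5), these relations give H = 0.5 and β(Mountain Edge) = 2.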
Accordingly, Voss [5] stated that an approximation of the relationship for "the statistically self-affine fractional Brownian function V_H(x), with x in an E-dimensional Euclidean space, which has a fractal dimension D and spectral density S_V(f) ∝ 1/f^β, for the fluctuations along a straight line path in any direction in E-space" is provided by the equation

D = E + (3 − β)/2 (5)

Figure 1. Plots of a fractal terrain with D = 2.5 and its intersection with axial planes: (a) "surface" plot of a fractal terrain; (b) intensity "image" of the terrain; (c) "mountain edge" profile of an "xz-slice" or "yz-slice" of the terrain; (d) "coastal edge" of an "xy-slice" of the terrain.

However, through this definition, Voss [5] stipulates a different measure of β than that typically used by vision researchers. The definition specifies that the measure of β be taken along the intersection of the fractal with a plane parallel to the z-axis—a mountain edge—by measuring the spectral decay of the resulting 1-dimensional trace—a mountain profile. Thus, β in Equations (4) and (5) is what we will hereafter call β(Mountain Edge). The relationship highlighted in Equation (5) between D and β(Mountain Edge) has not been observed empirically for either 1-dimensional (E = 1, D ≤ 2) or 2-dimensional (E = 2, D ≤ 3) fractals. To experimentally discern this relationship, we use two methods for generating fractals—namely, midpoint displacement fractals (in which D serves as the input parameter) and fractal Fourier noise (in which β is the input). We find that, when measured along a straight-line path, the relationship described by Voss [5] holds for fractals with both E = 1 and E = 2. However, the relationship described by Voss [5] does not extend to the observed relationship between measurements of D and β when β is measured by the standard method of vision researchers (in a 2-dimensional Fourier space), a measure we will hereafter call β(Surface).
A new equation that extends the relationship between D and β to multi-dimensional Fourier spaces has the potential to enhance discourse among mathematicians, who are experts in the geometry of fractals; physicists, who are experts in surfaces and textures; and vision scientists, who are experts in animals' sensation and perception of geometric shapes, surfaces, and textures. Further consideration of the relationship between D and β is both important and timely because new studies are being performed using fractals to investigate a variety of behaviors, including aesthetics [22,23,24,25,26,27], navigation [28], object pareidolia (perceiving coherent forms in noise) [29,30], sensitivity [24], and associated neural mechanisms [31,32,33]. This is especially important in aesthetics research, where there have been claims of universality in preference for patterns of moderately low complexity [23,26,34,35,36,37]. To test this hypothesis, it is necessary to be able to translate between the units of measurement used by researchers who alternately use D [22,25,26,28,29,31,33,34,35,36,37,38,39,40], β [14,24,30,32,41,42,43,44,45,46], or, infrequently, both [23,27]. The crux of the problem, perhaps, is that D is a general parameter that quantifies complexity in a variety of patterns, whereas β is limited (at least in practice) in its ability to quantify some patterns' complexity. For example, Fourier analysis is poorly suited to describe the complexity of patterns including strange attractors and some line fractals (e.g., dragon fractals and Koch snowflakes), which have been used by vision researchers to study aesthetics [25,31,34,35] and perceived complexity [47]. This provides a strong impetus to convert to D when forming general conclusions. Still, there is a great deal of utility in presenting fractal noise patterns that are defined in terms of β as visual stimuli, precisely because they mimic the statistics of natural scenes [12,13,15,16,17,18,19,20,21].
Here, we provide the basis for translation between the parameters D and β in a general equation that follows from empirical analysis of the relationships between measures of D and β.

2. Materials and Methods

Midpoint displacement and Fourier noise fractals were generated and analyzed in MATLAB version 2015b.

2.1. Midpoint Displacement Fractals

Sets of random midpoint displacement fractal lines (see Section 2.1.1) and images (see Section 2.1.2) were generated using an algorithm described by Fournier, Fussel, and Carpenter [2], which allowed us to specify D.

2.1.1. One-Dimensional Midpoint Displacement Fractals

To generate each 1-dimensional midpoint displacement fractal as a trace, a vertex, V, was added to the midpoint of an initial set of two endpoints and displaced vertically by a value randomly selected from a Gaussian distribution, with σ = 1, that was scaled by a factor of 2^(−2(3 − D)(R + 1)), where D is the fractal dimension and R is the current level of recursion. This process is shown schematically for an exact midpoint displacement fractal that is not affected by random perturbations in Figure 2. As in the schematic, the scaling factor of the fractals generated for this study was held constant for each vertex at a given level of recursion, and changed with each level of recursion. The vertices at each recursion served as endpoints in the next level of recursion for R recursions in order to generate time-series data.

Figure 2. Illustration of the generation of 1-dimensional midpoint displacement fractals. (a) Cartoon graph of a scaling plot in log–log coordinates that determines the rate of scaling of midpoint displacements across recursions for high (solid line) and low (dashed line) D fractals; (b) schematics of recursions 0–2 are shown for low (dashed line) and high (solid line) D exact midpoint displacement fractals. Gray arrows indicate the displacements that occur with each recursion in (b).
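As a concrete illustration, the 1-dimensional procedure above can be sketched in Python/NumPy. This is an assumed re-implementation (not the study's MATLAB code) that applies the per-recursion scale factor 2^(−2(3 − D)(R + 1)) quoted in the text:

```python
import numpy as np

def midpoint_displacement_1d(d: float, recursions: int, seed: int = 0) -> np.ndarray:
    """Sketch of 1-dimensional midpoint displacement with target dimension d (1 < d < 2).

    Returns a trace of 2**recursions + 1 heights; the two initial endpoints stay at 0.
    """
    rng = np.random.default_rng(seed)
    trace = np.zeros(2)                               # initial pair of endpoints
    for r in range(recursions):
        # per-level displacement scale from the text: 2^(-2(3 - D)(R + 1))
        scale = 2.0 ** (-2 * (3 - d) * (r + 1))
        mids = (trace[:-1] + trace[1:]) / 2           # midpoints of current segments
        mids = mids + rng.normal(0.0, 1.0, mids.size) * scale
        out = np.empty(trace.size + mids.size)        # interleave endpoints and midpoints
        out[0::2] = trace
        out[1::2] = mids
        trace = out                                   # midpoints become next level's endpoints
    return trace
```

Because the Gaussian draws depend only on the seed, reusing one seed while varying d reproduces the paper's trick of generating traces that differ in D but share the structure introduced at each recursion.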
We retained the random values used for vertical displacement to generate sets of fractals that varied in D but retained the structure introduced at each level of recursion (see Figure 3). Figure 3. Plots of 1-dimensional statistical midpoint displacement fractals. (a–c) Fractal traces that vary in D, such that D = 1.2, 1.5, and 1.8, are generated from a single set of random numbers that contribute to the variable length and direction of displacement of the midpoints at each recursion. 2.1.2. Two-Dimensional Midpoint Displacement Fractals To generate each 2-dimensional midpoint displacement fractal as an image, a vertex, V, was added to the midpoint of an initial set of four edge points and displaced according to the generation rules described in Section 2.1.1 (see illustration of the generation process in Figure 4). The vertices at each recursion served as edges in the next level of recursion for n recursions in order to generate gray-scale intensity images. Figure 4. Illustration of the generation of a 2-dimensional midpoint displacement fractal as the heights (indicated by grayscale intensity) of particular points are specified over eight recursions. (a–d) The second, fourth, sixth, and final recursions are shown. In (a–c), white space indicates points for which height has not yet been specified. We retained the random values used for vertical displacement to generate sets of fractal terrains that varied in D but retained consistent large-scale structures (see Figure 5). 2.2. One- and Two-Dimensional Fractal Fourier Noise Sets of fractal noise were generated using an algorithm described by Saupe [3], which allowed us to specify β. Figure 5. Plots of 2-dimensional midpoint displacement fractals. (a–c) Surface plots of fractals generated from a single set of random numbers that vary in D, such that D = 1.2, 1.5, and 1.8; (d–f) Grayscale intensity map images of the surface plots in (a–c).
For an image of size x by y pixels, an x by y amplitude matrix is created in which amplitude is specified for each spatial frequency by applying Equation (2). Each frequency is then assigned a phase specified by a phase matrix of size x by y, which consists of numbers that are randomly selected from a Gaussian distribution. The amplitude and phase matrices are then subjected to an inverse Fourier transform to generate a time series (if x or y = 1) or an image (if x and y > 1). The resulting fractals have scaling properties defined by their respective input β values (see Figure 6 and Figure 7). As with the midpoint displacement fractals, we retained the matrices of random numbers used to determine the phases of each spatial frequency (see Figure 6d and Figure 7d) to generate sets of fractal noise images that differed only in their spectral scaling. One-dimensional fractal time series were generated from phase maps with amplitude series that varied in the specified β, β(Input) (see Figure 6e–g). Values of β(Input) were paired with phase maps to create 2-dimensional fractal images as well (see Figure 7e–g). Figure 6. Generation of 1-dimensional Fourier noise fractals. (a–c) Power (y-axis) as a function of frequency (x-axis) for β(Input) = 2.6, 2, and 1.4; (d) Set of random phases (y-axis) as a function of frequency (x-axis); (e–g) fractal traces resulting from the pairing of the phases in panel (d) with power spectra from (a–c), respectively. Figure 7. Generation of 2-dimensional Fourier noise fractals. (a–c) Power (z-axis) as a function of frequency in xy coordinates for β(Input) = 2.6, 2, and 1.4; (d) Set of random phases (z-axis) as a function of frequency in xy coordinates; (e–g) fractal terrains resulting from the pairing of the phases in panel (d) with power spectra from (a–c), respectively. 2.3. Measurement of the Box Counting Dimension 2.3.1.
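The inverse-FFT generation step can be sketched as follows. This is a hypothetical NumPy helper, not the MATLAB implementation used in the study: since power falls as f^(−β), the amplitude matrix falls as f^(−β/2); phases here are drawn uniformly over [0, 2π) for simplicity, whereas the text describes sampling random values from a Gaussian distribution.

```python
import numpy as np

def fourier_noise_2d(beta, size=256, rng=None):
    """Generate 2-dimensional fractal Fourier noise with power ~ 1/f**beta."""
    rng = np.random.default_rng(rng)
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.hypot(fx, fy)                     # radial spatial frequency
    f[0, 0] = np.inf                         # zero the DC amplitude
    amplitude = f ** (-beta / 2.0)           # power f**-beta => amplitude f**(-beta/2)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=(size, size))
    # Pair amplitudes with random phases and invert the transform;
    # the real part is the fractal terrain image.
    return np.fft.ifft2(amplitude * np.exp(1j * phase)).real
```

Holding the phase matrix (the seed) fixed while varying β yields images that differ only in their spectral scaling, as described above.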
Box Counting Analysis of 1-Dimensional Fractals: D(Mountain Edge) Box counting was performed on the intersection of 1-dimensional fractals with a horizontal line at the trace’s median height. For the 2-dimensional fractal images, a fractal dust set was formed by taking the intersection of the height values of each row of the image with a line intersecting the median height. This dust set was used to compute the box counting dimension through the use of custom MATLAB scripts. Briefly, for each box size with side length L, halving from the length of the fractal down to a single pixel for a total of n steps, the image is covered with a set of boxes, and the number of boxes that contain any non-zero quantity of points is counted. The box counts of pairs of neighboring grid scales from L/2^3 to L/2^(n − 3) were averaged to compute D, while the counts from the grid scales outside this range (the largest and smallest boxes, at steps 0, 1, 2, n − 2, n − 1, and n) were not used. The embedding dimension of the series of points is 1, but the embedding dimension of the fractal mountain edge is 2, so we averaged these values of D, computed for pairs of grid sizes, and added 1 to report the fractal dimension of the 1-dimensional fractals, D(Mountain Edge), which span the range 1 < D(Mountain Edge) < 2. 2.3.2. Box Counting Analysis of xy Slices of 2-Dimensional Fractal Coastlines: D(Coastal Edge) To isolate the coastal edge of a fractal terrain, the median intensity value of the intensity image was selected as the level at which a binary threshold procedure was applied with the MATLAB command im2bw. The median, in particular, was selected because it is the level at which all of the resultant binary images have roughly equivalent black and white regions across the range of D. The edge of the binary images was extracted with the MATLAB command bwperim. Figure 8 provides examples of the edge extraction process.
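A simplified version of the box-counting step can be written as below. This hypothetical helper fits a single least-squares line over dyadic grid scales rather than averaging slopes from pairs of neighboring scales as described in the text, so it is a sketch of the idea rather than the study's exact procedure.

```python
import numpy as np

def box_count_dimension(binary, n_scales=6):
    """Estimate the box-counting dimension of a 2-D binary pattern."""
    binary = np.asarray(binary, dtype=bool)
    side = min(binary.shape)
    binary = binary[:side, :side]
    sizes, counts = [], []
    box = side // 2
    for _ in range(n_scales):
        if box < 1:
            break
        k = side // box
        # Tile the image with boxes of side `box` and count the boxes
        # that contain at least one set pixel.
        trimmed = binary[:k * box, :k * box]
        blocks = trimmed.reshape(k, box, k, box)
        counts.append(blocks.any(axis=(1, 3)).sum())
        sizes.append(box)
        box //= 2
    # D is the negative slope of log N against log L.
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

As a sanity check, a filled plane yields D = 2 and a straight line yields D = 1, bounding the fractal cases in between.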
This luminance edge, extracted from the coastal edge images and shown in Figure 8g–i, served as the set on which box counting was performed. Box counting was performed on the coastal edge images as described in Section 2.3.1, with the exception that the boxes were applied as a grid over the image. Here, the embedding dimension is 2, so for the coastal edge images we report the measured values of D, D(Coastal Edge), which span the range 1 < D(Coastal Edge) < 2. Figure 8. Edge extraction procedure for 2-dimensional midpoint displacement fractals. (a–c) Grayscale intensity map images of fractals generated from a single set of random numbers that vary in D, such that D = 1.2, 1.5, and 1.8; (d–f) Binary images resulting from the threshold procedure applied to the terrains shown in (a–c); (g–i) Coastal edges extracted from the binary images shown in (d–f). 2.4. Fourier Decomposition and Measurement of β 2.4.1. Spectral Scaling Analysis of 1-Dimensional Fractals: β(Mountain Edge) Fractal traces (see examples in Figure 3 and Figure 6e–g) were decomposed with a 1-dimensional Fast Fourier Transform. The square of the real-valued component was retained. Power was plotted against frequency in log–log coordinates, and the slope of a least squares regression line was retained as an empirical measure of the spectral decay rate, β(Mountain Edge), of the time series. 2.4.2. Spectral Scaling Analysis of 2-Dimensional Fractal Intensity Images: β(Surface) Each image was decomposed with a 2-dimensional Fast Fourier Transform. The lowest frequency components were centered, and the square of the real-valued component was retained and transformed into polar coordinates. For each polar angle, power was plotted against frequency in log–log coordinates, and the average was retained as an empirical measure of the spectral decay rate, β(Surface), of the image (see Figure 9a–c). Figure 9. Fourier decomposition of 2-dimensional fractals.
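The surface spectral measurement of Section 2.4.2 can be sketched as follows. This hypothetical NumPy helper averages power in integer-radius annuli of the centred spectrum rather than fitting each polar angle separately, which is a common simplification of the same idea.

```python
import numpy as np

def spectral_slope_2d(image):
    """Estimate beta(Surface) from the radially averaged power spectrum."""
    image = np.asarray(image, dtype=float)
    n = min(image.shape)
    image = image[:n, :n]
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    # Integer radial distance of each component from the centred DC term.
    idx = np.indices((n, n)) - n // 2
    radius = np.hypot(idx[0], idx[1]).astype(int)
    sums = np.bincount(radius.ravel(), weights=power.ravel())
    counts = np.bincount(radius.ravel())
    r = np.arange(sums.size)
    keep = (r >= 1) & (r <= n // 2) & (counts > 0)   # skip DC and corners
    mean_power = sums[keep] / counts[keep]
    # beta is the negative slope of log power against log frequency.
    slope, _ = np.polyfit(np.log(r[keep]), np.log(mean_power), 1)
    return -slope
```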
(a) Fractal surface generated with the inverse Fourier method; (b) Power spectrum of the Fourier decomposition of the terrain shown in (a); (c) Power spectrum shown in (b) with low spatial frequencies centered. 3. Results 3.1. Relationship between D(Mountain Edge) and β(Mountain Edge) for 1-Dimensional Fractals We first analyzed 1-dimensional midpoint displacement and Fourier noise fractals to validate our measures and test Voss’s approximation of the relationship between D and β for 1-dimensional fractals. To validate our box counting measure, values of D ranging from 1 to 2 in steps of 0.05 were used to generate 100 sets of midpoint fractals of length 2^20. The measurement technique described in Section 2.3.1 over-estimates D by a progressively smaller amount as D approaches 2, as shown in Figure 10a. This is a minor measurement error. Accordingly, the best linear fit, D(Input) = 0.91 + 0.16 × D(Mountain Edge), for which R² = 0.97 (Figure 10a, black line), deviates from the unity line (Figure 10a, blue line) by only a small amount. Figure 10. 1-dimensional fractal measurements. (a) Midpoint displacement fractals’ D(Mountain Edge) measurements plotted against their D(Input) values; (b) Fourier noise fractals’ β(Mountain Edge) measurements plotted against their β(Input) values; (c) Fourier noise fractals’ D(Mountain Edge) measurements, adjusted by the linear fit from panel (a), plotted against their β(Mountain Edge) measurements. In each panel, the best linear fit for the data is shown with a black line. In panels (a,b), unity is represented by the blue line. In panel (c), Voss’s approximation (Equation (5)) is represented by the red line. Data are colored to distinguish adjacent input values such that each datum's color is determined by D(Input) in panel (a) and β(Input) in panels (b,c). To validate our spectral scaling rate measure, values of β ranging from 1 to 3 in steps of 0.1 were used to generate 100 sets of fractal Fourier noise of length 2^20.
The measurement technique described in Section 2.4.1 does well at approximating β (as shown in Figure 10b), with the best linear fit, β(Input) = 0.0003 + 0.9999 × β(Mountain Edge), for which R² = 1.00 (Figure 10b, black line), overlapped by the unity line (Figure 10b, blue line). These measures reliably reflect their input parameters, so we determined the extent to which these empirical measurements are consistent with Voss’s approximation, Equation (5). Because our box counting technique results in a small measurement error across the range of dimension, we adjusted the measured D values of the fractal Fourier noise by substituting D(Mountain Edge) into the experimentally determined regression equation stated above in this section and computing the expected D(Input), which we call D(Adjusted) (as shown in Figure 10c). The best linear fit—D(Adjusted) = 2.48 − 0.50 × β(Mountain Edge)—for which R² = 0.97 (Figure 10c, black line), is close to Voss’s approximation (Equation (5)), given E = 1 (Figure 10c, red line). We conclude that Voss’s approximation for fractals with E < D < E + 1 and 1 < β < 3 is accurate when E = 1. We also note that the difference between Equation (5) and our regression equation is inconsequential, and both overlap our measurements across the range of D and β. 3.2. Relation of D(Mountain Edge) and D(Coastal Edge) for 2-Dimensional Fractals Voss [5] generalized the relationship between D and β to n-dimensional spaces in Equation (5), so we next consider the case of E = 2, the dimensional space of our visual field. To do so, we first generated 100 sets of midpoint displacement fractals with values of D ranging from 2 to 3 in steps of 0.05 with side length 2^11. For each fractal, the dimension of the mountain profile was measured according to the technique described in Section 2.3.1. These measures of D(Mountain Edge) were averaged together for each image.
Again, this under- and over-estimates D by a progressively larger amount as D approaches 2 and 1, respectively, as shown in Figure 11a. The best linear fit—D(Input) = 0.38 + 0.77 × D(Mountain Edge), for which R² = 0.87 (Figure 11a, black line)—deviates from the unity line (Figure 11a, blue line) in a manner similar to that observed for 1-dimensional fractals. When the coastal edge of the image was measured according to the technique described in Section 2.3.2, we observe a similar trend, with the best linear fit—D(Input) = 0.28 + 0.81 × D(Coastal Edge), for which R² = 0.97 (Figure 11b, black line)—deviating from the unity line (Figure 11b, blue line) in a manner similar to that observed for the dust measurement technique. These measures of D, averaged mountain edge and coastal edge values, are reasonable approximations of each other, with the best linear fit D(Mountain Edge) = 0.07 + 0.93 × D(Coastal Edge), for which R² = 0.86 (Figure 11c). Both of these measures provide an accurate means by which to compute the fractal dimension of an image. Figure 11. 2-dimensional fractal D measurements. (a) Midpoint displacement fractals’ D(Mountain Edge) measurements plotted against their D(Input) values; (b) Midpoint displacement fractals’ D(Coastal Edge) measurements plotted against their D(Input) values; (c) Midpoint displacement fractals’ D(Coastal Edge) measurements plotted against their D(Mountain Edge) measurements. In each panel, unity is represented by the blue line, while the best linear fit for the data is represented by the black line. Data are colored to distinguish adjacent input values such that each datum's color is determined by D(Input) in panels (a–c). 3.3. Relation of β(Mountain Edge) and β(Surface) for 2-Dimensional Fractals Having found that our measures of D were consistent with each other, we aimed to test their relation to β.
To this end, we generated 100 sets of fractal Fourier noise images with values of β(Input) ranging from 1 to 3, in steps of 0.1, with side length 2^11 pixels. Measuring the spectral decay of a 2-dimensional Fourier analysis as described in Section 2.4.2 provides measured β(Surface) values that are consistent with the specified β(Input) values, with the best linear fit β(Surface) = 0.12 + 0.95 × β(Input), for which R² = 0.9999 (see Figure 12a). Having verified the generation process with an analysis in native space, we measured the β of these 2-dimensional fractals along a straight line path, β(Mountain Edge). We averaged the β(Mountain Edge) measurements for each row of each image, as described in Section 2.4.1, to allow us to follow the definition put forth by Voss [5]. We found that these values of β(Mountain Edge) differ from the specified input β values (see Figure 12b), with an offset as evidenced by the best linear fit β(Mountain Edge) = −0.39 + 0.82 × β(Input), for which R² = 0.998 (Figure 12b, black line). Figure 12. 2-dimensional fractal β measurements. (a) Fourier noise fractals’ β(Surface) measurements plotted against their β(Input) values; (b) Fourier noise fractals’ β(Mountain Edge) measurements plotted against their β(Input) values; (c) Fourier noise fractals’ β(Mountain Edge) measurements plotted against their β(Surface) measurements, showing that the measures converge at β = 0; (d) Fourier noise fractals’ β(Mountain Edge) measurements plotted against their β(Input) values. In each panel, unity is represented by the blue line, while the best linear fit for the data is represented by the black line. Data are colored to distinguish adjacent input values such that each datum's color is determined by β(Input) in panels (a–d). We visually inspected the mountain profiles to confirm that their frequency content was indeed different from that implied by β(Input).
We found that the mountain profile from a fractal terrain with an arbitrary value of β(Surface), βi, is rougher (i.e., has a larger contribution of fine structure) than a mountain profile from a 1-dimensional fractal with β(Mountain Edge) = βi (see Figure 13). We then took measurements for an ensemble of 100 random phase maps around β = 0, which show that our measures converge when there is equal power across frequencies (see Figure 12c). An exploratory analysis on a new set of images with 0 ≤ β(Input) ≤ 4.5 allowed us to empirically determine that fractal Fourier noise terrains with β(Input) values in the range 1.8 < β(Input) < 3.8 consistently give β(Mountain Edge) values in the range 1 < β(Mountain Edge) < 3 (see Figure 12d). We found that β(Input) and β(Mountain Edge) are relatable by the regression equation β(Mountain Edge) = −0.64 + 0.93 × β(Input), for which R² = 0.997 (Figure 12d, black line), across the range 1 < β(Mountain Edge) < 3 and 1.8 < β(Input) < 3.8. Figure 13. Mountain profiles from 1- and 2-dimensional fractal Fourier noise. (a) 1-dimensional fractal with β(Input) = 1.5; (b) 1-dimensional fractal with β(Input) = 2.5; (c) 2-dimensional fractal with β(Input) = 2.5; (d) 1-dimensional fractal mountain edges from the terrain in (c). An important validation of our analysis techniques is that β(Mountain Edge) and β(Surface) approximately converge at β = 0 (as expected), because for white noise, there is equal power across frequencies. This would be trivial if the two measures followed the unity line (Figure 12d, blue line), but certifies that our otherwise non-equivalent measures accurately describe white noise. We note that our β values exhibit slight measurement errors, such that classical Brownian traces (β(Mountain Edge) = 2, β(Surface) = 3) have empirically determined means of (2.14, 2.95). In the absence of measurement error, the empirically determined range 1.8 < β(Input) < 3.8 would be 2 < β(Surface) < 4. 3.4.
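The offset between the two β measures can be demonstrated directly in a few lines: generating a terrain with a specified surface β and measuring the spectral decay along its rows recovers a value near β − 1, consistent with the relationship summarized above. The function names below are illustrative, and the row fit is restricted to low frequencies, where the straight-line-path power law is cleanest.

```python
import numpy as np

def generate_noise(beta, n=256, seed=0):
    """2-D Fourier noise whose surface power spectrum falls as 1/f**beta."""
    rng = np.random.default_rng(seed)
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = np.inf                     # zero the DC amplitude
    spectrum = f ** (-beta / 2.0) * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (n, n)))
    return np.fft.ifft2(spectrum).real

def row_beta(image):
    """Spectral decay rate measured along the rows (a straight-line path)."""
    n = image.shape[1]
    freqs = np.arange(1, n // 8)         # low-frequency fitting range
    power = np.abs(np.fft.fft(image, axis=1))[:, 1:n // 8] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(power.mean(axis=0)), 1)
    return -slope

# A terrain generated with a surface beta of 3 gives row measurements
# near beta = 2, illustrating the one-unit offset between the measures.
terrain = generate_noise(3.0)
print(round(row_beta(terrain), 2))
```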
Relation of β to D for 2-Dimensional Fractals To relate the two measures of β to D, we generated another 100 sets of fractal Fourier noise with values of β(Input) ranging from 0 to 5 in steps of 0.1 with side length 2^11 pixels. 3.4.1. Relation of β(Mountain Edge) to D(Coastal Edge) of 2-Dimensional Fractals First, we investigated the extension of Voss’s [5] approximation to E = 2 by determining the relationship between D(Coastal Edge) and β(Mountain Edge). We performed the Fourier analysis described in Section 2.4.1 on each row of each image, and measured the rows’ fractal dimension with the technique described in Section 2.3.2. The relationship between these measures is described by a best linear fit—β(Mountain Edge) = 5.18 − 2.00 × D(Coastal Edge), for which R² = 0.99 (Figure 14a, black line)—which approximates Voss’s [5] equation (Figure 14a, red line, which is Equation (5)). This confirms Voss’s [5] assertion that measuring along a straight-line path will provide measures of D and β that are related by Equation (5). 3.4.2. Relation of β(Surface) to D(Coastal Edge) for 2-Dimensional Fractals We next measured β using the method described in Section 2.4.2, β(Surface), which captures the radial scaling properties of the images. When plotted against D(Coastal Edge), we observe that the relationship between these measures is described by a best linear fit, β(Surface) = 6.24 − 2.14 × D(Coastal Edge), for which R² = 0.99 (Figure 14b, black line). The observed relationship agrees with the data from a smaller set of images previously reported by Spehar and Taylor [23] (Figure 14b, green line). Significantly, this observed relationship between β(Surface) and D(Coastal Edge) agrees with Equation (11), which we present below, and will allow conversion across measures of D and β in multidimensional spaces. Figure 14. 2-dimensional fractal measurements of Fourier noise (a,b).
(a) Fourier noise fractals’ β(Mountain Edge) measurements plotted against their D(Coastal Edge) measurements; (b) Fourier noise fractals’ β(Surface) measurements plotted against their D(Coastal Edge) measurements. In each panel, the best linear fit for the data within the region that was shown to exhibit fractal scaling (identified with gray lines) is shown with a black line. In panel (a), Voss’s [5] equation (Equation (5)) is shown with a red line. In panel (b), Spehar & Taylor’s [23] data are shown with a green line and our extension of Voss’s [5] equation (Equation (11)) is shown with a red dashed line. Data are colored to distinguish adjacent input values such that each datum's color is determined by β(Input) in panels (a,b). 4. Discussion 4.1. Mathematical Relationships between Ds and βs Our results show that Voss [5] was correct regarding Equation (5)’s extension into n-dimensional measures of D with the limitations described therein. However, the way that β is commonly measured in images, β(Surface), is not that which Voss [5] described. Voss’s [5] equation (Equation (5)) applies for the measure we call β(Mountain Edge). However, vision researchers typically use β(Surface). Whereas the difference between these two spectral decay rates is nonexistent for white noise, where β = 0, these measures are substantially different in the range over which these noises are fractal. Before commenting further on the different measures of β, we will first summarize the relationships for the fractal images discussed in this paper. For the mountain profile fractal (E = 1), the Voss relationship of Equation (5) becomes: D(Mountain Edge) = 1 + (3 − β(Mountain Edge))/2 (6) We have also shown in Section 3.3 that the Fourier spectral decay rates measured in 1- and 2-dimensional space are approximately related by: β(Mountain Edge) = β(Surface) − 1 (7) over the range 1 < β(Mountain Edge) < 3 and 2 < β(Surface) < 4.
Combining Equations (6) and (7) gives: D(Mountain Edge) = 1 + (4 − β(Surface))/2 (8) The relationship between β and D(Surface) can then be obtained using: D(Surface) = D(Mountain Edge) + 1 = D(Coastal Edge) + 1 (9) We have not measured β(Coastal Edge) in our investigations. However, we expect that, if a coastal edge were unraveled by the process described by Zahn & Roskies [48], its β value would equal β(Mountain Edge), because D(Coastal Edge) is equivalent to D(Mountain Edge) and E = 1 applies to both the mountain and coastal edges. 4.2. Distinguishing βs Our results provide the ranges of β over which the images are fractal. As Voss [5] noted, for mountain edges (1 < D(Mountain Edge) < 2), we have 1 < β(Mountain Edge) < 3. However, we have shown that β measured in a single-variable space (i.e., along a straight line path as β(Mountain Edge)) diverges from β measured in a two-variable space (i.e., across a plane as β(Surface)) to an extent that is characterized by Equation (7) for the range over which 2-dimensional noise is fractal. For surfaces (2 < D(Surface) < 3), we have 2 < β(Surface) < 4 and 1 < β(Mountain Edge) < 3. The fact that the β values measured by 1- and 2-dimensional Fourier transforms differ for fractal noises holds crucial consequences. The fractal structure of a terrain is quantified by β(Mountain Edge). Visual inspection of Figure 13 makes it immediately apparent that its value is significantly smaller than β(Surface) for fractals of topological dimension E = 2. Given that β(Input) matches β(Surface) rather than β(Mountain Edge), it is likely that many vision researchers have been misjudging the fractal content of their fractal terrains, or adapting them by an intuitive sense of the image’s roughness. Equation (7) provides a formal justification for adjusting the β of 2-dimensional fractals. The basis for this conversion lies in the difference in generating 1- vs. 2-dimensional noise.
A pair of vectors can specify the phases and amplitudes of a 1-dimensional fractal noise pattern because there is only one phase at each frequency (for illustration, see the visualization of the amplitude vectors corresponding to three different input β values (βi) shown in Figure 6a–c and the phase vector shown in Figure 6d). In contrast, a 2-dimensional fractal pattern is generated from a matrix of amplitudes and a matrix of phases (for illustration, see the visualization of the amplitude matrices corresponding to three different input β values (βi) shown in Figure 7a–c and the phase matrix shown in Figure 7d). For 2- and higher-dimensional fractal noises, there are increasingly many inputs at increasingly high spatial frequencies (for illustration, see Figure 15, where the lowest frequency components have been centered). The number of inputs increases at a rate that is related to the distance from the lowest frequency (i.e., the radial distance in a low-spatial frequency-centered representation of Fourier space). Changing from a 1- to 2-variable space, weighting the input function SV(f) by f to increase the embedding dimension by 1 (from 1 to 2) requires a subtraction of 1 from β, as denoted by the following equation: SV(f) × f = f^(−βi) × f = 1/f^(βi − 1) (10) Figure 15. Real and imaginary frequency components of 1- and 2-dimensional fractal Fourier noise plotted in 2- and 3-dimensional spaces with color changing with frequency, such that higher frequency components are shown in cooler colors. (a,b) Amplitude-frequency plots of a 1-dimensional noise with low frequency components centered such that the amplitude of the higher frequency components fall at the edges of the plot; (c,d) Amplitude-frequency plots of 2-dimensional noise with low frequency components centered such that larger concentric circles indicate higher frequency components. 4.3.
A Generalized Equation to Relate Ds and βs We postulate that the relationship between D and β continues to change for 3- and higher-dimensional Fourier decompositions, such that the relationship between D and β can be described by the equation, D = E + (F + 2 − β)/2 (11) where F is the dimensional space of the Fourier transform (the number of variables with which the Fourier transform is performed), where F ≥ 1, and F < β < F + 2 (as examples, F = 1 for β(Mountain Edge) and F = 2 for β(Surface)). This new equation (Equation (11)) allows for conversion from D to both of the Fourier measurement techniques that can describe static images, and provides an extension of Mandelbrot’s [1,4], Voss’s [5], and Knill et al.’s [14] relationships that can describe spectral decay in dynamic fractal Brownian stimuli generated with Fourier, midpoint displacement, and other equivalent methods. Equation (11) extends Voss’s [5] equation (Equation (5)) by generalizing the term for β from β(Mountain Edge). 4.4. Importance of the Relationship between D and β for Current and Future Research In addition to allowing for easy translation across the parameters D and β in aesthetics research, Equation (11) provides scaffolding for extension of basic vision research into questions related to visual sensitivity and the perception of fractal motion, and has far-reaching applicability to applied topics, including stress reduction and navigation. To clarify the need for this new equation to allow for such forward progress, consider a hypothetical case of therapeutic intervention using fractal movies.
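Equation (11) reduces conversion between any D and β to a single line of code. The sketch below (a hypothetical function name) also enforces the boundary condition F < β < F + 2 stated above.

```python
def fractal_dimension_from_beta(beta, E, F):
    """Equation (11): D = E + (F + 2 - beta) / 2, valid for F < beta < F + 2.

    E is the Euclidean topological dimension of the pattern and F is the
    number of variables in the Fourier transform used to measure beta.
    """
    if not F < beta < F + 2:
        raise ValueError("beta must lie in the fractal range F < beta < F + 2")
    return E + (F + 2 - beta) / 2.0

# beta(Mountain Edge) = 2.0 measured along a line (F = 1) on a trace (E = 1):
print(fractal_dimension_from_beta(2.0, E=1, F=1))   # -> 1.5
# beta(Surface) = 3.0 from a 2-D transform (F = 2) of a terrain (E = 2):
print(fractal_dimension_from_beta(3.0, E=2, F=2))   # -> 2.5
```

Both examples correspond to classical Brownian noise, the midpoint of each fractal range.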
From Equation (5) and previous research that suggests that low-to-moderate fractal dimensions (1.3 ≤ D ≤ 1.5) are optimal for stress reduction [49], a therapist might generate what are intended to be soothing movies of fractal noise with β = 2.2, thinking that β does not differ according to the number of variables with which the Fourier transform is performed. Meanwhile, the results of our analyses imply that such a series would be effectively space filling if β(Volume) = 2.2. This is because Equation (11) implies the optimal range of values of β(Volume) for such an application would be 4 < β(Volume) < 4.4, because E = 3 and F = 3, where fractal noises are in the range 3 < β(Volume) < 5. More generally, time represents a third dimension—yet to be explored—in the perception of fractal processes. An example would be a fractal pattern that undergoes fractal change over time. Responses to dynamic stimuli have a long history of consideration in vision research that continues today [50,51], though few have focused on perception of fractal motion [52,53]. Equation (11) provides the scaffolding to extend perceptual research into the study of dynamic fractals. While Equation (11) supports the development of new lines of work into dynamic fractals, it also holds value in drawing conclusions across recent research using 2-dimensional fractal patterns in studies that have implications for aesthetics. Whereas Rainville and Kingdom [54] provide βs that are apparently in terms of β(Mountain Edge), they cite Knill et al. [14], who provided the relationship between D and β for surfaces. This highlights the difficulty associated with discerning the optimal range of β in aesthetics and vision research. More problematic is that, because of the relative convenience of the respective algorithms’ implementation, others report a combination of D(Mountain Edge) and β(Surface) values [23], for which there is no clear conversion provided in the published literature.
Finally, Equation (11) serves as a useful tool for converting between 2- and 3-dimensional representations of space, the problem we solve whenever we use a map to navigate. The recent work of Juliani et al. [28] asks individuals to navigate fractal environments. Under conditions such as these, the map typically has complexity in the range 1 < D(Coastal Edge) < 2, whereas the navigated environment has complexity in the range 2 < D(Surface) < 3, and the visually perceived scene has a spectral decay that likely falls off at a rate in the range 2 < β(Surface) < 4. While it is mathematically no less appropriate to describe all of these in terms of β(Mountain Edge), it is easier to interpret results described in units that reflect the experienced dimensional space precisely and explicitly. There is convenience to be gained by using this more general equation (Equation (11)) and its variables’ boundary conditions rather than more specialized equations, such as those which have been put forth previously [5,14,21]. Equation (11) stems from a recognition that β varies with the number of variables with which the Fourier transform is applied. As such, it is important that we define which β is being used (β(Line), β(Surface), β(Volume), etc.) for easier interpretation of results and to facilitate the communication of future endeavors in interdisciplinary fractals research. 5. Bridge to Chapter III In this chapter, the algorithms used to measure D and β were validated through an empirical test of their relationship to the input parameters, confirming that the mathematical relationship between these two parameters, which extends across infinite scales of measurement, holds for physical instantiations, which exhibit structure over a limited range of scales. A common assumption dating at least as far back as Voss [5] is that fractal Gaussian noise serves as a good model for physical phenomena such as clouds and coastlines.
In the next chapter, Chapter III, I test this assumption by measuring the relationship between D and β for several sets of cloud and landscape photographs.

CHAPTER III RELATIONSHIP BETWEEN FRACTAL DIMENSION OF EDGES AND SPECTRAL SCALING DECAY RATE IN PHOTOGRAPHS OF CLOUDS AND LANDSCAPES Several others, including B. H. Lee, M. E. Sereno, and R. P. Taylor, contributed to the conceptualization and execution of the work described in this chapter. The photographs described in this chapter were taken by me or by B. H. Lee and M. E. Sereno under my direction. Analysis of the photographs was performed by me or B. H. Lee under my supervision. The writing is entirely mine. This work is in preparation for submission to the International Journal of Computer Vision, and therefore the following chapter is formatted according to the journal’s publication standard. 1. Introduction Scenes contain structure at multiple scales of measurement – from the largest objects to the smallest features. Objects’ textures also vary in terms of their smoothness. Differences in the textures and features of objects at different scales underlie the ability to decompose images with the Fourier method and distinguish different types of scenes by their amplitude spectrum alone. By this method, amplitude and phase information from an image are separated, and the rate at which amplitude decays as a function of frequency is measured and used for tasks such as classification. For example, Oliva and Torralba (2001) use this method to distinguish a variety of image types such as built and natural environments. The information can be further reduced to the parameter β in the equation SV(f) = 1/(cf^β) (1) where SV(f) is the spectral density (power), f is the spatial frequency, and β and c are constants.
Such computer-based image decomposition has relevance to the insect and mammalian visual systems (Atick & Redlich, 1990; Bialek et al., 1991; Srinivasan et al., 1982; van Hateren, 1992a, 1992b), which appear to utilize sparse codes and whose receptive fields conform to wavelet descriptions (Field, 1987; 1999). Barlow (1959) suggested that reducing the redundancy of information at early stages of visual processing could optimize the transfer of information to areas that produce higher-order representations. Conveniently, this parameterization of images’ texture-based complexity has been suggested to approximate the complexity of edges, which is more easily described in terms of the fractal dimension, D, of the power law N ~ L^−D (2) where N is the extent to which the fractal fills space as measured at scale L (Mandelbrot, 1977; 1983). For fractional Brownian noise, D and β are related. Bies et al. (2016) stated the relationship between these terms for n-dimensional Fourier noise images as D = E + (F + 2 − β)/2 (3) where E and F represent the Euclidean topological dimension and the number of variables used in the Fourier transform, respectively. Perhaps because the statistics of natural images can be conveniently approximated with fractional Brownian noises, it has been suggested that Equation 3 extends to natural scenes (Voss, 1986; Field, 1987; Knill, Field, & Kersten, 1990; Graham & Field, 2007). Yet we recently presented preliminary data (recapitulated here in Section 3.1.) that suggest the relationship between D and β is quite distinct from Equation 3 when considering natural images (Bies et al., 2015). Hansen and Gegenfurtner (2009) showed that color and luminance boundaries are independent of one another in natural scenes. This calls into question whether the non-random contours in natural images affect the relationship between D and β.
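As an illustration, Equation 3 reduces to a one-line function. This is a minimal sketch under the definitions above; the function name and the worked example are ours, not part of the published method:

```python
def d_from_beta(beta, E, F):
    """Equation 3: fractal dimension D from spectral decay rate beta.

    E is the Euclidean topological dimension and F is the number of
    variables used in the Fourier transform (e.g., E = 2 and F = 2 for
    a two-dimensional Fourier noise image treated as a surface)."""
    return E + (F + 2.0 - beta) / 2.0

# For a surface from 2-D Fourier noise (E = 2, F = 2), beta = 3 gives
# D = 2 + (2 + 2 - 3) / 2 = 2.5, inside the expected 2 < D(Surface) < 3 range.
```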
If these parameters are distinct, or even just less tightly coupled in natural scenes, they may serve as useful classification metrics or relate to image labels that represent abstract qualities of scenes. 2. Material and Methods 2.1. Image Sets 2.1.1. Photographs of clouds from an internet sample As we first described in Bies et al. (2015), a convenience sample of cloud photographs was generated by searching Google’s image database for photographs of clouds of varying types at low (cumulus, stratocumulus, stratus, and stratus/cumulus fractus), moderate (altocumulus), and high (cirrus, cirrostratus, cirrocumulus) altitudes. Photographs were drawn from each of the types of clouds (Ahrens, 2009) for a total of 188 photographs, of which 89 were used in the analysis reported in section 3.1. Ninety-nine images were rejected for meeting any of the following criteria: 1) cloud edges out of focus, 2) image contains multiple cloud types, 3) image contains non-cloud objects such as wildlife, 4) image contains artifacts caused by sunlight (e.g., lens flare), and 5) image too small for accurate fractal analysis. Images retained for analysis varied in size from 1,373 by 756 pixels to 4,256 by 1,970 pixels. 2.1.2. Photographs of clouds from a single camera with a telephoto zoom lens Photographs of clouds were taken with a Nikon D7100 camera and an AF-S VR Zoom-Nikkor 70-300mm f/4.5-5.6G IF-ED lens in locations around the Willamette Valley of Oregon in the United States. Edges of the cloud areas served as the focal distance. Zoom varied across images to isolate a particular species of cloud, but image size remained constant at 6,000 by 4,000 pixels. A total of 815 images were collected, of which 474 were selected for the present analysis, according to the criteria listed in section 2.1.1, with the additional constraint that duplicates (photographs of the same cloud with adjusted focal distance or zoom) were excluded.
Here, instead of sampling equivalent numbers from a variety of cloud types, the image set is representative of the types and number of clouds that had formed on days when photographs were taken. See Figure 1 for examples. A supplemental set of images of clouds and landscapes was taken with a Nikon AF-S DX NIKKOR 35mm f/1.8G lens affixed to the Nikon D7100 camera. This set of images was taken with the purpose of classifying cloud types by identifying information about relative altitude and distribution. As such, this set of images was not included in the following analyses, although it was counted toward the above total. Figure 1. Example photographs. (A) Cloud with β = 3.02, D = 1.22; (B) Cloud with β = 2.80, D = 1.39; (C) Landscape with β = 2.99, D = 1.49; (D) Landscape with β = 2.42, D = 1.78. 2.1.3. Photographs of landscapes from a single camera with a wide-angle zoom lens Photographs of landscapes were taken with a Nikon D7100 camera and an AF-S DX Nikkor 10-24mm f/3.5-4.5G ED zoom lens in locations around the Willamette Valley of Oregon in the United States. The horizon line served as the focal distance. Focal length was set to 24mm. A total of 425 images were collected, of which 400 were selected for the present analysis. Images were rejected if they met any of the following criteria: 1) near objects out of focus, 2) animals visible in the scene, 3) buildings visible in the scene, 4) image contains artifacts caused by sunlight. The remaining images all contain pasture and/or forest environments. See Figure 1 for examples. 2.2. Image Analysis Techniques 2.2.1. Power spectrum decay rate The parameter β was computed for each image according to the method described in Bies et al. (2016). Briefly, after converting each image to an intensity map, it was subjected to a 2D FFT in Matlab, after which the amplitude values were averaged radially at each frequency. The slope of the linear trend describing the loss of power at higher frequencies was retained as β.
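The β pipeline just described can be sketched in a few lines. This is an illustrative approximation only (numpy rather than the chapter’s Matlab code, and it radially averages spectral power, consistent with Equation 1, rather than amplitude); the function name and sampling choices are our assumptions:

```python
import numpy as np

def measure_beta(image):
    """Sketch of the beta measurement: 2-D FFT of an intensity map,
    radial average of the spectrum, then the log-log slope of
    spectral density versus spatial frequency (Equation 1)."""
    n = min(image.shape)
    img = np.asarray(image, float)[:n, :n]
    img -= img.mean()                                   # remove the DC component
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    y, x = np.indices(power.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)    # radial frequency bin of each cell
    radial = np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())
    f = np.arange(1, n // 2)                            # skip DC, stay below Nyquist
    slope = np.polyfit(np.log(f), np.log(radial[1:n // 2]), 1)[0]
    return -slope                                       # S(f) ~ 1/f^beta, so beta = -slope
```

Applied to a noise image synthesized with a known decay rate (as in Chapter II), the function should recover approximately that β.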
2.2.2. Fractal dimension 2.2.2.1. Edge extraction In each image, a binary threshold was applied after initial processing that separated the foreground and background (e.g., cloud areas from sky regions). An automated extraction process was adapted from Bies et al. (2015). For the purposes of explaining the process, an example segmentation of cloud areas from a blue sky is described here. First, the image was loaded as an RGB image into Matlab, and the red channel intensity map was selected. (For the purpose of this example, blue and green channel information was discarded, but more complex rules (e.g., [{B+G-R}]) were applied to account for other colors as the need arose.) In this example’s R-channel intensity map, sky areas are dark (a point in the image that looks blue in an RGB image will have a low intensity value in the red channel), whereas cloud regions remain bright (a pixel that looks white is represented by high intensity values in all three color channels). The strong perceptual white/blue contrast boundary of cloud/sky is captured in this intensity map, allowing a binary threshold to be applied that effectively segments cloud and sky, rendering the cloud regions white and the sky regions black. The resulting binary image was used to isolate the perimeter of the white pixels (perimeter identified with the Matlab function bwperim). Box counting was performed on the resulting edge image. 2.2.2.2. Box counting analysis Box counting to compute D was performed as described in Bies et al. (2016). Briefly, a grid with box length l was applied repeatedly for lengths from the length of the image down to the length of a pixel, and, roughly speaking, D was defined as the rate at which the number of filled boxes n increased as l decreased. Specifically, we calculated pairwise estimates of D with the equation D = log(ni+1/ni) / log(li/li+1), where ni and ni+1 are the number of filled boxes at box lengths li and li+1, and li+1 is the next smaller box size.
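A minimal numpy sketch of this box-counting procedure follows. The function name, power-of-two box sizes, and median summary of the pairwise slopes are our own choices; the chapter’s Matlab implementation may differ in detail:

```python
import numpy as np

def box_count_dimension(edges):
    """Sketch of box counting on a binary edge image: count the boxes of
    side l that contain any edge pixel at a range of box sizes, then take
    the median of the pairwise log-log slopes as D."""
    edges = np.asarray(edges, bool)
    sizes = [2 ** k for k in range(int(np.log2(min(edges.shape))))]
    counts = []
    for l in sizes:
        h = (edges.shape[0] // l) * l                  # crop to a multiple of l
        w = (edges.shape[1] // l) * l
        blocks = edges[:h, :w].reshape(h // l, l, w // l, l)
        counts.append(int(blocks.any(axis=(1, 3)).sum()))
    # pairwise slope: D = log(n_smaller_box / n_larger_box) / log(l_larger / l_smaller)
    slopes = [np.log(counts[i] / counts[i + 1]) / np.log(sizes[i + 1] / sizes[i])
              for i in range(len(sizes) - 1)]
    return float(np.median(slopes))
```

As a sanity check, a straight one-pixel line yields D near 1 and a completely filled image yields D near 2, the limiting cases for edge images.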
For each image, the median of the set of computed D values was retained as D. 3. Results and Discussion 3.1. Photographs of clouds from a convenience sample A set of cloud images was found with Google’s image search engine, using terms such as “cumulus cloud.” 3.1.1. Descriptive statistics The cloud images’ D ranged from 1.11 to 1.76, with M(SD) = 1.34 (0.01). A one-sample t-test shows this does not differ significantly from the D of clouds reported by Lovejoy (1982), t(88) = 0.73, p = 0.47. Lovejoy’s (1982) analyses included data from radar and satellite imagery with a range of scales several orders of magnitude greater than those included in the current study. This shows that although humans see clouds over a greatly restricted range of scales (not up to the continental sizes at which rain areas can exist), the scaling behavior of their edges is apparent from our perspective. Although intuited by Mandelbrot (1977) when first describing fractals, no evidence for this had been provided until now. In contrast, many studies have shown that amplitude scaling exists in photographs of natural scenes (Field, 1987; Graham & Field, 2007; Parraga et al., 1998; Ruderman & Bialek, 1994; Tolhurst et al., 1992; van der Schaaf & van Hateren, 1996). In the convenience sample, β ranged from 2.08 to 3.39, with M(SD) = 2.79 (0.27). Only one other study has reported a sample that was nearly as large. Tolhurst et al. (1992) note that for 135 images, β ranged from 1.6 to 3, with M = 2.4. Our sample’s mean is significantly higher, as revealed by a one-sample t-test, t(98) = 13.55, p < .001, although there is substantial overlap in the range of βs across these image sets. Thus, this sample of cloud photographs taken from the internet exhibits a reasonable level of consistency with similar measurements, but does appear to be different from the mean of this typical set of natural scene photographs. 3.1.2.
Correlation analysis As stated above, the relationship between the parameters D and β has been suggested to hold not just for random Brownian motion but also for natural scenes (Field, 1987; Graham & Field, 2007; Knill et al., 1990; Voss, 1986). The prediction would be that a plot of the measured D values against a set of D values computed from measured β values with the equation D(Converted) = (6.24 – β)/2.14, as empirically determined by Bies et al. (2016) (roughly speaking, D = (6 – β)/2), should fall on the unity line. In other words, there should be a strong correlation between D and β, since these cloud images exhibit largely typical ranges of each parameter. Instead, there is a nonsignificant correlation between D and β in this sample, r(89) = -0.15, p = 0.18, with β accounting for roughly 2% of the variance in D, as shown in Figure 2A. Figure 2. Relations of D and β within image sets. (a) Convenience-sample clouds; (b) Single-camera/lens clouds; (c) Single-camera/lens landscapes. In each panel, data are shown as black dots, the best linear fit for the data is shown with a black line, and the relation of D and β for the fractal images of Chapter II is shown as red dashes. 3.1.3. Discussion A potential explanation for this apparent disconnect between theory and empirical data is that variability in the optical system or sensor introduced a large amount of error. With respect to the variability introduced, it is not a stretch to analogize the use of a variety of camera lenses, apertures, and light sensors to take a set of photographs to the recordings that might be taken from the retinal tissue or downstream neurons in animals with various eye anatomies (configurations of pupil, lens, and retinal cells, and the densities and circuits leading away from photoreceptors). 3.2.
Photographs of clouds from a single camera with a telephoto zoom lens To address the possibility that uncontrolled variability in the optical system (e.g., sensor size, focal length) or properties of the images (e.g., image size, histogram) drove the reduction from near-perfect to near-zero correlation in our convenience sample, we collected a larger set of photographs of clouds with a single camera and lens, as described in Section 2.1.2. Using the methods described in Section 2.2., we measured the same properties in this second set of photographs of clouds. 3.2.1. Descriptive statistics This set of images spans a large portion of the expected ranges of both D and β. These clouds’ D ranges from 1.01 to 1.67, M(SD) = 1.32 (0.10). A one-sample t-test revealed that clouds’ D as estimated by Lovejoy (1982) is significantly different from the mean of this sample, t(473) = .04. The difference from our mean is D < .01, which may be smaller than the margin of error with which we can measure D, and so we consider these estimates to be consistent with one another. The βs range from 1.63 to 3.27, M(SD) = 2.56 (0.32). These images have significantly higher βs than the estimate of Tolhurst et al. (1992), as revealed by a one-sample t-test, t(473) = 10.97, p < .001, but significantly lower βs than the convenience sample from above, as determined by an independent-samples t-test in which equal variances were not assumed (Levene’s test of equality of variances was significant, p = .01), t(138.33) = -7.06, p < .001. 3.2.2. Correlation analysis To determine whether the relationship between D and β observed for random fractals is recovered when the optical system is controlled, a correlation analysis was performed on the photographs taken with a single camera. The D and β of these photographs of clouds exhibit a significant, negative correlation, r(474) = -.41, p < .001, as shown in Figure 2B. 3.2.3.
Discussion The results of the correlation analyses using a convenience sample and a controlled set of photographs of clouds are quite different. We suggest that this is due to the variability in the optical systems that generated the convenience sample. A similar correlation using a different optical system would support this hypothesis. Next, we test whether this extent of correlation is observed for an unrelated image set when using a different optical system with the same sensor. 3.3. Photographs of landscapes from a single camera with a wide-angle zoom lens To replicate and extend our findings, and to ensure that they are not specific to photographs of clouds, we collected a set of photographs of landscapes as described in section 2.1.3. For this set of photographs, all constraints from the previous image set were retained. An additional constraint applied to this set of images was that focal length did not vary. 3.3.1. Descriptive statistics These images exhibited a wider range of D values, from 1.24 to 1.86, with M(SD) = 1.72 (0.09). The β of these images ranged from 1.98 to 3.15, with M(SD) = 2.40 (0.17). In comparison with the typical β reported by Tolhurst et al. (1992), a one-sample t-test revealed no significant difference, t(399) = .55, p = .58. This indicates that these landscape images are typical of those presented in other studies. An independent-samples t-test comparing β in the landscape and cloud photographs confirms the conclusion that β differs between photographs of landscapes and clouds, which could also be drawn from the one-sample tests using Tolhurst et al.’s (1992) mean, as these landscape photographs have a significantly lower β on average than the cloud photographs described in Sections 2.1.2 and 3.2, t(740.26) = 9.24, p < .001 (equal variances not assumed due to a significant Levene’s test, p < .001). 3.3.2.
Correlation analysis To determine whether the weaker correlation between D and β observed for photographs of clouds was a peculiar result of the subject matter or of the optical properties of a telephoto zoom lens, a wide-angle zoom lens was used to take pictures of landscapes for the present analysis. Landscape photographs’ D and β are significantly negatively correlated, r(200) = -.43, p < .001, as shown in Figure 2C. 3.3.3. Discussion 3.4. General Discussion The observed correlations between D and β are weaker for photographs of clouds and landscapes taken with a single camera than what Bies et al. (2016) reported for random fractal noises. Still, the correlations derived from single-camera image sets are qualitatively larger than what is observed in a convenience sample of images taken with a variety of cameras. The greater variability in image quality, and possibly other factors such as lighting and sharpness, may contribute to this difference. Such separation of D and β allows for the possibility that they could work in concert to predict the contents of images or separate them into categories. For example, a neural network could use both parameters to determine which types of clouds are present in an image containing multiple cloud types. In addition, the likelihood of rain in a particular location such as a field could be assessed without access to radar or satellite imagery, instead relying on a continuous or intermittent optical feed, as particular cloud types are more likely to produce rain than others (Ahrens, 2009). 5. Conclusions Our results clearly demonstrate a weaker relationship between D and β in natural images than what Bies et al. (2016) observed for random fractals. These counterexamples are important given the common assertion that Voss’ (1986) equation holds not just for statistically self-affine fractional Brownian functions but also describes natural scenes’ scaling properties (Voss, 1986; Field, 1987; Knill et al., 1990; Graham & Field, 2007).
Clouds can certainly be modeled with self-affine Brownian functions, as described by Pruppacher and Klett (1996). Yet our results suggest that the lighting, contrast, and texture of natural clouds in the sky contribute to the weaker D-β relationship we observe in photographs of clouds. Alternatively, the contours in these images of clouds and landscapes may differ in an important way from the contours of random noise images. The fractal D of the contour produced by each luminance edge in a fractional Brownian noise image is mathematically tied to its β. In contrast, the D of natural images’ luminance edges could vary regionally or across luminance levels as leaves, branches, and other natural phenomena reflect different amounts of light. Color and luminance edges are statistically independent in natural scenes, co-occurring as often as they do not (Hansen & Gegenfurtner, 2009). In the absence of high-dynamic-range corrections, these subtleties may be lost. The visual system may perform corrections enhanced by sparse coding to recover edge information and the distribution of edges in space, which may further affect the D-β relationship. The distinction between D and β has important implications. The partial orthogonalization of these two parameters allows for their potential to be used in combination to predict the contents of images in computer vision. While multiple categories can be distinguished simply on the basis of their Fourier amplitude spectra (Torralba & Oliva, 2003), it is possible that finer distinctions can be made by introducing more descriptors such as these in conjunction with the use of recursive learning algorithms. Future human and animal research should begin to assess how both the visual system and an organism’s behavior are impacted by these distinct scene statistics.
Recent research has continued to investigate perceptual responses such as human sensitivity to the β of fractional Brownian images (Spehar et al., 2015), but we need to begin to address responses to natural scenes and textures themselves, not limiting our study to model stimuli, as some have begun to do (e.g., Baumgartner & Gegenfurtner, 2016; Giesel & Zaidi, 2013). 6. Bridge to Chapter IV The extent to which the visual system capitalizes on this distinction between D and β has yet to be observed. The human visual system may be sensitive to one, the other, or both of these parameters, and may or may not treat them similarly. In the following chapter, noise images and photographs that were described in this and the previous chapter are used to probe the human visual system’s sensitivity to D and β, and the extent to which these explain ratings of complexity and aesthetics. CHAPTER IV PREDICTING PERCEIVED AESTHETIC VALUE FROM PHYSICAL AND PERCEIVED COMPLEXITY This work is in preparation for submission to Psychology of Aesthetics, Creativity and the Arts, and therefore the following chapter is formatted according to the journal’s publication standard, American Psychological Association format. Open an internet browser, perform an image search for “nature” (not “grapefruit cat”), and you can scroll through a multitude of beautiful, calming images. Spending time “in nature” or just “viewing nature” has positive, restorative effects (Berman, Jonides, & Kaplan, 2008; Kaplan, 1995; Keniger, Gaston, Irvine, & Fuller, 2013; Ulrich, 1984). Is this just restorative, or is it also an aesthetic experience?
The act of flicking through a large number of images to find a few aesthetically pleasing ones falls along the continuum of aesthetics proposed by Fechner (as described by Whitfield and de Destefani, 2011), spanning decisions about design (i.e., everyday aesthetic decisions such as what color to paint the walls; e.g., Blijlevens, Thurgood, Hekkert, Leder, and Whitfield, 2014), to unremarkable art (e.g., the images which serve as stimuli in this experiment, a behavior which is depicted in Figure 1A), to renowned works of art (i.e., intense aesthetic experiences such as those described in Marković, 2012). But we return to the question – what makes some scenes more aesthetic than others? In the present paper we address this from the perspective of Whitfield and de Destefani (2011), that relatively mundane aesthetic responses rely on a decision process. Specifically, we test a cognitive model that assumes the relatively mundane aesthetic response to nature (or at least to natural scene statistics) is an evaluative judgment that relies on intermediate steps, and not just an elegant sensory encoding process (see Figures 1B and C). Figure 1a-c. Schematic and models. 1a. Schematic of the aesthetic response behavior. 1b. Simplified version of Redies’ (2015) model of the aesthetic response. 1c. Structural equation model showing relationships among latent and observed variables in the present study: two latent variables (perceived complexity and aesthetic value) reflect the rated properties (c1-c5 and a1-a5 in Figure 2), which are affected by physical complexity, quantified as power-law parameters (D and β).
Berlyne (1973), Leder, Belke, Oeberst, and Augustin (2004), and Redies (2015) have proposed models of aesthetic responses in which intermediate physiological, emotional, and cognitive processes affect the aesthetic response. Leder et al.’s (2004) and Redies’ (2015) models are quite complex, intended to explain the interplay between cognition, perception, and emotional experience, with Redies’ (2015) model constructed so as to predict typical responses to the full umbrella of art. For the purposes of the present study, a simplified model that mimics Redies’ (2015) sensory-coding and perceptual-processing pathway is presented in Figure 1B (please refer to Redies, 2015 for the full model). Figure 1A shows a schematic of a viewer assessing the aesthetic value of a work of art. As shown in Figure 1B, the process is thought to start with sensory encoding, which leads to gist perception on very short time scales (i.e., approximately 100 milliseconds; see Cupchik & Berlyne, 1979; Locher, Krupinski, Mello-Thoms, & Nodine, 2007; Locher & Nagy, 1996). Slightly longer presentation time scales (as short as 1 second) allow for deeper perceptual and cognitive processing, which could affect the decision about an image’s aesthetic value and allows these judgments to stabilize (Locher et al., 2007; Smith, Bousquet, Chang, & Smith, 2006). Aesthetic Value Returning to the example with which we started the paper, the aesthetic value ascribed to any particular image being scrolled through might be high enough to consider it at greater length, but only if it drives an association in memory, or is particularly stunning or attractive. Lexically related descriptors such as these appear to reflect the qualities of an aesthetic image. Much of the aesthetic evaluation literature has focused on design related to objects or simple patterns (e.g., Berlyne, 1971; Blijlevens et al., 2014).
While design studies have produced inconsistent results because the focus of assessment has varied between studies (e.g., on novelty or typicality), the ratings which are meant to be reflective of aesthetic value have also varied from study to study (as discussed in Blijlevens et al., 2014). A problem for rating scenes is that commonly used scales, such as Hassenzahl and Monk’s aesthetic pleasure scale, which was developed for research in the area of human-computer interaction (as discussed in Blijlevens et al., 2014), assess a component related to pragmatic value along with beauty, which is nonsensical for a work of art. Blijlevens et al. (2014) developed an alternate scale aimed at research in product design, which we relied upon to create a set of descriptors that reflect the aesthetics of scenes (see Figure 2). Figure 2. Factor structure of the rated properties related to perceived aesthetic value and complexity. Perceived complexity: complexity (c1), extent of space filled (c2), intricacy (c3), number of elements (c4), simplicity (c5). Perceived aesthetic value: artisticness (a1), attractiveness (a2), beauty (a3), pleasantness (a4), stunningness (a5). For the present study, an ad hoc approach was taken to determine the list of descriptors that would be used to elicit ratings putatively related to aesthetic value. Still, the general basis of the scale has its roots in Blijlevens et al. (2014). Other descriptors that participants used to rate the images served a dual purpose. Such items were intended to provide discriminant validity for the aesthetic ratings, while also reflecting a putative intermediate perceptual process, perceived complexity. Güçlütürk, Jacobs, and van Lier (2016) showed that ratings of the complexity and aesthetic value of random assortments of geometric shapes are correlated, giving credence to the model as evidence that these constructs are related, at least for one set of physically complex patterns.
Perceived Complexity Perceptual ratings of complexity are quite interesting for their sensitivity to the presence of symmetry and to changes in a variety of physical descriptors of patterns. Attneave (1957) was among the first to note that, for random geometric polygons, introducing an axis of symmetry to a particular polygon by mirroring one half of the shape greatly reduced the perceived complexity of the mirrored shape. Cutting and Garvin (1987) showed that complexity ratings are also affected by D, which describes the rate at which fine structure is introduced in self-similar patterns (Fairbanks & Taylor, 2011; Mandelbrot, 1983), when individuals rate the complexity of fractal polygons. Güçlütürk et al. (2016) extended this relationship between physical and perceived complexity by showing that another metric of physical complexity correlates with the perceived complexity of an array of shapes that together constitute a visual scene. A study by Forsythe, Nadal, Sheehy, Cela-Conde, and Sawey (2011) generated similar results, showing that the perceived complexity of works of art is related to the GIF compressibility of the image. Interestingly, such compressibility relates to the amount of information in an image, which is strongly related to D (Chamorro-Posada, 2016). Physical Complexity Natural scenes contain multiple types of fractal phenomena, from the branching patterns of trees’ limbs to the shape of the horizon line to the structure of clouds in the sky (Hagerhall, Purcell, & Taylor, 2004; Lovejoy, 1982; Mandelbrot, 1983). For this study, we aimed to identify descriptors of complexity that could relate to the fractal aspects of scenes – textures and edges.
For rated properties related to complexity, we identified three terms in an ad hoc manner (“complexity,” “intricacy,” and an antonym of complexity, “simplicity”) and adapted Blijlevens et al.’s (2014) descriptors related to variety (“extent of space filled” and “number of elements”) because these seemed to implicate a sense of numerosity similar to fractal complexity. As D increases, so does the amount of fine structure – large connected regions begin to break apart and there is an increase in the number of discrete elements. More detail about the measures of physical complexity used in this study, and a discussion of the literature surrounding them, is provided in Chapters II and III. Testing the Aesthetics Model By measuring perceived complexity and correlating it with aesthetic value and physical complexity, we advance the literature in several significant ways. First, we introduce the first test of whether there is a latent factor related to perceived complexity (see Figure 2). We use this as a means to test, for the first time, whether the perception of complexity mediates the relationship between aesthetic value and two measures of physical complexity, and we provide the first evidence for or against models such as Redies’ (2015) by using an individual differences approach. The use of latent structures will increase our sensitivity to any such relationship between perceived complexity and perceived aesthetic value. This is especially important because of the lexical distance between terms such as beauty and intricacy, which may describe entirely different aspects of a natural scene and could exhibit a weak or nonsignificant correlation. When the variance that rated qualities share with other related terms is extracted and effectively subjected to correlational analyses through latent regression, it will be clear whether these latent concepts share variance or not.
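The logic of pooling related ratings to expose shared variance can be previewed with a much simpler composite-score simulation. This is a hedged sketch only: all values are synthetic, and plain averaging stands in for the latent-variable model, so it illustrates the rationale rather than the analysis itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic example: one shared underlying quality per "image" drives
# five complexity-related ratings and five aesthetics-related ratings.
latent = rng.random(200)                                  # 200 hypothetical images
complexity = latent[:, None] + 0.3 * rng.standard_normal((200, 5))
aesthetic = latent[:, None] + 0.3 * rng.standard_normal((200, 5))

# Composite scores: average each construct's five ratings per image.
# Averaging attenuates the rating-specific noise while preserving the
# variance the five ratings share.
c_score = complexity.mean(axis=1)
a_score = aesthetic.mean(axis=1)

# Because both composites reflect the same latent quality, they correlate
# strongly even though any single pair of ratings would correlate less.
r = np.corrcoef(c_score, a_score)[0, 1]
```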
An individual differences approach has not been widely used in tests of how the physical qualities of art itself impact perception and judgments. Instead, it has largely been limited to personality and other continua along which people vary. Thus, in our analysis, the roles of participants and art have been reversed, with aggregation being performed across participants to achieve purer measures of the typical response to individual works of art. Individual Differences – In Images, not People It is worth reiterating that the aim of this study is to understand which physical properties of images drive stronger, if still mundane, aesthetic responses to images of a particular type. To help clarify, we briefly review individual differences research in which interpersonal differences are the variable of interest. For example, Axelsson (2007) investigated the effect of expertise in photography on the process of rating photographs on a variety of dimensions, and found that students preferred relatively familiar images, whereas professionals’ greater experience led them to prefer less familiar and more uncertain images. To contrast Axelsson’s (2007) approach with the present study, our focus is on evaluating differences in the photographs themselves. As another example, consider studies of the Golden Section, which ask whether deviations from the ratio phi (1:1.6…) decrease the aesthetics of the object of study (from sculptures of human forms in Di Dio, Macaluso, & Rizzolatti, 2007, to rectangles in McManus, Cook, & Hunt, 2010). It is easy to understand why the focus remains on inter-human differences when the alternative is to investigate inter-rectangle differences.
McManus (1980) contributed to this line of work by using rectangles to investigate the stability of preferences, and has more recently determined that some individuals prefer squares while others prefer rectangles, even though the reason for such preferences remains unresolved (McManus, Cook, & Hunt, 2010). Others use latent analyses, but in the traditional manner – to identify individual differences in people (Chamorro-Premuzic, Burke, Hsu, & Swami, 2010; Silvia, Fayn, Nusbaum, & Beaty, 2015). Chamorro-Premuzic et al. (2010) looked beyond quadrilaterals and found that personality does predict preference for paintings. A factor structure of aesthetic ability has even been proposed by Myszkowski and Zenasni (2016) to distinguish forms and extents of expertise, and it may distinguish people based on the patterns in their aesthetic responses. Recently, the technique of cluster analysis (Norušis, 2011) has been applied to separating individuals into groups based on their response patterns (e.g., Bies, Blanc-Goldhammer, Boydston, Taylor, & Sereno, 2016; Güçlütürk, Jacobs, & van Lier, 2016; Spehar, Walker, & Taylor, 2016). For example, Bies et al. (2016) showed that a minority of individuals prefer simpler exact fractal patterns, while others prefer increasingly complex fractals. To contrast the approaches, the current study would, instead, find sets of images that are similar to each other and different from others based on participants’ ratings and measurements of physical complexity. Thus, our focus is on the stimuli themselves, asking what distinguishes them when they are all quite similar, such as when all examples belong to the same category (e.g., clouds, landscapes). Consider Ulrich’s (1984) seminal study, which showed that individuals with a view of nature were faster to recover from surgery than those with a view of buildings.
While anything might be better than a brick wall, we take the view that not all images of nature are equal – they exhibit slight variations in aesthetic value. But is such variability just photographer-related error, or do aesthetic responses rely on more primitive attributes? Hagerhall et al. (2004) provided a first pass at this question when they measured the D of the horizon in a set of landscape photographs and showed that, after removing stimuli that might elicit particular associations (a "cognitive" response in Redies' (2015) model, such as pictures with lakes), the average preference rating for each landscape photograph exhibited a weak but significant correlation with the D of each image. Here, we take this approach to the next level in terms of statistical modeling by introducing latent variables that tap into the constructs of complexity and aesthetic value but retain a focus on how differences in the images impact the idealized observer's responses. To do so, we average across several participants' responses to a particular image (a technique commonly referred to as parceling; for a further discussion of the pros and cons of this technique, see Little, Cunningham, & Shahar, 2002).

Method

Stimuli

Stimuli consisted of three sets of 200 images: random fractals generated with a computer, photographs of clouds taken with a Nikon D7100 camera and a 70-300mm lens, and photographs of landscapes taken with a Nikon D7100 camera and a 10-24mm lens.

Random fractals. Two hundred random fractal patterns were generated with the inverse Fourier method described in Chapter II.

Cloud photographs. Two hundred photographs of clouds were selected from a database described in Chapter III. Forty images of each of five cloud types (cumulus, stratocumulus/nimbostratus/stratus, altocumulus, cirrus, and cirrocumulus) were retained.

Landscape photographs.
Two hundred photographs of landscapes were randomly selected without replacement from a larger database described in Chapter III.

Image analyses. Each image's intensity matrix was subjected to the algorithms described in Chapters II and III, by which the fractal dimension (D) and spectral decay rate (β) were calculated.

Participants

One hundred and seventy-five undergraduates of varied gender (109 female, 64 male, 1 other, and 1 declined to answer) and ethnicity (9 African American, 3 American Indian or Alaskan Native, 24 Asian, 114 Caucasian, 1 Pacific Islander, 21 other, and 4 declined to answer) participated for course credit. Participants' ages ranged from 18 to 27 (M = 19.28, Mdn = 19).

Procedure

This study was carried out in accordance with a protocol approved by Research Compliance Services of the University of Oregon. Each participant gave written informed consent, then completed one rating task followed by a survey in an individual room on a 27" iMac, and was finally debriefed. At the beginning of the task, participants were shown instructions indicating that they should rate each image on a continuous scale from 0 (lowest) to 1 (highest) by clicking a scale bar with specified endpoints, using the full range of the scale. Participants were also asked not to click a single point along the scale for all images (e.g., exclusively at 0) or to click only the endpoints (i.e., either 0 or 1). Each participant was then told which quality they should rate (e.g., for "attractiveness," participants were asked to rate each image from lowest to highest attractiveness). Each participant was asked to rate one of the three sets of 200 images on one of the ten qualities shown in Figure 1. Each image was presented once, in random order, and remained on the screen until the participant provided a response, after which a gray screen was displayed for 500 milliseconds before the next image was presented and the next trial began.
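The spectral decay rate analysis described above can be illustrated with a brief sketch: β is estimated as the slope of log power against log spatial frequency from a radially averaged two-dimensional Fourier power spectrum. This is a minimal stand-in under simplifying assumptions (square grayscale image, integer frequency bins), not the exact algorithm of Chapters II and III.

```python
import numpy as np

def spectral_decay_rate(image):
    """Estimate beta as the slope of log(power) vs. log(frequency).

    Simplified sketch of a radially averaged power spectrum analysis:
    compute the 2-D Fourier power spectrum, average power within integer
    radial frequency bins, and fit a line in log-log coordinates.
    """
    n = image.shape[0]  # assumes a square, even-sized grayscale image
    spectrum = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    power = np.abs(spectrum) ** 2
    # distance of each Fourier coefficient from the zero-frequency center
    yy, xx = np.indices(power.shape)
    radius = np.hypot(xx - n // 2, yy - n // 2).astype(int)
    # mean power within each integer frequency bin
    binned = (np.bincount(radius.ravel(), weights=power.ravel())
              / np.bincount(radius.ravel()))
    freqs = np.arange(1, n // 2)  # ignore DC and corner frequencies
    slope, _ = np.polyfit(np.log(freqs), np.log(binned[freqs]), 1)
    return slope  # beta; e.g., near -2 for many natural scenes
```

Applied to synthetic noise whose power falls off as the inverse square of frequency, the fitted slope comes out near -2, which is how such a routine can be validated before it is applied to photographs.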
Results and Discussion

Response Times

Each stimulus remained on the screen until the participant entered a rating. Across participants, the mean response time was 2.53 seconds (SD = 1.16), with the fastest participant averaging 1.07 seconds (SD = 0.33) to rate clouds' simplicity and the slowest participant averaging 9.98 seconds (SD = 11.87) to rate random fractals' pleasantness.

Raw Rating Data Screening

Several steps were taken to ensure the rating data represented continuous, normally distributed measures of the rated qualities. First, the raw data were screened to ensure each participant followed the instructions, did not reverse the scale (e.g., placing higher simplicity nearer 0 by re-interpreting the instructed quality as an antonym such as complexity), and used the scale in an appropriate manner. Of the original 150 participants, 25 provided responses that were singular or binary in nature, which was categorically different from the remaining 125 participants' responses that spanned the range of values from 0 to 1. Data were collected from another 25 participants, all of whom completed the task as requested, to replace the 25 original participants whose data were discarded, with matching image-set and rated-quality pairs. Data were then prepared for parceling, which consisted of averaging each set of five participants who were given the same image set and quality to rate (e.g., random fractal images rated on the quality of "complexity"). Before averaging, the correlation matrix for each set of five participants' responses was screened for reverse-coded responses (i.e., viewed to ensure each pair of ratings exhibited a significant positive correlation or a non-significant correlation). This resulted in seven participants' response patterns being classified as candidates for reverse-coding. Four ratings of "simplicity" (two of clouds and two of landscapes) produced patterns of significant negative correlations with others' ratings.
In both image sets, two participants' responses were correlated with each other and anti-correlated with the other three participants. In all four cases, the participant's responses were also anti-correlated with at least one measure of physical complexity (β was arbitrarily chosen for this test). As such, they were recoded by subtracting each response value from 1 (recoded response = 1 – participant response). Three participants who viewed the random fractals, two who rated attractiveness and one who rated beauty, also provided responses that were anti-correlated with other participants' ratings and with the images' β values. These responses likely reflect the accurate ratings of a subpopulation identified by Bies et al. (2016) who prefer simpler patterns. Still, these participants' ratings were reverse-coded for consistency with the majority pattern and to retain consistency across image sets. After this screening process was performed, raw correlations between pairs of participants who rated the same set of images along the same property (e.g., clouds' intricacy) ranged from r = -.401 to .84, M(SD) = .38(.25). Parceled data were screened for outliers, of which there were none, resulting in a ratio of 20 cases per variable (i.e., 200 images to 10 rated properties).

Parceled Data – Descriptive Statistics and Correlation Analyses

Descriptive statistics for all of the measures are shown in Tables 1-3. All skewness and kurtosis estimates indicate normal distributions aside from those for landscapes' D, which both indicate a somewhat non-normal distribution.
These values were left in their original units, as well, for two reasons: 1) the following analyses utilize the maximum likelihood estimator, which is considered relatively robust to violation of the assumption of normality (Bollen, 1989); 2) covariance-based SEMs are relatively stable across levels of skewness and kurtosis the same as or more extreme than those observed here (Reinartz, Haenlein, & Henseler, 2009). A correlation matrix was derived from the ratings and measurements of each image set using SPSS 24, with 200 observations (the stimulus set) per variable (see Tables 4-6). The correlations among participant ratings varied across the image sets. For ratings of random fractals, all correlations were significant at the level of p < .001, with all |r| > .34, indicating factorability. For ratings of photographs of clouds and landscapes, the correlations varied in magnitude and significance, but remained consistent with factorability.

Factor Structure of Participant Ratings

The correlation matrices and standard deviations were subjected to confirmatory factor analysis with 200 observations per group (image set) using the lavaan package (version 0.5-19) in R (Rosseel, 2012), in which the observed variables loaded on aesthetic or complexity factors. Such latent factors can reveal relationships between higher-order constructs that are undetectable at the level of observed variables through structural equation modeling (Kline, 2011). One- and two-factor measurement models were compared to determine whether complexity and aesthetic value were separate factors or reflected a singular image-rating factor.
In addition, two- and three-factor models were compared to assess whether variety, as identified among Blijlevens et al.'s (2014) factors that describe design aesthetics, differentiated from complexity in ratings of random fractals, clouds, and landscapes. In each model described in this section, all loadings on latent variables, covariances between latent variables, and variances were significant (p < .05) unless otherwise noted.

Table 1
Descriptive Statistics for the Parceled Ratings of Random Fractals (N = 200)

Measure   M      SD     Range   Skew    Kurtosis
Aesthetic Ratings
a1        0.31   0.23   0.71    0.17    -1.47
a2        0.55   0.15   0.60    0.40    -1.05
a3        0.43   0.20   0.74    -0.08   -1.43
a4        0.45   0.13   0.54    0.15    -1.12
a5        0.25   0.17   0.59    0.15    -1.44
Complexity Ratings
c1        0.55   0.26   0.84    0.11    -1.41
c2        0.58   0.11   0.56    -0.50   -0.31
c3        0.55   0.25   0.79    0.07    -1.44
c4        0.54   0.31   0.97    0.09    -1.54
c5        0.40   0.22   0.78    -0.07   -1.38
Physical Properties
D         1.77   0.17   0.61    -0.69   -0.77
β         -2.01  0.57   1.97    0.05    -1.15

Table 2
Descriptive Statistics for the Measurements and Parceled Ratings of Clouds (N = 200)

Measure   M      SD     Range   Skew    Kurtosis
Aesthetic Ratings
a1        0.49   0.13   0.70    -0.36   -0.11
a2        0.39   0.21   0.88    0.78    -0.12
a3        0.53   0.14   0.70    0.88    0.64
a4        0.42   0.14   0.64    0.04    -0.72
a5        0.45   0.17   0.82    0.15    -0.42
Complexity Ratings
c1        0.43   0.18   0.82    -0.02   -0.73
c2        0.55   0.21   0.90    -0.24   -0.66
c3        0.38   0.19   0.80    0.51    -0.66
c4        0.49   0.13   0.63    -0.31   -0.08
c5        0.49   0.18   0.82    0.18    -0.71
Physical Properties
D         1.29   0.10   0.56    0.14    0.56
β         -2.70  0.28   1.36    0.56    0.10

Note: a1 = artisticness, a2 = attractiveness, a3 = beauty, a4 = pleasantness, a5 = stunningness, c1 = complexity, c2 = extent of space filled, c3 = intricacy, c4 = number of elements, c5 = simplicity.
Table 3
Descriptive Statistics for the Measurements and Parceled Ratings of Landscapes (N = 200)

Measure   M      SD     Range   Skew    Kurtosis
Aesthetic Ratings
a1        0.39   0.15   0.72    0.75    0.24
a2        0.50   0.19   0.80    0.42    -0.61
a3        0.44   0.15   0.75    0.54    0.02
a4        0.46   0.15   0.68    0.46    -0.38
a5        0.32   0.16   0.90    1.09    1.06
Complexity Ratings
c1        0.57   0.19   0.83    -1.00   0.41
c2        0.60   0.17   0.77    -0.97   0.48
c3        0.57   0.11   0.64    -0.56   0.30
c4        0.53   0.11   0.50    -0.16   -0.55
c5        0.43   0.15   0.71    0.85    0.39
Physical Properties
D         1.79   0.14   0.79    -2.38   5.93
β         -2.45  0.19   1.16    -0.69   1.10

Table 4
Correlations Among the Measurements of Physical Properties and Parceled Ratings of Random Fractals (n = 200)

Measure   1     2     3     4     5     6     7     8     9     10    11    12
Aesthetic Ratings
1. a1     –
2. a2     .82   –
3. a3     .88   .76   –
4. a4     .81   .67   .78   –
5. a5     .93   .79   .87   .80   –
Complexity Ratings
6. c1     -.92  -.76  -.87  -.81  -.92  –
7. c2     -.47  -.48  -.42  -.35  -.52  .46   –
8. c3     -.91  -.76  -.87  -.80  -.91  .92   .45   –
9. c4     -.93  -.76  -.88  -.80  -.93  .93   .47   .91   –
10. c5    .92   .76   .88   .79   .90   -.93  -.42  -.91  -.92  –
Physical Property
11. D     -.92  -.87  -.84  -.78  -.92  .90   .53   .89   .89   -.89  –
12. β     -.94  -.79  -.88  -.82  -.93  .96   .48   .94   .94   -.95  -.95  –

Note: a1 = artisticness, a2 = attractiveness, a3 = beauty, a4 = pleasantness, a5 = stunningness, c1 = complexity, c2 = extent of space filled, c3 = intricacy, c4 = number of elements, c5 = simplicity. Correlations greater than .14 are significant at p < .05; correlations greater than .23 are significant at p < .001.

Table 5
Correlations Among the Measurements of Physical Properties and Parceled Ratings of Clouds (n = 200)

Measure   1     2     3     4     5     6     7     8     9     10    11    12
Aesthetic Ratings
1. a1     –
2. a2     .55   –
3. a3     .62   .75   –
4. a4     .68   .53   .56   –
5. a5     .71   .75   .74   .63   –
Complexity Ratings
6. c1     .63   .52   .41   .58   .58   –
7. c2     -.18  .04   -.16  -.14  .05   .13   –
8. c3     .71   .68   .62   .65   .72   .77   -.05  –
9. c4     .29   .26   .13   .26   .37   .49   .62   .38   –
10. c5    -.55  -.50  -.36  -.48  -.63  -.74  -.42  -.69  -.69  –
Physical Property
11. D     -.10  -.26  -.31  -.13  -.23  .10   .07   -.04  .08   .00   –
12. β     .59   .61   .52   .59   .70   .62   .23   .71   .57   -.71  -.08  –

Note: a1 = artisticness, a2 = attractiveness, a3 = beauty, a4 = pleasantness, a5 = stunningness, c1 = complexity, c2 = extent of space filled, c3 = intricacy, c4 = number of elements, c5 = simplicity. Correlations greater than .14 are significant at p < .05; correlations greater than .23 are significant at p < .001.

Table 6
Correlations Among the Measurements of Physical Properties and Parceled Ratings of Landscapes (n = 200)

Measure   1     2     3     4     5     6     7     8     9     10    11    12
Aesthetic Ratings
1. a1     –
2. a2     .72   –
3. a3     .71   .60   –
4. a4     .64   .73   .57   –
5. a5     .73   .71   .68   .65   –
Complexity Ratings
6. c1     -.59  -.72  -.40  -.59  -.42  –
7. c2     -.55  -.73  -.38  -.61  -.39  .77   –
8. c3     -.32  -.52  -.15  -.43  -.20  .69   .69   –
9. c4     .07   -.01  .08   -.08  .17   .25   .12   .35   –
10. c5    .65   .70   .48   .60   .52   -.77  -.64  -.55  -.18  –
Physical Property
11. D     -.58  -.53  -.41  -.41  -.38  .64   .58   .43   .10   -.58  –
12. β     .11   .02   .09   -.02  .18   -.06  .05   .05   -.09  .11   -.36  –

Note: a1 = artisticness, a2 = attractiveness, a3 = beauty, a4 = pleasantness, a5 = stunningness, c1 = complexity, c2 = extent of space filled, c3 = intricacy, c4 = number of elements, c5 = simplicity. Correlations greater than .14 are significant at p < .05; correlations greater than .23 are significant at p < .001.

Table 7
Fit indices for all measurement models and invariance tests.
Model                        χ2       df   p       χ2/df   RMSEA (90% CI)   TLI    CFI   SRMR    AIC
Random Fractals
1-factor MM                  1205.27  37   <.001   32.57   .40 (.38, .42)   .53    .62   14.59   -2748.33
2-factor MM                  61.79    34   .002    1.82    .06 (.04, .09)   .99    .99   .02     -3885.82
3-factor MM                  56.51    32   .005    1.77    .06 (.03, .09)   .99    .99   .02     -3887.09
Clouds
1-factor MM                  1133.08  37   <.001   30.62   .39 (.37, .40)   .15    .30   26.38   -2111.97
2-factor MM                  411.81   34   <.001   12.11   .24 (.22, .26)   .68    .76   .13     -2827.24
3-factor MM                  TNPD     –    –       –       –                –      –     –       –
Landscapes
1-factor MM                  1351.84  37   <.001   36.54   .42 (.40, .44)   -.08   .12   24.37   -2247.24
2-factor MM                  231.99   34   <.001   6.82    .17 (.15, .19)   .82    .87   .10     -3361.08
3-factor MM                  PNPD     –    –       –       –                –      –     –       –
Factorial Invariance
Configural model             705.59   102  <.001   6.92    .17 (.16, .18)   .87    .90   .09     -10074.13
Invariant factor loadings    1392.90  118  <.001   11.80   .23              –      .79   –       -9358.80

Note: MM = measurement model; RMSEA = root mean square error of approximation; TLI = Tucker-Lewis index; CFI = comparative fit index; SRMR = standardized root mean square residual; AIC = Akaike information criterion. TNPD = the observed variable error term matrix, theta, was not positive definite. PNPD = the covariance matrix of latent variables, psi, was not positive definite.

Table 8
Fit indices for all mediation models.
Model       χ2      df  p       χ2/df   RMSEA (90% CI)   TLI   CFI   SRMR   AIC
Random Fractals
Direct      98.95   13  <.001   7.61    .18 (.15, .22)   .93   .95   .02    -2612.45
Indirect    226.91  52  <.001   4.36    .13 (.11, .15)   .94   .95   .02    -4773.79
Total b     162.60  51  <.001   3.19    .11 (.09, .12)   .96   .97   .02    -4836.10
Clouds
Direct      68.59   13  <.001   5.28    .15 (.11, .18)   .90   .93   .05    -1940.68
Indirect    491.47  52  <.001   9.45    .21 (.19, .22)   .70   .76   .13    -3315.97
Total b     450.03  50  <.001   9.00    .20 (.18, .22)   .72   .78   .12    -3353.41
Landscapes
Direct      73.50   13  <.001   5.65    .15 (.12, .19)   .88   .92   .05    -1885.65
Indirect    282.88  52  <.001   5.44    .15 (.13, .17)   .82   .86   .09    -3834.62
Total b     280.86  50  <.001   5.62    .15 (.14, .17)   .82   .86   .09    -3832.63

Note: RMSEA = root mean square error of approximation; TLI = Tucker-Lewis index; CFI = comparative fit index; SRMR = standardized root mean square residual; AIC = Akaike information criterion.

The 2-factor models were significantly better fits than the 1-factor models for the random fractals (Δχ2(3) = 1143.48, p < .001; see Table 7), photographs of clouds (Δχ2(3) = 721.27, p < .001), and photographs of landscapes (Δχ2(3) = 1119.85, p < .001), a conclusion supported by the other fit statistics provided in Table 7 (RMSEA and SRMR are much closer to 0, TLI and CFI are much closer to 1, and AIC is much smaller, i.e., a larger negative number, in each case). Considerations of parsimony and model identification lead to the conclusion that 2-factor models should be preferred to 3-factor models in this data set as well (see Table 7). For the ratings of random fractals, there was no significant difference in model fit between the 2- and 3-factor models (Δχ2(2) = 5.28, p = .07), with a small change in AIC (ΔAIC < 2), overlapping RMSEA ranges, and equivalent values on other fit indices.
A 3-factor model describing the cloud photograph ratings produced a not-positive-definite observed-variable error term matrix that included a negative error variance for c4; in this model, the variance of c4 was non-significant (v = -0.23, p = .14), the covariance of complexity with variety was particularly small (cov = .005, p = .048), and the covariance with aesthetic value was non-significant (cov = .002, p = .054). Similar problems were observed for the 3-factor model describing landscape ratings, for which the covariance matrix of latent variables was not positive definite, including negative estimates of the covariance of aesthetic value with complexity and variety. Given these problems of fit, a 2-factor model is preferred, and while Blijlevens et al.'s (2014) "variety" factor was composed of slightly different terms and a "conveys variety" item, more work would need to be done to assess the separability of variety and complexity.

Tests of Invariance of Ratings Across Sets of Images

The establishment of measurement invariance allows for conclusions to be drawn across groups – typically groups of individuals (e.g., to compare individuals of various genders or from different cultures) or time points (e.g., individuals at different ages/stages of development). Here, the basic requirements for a test of invariance across groups were met, in that the groups are categorical (e.g., cloud photographs are different from landscape photographs) and independent, each containing 200 members. The initial steps required to establish invariance are hierarchical, and stricter tests of invariance should not be considered if initial tests result in significant decrements to model fit. First, the configural model should be suitable before testing for invariance. This allows for the determination of whether the base model in each group, as identified in the preceding section, is a reasonable fit for the data.
Second, the factor loadings are constrained to be equal across groups and this model is compared to the configural model. Here, for example, the relationship between a1 and aesthetic value would be set equal in each set of images (along with each other observed-latent pairing). Then intercepts, residuals, mean structures, etc. can be compared across groups, but only if there is no significant, meaningful difference between the configural model and the metric invariance (constrained factor loading) model. The covariance matrices were entered into a configural model with two latent factors in which aesthetic value and complexity each reflected five measured variables (see Figure 1). The configural model converged normally after 271 iterations and provided an acceptable fit (χ2(102) = 705.59, p < .001). The metric invariance model, in which factor loadings were constrained, was a significantly worse fit than the configural model, as indicated by its much larger χ2 (Δχ2(16) = 687.31, p < .001). Because this was accompanied by an increase in AIC (ΔAIC = 715.33), a decrease in CFI (ΔCFI = .11), and a higher RMSEA (ΔRMSEA = .05) that did not overlap with the estimated range for RMSEA in the configural model, it was concluded that there was not metric invariance. Replacing the marker variable (the first variable designated in the equation specifying the observed variables that load on a latent variable, i.e., a1 and c1) with other indicator variables (e.g., a2 and c2) did not result in any change in the test of metric invariance, and neither did the addition of constraints on observed variables (e.g., setting c2 and/or c4 to 0 loading on complexity). This suggests the relationships of indicators of perceived aesthetic value and complexity vary across types of images.
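The nested-model comparisons above rest on the chi-square difference (likelihood-ratio) test: the difference in χ2 between the constrained and freer models is itself χ2-distributed, with degrees of freedom equal to the difference in df. The following sketch reproduces the configural versus metric-invariance comparison reported above using scipy; the helper is a generic illustration, not lavaan's implementation.

```python
from scipy.stats import chi2

def chisq_diff_test(chisq_constrained, df_constrained, chisq_free, df_free):
    """Chi-square difference test for nested SEMs.

    The constrained model (here, invariant factor loadings) can only fit
    worse than the freer configural model; a significant difference means
    the equality constraint is untenable.
    """
    delta_chisq = chisq_constrained - chisq_free
    delta_df = df_constrained - df_free
    return delta_chisq, delta_df, chi2.sf(delta_chisq, delta_df)

# configural: chi2(102) = 705.59; metric invariance: chi2(118) = 1392.90
d_chisq, d_df, p = chisq_diff_test(1392.90, 118, 705.59, 102)
# d_chisq = 687.31, d_df = 16, p far below .001: metric invariance rejected
```

The same helper applied to the 2- versus 3-factor fractal models (61.79 on 34 df versus 56.51 on 32 df) reproduces the non-significant Δχ2(2) = 5.28 reported above.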
This provides a stronger initial assessment of the intervening role of perceived complexity, with three opportunities to test the relationship among measures of physical complexity, perceived complexity, and perceived aesthetic value. Future studies may assess the extent to which invariance exists within participants rating different groups of images and across different sets of participants rating the same group of images. The following analyses consider the explanatory role of complexity in the relationship aesthetic value has with D and β.

Mediation Analyses

Random fractals. A first model testing the direct effects of D and β on aesthetic value was an acceptable fit for the data (χ2(13) = 98.95, p < .001, see Table 8), with aesthetic value decreasing significantly with D (b = -.35, p < .001; the standardized parameter estimate is reported as "b" rather than β for the sake of clarity) and with β (b = -.64, p < .001). A second model testing the indirect effects of D and β on aesthetic value through complexity was also acceptable (χ2(52) = 226.91, p < .001), revealing only a significant effect of β through complexity. There was a positive relationship between β and complexity (b = .42, p < .001) and a negative relationship between complexity and aesthetic value (b = -.87, p < .001). No indirect effect of D on aesthetic value was observed because there was a non-significant relationship between D and complexity (b = .06, p = .41). A third model allowing for the total effect to be measured (both direct and indirect) with all terms unconstrained resulted in negative variances. In particular, aesthetic value's variance was a small negative value, the standard error around which included 0 (p = .34). Because the variance in aesthetic value is likely explained by the other terms in the model, given the strong relationships among variables in the model, its variance was set to 0 in an alternate third model (Random Fractals' "Total b" in Table 8).
This alternate model was an acceptable fit for the data (χ2(51) = 162.60, p < .001, see Table 8), and was a significantly better fit than the indirect model (Δχ2(1) = 64.31, p < .001). As shown in Figure 3A, there are significant direct effects of D and β on aesthetic value, and significant relationships between β and complexity and between complexity and aesthetic value, although once again there was not an effect of D on complexity. Bootstrapping confirms that there is partial mediation of the β–aesthetic value relationship (95% CI indirect = [-.73, -.47], 95% CI direct = [.22, .48]) and no mediation of the D–aesthetic value relationship (95% CI indirect = [-.04, .33]).

Figure 3a-c. Latent mediation models; effects of D and β on aesthetic value and the intervening role of perceived complexity are shown. 3a. Random fractals. 3b. Clouds. 3c. Landscapes. Solid lines indicate significant regression paths at p < .001 (***) and p < .05 (*); dashed lines indicate non-significant paths. Ovals and squares represent latent and observed variables, respectively. All factor loadings and variances were significant (p < .01). (Standardized path coefficients appear in the figure panels.)

Cloud photographs. A first model testing the direct effects of D and β on aesthetic value was an acceptable fit for the data (χ2(13) = 68.59, p < .001, see Table 8), with aesthetic value decreasing significantly with increases in D (b = -.21, p < .001) and β (b = -.73, p < .001). A second model testing the indirect effects of D and β on aesthetic value through complexity was qualitatively worse than the direct model (χ2(52) = 491.47, p < .001, see Table 8), revealing only a significant effect of β through complexity.
In contrast with the results from the random fractals, there was a negative relationship between β and complexity (b = -.83, p < .001) and a positive relationship between complexity and aesthetic value (b = .83, p < .001). No indirect effect of D on aesthetic value was observed because there was a non-significant relationship between D and complexity (b = .04, p = .81). A third model allowing for the total effect to be measured (both direct and indirect) converged normally after 115 iterations. This model was an acceptable fit for the data (χ2(50) = 450.03, p < .001, see Table 8), and was a significantly better fit than the indirect model (Δχ2(2) = 41.44, p < .001). As shown in Figure 3B, all paths between D and β and the latent variables complexity and aesthetic value are significant. Bootstrapping confirms that there is again partial mediation of the β–aesthetic value relationship (95% CI indirect = [-.24, -.13], 95% CI direct = [-.14, -.02]) and reveals that there is no mediation of the D–aesthetic value relationship (95% CI indirect = [-.001, .12]).

Landscape photographs. A first model testing the direct effects of D and β on aesthetic value was an acceptable fit for the data (χ2(13) = 73.50, p < .001, see Table 8), with aesthetic value decreasing significantly with D (b = -.62, p < .001), but not with β (b = -.126, p = .054). A second model testing the indirect effects of D and β on aesthetic value through complexity was qualitatively worse than the direct model (χ2(52) = 282.88, p < .001, see Table 8), although this revealed significant effects of both D and β through complexity. In contrast with the results from the random fractals but consistent with the photographs of clouds, there was a negative relationship between β and complexity (b = -.23, p < .001). Unlike the results from the photographs of clouds, there was a negative relationship between complexity and aesthetic value (b = -.78, p < .001).
Here, there was also a positive relationship between D and complexity (b = .77, p < .001). A third model allowing for the total effect to be measured (both direct and indirect) converged normally after 107 iterations. This model was an acceptable fit for the data (χ2(50) = 280.86, p < .001, see Table 8), but was not a significantly better fit than the indirect model (Δχ2(2) = 2.02, p = .36). As shown in Figure 3C, neither direct path is significant. Bootstrapping confirms that there is full mediation of the D–aesthetic value relationship (95% CI indirect = [-.64, -.36], 95% CI direct = [-.23, .07]) and of the β–aesthetic value relationship (95% CI indirect = [.05, .17], 95% CI direct = [-.10, .06]).

Discussion

The mediation analyses add weight to the failure of the invariance test in that the pattern of results across the three image sets is distinctive. No relationship persists across the image sets aside from the indirect path from β to aesthetic value. And while this trend appears to decrease across the three image sets as more elements are added (from a single process in the random fractals, to the two phenomena, clouds and sky, in the cloud images, to more, including vegetation, horizon, clouds, and atmosphere, in the landscapes), the mediating role of perceived complexity in the D–aesthetic value relationship appears to increase. These relationships in the fitted models may be sensitive to the magnitude of the relationships among the observed variables (which are quite high for random fractals). What is certain is that perceived complexity and aesthetic value are tightly related at the construct level, even though these factors' indicators vary widely in their bivariate correlations. A potential explanation for the difference between random fractals and landscapes is that the random fractals are textures, which are well described by β, while the texture of objects in the landscape images may be less pronounced than their edges.
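The bootstrap logic behind the mediation confidence intervals above can be sketched with observed variables standing in for the latent factors: resample the 200 images with replacement, re-estimate the a (predictor → mediator) and b (mediator → outcome, controlling for the predictor) paths, and take percentiles of the a×b products. This toy version uses ordinary least squares on simulated data, not the latent-variable models or the dissertation's code; the variable roles are labeled by assumption.

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b indirect effect: a from m ~ x, b from y ~ m + x (OLS)."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]  # coefficient on m
    return a * b

def bootstrap_ci(x, m, y, n_boot=2000, seed=0):
    """95% percentile bootstrap CI for the indirect effect, resampling cases."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample images with replacement
        estimates.append(indirect_effect(x[idx], m[idx], y[idx]))
    return np.percentile(estimates, [2.5, 97.5])

# Simulated example: predictor -> perceived complexity -> aesthetic value
rng = np.random.default_rng(1)
x = rng.normal(size=200)                       # physical property (stand-in)
m = 0.8 * x + rng.normal(scale=0.5, size=200)  # perceived complexity
y = -0.9 * m + rng.normal(scale=0.5, size=200) # aesthetic value
lo, hi = bootstrap_ci(x, m, y)
```

When the resulting interval excludes zero, as in this simulation, the indirect path is considered reliable; an interval spanning zero corresponds to the "no mediation" conclusions above.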
General Discussion

Generally speaking, the present results provide consistent support for the notion that intermediate processes play a role in the aesthetic response. Support for this notion is provided by the mediating effect of perceived complexity on aesthetic responses, although the physical property on which perceived complexity anchors appears flexible. van den Berg, Joye, and Koole (2016) suggest the restorative effects observed in studies like that of Ulrich (1984) occur as a result of perceived complexity. The present results are consistent with this conclusion – perceived complexity does, at least to some extent, serve as an intermediary in aesthetic responses to natural complexity. It is not a far stretch to imagine that an increase in positive affect, resulting from the mundane act of viewing nature, could to some extent improve overall wellbeing, whether by reducing stress (as in Taylor, 2006) and its physiological correlates or by some other mechanism. Future studies should certainly continue to attempt to identify a mechanism for this relationship. Although presumably limited in magnitude, there may be measurable differences in the physiological response to such natural scenes. Measures of arousal would have obvious utility for more graphic stimulus sets or the introduction of rare, novel stimuli. But even for a set of 200 photographs of clouds, there may be subtle state changes that scale with a property such as D. One especially strong reason for including physiological correlates would be to extract method variance. Here, all responses were produced by self-report methods, which proved especially problematic for the fractal image group. Physiological recordings may provide data that can separate these factors.
Parceling could be applied to physiological responses, too, which may enhance the signal-to-noise ratio of these subtle effects without requiring participants to spend hours rating images (over the course of which the physiological signals related to perceptual or aesthetic responses may decay anyway). In addition to the limited physiological effects that should be explored in future studies, there is a question of what the participant is experiencing over the course of the experiment. Any novelty has worn away over the course of the session, long before the two-hundredth image. What remains to be processed is likely an efficient, heuristic computation that is anchored on some property of the images, as suggested by Blijlevens et al. (2014). Of note for vision scientists is that this anchor appears to involve D as consistently as β, if not more so for natural scenes. Perhaps van der Schaaf and van Hateren (1996) were correct in stating, "Variations in spatial frequency behavior are relatively unimportant," and not just in terms of the information gained by calculating such spectra, as was their intended meaning. Vision research has long focused on the visual system's sensitivity to β, considering scenes and textures in terms of Fourier and wavelet descriptors (Burton & Moorehead, 1987; Field, 1987; 1993; Giesel & Zaidi, 2013; Graham & Field, 2007; Isherwood, Schira, & Spehar, 2017; Knill, Field, & Kersten, 1990; Ruderman, 1996; Ruderman & Bialek, 1994; Tolhurst, Tadmor, & Chao, 1992; van der Schaaf & van Hateren, 1996; van Hateren, 1992), but has not addressed how D is perceived. This study shows unequivocally that both β and D contribute to aesthetic evaluation. Cutting and Garvin (1987) showed that D can be related to perceived complexity. A crucial issue moving forward is that D and β are at least somewhat related when optical and other variables are controlled (as shown in Chapter II).
Without simultaneously controlling for the variance explained by β, it is unclear whether correlations that have been observed between D and perceived complexity are really due to the relationship between β and perceived complexity, or vice versa. A final point of departure for future research is to attempt to understand the language we use to describe scale-invariant phenomena and complexity. Top-down approaches employed here and in preceding studies of aesthetics (e.g., Blijlevens et al., 2014) may overlook terms or entire constructs that help us to describe such complexity. While terms like coherent, order, and unity describe the effects of symmetry and repetition on patterns, terms that describe the recursion of fractals are needed to understand our response to nature’s geometry. This may help tie the restorative, emotional aspects of viewing nature to other non-canonical aesthetic responses such as confusion, which could limit liking ratings for random fractals with extremely high D. Exploring the larger emotional space offered by appraisal theories as they relate to the non-canonical aesthetic responses described by Silva (2009) would facilitate a much better understanding of mundane aesthetic responses to natural complexity and help to refine models of aesthetic responses such as those proposed by Berlyne (1971), Leder et al. (2004), and Redies (2015). As a first step, data consistent with the existence of intermediate steps at the perceptual level have been provided here.

CHAPTER V

CONCLUSIONS

Through this dissertation I have validated an infinite scaling relationship over a discrete range of measurements for random noise (Chapter II), shown that this does not extend to images of the physical world (Chapter III), and provided the first evidence that there is a mediating role of perceptual processes in the aesthetic response (Chapter IV).
Among behaviors for which mechanisms are lacking, the aesthetic response provides compelling reasons for further consideration. The paths from stimulus to response in the models of Leder, Belke, Oeberst, and Augustin (2004) and Redies (2015) are still being resolved. Aesthetic responses may stem from evolutionary pressures to identify healthy mates (Cárdenas & Harris, 2006) and navigate or engage with environments (Della-Bosca, Patterson, & Roberts, 2017; Juliani, Bies, Boydston, Taylor, & Sereno, 2016). If so, one might expect them to exhibit consistent physiological correlates. Yet aesthetic responses are sensitive to personality (Silva, Fayn, Nusman, & Beaty, 2015), cultural identity (Bao et al., 2016), experience (Pihko et al., 2011), and even beliefs (e.g., about an artwork’s authenticity, as in Huang, Bridge, Kemp, & Parker, 2011). But it is unclear to what extent those responses may be mitigated by or interact with other perceptual and cognitive responses. Moreover, the structure of such cognitive responses has eluded parsimonious characterization in the preceding pages. Perhaps factorial invariance would be detected for two image sets with similar contents, or with one set of raters characterizing any two image sets with disparate contents. The general logic of factor structures suggests that the particular terms used to describe concepts such as aesthetic value and complexity should not matter, so long as they do reflect the concept. A different set of descriptors may exhibit invariance across disparate image sets. In addition to the labels that we apply to the images, further characterization of the structure of the physical properties of images is needed. Here I have limited the discussion to D and β, but others have identified and made use of a multitude of qualities. For example, Liu, Lughofer, and Zeng (2015) utilized descriptors of texture to model responses to art.
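Whether a battery of such physical metrics shares common structure can be probed in much the same way the perceived-complexity items were, for instance with a principal component analysis of per-image metric values. The sketch below uses simulated numbers; the metrics, their correlations, and the sample size are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n_images = 300

# Hypothetical per-image physical metrics: D and beta co-vary (consistent
# with their relationship in Chapter II), a texture statistic does not.
latent = rng.normal(0.0, 1.0, n_images)
D = 1.5 + 0.20 * latent + rng.normal(0.0, 0.05, n_images)
beta = 2.0 - 0.40 * latent + rng.normal(0.0, 0.10, n_images)
texture = rng.normal(0.0, 1.0, n_images)

# PCA on the standardized metrics via the correlation matrix.
X = np.column_stack([D, beta, texture])
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals = np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False))[::-1]
explained = eigvals / eigvals.sum()
print(explained)  # first component captures the shared D-beta variance
```

In this toy case the first component absorbs the variance shared by D and β, while the independent texture statistic loads on its own component; applied to real images, such a decomposition would indicate whether several physical metrics reduce to a smaller set of latent complexity factors.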
It is possible that measures of physical complexity (or some aspect thereof) could form latent factors in much the same way that measures of perceived complexity are formed, and such measures have been subjected to PCA (e.g., Cavalcante et al., 2014; Roberts et al., 2016). Describing these other physical properties and combining such descriptions with other physiological and behavioral measures will be necessary to bridge the divide between our appreciation for art and our understanding of aesthetics.

REFERENCES CITED

CHAPTER I

Addison, J., & Steele, R. (1879). The spectator. A. Chalmers (ed.). New York: D. Appleton.
Attneave, F. (1957). Physical determinants of the judged complexity of shapes. Journal of Experimental Psychology, 53, 221-227.
Berlyne, D. E. (1971). Aesthetics and psychobiology. New York: Appleton-Century-Crofts.
Bies, A. J., Blanc-Goldhammer, D., Boydston, C. R., Taylor, R. P., & Sereno, M. E. (2016). Aesthetic responses to exact fractals driven by physical complexity. Frontiers in Human Neuroscience, 10, 210.
Birkhoff, G. D. (1933). Aesthetic measure. Cambridge: Harvard University Press.
Cutting, J. E., & Garvin, J. J. (1987). Fractal curves and complexity. Perception & Psychophysics, 42, 365-370.
Hagerhall, C., Purcell, T., & Taylor, R. P. (2004). Fractal dimension of landscape silhouette outlines as a predictor of landscape preference. Journal of Environmental Psychology, 24, 247-255.
Leder, H., Belke, B., Oeberst, A., & Augustin, D. (2004). A model of aesthetic appreciation and aesthetic judgments. British Journal of Psychology, 95, 489–508.
Lovejoy, S. (1982). Area-perimeter relation for rain and cloud areas. Science, 216, 185-187.
Mandelbrot, B. B. (1983). The fractal geometry of nature. San Francisco: W. H. Freeman and Company.
Mill, J. (1869). Analysis of the phenomena of the human mind. London: Longman, Green, Reader and Dyer.
Redies, C. (2015). Combining universal beauty and cultural context in a unifying model of visual aesthetic experience.
Frontiers in Human Neuroscience, 9, 218.
van Essen, D. C. (1979). Visual areas of the mammalian cerebral cortex. Annual Review of Neuroscience, 2, 227-263.

CHAPTER II

1. Mandelbrot, B.B. Fractals: Form, Chance, and Dimension; Freeman: San Francisco, CA, USA, 1977.
2. Fournier, A.; Fussell, D.; Carpenter, L. Computer rendering of stochastic models. Commun. ACM 1982, 25, 371–384.
3. Saupe, D. Algorithms for random fractals. In The Science of Fractal Images; Peitgen, H., Saupe, D., Eds.; Springer-Verlag: New York, NY, USA, 1988; pp. 71–136.
4. Mandelbrot, B.B. The Fractal Geometry of Nature; Freeman: San Francisco, CA, USA, 1983.
5. Voss, R.F. Characterization and measurement of random fractals. Phys. Scripta 1986, 13, 27–32.
6. Fairbanks, M.S.; Taylor, R.P. Scaling analysis of spatial and temporal patterns: From the human eye to the foraging albatross. In Non-Linear Dynamical Analysis for the Behavioral Sciences Using Real Data; Taylor & Francis Group: Boca Raton, FL, USA, 2011.
7. Avnir, D.; Biham, O.; Lidar, D.; Malcai, O. Is the geometry of nature fractal? Science 1998, 279, 39–40.
8. Mandelbrot, B.B. Is nature fractal? Science 1998, 279, 738.
9. Jones-Smith, K.; Mathur, H. Fractal analysis: Revisiting Pollock’s drip paintings. Nature 2006, 444, E9–E10.
10. Taylor, R.P.; Micolich, A.P.; Jonas, D. Fractal analysis: Revisiting Pollock’s drip paintings (Reply). Nature 2006, 444, E10–E11.
11. Markovic, D.; Gros, C. Power laws and self-organized criticality in theory and nature. Phys. Rep. 2014, 536, 41–74.
12. Burton, G.J.; Moorehead, I.R. Color and spatial structure in natural scenes. Appl. Opt. 1987, 26, 157–170.
13. Field, D.J.
Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am. A 1987, 4, 2379–2394.
14. Knill, D.C.; Field, D.; Kersten, D. Human discrimination of fractal images. J. Opt. Soc. Am. A 1990, 7, 1113–1123.
15. van Hateren, J.H. Theoretical predictions of spatiotemporal receptive fields of fly LMCs, and experimental validation. J. Comp. Physiol. A 1992, 171, 157–170.
16. Tolhurst, D.J.; Tadmor, Y.; Chao, T. Amplitude spectra of natural images. Ophthalmic Physiol. Opt. 1992, 12, 229–232.
17. Field, D.J. Scale-invariance and self-similar wavelet transforms: An analysis of natural scenes and mammalian visual systems. In Wavelets, Fractals, and Fourier Transforms; Farge, M., Hunt, J.C.R., Vassilicos, J.C., Eds.; Clarendon Press: Oxford, UK, 1993; pp. 151–193.
18. Ruderman, D.L.; Bialek, W. Statistics of natural images: Scaling in the woods. Phys. Rev. Lett. 1994, 73, 814–817.
19. Ruderman, D.L. Origins of scaling in natural images. Vis. Res. 1996, 37, 3385–3398.
20. van der Schaaf, A.; van Hateren, J.H. Modeling the power spectra of natural images: Statistics and information. Vis. Res. 1996, 36, 2759–2770.
21. Graham, D.J.; Field, D.J. Statistical regularities of art images and natural scenes: Spectra, sparseness, and nonlinearities. Spat. Vis. 2007, 21, 149–164.
22. Hagerhall, C.M.; Purcell, T.; Taylor, R. Fractal dimension of landscape silhouette outlines as a predictor of landscape preference. J. Environ. Psychol. 2004, 24, 247–255.
23. Spehar, B.; Taylor, R.P. Fractals in art and nature: Why do we like them?
In Proceedings of the SPIE 8651, Human Vision and Electronic Imaging XVIII, 865118, Burlingame, CA, USA, 3 February 2013.
24. Spehar, B.; Wong, S.; van de Klundert, S.; Lui, J.; Clifford, C.W.G.; Taylor, R.P. Beauty and the beholder: The role of visual sensitivity in visual preference. Front. Hum. Neurosci. 2015, 9, 514.
25. Bies, A.J.; Blanc-Goldhammer, D.R.; Boydston, C.R.; Taylor, R.P.; Sereno, M.E. Aesthetic responses to exact fractals driven by physical complexity. Front. Hum. Neurosci. 2016, 10, 210.
26. Street, N.; Forsythe, A.M.; Reilly, R.; Taylor, R.; Helmy, M.S. A complex story: Universal preference vs. individual differences shaping aesthetic response to fractals patterns. Front. Hum. Neurosci. 2016, 10, 213.
27. Spehar, B.; Walker, N.; Taylor, R.P. Taxonomy of individual variations in aesthetic responses to fractal patterns. Front. Hum. Neurosci. 2016, 10, 350.
28. Juliani, A.W.; Bies, A.J.; Boydston, C.R.; Taylor, R.P.; Sereno, M.E. Navigation performance in virtual environments varies with the fractal dimension of the landscape. J. Environ. Psychol. 2016, 47, 155–165.
29. Bies, A.J.; Kikumoto, A.; Boydston, C.R.; Greenfield, A.; Chauvin, K.A.; Taylor, R.P.; Sereno, M.E. Percepts from noise patterns: The role of fractal dimension in object pareidolia. In Vision Sciences Society Meeting Planner; Vision Sciences Society: St. Pete Beach, FL, USA, 2016.
30. Field, D.; Vilankar, K. Finding a face on Mars: A study on the priors for illusory objects. In Vision Sciences Society Meeting Planner; Vision Sciences Society: St. Pete Beach, FL, USA, 2016.
31. Hagerhall, C.M.; Laike, T.; Kuller, M.; Marcheschi, E.; Boydston, C.; Taylor, R.P. Human physiological benefits of viewing nature: EEG responses to exact and statistical fractal patterns.
Nonlinear Dyn. Psychol. Life Sci. 2015, 19, 1–12.
32. Isherwood, Z.J.; Schira, M.M.; Spehar, B. The BOLD and the Beautiful: Neural responses to natural scene statistics in early visual cortex. i-Perception 2014, 5, 345.
33. Bies, A.J.; Wekselblatt, J.; Boydston, C.R.; Taylor, R.P.; Sereno, M.E. The effects of visual scene complexity on human visual cortex. In Society for Neuroscience, Proceedings of the 2015 Neuroscience Meeting Planner, Chicago, IL, USA, 21 October 2015.
34. Sprott, J.C. Automatic generation of strange attractors. Comput. Graph. 1993, 17, 325–332.
35. Aks, D.J.; Sprott, J.C. Quantifying aesthetic preference for chaotic patterns. Empir. Stud. Arts 1996, 14, 1–16.
36. Spehar, B.; Clifford, C.W.; Newell, B.R.; Taylor, R.P. Universal aesthetic of fractals. Comput. Graph. 2003, 27, 813–820.
37. Taylor, R.P.; Spehar, B.; Van Donkelaar, P.; Hagerhall, C. Perceptual and physiological responses to Jackson Pollock’s fractals. Front. Hum. Neurosci. 2011, 5, 60.
38. Mureika, J.R.; Dyer, C.C.; Cupchik, G.C. Multifractal structure in nonrepresentational art. Phys. Rev. E 2005, 72, 046101.
39. Forsythe, A.; Nadal, M.; Sheehy, N.; Cela-Conde, C.J.; Sawey, M. Predicting beauty: Fractal dimension and visual complexity in art. Br. J. Psychol. 2011, 102, 49–70.
40. Hagerhall, C.M.; Laike, T.; Taylor, R.P.; Kuller, M.; Kuller, R.; Martin, T.P. Investigations of human EEG response to viewing fractal patterns. Perception 2008, 37, 1488–1494.
41. Graham, D.J.; Redies, C. Statistical regularities in art: Relations with visual coding and perception. Vis. Res. 2010, 50, 1503–1509.
42. Koch, M.; Denzler, J.; Redies, C.
1/f² characteristics and isotropy in the Fourier power spectra of visual art, cartoons, comics, mangas, and different categories of photographs. PLoS ONE 2010, 5, e12268.
43. Melmer, T.; Amirshahi, S.A.; Koch, M.; Denzler, J.; Redies, C. From regular text to artistic writing and artworks: Fourier statistics of images with low and high aesthetic appeal. Front. Hum. Neurosci. 2013, 7, 106.
44. Dyakova, O.; Lee, Y.; Longden, K.D.; Kiselev, V.G.; Nordstrom, K. A higher order visual neuron tuned to the spatial amplitude spectra of natural scenes. Nat. Commun. 2015, 6, 8522.
45. Menzel, C.; Hayn-Leichsenring, G.U.; Langner, O.; Wiese, H.; Redies, C. Fourier power spectrum characteristics of face photographs: Attractiveness perception depends on low-level image properties. PLoS ONE 2015, 10, e0122801.
46. Braun, J.; Amirshahi, S.A.; Denzler, J.; Redies, C. Statistical image properties of print advertisements, visual artworks, and images of architecture. Front. Psychol. 2013, 4, 808.
47. Cutting, J.E.; Garvin, J.J. Fractal curves and complexity. Percept. Psychophys. 1987, 42, 365–370.
48. Zahn, C.T.; Roskies, R.Z. Fourier descriptors for plane closed curves. IEEE Trans. Comput. 1972, 3, 269–281.
49. Taylor, R.P. Reduction of physiological stress using fractal art and architecture. Leonardo 2006, 39, 245–251.
50. Derrington, A.M.; Allen, H.A.; Delicato, L.S. Visual mechanisms of motion analysis and motion perception. Ann. Rev. Psychol. 2004, 55, 181–205.
51. Silies, M.; Gohl, D.M.; Clandinin, T.R. Motion-detecting circuits in flies: Coming into view. Ann. Rev. Neurosci. 2014, 37, 307–327.
52. Benton, C.P.; O’Brien, J.M.; Curran, W.
Fractal rotation isolates mechanisms for form-dependent motion in human vision. Biol. Lett. 2007, 3, 306–308.
53. Lagacé-Nadon, S.; Allard, R.; Faubert, J. Exploring the spatiotemporal properties of fractal rotation perception. J. Vis. 2009, 9.
54. Rainville, S.J.; Kingdom, F.A.A. Spatial scale contribution to the detection of symmetry in fractal noise. JOSA A 1999, 16, 2112–2123.

CHAPTER III

Ahrens, D. C. (2009). Meteorology today: An introduction to weather, climate, and the environment (9th ed.). Boston: Cengage Learning.
Atick, J. J., & Redlich, A. N. (1990). Towards a theory of early visual processing. Neural Computation, 2, 308–320.
Barlow, H. B. (1959). Sensory mechanisms, the reduction of redundancy, and intelligence. In Proceedings of the National Physical Laboratory Symposium on the Mechanisation of Thought Processes (pp. 537–559). London: H.M. Stationery Office.
Baumgartner, E., & Gegenfurtner, K. R. (2016). Image statistics and the representation of material properties in the visual cortex. Frontiers in Psychology, 7, 1185.
Bialek, W., Ruderman, D. L., & Zee, A. (1991). Optimal sampling of natural images: A design principle for the visual system? In Lippmann, P. L., Moody, J. E., & Touretzky, D. S. (Eds.), Advances in Neural Information Processing Systems 3 (pp. 363–369). San Mateo: Morgan Kaufmann.
Bies, A. J., Boydston, C. R., Taylor, R. P., & Sereno, M. E. (2016). Relationship between fractal dimension and spectral scaling decay rate in computer-generated fractals. Symmetry, 8(7), 66.
Bies, A. J., Taylor, R. P., & Sereno, M. E. (2015). An edgy image statistic: Semi-automated edge extraction and fractal box-counting algorithm allows for quantification of edge dimension in natural scenes. Vision Sciences Society Annual Meeting. Online.
Field, D. J. (1987).
Relations between the statistics of natural images and the response properties of cortical cells. Journal of the Optical Society of America A, 4, 2379–2394.
Giesel, M., & Zaidi, Q. (2013). Frequency-based heuristics for material perception. Journal of Vision, 13, 7.
Graham, D. J., & Field, D. J. (2007). Statistical regularities of art images and natural scenes: Spectra, sparseness, and nonlinearities. Spatial Vision, 21, 149–164.
Hansen, T., & Gegenfurtner, K. R. (2009). Independence of color and luminance edges in natural scenes. Visual Neuroscience, 26(1), 35-49.
Knill, D. C., Field, D., & Kersten, D. (1990). Human discrimination of fractal images. Journal of the Optical Society of America A, 7, 1113–1123.
Lovejoy, S. (1982). Area-perimeter relation for rain and cloud areas. Science, 216, 185-187.
Mandelbrot, B. B. (1977). Fractals: Form, chance and dimension. San Francisco: W. H. Freeman and Company.
Mandelbrot, B. B. (1983). The fractal geometry of nature. San Francisco: W. H. Freeman and Company.
Oliva, A., & Torralba, A. (2001). Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision, 42(3), 145-175.
Parraga, C. A., Brelstaff, G., Troscianko, T., & Moorehead, I. R. (1998). Color and luminance information in natural scenes. Journal of the Optical Society of America A, 15(3), 563-569.
Pruppacher, H. R., & Klett, J. D. (1996). Microphysics of clouds and precipitation (2nd ed.). Dordrecht, Netherlands: Kluwer Academic Publishers.
Ruderman, D. L., & Bialek, W. (1994). Statistics of natural images: Scaling in the woods. Physical Review Letters, 73, 814–817.
Spehar, B., Wong, S., van de Klundert, S., Lui, J., Clifford, C. W. G., & Taylor, R. P. (2015). Beauty and the beholder: The role of visual sensitivity in visual preference. Frontiers in Human Neuroscience, 9, 514.
Srinivasan, M. V., Laughlin, S. B., & Dubs, A. (1982).
Spatial processing of visual information in the movement-detecting pathway of the fly. Journal of Comparative Physiology, 140, 1-23.
Tolhurst, D. J., Tadmor, Y., & Chao, T. (1992). Amplitude spectra of natural images. Ophthalmic and Physiological Optics, 12, 229–232.
Torralba, A., & Oliva, A. (2003). Statistics of natural image categories. Network: Computation in Neural Systems, 14, 391-412.
van der Schaaf, A., & van Hateren, J. H. (1996). Modeling the power spectra of natural images: Statistics and information. Vision Research, 36, 2759–2770.
van Hateren, J. H. (1992a). Real and optimal neural images in early vision. Nature, 360, 68–69.
van Hateren, J. H. (1992b). Theoretical predictions of spatiotemporal receptive fields of fly LMCs, and experimental validation. Journal of Comparative Physiology A, 171, 157–170.
Voss, R. F. (1986). Characterization and measurement of random fractals. Physica Scripta, 13, 27–32.

CHAPTER IV

Attneave, F. (1957). Physical determinants of the judged complexity of shapes. Journal of Experimental Psychology, 53, 221-227.
Axelsson, O. (2007). Individual differences in preferences to photographs. Psychology of Aesthetics, Creativity, and the Arts, 1(2), 61-72.
Berlyne, D. E. (1971). Aesthetics and psychobiology. New York: Appleton-Century-Crofts.
Berman, M. G., Jonides, J., & Kaplan, S. (2008). The cognitive benefits of interacting with nature. Psychological Science, 19(12), 1207-1212.
Bies, A. J., Blanc-Goldhammer, D. R., Boydston, C. R., Taylor, R. P., & Sereno, M. E. (2016). Aesthetic responses to exact fractals driven by physical complexity. Frontiers in Human Neuroscience, 10, 210.
Blijlevens, J., Thurgood, C., Hekkert, P., Leder, H., & Whitfield, T. W. A. (2014). The development of a reliable and valid scale to measure aesthetic pleasure in design. Proceedings of the 23rd Biennial Congress of the International Association of Empirical Aesthetics, 22-24 August, New York, USA.
Bollen, K. A. (1989).
Structural equations with latent variables. New York: Wiley.
Burton, G. J., & Moorehead, I. R. (1987). Color and spatial structure in natural scenes. Applied Optics, 26, 157–170.
Chamorro-Premuzic, T., Burke, C., Hsu, A., & Swami, V. (2010). Personality predictors of artistic preferences as a function of the emotional valence and perceived complexity of paintings. Psychology of Aesthetics, Creativity, and the Arts, 4(4), 196-204.
Chamorro-Posada, P. (2016). A simple method for estimating the fractal dimension from digital images: The compression dimension. Chaos, Solitons, & Fractals, 91, 562-572.
Cupchik, G. C., & Berlyne, D. E. (1979). The perception of collative properties in visual stimuli. Scandinavian Journal of Psychology, 20, 93–104.
Cutting, J. E., & Garvin, J. J. (1987). Fractal curves and complexity. Perception & Psychophysics, 42, 365-370.
Di Dio, C., Macaluso, E., & Rizzolatti, G. (2007). The golden beauty: Brain responses to classical and renaissance sculptures. PLoS ONE, 2(11), e1201.
Fairbanks, M. S., & Taylor, R. P. (2011). Scaling analysis of spatial and temporal patterns: From the human eye to the foraging albatross. In Non-Linear Dynamical Analysis for the Behavioral Sciences Using Real Data. Boca Raton, FL: Taylor & Francis Group.
Field, D. J. (1987). Relations between the statistics of natural images and the response properties of cortical cells. Journal of the Optical Society of America A, 4, 2379–2394.
Field, D. J. (1993). Scale-invariance and self-similar wavelet transforms: An analysis of natural scenes and mammalian visual systems. In M. Farge, J. C. R. Hunt, & J. C. Vassilicos (Eds.), Wavelets, Fractals, and Fourier Transforms (pp. 151–193). Oxford: Clarendon Press.
Forsythe, A., Nadal, M., Sheehy, N., Cela-Conde, C. J., & Sawey, M. (2011). Predicting beauty: Fractal dimension and visual complexity in art. British Journal of Psychology, 102(1), 49-70.
Giesel, M., & Zaidi, Q. (2013).
Frequency-based heuristics for material perception. Journal of Vision, 13, 7.
Graham, D. J., & Field, D. J. (2007). Statistical regularities of art images and natural scenes: Spectra, sparseness, and nonlinearities. Spatial Vision, 21, 149–164.
Güçlütürk, Y., Jacobs, R. H. A. H., & van Lier, R. (2016). Liking versus complexity: Decomposing the inverted U-curve. Frontiers in Human Neuroscience, 10, 112.
Hagerhall, C., Purcell, T., & Taylor, R. P. (2004). Fractal dimension of landscape silhouette outlines as a predictor of landscape preference. Journal of Environmental Psychology, 24, 247-255.
Isherwood, Z. J., Schira, M. M., & Spehar, B. (2017). The tuning of human visual cortex to variations in the 1/fα amplitude spectra and fractal properties of synthetic noise images. NeuroImage, 146, 642-657.
Kaplan, S. (1995). The restorative benefits of nature: Toward an integrative framework. Journal of Environmental Psychology, 15, 169-182.
Keniger, L. E., Gaston, K. J., Irvine, K. N., & Fuller, R. A. (2013). What are the benefits of interacting with nature? International Journal of Environmental Research and Public Health, 10, 913-935.
Kline, R. B. (2011). Principles and practice of structural equation modeling (3rd ed.). New York: The Guilford Press.
Knill, D. C., Field, D., & Kersten, D. (1990). Human discrimination of fractal images. Journal of the Optical Society of America A, 7, 1113–1123.
Leder, H., Belke, B., Oeberst, A., & Augustin, D. (2004). A model of aesthetic appreciation and aesthetic judgments. British Journal of Psychology, 95, 489–508.
Leder, H., & Nadal, M. (2014). Ten years of a model of aesthetic appreciation and aesthetic judgments: The aesthetic episode – Developments and challenges in empirical aesthetics. British Journal of Psychology, 105, 443-464.
Little, T. D., Cunningham, W. A., & Shahar, G. (2002). To parcel or not to parcel: Exploring the question, weighing the merits. Structural Equation Modeling, 9(2), 151-173.
Locher, P., Krupinski, E.
A., Mello-Thoms, C., & Nodine, C. F. (2007). Visual interest in pictorial art during an aesthetic experience. Spatial Vision, 21, 55–77.
Locher, P., & Nagy, Y. (1996). Vision spontaneously establishes the percept of pictorial balance. Empirical Studies of the Arts, 14, 17–31.
Lovejoy, S. (1982). Area-perimeter relation for rain and cloud areas. Science, 216, 185-187.
Mandelbrot, B. B. (1983). The fractal geometry of nature. San Francisco: W. H. Freeman and Company.
Marković, S. (2012). Components of aesthetic experience: Aesthetic fascination, aesthetic appraisal, and aesthetic emotion. i-Perception, 3(1), 1-17.
McManus, I. C. (1980). The aesthetics of simple figures. British Journal of Psychology, 71(4), 505-524.
McManus, I. C., Cook, R., & Hunt, A. (2010). Beyond the Golden Section and normative aesthetics: Why do individuals differ so much in their aesthetic preference for rectangles? Psychology of Aesthetics, Creativity, and the Arts, 4(2), 113-126.
Myszkowski, N., & Zenasni, F. (2016). Individual differences in aesthetic ability: The case for an aesthetic quotient. Frontiers in Psychology, 7, 750.
Norušis, M. (2011). “Cluster analysis,” in IBM SPSS Statistics 19 Statistical Procedures Companion, 1st Edn. Boston, MA: Addison Wesley.
Redies, C. (2015). Combining universal beauty and cultural context in a unifying model of visual aesthetic experience. Frontiers in Human Neuroscience, 9, 218.
Reinartz, W., Haenlein, M., & Henseler, J. (2009). An empirical comparison of the efficacy of covariance-based and variance-based SEM. International Journal of Research in Marketing, 26, 332-344.
Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1-36.
Ruderman, D. L. (1996). Origins of scaling in natural images. Vision Research, 37, 3385–3398.
Ruderman, D. L., & Bialek, W. (1994). Statistics of natural images: Scaling in the woods. Physical Review Letters, 73, 814–817.
Silva, P. (2009).
Looking past pleasure: Anger, confusion, disgust, pride, surprise, and other unusual aesthetic emotions. Psychology of Aesthetics, Creativity, and the Arts, 3(1), 48-51.
Silva, P. J., Fayn, K., Nusman, E. C., & Beaty, R. E. (2015). Openness to experience and awe in response to nature and music: Personality and profound aesthetic experience. Psychology of Aesthetics, Creativity, and the Arts, 9(4), 376-384.
Smith, L. F., Bousquet, S. G., Chang, G., & Smith, J. K. (2006). Effect of time and information on perception of art. Empirical Studies of the Arts, 24, 229–242.
Spehar, B., Walker, N., & Taylor, R. P. (2016). Taxonomy of individual variations in aesthetic responses to fractal patterns. Frontiers in Human Neuroscience, 10, 350.
Taylor, R. P. (2006). Reduction of physiological stress using fractal art and architecture. Leonardo, 39, 245-251.
Tolhurst, D. J., Tadmor, Y., & Chao, T. (1992). Amplitude spectra of natural images. Ophthalmic and Physiological Optics, 12, 229–232.
Ulrich, R. S. (1984). View through a window may influence recovery from surgery. Science, 224, 420-421.
van den Berg, A., Joye, Y., & Koole, S. L. (2016). Why viewing nature is more fascinating and restorative than viewing buildings: A closer look at perceived complexity. Urban Forestry & Urban Greening, 20, 397-401.
van der Schaaf, A., & van Hateren, J. H. (1996). Modeling the power spectra of natural images: Statistics and information. Vision Research, 36, 2759–2770.
van Hateren, J. H. (1992). Theoretical predictions of spatiotemporal receptive fields of fly LMCs, and experimental validation. Journal of Comparative Physiology A, 171, 157–170.
Whitfield, T. W. A., & de Destefani, L. R. (2011). Mundane aesthetics. Psychology of Aesthetics, Creativity, and the Arts, 5(3), 291-299.

CHAPTER V

Bao, Y., Yang, T., Lin, X., Fang, Y., Wang, Y., Pöppel, E., & Lei, Q. (2016). Aesthetic preferences for eastern and western traditional visual art: Identity matters. Frontiers in Psychology, 7, 1596.
Cárdenas, R. A., & Harris, L. J. (2006). Symmetrical decorations enhance the attractiveness of faces and abstract designs. Evolution and Human Behavior, 27, 1-18.
Cavalcante, A., Mansouri, A., Kacha, L., Barros, A. K., Takeuchi, Y., Matsumoto, N., & Ohnishi, N. (2014). Measuring streetscape complexity based on the statistics of local contrast and spatial frequency. PLoS ONE, 9(2), e87097.
Della-Bosca, D., Patterson, D., & Roberts, S. (2017). An analysis of game environments as measured by fractal complexity. Proceedings of the Australasian Computer Science Week Multiconference, 63.
Huang, M., Bridge, H., Kemp, M. J., & Parker, A. J. (2011). Human cortical activity evoked by the assignment of authenticity when viewing works of art. Frontiers in Human Neuroscience, 5, 134.
Juliani, A. W., Bies, A. J., Boydston, C. R., Taylor, R. P., & Sereno, M. E. (2016). Navigation performance in virtual environments varies with fractal dimension of landscape. Journal of Environmental Psychology, 47, 155-165.
Leder, H., Belke, B., Oeberst, A., & Augustin, D. (2004). A model of aesthetic appreciation and aesthetic judgments. British Journal of Psychology, 95, 489–508.
Liu, J., Lughofer, E., & Zeng, X. (2015). Aesthetic perception of visual textures: A holistic exploration using texture analysis, psychological experiment, and perception modeling. Frontiers in Computational Neuroscience, 9, 143.
Pihko, E., Virtanen, A., Saarinen, V., Pannasch, S., Hirvenkari, L., Tossavainen, T., … Hari, R. (2011). Experiencing art: The influence of expertise and painting abstraction level. Frontiers in Human Neuroscience, 5, 94.
Redies, C. (2015). Combining universal beauty and cultural context in a unifying model of visual aesthetic experience. Frontiers in Human Neuroscience, 9, 218.
Roberts, R. H. A. H., Haak, K. V., Thumfart, S., Renken, R., Henson, B., & Cornelissen, F. W. (2016). Aesthetics by numbers: Links between perceived texture qualities and computed visual texture properties.
Frontiers in Human Neuroscience, 10, 343.
Silva, P. J., Fayn, K., Nusman, E. C., & Beaty, R. E. (2015). Openness to experience and awe in response to nature and music: Personality and profound aesthetic experience. Psychology of Aesthetics, Creativity, and the Arts, 9(4), 376-384.