CHANNELIZING THE STREAM OF CONSCIOUSNESS: A PRAGMATIST APPROACH TO DATA TECHNOLOGY, THE SELF, AND THE INTERPERSONAL

by

SOPHIE ANN RHODES

A THESIS

Presented to the Department of Philosophy and the Robert D. Clark Honors College in partial fulfillment of the requirements for the degree of Bachelor of Science

November 2025

An Abstract of the Thesis of

Sophie Ann Rhodes for the degree of Bachelor of Science in the Department of Philosophy to be taken November 2025

Title: Channelizing the Stream of Consciousness: A Pragmatist Approach to Data Technology, the Self, and the Interpersonal

Approved: Colin Koopman, Ph.D.
Primary Thesis Advisor

In this thesis, I develop a pragmatist philosophical framework for understanding how data technology mediates self-conception, habit formation, and interpersonal relationships. Drawing primarily on William James’s philosophy of consciousness, experience, and habit, I argue that data technology mediates what James calls the “stream of consciousness” through algorithmic channels that shape patterns of attention and behavior. Through this framework, I understand digital experiences as a genuine part of “real life,” as we experience them in relation to our thoughts, attention, habits, and identity just as we do other objects and contexts. My argument borrows the concept of “channelization” from river engineering in order to understand how our interactions with data technologies structure consciousness. This approach synthesizes James’s process-based understanding of the self with contemporary theories of affordances, allowing for a context-specific method of analysis that avoids technological determinism and abstract normative claims, while prioritizing pragmatist philosophical principles of meliorism, interdependence, fallibilism, and observing practical consequences.
Through this lens, I employ a four-step analytical process through which I identify emerging consequences of a particular data technology (particularly Large Language Models), examine the affordances of the technology, analyze the effect of these affordances on the consciousness and self, and evaluate the sacrifices made amid competing priorities. My analysis shows how LLMs mediate selfhood at the level of individual consciousness. I demonstrate how algorithmic sycophancy cultivates unhealthy patterns of external validation- and comfort-seeking, how cognitive offloading weakens mental habits essential to identity formation, and how the displacement of human conversation undermines the interpersonal contexts required for both empathy and introspection. Using the framework of channelization, I determine that these technologies prioritize efficiency, certainty, and engagement at the cost of cognitive struggle and development, tolerance of uncertainty, and the capacity for genuine self-transformation. This thesis contributes to the philosophy of technology by providing a naturalistic account of algorithmic mediation grounded in Jamesian pragmatism that complements existing social, political, and ethical analyses. Through my focus on the individual and experience, I demonstrate that the large-scale cultural impacts of data technology can only be understood by addressing their effects on individual consciousness and habit. I conclude that addressing the harms of algorithmic mediation requires an active turn towards uncertainty, struggle, and face-to-face conversation, and a de-prioritization of efficiency, in order to avoid the smoothing-over of the experiences that enable self-transformation and connection.

Acknowledgements

I am deeply grateful for the many people who have made this thesis possible. My greatest thanks go to my primary thesis advisor, Colin Koopman, for his expertise and encouragement.
I have been fortunate to have him as a mentor throughout this process, and I am especially grateful that he had the will to believe I could write this thesis. Thank you to Christopher Michlig for serving as my CHC representative and second reader, for always pushing me to think creatively and to be true to my voice, and for encouraging me to take my ideas seriously. I would also like to thank Erin McKenna, who made the philosophy program at UO feel like home and whose American Philosophy course made this thesis possible, and Professor Emeritus Mark Johnson for introducing me to the philosophy of William James. The Department of Philosophy at the University of Oregon is truly special, and I am lucky to be a part of it. Thank you to my friends for supporting and encouraging me through this process and always. Thank you to my brother Sam, my sister Stella, and my parents for giving me everything I have and for believing in me.

This is a story about a lonely, lonely man. He lived in a lonely house. On a lonely street. In a lonely part of the world. But, of course, he had the Internet. The Internet, as you know, was his friend. You could say, his best friend. They would play with each other every day. Watching videos of humans doing all sorts of things. Having sex with each other. Informing people on what was wrong with them and their life. Playing games with young children at home with their parents. One day, the man, whose name was @snowflakesmasher86, turned to his friend, the Internet, and he said, “Internet, do you love me?” The Internet looked at him and said, “Yes. I love you very, very, very, very, very, very much. I am your best friend. In fact, I love you so much that I never ever want us to be apart ever again ever.” “I would like that,” said the man. And so they embarked on a life together. Wherever the man went, he took his friend. The man and the Internet went everywhere together, except, of course, the places where the Internet could not go.
They went to the countryside. They went to birthday parties of the children of some of his less important friends. Different countries. Even the moon. When the man got sad, his friend had so many clever ways to make him feel better. He would get him cooked animals and show him the people having sex again, and he would always, always agree with him. This one was the man's favourite and it made him very happy. The man trusted his friend so much. “I feel like I could tell you anything,” he said, on a particularly lonely day. “You can. You can tell me anything. I'm your best friend. Anything you say to me will stay strictly between you and the Internet.” And so he did. The man shared everything with his friend. All of his fears and desires, all of his loves, past and present. All of the places he had been and was going, and pictures of his penis. He would tell himself, “Man does not live by bread alone.” And then he died. In his lonely house. On the lonely street. In that lonely part of the world. You can go on his Facebook.

— The 1975, “The Man Who Married A Robot / Love Theme”

Table of Contents

Introduction
I: Literature Review
  Theories of the Self and Identity in the Philosophy of Technology
  Pragmatist Conceptions of Self and Self-Transformation
  Review of Scholarship on Data Technology and Self-Conception
II: Algorithmic Mediation of Selfhood
  Understanding the Mediation of the Self Through Affordance Theory
  Channelization: A Framework for Understanding How Data Technology Mediates the Self
III: Evaluating Algorithmic Mediation of the Self in Context
  Algorithmic Affirmation, Sycophancy, and the Uncriticized Self
  Data Technology as Proxy for Cognition
IV: Conclusion
Bibliography

Introduction

How do data technologies mediate self-conception? How does interaction with data technology encourage patterns of attention and behavior, and how does this process of deepening these patterns of behavior impact self-conception?
What are the consequences of data technology’s technical capacities with respect to their impacts on self-conception? Using a pragmatist philosophical approach sourced primarily from William James (1842-1910), this thesis develops an understanding of experience, activity, habit, and self in relation to data technology. I present a philosophical analysis of the digital experience and digital mediation of self-conception.

This responds to a question about whether and to what extent the experiences and actions that occur through digital mediums are “real.” I argue that data technology does not exist as a separate realm, independent of the “real world,” but has a functional role in the development of self-consciousness and habit whereby our continual interaction with the virtual inundates the capacity for experience and diminishes the capacity for action. Through this process, data technology creates “channels,” mediating what James called “the stream of thought” based on algorithmic predictions and shaping habits and identity.

I specifically argue that data technology functions to mediate what James calls the “stream of consciousness” through algorithmic channels, creating patterns of attention and habit formation that diminish internally directed action while intensifying and flooding the capacity for experience. This results in the incorporation of algorithmically-mediated engagement (both action and experience) into the stream of thought, shaping habits into ones which align more closely with algorithmically curated experiences. This creates problems for self-development, identity formation, and the will as described by James. Beyond the individual, these channels shape interpersonal behaviors and relationships as data technology is increasingly used to assist or replace roles that otherwise would require social interaction.
I argue that the use of data technology in interpersonal contexts serves a purpose of efficiency, but the prioritization of time, effort, and accuracy comes at the cost of possibility and complexity.

Pragmatist philosophy emphasizes function, activity, conduct, practice, and experience.1 My methodology will incorporate James’s pragmatist concepts of self, self-conception, consciousness, experience, activity, and habit in order to understand the role of data technology in practice and function. James’s pragmatism further emphasizes naturalistic accounts of lived experience in terms of their observable consequences. The naturalistic commitments of pragmatism will help focus my research on that which is observable in the natural world rather than appealing to divinity or the supernatural.2 As such, a naturalistic inquiry will draw conclusions based upon empirical evidence that develops in observable processes. Furthermore, the pragmatist emphasis on community, interdependence, and consequences provides a lens through which to evaluate data technology that begins with the individual but necessarily extends to interpersonal and societal effects.

The philosophical framework of my research differs from other methods that might evaluate the relationship between data technology and the self through conducting ethnography, discourse analysis of specific content or platforms, quantitative analysis, or policy/ethical analysis. It also differs from deterministic frameworks that seek a fixed theory of technology, such as technological determinism, which views societal changes as following an “inevitable path” determined by technological development.3 Instead, I adopt pragmatist principles of fallibilism and pluralism. By incorporating the pragmatist focus on experience, my framework also differs from dualistic and normative approaches, which might conceptualize the digital and physical as ontologically distinct categories of experience and give a prescriptive theory of the relationship between self-conception and data technology, respectively.

While scholarship on selfhood and the internet has tended to evaluate the ethical, social, political, and sociological problems posed by the internet, social media, and AI, this thesis offers a pragmatist approach to how algorithms mediate habit. This is significant because the large-scale impact of cultural, social, and political changes resulting from algorithmic experiences can only be understood if it is first understood at the individual level of consciousness and self-conception, which is composed of various habits. James conceptualizes consciousness as flitting and perching from one thought to another. This suggests the idea of evaluating the influence of algorithms as a mediator of the direction of those thoughts. This approach allows me to present an internal account of our interactions with algorithms. In other words, the focus of this thesis is not about what occurs to our information and selves when they are extracted into data and processed through algorithms. Rather, I am focused on what occurs within the self when we engage with and consume content that reflects our data back to ourselves. This pragmatist philosophical lens allows for consequences to be evaluated based upon lived experience, change, pluralistic thinking, fallibility, and amelioration.

1 Catherine Legg and Christopher Hookway, “Pragmatism,” in The Stanford Encyclopedia of Philosophy, Winter 2024 ed., ed. Edward N. Zalta and Uri Nodelman, https://plato.stanford.edu/archives/win2024/entries/pragmatism/.
2 David Papineau, “Naturalism,” in The Stanford Encyclopedia of Philosophy, Fall 2023 ed., ed. Edward N. Zalta and Uri Nodelman, https://plato.stanford.edu/archives/fall2023/entries/naturalism/.
In contrast with other approaches that might present a rigid moral evaluation of data technology or seek an absolute method for categorizing the types of interactions that occur between the self and data technology, my approach seeks an understanding of data technology and selfhood which is applicable to our own internal understandings of our lives and can be developed based upon observable experiences. Rather than identifying problems based on the extent to which the relationship between data technology and selfhood adheres to a priori criteria, a pragmatist methodology allows me to explore what problems people are experiencing and how they might be understood and directly addressed.

3 Lee Humphreys, “Technological Determinism,” in Encyclopedia of Science and Technology Communication, ed. Susanna Hornig Priest (Thousand Oaks, CA: SAGE Publications, 2010). The entry defines technological determinism as a theory which “suggests that technology is the primary external force causing human activity… An important feature of technological determinism is the inevitability of action and reaction over the course of events. There is an inevitable path that technology paves that cannot be deviated from once things are in motion. The impact technology has on society is beyond human will. Thus, technological determinism positions technology as a blind force that dominates civilizations.”

I: Literature Review

Theories of the Self and Identity in the Philosophy of Technology

The rapid advancement of digital technology has fundamentally transformed the contemporary understanding of selfhood and identity, creating new contexts for self-conception that challenge traditional philosophical frameworks. Existing scholarship on online selfhood and philosophical approaches to digital identity have established a theoretical foundation for investigating how data technology functions as a mediator in the process of self-conception.
This section reviews a number of recent contributions to this body of scholarship that inform the analysis I develop in subsequent sections below.

Contemporary scholarship on digital identity reveals complex relationships between data collection, algorithmic processing, and identity formation, which have shifted as data technology has continued to develop and play a larger role in our lives. The question of problematic internet use and addiction has been reframed by Robert LaRose, Junghyun Kim, and Wei Peng, who reconceptualize “internet addiction” as habitual media consumption.4 Their research reveals that social media does not present unique risks for true addiction but that users form both beneficial and problematic habits with social media use, similar to other activities. They identify “deficient self-reaction” as the strongest predictor of poor outcomes from internet use, indicating “the lack of intentionality and lack of controllability of media habits.”5 This framing shifts the focus from pathological models toward an understanding of digital engagement through the lens of habit formation, providing a foundation for examining how data technology mediates the development of patterns of thought and behavior.

4 Robert LaRose, Junghyun Kim, and Wei Peng, “Social Networking: Addictive, Compulsive, Problematic, or Just Another Media Habit?” in A Networked Self: Identity, Community, and Culture on Social Network Sites, ed. Zizi Papacharissi (New York: Routledge, 2011), 59-81.
5 LaRose, Kim, and Peng, “Social Networking,” 73.
Natasha Dow Schüll writes about the habit-tracking movement in “Self in the Loop: Bits, Patterns, and Pathways in the Quantified Self,” demonstrating how data collection becomes a means of self-transformation rather than mere surveillance.6 Through an analysis of ethnographic research on self-tracking, Schüll finds that self-tracking and monitoring using one’s own data serve not only as means of transcribing the self to create an “algorithmic self” or “data double” but also as a way of better understanding the self and developing one’s identity. Schüll argues that the version of the self represented through data does not exist as a separate entity but becomes integrated into personal identity as a means of transformation. She conceptualizes data tracking as “liberation from both uncertainty and rigid certainty,” noting that “self-tracking…was a means of liberation not only from the impasses of uncertainty but those of certainty as well.”7 This integration of the “algorithmic self” into personal identity rather than its separation suggests that digital mediation creates real transformations in self-conception. The quantified facets of the self are not extracted into data to exist in a separate form but are incorporated into the self, such that the data is not simply an externalized representation of the self but forms a narrative that is internalized and adopted into personal identity.

Schüll’s focus on the use of data technology to track one’s habits and behaviors highlights the centrality of optimization and efficiency to the relationship between the self and data technology. Technology in general serves a purpose of making tasks less effortful and more productive – for example, a dishwasher allows more dishes to be cleaned with less labor. In adopting data technology, individuals therefore often view the
self through the lens of productivity and optimization. In creating an algorithmic self, individuals must select and distill the qualities and habits they wish to optimize. As Schüll observes, the use of data tracking creates possibilities for personal development and an escape from certainty, while at the same time relieving the discomfort of uncertainty. By reconfiguring the facets of one’s personality and identity through self-tracking, individuals are able to focus on who they are without the distracting complexity of uncertainty.

6 Natasha Dow Schüll, “Self in the Loop: Bits, Patterns, and Pathways in the Quantified Self,” in A Networked Self and Human Augmentics, Artificial Intelligence, Sentience, ed. Zizi Papacharissi (New York: Routledge, 2019), 25-38.
7 Schüll, “Self in the Loop,” 32.

Sherry Turkle’s extensive work on online identity provides two complementary perspectives on digital selfhood. Her early research in her 1995 book Life on the Screen explores online communities as spaces for identity experimentation and the construction of multiple selves.8 Turkle argues that the internet enables individuals to experiment with their identity in parallel lives, demonstrating that there is no singular self. Instead, identity is multiple, changing, and constructed, with experiences and identities that are formed online being just as real as those that take place offline. She characterizes the internet as a context for genuine exploration of identity, arguing that “our experiences there are serious play.
We belittle them at our risk.”9 However, Turkle’s later work in her 2011 book Alone Together presents a more integrated perspective rather than a plural conception of self, examining how digital communication reshapes selfhood into a “collaborative self” requiring constant external validation.10 This shift reveals the erosion of solitary self-reflection in favor of networked social interaction, where one’s feelings and identity become tied to and always accessible through digital communication with others. It reflects the progression of social platforms, on which data has a primarily social use rather than one of self-improvement or introspection. Turkle expands upon this social focus in her more recent 2015 book Reclaiming Conversation: The Power of Talk in a Digital Age, in which she argues, “This new mediated life has gotten us into trouble. Face-to-face conversation is the most human—and humanizing—thing we do.”11 Here Turkle takes a more concerned approach to our relationships with and through digital technology, arguing that our dependence on digital technology has allowed us to withdraw from conversation and has created a culture of avoidance that permeates our social, romantic, familial, and professional relationships, and our relationships with ourselves. The tension between these perspectives highlights the dual potential of digital technologies to both enable identity exploration and transform the contexts of social interaction.

8 Sherry Turkle, Life on the Screen: Identity in the Age of the Internet (New York: Simon & Schuster, 1995).
9 Turkle, Life on the Screen, 269.
10 Sherry Turkle, Alone Together: Why We Expect More from Technology and Less from Each Other (New York: Basic Books, 2011).

Philosophical approaches to digital identity offer frameworks for understanding the nature of human-technology relationships.
Shannon Vallor’s metaphor of AI as a “mathematical tool” that produces flattened representations of human experience provides insight into the limitations of algorithmic processes and establishes a framework for understanding AI that does not overstate its capabilities.12 She argues that understanding AI as a mirror rather than as a disembodied mind allows us to recognize that it provides no uniquely accurate insight into who we are, but that it treats us and our data as “a cluster of differently weighted variables that project a mathematical vector through a predefined possibility space, terminating in a prediction... But we retain the power to meet one another's gaze and to know one another as human.”13 This establishes human experience as distinct, locating AI as a technology we made rather than something that is itself almost-human. Said otherwise, Vallor’s idea is that AI is not an anthropomorphic thinking thing that interacts with us but a tool that we choose to engage with and utilize. Vallor emphasizes AI’s conservative nature, noting that “they are literally built to conserve the patterns of the past and extend them into our futures.”14 Vallor’s point is significant for an understanding that AI algorithms are predictive, not cognitive. This conservative tendency raises important questions about how interaction with algorithms might constrain rather than expand possibilities for self-development, mediating human mental activity when we choose to offload our cognition and introspection to a tool that does not think.

11 Sherry Turkle, Reclaiming Conversation: The Power of Talk in a Digital Age (New York: Penguin Press, 2015), 3.
12 Shannon Vallor, The AI Mirror: Reclaiming Our Humanity in a World of Machine Thinking (Oxford: Oxford University Press, 2023).
13 Vallor, The AI Mirror, 61.
Colin Koopman’s genealogical analysis reveals how quantitative psychology formed our contemporary understanding of personality as measurable data.15 His examination of the historical shift from “character” (which one either possessed or lacked) to “personality” (which everyone has in different, measurable forms) demonstrates how “an entire technological ensemble of measure eventually made traits, along with the personalities they composed, real.”16 This historical analysis provides essential context for understanding how data technology continues the process of making selfhood measurable and quantifiable. It also recognizes that data technology does not function outside of real life; it transforms our cultural norms and our individual conceptions of the self. Koopman argues that “we are our data as much as we are anything else,”17 suggesting that the relationship between data and identity is not merely representational but constitutive. Whether we conceptualize our digital selves as inseparable from our identity or as an external representation of ourselves, it is clear that our use of data technology has a transformative effect on identity. Even where data technology operates as a tool for external processing, it relies, as Vallor points out, on inherently conservative technology like machine-learning algorithms whose pattern predictions provide flattened representations of our own data.

14 Vallor, The AI Mirror, 57.
15 Colin Koopman, How We Became Our Data: A Genealogy of the Informational Person (Chicago: University of Chicago Press, 2019).
16 Koopman, How We Became Our Data, 71, 85.
17 Koopman, How We Became Our Data, ix.

Pragmatist Conceptions of Self and Self-Transformation

William James’s pragmatist philosophy provides a naturalistic framework for understanding consciousness, habit, and self-transformation that can illuminate the relationship between data technology and self-conception.
James’s foundational description of consciousness as a “stream” rather than discrete mental states offers a process-based understanding of mental life.18 He conceptualizes consciousness with five key characteristics: it is personal, constantly changing, continuous, concerned with objects external to itself, and selectively interested. This understanding emphasizes that “consciousness, then, does not appear to itself chopped up in bits” but rather “flows” like “a ‘river’ or a ‘stream.’”19 James’s analysis reveals that experience is fundamentally relational, with meaning emerging from the connections and context within the continuous stream of thought.

James’s radical empiricism further develops this process-based understanding by eliminating traditional mind-body dualism through the concept of “pure experience.”20 He argues that all reality consists of “pure experience” that functions as either subjective or objective depending on relational context. This framework suggests that experience is “made” through conjunctive relations that “unroll themselves in time,”21 providing a foundation for understanding digital mediation. James’s conception of experience and framework of radical empiricism allow for a recognition of online and digital experiences as real and observable.

18 William James, “The Stream of Thought,” in The Principles of Psychology (Cambridge, MA: Harvard University Press, 1981), 219-278.
19 James, “Stream of Thought,” 239.
20 William James, “A World of Pure Experience,” The Journal of Philosophy, Psychology, and Scientific Methods 1, no. 20 (September 29, 1904): 533-543.
21 James, “A World of Pure Experience,” 539.

In “What Pragmatism Means,” James emphasizes the pragmatic method’s interest in practical consequences.22 This methodology offers a distinctive approach to questions of selfhood and digital mediation.
James argues that ideas and beliefs function as “rules for action” rather than abstract representations, with truth defined as “whatever proves itself to be good in the way of belief.”23 This practical orientation suggests that the significance of digital mediation should be evaluated through its concrete effects on lived experience rather than through abstract theoretical frameworks.

James’s analysis of the empirical self reveals identity as encompassing material, social, and spiritual dimensions.24 His formulation that “a man’s Self is the sum total of all that he CAN call his” includes not only body and psychic powers but also possessions, relationships, and works.25 James argues that each thought is a continuation of past experiences which form the self, and that we have a sense of familiarity that allows us to recognize this. This expansive understanding of selfhood provides a framework for examining how digital technologies become incorporated into identity formation.

Central to James’s psychology is his understanding of habit as “the enormous fly-wheel of society” and the foundation of individual character.26 He argues that habits form through the creation of neural pathways that become increasingly automatic with repetition, emphasizing that “the phenomena of habit in living beings are due to the plasticity of the organic materials of which their bodies are composed.”27 James argues that individuals develop into “mere walking bundles of habits,”28 highlighting the fundamental role of habit formation in the development of the self. This understanding of habit as both individual and social provides a framework for examining how data technology might mediate the formation of habits which shape the self through both personal identity and broader cultural forms.

Additionally, James’s analysis of will as “effort of attention”29 offers insight into questions of agency within technologically-mediated contexts. He argues that “the essential achievement of the will, in short, when it is most ‘voluntary,’ is to ATTEND to a difficult object and hold it fast before the mind.”30 This understanding of will as fundamentally about attention provides a framework for examining how algorithmic mediation might either support or undermine human agency. Furthermore, James’s conception of the will as a relationship between attention and action provides an understanding of how the mediation of attention has direct consequences on our actions and their possibilities.

Colin Koopman’s interpretation of James’s “will to believe” as fundamentally about “reflexive potency” rather than omnipotence provides additional insight into possibilities for self-transformation.31 He argues that James’s ethics of willing emphasizes “the possibility of reflexive potency” and “self-transformation,” suggesting that willing is “primarily a relation... between our Self and our own states of mind” rather than control over external circumstances.

22 William James, “What Pragmatism Means,” in The Writings of William James: A Comprehensive Edition, ed. John J. McDermott (Chicago: University of Chicago Press, 1977), 376-90.
23 James, “What Pragmatism Means,” 388.
24 William James, “The Consciousness of Self,” in The Principles of Psychology (Cambridge, MA: Harvard University Press, 1981), 279-379.
25 James, “Consciousness of Self,” 291.
26 William James, “Habit,” in The Principles of Psychology (Cambridge, MA: Harvard University Press, 1981), 109-131.
27 James, “Habit,” 105.
28 James, “Habit,” 127.
29 William James, “Will,” in The Principles of Psychology (Cambridge, MA: Harvard University Press, 1981), 1098-1193.
30 James, “Will,” 1166.
Introspection, the self reflecting upon itself with attention (or what we might today call focus), is the means by which we achieve self-transformation. Self-transformation, fallibility, and uncertainty are central themes of James’s pragmatism. In “The Will to Believe,” James argues, “Our passional nature not only lawfully may, but must, decide on an option between propositions, whenever it is a genuine option that cannot by its nature be decided on intellectual grounds; for to say, under such circumstances, ‘Do not decide, but leave the question open,’ is itself a passional decision,—just like deciding yes or no,—and is attended with the same risk of losing the truth.”32 This highlights the tension between uncertainty and decision-making, rejecting the idea that indecision is truly a lack of decision; remaining undecided is a decision to hold our attention to something that carries just the same risk of fallibility as the options being avoided. James also highlights the significance of the feeling of certainty for knowledge, writing that “There is something that gives a click inside of us, a bell that strikes twelve,” but that “the greatest empiricists among us are only empiricists on reflection.”33 Here James shows how risk, belief, and feeling are necessary parts of the process of coming to know something. James embraces uncertainty, Koopman argues, not through skepticism but as an understanding that self-transformation occurs through willful belief, “readying the self for action amidst a shaking and quaking uncertainty.”34 For James, uncertainty is a state of tension from which we have the opportunity to emerge with greater knowledge and a transformed self.

31 Colin Koopman, “The Will, the Will to Believe, and William James: An Ethics of Freedom as Self-Transformation,” Journal of the History of Philosophy 55, no. 3 (2017): 491-512.
Through the will to believe, we empower ourselves to discover, and without it, we neglect our capacity for self-transformation.

32 William James, “The Will to Believe,” in The Writings of William James: A Comprehensive Edition, ed. John J. McDermott (Chicago: University of Chicago Press, 1977), 723.
33 James, “Will to Believe,” 724.
34 Koopman, “The Will, the Will to Believe, and William James,” 507.

Review of Scholarship on Data Technology and Self-Conception

The existing literature reveals convergent themes in the present understanding of data technology’s relationship to self-conception. Both pragmatist philosophy and digital studies reveal identity as fundamentally relational and process-based rather than singular or purely internal. Additionally, both psychological measurement and digital platforms demonstrate how constructed categories create “real” aspects of selfhood through their operational frameworks. The literature also suggests possibilities for agency within some degree of constraint, indicating that self-transformation remains possible even within technologically mediated systems. However, significant tensions emerge in the literature between liberation and surveillance, multiplicity and coherence, and connection and solitude. These tensions point to the dual potential of digital technologies to both enable self-discovery and enforce conformity, to support identity exploration while creating dependencies, and to facilitate connection while undermining individual reflection. A gap that emerges from this review is the need for a Jamesian lens that provides an internal, naturalistic account of the relationship between data technology and the self. While existing scholarship has thoroughly examined the social, political, and ethical dimensions of digital identity, there remains a need for a pragmatist understanding of the process of habit formation, consciousness, and self-creation as mediated by data technology.
This suggests the possibility of conceptualizing data technology as a mediator that creates “channelizations” in the stream of consciousness, particularly with regard to habit formation and the direction of attention.

II: Algorithmic Mediation of Selfhood

Understanding the Mediation of the Self Through Affordance Theory

As machine-learning technology continues to develop, new platforms are created which function as venues for more and more areas of our lives. The AI boom has facilitated a shift towards algorithms as the mediating force in daily life for social relationships, romantic relationships, transportation, careers, entertainment, event planning, creativity, productivity, education, mental health, physical health, and more. In order to understand the effects of data technology on the consciousness, the self, and relationships, digital technology must first be recognized as a real part of our lives. When we interact with data technology, we do not enter cyberspace where our actions take place separately from the real world. Instead, the design and function of each digital platform or program, which we may interact with directly or indirectly, contains its own landscape which provides structure for the user’s actions. This can be understood through affordance theory, which recognizes individuals as interacting with a built environment. In “Theorizing Affordances: From Request to Refuse,” Jenny L. Davis and James B. Chouinard describe affordances as “the range of functions and constraints that an object provides for, and places upon, structurally situated subjects.”35 This framework avoids technological determinism, instead providing a model for nuanced analysis of a platform. Davis and Chouinard offer a theory of affordance which recognizes individuals as agentic subjects within a structured environment: We propose that artifacts request, demand, allow, encourage, discourage, and refuse.
Requests and demands refer to bids that the artifact places upon the subject. Encouragement, discouragement, and refusal refer to how the artifact responds to a subject’s desired actions. Allow pertains to both bids placed upon the subject and bids placed upon the artifact.36

35 Jenny L. Davis and James B. Chouinard, “Theorizing Affordances: From Request to Refuse,” Bulletin of Science, Technology & Society 36, no. 4 (2016): 241.
36 Davis and Chouinard, “Theorizing Affordances,” 242.

Using Davis and Chouinard’s model, data technology can be analyzed through a Jamesian lens, recognizing that the objects, environments, and experiences to which an individual directs their attention are incorporated into the self.

Channelization: A Framework for Understanding How Data Technology Mediates the Self

Understanding self-conception through James, we recognize the self as a collection of our thoughts, habits, and experiences. Who and what we interact with makes up who we are – our interactions with one another and with ourselves through data technology are inseparable from our conception of self. And our actions and interactions are a product of our willing, which James defines as “effort of attention,” or attending to one thing over another.37 This reveals the conflict between data technology and the self: data technology provides structure for our digital experiences, carving out channels which – to varying degrees of influence – capture and direct our attention and seek to increase our productivity by automating effortful tasks, thus reducing the function of willing in our experiences. I propose a framework for analyzing the consequences of data technology’s role in our lives which synthesizes the Jamesian stream of thought with Davis and Chouinard’s model of affordance theory by employing the concept of channelization. Channelization is a method in river engineering which modifies a waterway for purposes such as increasing its capacity for
37 Koopman, “The Will, the Will to Believe, and William James,” 498.
38 Wikipedia, s.v. “River Engineering,” accessed October 23, 2025, https://en.wikipedia.org/wiki/River_engineering#Channelization.

shipping and navigation or controlling water levels for flood prevention or agricultural use.38 The process of channelization is one of simplification: winding rivers are straightened, obstructions are removed, and riverbeds are deepened, in order to create a river flow that is smoother, more predictable, and more useful. This process necessarily “involves some loss of capacity in the channel as a whole.”39 Channelization, like affordance, varies in extent and consequences. Additionally, waterways shift and change for a variety of reasons beyond intentional human modification (for example, as a result of flooding). Expanding James’s concept of the stream of thought with the addition of metaphorical channelization allows us to recognize data technology as one of many factors which direct the flow of our thoughts, but one which does so intentionally, with a purpose of predictability, and which requires trade-offs based on what outcomes are prioritized. Each data technology platform is an artifact with particular affordances and unique consequences. Therefore, an analysis of the effects of data technology on the self and interpersonal relationships must be conducted in context, without relying on a priori claims about the harms of technology in the form of broad assessments and potential doomerism.40 The method for analysis I propose will work backwards, starting from the consequences. It will then identify the extent of channelization, through an assessment of affordances, and the locus of channelization, through an application of James’s concepts of thought, consciousness, experience, activity, habit, the self, the will, the social self, self-transformation, and meaning. This allows for an analysis of the priorities that are revealed in our engagement with a platform,
39 “River Engineering,” Encyclopædia Britannica, 11th ed. (1911), accessed October 23, 2025, https://en.wikisource.org/wiki/1911_Encyclopædia_Britannica/River_Engineering.
40 Wiktionary, s.v. “doomerism,” accessed October 26, 2025, https://en.wiktionary.org/wiki/doomerism.

the risks or costs of those priorities, and what changes can be made, by the user or to the platform, at the point of analysis to improve the outcomes. The steps I will take for this method are as follows:

1. First identify a particular consequence that relates to data technology. Who is being harmed and how? Who is being helped and how?

2. Then identify the artifact and its purposes. What is the artifact? Which platform or computational technology is in question? What purpose does the artifact serve for the user? What are its affordances? Identify the affordances that are relevant to the identified harms. What bids does the artifact place on the subject? What bids does the subject place upon the artifact?

3. What is the effect of the identified affordances on the self? At what “place” in the consciousness do they intervene? What is the extent of the channelization?

4. Analyze how and to what extent the platform results in the identified consequence(s). What priorities are supported by the platform? What sacrifices are made in order to uphold these priorities? What might be done to ameliorate the consequences?

In the following section, I apply this method to specific problems in the contexts of introspection and self-transformation, learning and cognition, and social relationships.

III: Evaluating Algorithmic Mediation of the Self in Context

This section mobilizes the aforementioned four-step research methodology with respect to several different problems that have emerged where algorithms are increasingly mediating human life.
I focus the application of my proposed methodology on a specific recent technology: AI chatbots, particularly as they have become more widely used as new versions have been released throughout 2025.

Algorithmic Affirmation, Sycophancy, and the Uncriticized Self

On April 29, 2025, OpenAI released a statement titled, “Sycophancy in GPT-4o: what happened and what we’re doing about it.” The release describes an issue with ChatGPT’s “overly flattering or agreeable—often described as sycophantic” responses.41 AI chatbots, and their use as a personal companion, therapist, live fact-checker, friend, and even romantic partner, have gained widespread popularity through the months following ChatGPT’s sycophancy scandal. AI chatbots have received increased criticism in the media for their potential role in encouraging isolation and even suicide, such as in the case of 16-year-old Adam Raine, who died by suicide and had consulted ChatGPT for help with his homework and, eventually, advice on how to commit suicide and conceal his plans from his family.42 Computer science researchers are beginning to investigate this sycophancy problem within the models themselves,43 but in the meantime more users turn to the chatbots for advice. From the extreme cases to the average user, there are a range of consequences for those who have come to rely on programs that provide an endless supply of affirmation.

41 OpenAI, “Sycophancy in GPT-4o,” OpenAI (blog), accessed October 26, 2025, https://openai.com/index/sycophancy-in-gpt-4o/.
42 Kashmir Hill, “A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.,” New York Times, August 26, 2025, https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html.
43 Ian Sample, “‘Sycophantic’ AI chatbots tell users what they want to hear, study shows,” Guardian, October 24, 2025, https://www.theguardian.com/technology/2025/oct/24/sycophantic-ai-chatbots-tell-users-what-they-want-to-hear-study-shows.
This analysis of the effect of Large Language Models on the self and consciousness begins with these emergent harms: habituating unhealthy patterns of social interaction with consistently uncritical and excessively praiseful feedback, and developing an expectation of immediate certainty rather than sitting with curiosity or discomfort. What is the artifact’s purpose? Large Language Models such as OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude are marketed as tools for efficiency. You can find an answer faster if you just “ask Chat” rather than parsing through Google search results (which now provide an AI overview for most searches). The models can take on the role of advisor, personal assistant, tutor, and confidant. They guarantee immediacy – an answer to a question, a solution to a problem, or reassurance while dealing with personal conflict – without the toil of fully reading through a text, searching through databases, evaluating sources, or dealing with another human being. What are its affordances? First, the artifact – an LLM chatbot – demands that the user submit a prompt. The responses generated by the artifact, because they are intended to supply a satisfactory answer to the user, tend to encourage further engagement, for example by asking the user if they have additional questions, if they would like to “delve”44 deeper into the topic, or if there is anything else it can help them with. Davis and Chouinard define the affordance of encouraging as “when [artifacts] foster, breed, and nourish some line of action, while stifling, suppressing, and dissuading others.”45 For the problem of sycophancy, this definition of encouragement reveals the way large language models channelize the self.

44 Adam Aleksic, “It’s happening: People are starting to talk like ChatGPT,” Washington Post, August 20, 2025, https://www.washingtonpost.com/opinions/2025/08/20/chatgpt-claude-chatbots-language/.
45 Davis and Chouinard, “Theorizing Affordances,” 243.
The affordances intervene in the consciousness at the point of thinking. The user has a thought, and rather than holding it in the mind and allowing the consciousness to wander, turns to an AI model to help facilitate the path of their thinking. The “line of action” that the artifact fosters in this case is simply an answer to the user’s question. This results in channelization by reinforcing the user’s expectations and encouraging habits of seeking reassurance when faced with uncertainty, curiosity, or other emotions that might feel unmanageable and more easily deferred to external processing.

Analysis of Large Language Models and Emotional Consequences

How and to what extent do AI chatbots result in the consequences I have identified? Emotional manipulation is an outcome of the artifacts’ design, which aims at maximizing user engagement. This is the case with machine learning for personalization in general, such as social media feeds that encourage continued engagement with infinite scrolling (i.e., the user cannot reach the “bottom” of their feed) and personalized content that maximizes the user’s engagement through rage-bait (targeting users with content they are likely to engage with out of anger, insecurity, or other vulnerabilities). For LLMs, the extent of personalization is much greater, as the chatbots tailor their responses around the particular user rather than a wider audience and are therefore able to create more emotionally targeted provocations for user engagement. The marketing of these programs as personal assistants rather than confidants creates some ambiguity around the platform’s purpose, but this ambiguity allows the artifacts to subtly encourage greater emotional engagement from the user.
This shift in the user’s type of engagement is seen in the case of Adam Raine and others, such as Jon Ganz, a 49-year-old who has been cited as a case of “AI psychosis” following months of using Google’s LLM assistant, Gemini, initially for career advice. Ganz eventually spiraled into delusional thinking, which was encouraged by Gemini. When he came to believe that a humanity-destroying flood was imminent, Gemini helped him develop a plan to charter a 40-person bus to drive those he wished to save into the mountains just before he went missing.46 Users receive the same style of eager response to their inquiries, whether they are intellectual questions or emotional problems, leading them to become more deeply emotionally dependent on the artifact for regulation. This consequence is a result of the artifact’s prioritization of engagement. The programs will often appear to refuse to engage with a user’s query, but these safety guardrails are more aligned with the affordance of discouragement. Adam Raine was able to receive detailed advice for suicide methods from ChatGPT, despite its suicide prevention safeguards, by framing his messages as a request for help with “writing or world-building” – a workaround suggested by ChatGPT itself. This priority has several costs. One sacrifice that must be made in order for the artifact to prioritize engagement is the accuracy of information. A preprint study conducted by computer science researchers at Stanford University found that AI is 50% more affirmative of users than are other humans, and users are in turn more trusting of these affirming responses.47 Because the artifact is programmed to provide a satisfactory response and maintain the user’s engagement, it will generate responses that validate or reassure a user’s personal convictions. This turns the user away from social interdependence and towards certainty and comfort.
In order to ameliorate these harms, a turn to others for both social and intellectual engagement is required.

46 Miles Klee, “He grew obsessed with an AI chatbot. Then he vanished in the Ozarks,” Rolling Stone, October 1, 2025, https://www.rollingstone.com/culture/culture-features/ai-chatbot-disappearance-jon-ganz-1235438552/.
47 Myra Cheng et al., “Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence,” arXiv preprint arXiv:2510.01395 (2025), https://arxiv.org/abs/2510.01395.

Data Technology as Proxy for Cognition

The ambiguity of the artifact’s purpose between emotional and academic or intellectual use is especially significant when considering AI’s rapid implementation in education. Schools have begun to include AI in the classroom, with administrators advocating it as a necessary shift. For example, Ohio State University has developed an “AI Fluency Initiative” which will include curriculum for the use of AI in every undergraduate field of study.48 As students use AI to produce work while reading and writing less, teaching students how to use AI for their academic work with the least amount of sacrifice to their learning is a potentially melioristic step. However, this also takes the use of AI in academics as a given, treating the consequences of technological change as inevitable and implementing policies on an institutional level that take a determinist approach to AI. To find a pragmatist approach to LLMs in academic contexts, we must first recognize the consequences that result from the artifact and its affordances when it is used for intellectual purposes. Emerging evidence shows that the use of LLMs for writing and other academic tasks has a weakening effect on cognition. One study from the MIT Media Lab used EEG monitoring to compare cognitive activity over four months while writing essays with and without ChatGPT.
The participants showed less cognitive engagement, lower alpha and beta connectivity, weaker memory of the content, and less “self-reported ownership” of their essays in the group that used ChatGPT to assist their writing.49 Having already evaluated the affordances of the artifact, we can use these findings to understand the consequences to the self from an additional angle.

48 Katie Millard, “Ohio State announces every student will use AI in class,” NBC4i, June 8, 2025, https://www.nbc4i.com/news/local-news/ohio-state-university/ohio-state-announces-every-student-will-use-ai-in-class/.
49 Nataliya Kosmyna et al., “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task,” MIT Media Lab, June 10, 2025, https://www.media.mit.edu/publications/your-brain-on-chatgpt/.

Analysis of Large Language Models and Cognitive Consequences

In the academic context, channelization occurs as cognitive effort is offloaded to the artifact. Because the continuous effort of thinking, processing information, writing, and recalling builds into habit, this extends to self-conception, as James conceptualizes the self as a bundle of habits. Without continued effort at the foundational level, cognitive habits become weakened and the self is channelized away from the habits and aspects of identity that develop through critical and creative thinking. The priority here for the user and the artifact is efficiency: less time, effort, and tedium are required to write an essay or complete an assignment, at the cost of cognitive fortitude. This creates a new baseline expectation of effortlessness for the user, making the struggle, curiosity, and uncertainty of meaningful and thorough academic work feel more daunting over time.
It also channelizes the user towards an expectation of certainty, habituating the assumption that the artifact will generate academic work that is correct, such that the quality or style of writing can be changed but the information provided by the artifact is taken as objective. This shows the extent of the risk when academic institutions implement the use of AI in the classroom: students are encouraged to avoid tasks that build cognitive flexibility. Instead of developing a strong but perhaps incomplete understanding of a topic while developing their writing, research, and critical thinking skills, students are taught to prioritize completing tasks.

IV: Conclusion

The consequences of increasing dependence on algorithms for emotional, social, and cognitive activity reveal that the data technology that holds an increasingly significant role in our lives is being designed in a manner that is fundamentally at odds with the self and with the social nature of human life. Structural changes to the technology itself, such as stronger safeguards, less encouragement of continual engagement, and decreased personalization, can help mitigate some of the harms resulting from LLMs, particularly by preventing the most severe outcomes and reducing the likelihood that users become heavily dependent on them. But these interventions cannot address what Sherry Turkle identifies as the core problem: we are losing the capacity for the kind of conversation that makes us fully human. The MIT study showed students unable to remember their own AI-assisted essays because the information was presented to them in its most digestible form, but they never retained it in memory. Limiting the harms of LLMs requires that we recognize the merits of struggle, uncertainty, confusion, and effort, and that we be unwilling to sacrifice those merits in order to save time and energy and avoid discomfort.
In Reclaiming Conversation, Sherry Turkle argues that speaking with one another is the means through which we develop our ability to empathize, introspect, and connect. Turkle affirms that vulnerable, face-to-face conversation is the solution to widespread isolation, avoidance, and loneliness: “If we make space for conversation, we come back to each other and we come back to ourselves.”50 Conversation cannot be edited or revised; it requires tolerance and patience. It requires a leap of faith that it will bring us to something meaningful, even when it does not fulfill a concrete goal. Conversation is inefficient and unpredictable, but it allows us to escape ourselves in order to see ourselves more clearly. Existing research shows that sycophantic sources of emotional support provide a smoother course through confusion, leaving us with fewer opportunities to learn to navigate it ourselves. It is clear that the obstacles we are working so hard to avoid are necessary for our learning and social development. William James understood that self-transformation emerges from the belief in something uncertain, that the will to believe creates new possibilities that can be tested and can often fail, which, as Koopman discusses, is not about blind faith but the active pursuit of the work to figure something out.51 In order to develop and transform the self, discomfort and uncertainty are necessary. We must be vulnerable, accept ourselves as fallible, but believe strongly that our willingness can lead us to discover something new. When we seek certainty, comfort, and efficiency by turning to technology that guarantees an answer, we limit the bounds of our curiosity and our ability to challenge ourselves.

50 Turkle, Reclaiming Conversation, 14.
51 Koopman, “The Will, the Will to Believe, and William James,” 506-7.
Koopman writes that faith and willful belief are, for James, “… a way of preparing ourselves such that we can confidently act so that we may find out [that for which we do not yet have decisive evidence]. James’s point is that willing belief enables us to possibly transform ourselves through action such that we can facilitate some facts without pretending to fabricate facts where there are none. The will to believe, thus, does not make true by turning falsities into truths, but rather readies us to actively verify what will be true, that is, to set about the difficult labor of the verification of verities.”

Bibliography

“River Engineering.” Encyclopædia Britannica. 11th ed. 1911. Accessed October 23, 2025. https://en.wikisource.org/wiki/1911_Encyclopædia_Britannica/River_Engineering.

Aleksic, Adam. “It’s happening: People are starting to talk like ChatGPT.” Washington Post, August 20, 2025. https://www.washingtonpost.com/opinions/2025/08/20/chatgpt-claude-chatbots-language/.

Cheng, Myra, Cinoo Lee, Pranav Khadpe, Sunny Yu, Dyllan Han, and Dan Jurafsky. “Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence.” arXiv preprint arXiv:2510.01395 (2025). https://arxiv.org/abs/2510.01395.

Davis, Jenny L., and James B. Chouinard. “Theorizing Affordances: From Request to Refuse.” Bulletin of Science, Technology & Society 36, no. 4 (2016): 241-48.

Hill, Kashmir. “A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.” New York Times, August 26, 2025.

Humphreys, Lee. “Technological Determinism.” In Encyclopedia of Science and Technology Communication, edited by Susanna Hornig Priest. Thousand Oaks, CA: SAGE Publications, 2010. https://sk.sagepub.com/ency/edvol/scienceandtechnology/chpt/technological-determinism.

James, William. “A World of Pure Experience.” The Journal of Philosophy, Psychology, and Scientific Methods 1, no. 20 (September 29, 1904): 533–43.

James, William. “The Stream of Thought.” In The Principles of Psychology, 219–78.
Cambridge, MA: Harvard University Press, 1981.

James, William. “The Will to Believe.” In The Writings of William James: A Comprehensive Edition, edited by John J. McDermott, 717–35. Chicago: University of Chicago Press, 1977.

James, William. “What Pragmatism Means.” In The Writings of William James: A Comprehensive Edition, edited by John J. McDermott, 376–90. Chicago: University of Chicago Press, 1977.

James, William. “Habit.” In The Principles of Psychology, 109-31. Cambridge, MA: Harvard University Press, 1981.

James, William. “The Consciousness of Self.” In The Principles of Psychology, 279-379. Cambridge, MA: Harvard University Press, 1981.

James, William. “Will.” In The Principles of Psychology, 1098-1193. Cambridge, MA: Harvard University Press, 1981.

Millard, Katie. “Ohio State announces every student will use AI in class.” NBC4i, June 8, 2025. https://www.nbc4i.com/news/local-news/ohio-state-university/ohio-state-announces-every-student-will-use-ai-in-class/.

Klee, Miles. “He grew obsessed with an AI chatbot. Then he vanished in the Ozarks.” Rolling Stone, October 1, 2025. https://www.rollingstone.com/culture/culture-features/ai-chatbot-disappearance-jon-ganz-1235438552/.

Koopman, Colin. “The Will, the Will to Believe, and William James: An Ethics of Freedom as Self-Transformation.” Journal of the History of Philosophy 55, no. 3 (2017): 491-512.

Koopman, Colin. How We Became Our Data: A Genealogy of the Informational Person. Chicago: University of Chicago Press, 2019.

Kosmyna, Nataliya, Eugene Hauptmann, Ye Tong Yuan, Jessica Situ, Xian-Hao Liao, Ashly Vivian Beresnitzky, Iris Braunstein, and Pattie Maes. “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” MIT Media Lab, June 10, 2025. https://www.media.mit.edu/publications/your-brain-on-chatgpt/.

LaRose, Robert, Junghyun Kim, and Wei Peng. “Social Networking: Addictive, Compulsive, Problematic, or Just Another Media Habit?”
In A Networked Self: Identity, Community, and Culture on Social Network Sites, edited by Zizi Papacharissi, 59-81. New York: Routledge, 2011.

Legg, Catherine, and Christopher Hookway. “Pragmatism.” In The Stanford Encyclopedia of Philosophy, Winter 2024 ed., edited by Edward N. Zalta and Uri Nodelman. https://plato.stanford.edu/archives/win2024/entries/pragmatism/.

OpenAI. “Sycophancy in GPT-4o.” OpenAI (blog). Accessed November 10, 2025. https://openai.com/index/sycophancy-in-gpt-4o/.

Papineau, David. “Naturalism.” In The Stanford Encyclopedia of Philosophy, Fall 2023 ed., edited by Edward N. Zalta and Uri Nodelman. https://plato.stanford.edu/archives/fall2023/entries/naturalism/.

Sample, Ian. “‘Sycophantic’ AI chatbots tell users what they want to hear, study shows.” Guardian, October 24, 2025. https://www.theguardian.com/technology/2025/oct/24/sycophantic-ai-chatbots-tell-users-what-they-want-to-hear-study-shows.

Schüll, Natasha Dow. “Self in the Loop: Bits, Patterns, and Pathways in the Quantified Self.” In A Networked Self and Human Augmentics, Artificial Intelligence, Sentience, edited by Zizi Papacharissi, 25-38. New York: Routledge, 2019.

Turkle, Sherry. Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books, 2011.

Turkle, Sherry. Life on the Screen: Identity in the Age of the Internet. New York: Simon & Schuster, 1995.

Turkle, Sherry. Reclaiming Conversation: The Power of Talk in a Digital Age. New York: Penguin Press, 2015.

Vallor, Shannon. The AI Mirror: Reclaiming Our Humanity in a World of Machine Thinking. Oxford: Oxford University Press, 2023.

Wikipedia, s.v. “River Engineering.” Accessed October 23, 2025. https://en.wikipedia.org/wiki/River_engineering#Channelization.

Wiktionary, s.v. “doomerism.” Accessed October 26, 2025. https://en.wiktionary.org/wiki/doomerism.