Deep Lexical Hypothesis: Identifying personality structure in natural language

Date

2022-03-04

Authors

Cutler, Andrew
Condon, David M.

Publisher

Cornell University

Abstract

Recent advances in natural language processing (NLP) have produced general models that can perform complex tasks such as summarizing long passages and translating across languages. Here, we introduce a method to extract adjective similarities from language models as done with survey-based ratings in traditional psycholexical studies but using millions of times more text in a natural setting. The correlational structure produced through this method is highly similar to that of self- and other-ratings of 435 terms reported by Saucier and Goldberg (1996a). The first three unrotated factors produced using NLP are congruent with those in survey data, with coefficients of 0.89, 0.79, and 0.79. This structure is robust to many modeling decisions: adjective set, including those with 1,710 terms (Goldberg, 1982) and 18,000 terms (Allport & Odbert, 1936); the query used to extract correlations; and language model. Notably, Neuroticism and Openness are only weakly and inconsistently recovered. This is a new source of signal that is closer to the original (semantic) vision of the Lexical Hypothesis. The method can be applied where surveys cannot: in dozens of languages simultaneously, with tens of thousands of items, on historical text, and at extremely large scale for little cost. The code is made public to facilitate reproduction and fast iteration in new directions of research.
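The pipeline the abstract describes (adjective similarities from a language model, an unrotated factor solution, and congruence coefficients against survey-based factors) can be sketched roughly as follows. This is a minimal illustration, not the authors' released code: the random embeddings stand in for vectors actually extracted from a language model, and `survey_factor` stands in for loadings from Saucier and Goldberg's ratings.

```python
import numpy as np

# Hypothetical stand-in for adjective representations extracted from a
# language model (the paper queries real models; see the released code).
rng = np.random.default_rng(0)
n_adjectives, dim = 12, 50
embeddings = rng.normal(size=(n_adjectives, dim))

# Adjective-by-adjective correlational structure, analogous to the
# correlation matrix of survey ratings in classical psycholexical studies.
corr = np.corrcoef(embeddings)

# Unrotated factors: eigendecomposition of the correlation matrix,
# with loadings scaled by the square root of each eigenvalue.
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order] * np.sqrt(np.clip(eigvals[order], 0, None))

def congruence(a, b):
    """Tucker congruence coefficient between two factor-loading vectors."""
    return abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Compare the first NLP-derived factor to a (here, simulated) survey factor.
survey_factor = rng.normal(size=n_adjectives)
print(round(congruence(loadings[:, 0], survey_factor), 2))
```

With real inputs, the same comparison yields the congruence coefficients reported above (0.89, 0.79, 0.79 for the first three factors).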

Description

73 pages

Keywords

Personality structure, Language models, Lexical hypothesis, Deep learning, Prompt engineering

Citation

https://doi.org/10.48550/arXiv.2203.02092