What I’ve been up to

Recent presentations

  • Digital risks and harms: From social media to artificial intelligence (February 2025). Office for Product Safety and Standards (London, UK).
    Summary
    In this invited talk for the UK's Office for Product Safety and Standards, I discussed some challenges (but also opportunities) in understanding the rapidly evolving digital technology landscape from a psychologist's perspective.
  • Understanding psychological heterogeneity with Bayesian hierarchical models (February 2025). Tilburg University (Tilburg, NL).
    Summary
    As psychologists shift their focus from "the average person" to fundamental heterogeneity in psychological phenomena, much work remains to be done in developing effective models, descriptions, and reporting practices that maximize investigations' impact on theory development. Our goal is to contribute to that work. We describe and illustrate the use of numerical and graphical descriptions of heterogeneity that (1) go beyond model parameters to describe heterogeneity in clear and actionable terms, and (2) take uncertainty in model parameters into account.
  • Communicating Causal Effect Heterogeneity (December 2024). Department of Psychology, University of Illinois at Urbana-Champaign (Remote). [link]
    Summary
    Advances in experimental, data collection, and analysis methods have brought population variability in psychological phenomena to the fore. Yet, current practices for interpreting such heterogeneity do not appropriately treat the uncertainty inevitable in any statistical summary. Heterogeneity is best thought of as a distribution of features with a mean (average person's effect) and variance (between-person differences). This expected heterogeneity distribution can be further summarized, e.g., as a heterogeneity interval (Bolger et al., 2019). However, because empirical studies estimate the underlying mean and variance parameters with uncertainty, the expected distribution and interval will underestimate the actual range of plausible effects in the population. Using Bayesian hierarchical models, and with the aid of empirical datasets from social and cognitive psychology, we provide a walk-through of effective heterogeneity reporting and display tools that appropriately convey measures of uncertainty. We cover interval, proportion, and ratio measures of heterogeneity and their estimation and interpretation. These tools can be a spur to theory building, allowing researchers to widen their focus from population averages to population heterogeneity in psychological phenomena.
  • Understanding psychological heterogeneity with Bayesian hierarchical models using the brms R package (September 2024). StanCon (Oxford, UK). [link]
    Summary
    We discuss computational and graphical probabilistic methods for assessing and communicating causal effect heterogeneity. The methods we discuss are especially timely as psychological research is placing increasing emphasis on variation among individuals' effects. Established practices in studying heterogeneity predominantly focus on point estimates and ignore uncertainties, and thereby substitute robust inferences with guesses based on expectations. We provide a walk-through of effective heterogeneity reporting and display tools that appropriately convey measures of uncertainty using Bayesian hierarchical models. We illustrate the concepts and computations behind four heterogeneity metrics based on the posterior distribution of the effects' heterogeneity distribution. These tools are enabled by (1) modern Bayesian methods that return random draws from models' multivariate posterior distributions, and (2) accessible interfaces (brms) to state-of-the-art estimation algorithms (Stan). We discuss the benefits of both and illustrate their uses with example datasets from psychological research. (A brief code sketch of this workflow appears after this list.)
  • Investigating video game player behavior and well-being (August 2024). Tilburg University (Tilburg, NL).
    Summary
    Workshop presentation on our open dataset on video game play behavior.
  • Video games and well-being (July 2024). Gaming Disorder Global Seminar (Seoul, SK).
    Summary
    In this presentation I review psychological research on video games and how they might affect players' well-being. Many studies have focused on how time spent playing video games predicts individuals' well-being and found that the associations are likely to be very small if they exist at all. Overall, people who play more report levels of well-being similar to those of individuals who play less. I discuss methodological issues that must be addressed before reliable and generalizable conclusions about video games' effects on well-being and health can be drawn. These include facilitating independent researchers' access to game play data from industry sources.
  • Big data, small transparency: Limits to understanding, and addressing effectively, concerning behaviors in the online era (June 2024). International Behavioural Public Policy Conference (Cambridge, UK). [link]
  • Understanding the roles of digital technologies in psychological functioning (June 2024). Tilburg University (Tilburg, NL).
    Summary
    Digital technologies have impacted nearly all domains of human life. Yet, the current state of research does not adequately address hotly debated worries and hopes about how digital technologies might psychologically impact their users. I present results from my attempts at studying how the adoption and use of digital technologies, the internet, and social media are associated with psychological well-being on a global scale. In contrast to past negative results from studies that focused on WEIRD populations, I find little evidence in favor of widespread harms. In many cases, internet adoption and use predict greater psychological well-being. The current emphasis on digital technologies' effects is best seen in the context of a repeated cycle of technology panics: opportunistic, myopic, and ineffective research is carried out on novel technologies while inconclusive results regarding prior questions are forgotten. To do better, widespread methodological and theoretical advances are needed. The same technologies that inspire current societal worries might, if appropriately used, facilitate a more robust understanding of psychological functioning in the digital age.
  • Internet Technology and Well-Being (May 2024). Vrije Universiteit Amsterdam (Amsterdam, NL).
    Summary
    An invited presentation for the Department of Communication Science at the Vrije Universiteit Amsterdam. I discussed my recent work on understanding potential broad shifts in psychological well-being associated with adoption of internet technologies.
  • Understanding the roles of digital technologies in psychological functioning (March 2023). Tilburg Experience Sampling Center, Tilburg University (Tilburg, NL).
    Summary
    Video game play is an extremely popular form of leisure, yet the scientific understanding of games' relations to psychosocial functioning is in its infancy. To better understand games' roles in people's lives, we need not only more experimentation, but critically, more observation and description of play as it occurs naturally. We describe a data set of ~10,000 players from 39 countries and ~700,000 responses to psychological instruments within the video game PowerWash Simulator. These data were collected in collaboration with the game's developer FuturLab Inc., who published a modified version of the game. This research edition queried participants' well-being and motivational experiences during play six times each hour using an in-game messaging system and, along with the survey responses, logged detailed telemetry on player behavior, achievements, and other in-game events. The resulting combination of detailed play behavior and event data and players' high-temporal-resolution responses to psychological instruments within the game itself is suitable for both detailed descriptive studies and in-depth statistical modelling of video game play and its relations to players' psychological states. (A sketch of how such data might be combined for analysis appears after this list.)
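
The heterogeneity talks above (Tilburg, Urbana-Champaign, and StanCon) revolve around one computation: summarizing the distribution of person-specific effects while propagating the posterior uncertainty in its mean and standard deviation. Below is a minimal R sketch of that idea using brms; the data frame and variable names (d, y, x, id) are placeholder assumptions, not the materials from the talks.

    # Minimal sketch (hypothetical data): person-specific effects of x on y,
    # estimated with a Bayesian hierarchical model via brms (Stan backend).
    library(brms)
    library(posterior)

    # Assume a long-format data frame `d` with columns y, x, and id (person).
    fit <- brm(y ~ x + (1 + x | id), data = d, chains = 4, cores = 4, seed = 1)

    # Posterior draws of the average effect (b_x) and between-person SD (sd_id__x).
    draws <- as_draws_df(fit)
    mu  <- draws$b_x
    tau <- draws$sd_id__x

    # Plug-in 95% heterogeneity interval at the posterior medians
    # (ignores uncertainty in the parameters themselves).
    median(mu) + c(-1, 1) * 1.96 * median(tau)

    # Uncertainty-aware version: draw one new person's effect per posterior
    # draw, then summarize; this widens the interval appropriately.
    theta_new <- rnorm(length(mu), mean = mu, sd = tau)
    quantile(theta_new, c(0.025, 0.975))

    # Proportion metric: expected share of people with a positive effect.
    mean(pnorm(0, mean = mu, sd = tau, lower.tail = FALSE))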
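
The PowerWash Simulator entry above pairs high-frequency in-game survey responses with event telemetry. The sketch below shows how such data might be combined for analysis; the file names and columns are assumptions for illustration, not the dataset's actual schema.

    # Hypothetical sketch: combine in-game survey responses with telemetry.
    # File and column names are assumptions, not the published schema.
    library(dplyr)

    surveys   <- read.csv("surveys.csv")    # player_id, prompt_time, wellbeing, motivation
    telemetry <- read.csv("telemetry.csv")  # player_id, event_time, event_type

    # Summarize play behavior per player, then join it to the survey responses,
    # yielding one table suitable for description or multilevel modelling.
    play_summary <- telemetry |>
      group_by(player_id) |>
      summarise(n_events = n(), n_play_days = n_distinct(as.Date(event_time)))

    analysis_data <- left_join(surveys, play_summary, by = "player_id")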

Recent things I’ve read

This is a short selection of things I’ve been reading, watching, or listening to.

  • Authors: Helen Pearson
    Date: 2025-02-05 | Date read: 2025-02-06
    | Archive link: https://archive.is/VlGxx
    Summary
    Search engines, GPS maps and other tech can alter our ability to learn and remember. Now scientists are working out what AI might do.
  • Authors: Dicastery for the Doctrine of the Faith & Dicastery for Culture and Education
    Date: 2025-01-28 | Date read: 2025-01-31
    | Archive link: https://archive.is/etvMY
  • Authors: Mark Steyvers & Robert J. Schafer
    Date: 2020-08-31 | Date read: 2025-01-31
    Summary
    The flexibility to learn diverse tasks is a hallmark of human cognition. To improve our understanding of individual differences and dynamics of learning across tasks, we analyse the latent structure of learning trajectories from 36,297 individuals as they learned 51 different tasks on the Lumosity online cognitive training platform. Through a data-driven modelling approach using probabilistic dimensionality reduction, we investigate covariation across learning trajectories with few assumptions about learning curve form or relationships between tasks. Modelling results show substantial covariation across tasks, such that an entirely unobserved learning trajectory can be predicted by observing trajectories on other tasks. The latent learning factors from the model include a general ability factor that is expressed mostly at later stages of practice and additional task-specific factors that carry information capable of accounting for manually defined task features and task domains such as attention, spatial processing, language and math.
  • Authors: Martin A. Schwartz
    Date: 2008-06-01 | Date read: 2025-01-28
  • Authors: Grant Sanderson
    Date: 2024-11-20 | Date read: 2025-01-14
    Summary
    Based on the 3blue1brown deep learning series on neural networks.
  • Authors: David Papineau
    Date: Unknown | Date read: 2025-01-14
    | Archive link: https://archive.is/d2afM
    Summary
    David Papineau argues that it is crucial for scientists to start heeding the lessons of Thomas Bayes.
  • Authors: Anne Rauwerda
    Date: 2024-11-01 | Date read: 2025-01-14
    Summary
    A conversation about yogurt wars, German hymns, tropical cyclones, and the people who make Wikipedia function.
  • Authors: Grace Lindsay
    Date: 2024-05-03 | Date read: 2025-01-14
    Summary
    As a new professor, I was caught off guard by one part of the job: my role as an evaluator.
  • Authors: David Nirenberg
    Date: 2022-11-28 | Date read: 2025-01-14
    Summary
    Game theory, computers, the atom bomb—these are just a few of things von Neumann played a role in developing, changing the 20th century for better and worse.
  • Authors: Zack Savitsky
    Date: 2024-12-13 | Date read: 2025-01-14
    Summary
    Exactly 200 years ago, a French engineer introduced an idea that would quantify the universe’s inexorable slide into decay. But entropy, as it’s currently understood, is less a fact about the world than a reflection of our growing ignorance. Embracing that truth is leading to a rethink of everything from rational decision-making to the limits of machines.
  • Authors: Vincent Arel-Bundock, Noah Greifer, & Andrew Heiss
    Date: 2024-11-30 | Date read: 2024-12-02
    Summary
    The parameters of a statistical model can sometimes be difficult to interpret substantively, especially when that model includes nonlinear components, interactions, or transformations. Analysts who fit such complex models often seek to transform raw parameter estimates into quantities that are easier for domain experts and stakeholders to understand. This article presents a simple conceptual framework to describe a vast array of such quantities of interest, which are reported under imprecise and inconsistent terminology across disciplines: predictions, marginal predictions, marginal means, marginal effects, conditional effects, slopes, contrasts, risk ratios, etc. We introduce marginaleffects, a package for R and Python which offers a simple and powerful interface to compute all of those quantities, and to conduct (non-)linear hypothesis and equivalence tests on them. marginaleffects is lightweight; extensible; it works well in combination with other R and Python packages; and it supports over 100 classes of models, including linear, generalized linear, generalized additive, mixed effects, Bayesian, and several machine learning models.
  • Authors: Jean C. Digitale, Jeffrey N. Martin, & Medellena Maria Glymour
    Date: Unknown | Date read: 2024-10-22
    Summary
    Directed acyclic graphs (DAGs) are an intuitive yet rigorous tool to communicate about causal questions in clinical and epidemiologic research and inform study design and statistical analysis. DAGs are constructed to depict prior knowledge about biological and behavioral systems related to specific causal research questions. DAG components portray who receives treatment or experiences exposures; mechanisms by which treatments and exposures operate; and other factors that influence the outcome of interest or which persons are included in an analysis. Once assembled, DAGs — via a few simple rules — guide the researcher in identifying whether the causal effect of interest can be identified without bias and, if so, what must be done either in study design or data analysis to achieve this. Specifically, DAGs can identify variables that, if controlled for in the design or analysis phase, are sufficient to eliminate confounding and some forms of selection bias. DAGs also help recognize variables that, if controlled for, bias the analysis (e.g., mediators or factors influenced by both exposure and outcome). Finally, DAGs help researchers recognize insidious sources of bias introduced by selection of individuals into studies or failure to completely observe all individuals until study outcomes are reached. DAGs, however, are not infallible, largely owing to limitations in prior knowledge about the system in question. In such instances, several alternative DAGs are plausible, and researchers should assess whether results differ meaningfully across analyses guided by different DAGs and be forthright about uncertainty. DAGs are powerful tools to guide the conduct of clinical research.
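
The Arel-Bundock, Greifer & Heiss entry above introduces the marginaleffects package. Here is a minimal usage sketch in R; the model and data are my own illustration, not an example from the paper.

    # Turn a logistic regression's coefficients into quantities that are easier
    # to interpret: average slopes and predicted probabilities.
    library(marginaleffects)

    mod <- glm(am ~ hp + wt, data = mtcars, family = binomial)

    # Average marginal effects of hp and wt on P(am = 1).
    avg_slopes(mod)

    # Predicted probabilities at a few representative values of wt.
    predictions(mod, newdata = datagrid(wt = c(2, 3, 4)))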
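
The Digitale, Martin & Glymour article explains how DAGs plus a few simple rules identify what to adjust for. The dagitty R package (my choice for illustration; it is not discussed in the article) automates those rules. A small sketch with a made-up graph:

    # Identify sufficient adjustment sets for a made-up DAG using dagitty.
    library(dagitty)

    g <- dagitty("dag {
      confounder -> exposure
      confounder -> outcome
      exposure   -> mediator
      mediator   -> outcome
      exposure   -> outcome
    }")

    # Minimal adjustment sets for the total effect of exposure on outcome;
    # here the answer is { confounder }. The mediator is correctly excluded,
    # since adjusting for it would block part of the effect.
    adjustmentSets(g, exposure = "exposure", outcome = "outcome", effect = "total")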