What I’ve been up to
Recent things I’ve read
This is a short selection of things I’ve been reading, watching, or listening to.
Antiqua et nova. Note on the Relationship Between Artificial Intelligence and Human Intelligence (Dicastery for the Doctrine of the Faith & Dicastery for Culture and Education)
Summary
NA -
Summary
Modern science wouldn’t exist without the online research repository known as arXiv. Three decades in, its creator still can’t let it go. -
The Misplaced Incentives in Academic Publishing (C. Brandon Ogbunu)
Summary
Scientists who spend time peer-reviewing manuscripts don’t get rewarded for their efforts. It’s time to change that. -
The fallacy of the null-hypothesis significance test. (William W. Rozeboom)
Summary
Though several serious objections to the null-hypothesis significance test method are raised, 'its most basic error lies in mistaking the aim of a scientific investigation to be a decision, rather than a cognitive evaluation…. It is further argued that the proper application of statistics to scientific inference is irrevocably committed to extensive consideration of inverse probabilities, and to further this end, certain suggestions are offered.' -
Summary
Search engines, GPS maps and other tech can alter our ability to learn and remember. Now scientists are working out what AI might do. -
Inferring latent learning factors in large-scale cognitive training data (Mark Steyvers & Robert J. Schafer)
Summary
The flexibility to learn diverse tasks is a hallmark of human cognition. To improve our understanding of individual differences and dynamics of learning across tasks, we analyse the latent structure of learning trajectories from 36,297 individuals as they learned 51 different tasks on the Lumosity online cognitive training platform. Through a data-driven modelling approach using probabilistic dimensionality reduction, we investigate covariation across learning trajectories with few assumptions about learning curve form or relationships between tasks. Modelling results show substantial covariation across tasks, such that an entirely unobserved learning trajectory can be predicted by observing trajectories on other tasks. The latent learning factors from the model include a general ability factor that is expressed mostly at later stages of practice and additional task-specific factors that carry information capable of accounting for manually defined task features and task domains such as attention, spatial processing, language and math. -
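The core idea of the abstract above, predicting an entirely unobserved learning trajectory from trajectories on other tasks via a shared latent factor space, can be illustrated with a minimal sketch. This is not the authors' actual model (they use probabilistic dimensionality reduction on real Lumosity data); everything below is synthetic, and all dimensions and variable names are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tasks, n_sessions, k = 200, 5, 10, 3
m = n_tasks * n_sessions  # columns = (task, practice session) pairs

# synthetic low-rank data: every person's trajectories mix k latent factors
U = rng.normal(size=(n_people, k))   # per-person factor loadings
V = rng.normal(size=(k, m))          # per-(task, session) factor weights
M = U @ V + 0.05 * rng.normal(size=(n_people, m))

# pretend person 0 never played the final task: its sessions are unobserved
obs = np.arange(m - n_sessions)
hid = np.arange(m - n_sessions, m)

# learn the shared factor space from everyone else's complete data
_, _, Vt = np.linalg.svd(M[1:], full_matrices=False)
Vk = Vt[:k]  # estimated k x m factor matrix

# infer person 0's loadings from their observed tasks only,
# then predict the never-observed trajectory
u0, *_ = np.linalg.lstsq(Vk[:, obs].T, M[0, obs], rcond=None)
pred = u0 @ Vk[:, hid]
truth = U[0] @ V[:, hid]
err = np.abs(pred - truth).mean()
corr = float(np.corrcoef(pred, truth)[0, 1])
```

Because the trajectories covary through the latent factors, the held-out task is recovered accurately from the observed ones, which is the qualitative result the paper reports at much larger scale.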
The importance of stupidity in scientific research (Martin A. Schwartz)
Summary
NA -
The YouTube Apparatus (Kevin Munger)
Summary
The book’s page on Cambridge Core (Politics: General Interest). -
Visualizing transformers and attention | Talk for TNG Big Tech Day '24 (Grant Sanderson)
Summary
A talk based on the 3blue1brown deep learning series on neural networks. -
Summary
A philosophy essay in which David Papineau argues that it is crucial for scientists to start heeding the lessons of Thomas Bayes. -
The Depths of Wikipedians (Anne Rauwerda)
Summary
A conversation about yogurt wars, German hymns, tropical cyclones, and the people who make Wikipedia function. -
Summary
As a new professor, I was caught off guard by one part of the job: my role as an evaluator. -
The World John von Neumann Built (David Nirenberg)
Summary
Game theory, computers, the atom bomb—these are just a few of the things von Neumann played a role in developing, changing the 20th century for better and worse. -
What Is Entropy? A Measure of Just How Little We Really Know. (Zack Savitsky)
Summary
Exactly 200 years ago, a French engineer introduced an idea that would quantify the universe’s inexorable slide into decay. But entropy, as it’s currently understood, is less a fact about the world than a reflection of our growing ignorance. Embracing that truth is leading to a rethink of everything from rational decision-making to the limits of machines. -
How to Interpret Statistical Models Using marginaleffects for R and Python (Vincent Arel-Bundock, Noah Greifer, & Andrew Heiss)
Summary
The parameters of a statistical model can sometimes be difficult to interpret substantively, especially when that model includes nonlinear components, interactions, or transformations. Analysts who fit such complex models often seek to transform raw parameter estimates into quantities that are easier for domain experts and stakeholders to understand. This article presents a simple conceptual framework to describe a vast array of such quantities of interest, which are reported under imprecise and inconsistent terminology across disciplines: predictions, marginal predictions, marginal means, marginal effects, conditional effects, slopes, contrasts, risk ratios, etc. We introduce marginaleffects, a package for R and Python which offers a simple and powerful interface to compute all of those quantities, and to conduct (non-)linear hypothesis and equivalence tests on them. marginaleffects is lightweight and extensible; it works well in combination with other R and Python packages; and it supports over 100 classes of models, including linear, generalized linear, generalized additive, mixed effects, Bayesian, and several machine learning models. -
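One of the quantities the abstract names, the average slope (often called the average marginal effect), can be computed from first principles. The sketch below deliberately avoids the marginaleffects API: it fits a small logistic model with an interaction by Newton–Raphson and averages the finite-difference slope of the predicted probability over the sample. All data and coefficients are invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
z = rng.binomial(1, 0.5, size=n).astype(float)

# data-generating logistic model with an interaction, so the raw
# coefficients are hard to interpret directly
eta_true = -0.5 + 1.2 * x + 0.8 * z + 0.5 * x * z
y = rng.binomial(1, 1 / (1 + np.exp(-eta_true))).astype(float)

X = np.column_stack([np.ones(n), x, z, x * z])
beta = np.zeros(4)
for _ in range(25):  # Newton-Raphson iterations for the logistic MLE
    mu = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - mu)
    H = X.T @ (X * (mu * (1 - mu))[:, None])
    beta += np.linalg.solve(H, grad)

# average marginal effect of x: mean dP/dx, by central finite differences
# (perturbing x also perturbs the x*z interaction column)
h = 1e-5
Xp = X.copy(); Xp[:, 1] = x + h; Xp[:, 3] = (x + h) * z
Xm = X.copy(); Xm[:, 1] = x - h; Xm[:, 3] = (x - h) * z
pp = 1 / (1 + np.exp(-Xp @ beta))
pm = 1 / (1 + np.exp(-Xm @ beta))
ame_fd = float(np.mean((pp - pm) / (2 * h)))

# closed form for comparison: dP/dx = (b1 + b3*z) * mu * (1 - mu)
mu = 1 / (1 + np.exp(-X @ beta))
ame_cf = float(np.mean((beta[1] + beta[3] * z) * mu * (1 - mu)))
```

The two routes agree, and the result is a single probability-scale number that is far easier to communicate than the four logit-scale coefficients; the package automates exactly this kind of transformation (plus standard errors) across many model classes.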
Tutorial on directed acyclic graphs (Jean C. Digitale, Jeffrey N. Martin, & Medellena Maria Glymour)
Summary
Directed acyclic graphs (DAGs) are an intuitive yet rigorous tool to communicate about causal questions in clinical and epidemiologic research and inform study design and statistical analysis. DAGs are constructed to depict prior knowledge about biological and behavioral systems related to specific causal research questions. DAG components portray who receives treatment or experiences exposures; mechanisms by which treatments and exposures operate; and other factors that influence the outcome of interest or which persons are included in an analysis. Once assembled, DAGs — via a few simple rules — guide the researcher in identifying whether the causal effect of interest can be identified without bias and, if so, what must be done either in study design or data analysis to achieve this. Specifically, DAGs can identify variables that, if controlled for in the design or analysis phase, are sufficient to eliminate confounding and some forms of selection bias. DAGs also help recognize variables that, if controlled for, bias the analysis (e.g., mediators or factors influenced by both exposure and outcome). Finally, DAGs help researchers recognize insidious sources of bias introduced by selection of individuals into studies or failure to completely observe all individuals until study outcomes are reached. DAGs, however, are not infallible, largely owing to limitations in prior knowledge about the system in question. In such instances, several alternative DAGs are plausible, and researchers should assess whether results differ meaningfully across analyses guided by different DAGs and be forthright about uncertainty. DAGs are powerful tools to guide the conduct of clinical research.
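The two adjustment rules the abstract highlights, control for confounders but not for mediators, can be checked numerically in a minimal simulation. This is a sketch on entirely synthetic data (not code from the tutorial), using ordinary least squares as the adjustment method:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
ones = np.ones(n)

# DAG 1 (confounding): z -> x, z -> y, and x -> y with true effect 1.0
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 1.0 * x + 1.5 * z + rng.normal(size=n)

naive, *_ = np.linalg.lstsq(np.column_stack([ones, x]), y, rcond=None)
adjusted, *_ = np.linalg.lstsq(np.column_stack([ones, x, z]), y, rcond=None)
# naive[1] is biased upward; adjusted[1] recovers ~1.0

# DAG 2 (mediation): x2 -> m -> y2; true total effect of x2 on y2 is 0.9.
# Controlling for the mediator m blocks the very effect we want.
x2 = rng.normal(size=n)
m = 1.0 * x2 + rng.normal(size=n)
y2 = 0.9 * m + rng.normal(size=n)

total, *_ = np.linalg.lstsq(np.column_stack([ones, x2]), y2, rcond=None)
blocked, *_ = np.linalg.lstsq(np.column_stack([ones, x2, m]), y2, rcond=None)
# total[1] ~ 0.9 (correct); blocked[1] ~ 0 (effect wrongly removed)
```

The same covariate operation, "control for a variable", fixes the estimate in the first DAG and destroys it in the second, which is why the graph, not a generic statistical rule, has to decide the adjustment set.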