Thoughtful statistical consulting for clear, defensible decisions
I work with industry teams on Bayesian modeling, causal inference, data science systems, and analytics workflow automation. I am most useful when teams face high-stakes decisions, messy data, and reporting processes that need to become clearer, faster, and more reliable.
I help teams reason under uncertainty, compare interventions, design analyses, and communicate what a model can and cannot tell us.
I clean, structure, visualize, and model operational datasets. My background includes machine learning, stochastic processes, control theory, Fourier analysis, and mathematical modeling.
I build repeatable workflows with tools such as Quarto, R, Python, and AI-assisted document pipelines. Recent automation work includes translation workflows for instructional documents that preserve formatting while reducing repetitive labor.
My data science work is guided by reproducibility, interpretability, openness, respect for privacy, and human flourishing. I am especially interested in tools that reduce drudgery, improve understanding, and help people make better decisions.
With the team at Thermo Fisher, I developed a machine learning model to assess RNA quality using their equipment. The work led to a conference poster presentation, which can be found here.
As the John W. Allender Associate Professor of Data Ethics, I direct the Data Science Innovation Lab (DSIL) at Washington College. The lab anchors my consulting work academically: I apply the same rigor, documentation standards, and delivery discipline in both settings.
My work on the mathematically optimal way to cut an onion has been cited by The New York Times, The Pudding, the Daily Mail, and outlets around the world. The project is a useful example of how I work: take a concrete question, build a mathematical model, explain the assumptions, and communicate the result clearly.