About Me

Robert Carson
Working at the intersection of HPC systems and solid mechanics. Builder of open-source scientific software in Rust and C++. Landscape photographer.

Over the course of my career I have found myself following a consistent thread: building tools rigorous enough to actually predict how real materials behave, fast enough to be useful at scale, and open enough that others can build on them. That thread started in an undergraduate biomechanics lab, ran through a PhD developing new computational and experimental methods for characterizing how deformation evolves within crystal grains under cyclic loading, and has continued at Lawrence Livermore National Laboratory where I lead development of GPU-accelerated crystal plasticity software and drive material qualification efforts at exascale. Outside of work I take real pleasure in spending time with my family and pets, hiking, and shooting landscape photography.

The Early Path

My first hands-on research experience was as an undergraduate researcher working on surface-porous polyether-ether-ketone (PEEK) for orthopedic implants. That work was about engineering a thin porous surface layer onto high-strength PEEK to promote bone ingrowth without sacrificing the bulk mechanical properties needed for load-bearing applications. It was an early exposure to the challenge of connecting processing choices to material outcomes through a chain of physical mechanisms, an approach that continues to shape how I think about validation work today.

For graduate school my undergraduate advisor pointed me toward Cornell, which had strong groups working on computational methods in materials science. It happened that one of the projects there centered on fatigue, one of the genuinely hard and still unsolved problems in engineering. The first published fatigue study appeared in 1837, on mining conveyor chains that failed in service. By the end of the 20th century more than 100,000 papers had been written on the subject, yet the problem remains open: fatigue failure depends on a wide range of interacting factors, including residual stress, thermal processing, surface condition, environment, and, at the most fundamental level, the microstructure of the material itself.

PhD Work at Cornell

My thesis centered on a specific question: how does deformation distribute itself heterogeneously within individual crystal grains in a polycrystalline material under cyclic loading, and what does that heterogeneity tell us about where fatigue cracks will eventually initiate?

In FCC metals like copper, cyclic loading produces surface features called persistent slip bands: narrow lamella-like structures roughly a micron wide where plastic slip localizes and accumulates irreversibly over many cycles. The intrusions and extrusions these bands produce along free surfaces become Stage I crack initiation sites. Almost all of the existing research on persistent slip bands had been done on single crystals, where the geometry is simple. In a polycrystalline material, grains interact mechanically with their neighbors, creating local stress and strain states that vary from grain to grain and within each grain. My thesis proposed that in polycrystals, these features are better understood as three-dimensional persistent slip networks that cross grain boundaries.

Pursuing that hypothesis computationally required three distinct things that did not fully exist. The first was a set of dimensionality reduction methods for connecting the kinematics of deformation to what synchrotron diffraction experiments actually measure, building a rigorous simulation-experiment comparison framework at the intragrain scale rather than relying on coarse grain averages that hide the interesting physics. The second was the Lattice Orientation Finite Element Method (LOFEM), which represents the crystal lattice orientation as a continuous field within a grain rather than as discontinuous values at integration points. That continuity is essential: it is what allows dislocations and slip to travel smoothly across a grain in the simulation in a physically meaningful way. The third was a graph-theoretic framework for tracking how slip propagates across grain boundaries and forms connected networks spanning the polycrystal. Applied to simulations of OFHC copper under fully reversed cyclic loading, this framework showed slip networks forming in the first cycle, with a subset persisting from cycle to cycle, directly analogous to persistent slip bands in single crystals but now visible in a polycrystalline aggregate.
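
To give a flavor of the graph-theoretic piece, here is a minimal sketch of the core idea in Python. The data structures and the slip-transmission criterion are hypothetical stand-ins rather than the actual thesis implementation: grains are nodes, an edge joins two grains when slip is active on both sides of their shared boundary, and connected components of the resulting graph are candidate slip networks.

```python
# Minimal sketch of the graph-theoretic slip-network idea.
# All names and the transmission criterion are hypothetical
# stand-ins, not the actual thesis implementation.
import networkx as nx

def slip_networks(grain_slip, boundaries, threshold=0.01):
    """Group grains into connected slip networks.

    grain_slip : dict mapping grain id -> accumulated slip magnitude
    boundaries : iterable of (grain_a, grain_b) shared-boundary pairs
    threshold  : minimum slip on both sides for transmission (made up)
    """
    g = nx.Graph()
    g.add_nodes_from(grain_slip)
    for a, b in boundaries:
        # Add an edge only when slip is active on both sides of the
        # boundary, i.e. slip can plausibly transmit across it.
        if grain_slip[a] > threshold and grain_slip[b] > threshold:
            g.add_edge(a, b)
    # Each connected component spanning more than one grain is a
    # candidate persistent slip network.
    return [c for c in nx.connected_components(g) if len(c) > 1]

# Toy usage: three grains, slip transmits across the 0-1 boundary only.
networks = slip_networks({0: 0.05, 1: 0.04, 2: 0.001},
                         boundaries=[(0, 1), (1, 2)])
print(networks)  # [{0, 1}]
```

Tracking which of these components reappear cycle after cycle is then a matter of comparing the component sets between cycles, which is how the persistence question gets posed in graph terms.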

Those computationally identified persistent slip networks were later confirmed experimentally, with studies on cyclically loaded materials observing the same network structures through EBSD methods years after the simulations predicted them. That kind of delayed experimental confirmation is one of the more satisfying outcomes a simulation-first study can produce.

The experimental side of the thesis was built in close collaboration with Professor Matthew Miller and Dr. Mark Obstalecki at Cornell using far-field and near-field high-energy X-ray diffraction at the Cornell High Energy Synchrotron Source and the Advanced Photon Source at Argonne. That collaboration reinforced something I have come to believe strongly: the most discriminating tests of constitutive model physics come from combining modelling and experiment rather than validating against macroscopic stress-strain curves alone.

LLNL: ExaAM and Exascale Crystal Plasticity

After Cornell I joined Lawrence Livermore National Laboratory, where my work pivoted to making crystal plasticity simulations run on GPUs at scale. The ExaAM project, a multi-institution effort within the DOE Exascale Computing Project, needed a crystal plasticity finite element code capable of running on leadership-class GPU machines. No open-source code existed that could do this in a serious way, so I led the development of ExaConstit from scratch with GPU execution as a first-class target from day one.

ExaConstit sits at the core of the ExaAM simulation pipeline, which connects melt pool thermodynamics through solidification microstructure evolution to crystal plasticity finite element analysis, enabling end-to-end prediction of process-structure-property relationships in laser powder bed fusion additive manufacturing. The scientific goal driving this pipeline is certification: for safety-critical applications you need to know the local mechanical properties of a printed part before you have tested thousands of them. The culminating run of the project was a world-first uncertainty quantification study on ORNL's Frontier system, in which I ran 7,850 high-fidelity simulations across 8,000 nodes, showing how uncertainty propagation at lower length scales drives variation in local properties across an additively manufactured part. The mean predicted yield stress landed within 5% of the experimental mean, achieved through physics-based prediction without empirical fitting to the benchmark data.

The ExaAM work also expanded into BCC crystal plasticity. Working with colleagues, I contributed to developing constitutive models for tantalum informed by large-scale molecular dynamics simulations and validated against dynamic plate-impact hole closure experiments at synchrotron facilities. The connection back to my thesis work is direct: orientation-resolved single-crystal experiments remain the most diagnostic test of constitutive model physics, whether the loading is quasi-static cyclic or a plate impact at strain rates above 10^5/s.

Leading the Internal Material Model Library

Separate from the open source projects, I have been the code lead for LLNL’s internal material model library, which has over 600 users across the lab. This is a long-running production library with a significantly larger user base and more constrained change process than any of the open source work. Leading the team has required a different skill set than research software development: architecting GPU porting efforts across multiple hardware architectures simultaneously, supporting application teams whose competing needs pull the code in different directions, adding new material models while maintaining backward compatibility, and modernizing a codebase that accumulated substantial complexity over many years. The lessons from this work, particularly the discipline required to refactor carefully in a high-stakes production environment, fed directly into how I approached the later modernization efforts on SNLS, ExaCMech, and ExaConstit.

Automatic Differentiation for Material Models

One area I have been actively pushing is the application of automatic differentiation to material model development. The core problem is well known to anyone who has implemented a crystal plasticity model: the consistent material tangent, the Jacobian of stress with respect to the strain increment that finite element codes need for Newton-Raphson convergence, is notoriously tedious to derive analytically for complex constitutive laws. For simple models closed-form expressions exist, but as physical fidelity increases, deriving and then correctly implementing the full tangent for models with many internal state variables and nonlinear kinetics becomes a significant bottleneck on how quickly new models can be developed and validated.

Tools like Enzyme and JAX are changing this. Enzyme operates directly at the LLVM compiler IR level, meaning it can differentiate existing C++, Rust, and Fortran code without source rewrites and produce gradients that are as fast as hand-coded derivatives because the differentiation happens after compiler optimization rather than before it. JAX approaches the problem from a Python-first direction and has already shown up in differentiable crystal plasticity finite element codes in the research community, where it eliminates the need to manually derive case-by-case Jacobians entirely. The working group I have been leading at LLNL is exploring how both of these tools can be systematically integrated into our material model workflows, with the goal of decoupling the time it takes to implement a new physical idea from the time it takes to get a correct and performant Jacobian for it.
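
To make that concrete, here is a minimal JAX sketch of the idea. The stress update below is a deliberately trivial linear-elastic toy with made-up parameters, not one of our production models; the point is that for any differentiable update, however nonlinear, the consistent tangent comes from the same one-line jacfwd call rather than a hand derivation.

```python
# Minimal sketch: a consistent tangent via JAX forward-mode AD.
# The stress update is a toy linear-elastic model standing in
# for a real (far more complex) constitutive law.
import jax
import jax.numpy as jnp

LAM, MU = 100.0, 75.0  # made-up Lame parameters

def stress_update(deps, sigma_old):
    """Updated stress for a strain increment (both 3x3 tensors)."""
    dsig = LAM * jnp.trace(deps) * jnp.eye(3) + 2.0 * MU * deps
    return sigma_old + dsig

# Consistent tangent d(sigma_new)/d(deps): a 3x3x3x3 tensor obtained by
# differentiating the update itself, with no hand-derived Jacobian.
tangent = jax.jacfwd(stress_update, argnums=0)

C = tangent(1e-3 * jnp.eye(3), jnp.zeros((3, 3)))
print(C.shape)        # (3, 3, 3, 3)
print(C[0, 0, 0, 0])  # lam + 2*mu = 250.0
```

For a model with internal state variables the update function simply carries that state along, and the same call returns the tangent with the state update folded in.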

AI and LLM Coding Agents

A significant shift in how I think about this broader class of problems, including material modelling, code development, and research acceleration, began with the joint DOE AI1000 event in March 2025, which brought together several DOE labs with OpenAI and Anthropic. During that event I pushed the first round of reasoning models from both organizations hard with PhD-level mechanics questions, probing their ability to connect ideas across constitutive modelling, nonlinear FEM formulations, GPU algorithm design, and related fields. What stood out was not just the depth of answers within a domain but the breadth of connections the models would draw to adjacent areas, occasionally surfacing relationships or literature I would not have reached quickly on my own. That cross-field synthesis, pulling in knowledge from fields you are not actively tracking, is one of the more underappreciated capabilities of frontier reasoning models for research work.

Following that event I began exploring how these tools could be put to work more systematically. The most concrete early result was using Anthropic’s Claude models to drive a major refactor of ExaConstit, using the project feature and GitHub integration to maintain persistent records of the conversations that shaped the changes. The refactor resulted in roughly 35,000 lines added and 12,000 lines removed, producing a more sustainable architecture, greatly increased documentation, and features I had been sketching out for years but lacked the bandwidth to tackle. The key insight from that process was treating the AI as a coding agent operating at roughly a junior developer level: genuinely useful for driving feature development faster and for pulling in patterns and techniques from across the software engineering literature, but requiring steady oversight, code review, and occasional redirection when it reached for an outdated technique or made a design choice that did not fit the existing architecture. The productivity gain is real, but so is the supervision cost.

From that experience I began teaching those around me how to use these tools effectively, helping the team make the best of their limited bandwidth across projects. I have also been leading efforts to examine how LLM-based coding agents can be applied more broadly to material modelling workflows, from accelerating the implementation of new constitutive models to helping researchers unfamiliar with GPU programming get started with performance-portable code.

Open Source Work

At LLNL I oversee several open source projects that form an interconnected stack: the nonlinear solver library SNLS, the crystal mechanics constitutive library ExaCMech, and the crystal plasticity finite element code ExaConstit. SNLS provides the nonlinear solvers that ExaCMech runs at the material point level, and ExaCMech provides the material models that ExaConstit uses at every quadrature point in its meshes. All three are GPU-capable and have been deployed at exascale.
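
To make that layering concrete, here is a schematic Python sketch of how the three levels relate. Every name and equation in it is a hypothetical stand-in, not the real SNLS, ExaCMech, or ExaConstit interfaces (which are C++): a small Newton solver at the bottom, a material update that drives it in the middle, and a loop over quadrature points on top.

```python
# Schematic sketch of the three-level stack. Every name below is a
# hypothetical stand-in, not the real SNLS / ExaCMech / ExaConstit API.

def newton_solve(residual, dresidual, x0, tol=1e-10, max_iter=25):
    """SNLS level: a small nonlinear solver run at each material point."""
    x = x0
    for _ in range(max_iter):
        r = residual(x)
        if abs(r) < tol:
            break
        x -= r / dresidual(x)
    return x

def material_update(dstrain, stress_old):
    """ExaCMech level: toy 1D elastoplastic update (made-up constants)
    whose plastic increment dp comes from a Newton solve."""
    E, H, s0 = 200.0, 10.0, 1.0      # modulus, hardening slope, yield
    trial = stress_old + E * dstrain  # elastic trial stress
    if abs(trial) <= s0:
        return trial                  # still elastic, no solve needed
    # Consistency condition for linear hardening: |trial| - E*dp = s0 + H*dp
    residual = lambda dp: abs(trial) - E * dp - (s0 + H * dp)
    dresidual = lambda dp: -(E + H)
    dp = newton_solve(residual, dresidual, x0=0.0)
    return trial - E * dp * (1.0 if trial > 0 else -1.0)

def fe_material_loop(dstrains, stresses):
    """ExaConstit level: evaluate the model at every quadrature point."""
    return [material_update(de, s) for de, s in zip(dstrains, stresses)]

print(fe_material_loop([5e-3, 2e-2], [0.0, 0.0]))  # one elastic, one plastic
```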

On the personal side I maintain a set of Rust libraries for scientific computing. I started working in Rust well before most of my C++ library work, initially drawn to it by its memory safety guarantees after being bitten one too many times by Fortran silently accepting array arguments whose runtime shapes did not match their declared interfaces, among other things. Since then it has been a natural playground for testing out new ideas and for building fast Python libraries through PyO3 bindings. The current set includes mori for crystallographic orientation representations, rust_data_reader for fast scientific data parsing, and HelixSnail for small nonlinear solvers. Several Rust crates also live inside the ExaConstit repository itself, handling microstructure voxel coarsening and diffraction-based post-processing in the ExaAM pipeline.

Disclaimer

Any thoughts or opinions expressed throughout this website are my own and do not represent those of my employer, Lawrence Livermore National Laboratory (LLNL).

AI Usage

I have used AI to help generate aspects of this website. The feature and background photos were generated using Google's Gemini, which I found did a decent job of capturing the complex topics each article discusses.

I also iterated with Anthropic's Claude to create the various articles in the Papers and Projects sections. I largely acted as an editor and driver for each article, having Claude draft the pieces I had been meaning to write. I chose this route because I value spending time with my family and pets over working through every evening and weekend, and these tools have helped me make the best of the limited time I put into this website. I will note that I have reviewed everything written and iterated quite a bit on getting things how I like them. Sadly, this is still faster than my usual writing process, as I'm quite slow and very careful, as seen in any of my authored journal articles…