To weed out error quickly, scientific research must be reproducible. Because computational research is difficult to reproduce without access to code, computational scientists should provide the code used to generate all numerical data and figures in their papers. Papers whose code can regenerate their results are replicable. In this project, I assess the replicability of eleven computational physics papers by attempting to compile and run the provided code, then comparing its output with the figures and data in each manuscript. I hypothesize that computational physics papers do not provide code capable of replicating their results, and that considerable effort is required to achieve replication. I find that, of the eleven papers, none achieved complete replication and only four achieved partial replication; the remaining seven provided code whose output bore no resemblance to the data in the paper. I conclude that code is written in an ad hoc fashion, with little thought given to replicability or continued maintenance.
University of California Irvine
Research Advisor: Dr. Daniel Katz
Department:
Year of Publication: