Scientists from NCCR MARVEL led the most comprehensive verification effort so far on computer codes for materials simulations, providing their colleagues with a reference dataset and guidelines for assessing and improving existing and future codes.
For the past few decades, physicists and materials scientists around the world have been developing computer codes that simulate the key properties of materials. Researchers can now choose from a whole family of such tools, which they use to publish tens of thousands of scientific articles per year.
Understanding Density-Functional Theory (DFT)
These codes are typically based on density-functional theory (DFT), a modeling method that uses several approximations to reduce the otherwise mind-boggling complexity of calculating the behavior of each individual electron according to the laws of quantum mechanics. The differences between the results obtained with various codes come down to the numerical approximations being made, and the choice of the numerical parameters behind those approximations, often tailored to study specific classes of materials, or to calculate properties that are key for specific applications – say, conductivity for potential battery materials.
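For readers curious about the mathematics behind these codes, most of them ultimately solve the Kohn-Sham equations of DFT, shown below in their standard textbook form (this is general background, not notation taken from the new study):

```latex
\[
\Big[ -\tfrac{\hbar^{2}}{2m}\nabla^{2}
      + v_{\mathrm{ext}}(\mathbf{r})
      + v_{\mathrm{H}}[n](\mathbf{r})
      + v_{\mathrm{xc}}[n](\mathbf{r}) \Big]\,
\psi_{i}(\mathbf{r}) = \varepsilon_{i}\,\psi_{i}(\mathbf{r}),
\qquad
n(\mathbf{r}) = \sum_{i \in \mathrm{occ}} \lvert \psi_{i}(\mathbf{r}) \rvert^{2}.
\]
```

Every DFT code solves these equations self-consistently; what distinguishes one code from another is how the orbitals are represented numerically (basis sets, pseudopotentials) and how tightly the associated numerical parameters are converged, which is exactly where discrepancies between codes can creep in.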
Challenges in Code Verification
Given the complexity of these codes, it is very difficult to make sure that all of them are free of coding errors and do not rely on numerical approximations that are too coarse. But it is crucial for the community to verify that the results from different codes are comparable, consistent with each other, and reproducible.
In a new article published today (November 14) in Nature Reviews Physics, a large group of scientists has carried out the most comprehensive verification effort so far on solid-state DFT codes and provided their colleagues with the tools and a set of guidelines for assessing and improving existing and future codes.
The work builds on a previous study published in Science in 2016, which compared 40 computational approaches by using each of them to calculate the energies of a test set of 71 crystals, one for each element of the periodic table, and concluded that the mainstream codes were in very good agreement with each other.
Expanded Chemical Diversity
“That work was reassuring, but it did not really explore enough chemical diversity,” says Giovanni Pizzi, leader of the Materials Software and Data Group at the Paul Scherrer Institute PSI in Villigen (Switzerland), and corresponding author of the new paper. “In this study, we considered 96 elements, and for each of them we simulated ten possible crystal structures.”
In particular, for each of the first 96 elements of the periodic table, they studied four different unaries, that is, crystals made only of atoms of the element itself, and six different oxides, which also contain oxygen atoms. The result is a dataset of 960 materials and their properties, calculated by two independent, state-of-the-art DFT codes called FLEUR and WIEN2k. Both are “all-electron” (AE) codes, meaning that they explicitly treat all the electrons in the atoms under consideration.
Benchmark Dataset for Code Testing
That dataset can now be used by anyone as a benchmark to test the precision of other codes, in particular those based on pseudopotentials, where, unlike in all-electron codes, the electrons that do not participate in chemical bonding are treated in a simplified way to make the computation lighter.
“We actually have already started to improve nine such codes in our paper, comparing their results to those in our dataset, measuring the discrepancies, and adjusting their numerical parameters (such as the pseudopotentials) accordingly,” explains Pizzi.
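To give a concrete flavor of what “measuring the discrepancies” involves, the sketch below compares the energy-versus-volume curves that two codes produce for the same crystal and reports a root-mean-square difference. The numbers and the plain RMS measure are hypothetical placeholders for illustration; the paper defines its own, more refined comparison metric.

```python
import numpy as np

def rms_discrepancy(energies_a, energies_b):
    """Root-mean-square difference between two energy-vs-volume curves.

    Both curves must be sampled on the same volume grid, and each is
    shifted so that its minimum is zero, since only energy differences
    are physically meaningful. This is a simplified stand-in for the
    comparison metric used in the paper.
    """
    e_a = np.asarray(energies_a) - np.min(energies_a)
    e_b = np.asarray(energies_b) - np.min(energies_b)
    return float(np.sqrt(np.mean((e_a - e_b) ** 2)))

# Hypothetical energies (eV/atom) for one crystal at seven volumes, as they
# might come out of an all-electron reference code and a pseudopotential code.
reference_code = [-3.10, -3.18, -3.22, -3.23, -3.21, -3.17, -3.12]
pseudo_code    = [-3.09, -3.17, -3.22, -3.23, -3.20, -3.16, -3.10]

print(f"RMS discrepancy: {rms_discrepancy(reference_code, pseudo_code):.4f} eV/atom")
```

If such a discrepancy exceeds an acceptable tolerance, the pseudopotential or other numerical parameters of the code under test can be adjusted and the comparison repeated.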
Recommendations and Future Directions
The study also includes a series of recommendations for users of DFT codes on how to make sure that their computational studies are reproducible, how to use the reference dataset in future verification studies, and how to expand it to cover other families of codes and other materials properties.
“We hope our dataset will be a reference for the field for years to come,” says Pizzi, who is one of the nine MARVEL researchers who authored the study, together with Marnik Bercx, Kristjan Eimre, Sebastiaan Huber, Matthias Krack, Nicola Marzari, Aliaksandr Yakutovich, Jusong Yu, and Austin Zadoks.
Supporting Computational Frameworks
The study also provides an environment for future verification studies through AiiDA, the open-source computational framework developed by the National Centre for Competence in Research (NCCR) MARVEL, which supported the work and in which Pizzi is a project leader, and by the European Centre of Excellence MaX. “AiiDA allows us to write the same instruction in the same way for 11 different codes, for example, the request to compute a specific structure,” says Pizzi. “It can then run the calculation for you and select the right numerical parameters for each.”
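The “same instruction for 11 different codes” idea is what AiiDA’s common-workflows interface provides: a single declarative request that each code-specific plugin translates into its own inputs and numerical parameters. Below is a minimal Python sketch modeled on that interface; the entry-point names, code labels, and builder arguments are assumptions made for illustration and may not match a given installation verbatim.

```python
# Sketch of dispatching one uniform "relax this structure" request to several
# DFT engines via AiiDA common workflows. Names below are illustrative
# assumptions (profile, node id, code labels), not a copy-paste recipe.
from aiida import load_profile, orm
from aiida.engine import submit
from aiida.plugins import WorkflowFactory

load_profile()                   # assumes a configured AiiDA profile
structure = orm.load_node(1234)  # hypothetical StructureData node

for engine in ("quantum_espresso", "fleur", "abinit"):
    workchain = WorkflowFactory(f"common_workflows.relax.{engine}")
    builder = workchain.get_input_generator().get_builder(
        structure=structure,
        engines={"relax": {"code": f"{engine}@localhost",  # hypothetical code label
                           "options": {"resources": {"num_machines": 1}}}},
        protocol="moderate",     # the generator picks sensible numerical parameters
    )
    submit(builder)
```

The scientific request stays identical across engines while each plugin chooses code-appropriate inputs, which is what makes systematic cross-code verification practical.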
In addition to expanding the reference dataset with more structures, Pizzi says that in the future he hopes to take into account not only how accurate the different codes are, but also how expensive they are in terms of time and computational power, thus helping scientists find the most cost-efficient parameters for their calculations.
Reference: “How to verify the precision of density-functional-theory implementations via reproducible and universal workflows”, 14 November 2023, Nature Reviews Physics.
DOI: 10.1038/s42254-023-00655-3
Funding: Swiss National Science Foundation