This is the implementation of the paper "Diffusion-based Deep Learning for Sub-ångström Resolution Imaging with an Uncorrected Scanning Transmission Electron Microscopy".
Achieving sub-ångström resolution has long been restricted to sophisticated aberration-corrected scanning transmission electron microscopy (AC-STEM). Recent advances in computational super-resolution techniques have enabled uncorrected STEM to reach sub-ångström resolution without delicate aberration correctors; however, these methods impose strict requirements on sample thickness and have therefore not seen widespread adoption. In this study, we introduce SARDiffuse, a deep-learning diffusion model designed to enhance the spatial resolution and reduce the noise level of uncorrected STEM images. Trained on experimental AC-STEM data, SARDiffuse can restore the high-frequency information of STEM images, enabling sub-ångström resolution on an uncorrected microscope. We demonstrate the model's effectiveness on representative materials, including silicon (Si), strontium titanate (STO), and gallium nitride (GaN), achieving substantial improvements in spatial resolution (to below 1 Å). Detailed statistical analysis confirms that SARDiffuse reliably preserves atomic positions, demonstrating that it is a powerful tool for high-precision material characterization. Furthermore, SARDiffuse effectively mitigates spherical-aberration-induced artifacts, outperforming current methods in artifact correction, while background information in the images, such as thickness variations or carbon contamination, is also preserved. This work highlights the potential of deep learning to realize sub-ångström resolution imaging on an uncorrected electron microscope, offering a cost-effective alternative to delicate AC-STEM when imaging conventional single crystals.
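To make the restoration workflow concrete, the sketch below shows how a conditional diffusion model can, in principle, restore an uncorrected STEM image by running a DDPM-style reverse process conditioned on that image. This is a minimal, hypothetical illustration, not the released SARDiffuse code: the network (`TinyDenoiser`), the `restore` sampler, the 50-step noise schedule, and all hyperparameters are placeholders chosen only to show the general idea described in the abstract.

```python
# Hypothetical sketch of conditional diffusion-based image restoration.
# NOT the SARDiffuse implementation; all names and hyperparameters are illustrative.
import torch
import torch.nn as nn


class TinyDenoiser(nn.Module):
    """Placeholder noise-prediction network: takes the noisy sample x_t,
    the conditioning (uncorrected STEM) image, and a timestep fraction."""

    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.SiLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.SiLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x_t, cond, t_frac):
        # Broadcast the scalar timestep fraction to a feature map and concatenate.
        t_map = t_frac.view(-1, 1, 1, 1).expand_as(x_t)
        return self.net(torch.cat([x_t, cond, t_map], dim=1))


@torch.no_grad()
def restore(model, cond, steps=50):
    """DDPM-style ancestral sampling conditioned on the uncorrected image."""
    betas = torch.linspace(1e-4, 2e-2, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn_like(cond)  # start from pure noise
    for t in reversed(range(steps)):
        t_frac = torch.full((cond.shape[0],), t / steps)
        eps = model(x, cond, t_frac)  # predicted noise at step t
        # Posterior mean of x_{t-1} given x_t (standard DDPM update rule).
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x


if __name__ == "__main__":
    # Example: restore a single 128x128 uncorrected STEM patch (random data here).
    model = TinyDenoiser()
    uncorrected = torch.rand(1, 1, 128, 128)  # stand-in for a real STEM image
    restored = restore(model, uncorrected)
    print(restored.shape)  # torch.Size([1, 1, 128, 128])
```

In an actual pipeline, the denoiser would be a much larger network trained on pairs of uncorrected and experimental AC-STEM images so that the reverse process learns to reintroduce the high-frequency (sub-ångström) detail that the uncorrected optics suppress.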