The implementation of SRNet ×2 on memristor hardware. Credit: Nature Communications (2025). DOI: 10.1038/s41467-025-66240-7
In a Nature Communications study, researchers from China have developed an error-aware probabilistic update (EaPU) method that aligns memristor hardware’s noisy weight updates with neural network training, cutting training energy by nearly six orders of magnitude versus GPUs while boosting accuracy on vision tasks. The study validates EaPU on a 180 nm memristor array and in large-scale simulations.
Analog in-memory computing with memristors promises to overcome digital chips’ energy bottlenecks by performing matrix operations via physical laws. Memristors are devices that combine memory and processing like brain synapses.
Inference on these systems works well, as shown by IBM and Stanford chips. But training deep neural networks hits a snag: “writing” errors when setting memristor weights.
Think of it like trying to fine-tune a faulty radio dial: the intended tiny nudge gets lost in noise from programming tolerances and post-write drift, so weights jump wildly instead of settling precisely as backpropagation demands. This stochastic mess destabilizes learning. EaPU sidesteps it by skipping most small tweaks and scaling up the rest to match the hardware’s quirks, cutting writes by more than 99%.
Phys.org spoke with the lead authors, Jinchang Liu and Tuo Shi of Zhejiang Lab, and Qi Liu of Fudan University, who detailed how EaPU bridges this gap.
“Numerous studies have confirmed that memristor-based in-memory computing exhibits significant energy efficiency,” said the team.
However, they noted that training still faces bottlenecks such as stability and hyperparameter sensitivity, with write errors that grow over time. EaPU tackles the core algorithm-hardware mismatch head-on.

Challenge of analog training. a: Limitations of memristor-based calculations. b: Distinction between the analog and algorithm-based training processes. Credit: J. Liu et al/Nature Communications. DOI: 10.1038/s41467-025-66240-7.
The algorithm-hardware mismatch
In standard neural network training, backpropagation makes small, precise adjustments to the weights. It computes gradients of the loss function, propagates errors backward layer by layer, and steadily nudges synaptic strengths to minimize prediction error.
Memristors are unpredictable devices, however. Conductance values fluctuate randomly because of programming errors and device relaxation, a phenomenon that makes stored values drift over time.
“We observed in numerous experiments that the update (writing) error of memristors increases over time, which is mainly affected by the device relaxation characteristics,” explained the team.
“This issue impairs training stability and performance; therefore, we conducted this study to address the mismatch between the device update error and the weight update magnitude of the algorithm.”
The researchers found that the desired weight updates in neural networks are typically 10 to 100 times smaller than the noise inherent in memristor devices. Previous approaches either sacrificed precision for speed or consumed excessive energy trying to force devices into exact states—neither solution proved satisfactory for large-scale deployment.
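To see why that scale mismatch matters, here is a rough, hypothetical illustration in Python (the specific numbers are assumptions chosen only to reflect the reported 10- to 100-fold gap, not values from the paper): when the write noise is far larger than the intended update, the change actually programmed into the device is dominated by noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scales: the paper reports desired updates roughly 10-100x
# smaller than the device write noise; these exact values are assumptions.
delta_w = 1e-4          # intended weight update from backpropagation
write_noise_std = 1e-2  # std. dev. of programming + relaxation error

# Naively writing the tiny update to many devices: the realized change
# is almost entirely noise.
realized = delta_w + rng.normal(0.0, write_noise_std, size=100_000)
print(f"intended update:         {delta_w:+.1e}")
print(f"mean realized update:    {realized.mean():+.1e}")
print(f"std of realized updates: {realized.std():.1e}  # noise dominates")
```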
Embracing uncertainty
Rather than fighting against device noise, EaPU works with it. The method uses a probabilistic approach that converts small, deterministic weight updates into larger, discrete updates applied with calculated probability.
“The update magnitudes of the standard backpropagation algorithm are usually deterministic and small-valued, while the stochastic characteristics of memristive devices render the conductance updated by them large and stochastic,” explained the team.
EaPU converts these small deterministic updates into large stochastic ones while preserving training performance.
The mechanism stays simple: when a weight update falls below the noise threshold ΔWth, EaPU either applies a full threshold pulse with probability proportional to the desired change or skips the update entirely. Larger updates pass through unchanged.
“An essential concept of the EaPU method is that the update magnitude ΔW of the standard backpropagation algorithm is modified into the memristor update magnitude ΔWth through a probabilistic transformation, while ensuring the average value of the updates remains unchanged to guarantee training performance,” said the team.
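A minimal sketch of that probabilistic transformation in Python (variable names and threshold values are hypothetical; the paper’s hardware-calibrated implementation is more involved): updates smaller than the noise threshold ΔWth are applied as full ±ΔWth pulses with probability |ΔW|/ΔWth, otherwise skipped, so the expected update stays equal to the original ΔW.

```python
import numpy as np

rng = np.random.default_rng(0)

def eapu_update(delta_w, delta_w_th):
    """Sketch of the error-aware probabilistic update the article describes.

    Updates with |dW| >= dW_th pass through unchanged; smaller ones become a
    full +/- dW_th pulse with probability |dW| / dW_th, or are skipped.
    The expected value of the transformed update equals the original dW.
    """
    dw = np.asarray(delta_w, dtype=float)
    small = np.abs(dw) < delta_w_th
    prob = np.abs(dw) / delta_w_th              # firing probability for small updates
    fire = rng.random(dw.shape) < prob          # which small updates actually write
    pulsed = np.sign(dw) * delta_w_th * fire    # full-threshold pulse or nothing
    return np.where(small, pulsed, dw)

# Quick check with hypothetical numbers: the mean transformed update matches
# the original tiny update, while only ~1% of weights get written at all.
dw = np.full(1_000_000, 1e-4)   # tiny backprop updates, well below threshold
dw_th = 1e-2                    # assumed device noise threshold
out = eapu_update(dw, dw_th)
print(f"mean update (original vs EaPU): {dw.mean():.1e} vs {out.mean():.1e}")
print(f"fraction of weights written:    {(out != 0).mean():.2%}")
```

The skipped writes in this sketch are what produce the drastic drop in update frequency described next; how the threshold is calibrated against measured device noise is detailed in the paper, not here.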
This approach has a dramatic side effect: it reduces the fraction of parameters that need updating at each training step to less than 0.1%. For a 152-layer ResNet, only 0.86 parameters per thousand required updates at any given step, a reduction of more than 99%.
Validation across scales
The team validated their approach both experimentally and through simulation. Using a custom-built 180-nanometer memristor array, they trained neural networks for image denoising and super-resolution tasks, achieving structural similarity indices of 0.896 and 0.933, respectively—results that matched or exceeded conventional training methods while using a fraction of the energy.
“Experimental results definitively verify the effectiveness of EaPU in memristor-based training strategies,” the team noted. “Meanwhile, hardware experiments can demonstrate EaPU’s low update frequency and its robustness against other memristor-related noise.”
For larger networks that are impractical to test on current hardware, the researchers used simulation tools informed by experimental data. They successfully trained ResNet architectures with up to 152 layers as well as modern Vision Transformer models, demonstrating accuracy improvements exceeding 60% over standard backpropagation on noisy hardware.
The energy advantages are striking. Compared to previous memristor training approaches, EaPU reduces training energy by 50 times, and by 13 times compared to the state-of-the-art MADEM method. The dramatic reduction in update frequency also extends device lifetime by approximately 1,000 times—a critical consideration for commercial viability.
Future work
The researchers see broad applications for their method beyond memristors.
“The EaPU method holds broad application prospects and can be extended in the following directions,” said the team, citing potential extensions to other memory technologies like ferroelectric transistors and magnetoresistive RAM.
Perhaps most significantly, the team envisions applying EaPU to training clusters for large language models.
“Leveraging the low update frequency characteristic of EaPU, it can enhance the service life of hardware systems and reduce training energy consumption,” noted the team. That is a crucial consideration as AI models grow ever larger and energy costs mount.
With training runs for large models burning through millions in energy costs alone, six-orders-of-magnitude gains could transform AI economics and sustainability.
More information:
Jinchang Liu et al, Error-aware probabilistic training for memristive neural networks, Nature Communications (2025). DOI: 10.1038/s41467-025-66240-7.
© 2026 Science X Network
Citation:
New memristor training method slashes AI energy use by six orders of magnitude (2026, January 18)
retrieved 18 January 2026
from https://techxplore.com/news/2026-01-memristor-method-slashes-ai-energy.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.