In the middle of this energy crunch, researchers in Beijing say they have done something almost heretical in tech terms: stop chasing ever-denser digital chips, and instead revive a largely forgotten, analogue way of computing — then aim it squarely at modern artificial intelligence.
A 50‑year throwback that runs AI 12 times faster
Peking University engineers have unveiled an analogue AI chip that, in tests, ran key machine-learning workloads 12 times faster than cutting-edge digital processors while using around 1/200th of the energy.
An analogue AI chip in China delivers up to 12× the speed of advanced digital rivals and cuts power use by a factor of about 200 on certain tasks.
The project, led by researcher Sun Zhong and described in the journal Nature Communications, does not rely on binary logic gates firing in lockstep. Instead, it leans on continuous electrical signals — the kind that governed early computing hardware in the 1960s and 70s, long before today’s GPUs.
By encoding information as voltages and currents inside memory itself, the chip performs mathematical operations “in place” rather than shuttling data back and forth like a digital processor.
How analogue computing works, without the nostalgia filter
From slide rules to silicon matrices
Before digital electronics took over, engineers used analogue machines to solve problems in physics and engineering. These devices represented numbers as continuous values: a needle position, a rotating shaft, or a voltage level on a wire.
- Digital processors break problems into many tiny, sequential steps, using bits that are either 0 or 1.
- Analogue systems use the natural behaviour of circuits to carry out many operations at once, because the physics of the device does the maths.
The Beijing chip updates that approach. Instead of chunky op-amps and dials on a console, it uses dense arrays of memory cells whose electrical properties directly encode the numbers inside a matrix — the backbone structure of most AI models.
When a voltage is applied, currents flow through these arrays, and the chip effectively performs huge matrix multiplications — the heavy lifting behind recommendation engines and image processing — in a single physical step.
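In circuit terms, Ohm's law turns each memory cell's conductance into a multiplication, and Kirchhoff's current law turns each row wire into a summation. A minimal NumPy sketch of that idea (the matrix values here are illustrative, not taken from the paper):

```python
import numpy as np

# Hypothetical 3x4 weight matrix stored as conductances (siemens).
# In a real crossbar, each memory cell's conductance encodes one entry.
G = np.array([
    [1.0, 0.5, 0.2, 0.0],
    [0.3, 0.9, 0.4, 0.1],
    [0.0, 0.2, 0.8, 0.6],
])

# Input vector applied as voltages on the column wires (volts).
V = np.array([0.2, 0.4, 0.1, 0.3])

# Ohm's law per cell (current = conductance * voltage) plus Kirchhoff's
# current law per row wire (currents sum) means each row current is one
# dot product, so the whole matrix-vector product emerges in one step:
I = G @ V

print(I)  # currents read out on the row wires (amps)
```

A digital chip would compute the same product as twelve multiplies and eight adds, each a separate clocked instruction; in the crossbar, the physics delivers all of them simultaneously.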
Why digital AI chips are hitting walls
Most modern AI runs on specialised digital hardware, such as Nvidia’s H100 GPU. These chips can execute billions of operations per second, but they hit a hard limit: moving data costs more and more energy.
Every time numbers travel between memory and the compute unit, they burn power and generate heat. As models grow to trillions of parameters, that traffic becomes the main bottleneck, not the raw number of calculations per second.
In many AI workloads, more energy is spent moving data than on the actual maths.
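A back-of-envelope calculation shows why. The per-operation energy figures below are rough, commonly cited ballpark values for older process nodes, used here only to illustrate the imbalance, not measurements of any specific chip:

```python
# Rough, commonly cited ballpark energies (picojoules), not measurements:
E_MAC_PJ = 4.0      # one 32-bit multiply-accumulate, on-chip
E_DRAM_PJ = 640.0   # fetching one 32-bit word from off-chip DRAM

n = 1_000_000  # a million weights, each fetched once and used once

compute_pj = n * E_MAC_PJ
movement_pj = n * E_DRAM_PJ

print(f"compute:  {compute_pj / 1e6:.0f} microjoules")
print(f"movement: {movement_pj / 1e6:.0f} microjoules")
print(f"moving the data costs {movement_pj / compute_pj:.0f}x the maths")
```

Under these assumptions, fetching each weight from memory costs over a hundred times more energy than using it, which is exactly the traffic in-memory computing tries to eliminate.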
The analogue approach tries to sidestep this by keeping data and computation in the same physical structure. The chip acts like a kind of “mathematical sensor”, where results emerge directly from currents inside the memory array, with minimal shuttling in and out.
What the Chinese chip actually did in tests
From recommendation engines to image compression
Crucially, this is not just a lab curiosity running toy problems. The Peking University team tested their chip on tasks that resemble commercial workloads.
First, they fed it data similar in scale to that used by platforms such as Netflix and Yahoo for recommendation systems. These algorithms scan huge matrices describing user behaviour and item features to estimate who might like what.
The analogue chip executed these operations substantially faster than advanced digital processors while consuming a fraction of the energy. The researchers reported orders-of-magnitude gains in efficiency, not just marginal tweaks.
The device was also tested on image compression. It reconstructed pictures that were visually close to those produced by high-precision digital methods, yet with storage requirements cut roughly in half for the trial dataset.
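The storage saving follows directly from low-rank factorisation arithmetic: an m × n matrix costs m·n values to store, while a rank-k factorisation costs only k·(m + n). A sketch with illustrative sizes (not the paper's dataset):

```python
# Why a low-rank factorisation compresses: storing an m x n matrix
# directly costs m*n numbers, while a rank-k factorisation V ~= W @ H
# costs only k*(m + n). Sizes below are illustrative.
m, n = 512, 512   # e.g. a greyscale image treated as a matrix
k = 128           # rank kept by the factorisation

full_cost = m * n             # values stored directly
factored_cost = k * (m + n)   # values stored via the two factors

print(f"direct storage:    {full_cost} values")
print(f"factored storage:  {factored_cost} values")
print(f"compression ratio: {full_cost / factored_cost:.1f}x")
```

With these sizes the factorised form needs half the storage, in line with the roughly 50% reduction the team reported on their trial dataset; smaller ranks compress harder at the price of reconstruction quality.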
That kind of performance matters for edge devices: cameras, drones, industrial sensors or wearables that must process data on-site without access to a giant data centre.
The secret weapon: NMF baked into hardware
At the heart of the chip lies a mathematical technique called non-negative matrix factorisation (NMF). This method breaks a large matrix into two smaller ones with only non-negative entries. It is widely used to uncover hidden patterns in user behaviour, sound, images and biological data.
On a conventional digital processor, NMF can become painfully slow and power-hungry once datasets reach millions of rows and columns. The algorithm must repeatedly update estimates, step by step, over many iterations.
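To see what that looping looks like, here is a minimal NumPy sketch of the classic Lee–Seung multiplicative updates, one standard way to compute NMF digitally; the matrix sizes and iteration count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# A small non-negative data matrix standing in for user-item ratings.
V = rng.random((20, 12))
k = 4  # number of hidden patterns to extract

# Random non-negative starting guesses for the two factors.
W = rng.random((20, k))
H = rng.random((k, 12))

eps = 1e-9  # guards against division by zero

# Lee-Seung multiplicative updates: each pass nudges W and H so that
# W @ H approximates V a little better. A digital processor must grind
# through this loop step by step, many matrix products per iteration.
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

error = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error after 200 iterations: {error:.3f}")
```

Each of those 200 iterations involves several full matrix multiplications, which is precisely the workload the analogue array collapses into physics.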
By embedding NMF directly into the analogue circuit, the Chinese chip turns an iterative digital routine into a near one-shot physical process.
The Peking University design treats NMF less as a line of code and more as a property of the hardware. The way currents flow through the memory array corresponds to solving the NMF problem. One pulse of voltage stands in for thousands of digital instructions.
Why this matters for AI’s energy headache
Data centres versus power grids
Tech giants now spend billions on new data centres packed with GPUs to train and run AI models. Analysts warn that the electricity demand from AI-heavy infrastructure could rival that of small countries within a few years.
Regions facing tight power supplies, including parts of Europe and Asia, have begun questioning how many more GPU farms they can support. This pressure dovetails with national efforts to reduce emissions and upgrade ageing grids.
- Training one state-of-the-art language model can consume as much electricity as thousands of homes use in a year.
- Running that model for millions of users adds a continuous, rising power bill.
- Cooling the hardware often demands almost as much energy as the computations themselves.
A chip that can cut energy use by a factor of 200 in certain tasks does not fix all of this, but it changes the equation. Entire lines of AI services — recommendations, image compression, some forms of clustering — could be shifted to far more efficient hardware.
Strategic signal from Beijing
The work also sends a clear geopolitical message. As the US tightens export controls on advanced GPUs to China, Chinese researchers are racing to build alternative paths to AI dominance that rely less on Western digital chips.
Analogue AI offers one such path. Instead of copying Nvidia’s playbook, China is pursuing a parallel strategy: radically efficient, task-specific chips that reduce dependence on sanctioned hardware.
Statements from the team and Chinese media frame the chip as part of a broader push towards “in-memory analogue computing”, a research direction that, its proponents claim, could eventually yield systems hundreds or even a thousand times faster than current GPUs for some workloads.
Limitations, noise and the messy side of analogue
Accuracy versus efficiency
Analogue hardware is not a free lunch. Continuous signals pick up noise, drift with temperature and age, and struggle with the exactness that digital logic guarantees by design.
For workloads that demand almost perfect numerical precision — high-stakes scientific simulations, financial risk engines, cryptography — small errors can compound into big problems. Digital processors remain the safer bet there.
The Chinese chip targets a different zone: AI tasks that tolerate a bit of fuzziness. Recommendation engines and compression algorithms often work fine with approximate results, as long as they remain consistent and statistically reliable.
Many AI applications care less about the ninth decimal place and more about staying fast, cheap and “good enough” on average.
Hybrid systems may emerge, where analogue chips handle bulk, approximate computations and digital cores refine the output or handle critical decisions.
Can this scale beyond the lab?
Turning a prototype into a commercial product involves a long list of obstacles: manufacturing yield, integration with existing software stacks, error correction techniques, and the ability to program these chips without a PhD in analogue electronics.
Toolchains for AI developers currently assume a digital world: tensors, CUDA libraries, and standardised APIs. For analogue hardware to spread, chipmakers will need to hide most of the physics behind friendly software layers so that data scientists can use it with familiar frameworks.
| Aspect | Digital AI chips | Analogue AI chips (as proposed) |
|---|---|---|
| Data representation | Bits: 0 or 1 | Continuous voltages/currents |
| Main strength | Precision, flexibility, mature tools | Energy efficiency, massive parallelism |
| Key weakness | Rising power use, memory bottlenecks | Noise, calibration, programming complexity |
| Best suited tasks | General-purpose computing, exact maths | Pattern extraction, recommendations, compression |
What this could mean for everyday tech
Scenarios from your phone to industrial plants
If chips like this move beyond research, they could reshape where AI runs and who controls it.
Imagine a smartphone that can run recommendation models locally without draining the battery. Your music app, shopping suggestions and photo clean-up tools could work offline, powered by a few analogue arrays instead of streaming everything to a data centre.
In factories, analogue AI modules could sit directly on machinery, spotting anomalies in vibration data or temperature readings in real time. Less data would need to be sent to remote servers, cutting latency and easing network congestion.
Smart cameras in cities might compress and analyse video at the edge, sending only the most relevant clips to central systems. That shrinks storage costs and limits exposure of raw footage, with knock-on benefits for privacy.
Key terms worth unpacking
Two expressions from this research come up repeatedly:
- In-memory computing: a design in which the same physical component both stores data and performs calculations on it. This sharply reduces data movement.
- Non-negative matrix factorisation (NMF): a mathematical method that splits a large dataset into a combination of simpler, non-negative patterns. It often reveals “parts” of a signal — for example, basic facial features in images or recurring themes in user behaviour.
These ideas also appear in neuromorphic computing, which aims to mimic the brain’s style of processing. The Chinese chip is not a brain replica, but it draws on a similar intuition: use physics and device properties to compute, instead of forcing everything through clocked digital logic.
Putting them together — in-memory structures that physically carry out NMF — shows how old theoretical ideas can gain new impact when etched into silicon. It suggests a future where parts of AI feel less like software, and more like carefully tuned materials performing maths simply by being powered on.
Originally posted 2026-03-12 16:42:29.
