China unveils ‘world’s first’ brain-like AI that runs 100 times faster than rivals

Posted by: 宫羽 | Posted: 2025-9-10 19:03 | Views: 29 | Replies: 0

This post was last edited by 宫羽 on 2025-9-10 19:05.

Researchers at the Chinese Academy of Sciences’ Institute of Automation in Beijing have introduced a new artificial intelligence system called SpikingBrain 1.0.
Described by the team as a “brain-like” large language model, it is designed to use less energy and to run on homegrown Chinese hardware rather than on chips from industry leader Nvidia. “Mainstream Transformer-based large language models (LLMs) face significant efficiency bottlenecks: training computation scales quadratically with sequence length, and inference memory grows linearly,” the researchers said in a non-peer-reviewed technical paper.
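The quadratic-versus-linear distinction the researchers point to can be made concrete with a small back-of-envelope script. The sketch below is our illustration, not code from the paper; the head dimension and FLOP-counting conventions are simplified assumptions.

def attention_cost(n, d):
    # Rough per-layer costs for standard self-attention.
    # n: sequence length, d: head dimension (illustrative values only).
    score_flops = 2 * n * n * d   # Q @ K^T is an (n, d) x (d, n) matmul: O(n^2)
    kv_cache_entries = 2 * n * d  # K and V are cached per token at inference: O(n)
    return score_flops, kv_cache_entries

for n in (1_000, 10_000, 100_000):
    flops, cache = attention_cost(n, d=128)
    print(f"n={n:>7}: score FLOPs ~{flops:.2e}, KV-cache entries ~{cache:.2e}")

Each tenfold increase in sequence length multiplies the attention FLOPs by a hundred but the KV cache only by ten, which is exactly the pair of bottlenecks the quote above names.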
According to the research team, SpikingBrain 1.0 performed certain tasks up to 100 times faster than some conventional models while being trained on less than 2% of the data typically required.
This project is part of a larger scientific pursuit of neuromorphic computing, which aims to replicate the remarkable efficiency of the human brain, an organ that operates on only about 20 watts of power.
“Our work draws inspiration from brain mechanisms,” added the researchers.
To replicate the efficiency of the human brain
The core technology behind SpikingBrain 1.0 is known as “spiking computation,” a method that mimics how biological neurons in the human brain function.
Instead of activating an entire vast network to process information, as mainstream AI tools like ChatGPT do, SpikingBrain 1.0’s network remains mostly quiet. It uses an event-driven approach in which neurons fire signals only when specifically triggered by input. This selective response is the key to its reduced energy consumption and faster processing (see the sketch after this passage).

To demonstrate the concept, the team built and tested two versions of the model: a smaller one with 7 billion parameters and a larger one containing 76 billion parameters. Both were trained on a total of approximately 150 billion tokens of data, a comparatively small amount for models of this scale.
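To make “spiking computation” concrete, here is a minimal leaky integrate-and-fire (LIF) neuron, the standard textbook unit behind most spiking networks. This is a generic illustration under our own simplified parameters (threshold, leak), not SpikingBrain 1.0’s actual neuron model.

def lif_neuron(inputs, threshold=1.0, leak=0.9):
    # Yield 0/1 spikes for a stream of input currents (all values illustrative).
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = leak * potential + current  # integrate input with decay
        if potential >= threshold:              # event: fire a spike and reset
            spikes.append(1)
            potential = 0.0
        else:                                   # no event: stay silent
            spikes.append(0)
    return spikes

print(lif_neuron([0.2, 0.3, 0.6, 0.0, 0.2, 0.9, 0.05]))
# -> [0, 0, 1, 0, 0, 1, 0]

Most time steps produce no spike at all, and that sparsity is what an event-driven system exploits: downstream computation happens only when a spike actually arrives.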
The model’s efficiency is particularly notable when handling long sequences of data. In one test cited in the paper, the smaller model responded to a prompt consisting of 4 million tokens more than 100 times faster than a standard system.
In a different test, a variant of SpikingBrain 1.0 demonstrated a 26.5-fold speed-up over conventional Transformer architectures when generating just the first token from a one-million-token context.
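The widening gap at long contexts follows from the scaling argument earlier; the short sketch below (our own idealized comparison, not the paper’s analysis) contrasts quadratic prefill, where producing the first token requires the full attention matrix over the prompt, with a linear-time alternative.

def prefill_ops(n_tokens, quadratic):
    # Growth rate only; constant factors are deliberately ignored.
    return n_tokens ** 2 if quadratic else n_tokens

for n in (10_000, 100_000, 1_000_000):
    ratio = prefill_ops(n, True) / prefill_ops(n, False)
    print(f"context {n:>9,} tokens: quadratic/linear op ratio ~ {ratio:,.0f}x")

Measured speed-ups such as the 26.5-fold figure sit well below this idealized ratio, since memory bandwidth, batching, and constant factors dominate on real hardware.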
Stable performance
The researchers reported that their system ran stably for weeks on a setup of hundreds of MetaX chips, a platform developed by the Shanghai-based company MetaX Integrated Circuits Co. This sustained performance on domestic hardware underscores the system’s potential for real-world deployment.
Potential applications include the analysis of lengthy legal and medical documents, research in high-energy physics, and complex tasks like DNA sequencing, all of which involve making sense of vast datasets where speed and efficiency are critical.
“These results not only demonstrate the feasibility of efficient large-model training on non-NVIDIA platforms, but also outline new directions for the scalable deployment and application of brain-inspired models in future computing systems,” concluded the research paper.

