The Zeta AI Chip is a domestically produced AI computing chip focused on high energy efficiency. It combines compute-in-memory, RISC-V, and chiplet-based architectures, and targets primarily edge computing and AI inference.
Its core advantages lie in the compute-in-memory architecture that dramatically reduces data movement, combined with the flexibility of RISC-V and Chiplet 3D stacking to achieve a balance of high performance and energy efficiency.
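The data-movement argument can be made concrete with a toy energy model. The per-operation energy figures below are illustrative assumptions (roughly in line with widely cited rules of thumb that off-chip access costs orders of magnitude more than arithmetic), not published numbers for the Zeta chip:

```python
# Toy energy model illustrating why reducing data movement matters.
# All per-operation energies are illustrative assumptions, not Zeta figures.

DRAM_ACCESS_PJ = 640.0   # assumed energy per 32-bit off-chip DRAM access (pJ)
SRAM_ACCESS_PJ = 5.0     # assumed energy per 32-bit on-chip SRAM access (pJ)
MAC_PJ = 0.2             # assumed energy per multiply-accumulate (pJ)

def von_neumann_energy(macs: int) -> float:
    """Every operand is fetched from DRAM: two reads plus one MAC per op."""
    return macs * (2 * DRAM_ACCESS_PJ + MAC_PJ)

def compute_in_memory_energy(macs: int) -> float:
    """Weights stay resident in the memory array; only activations move,
    and only through on-chip SRAM."""
    return macs * (SRAM_ACCESS_PJ + MAC_PJ)

macs = 1_000_000  # MACs in a small convolution layer
ratio = von_neumann_energy(macs) / compute_in_memory_energy(macs)
print(f"estimated energy reduction: {ratio:.0f}x")
```

Under these assumed constants, eliminating off-chip traffic dominates everything else, which is the core of the compute-in-memory pitch.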
It is best suited to scenarios with strict power budgets and real-time requirements, such as IoT endpoints, mobile devices, edge servers, and AI training and inference in resource-constrained environments.
The Zeta chip uses a specialized architecture (such as compute-in-memory) with hardware-level optimizations for AI workloads, achieving higher efficiency on targeted inference tasks rather than pursuing general-purpose GPU-style compute.
The chip is based on the open RISC-V instruction set architecture, extended with AI-focused custom instructions to achieve better performance and ecosystem autonomy.
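To illustrate what an AI-focused custom instruction might look like, here is a functional (software) model of a hypothetical 8-lane int8 dot-product instruction with 32-bit accumulation, the kind of fused operation ISA extensions commonly add for inference. The mnemonic "vdot8" and its semantics are assumptions for illustration, not Zeta's documented ISA:

```python
# Functional model of a hypothetical custom RISC-V instruction ("vdot8"):
# an 8-lane int8 dot product accumulated into a 32-bit register in one
# instruction. Name and semantics are illustrative assumptions.

def _wrap32(x: int) -> int:
    """Wrap to signed 32 bits, as a hardware accumulator register would."""
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x >= (1 << 31) else x

def vdot8(acc: int, a: list[int], b: list[int]) -> int:
    """acc += dot(a, b) over 8 int8 lanes, with 32-bit wraparound."""
    assert len(a) == 8 and len(b) == 8
    return _wrap32(acc + sum(x * y for x, y in zip(a, b)))

# Emulating a 16-element dot product as two vdot8 "instructions":
a = list(range(16))
b = [1] * 16
acc = vdot8(0, a[:8], b[:8])
acc = vdot8(acc, a[8:], b[8:])
print(acc)  # sum(range(16)) = 120
```

Fusing multiply, widen, and accumulate into one instruction is what lets such extensions cut both instruction count and register traffic for inner-loop MAC work.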

Geekbench AI is a cross-platform AI performance benchmarking tool that simulates real machine learning tasks to help you accurately evaluate the AI compute power of CPUs, GPUs, and NPUs on your devices.
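The general shape of such benchmarking, warm-up iterations followed by timed runs with a summary statistic, can be sketched in a few lines. This mirrors the common approach of tools in this category; it is not Geekbench AI's actual methodology or API, and the workload here is a stand-in:

```python
import time
import statistics

# Minimal benchmark-loop sketch: warm-up runs, then timed runs,
# reporting median latency and derived throughput. Illustrative only;
# not Geekbench AI's actual methodology.

def benchmark(workload, warmup: int = 3, runs: int = 10) -> dict:
    for _ in range(warmup):          # warm caches / clocks before measuring
        workload()
    latencies = []
    for _ in range(runs):
        t0 = time.perf_counter()
        workload()
        latencies.append(time.perf_counter() - t0)
    median = statistics.median(latencies)
    return {"median_s": median, "throughput_per_s": 1.0 / median}

# Stand-in "inference" workload: a small fixed-size matrix multiply.
def fake_inference(n: int = 32) -> None:
    m = [[1.0] * n for _ in range(n)]
    [[sum(m[i][k] * m[k][j] for k in range(n)) for j in range(n)]
     for i in range(n)]

stats = benchmark(fake_inference)
print(f"median latency: {stats['median_s'] * 1e3:.2f} ms")
```

Using the median rather than the mean makes the result robust to occasional scheduler hiccups, which is why benchmark tools typically report it.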
Cerebras provides industry-leading wafer-scale AI compute infrastructure, powered by its unique WSE chip, delivering performance and efficiency far beyond traditional hardware for training large-scale language models and fast inference.