Rona Technology builds FPGA-based AI accelerators that run large language models locally — ultra-low latency, minimal power, zero data exposure. Deploy Qwen-class intelligence inside any device.
Our FPGA accelerator architecture delivers server-class LLM inference in an embedded form factor — fully customizable, fully yours.
Custom matrix-multiply and attention blocks implemented in RTL. Runs up to 10× faster than an ARM Cortex-A class processor on the same inference workload.
All inference runs on-device. No data leaves your system, no cloud dependency, no subscription keys. Critical for industrial and medical use cases.
Optimized for quantized builds of Qwen1.5 and DeepSeek models. INT4/INT8 quantization preserves over 95% of model accuracy at a fraction of the compute cost.
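To illustrate why low-bit quantization loses so little accuracy, here is a minimal sketch of symmetric per-tensor INT8 quantization, a generic technique shown for intuition only, not Rona's actual quantization pipeline:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor INT8 quantization: map floats onto [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct approximate float weights from INT8 codes."""
    return q.astype(np.float32) * scale

# Toy weight matrix standing in for an LLM layer (illustrative only)
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# For well-scaled weights the relative reconstruction error stays small,
# which is why INT8 models track their FP32 baselines so closely.
rel_err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
```

Each INT8 weight occupies a quarter of the memory of an FP32 weight and feeds fixed-point multipliers, which is what makes dense RTL matrix-multiply blocks practical on an FPGA.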
Based on the Xilinx Zynq UltraScale+ platform. Reconfigure the accelerator architecture without hardware replacement, and adapt to new model architectures as AI evolves.
Small enough to integrate into existing products. PCIe and M.2 interface options available. Drop-in acceleration for existing hardware platforms.
Built entirely on China-accessible components and software. No dependency on sanctioned technology. Supports domestic FPGA alternatives (Gowin, PANGO).
From model training to edge deployment — a complete pipeline designed for hardware efficiency.
Start prototyping with our development kits. Volume pricing and custom modules available for OEM integration.
Perfect for proof-of-concept and developer evaluation. Runs Qwen-0.5B at INT8 precision.
Full-featured kit for production prototyping. Supports Qwen-1.8B with high-throughput inference pipeline.
Tailored to your hardware requirements. Compact M.2 or custom PCB form factor for direct product integration.
Rona Technology is a Guangzhou-based company dedicated to bringing large language model intelligence to embedded systems. We design FPGA accelerators from the ground up — from RTL architecture to production deployment.
Whether you're exploring evaluation kits or planning a full OEM integration, our team is ready to help you bring edge AI to your hardware.
Get in Touch →
Whether you need an evaluation kit, a custom OEM module, or technical consultation, we're ready to help you deploy AI at the edge.