Aicraft
PURE C | ZERO DEPS | MIT

Machine learning,
uncompromised.

A complete deep-learning framework written entirely in pure C. SIMD-optimised, Vulkan-accelerated, header-only. From training to edge inference in a single #include.

C11 · MIT License · v1.0
terminal
Quick start
$ git clone https://github.com/miaototi/Aicraft.git && cd Aicraft
Then #include "aicraft/aicraft.h" and compile. That's it.



Dead simple

Include. Compile. Run.

No CMake, no vcpkg, no conan. Drop the header folder into your project, pass -I./include, and build. One translation unit, zero friction.

C11 · Header-only · MIT License · Cross-platform · Embeddable
demo.c
#include "aicraft/aicraft.h"

int main(void) {
    ac_init();

    // Build a feedforward network
    AcLayer *net[] = {
        ac_dense(784, 128, AC_RELU),
        ac_dense(128, 10, AC_SOFTMAX)
    };

    // Forward + backprop in one line
    AcTensor *x = ac_tensor_rand((int[]){1, 784}, 2);
    AcTensor *y = ac_forward_seq(net, 2, x);
    ac_backward(y);

    ac_cleanup();
    return 0;
}

Architecture

Every layer, one file

Aicraft is a vertically integrated stack. No external libraries sit between your code and the hardware.

Your Application: main.c
aicraft.h: single include
Layers / Loss / Optimizer: high-level API
Autograd Engine: 22 ops, DAG-based
Tensor Core: N-dim, broadcasting
SIMD Kernels: AVX-512 / NEON
Vulkan Compute: 14 GLSL shaders
Arena Allocator: checkpoint/restore memory

Capabilities

Built for performance,
designed for simplicity

SIMD Vectorised

AVX2, AVX-512, ARM NEON. Every hot path hand-tuned with platform intrinsics and BLIS-style GEMM micro-kernels.
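Aicraft's intrinsics are not reproduced on this page; as a rough illustration of what a BLIS-style micro-kernel does, here is a portable C sketch (function and panel layout are illustrative assumptions). It computes C += A * B for a 4xK packed panel of A and a Kx4 packed panel of B, accumulating in a register-sized tile; the real kernels replace the inner statements with AVX or NEON intrinsics.

```c
#include <stddef.h>

enum { MR = 4, NR = 4 };

/* Hypothetical 4x4 micro-kernel sketch. A is a packed MRxK column panel
 * (A[p*MR + i] is row i at depth p); B is a packed KxNR row panel.
 * Each depth step p performs one rank-1 update of the 4x4 accumulator. */
void micro_kernel_4x4(size_t k, const float *A, const float *B,
                      float *C, size_t ldc) {
    float acc[MR][NR] = {{0}};
    for (size_t p = 0; p < k; p++)
        for (int i = 0; i < MR; i++)
            for (int j = 0; j < NR; j++)
                acc[i][j] += A[p * MR + i] * B[p * NR + j];
    /* Write the accumulator tile back into C once, at the end. */
    for (int i = 0; i < MR; i++)
        for (int j = 0; j < NR; j++)
            C[i * ldc + j] += acc[i][j];
}
```

Keeping the 4x4 accumulator in registers for the whole K loop is the point of the micro-kernel design: memory traffic on C happens once per tile, not once per multiply.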

Vulkan Compute

14 GLSL compute shaders for GEMM, activations, and reductions. Cross-vendor GPU acceleration.

Autograd Engine

22 differentiable ops. Dynamic computational graph with reverse-mode autodiff and O(1) cycle detection.
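The engine itself is not shown on this page; as a minimal sketch of the idea (a DAG of nodes swept in reverse topological order), here is a scalar reverse-mode autodiff toy in C. All names (`Node`, `backward`, the op constructors) are hypothetical, not Aicraft's actual API.

```c
#include <stddef.h>

/* Each node records its value, its accumulated gradient, its parents,
 * and a local backward rule that pushes gradients to those parents. */
typedef struct Node Node;
struct Node {
    double val, grad;
    Node *a, *b;           /* parents (NULL for leaves) */
    void (*back)(Node *);  /* local backward rule */
};

static void back_add(Node *n) { n->a->grad += n->grad; n->b->grad += n->grad; }
static void back_mul(Node *n) {
    n->a->grad += n->b->val * n->grad;
    n->b->grad += n->a->val * n->grad;
}

Node node_leaf(double v)          { Node n = {v, 0, NULL, NULL, NULL}; return n; }
Node node_add(Node *a, Node *b)   { Node n = {a->val + b->val, 0, a, b, back_add}; return n; }
Node node_mul(Node *a, Node *b)   { Node n = {a->val * b->val, 0, a, b, back_mul}; return n; }

/* Seed the output gradient with 1, then sweep nodes in reverse creation
 * order, which is a valid reverse topological order for this graph. */
void backward(Node **order, size_t n) {
    order[n - 1]->grad = 1.0;
    for (size_t i = n; i-- > 0; )
        if (order[i]->back) order[i]->back(order[i]);
}
```

For s = x*y + x with x = 3, y = 4, one sweep leaves x.grad = y + 1 = 5 and y.grad = x = 3, exactly the analytic partials.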

INT8 Quantisation

Post-training quantisation with asymmetric per-tensor scaling. ~4x model compression for edge deployment.

Arena Allocator

Checkpoint/restore memory management. Zero per-tensor mallocs. Constant memory footprint during training.
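The checkpoint/restore scheme can be sketched as a bump-pointer arena (names are illustrative, not Aicraft's real API): a checkpoint is just the current offset, and restoring it frees everything allocated afterwards in O(1), so each training step can reuse the same memory with no per-tensor malloc/free.

```c
#include <stddef.h>

typedef struct {
    unsigned char *base;  /* backing buffer      */
    size_t cap;           /* buffer capacity     */
    size_t used;          /* current bump offset */
} Arena;

/* Bump-allocate with 16-byte alignment; NULL when the arena is full. */
void *arena_alloc(Arena *a, size_t size) {
    size_t aligned = (size + 15) & ~(size_t)15;
    if (a->used + aligned > a->cap) return NULL;
    void *p = a->base + a->used;
    a->used += aligned;
    return p;
}

/* A checkpoint is the offset; restore rewinds it, freeing in O(1). */
size_t arena_checkpoint(const Arena *a)        { return a->used; }
void   arena_restore(Arena *a, size_t mark)    { a->used = mark; }
```

Checkpointing before each forward pass and restoring after the optimiser step is what makes the training-time footprint constant: every iteration replays into the same region.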

Training Loop

SGD, Adam, AdamW optimisers. Cross-entropy, MSE, Huber loss. Full training pipeline.
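The pieces of that pipeline are small enough to sketch. Below is a generic SGD update and MSE loss in plain C, shown only to illustrate the shape of the loop; the function names are hypothetical and Aicraft's optimisers (Adam, AdamW) carry extra per-parameter state not shown here.

```c
#include <stddef.h>

/* One vanilla SGD step: after backward() fills grad, each parameter
 * moves against its gradient by the learning rate. */
void sgd_step(float *w, const float *grad, size_t n, float lr) {
    for (size_t i = 0; i < n; i++)
        w[i] -= lr * grad[i];
}

/* Mean-squared-error loss over n predictions. */
float mse(const float *pred, const float *target, size_t n) {
    float acc = 0.0f;
    for (size_t i = 0; i < n; i++) {
        float d = pred[i] - target[i];
        acc += d * d;
    }
    return acc / (float)n;
}
```

A training iteration is then exactly the "forward, backward, step" mantra from the steps above: compute the loss, backpropagate, call the step function, repeat.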

How it works

Four steps to production

01

Include

Add the single header to your C project. No build system changes needed.

02

Define

Stack layers, pick a loss function and optimiser. Just like Python, but in C.

03

Train

Forward, backward, step. The autograd engine handles gradient computation.

04

Deploy

Quantise to INT8, serialise, and run on anything from x86 to ARM Cortex-M.

How it compares

Minimal footprint, maximum control

                 Aicraft           PyTorch        TensorFlow
Binary size      ~150 KB           ~800 MB        ~1.8 GB
Dependencies     0                 ~50            ~80
Language         C11               C++ / Py       C++ / Py
GPU backend      Vulkan            CUDA           CUDA
SIMD             Hand-tuned        Generic        Generic
Memory           Arena allocator   malloc/free    Custom
Edge deploy      MCU-ready         No             TFLite
The best dependency is the one you never add. Aicraft proves you can train a neural network without pulling half the internet into your build.
Tobias Tesauri, creator of Aicraft (T&M Softwares)

Open source

Ready to see what
pure C can do?

Read the docs, explore the source, or start building.

A project by Tobias Tesauri — T&M Softwares