u/yassa9

Built a zero-allocation, header-only C++ Qwen tokenizer that is nearly 20x faster than OpenAI's tiktoken

I'm into HPC and into static, zero-allocation, zero-dependency C++ software. I was studying how BPE tokenizers work, so I decided to build this project: a hardcoded Qwen tokenizer for LLM developers.

I know the tokenization phase is less than 2% of total LLM inference time, so it's practically negligible, but I just love this kind of programming. It's an educational project for me to learn and build some intuition.

Surprisingly, after combining multiple optimization techniques, it scored really high in benchmarks. I thought it was a fluke at first, but I tried different tests and so far it completely holds up.

On a 12-thread Ryzen 5 3600 desktop CPU, with a 1 GB English text corpus:
- My Frokenizer: 1009 MB/s
- OpenAI tiktoken: ~50 MB/s

For code, tests, and benchmarks:
https://github.com/yassa9/frokenizer
