u/Grand_Release_7375

I went down the rabbit hole of building a chess engine as part of a small Android project I’ve been working on, mainly to understand how search and evaluation actually behave in practice.

I started with a simple array-based board, but moved to bitboards fairly quickly once performance became a bottleneck.

Right now the engine roughly looks like this:

Bitboards for representation

Precomputed attack tables (sliders + leapers)
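For the leapers, the tables are just a one-time loop at startup. A minimal sketch of the knight table (square numbering a1 = 0; names here are illustrative, not my actual code):

```python
# Sketch: precomputing a leaper (knight) attack table as 64-bit bitboards.
# Squares are 0..63 with a1 = 0, file = sq % 8, rank = sq // 8.

KNIGHT_DELTAS = [(1, 2), (2, 1), (2, -1), (1, -2),
                 (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_attacks(sq: int) -> int:
    """Bitboard of squares a knight on sq attacks."""
    f, r = sq % 8, sq // 8
    bb = 0
    for df, dr in KNIGHT_DELTAS:
        nf, nr = f + df, r + dr
        if 0 <= nf < 8 and 0 <= nr < 8:   # stay on the board
            bb |= 1 << (nr * 8 + nf)
    return bb

# Built once at startup, then every lookup is a single array index.
KNIGHT_TABLE = [knight_attacks(sq) for sq in range(64)]
```

Sliders need more machinery (magic bitboards or similar), but the leaper side really is this simple.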

Alpha-beta with iterative deepening
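The search skeleton is basically the textbook negamax form of alpha-beta wrapped in a deepening loop. A minimal sketch, assuming an abstract `game` interface (`moves`/`play`/`evaluate` are placeholders, not my actual API):

```python
# Sketch: negamax alpha-beta plus an iterative-deepening driver.
import math
import time

def negamax(pos, depth, alpha, beta, game):
    moves = game.moves(pos)
    if depth == 0 or not moves:
        return game.evaluate(pos)        # score from side-to-move's view
    best = -math.inf
    for m in moves:
        score = -negamax(game.play(pos, m), depth - 1, -beta, -alpha, game)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                # fail-high: opponent won't allow this line
            break
    return best

def search(pos, game, max_depth, time_budget=1.0):
    """Iterative deepening: search depth 1, 2, ... until time runs out."""
    deadline = time.monotonic() + time_budget
    best = None
    for depth in range(1, max_depth + 1):
        best = negamax(pos, depth, -math.inf, math.inf, game)
        if time.monotonic() > deadline:
            break
    return best
```

The deepening loop looks wasteful but costs little in practice, and the shallow iterations are what feed move ordering (PV move, TT hints) for the deeper ones.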

Move ordering (captures, killer moves, some history heuristic)
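The ordering scheme is roughly MVV-LVA for captures, then killers, then history for the remaining quiets. A sketch with made-up move/board shapes (the tuple layout is just for illustration):

```python
# Sketch: move-ordering scores — MVV-LVA captures first, then killer moves,
# then history-scored quiets. Move = (from_sq, to_sq, attacker, victim_or_None).

PIECE_VALUE = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900, "K": 20000}

def order_moves(moves, killers, history):
    def score(m):
        _, _, attacker, victim = m
        if victim is not None:
            # MVV-LVA: biggest victim first, cheapest attacker as tiebreak
            return 1_000_000 + PIECE_VALUE[victim] * 10 - PIECE_VALUE[attacker]
        if m in killers:                  # quiet move that caused a cutoff before
            return 900_000
        return history.get(m, 0)          # history heuristic for other quiets
    return sorted(moves, key=score, reverse=True)
```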

Quiescence search (captures only)
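Quiescence is the usual stand-pat-plus-captures shape. A sketch, again over a placeholder game interface:

```python
# Sketch: quiescence search — static eval as a stand-pat bound, then recurse
# only on captures so the horizon doesn't cut mid-exchange.
def quiescence(pos, alpha, beta, game):
    stand_pat = game.evaluate(pos)        # "do nothing" lower bound
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)
    for m in game.captures(pos):          # noisy moves only, no quiets
        score = -quiescence(game.play(pos, m), -beta, -alpha, game)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```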

Lightweight SEE to avoid obviously bad trades
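My SEE is the simplified swap-list version: collect attacker/defender values on the target square and fold the exchange back, ignoring x-rays and pins. A sketch of that idea (list-of-values interface is illustrative):

```python
# Sketch: simplified static exchange evaluation via the swap algorithm.
# attackers/defenders: values of pieces able to capture on the square,
# sorted cheapest first; the attacking side captures first. No x-rays/pins.
def see(victim, attackers, defenders):
    side = [list(attackers), list(defenders)]
    seq, turn = [], 0
    while side[turn]:                     # alternate, cheapest capturer first
        seq.append(side[turn].pop(0))
        turn ^= 1
    if not seq:
        return 0
    gain = [victim]
    for piece in seq[:-1]:                # each capture exposes the capturer
        gain.append(piece - gain[-1])
    for d in range(len(gain) - 1, 0, -1):
        gain[d - 1] = -max(-gain[d - 1], gain[d])   # either side may stand pat
    return gain[0]
```

Anything with `see(...) < 0` gets deprioritized (or skipped in quiescence).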

Pruning experiments (null-move, basic LMR)
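The null-move part is the standard "give the opponent a free move; if a reduced search still fails high, prune" trick. A sketch of just that check (R = 2 and the guard conditions are typical choices, not tuned values):

```python
# Sketch: null-move pruning. search_fn is whatever full search you have;
# game.in_check / only_pawns_left are the classic zugzwang/tactics guards.
R = 2  # depth reduction for the null search

def try_null_move(pos, depth, beta, game, search_fn):
    """Returns beta on a null-move fail-high, else None (keep searching)."""
    if depth <= R + 1 or game.in_check(pos) or game.only_pawns_left(pos):
        return None
    null_pos = game.make_null_move(pos)   # side to move just passes
    score = -search_fn(null_pos, depth - 1 - R, -beta, -beta + 1, game)
    return beta if score >= beta else None
```

LMR then stacks on top of this: late, quiet, non-check moves get searched at reduced depth first and re-searched only if they surprise you.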

Simple transposition table (Zobrist hashing, still tuning usage)
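The TT side is standard Zobrist keys plus an always-replace table. A sketch (the (piece, square, side) key layout is conventional; the dict-based table is just for illustration):

```python
# Sketch: Zobrist hashing + a minimal always-replace transposition table.
import random

rng = random.Random(2024)                 # fixed seed -> reproducible keys
ZOBRIST = [[rng.getrandbits(64) for _ in range(64)] for _ in range(12)]
SIDE_KEY = rng.getrandbits(64)

def hash_position(pieces, white_to_move):
    """pieces: list of (piece_index 0..11, square 0..63)."""
    h = SIDE_KEY if white_to_move else 0
    for piece, sq in pieces:
        h ^= ZOBRIST[piece][sq]           # XOR is its own inverse -> cheap
    return h                              # incremental make/unmake updates

TT = {}  # hash -> (depth, score, flag); flag in {"exact", "lower", "upper"}

def tt_probe(h, depth):
    entry = TT.get(h)
    if entry and entry[0] >= depth:       # only trust deep-enough entries
        return entry
    return None

def tt_store(h, depth, score, flag):
    TT[h] = (depth, score, flag)          # always-replace scheme
```

The bound flags matter: a fail-high stores a lower bound, a fail-low an upper bound, and only true PV scores are "exact" — mixing those up was one of my tuning headaches.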

Basic opening handling (very small book / simple heuristics)

Evaluation is still fairly simple:

material, mobility, piece activity, some king safety

I also briefly experimented with a smaller NNUE-style eval (not Stockfish's), mainly to see how it compares to a handcrafted eval
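The handcrafted eval boils down to something like this shape (weights here are illustrative, not my tuned values — king safety and piece activity add more terms on top):

```python
# Sketch: a simple handcrafted eval — material plus a mobility term,
# scored from the side to move's perspective.
PIECE_VALUE = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}
MOBILITY_WEIGHT = 4   # centipawns per legal-move difference

def evaluate(own_pieces, opp_pieces, own_mobility, opp_mobility):
    """own_pieces/opp_pieces: piece letters; mobility: legal-move counts."""
    material = (sum(PIECE_VALUE[p] for p in own_pieces)
                - sum(PIECE_VALUE[p] for p in opp_pieces))
    mobility = MOBILITY_WEIGHT * (own_mobility - opp_mobility)
    return material + mobility
```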

At this point, search depth and responsiveness on mobile feel “good enough” for what I’m trying to do.

Where I got stuck is more about diminishing returns:

Further search tweaks don’t seem to improve strength much anymore

The real bottleneck feels like evaluation

Even at decent depth, play strength is nowhere near Stockfish

The NNUE experiments, and later integrating Stockfish, made that gap pretty obvious

So I ended up integrating Stockfish for strong play and shifted focus more toward the app UX/performance side.

That said, I’d still like to understand where I’m leaving the most strength on the table from an engine perspective.

A few things I’m curious about:

At this stage, how much of the gap vs Stockfish is really evaluation (NNUE etc.) vs search?

Without going down the full NNUE route, is there still meaningful strength left to gain?

Are improvements in TT usage, move ordering, or pruning still worth chasing, or mostly marginal at this point?

On mobile specifically, how do you usually balance deeper search vs richer evaluation?

Anything obvious missing from the setup above that would give a noticeable Elo bump?

Would really appreciate any thoughts — especially from people who’ve gone through a similar phase.

u/Grand_Release_7375 — 11 days ago

I’ve been experimenting with building a minimal Android app (in my case, a chess app) and wanted to remove as much friction as possible from the user experience.

Early on, I decided to avoid:

login/signup flows

ads or monetization hooks

any delays before getting into the core action

The idea was to make it feel like: open → play → exit, with no interruptions.

This simplified the UX quite a bit, but it also introduced some obvious tradeoffs:

no user state across devices

limited personalization

very little analytics to understand usage

harder to build any long-term retention loops

On the technical side, keeping things responsive while integrating something relatively heavy like Stockfish also required some care to avoid UI lag.
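The core pattern is language-agnostic: run the engine search on a worker thread and hand the result back to the UI through a thread-safe channel, so the main thread never blocks. Shown here as a Python sketch (on Android the same shape is a coroutine/executor plus a main-thread callback):

```python
# Sketch: keep heavy engine work off the UI thread; results come back
# through a thread-safe queue the UI can poll or drain on its own thread.
import queue
import threading

def search_async(position, search_fn, results: queue.Queue):
    def worker():
        results.put(search_fn(position))   # heavy search runs off-thread
    threading.Thread(target=worker, daemon=True).start()
```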

I’m curious how others here think about this:

At what point does “minimal” start hurting usability?

How do you approach analytics when avoiding intrusive tracking or logins?

Is skipping login entirely viable long-term, or does it eventually become a blocker?

Any patterns you’ve seen for keeping apps fast without feeling too bare?

Would appreciate any thoughts, especially from people who’ve tried similar approaches.
