u/dusan69

[10 attached images — IKI model update: layout weight]

IKI model update: layout weight

So what does a “perfectly fair and equal society” look like… for keyboard layouts?

Yeah, I know, that sounds dangerously philosophical for a keyboard subreddit. I swear this started as a technical problem, not a political manifesto 😄

I’m not into politics at all. In fact, I actively avoid it. But at some point, to separate subjective preference from objective reasoning, I ended up borrowing political metaphors to describe what my typing model is trying to do.

Figure 1 shows the result from my original, simple, straightforward, heuristic-based definition: using a corpus that is 100% English, on a standard 4-row × 10-column keyboard, and restricting the analysis to the four most popular layouts (QWERTY, QWERTZ, AZERTY, Dvorak), the model reconstructs the following “bridge population” between the real-world dataset and the fully uniform hypothetical population:

- Dvorak: ~44%

- QWERTY: ~17%

Figure 2 shows the result using a definition proposed by Claude AI:

- Dvorak rises slightly to 46%

- QWERTY drops to 12%

So the numbers are not radically different. From the very beginning, my intuition was that a “reasonable” distribution should look something like:

- Dvorak ≈ 40%

- QWERTY / QWERTZ / AZERTY sharing the remaining 60%

But the important difference is qualitative, not numerical. Claude’s formulation is simply more _scientific_. It rests more on accumulated knowledge and objective facts than on personal opinion or intuition.

What the model is trying to describe is an “ideal population”: an infinite world of users evenly distributed across the enormous set of all possible layouts and all possible languages — or more precisely, across random character sequences.

That hypothetical population becomes the neutral environment where typing speed on any layout and any language can be estimated fairly and compared meaningfully.

What surprised me most is that Claude’s argument actually survived pretty aggressive criticism from GPT-5, plus additional challenges from Gemini, GPAI, DeepSeek… and even me trying to poke holes in it.

Knowledge-wise, I’m basically the student here. But in this weird academic AI debate, I wasn’t exactly the student, and Claude wasn’t exactly the professor either.

I was more like… a debate moderator armed with too many language models.

I used one AI’s criticism to push another AI into refining its ideas.

Not to manipulate them — I wasn’t trying to “win” for my own theory.

I wanted the most correct answer possible, even if it destroyed my original assumptions.

If I had to force the political metaphor one last time:

- I’m the public

- Claude is the executive branch

- GPT-5 is the judiciary

- The other AIs are the legislature

A chaotic government, but surprisingly productive.

There are still weeks of arguing ahead over details, wording, definitions, and edge cases. But mathematically speaking, the core definitions and theorems are now in place.

The debate has basically settled.

The AIs reached consensus.

This post was translated into English in free style by an AI.

#ArtificialIntelligence

#KeyboardLayouts

#Dvorak

#StatisticalModeling

#EntropicInference

u/dusan69 — 4 days ago

**Finally: My Typing Speed Model Actually Works (Kinda)**

After weeks of drowning in theory and debugging my soul, I finally have results. I’ve officially upgraded my typing speed predictor from "random guess" to a **Structured Weight** model that actually understands how we type.

**The "Rare Key" Respect Factor** In English, ‘E’ and ‘T’ are everywhere, while ‘Z’ is basically an urban legend (0.07% frequency). To make my model smart, I use the "Inverse Frequency" trick: the rarer the bigram, the more "respect" (weight) it gets in the math.
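The “inverse frequency” trick above can be sketched in a few lines. This is not the author’s actual code — the function name and corpus are made up for illustration — just a minimal version of the idea: count bigrams, then give each one a weight proportional to the inverse of its frequency, so rare bigrams count for more.

```python
from collections import Counter

def inverse_frequency_weights(text):
    """Weight each bigram by the inverse of its relative frequency,
    so rare bigrams (the 'Z' urban legends) get more respect."""
    bigrams = [text[i:i + 2] for i in range(len(text) - 1)]
    counts = Counter(bigrams)
    total = sum(counts.values())
    # weight = 1 / relative frequency = total / count
    return {bg: total / n for bg, n in counts.items()}

weights = inverse_frequency_weights("the quick brown fox jumps over the lazy dog")
# 'th' occurs twice in this toy corpus, 'la' only once,
# so 'la' ends up with twice the weight of 'th'
```

In practice you would compute the counts over the full corpus (e.g. the 136M Keystrokes dataset) rather than a single sentence, and likely cap or smooth the weights so that one-off bigrams don’t dominate the fit.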

**Playing God with Data** I’ve made the weights flexible enough to simulate a "Keyboard Utopia".

* **The Gender Gap:** If my sample is 20% men and 80% women, the model buffs every guy to 2.5x and nerfs every girl to 0.625x to force a perfectly balanced 50/50 split.

* **The Layout Revolution:** In the real world, QWERTY is a bully. Dvorak users are basically a 0.1% myth. But my model can pretend we live in a world where Dvorak has 44% market share, just to see what a "perfect" keyboard ecosystem looks like.
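The reweighting in both bullets is the same operation: divide the target share of each group by its observed share to get a per-group multiplier. A minimal sketch (function name and dictionaries are my own, not from the project):

```python
def balance_weights(observed, target):
    """Per-group multipliers that reweight observed shares to target shares."""
    return {group: target[group] / observed[group] for group in observed}

# Gender example from the post: a 20% / 80% sample forced to a 50/50 split
w = balance_weights({"men": 0.20, "women": 0.80},
                    {"men": 0.50, "women": 0.50})
# men get boosted by about 2.5x, women scaled down by about 0.625x

# The same machinery handles the "Keyboard Utopia": real-world layout
# shares in, any hypothetical market shares (e.g. Dvorak at 44%) out.
```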

Check the glow-up from the old version (q4ik) to the new hotness (q519) in the attached images! 

#KeyboardLayout #Dvorak #DataScience #Speedrun

P.S. This is an AI-assisted project. The text above was translated and "spiced up" by AI to ensure maximum Reddit-tier vibes while keeping the technical details intact. The model was fitted to samples from the "136M Keystrokes" dataset [Dhakal V. et al., 2018]. The model itself was posted here in my previous post; it predicts the Inter-Key Interval (IKI) of bigrams, basically putting it in a heavyweight title match with the 'Total Word Effort' metric from the Cyanophage Playground.
