u/Junior_Bake5120

I built a proof of concept for something I think is inevitable: machine-readable developer identity.

The thesis is simple. We built robots.txt for crawlers, schema.org for search engines, llms.txt for AI docs. Every time machines needed to consume human content, a structured standard emerged.

Developer profiles are next. AI agents are getting good enough to screen GitHub profiles for hiring, contributor matching, code review assignment. But GitHub profiles were designed for human eyes. Pinned repos, green graphs, polished bios. None of that is structured data.

I spent a couple days building a proof of concept to explore what a structured developer identity could look like.

What it does:

  • Takes any GitHub username, extracts 14 signals from public data
  • Generates a structured JSON schema (devcard.json)
  • Scores profiles on two axes: Human Visibility (recruiters) and Agent Readiness (AI tools)
  • 5 SVG themes you can embed in your README
  • Profile advisor with actionable critiques
  • Compare two developers side-by-side
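
To make the output concrete, a devcard.json might look something like this. This is an illustrative guess at the shape, not the project's actual schema; every field name here is my assumption:

```json
{
  "schema_version": "0.1",
  "username": "octocat",
  "signals": {
    "languages": ["Python", "TypeScript"],
    "stack": ["FastAPI", "React"],
    "activity": { "commits_last_90d": 120, "active_weeks": 8 },
    "collaboration": { "reviews_given": 14, "issues_opened": 9 }
  },
  "scores": {
    "human_visibility": 72,
    "agent_readiness": 58
  }
}
```

The point of a fixed schema like this is that an agent can consume it with one JSON parse instead of scraping profile HTML.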

Not saying this is the final answer, but I believe this layer of the developer ecosystem is going to exist eventually. Someone is going to build the schema.org for developer profiles.

What do you think? Is this inevitable, or am I overthinking it?

GitHub: https://github.com/chiruu12/devcard

reddit.com
u/Junior_Bake5120 — 3 days ago

Should developer profiles have a machine-readable standard? Like robots.txt but for devs.

We have robots.txt for crawlers, schema.org for search engines, llms.txt for AI documentation. But there’s nothing equivalent for developer identity.


Right now, AI agents are getting good enough to evaluate developers. Devin ships code. Claude reasons through codebases. It’s not hard to imagine agents screening GitHub profiles, checking commit quality, stack depth, collaboration patterns, and surfacing candidates before a recruiter opens a tab.

When that happens, GitHub profiles have a problem. Pinned repos, green graphs, polished bios. All designed for human eyes. An agent sees unstructured HTML, inconsistent READMEs, and zero standardized signals about what someone actually builds or how they work.

I’ve been thinking about what a structured developer identity format could look like. Built a proof of concept in Python to test the idea. It takes any GitHub username and extracts 14+ signals (languages, stack, activity, code quality, collaboration style) into a JSON schema.

Two scores per profile:

  • Human Visibility (0-100): how findable by recruiters
  • Agent Readiness (0-100): how readable by AI tools
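
A score like this is presumably some weighted combination of normalized signals clamped to 0-100. Here is a minimal sketch of that idea; the signal names and weights are my assumptions, not taken from the project:

```python
# Illustrative sketch: a 0-100 profile score as a weighted sum of signals.
# Signal names and weights are invented for the example; each signal is
# assumed to be pre-normalized into the range [0, 1].
AGENT_READINESS_WEIGHTS = {
    "has_profile_readme": 0.2,
    "repo_metadata_coverage": 0.3,  # fraction of repos with description/topics
    "language_breadth": 0.2,
    "recent_activity": 0.3,
}

def score(signals, weights):
    # Missing signals count as 0; the total is clamped to [0, 1], then scaled.
    total = sum(weights[name] * signals.get(name, 0.0) for name in weights)
    return round(max(0.0, min(1.0, total)) * 100)
```

For example, a profile with a README, half its repos documented, broad language use, and no recent activity would land around `score(..., AGENT_READINESS_WEIGHTS) == 55`.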

Not pitching this as the answer. Genuinely curious what this community thinks: is a machine-readable developer identity standard inevitable, or am I solving a problem that won’t actually materialize?

Link in the comments. Please check it out if you also think this shift is coming sooner than people expect.

u/Junior_Bake5120 — 3 days ago
▲ 0 r/github

Your GitHub profile was designed for human eyes. AI agents are about to be the ones looking.

Think about how the web evolved:

  • robots.txt so crawlers could understand websites
  • schema.org so search engines could understand content
  • llms.txt so AI agents could understand documentation

Developer identity has no equivalent yet.

You spend time on your GitHub profile. Pinned repos, contribution graph, maybe a profile README with badges. It looks good to humans. But AI agents evaluating developers? They see unstructured HTML, missing metadata, and zero standardized signals. No structured data about your tech stack, code quality patterns, or collaboration style.

This gap is going to matter. Agents are getting good enough to screen developer profiles for hiring, open-source matching, contributor evaluation. The question isn’t if this happens, but when. And when it does, the developers whose profiles are machine-readable will have an edge over the ones who only optimized for human scrolling.

I built a proof of concept to explore what a structured developer identity format could look like (link in the comments).

u/Junior_Bake5120 — 3 days ago

Should developer profiles have a machine-readable standard? Like robots.txt but for devs.

We have robots.txt for crawlers. schema.org for search engines. llms.txt for AI documentation. But there’s nothing equivalent for developer identity.

Right now, AI agents are getting good enough to evaluate developers. Devin ships code. Claude reasons through codebases. It’s not hard to imagine agents screening GitHub profiles in the near future, checking commit quality, stack depth, collaboration patterns, and surfacing candidates before a recruiter opens a tab.

When that happens, GitHub profiles have a problem. Pinned repos, green graphs, polished bios. All designed for human eyes. An agent sees unstructured HTML, inconsistent READMEs, and zero standardized signals about what someone actually builds or how they work.

I’ve been thinking about what a structured developer identity format could look like. Built a proof of concept in Python to test the idea. It takes any GitHub username and extracts 14+ signals (languages, stack, activity, code quality, collaboration style) into a JSON schema.

Two scores per profile:

  • Human Visibility (0-100): how findable by recruiters
  • Agent Readiness (0-100): how readable by AI tools

Tech details if you’re curious: Python 3.11+, Typer, async httpx with semaphore-based rate limiting, Pydantic, Rich, diskcache. 14 extractors run concurrently via asyncio.gather. 361 tests with httpx.MockTransport, zero real API calls in the test suite.
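
The concurrency pattern described above (many extractors fanned out through `asyncio.gather`, with a semaphore capping concurrent requests) can be sketched like this. The function names and the limit of 5 are illustrative, not taken from the repo:

```python
import asyncio

async def bounded(sem, coro):
    # The semaphore caps how many extractors run their API calls at once,
    # which keeps concurrent requests under GitHub's rate limits.
    async with sem:
        return await coro

async def run_extractors(extractors, limit=5):
    # Schedule every extractor coroutine together; gather returns results
    # in the same order the extractors were passed in.
    sem = asyncio.Semaphore(limit)
    return await asyncio.gather(*(bounded(sem, e) for e in extractors))
```

Each real extractor would be an async function doing an httpx call; the semaphore wrapper is what turns "14 concurrent tasks" into "at most N in-flight requests."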

Not pitching this as the answer. Genuinely curious what this community thinks: is a machine-readable developer identity standard inevitable, or am I solving a problem that won’t actually materialize?

GitHub: https://github.com/chiruu12/devcard

u/Junior_Bake5120 — 4 days ago
▲ 18 r/gsoc_2027 + 3 crossposts

GSoC results just came out. Whether you got selected or not, if you're contributing to open source, this might help.

As a mentor and a past contributor, I see the same patterns over and over:

  1. Contributor picks a random issue filed by some user. It never gets reviewed.

  2. Contributor skips CONTRIBUTING.md. PR gets rejected for process, not code.

  3. Contributor uses AI to write the fix. Can't answer a single question during review. PR dies.

  4. Contributor doesn't understand the codebase. Patches the symptom, not the root cause.

I built [OSS-Skills](https://github.com/chiruu12/OSS-Skills) - 8 Claude Code skills that walk you through the contribution process step by step. The key difference: the AI researches, you think.

What it does:

  • Finds unclaimed issues filed by actual maintainers (not random users)

  • Checks if the repo even accepts outside contributions before you waste time

  • Reads CONTRIBUTING.md so you don't skip the thing that gets your PR rejected

  • Walks you through the codebase architecture before you touch anything

  • Teaches you unfamiliar tech using examples from the actual repo (not generic docs)

  • Won't let you submit code until you can explain what it does and why
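
The "issues filed by actual maintainers" filter can be approximated directly from GitHub's REST API: issue objects returned by `GET /repos/{owner}/{repo}/issues` carry real `author_association` and `assignee` fields. A sketch (the function name and role set are mine):

```python
# Sketch: from a list of issue objects (as returned by the GitHub REST
# issues endpoint), keep open issues that are unassigned and were filed
# by someone with a maintainer-level relationship to the repo.
MAINTAINER_ROLES = {"OWNER", "MEMBER", "COLLABORATOR"}

def maintainer_filed_unclaimed(issues):
    return [
        issue for issue in issues
        if issue.get("assignee") is None
        and issue.get("author_association") in MAINTAINER_ROLES
        and "pull_request" not in issue  # this endpoint returns PRs too
    ]
```

Issues filed by maintainers are far more likely to be real, wanted work, which is why filtering on `author_association` beats grabbing whatever looks approachable.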

What it doesn't do:

  • Write your code for you

  • Generate PR descriptions

  • Let you skip understanding the codebase

Every skill has "thinking gates" where you have to explain your understanding before moving forward. The AI gives you hints about where to look, but you have to articulate the answer.

Requires Claude Code and the GitHub CLI.

If you try it, I'd genuinely like to hear what worked and what didn't. Open an issue or drop a comment here.

For GSoC candidates who didn't get selected this round: these skills are specifically designed to help you build the kind of deep project understanding that makes GSoC proposals stand out. Contributing well > contributing fast.

u/Junior_Bake5120 — 12 days ago