r/golang

I built a tool that generates OpenAPI specs from Go code — no annotations needed

I've been working on go-apispec, a CLI tool that generates OpenAPI 3.1 specs from Go source code using static analysis. No // @Summary comments, no struct tags, no code changes. Just point it at your project:

go install github.com/antst/go-apispec/cmd/apispec@latest
apispec --dir ./your-project --output openapi.yaml

It detects your framework (Chi, Gin, Echo, Fiber, Gorilla Mux, net/http), builds a call graph from route registrations to handlers, and traces through your code to figure out what goes in and what comes out.

Concrete example. Given this handler:

func CreateUser(w http.ResponseWriter, r *http.Request) {
    var user User
    if err := json.NewDecoder(r.Body).Decode(&user); err != nil {
        w.WriteHeader(http.StatusBadRequest)
        json.NewEncoder(w).Encode(ErrorResponse{Error: "invalid body"})
        return
    }
    w.WriteHeader(http.StatusCreated)
    json.NewEncoder(w).Encode(user)
}

It produces:

/users:
  post:
    requestBody:
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/User'
    responses:
      "201":
        description: Created
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/User'
      "400":
        description: Bad Request
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/ErrorResponse'

Both status codes, both response types, the request body — all inferred from the code. Fields without omitempty are marked required in the schema.

Some of the harder problems it solves:

  • switch r.Method { case "GET": ... case "POST": ... } → produces separate operations per method. This uses control-flow graph analysis via golang.org/x/tools/go/cfg to understand which code runs under which branch.
  • APIResponse[User] → instantiates the generic struct with concrete types in the schema.
  • Handlers registered via interfaces → traces to the concrete implementation to find response types.
  • w.Header().Set("Content-Type", "image/png") → that endpoint's response uses image/png, not the default application/json.
  • Multiple frameworks in one project (e.g., Chi on :8080, Gin on :9090) → all routes from all frameworks appear in the spec.

What it can't do (static analysis limitations): reflection-based routing, computed string concatenation for paths ("/api/" + version), complex arithmetic across functions for status codes. These require runtime information.

Output is deterministic (sorted keys), so you can commit the spec and diff it in CI.

Background: This started as a fork of apispec by Ehab Terra, which provided the foundational architecture — the AST traversal approach, call graph construction, and the pattern-matching concept for framework detection. I've since rewritten most of the internals: type resolution pipeline, schema generation, CFG integration, generic/interface support, deterministic output, and the test infrastructure. But the original design shaped where this ended up, and I'm grateful for that starting point.

GitHub: https://github.com/antst/go-apispec

Try it: go install github.com/antst/go-apispec/cmd/apispec@latest && apispec --dir . --output openapi.yaml

Would love to hear if you try it on a real project — especially cases where it gets something wrong or misses a pattern. That's the most useful feedback.

u/Apprehensive-Ebb2263 — 14 hours ago
How to Turn a Go GUI App into a Mac App with Parall

I am the developer of Parall, and I have been using it for a very practical workflow on macOS that makes local GUI development much more convenient.

Instead of opening Terminal every time, typing a command, and launching the app manually, I create a lightweight app bundle for the project and pin it to the Dock. Then the loop becomes very simple. Edit code, quit the app, click the Dock icon, and immediately run the latest code from the project folder.

This is especially nice for learning Go GUI development, testing small local tools, and iterating on a project without extra friction.

Here is a simple example using Go and Fyne.

1. Create a Fyne project in a new folder

Create a new folder for the example project:

mkdir ~/Downloads/FyneParallDemo
cd ~/Downloads/FyneParallDemo
go mod init fyneparalldemo
go get fyne.io/fyne/v2@latest

Create main.go:

package main

import (
    "fyne.io/fyne/v2"
    "fyne.io/fyne/v2/app"
    "fyne.io/fyne/v2/container"
    "fyne.io/fyne/v2/widget"
)

func main() {
    a := app.New()
    w := a.NewWindow("Fyne Parall Demo")
    w.Resize(fyne.NewSize(420, 200))

    label := widget.NewLabel("Hello from a local Fyne app")
    button := widget.NewButton("Change text", func() {
        label.SetText("You are running the latest code from your folder")
    })

    w.SetContent(container.NewVBox(
        label,
        button,
    ))

    w.ShowAndRun()
}

Test it once from Terminal:

cd ~/Downloads/FyneParallDemo
go mod tidy
go run .

If the window opens, the project is ready to use with Parall.

2. Run Parall and select Command Shortcut mode

Open Parall and create a new shortcut using Command Shortcut mode.

This is the mode that lets you launch a local command as a normal macOS app, with its own Dock icon and the window created by your Go code. This is not limited to Fyne. It can also work with other Go apps, other GUI frameworks, and more broadly with terminal apps that you want to launch through a normal macOS app shortcut.

3. Enter the command, arguments, and working directory

For this example, use the Go binary as the command.

Command path:

/opt/homebrew/bin/go

Command arguments:

run .

Working directory:

/Users/ighor/Downloads/FyneParallDemo

Environment variables are not needed for this demo, so you can leave that field empty.
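Taken together, those three fields amount to the shortcut storing a command roughly equivalent to this one-liner (using the example paths above; this is my reading of the setup, not something Parall exposes directly):

```shell
cd /Users/ighor/Downloads/FyneParallDemo && /opt/homebrew/bin/go run .
```

Because it is `go run .` rather than a prebuilt binary, every launch compiles and runs whatever is currently in the folder.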

4. Configure the shortcut name and icon

Give the shortcut a clean name, for example:

Fyne Parall Demo

Then choose an icon for it. This is a small detail, but it makes the shortcut feel much more like a real app once it is pinned to the Dock.

5. Optionally enable a menu bar icon and Dock icon effects

If you want, enable the menu bar icon for the shortcut.

That gives you another way to identify and access the running shortcut from the macOS menu bar.

You can also enable Dock icon effects.

This is optional, but it adds something that most Mac apps do not have. Instead of trying to mimic a normal app experience, it gives the shortcut a more dynamic and distinctive feel.

6. Export the app bundle, approve it, and use it

Export the shortcut as an app bundle.

macOS may ask you to approve or confirm the app the first time, depending on your system settings. Approve it, then launch it.

After that, you can pin it to the Dock and use it like a normal app.

Why this is useful for learning and testing

This setup is great for local Go GUI development because it shortens the feedback loop.

  • You keep your source code in a normal project folder.
  • You edit the Go files whenever you want.
  • You quit the running app.
  • You click the Dock icon again.
  • The latest code from the folder runs immediately.

That makes experimentation much easier than going through Terminal every time. It is especially useful when you are learning Fyne, testing UI changes, or building small local desktop tools that you want to relaunch often during development.

This is not about packaging a frozen standalone build for distribution. It is about making local development feel smoother and more natural on macOS.

Parall is available on the Mac App Store and can do much more than this post covers. Learn more here: https://parall.app

u/JulyIGHOR — 5 hours ago
Announced GoRL v2.1.0

GoRL is my Go rate limiter implementation. It supports Fixed Window, Sliding Window, Token Bucket, and Leaky Bucket, and ships middleware for net/http, Gin, Fiber, and Echo.

This release focused on correctness and distributed execution.
I refined the public API, unified rate-limit metadata and HTTP headers, addressed sliding-window edge cases, and added Redis Lua scripts so the built-in algorithms execute atomically.
I also added Redis tests and benchmark examples with pre-Lua and post-Lua performance numbers.

GitHub Repository:
https://github.com/AliRizaAynaci/gorl

u/aynacialiriza — 6 hours ago

vet - audit your Go module dependencies with CEL based policy as code, including time-based cooldown checks

Go's govulncheck is solid for known CVEs, but it doesn't cover the full picture: unmaintained packages, license violations, or low OpenSSF Scorecard scores. Packages that simply shouldn't be in prod.

We have been building vet to fill exactly this gap.

vet is an open source SCA tool written in Go. It reads your go.mod / go.sum and evaluates each dependency against data from OSV, OpenSSF Scorecard, and other sources. The interesting part is how you express policy: it uses CEL rather than config files or flags.

# Fail CI if any dep has critical CVEs or is unmaintained

vet scan -D . \
  --filter '(vulns.critical.size() > 0) || (scorecard.scores.Maintained == 0)' \
  --filter-fail

Or define a policy file you can version alongside your code:

# .vet/policy.yml
name: production policy
filters:
  - name: no-critical-vulns
    value: vulns.critical.size() > 0

  - name: maintained
    value: scorecard.scores.Maintained < 5

  - name: approved-licenses
    value: |
      !licenses.exists(p, p in ["MIT", "Apache-2.0", "BSD-3-Clause", "ISC"])

vet scan -D . --filter-suite .vet/policy.yml --filter-fail

The filter input is a typed struct (vulns, scorecard, licenses, projects, pkg), so writing and testing expressions is straightforward. There's also a GitHub Action for CI integration.

Repo: https://github.com/safedep/vet

One addition worth calling out separately: time-based cooldown checks.

Most supply chain compromises rely on speed: a malicious version gets published, and automated builds pull it within hours, before detection catches up. A cooling-off period is a blunt but effective guardrail. vet supports this via a now() function in its CEL evaluator (landed via community contribution PR #682):

vet scan -D . \
  --filter-v2 '!has(pkg.insight.package_published_at) || (now() - pkg.insight.package_published_at).getHours() < 24' \
  --filter-fail

The !has(...) guard catches packages so new they haven't been indexed yet; those get blocked too. The duration is yours to set: 24h is a reasonable default, and some teams go to 7 days.

u/BattleRemote3157 — 13 hours ago
Orla is an open source framework written in Go that makes your agentic workflows 3 times faster and half as costly

Most agent frameworks today treat inference time, cost management, and state coordination as implementation details buried in application logic. This is why we built Orla, an open-source framework for developing multi-agent systems that separates these concerns from the application layer. Orla lets you define your workflow as a sequence of "stages" with cost and quality constraints, and then it manages backend selection, scheduling, and inference state across them.

Orla is the first framework to deliberately decouple workload policy from workload execution, allowing you to implement and test your own scheduling and cost policies for agents without having to modify the underlying infrastructure. Currently, achieving this requires changes and redeployments across multiple layers of the agent application and inference stack.

Orla supports any OpenAI-compatible inference backend, with first-class support for AWS Bedrock, vLLM, SGLang, and Ollama. Orla also integrates natively with LangGraph, allowing you to plug it into existing agents. Our initial results show a 41% cost reduction on a GSM-8K LangGraph workflow on AWS Bedrock with minimal accuracy loss. We also observe a 3.45x end-to-end latency reduction on MATH with chain-of-thought on vLLM with no accuracy loss.

Orla currently has 210+ stars on GitHub and numerous active users across industry and academia. We encourage you to try it out for optimizing your existing multi-agent systems, building new ones, and doing research on agent optimization.

Please star our GitHub repository to support our work; we really appreciate it! We would also greatly appreciate your feedback, thoughts, feature requests, and contributions!

Thank you!


My sister complained about RedTrack. I built her a CLI/TUI in Go.

Good Friday, I'm studying Go, and my sister comes home complaining about work.

She uses RedTrack for paid traffic and spends the whole day doing the same thing: opening a thousand tabs, applying a filter, waiting for it to load, going back, filtering again… just to find out whether a campaign is turning a profit.

I said: "this can be solved with one command in the terminal".

She laughed. I opened VS Code.

A few hours later, a CLI was born that talks directly to the RedTrack API and solves exactly that problem.

Today it has two modes:

• CLI

You run something like:

**redtrack campaigns list --status active --json**

It spits out JSON or CSV, works with pipes, and is easy to plug into automations or even AI agents.

• TUI (dashboard mode)

A dashboard-style interface in the terminal.

Navigate accounts, campaigns, ads, and conversions using only the keyboard, with full drill-down.

The stack I used:

Go + Cobra + Bubble Tea v2 + Lipgloss + Bubbles

A single binary, zero dependencies.

It's still an MVP, but it already covers:

• campaigns

• conversions

• a dashboard with today's stats

I'm evolving it toward full CRUD for offers, networks, and landings.

Some decisions I made and am unsure make sense for people who work with tracking day to day:

• I used /campaigns/v2 (paginated) as the default instead of the simpler v1

• Config via local file + env + flag (in that order of priority)

• In the TUI, if there's no API key, everything locks and it opens straight to the setup screen

If you work with paid/affiliate traffic and live stuck in slow dashboards, I'd really like to hear:

how do you check performance today?

what annoys you most about these tools?

would you use something like this in the terminal, or does it not make sense in your workflow?

If it sounds useful to you, message me and I'll send it over for testing.

u/davioliveeira — 3 hours ago
HypGo: An AI-Human Collaborative Go Framework - Schema-First + Huge Token Savings for AI Coding (Taiwan-made)

Hey r/golang,

I just open-sourced HypGo, a Go framework built from the ground up for AI-human collaboration in 2026. It was developed independently by me (and Claude Code).

Traditional frameworks force AI to read hundreds of lines of handler code.
HypGo flips it: Schema-First + single Project Manifest. AI only reads 6 lines of metadata + 1 YAML (~500 tokens) instead of 5,000+.

Key features:

  • Schema-first routes with Input/Output types
  • .hyp/manifest.yaml (AutoSync) — AI’s single source of truth
  • contract.TestAll(t, router) — one-line full contract testing
  • CLI tools: hyp impact, hyp ai-rules, auto migration diff
  • Zero runtime overhead (Radix Tree + zero-alloc Context Pool)
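To make "schema-first routes with Input/Output types" concrete, here is a generic sketch of the idea in plain Go. This is my illustration of the concept, NOT HypGo's actual API:

```go
package main

import "fmt"

// Route declares its Input/Output types up front, so tooling (or an
// AI) can read the contract from the type alone, without reading the
// handler body. All names here are hypothetical.
type Route[In any, Out any] struct {
	Method  string
	Path    string
	Handler func(In) (Out, error)
}

type CreateUserIn struct{ Name string }
type CreateUserOut struct{ ID int }

// CreateUser carries its contract (CreateUserIn -> CreateUserOut)
// in its type parameters, independent of the implementation.
var CreateUser = Route[CreateUserIn, CreateUserOut]{
	Method: "POST",
	Path:   "/users",
	Handler: func(in CreateUserIn) (CreateUserOut, error) {
		return CreateUserOut{ID: 1}, nil
	},
}

func main() {
	fmt.Println(CreateUser.Method, CreateUser.Path)
}
```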

Quick start:

go install github.com/maoxiaoyue/hypgo/cmd/hyp@latest
hyp api myservice && cd myservice

I haven't finished writing the English version of the wiki yet; I'll be preparing those pages over the next few days.

Thanks for reading.

u/Miserable-Chris — 14 hours ago