r/Python

Built an open-source Nepali calendar API that computes dates astronomically
🔥 Hot ▲ 70 r/SideProject+4 crossposts

Been working on this for a couple months. It's called Project Parva, basically an API that computes Nepali calendar stuff (BS/AD conversion, festival dates, panchanga, muhurta) using real planetary position data from Swiss Ephemeris rather than storing hardcoded dates someone typed in from a government PDF.

The main thing that bothered me about existing stuff is they all assume Kathmandu. If you're building something for diaspora users, the sunrise-dependent calculations (tithi, muhurta windows) are just wrong for anyone outside KTM. This one takes actual lat/lon.

Verified against 65 dates from MoHA holiday PDFs across 2080-2082, passes all of them.

Here's the project (AGPL, open source):
GitHub: https://github.com/dantwoashim/Project_Parva

Reddit seems to flag the direct link to the API.
Happy to answer questions or take feedback on what's missing.

u/Natural-Sympathy-195 — 2 days ago
Built a Nepali calendar computation engine in Python, turns out there's no formula for it
🔥 Hot ▲ 63 r/Python

What My Project Does

Project Parva is a REST API that computes Bikram Sambat (Nepal's official calendar) dates, festival schedules, panchanga (lunar almanac), muhurta (auspicious time windows), and Vedic birth charts. It derives everything from real planetary positions using pyswisseph rather than serving hardcoded lookup tables. Takes actual lat/lon coordinates so calculations are accurate for any location, not just Kathmandu.
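To give a sense of what "derives everything from real planetary positions" means: a tithi, for example, is one 12° step of the Moon-Sun elongation. A minimal sketch of that step (illustrative only; Parva's actual code gets the longitudes from Swiss Ephemeris for a given instant and location):

```python
# Sketch: tithi from ecliptic longitudes (illustrative, not Parva's actual code).
# A tithi is one 12-degree step of the Moon-Sun elongation, numbered 1..30.
def tithi(moon_long_deg: float, sun_long_deg: float) -> int:
    elongation = (moon_long_deg - sun_long_deg) % 360.0
    return int(elongation // 12) + 1

# Hypothetical longitudes; in practice these come from an ephemeris
# (e.g. pyswisseph) for a given instant and observer location.
print(tithi(75.0, 40.0))  # elongation 35 deg -> tithi 3
```

The location-dependence mentioned above comes in because the longitudes (and the local sunrise used to label the day) are evaluated for the observer's coordinates, not the formula itself.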

Target Audience

Developers building apps that need Nepali calendar data programmatically. Could be production use for something like a scheduling app, a diaspora-focused product, or an AI agent that needs grounded Nepali date data. The API is public beta so the contract is stable but not yet v1. There's also a Python SDK if you want to skip the HTTP boilerplate.

Comparison

Most existing options are either NPM packages with hardcoded month-length arrays that break outside a fixed year range (usually 2000-2090 BS), or static JSON files someone manually typed from government PDFs. Both approaches fail for future dates and neither accounts for geographic location in sunrise-dependent calculations. Hamro Patro is the dominant consumer app but has no public API, so developers end up writing scrapers that break constantly. Parva computes everything from Swiss Ephemeris, which means it works for any year and any coordinates.

https://github.com/dantwoashim/Project_Parva

u/Natural-Sympathy-195 — 8 hours ago
▲ 1 r/Python

Punk-Records: A filesystem-centric workspace orchestrator for AI agents

Hello everyone,

I would like to share an open-source CLI tool I have been developing called Punk-Records, and I am actively looking for feedback on its architecture and methodology from this community.

The problem / Target Audience

Current AI agent frameworks often rely on complex, opaque code layers to manage state and context. When an LLM navigates complex, multi-stage tasks, it frequently loses context or attempts unstructured, destructive edits to files. Rather than building complex custom frameworks that try to outsmart general frontier models (like Claude or Gemini), we need better ways to safely orchestrate them. If you work with AI agents and feel like your context window is just not working as expected, this tool might help you "engineer" the context window itself, rather than just the ad-hoc prompts.

What My Project Does

Punk-Records acts as a specialized orchestrator for AI agent workflows where the filesystem itself serves as the state machine. By treating directories as state boundaries and markdown files as executable contracts, it provides deterministic precision and human observability.

The core methodology is heavily inspired by the paper: Interpretable Context Methodology: Folder Structure as Agent Architecture

Key Highlights:

  • Functional Anchors: To handle the unstructured nature of a filesystem, the tool uses a "Functional Anchor" approach for document safety. It forces the LLM to target specific, machine-readable metadata blocks rather than letting it haphazardly rewrite entire files.
  • Dogfooding: The tool is written in Python using Typer and Jinja2. As a fun milestone, I actually used Punk-Records while developing Punk-Records to recursively build and refactor itself!

I would appreciate any critique of the codebase, the "Functional Anchor" approach to document safety, and general thoughts on how the tool handles the unstructured nature of a filesystem and markdown files.
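For readers picturing the "Functional Anchor" idea, here is a minimal sketch of my reading of it: edits are only permitted between machine-readable marker lines, so the rest of the file can never be rewritten. The marker syntax here is hypothetical, not Punk-Records' actual format:

```python
# Sketch of anchor-targeted editing: replace only the region between
# machine-readable markers, leaving the rest of the document untouched.
# The marker syntax is hypothetical, not Punk-Records' actual format.
import re

def replace_anchor(doc: str, anchor_id: str, new_body: str) -> str:
    pattern = re.compile(
        rf"(<!-- anchor:{re.escape(anchor_id)} -->\n).*?(\n<!-- /anchor:{re.escape(anchor_id)} -->)",
        re.DOTALL,
    )
    if not pattern.search(doc):
        raise ValueError(f"anchor {anchor_id!r} not found")  # refuse unstructured edits
    return pattern.sub(lambda m: m.group(1) + new_body + m.group(2), doc)

doc = """# Plan
<!-- anchor:status -->
state: todo
<!-- /anchor:status -->
Notes the agent must not touch."""

print(replace_anchor(doc, "status", "state: done"))
```

The point of the pattern is that a missing or malformed anchor makes the edit fail loudly instead of falling back to a whole-file rewrite.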

I am not a traditional software developer; I primarily work in cybersecurity and infrastructure, so I am sure I have my fair share of bad coding practices! I am here to learn in the process.

Thank you for your time and insights!

u/BigComfortable3281 — 41 minutes ago
Day 75 of 100 Days 100 IoT Projects
▲ 4 r/Python+1 crossposts

Hit the 75 day mark today. 25 projects left.

Day 75 was ESP-NOW + RFID — one ESP8266 scans a card and wirelessly sends the UID to a second ESP8266 which displays it on OLED. No WiFi, no broker, direct peer-to-peer.

Some highlights from the past 75 days:

ESP-NOW series — built a complete wireless ecosystem from basic LED control to bidirectional relay and sensor systems to today's wireless RFID display.

micropidash — open source MicroPython library on PyPI that serves a real-time web dashboard directly from ESP32 or Pico W. No external server needed.

microclawup — AI powered ESP32 GPIO controller using Groq AI and Telegram. Natural language commands over Telegram control real GPIO pins.

Wi-Fi 4WD Robot Car — browser controlled robot car using ESP32 and dual L298N drivers. No app needed, just open a browser.

Smart Security System — motion triggered keypad security system with email alerts via Favoriot IoT platform.

Everything is open source, step-by-step documented, and free for students.

Repo: https://github.com/kritishmohapatra/100_Days_100_IoT_Projects

GitHub Sponsors: https://github.com/sponsors/kritishmohapatra

u/OneDot6374 — 6 hours ago
I built an open-source tool that queries SQL databases in plain English using a local LLM.
▲ 4 r/LocalLLaMA+1 crossposts

**What My Project Does**

OpsMind is a Streamlit app that connects to a SQL database and lets you ask questions in plain English. It uses Ollama (Phi3/Mistral) running locally — no cloud, no API costs, no data leaves your machine. It converts natural language to SQL, executes the query, and explains the result.

It also includes RAG document search (upload PDFs, search by meaning via ChromaDB), compliance dashboards (batch traceability, temperature excursions, allergen matrix), and 5 automated alert types.

A schema registry maps 7 business domains to specific tables, so when you ask about "orders", only order-related tables are sent to the LLM — not all 150. A pre-built query library handles the 10 most common questions without LLM generation at all.
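The schema-registry routing described above can be sketched in a few lines. The domains, keywords, and table names below are made up for illustration, not OpsMind's actual registry:

```python
# Sketch: route a question to a small subset of tables before prompting the
# LLM, so a 150-table schema never goes into the context window.
# Domains, keywords, and table names below are hypothetical.
SCHEMA_REGISTRY = {
    "orders":      {"keywords": ["order", "customer", "delivery"],
                    "tables": ["orders", "order_lines", "customers"]},
    "production":  {"keywords": ["batch", "run", "yield"],
                    "tables": ["production_runs", "batches"]},
    "temperature": {"keywords": ["temperature", "excursion", "chiller"],
                    "tables": ["temperature_logs", "sensors"]},
}

def tables_for(question: str) -> list[str]:
    q = question.lower()
    tables: list[str] = []
    for domain in SCHEMA_REGISTRY.values():
        if any(kw in q for kw in domain["keywords"]):
            tables.extend(domain["tables"])
    return tables

print(tables_for("show late orders per customer"))
```

Only the matched tables' schemas then get serialized into the prompt; a question matching no domain can fall back to the pre-built query library or a clarification request.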

36 pytest tests, GitHub Actions CI, password auth, query caching (5-min TTL), MIT licensed.

GitHub: https://github.com/Pawansingh3889/OpsMind

Landing page: https://pawansingh3889.github.io/OpsMind/

Tech stack: Python, Streamlit, SQLAlchemy, ChromaDB, Ollama, Plotly, pytest

**Target Audience**

This is a working tool I built for a real use case — I work in food manufacturing in the UK and operators needed answers from the production database but couldn't write SQL. IT requests took 2-3 days. This replaces that process.

It ships with a demo database (662 production runs, 451 orders, 3,600 temperature logs) so anyone can try it. It's open source and designed to be self-hosted on any machine with 16GB RAM.

**Comparison**

- vs ChatGPT / Claude with database plugins: OpsMind runs 100% locally via Ollama. No data leaves your network. No API costs. No internet required.

- vs Langchain SQL agents: OpsMind adds a schema registry (handles 150+ tables by routing to relevant ones), a pre-built query library (bypasses the LLM for common questions), and compliance-specific features like batch traceability and allergen matrices.

- vs traditional BI tools (Power BI, Tableau): OpsMind is free, self-hosted, and answers ad-hoc questions in natural language instead of requiring pre-built reports.

Known limitations: ~60% accuracy on novel complex queries, 10-25s response time on 16GB RAM.

u/Jazzlike-Tiger-2731 — 22 hours ago
Day 75 of 100 Days 100 IoT Projects — building IoT projects daily with MicroPython
▲ 3 r/Python

What My Project Does:

A 100-day challenge where I build and document one real-world IoT project every single day using MicroPython on ESP32, ESP8266, and Raspberry Pi Pico. Every project includes wiring diagrams, fully commented code, and a README so anyone can replicate it from scratch. Projects range from basic sensor readings to AI-powered GPIO controllers, real-time dashboards, ESP-NOW wireless systems, RFID access control, OTA updates, and more.

I also published two open source MicroPython libraries on PyPI during this challenge — micropidash (IoT web dashboard) and microclawup (AI powered GPIO controller via Telegram and Groq AI).

Target Audience:

Students and beginners learning embedded systems and IoT with MicroPython. No prior hardware experience needed. Everything is free, open source, and structured so you can follow along project by project at your own pace.

Comparison:

Unlike paid courses or scattered YouTube tutorials, this is a single structured repository where every project builds on real hardware concepts and is fully documented. Unlike most GitHub repos that just dump code, every project here has a proper README, circuit diagram, and explanation. The goal is not just to showcase but to teach.

75 days in. 25 to go.

Repo: https://github.com/kritishmohapatra/100_Days_100_IoT_Projects

GitHub Sponsors: https://github.com/sponsors/kritishmohapatra

u/OneDot6374 — 6 hours ago
PyPI stats 2026 from the piwheels team
▲ 8 r/Python

We at piwheels.org have produced stats about PyPI and piwheels over the years. Here's a blog post showcasing some interesting stats about current PyPI data - package names, what people use as version strings, and more!

https://blog.piwheels.org/2026/03/pypi-stats/

Ever wondered what the longest package name is? Or what the most common version pattern is? Or which prefixes (like django- and mcp-) are most popular? Or whether the distribution of numbers in versions follows Benford's law? (I guess not)

There are now over 700k packages, and over 8 million versions. Enjoy!

(Note: I did get Claude to generate the stats, but in a reproducible Jupyter notebook I've published, based on real data and my own prior work in this area.)

u/benn_88 — 16 hours ago
▲ 1 r/Python

I built a new human-readable config language — CHSL (Python & Node.js)

Hello developers!

I recently built and published a new configuration language as a personal project. It is designed from the ground up to be as simple, clean, and predictable as possible for humans to read and write.

Instead of worrying about strict syntax rules, CHSL lets you focus purely on your data.

Here is what CHSL does:

  • No Quotes Needed: It automatically understands text, numbers, and booleans (YES/NO).
  • Safe OS Secrets: You can pull API keys and passwords directly from your operating system environment using COPY_ENV. They never have to be hardcoded in your file.
  • Reusable Values: You can define a value once and reuse it anywhere in the file using the COPY keyword.
  • Explicit Multi-line Text: Write long text blocks clearly using numbered lines.
  • Clear Nesting: Use numbered header tags (like 0Group0 or 1SubGroup1) to always know exactly what scope you are in.

Here is what a .chsl file looks like:

NOTE: This is a CHSL configuration file

0ServerSetup0
port = 8080
is_active = YES

welcome_message = LINE
1 = Hello!
2 = Welcome to the new server.

NOTE: Pull the API key securely from the OS
api_key = COPY_ENV STRIPE_SECRET_KEY

NOTE: Reuse the port number
backup_port = COPY port

What My Project Does: CHSL is a configuration language parser and serializer. It lets you read and write .chsl config files in Python and Node.js, with built-in support for OS secrets, variable references, file modularity, and typed values.

Target Audience: Developers who write configuration files for their applications. This is a real, usable project — not a toy. It is suitable for personal projects and small to medium applications today.

Comparison: Unlike other config formats, CHSL has native COPY_ENV for pulling OS secrets directly, COPY for referencing other values in the same file, and an explicit nesting system that makes scope always visually clear. It also supports multi-line text blocks without escape characters.
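As an illustration of the format, here is a tiny reader for the flat subset of CHSL shown above (key = value, YES/NO booleans, COPY references, COPY_ENV). This is a sketch of the semantics, not the official chsl package; nesting headers and multi-line LINE blocks are omitted:

```python
# Sketch: a tiny reader for the flat CHSL features shown above.
# Illustrative only; the official chsl parsers handle nesting and LINE blocks.
import os

def parse_chsl(text: str) -> dict:
    data: dict = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("NOTE:") or "=" not in line:
            continue  # skip blanks, NOTE comments, and header tags like 0Group0
        key, _, raw = (p.strip() for p in line.partition("="))
        if raw.startswith("COPY_ENV "):
            data[key] = os.environ.get(raw.split(maxsplit=1)[1], "")
        elif raw.startswith("COPY "):
            data[key] = data[raw.split(maxsplit=1)[1]]  # reuse an earlier value
        elif raw in ("YES", "NO"):
            data[key] = raw == "YES"
        elif raw.lstrip("-").isdigit():
            data[key] = int(raw)
        else:
            data[key] = raw
    return data

cfg = parse_chsl("""0ServerSetup0
port = 8080
is_active = YES
backup_port = COPY port""")
print(cfg)
```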

Try it out: the official parsers for Python and Node.js are live today, and you can install them right now.

We welcome contributors! 🤝 CHSL is completely open-source. Right now, there are official engines for Python and JavaScript. If you are a developer who writes in Rust, Go, C++, Java, or Ruby, I would absolutely love your help to build CHSL engines for those languages!

You can read the documentation, see more features, and find the Contributing Guide on GitHub: 🔗 https://github.com/kcvabeysinghe/chsl

Thank you for reading, and I would love to hear your feedback!

u/kcvabeysinghe — 4 hours ago
CoryFlow – a lightweight RSS ticker for Linux (PySide6)
▲ 3 r/Python

I built a desktop RSS news ticker (like a TV news bar) using Python.

What my project does:

It reads RSS feeds and shows the headlines in a scrolling ribbon, like the subtitle bar on TV news.

Target Audience:

Free for now. I created it for myself and anyone else who likes to read the news without specifically opening a window for them.

Comparison:

You don't have to open a window and leave your work to read the news; your eyes can stray to the ticker while you wait for something to load.

I once saw a tool like this years ago and tried hard to find it again, so in the end I built one myself, for me and anyone who fancies such a thing.

I appreciate the feedback.

https://github.com/mfarrise/CoryFlow

u/FabulousConcert9434 — 19 hours ago
I open-sourced a Python desktop app, CLI, and SDK for BLE-enabled Fluke meters
▲ 2 r/Python

I open-sourced a Python desktop app, CLI, and SDK for BLE-enabled Fluke meters

I just open-sourced fluke-app, an unofficial Python project for working with BLE-enabled Fluke meters from a shared codebase.

Repo: https://github.com/zach-edf/fluke-app

What My Project Does

fluke-app provides a few different ways to work with BLE-enabled Fluke meters in Python:

  • a PySide6 desktop app for live readings, rolling charts, session logging, and guided workflows
  • a CLI for scanning, connecting, collecting readings, and exporting sessions
  • a Python SDK for scripting and integrations

It also includes shared protocol/session/export/workflow layers, fixture/debug tooling, and tests/CI.

Current built-in support is centered on the Fluke 376 FC.

Target Audience

This is mainly for:

  • engineers, technicians, and developers who want to stream or log readings from compatible Fluke meters
  • Python users who want a reusable SDK instead of only a GUI
  • people interested in BLE device tooling, measurement workflows, or desktop Python applications

Current status:

  • usable and functional, but still early as a public open-source project
  • not positioned as a polished commercial product
  • probably most useful right now for technically comfortable users who do not mind filing issues or working through device/platform edge cases

Comparison

The closest practical alternative I found for this use case is Fluke Connect, which is mobile-focused and tied to Fluke’s own app experience.

fluke-app is different in a few ways:

  • it is desktop-first rather than mobile-first
  • it also provides a CLI and a Python SDK, not just a GUI
  • it is aimed at scripting, repeatable logging, local exports, and developer-oriented workflows
  • it includes guided workflow steps, session replay, and fixture/debug tooling for development and testing
u/sharkysurfboi — 22 hours ago
▲ 0 r/Python

Looking for feedback: lightweight Python library for ML model diagnostics

Hello everyone,

After training models I kept hitting the same problem. Metrics looked fine. But I did not really know if the model was behaving correctly.

So I built a small Python library called diagnost.

What My Project Does

diagnost is a lightweight library for diagnosing trained ML models.

In one call it can evaluate performance, check calibration, detect drift, assess subgroup performance, and flag dataset issues like missing values, correlations, and outliers.

Example:

import diagnost

report = diagnost.evaluate(model, X_test, y_test, task="classification")
report.summary()

It can also compare models and export results to JSON.

Target Audience

Mainly data scientists and ML practitioners. Right now it is more of a lightweight tool for notebooks and experimentation. Not exactly production ready.

Comparison

Most libraries focus on training models or give you raw metrics.

diagnost focuses on post training checks and tries to give clear, structured diagnostics in one place. It also adds things like calibration checks, drift detection, and subgroup analysis without much setup.

If you want to contribute and/or have ideas, please get in touch.

PyPI: https://pypi.org/project/diagnost/
GitHub: https://github.com/Eklavya20/diagnost

It is still early, so I would really appreciate any feedback. Especially what checks you usually run manually.

u/TurquoiseBlu — 22 hours ago
▲ 0 r/Python

Built Python bot that writes my daily AI newsletter and saves it as draft. I just review and publish

What My Project Does

This is a Python automation that runs every morning at 6 AM, scrapes the latest AI news, writes a full Substack newsletter post using Claude, fetches a relevant cover image, and saves it as a draft. All without me touching anything.

It does NOT auto-publish. Every post goes into my Substack drafts. I review it, make edits if needed, and hit publish myself. The bot handles the tedious 90% (research, writing, formatting, image sourcing) and I stay in control of what goes live. Takes me 5 minutes to review instead of 2 hours to do from scratch.

Here's the full pipeline:

  1. Scrapes real-time AI news from Google News RSS. No API cost, no scraping libraries.
  2. Sends headlines to Claude (Anthropic API) with a detailed prompt to write a structured newsletter post with title, subtitle, body, references, and a descriptive cover image search query.
  3. Fetches a cover image: Claude generates a search phrase based on the post content (e.g. "student using AI laptop glowing futuristic"), then the bot queries the Pexels API to find a matching royalty-free image and saves it locally.
  4. Opens a real browser with Playwright, logs into Substack via session cookies, and types the entire post into the editor, including inserting Subscribe and Share CTA buttons.
  5. Saves it as a DRAFT. It never publishes. The post sits in my drafts until I review and click publish.
  6. Runs daily at 6 AM via macOS launchd.

It also saves a local Markdown backup of every post and the cover image to disk, so nothing is ever lost.

Tech stack:

Tool Purpose
Python Core automation logic
Anthropic Claude API AI-powered post writing
Playwright Browser automation (login, typing, button clicks)
Google News RSS Free real-time news scraping
Pexels API Royalty-free cover images
macOS launchd Cron-like daily scheduling

Some things I learned building this:

  • Substack doesn't have a public posting API, so I used Playwright to open a real browser, inject cookies, and interact with the editor like a human. It's janky but it works.
  • Google News RSS is underrated. https://news.google.com/rss/search?q=your+query+when:1d gives you the last 24 hours of headlines for free. No API key needed.
  • The hardest part wasn't the AI. It was dealing with Substack's random popups, overlays, and editor quirks. I wrote a dismiss_popups() function that tries 15+ selectors to close whatever dialog is blocking the page.
  • Prompt engineering matters. I constrain Claude to ONLY write about stories from the RSS feed and copy exact source URLs for references. No hallucinated links.
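The Google News RSS pattern from the list above is easy to reproduce. A sketch of building the query URL and pulling titles out of the returned XML (parsing is shown on a canned sample so it runs offline; the actual fetch is left out):

```python
# Sketch: build a Google News RSS query URL and extract headlines from the
# returned XML. Parsing is demonstrated on a canned sample, not a live fetch.
import urllib.parse
import xml.etree.ElementTree as ET

def news_rss_url(query: str, window: str = "1d") -> str:
    # when:1d limits results to the last 24 hours, as described above
    q = urllib.parse.quote_plus(f"{query} when:{window}")
    return f"https://news.google.com/rss/search?q={q}"

def headlines(rss_xml: str) -> list[str]:
    root = ET.fromstring(rss_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(news_rss_url("AI agents"))

sample = """<rss><channel>
<item><title>Story one</title></item>
<item><title>Story two</title></item>
</channel></rss>"""
print(headlines(sample))
```

In the real pipeline you would pass the URL to urllib or feedparser and hand the extracted headlines to the writing prompt.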

GitHub: https://github.com/drona23/substack-ai-bot

Target Audience

Anyone who runs (or wants to run) a content-heavy newsletter but doesn't have hours to spend on research and writing every day. It's a real project I use daily for my own Substack, not a toy. It's designed to be a human-in-the-loop assistant: the bot does the grunt work, you stay the editor.

Also useful as a reference for anyone working with Playwright browser automation, Google News RSS scraping, or structured prompt engineering with Claude.

Comparison

  • Zapier / Make.com newsletter automations: These connect services via webhooks but can't actually write content or interact with Substack's editor. They'd need a Substack API that doesn't exist for posting. My bot uses Playwright to automate the real browser UI.
  • AutoGPT / CrewAI / other agent frameworks: Those are general-purpose agent frameworks. This is a single-purpose, focused script with no framework overhead and no complex agent orchestration. Just a clean Python script that does one thing well.
  • Manual newsletter workflow: Research + write + format + find image + paste into Substack + add buttons = ~2 hours. This bot does it in under 3 minutes and saves a draft. I just review and publish.

Would love feedback. Has anyone else automated their newsletter pipeline? What would you add?

u/General_Head_2469 — 2 hours ago
Tour component - guide the user through your site!
▲ 0 r/Python

What my project does:
I just implemented a tour component that allows you to easily guide the user through all the components on your website.

Every time I make an app, I try to make it intuitive and easy to understand; however, for a complex or technical app this cannot be done easily. This is why I implemented the Driver.js library for Streamlit. In just a few lines of code you can have a working tour of your website. It uses the key parameter of some Streamlit components to locate them in the JS script:

import streamlit as st
from streamlit_tour import Tour

st.title("My App")
st.text_input("Name", key="name_input")

if st.button("Start Tour"):
    Tour.start(
        steps=[
            Tour.bind("name_input", title="Your Name", desc="Enter your name here."),
            Tour.info(title="That's it!", desc="You're ready to go."),
        ]
    )

Here is the GitHub repo; feel free to use this component and raise an issue if you encounter one!

https://github.com/mp-mech-ai/streamlit_tour

Target audience:

This is still in early development, so I wouldn't yet recommend using it in production, but my goal is to make it robust enough for that.

Comparison:

This type of guided tour was nonexistent in Streamlit, so here it is.

u/poppyshit — 7 hours ago
▲ 0 r/Python

HOW CAN I GAIN EXPERIENCE

Hello, good afternoon or evening. I'd like some help with something: how can I gain work experience to work with Python? I hope that makes sense. I want to clear up the doubts I have, because with how fast AI is advancing I'd like to stay very up to date for the job market.

u/Scary-Heron-3422 — 16 hours ago
[Update] I just updated my first ever Python library on PyPI....
▲ 0 r/Python

I officially released numeth a few months ago. It's a library focused on core Numerical Methods used in engineering and applied mathematics.

Today, I added visualizations to all the numerical method algorithms in numeth.

-  What My Project Does

Numeth helps you quickly solve and visualise tough mathematical problems - like equations, integration, and differentiation - using numerical methods.

It covers essential methods like:

  1. Root finding (Newton–Raphson, Bisection, etc.)

  2. Numerical integration and differentiation

  3. Interpolation, optimization, and linear algebra

  4. Graph visualizations for all except Linear Algebra methods, since they rely on vectors and matrices.

-  Target Audience

I built this from scratch with a single goal:

Make fundamental numerical algorithms ready to use for students and developers alike.

- Comparison

Most Python libraries, like NumPy and SciPy, are designed to use numerical methods, not understand them. Their implementations are optimized in C or Fortran, which makes them incredibly fast but opaque to anyone trying to learn how these algorithms actually work.

'numeth' takes a completely different approach.

It reimplements the core algorithms of numerical computing in pure, readable Python, structured into clear, modular functions. It also visualises the result in a graph, giving students and researchers a visual representation of the problem.

The goal is helping students, educators, and developers trace each computation step by step, experiment with the logic, and build a stronger mathematical intuition before diving into heavier frameworks.
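For a flavor of the "pure, readable Python" style described here, this is what a textbook bisection root-finder looks like (a generic version, not copied from numeth):

```python
# Sketch: textbook bisection, the style of algorithm numeth reimplements.
# Halve the bracketing interval [a, b] until it is smaller than tol.
def bisection(f, a: float, b: float, tol: float = 1e-10) -> float:
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        mid = (a + b) / 2
        if fa * f(mid) <= 0:
            b = mid           # root lies in the left half
        else:
            a, fa = mid, f(mid)  # root lies in the right half
    return (a + b) / 2

# Root of x^2 - 2 on [0, 2] is sqrt(2)
print(bisection(lambda x: x * x - 2, 0.0, 2.0))
```

Every step of this is traceable by hand, which is exactly the pedagogical trade-off the post describes versus the opaque compiled internals of SciPy.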

If you’re into numerical computing or just curious to see what it’s about, you can check it out here:

🔗 https://pypi.org/project/numeth/

or run 'pip install numeth'

The GitHub link to numeth:

🔗 https://github.com/AbhisumatK/numeth-Numerical-Methods-Library

Would love feedback, ideas, or even bug reports.

u/Prestigious_Bear5424 — 20 hours ago
▲ 0 r/Python

Lists and dictionaries

I'm having trouble understanding why sometimes I go into a for loop and need to use something like found = False and then set found = True when I find it, and why sometimes I need the loop and sometimes I don't. Is it like: if I know the sock is in drawer 4, I just use index 4? And if I don't know, I need the for loop to open all the drawers? Is that it? Please explain.
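The drawer analogy in the question maps directly to code: if you already know the index, you access it directly; if you don't, you loop with a found flag:

```python
# The drawer analogy from the question: known index vs. linear search.
drawers = ["hat", "belt", "scarf", "glove", "sock"]

# If you already know the sock is in drawer index 4, no loop is needed:
print(drawers[4])  # sock

# If you don't know where it is, you open every drawer and use a flag:
found = False
for i, item in enumerate(drawers):
    if item == "sock":
        found = True
        print(f"found the sock in drawer {i}")
        break
if not found:
    print("no sock in any drawer")
```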

u/Round_Plantain8319 — 22 hours ago
▲ 0 r/Python

2 New Discord Based Projects: RSS to Discord Webhook

Disclaimer: I know this community dislikes AI misuse and AI slop. That's not what this is. All our projects are tested (borderline over-tested), and every line of code is human-moderated. AI is heavily used in the codebase; every writer here can write their own code from scratch, but doing so is decidedly inefficient.

Now, onto the projects: DiscRSS and CongressReport. Two simple Python tools at the final test stages, (Target Audience) built for developers, sysadmins, and self‑hosters who want lightweight automation.

Comparison: what sets us apart?

I literally had it running on a Raspberry Pi in less than 2 minutes in the last test. The ease of use and support really set it apart from similar tools. It is also free software that anyone can use or modify.

What My Project Does:

DiscRSS monitors RSS feeds and forwards them to webhooks, with custom routes per feed or individual webhooks. Example: one program reads two separate feeds and sends them to two different webhooks. Simple, flexible, useful.

CongressReport uses the QuiverQuant API to track congressional trades and push them into a Discord channel via webhook. Perfect if you run a finance‑focused server.
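As a sketch of the per-feed routing DiscRSS describes (the config shape, URLs, and function names here are my own invention, and the actual HTTP POST to the webhook is left out):

```python
# Sketch: route entries from multiple RSS feeds to different Discord
# webhooks. Feed URLs and webhook URLs are placeholders; the real DiscRSS
# config format may differ, and the HTTP POST itself is stubbed out.
ROUTES = {
    "https://example.com/news.rss": "https://discord.com/api/webhooks/AAA",
    "https://example.com/blog.rss": "https://discord.com/api/webhooks/BBB",
}

def build_payloads(feed_url: str, entries: list[dict]) -> list[tuple[str, dict]]:
    webhook = ROUTES[feed_url]  # each feed has its own destination webhook
    return [(webhook, {"content": f"{e['title']}\n{e['link']}"}) for e in entries]

payloads = build_payloads(
    "https://example.com/news.rss",
    [{"title": "Hello", "link": "https://example.com/1"}],
)
print(payloads)
```

Sending is then one POST of each payload dict to its webhook URL.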

What do you think?

TRADELY is always looking for more people to help contribute and improve these tools.

Download any of our tools or libraries: https://doc.tradely.dev

Want to become a dev? Join the Discord and open a ticket in the SUPPORT TICKET section. You’ll need to provide proof you’re a capable coder.

Have feedback? Still the Discord.

Found an issue? We don't care... just kidding. Join the Discord.

techareaone/DiscRSS: A tool to forward RSS Feeds to Discord Webhooks

u/Emergency-Buyer-7384 — 22 hours ago
Week