u/Objective-Test-5374

We detect when your DNS records change outside our editor — and show you exactly what changed

DNS Drift

DNS drift occurs when records change without going through your intended management interface. This can happen for several reasons:

  • Direct Access: Someone logs into the registrar or DNS provider directly.
  • Shadow Automation: A script modifies records via a different API.
  • Security Breach: An attacker compromises your DNS provider credentials.
  • Access Management: You forgot a contractor still had access from six months ago.

How it Works

We take periodic snapshots of your DNS zone as it appears from authoritative nameservers. When we detect a difference between the expected state (last known snapshot) and the live state, we create a drift event.

Drift Event Details

Each alert contains the specific data you need to investigate:

  • Record type (e.g., A, MX, CNAME)
  • Record name
  • Change type (Added, Modified, or Deleted)
  • Old value → New value
  • Detection timestamp
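Conceptually, the comparison is a set diff between the expected and live record sets. A simplified Python sketch (illustrative names, not our production code), assuming each snapshot maps a (name, type) pair to its value:

```python
def diff_snapshots(expected, live):
    """Compare two zone snapshots and emit one drift event per change.

    Each snapshot maps (record_name, record_type) -> value, e.g.
    ("example.com", "MX") -> "10 mx1.example.com".
    """
    events = []
    for key in expected.keys() - live.keys():          # present before, gone now
        name, rtype = key
        events.append({"type": rtype, "name": name, "change": "deleted",
                       "old": expected[key], "new": None})
    for key in live.keys() - expected.keys():          # new record appeared
        name, rtype = key
        events.append({"type": rtype, "name": name, "change": "added",
                       "old": None, "new": live[key]})
    for key in expected.keys() & live.keys():          # value changed in place
        if expected[key] != live[key]:
            name, rtype = key
            events.append({"type": rtype, "name": name, "change": "modified",
                           "old": expected[key], "new": live[key]})
    return events
```

Each event carries the old and new values from the list above; a detection timestamp is attached when the event is stored.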

Management & Resolution

You receive an alert immediately. In the dashboard, unacknowledged drift events appear as a warning badge on your domain.

  1. Review: Examine the event details in the dashboard.
  2. Acknowledge: If the change was intentional, acknowledge it. This updates the baseline snapshot and stops the alerting.
  3. Remediate: If the change was unauthorized, you know exactly what to revert to secure your zone.

Think of it as `git diff` for your DNS zone, running continuously.


reddit.com
u/Objective-Test-5374 — 1 day ago

You can transfer a domain between RacterMX organizations without downtime, re-verification, or DNS changes

If you're an agency managing client domains, or a team splitting into separate orgs, or just handing off a side project to someone else — you need to move domains between accounts without breaking anything.

How it works:

  1. Initiate: The source org admin initiates a transfer from the domain settings.
  2. Invite: We generate a cryptographically signed token and send an invitation email to the target recipient.
  3. Accept: The recipient clicks the link and accepts the transfer.
  4. Transfer: The domain moves instantly — aliases, routing rules, DNS records, SMTP credentials, email logs, security scan history, everything.
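Step 2 is the interesting one. A sketch of how a signed, expiring transfer token can work, using an HMAC over the payload (the field names, TTL handling, and secret are all illustrative; this is not our actual token format):

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-signing-key"   # assumption: held server-side only
TRANSFER_TTL = 7 * 24 * 3600          # pending transfers expire after 7 days

def issue_transfer_token(domain_id, target_email, now=None):
    """Create a token binding the domain to the invited recipient."""
    now = time.time() if now is None else now
    payload = json.dumps({"domain": domain_id, "to": target_email,
                          "exp": now + TRANSFER_TTL}, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def accept_transfer_token(token, now=None):
    """Return the payload if the signature and expiry check out, else None."""
    now = time.time() if now is None else now
    try:
        body, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(body.encode())
    except Exception:
        return None
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return None                   # tampered or signed with another key
    data = json.loads(payload)
    if data["exp"] < now:
        return None                   # transfer expired unaccepted
    return data
```

Cancelling a pending transfer then just means invalidating the token server-side before the recipient accepts.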

Why it's seamless:

  • No re-verification: Since the DNS records don't change, there is no need to verify the domain again.
  • Zero downtime: The forwarding configuration stays identical throughout the process.
  • Complete Migration: The transfer simply updates which organization owns the domain in the central database and moves the domain's data to the target tenant's database.

Security and Lifecycle:

Transfers expire after 7 days if not accepted. You can cancel a pending transfer at any time. The entire lifecycle is tracked through three main states:

  • Pending
  • Accepted / Cancelled
  • Expired

> How do you handle domain handoffs between teams or clients today? Most people we talk to end up deleting and re-adding, which loses all history.

reddit.com
u/Objective-Test-5374 — 2 days ago

Every organization on RacterMX gets its own database. Here's why we chose physical tenant isolation.

Most multi-tenant SaaS products use a shared database with a tenant_id column on every table. It's simpler to build, simpler to migrate, and simpler to query across tenants for analytics. We didn't do that.

Every organization on RacterMX gets a dedicated PostgreSQL database. Your domains, aliases, routing rules, email logs, DNS snapshots, security scan results, and webhook configurations live in a database that no other tenant can access — not through a bug, not through a SQL injection, and not through a misconfigured query scope.

How it Works

We use the stancl/tenancy package for Laravel. When a request comes in, we:

  1. Resolve the tenant from the authenticated user's organization.
  2. Initialize the tenant database connection.
  3. Execute all subsequent queries against that specific tenant's database.

The central database handles authentication, billing, organization hierarchy, and cross-tenant operations like domain transfers.
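Reduced to a Python sketch, with in-memory SQLite standing in for the per-tenant PostgreSQL databases (illustrative only; the real stack is Laravel and stancl/tenancy):

```python
import sqlite3
from contextlib import contextmanager

# Assumption: one DSN per tenant, resolved from the organization id.
TENANT_DSNS = {"org_acme": ":memory:", "org_beta": ":memory:"}

@contextmanager
def tenant_connection(org_id):
    """Resolve the tenant, then bind every query in the block to its database."""
    conn = sqlite3.connect(TENANT_DSNS[org_id])
    try:
        yield conn
    finally:
        conn.close()
```

There is no tenant_id column to forget in a WHERE clause; a query simply cannot reach another tenant's rows.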


The Tradeoffs

  • Migrations: These run per-tenant (we use a command that iterates through all tenant databases).
  • Analytics: Cross-tenant analytics require explicit aggregation.
  • Provisioning: Creating a new organization takes a few seconds longer because we are provisioning a physical database.

We think the tradeoff is worth it for email infrastructure where data isolation isn't optional — it's the whole point.

> What level of tenant isolation do you expect from services that handle your email?

reddit.com
u/Objective-Test-5374 — 4 days ago

We built configurable alerts for security score drops, delivery failures, and anomalies

Monitoring your email infrastructure shouldn't require you to log into a dashboard every day and eyeball the numbers. We built an alerting system so the platform tells you when something needs attention.

How it works

You create alert rules that define a condition and a threshold. When the condition is met, we fire a notification. Rules are scoped to your organization, so different teams can have different alert configurations.

What you can alert on

Some examples of rules you can set up:

  • Security score drops below a grade you specify (e.g., alert me if any domain falls below B)
  • Bounce rate exceeds a percentage threshold over a time window
  • Email volume drops significantly compared to the previous period (could indicate a routing problem)
  • A domain gets listed on a blacklist
  • A certificate is approaching expiration
  • DNS records change unexpectedly (drift detection)
  • DMARC compliance rate drops below a threshold
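Each rule is just a metric, a comparison, and a threshold, evaluated independently. A sketch (hypothetical rule shape, not our schema):

```python
# Hypothetical rule shape; each rule evaluates independently of the others.
OPS = {"lt": lambda value, threshold: value < threshold,
       "gt": lambda value, threshold: value > threshold}

def evaluate_rules(rules, metrics):
    """Return one alert per rule whose condition the current metrics meet."""
    alerts = []
    for rule in rules:
        value = metrics.get(rule["metric"])
        if value is None:
            continue                          # metric not reported this period
        if OPS[rule["op"]](value, rule["threshold"]):
            alerts.append({"rule": rule["name"], "value": value,
                           "threshold": rule["threshold"]})
    return alerts
```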

How notifications are delivered

Two channels right now:

  • Email: sent to the addresses associated with your account, respecting your notification preferences (you can opt out of specific categories)
  • In-app: shows up in the notification panel in the dashboard with a badge count

Each alert includes the domain affected, the rule that triggered, the current value vs. the threshold, and a link directly to the relevant section of the dashboard so you can investigate.

Alert history

Every alert that fires is logged with a 90-day retention. You can review past alerts to spot patterns. If your security score keeps dipping every Tuesday, maybe something in your weekly deployment is touching DNS records. The history makes those patterns visible.

Proactive alerts

Beyond the rules you configure, we also fire proactive alerts for things that are objectively worth knowing about:

  • TLS certificates expiring within 30 days
  • DKIM keys that haven't been rotated in a long time
  • Stale DNS configurations (records pointing to IPs or hostnames that no longer resolve)
  • DMARC policy still set to p=none after extended monitoring shows high compliance

These fire automatically without you needing to create a rule. You can suppress them individually if they're not relevant to your setup.

What we don't do (yet)

We don't support webhook delivery for alerts. Right now it's email and in-app only. If you want to pipe alerts into Slack, PagerDuty, or your own monitoring system, you'd need to poll the API. Webhook-based alert delivery is something we're considering if there's enough interest.

We also don't support complex compound rules (e.g., "alert me if bounce rate exceeds 5% AND security score is below C"). Each rule evaluates independently. Compound rules add a lot of UI complexity and we're not sure the use case justifies it yet.

What thresholds would you set for your email infrastructure monitoring? And would webhook delivery for alerts be useful for your workflow, or is email notification sufficient?

reddit.com
u/Objective-Test-5374 — 5 days ago

We support TOTP-based 2FA with organization-level enforcement

Not a flashy feature. Not something we'd normally write a post about. But we keep running into email services that either don't offer two-factor authentication at all, or offer it but don't let admins enforce it across their team.

So here's what we built.

TOTP two-factor authentication

Standard time-based one-time password. Works with any authenticator app: Google Authenticator, Authy, 1Password, Bitwarden, whatever you already use. Enable it in your profile settings, scan the QR code, enter the confirmation code, done.

Once enabled, every login requires your password plus a 6-digit code from your authenticator. No SMS fallback. SMS-based 2FA is vulnerable to SIM swapping and we didn't want to offer a weaker option alongside the stronger one.
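TOTP itself is a small, standardized algorithm (RFC 6238: HMAC-SHA-1 over a 30-second time counter). A self-contained sketch of code generation and skew-tolerant verification:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (SHA-1, 30-second steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32, code, at=None, window=1):
    """Accept codes from the current step plus/minus `window` steps of skew."""
    now = time.time() if at is None else at
    return any(hmac.compare_digest(totp(secret_b32, now + i * 30), code)
               for i in range(-window, window + 1))
```

The server stores only the shared secret from the QR code; the authenticator app and the server each derive the same 6-digit code from the current time.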

Recovery codes

When you enable 2FA, we generate a set of one-time recovery codes. Store them somewhere safe. If you lose access to your authenticator (phone dies, app gets wiped, etc.), you can use a recovery code to log in and reconfigure 2FA. Each code works once. We show you how many you have remaining.

Organization-level enforcement

This is the part that matters for teams. If you're an organization admin, you can flip a switch that requires every user in your org to enable 2FA. Users who haven't set it up yet get redirected to the 2FA setup screen on their next login. They can't access the dashboard until they configure it.

No exceptions, no "I'll do it later." If enforcement is on, 2FA is mandatory.

Why this matters for email infrastructure

Your email forwarding configuration controls where mail goes. If someone compromises an account and changes an alias's forwarding address, they can intercept email silently. Password resets for other services, client communications, financial notifications. The blast radius of a compromised email admin account is large.

2FA doesn't prevent all attacks, but it eliminates the most common one: credential stuffing with leaked passwords. If your password shows up in a breach dump, the attacker still can't get in without your authenticator.

What we don't do

We don't support hardware security keys (FIDO2/WebAuthn) yet. It's on the list but we haven't prioritized it over other features. If that's a blocker for your team, let us know and we'll bump it up.

We also don't support "remember this device" exemptions. Every login requires the second factor. Some people find this annoying. We think it's the right tradeoff for a service that controls email routing.

Is MFA enforcement a requirement for your team's tools? Do you care about hardware key support, or is TOTP sufficient for your use case?

reddit.com
u/Objective-Test-5374 — 6 days ago

Catch-all aliases: receive email at any address on your domain without creating individual aliases

This is one of those features that sounds simple but changes how you use email once you start.

What catch-all does

Enable catch-all on a domain and every email sent to any address at that domain gets forwarded to your inbox. It doesn't matter if the alias exists or not. someone-you-never-heard-of@yourdomain.com? Forwarded. typo-in-your-name@yourdomain.com? Forwarded. anything-at-all@yourdomain.com? Forwarded.

You don't need to create individual aliases ahead of time. The catch-all acts as a safety net that captures everything.

How people actually use it

The most common pattern we see is using unique addresses for every service you sign up for: netflix@yourdomain.com for Netflix, a different made-up address for each newsletter, store, and app.

You never create these aliases in advance. You just make them up on the spot and the catch-all delivers them to your inbox.

Why this is useful

Three reasons:

  1. Spam source identification. If you start getting spam to netflix@yourdomain.com, you know Netflix leaked or sold your address. You can block that specific address without affecting anything else.
  2. Compartmentalization. Each service has a unique address. If one gets compromised, the others are unaffected. You're not giving the same email to your bank and a random newsletter.
  3. Legacy address handling. If you're migrating from another email provider and people have your old addresses, catch-all ensures nothing gets lost during the transition. Every address at your domain works, even ones you forgot about.

Specific aliases override the catch-all

You can still create named aliases with specific forwarding rules. If you have support@yourdomain.com forwarding to your helpdesk and a catch-all forwarding everything else to your personal inbox, the specific alias takes priority. The catch-all only handles addresses that don't match an existing alias.
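The resolution order is easy to state in code. A sketch (hypothetical function, simplified):

```python
def resolve_destination(local_part, aliases, catch_all=None):
    """Pick the forwarding destination for an incoming address.

    `aliases` maps local parts to destinations. A named alias always wins;
    the catch-all only handles addresses with no matching alias.
    """
    if local_part in aliases:
        return aliases[local_part]
    return catch_all   # None means reject: no alias and no catch-all enabled
```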

The spam tradeoff

The obvious downside of catch-all is that spammers can send to random addresses at your domain and it all lands in your inbox. We mitigate this a few ways:

  • rspamd runs content-based spam filtering on all inbound mail before forwarding
  • The sender blocklist lets you block specific addresses or wildcard patterns at the SMTP level
  • Your email logs show exactly what's coming in, so you can spot patterns and block them

In practice, most catch-all users find the benefits outweigh the spam increase, especially with the blocklist as a backstop.

Configuration

Enable catch-all per domain in the dashboard, via the REST API, or through the MCP server. Set the forwarding destination and you're done. You can disable it just as easily if you decide it's not for you.

How do you use catch-all on your domains? Or do you prefer creating explicit aliases for everything? We've seen both approaches and there are good arguments for each.

reddit.com
u/Objective-Test-5374 — 7 days ago

We scan your subdomains for takeover vulnerabilities — here's how dangling CNAMEs become attack vectors

Subdomain Takeover Detection

A subdomain takeover occurs when a DNS CNAME record points to a third-party service that you no longer use. While the DNS record remains active, the external resource it points to has been deprovisioned. An attacker can then claim that resource on the third-party platform, allowing them to serve arbitrary content on your subdomain—including phishing pages that inherit your domain's reputation and SSL trust.

How We Detect It

We maintain a comprehensive fingerprint database of vulnerable services. We look for specific CNAME patterns and HTTP response signatures that indicate an unclaimed or "dangling" resource:

  • Amazon S3: Buckets returning NoSuchBucket.
  • Heroku: Pages showing No such app.
  • GitHub Pages: 404 errors with specific response bodies.
  • Enterprise Services: We also monitor signatures for Azure, Fastly, Shopify, Zendesk, and many others.

The RacterMX Process

For every domain managed on our platform, we perform the following automated checks:

  1. Enumeration: We identify subdomains using your DNS zone records and Certificate Transparency (CT) logs.
  2. Resolution: We resolve each CNAME chain to its final destination.
  3. Fingerprinting: We match the destination against our database of deprovisioned service signatures.
  4. Reporting: If a dangling record is found, it is flagged in your Security Dashboard under the Shadow pillar.
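The fingerprint match in step 3 amounts to a CNAME suffix plus a response signature. A sketch with three illustrative entries (the real database covers far more services and signature variants):

```python
# Illustrative fingerprints only; the production database is much larger.
FINGERPRINTS = [
    {"service": "Amazon S3", "cname": ".s3.amazonaws.com",
     "body": "NoSuchBucket"},
    {"service": "Heroku", "cname": ".herokuapp.com",
     "body": "No such app"},
    {"service": "GitHub Pages", "cname": ".github.io",
     "body": "There isn't a GitHub Pages site here"},
]

def check_takeover(cname_target, response_body):
    """Return the service name if the CNAME target looks claimable."""
    for fp in FINGERPRINTS:
        if cname_target.endswith(fp["cname"]) and fp["body"] in response_body:
            return fp["service"]
    return None
```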

Remediation

Each alert includes:

  • The specific service name.
  • The CNAME target.
  • Remediation guidance: Typically, the solution is as simple as deleting the orphaned DNS record.

> This check runs automatically as part of every security scan. Most organizations have at least one dangling CNAME they've forgotten about—have you audited yours recently?

reddit.com
u/Objective-Test-5374 — 7 days ago

We keep a full history of your DNS zone changes with diff view and one-click rollback

DNS changes are high-stakes and low-visibility. You update a record, propagation takes anywhere from minutes to hours, and if something breaks you're scrambling to remember what the old value was. Most DNS providers give you a zone editor and nothing else. No history, no undo, no way to see what changed and when.

We built versioned zone history into our DNS hosting because we kept running into this ourselves.

How it works

Every time a record is created, modified, or deleted in your zone, we take a snapshot. The snapshot captures the full zone state at that point in time. You can browse the history, see exactly what changed between any two snapshots, and roll back to a previous state with one click.

The diff view

Pick any two snapshots and we show you a side-by-side diff. Added records are highlighted in green, removed records in red, modified records show the old and new values. It looks like a git diff but for DNS records.

This is useful for debugging. "Mail stopped working yesterday afternoon." Open the zone history, look at what changed yesterday, and there's your answer. Maybe someone deleted an MX record. Maybe the SPF record got modified and now it's too long. Maybe a CNAME was added that conflicts with an existing record.

One-click rollback

Found the problem? Click rollback on the snapshot you want to restore. We rewrite the zone to match that snapshot's state. The rollback itself creates a new snapshot, so you can always undo the undo.
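The important property is that history stays append-only: a rollback restores an old state by appending it, never by rewriting the log. A minimal sketch (illustrative class, not our schema):

```python
class ZoneHistory:
    """Append-only snapshot history; a rollback appends rather than rewrites."""

    def __init__(self, initial_state):
        self.snapshots = [dict(initial_state)]

    def record_change(self, new_state):
        """Snapshot the full zone state after any record change."""
        self.snapshots.append(dict(new_state))

    def rollback_to(self, index):
        """Restore an earlier state as a *new* snapshot, so the rollback
        itself can be undone."""
        restored = dict(self.snapshots[index])
        self.snapshots.append(restored)
        return restored
```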

Drift detection

This one catches a subtle problem. If someone modifies your DNS records outside of our editor (at the registrar, through a different API, or via a script that talks to PowerDNS directly), the zone state drifts from what our history shows. We detect this drift and alert you.

Drift detection matters because unauthorized DNS changes can indicate compromise. If your MX records suddenly point somewhere else and nobody on your team made the change, that's a serious problem. The alert gives you a chance to investigate and roll back before damage is done.

You can acknowledge a drift event if the change was intentional (maybe you ran a migration script), which updates the baseline snapshot so it stops alerting.

What gets versioned

Every record type: A, AAAA, CNAME, MX, TXT, SRV, NS, CAA, and anything else in the zone. The snapshot includes the full record set with names, types, values, TTLs, and priorities.

Via API

Zone history is accessible through the REST API. You can query snapshots, view diffs, and trigger rollbacks programmatically. Useful if you're building deployment pipelines that include DNS changes and want automated rollback on failure.

How do you track DNS changes today? We've talked to people who use everything from "I keep a spreadsheet" to "we version our zone files in git" to "we don't track changes at all and just hope for the best." Curious where people land on this.

reddit.com
u/Objective-Test-5374 — 7 days ago

New Feature: Unsubscribe Enforcement — Track violations, calculate fines, generate demand letters

We just shipped something we've been wanting to build for a long time: Unsubscribe Enforcement.

The short version: RacterMX now automatically detects when senders include List-Unsubscribe headers, lets you one-click unsubscribe, tracks the legally mandated 10-business-day compliance window, and flags every email that arrives after the deadline as a violation. When you're ready, it generates a PDF demand letter citing the exact statutes and calculating your damages.

How it works

1. Automatic detection

Every incoming email is scanned for RFC 2369 List-Unsubscribe and RFC 8058 List-Unsubscribe-Post headers. If a sender supports unsubscribe, they show up in your new Unsubscribe tab on the domain detail page. No configuration needed — it just starts collecting.

2. One-click unsubscribe

Click the button. RacterMX handles the rest:

  • HTTP POST with List-Unsubscribe=One-Click (preferred, RFC 8058)
  • HTTP GET fallback
  • Mailto fallback

The exact timestamp, method used, and HTTP response code are all recorded. This becomes your audit trail.
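The detection side is plain header parsing. A Python sketch using the standard library email package (simplified, with a hypothetical helper name):

```python
import re
from email.message import EmailMessage

def unsubscribe_targets(msg):
    """Extract RFC 2369 unsubscribe URIs and whether RFC 8058 one-click applies."""
    header = msg.get("List-Unsubscribe", "")
    uris = re.findall(r"<([^>]+)>", header)
    one_click = (msg.get("List-Unsubscribe-Post", "").strip()
                 == "List-Unsubscribe=One-Click")
    http = [u for u in uris if u.startswith(("http://", "https://"))]
    mailto = [u for u in uris if u.startswith("mailto:")]
    # Preference order mirrors the list above: one-click POST, GET, then mailto.
    return {"http": http, "mailto": mailto, "one_click": one_click and bool(http)}
```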

3. The 10-business-day bake time

Under the CAN-SPAM Act (15 USC § 7704(a)(3)(A)), senders have 10 business days to honor an opt-out request. RacterMX calculates this deadline automatically — excluding weekends and all U.S. federal holidays — and shows you the exact enforcement date.

While the bake time is active, any emails from that sender are flagged as "grace period" (amber). After the deadline passes, they become violations (red).
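The deadline arithmetic is simple but easy to get wrong. A sketch with a placeholder holiday set (the real calculation uses the full U.S. federal holiday calendar):

```python
import datetime as dt

# Assumption: holidays supplied as a set of dates; only two shown here.
FEDERAL_HOLIDAYS = {dt.date(2025, 7, 4), dt.date(2025, 9, 1)}

def compliance_deadline(opt_out_date, business_days=10,
                        holidays=FEDERAL_HOLIDAYS):
    """Count forward N business days, skipping weekends and federal holidays."""
    day = opt_out_date
    remaining = business_days
    while remaining > 0:
        day += dt.timedelta(days=1)
        if day.weekday() < 5 and day not in holidays:   # Mon-Fri, not a holiday
            remaining -= 1
    return day
```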

4. Violation tracking

Every post-deadline email is logged as a separate violation with:

  • Date and time received
  • Business days past the compliance deadline
  • Sender IP address
  • Subject line
  • Message-ID

Click the violation count to see the full evidence list in a slide-out panel.

5. Demand letter generation

When you have confirmed violations, click Generate Letter. RacterMX produces a professional PDF demand letter that includes:

  • Your complete suppression list audit trail
  • A table of every violation with dates, IPs, and subjects
  • CAN-SPAM Act citations (15 USC § 7704(a)(3)(A), 15 USC § 7706(a))
  • California Business & Professions Code § 17529.5 — $1,000 per violation (private right of action)
  • Federal enforcement exposure ($50,120 per violation via FTC/AG)
  • A 30-day response deadline
  • Notice of intent to file complaints with the FTC and state Attorney General

The letter opens in a new tab as a downloadable PDF. Ready to print and mail.

Why this matters

Most people click "unsubscribe" and hope for the best. When senders ignore it, there's nothing you can do — or so you think.

The reality: every email sent after the 10-business-day deadline is a separate legal violation. California residents can collect $1,000 per email under § 17529.5. The FTC can levy $50,120+ per email. Companies settle these claims quickly because the alternative is an AG investigation.

The problem has always been evidence. You need to prove:

  • When you unsubscribed
  • That the sender acknowledged it
  • Exactly when the compliance deadline expired
  • Every email received after that date

RacterMX now builds this evidence chain automatically. The demand letter is pre-populated with everything you need.

Retention policy

  • Unactioned senders (you never clicked unsubscribe): automatically purged after 30 days of inactivity
  • Actioned senders (pending, unsubscribed, or with violations): retained indefinitely — this is your legal evidence

Where to find it

Open any domain → click the Unsubscribe tab. It's already collecting data from your incoming emails. No setup required.

Feedback welcome. We're considering adding:

  • Bulk unsubscribe actions
  • Scheduled auto-unsubscribe for senders exceeding a threshold
  • Integration with the blocklist (auto-block after N violations)

Let us know what would be most useful.

reddit.com
u/Objective-Test-5374 — 8 days ago

We monitor Certificate Transparency logs for your domains and check CAA compliance

This is one of those security features that most people don't think about until something goes wrong. It falls under our "Shadow" pillar in the security posture scanner.

What Certificate Transparency is

Every time a Certificate Authority (Let's Encrypt, DigiCert, Sectigo, etc.) issues a TLS certificate for a domain, they're required to log it in a public Certificate Transparency (CT) log. These logs are append-only and publicly searchable. The idea is that if someone fraudulently obtains a certificate for your domain, you can detect it by monitoring the CT logs.

Why it matters for email

If an attacker gets a valid certificate for your domain (or a subdomain like mail.yourdomain.com), they can set up a convincing phishing site, intercept traffic via man-in-the-middle, or impersonate your mail server. CT monitoring lets you catch this early.

It also catches mistakes. Maybe a former employee's staging environment is still issuing certificates for a subdomain you forgot about. Or a CDN provider issued a certificate that covers your domain as a SAN entry and you didn't realize it.

What we check

As part of our security posture scanning, we:

  • Monitor CT logs for certificate issuance events involving your domains
  • Alert you if a certificate is issued by a CA you didn't expect
  • Check your CAA (Certificate Authority Authorization) DNS records
  • Verify that your CAA records restrict issuance to only the CAs you actually use
  • Flag missing CAA records (if you don't have one, any CA can issue for your domain)

CAA records explained

A CAA record is a DNS record that tells Certificate Authorities which of them are allowed to issue certificates for your domain. For example:

yourdomain.com.  CAA  0 issue "letsencrypt.org"
yourdomain.com.  CAA  0 issuewild ";"

This says "only Let's Encrypt can issue regular certificates for this domain, and nobody can issue wildcard certificates." If DigiCert or any other CA receives a request for your domain, they're supposed to check the CAA record and refuse.

Without a CAA record, any CA can issue. That's a wider attack surface than most people realize.
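The compliance check itself reduces to a small rule over the published record set. A simplified sketch that ignores `iodef` tags and CNAME chasing:

```python
def caa_findings(caa_records, expected_cas):
    """Flag a missing CAA record or issuance rights granted to unexpected CAs.

    `caa_records` is a list of (flags, tag, value) tuples as published in DNS;
    a value of ";" denies issuance for that tag entirely.
    """
    if not caa_records:
        return ["missing-caa: any CA may issue for this domain"]
    findings = []
    for _flags, tag, value in caa_records:
        if tag in ("issue", "issuewild") and value != ";" \
                and value not in expected_cas:
            findings.append(f"unexpected-ca: {tag} {value}")
    return findings
```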

Auto-fix for DNS-hosted domains

If your domain's DNS is hosted with us and we detect a missing CAA record, the security scanner offers a one-click fix. We'll write a CAA record restricting issuance to the CA that issued your current certificate. You can always edit it afterward if you use multiple CAs.

Where to check yours

You can check your CAA records right now:

dig CAA yourdomain.com

If you get an empty response, you don't have one. That means any CA on the planet can issue a certificate for your domain without restriction.

Do you monitor CT logs for your domains? Do you have CAA records published? This is one of those areas where the security benefit is high and the effort to configure it is low, but adoption is still surprisingly poor.

reddit.com
u/Objective-Test-5374 — 9 days ago

This one surprised people when we shipped it. An email forwarding service with full internationalization isn't something you see often. Most providers are English-only, maybe with a Spanish or French translation that covers half the UI and falls back to English for the rest.

We went all in.

10 languages

English, Spanish, French, German, Italian, Hindi, Japanese, Portuguese (Brazilian), Chinese (Simplified), and Arabic.

Every surface is translated:

  • All marketing pages (home, pricing, about, comparisons, blog index, changelog)
  • The full dashboard (domain tree, tabs, settings, command palette, status messages)
  • Email templates (verification, password reset, notifications, scheduled reports)
  • Error messages and validation feedback
  • The onboarding wizard
  • The TOS acceptance screen
  • The billing page

We're talking about 2,300+ translation keys per locale. Not a partial effort where the landing page is translated but the dashboard reverts to English the moment you log in.

Arabic and RTL

Arabic was the hardest to get right. It's not just translating strings. The entire layout needs to flip: navigation goes right-to-left, text alignment reverses, directional icons mirror, and CSS needs to account for the different reading direction.

We added dir="rtl" detection based on the active locale and wrote RTL-specific CSS overrides for the navigation, footer, sidebar, dropdowns, and form layouts. The marketing pages and the dashboard both respect the direction.

How it works technically

We use Laravel's __() translation function with JSON locale files. Each locale has a single JSON file mapping English keys to translated values. The SetLocale middleware reads the user's preference from their session cookie (set via the language switcher) or falls back to the Accept-Language header.
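The fallback logic, sketched in Python for illustration (the real middleware is Laravel PHP; locale tags and matching rules are simplified):

```python
SUPPORTED = ["en", "es", "fr", "de", "it", "hi", "ja", "pt-BR", "zh-CN", "ar"]

def resolve_locale(cookie_locale, accept_language):
    """Prefer the cookie set by the language switcher, then Accept-Language."""
    if cookie_locale in SUPPORTED:
        return cookie_locale
    # Parse "fr-CH, fr;q=0.9, en;q=0.8" into tags ordered by quality.
    candidates = []
    for part in accept_language.split(","):
        piece = part.strip()
        if not piece:
            continue
        tag, _, q = piece.partition(";q=")
        try:
            quality = float(q) if q else 1.0
        except ValueError:
            quality = 0.0
        candidates.append((quality, tag.strip()))
    for _quality, tag in sorted(candidates, reverse=True):
        primary = tag.lower().split("-")[0]
        for locale in SUPPORTED:
            if locale.lower().startswith(primary):
                return locale
    return "en"   # x-default
```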

Marketing pages have locale-prefixed URLs for SEO: /es/pricing, /fr/about, /de/pricing, etc. Each localized URL gets proper hreflang tags and an x-default pointing to the English version. Canonical tags on localized pages point to the English URL to avoid duplicate content issues with search engines.

The language switcher

It's in the navigation bar on every page. Shows the current locale code (EN, ES, FR, etc.) with a globe icon. Click it, pick your language, the page reloads in that locale. Your preference is saved in a cookie so it persists across sessions.

What we learned

Some things that caught us off guard:

  • Date formatting varies wildly. "April 19, 2026" in English becomes "19 avril 2026" in French and "2026年4月19日" in Japanese. Laravel's translatedFormat() handles this but you have to use it consistently everywhere.
  • Number formatting matters too. "1,000.50" in English is "1.000,50" in German. Currency symbols go in different positions.
  • Some translated strings are significantly longer than English, which breaks layouts that were designed for English string lengths. German is especially bad for this.
  • RTL isn't just "flip everything." Some elements (like code blocks, URLs, and email addresses) should stay left-to-right even in an RTL context.

Why bother?

Email is global. If someone in Tokyo or Riyadh or São Paulo wants to manage their domain's email forwarding, they shouldn't have to do it in a language they're not comfortable with. The infrastructure doesn't care what language the UI is in, so there's no technical reason to limit it.

What languages are underserved in the email infrastructure space? Are there locales we should add? We're open to expanding if there's demand.

reddit.com
u/Objective-Test-5374 — 10 days ago

One of the concerns people raise about consumption-based pricing is unpredictability. "What if I get a spike in email volume and my bill doubles?" Fair question. Here's how we handle it.

Spending alerts

In your billing dashboard, you can set a monthly spending threshold. Pick a dollar amount that represents your comfort level. If your estimated charges for the current billing period cross that threshold, we send you a notification.

That's it. No automatic suspension, no throttling, no "your account has been limited." Just a heads-up that your usage is higher than expected so you can investigate.

Why informational only

We thought about auto-pausing accounts or throttling email delivery when spending exceeds a threshold, but that creates a worse problem than a higher bill. If your email stops flowing because of a billing trigger, you might miss critical messages. A surprise $20 bill is annoying. Missing a client email because your forwarding got suspended is a business problem.

So alerts are informational. You decide what to do with the information.

Real-time usage dashboard

The billing page shows your current-period consumption across all four dimensions:

  • Active domains and the per-domain cost
  • Active aliases and the per-alias cost
  • Emails processed with the per-email rate
  • DNS queries with the per-query rate

Each line shows the count, unit price, and estimated cost. The total updates as usage is reported throughout the month. No waiting until the invoice to find out what you owe.
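The estimate is just count × unit price summed across the four dimensions. A sketch (the counts and prices below are made-up placeholders, not RacterMX's actual rates):

```python
def estimate_total(line_items):
    """Sum count * unit_price across billing dimensions, rounded to cents."""
    return round(sum(count * price for count, price in line_items.values()), 2)

# Hypothetical current-period usage: dimension -> (count, unit price in $)
usage = {
    "domains":     (3,     0.50),
    "aliases":     (40,    0.02),
    "emails":      (1200,  0.001),
    "dns_queries": (50000, 0.00001),
}
```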

What causes unexpected spikes

From what we've seen so far, the most common causes of usage spikes are:

  • Adding several domains at once for a new project (domain count jumps)
  • A marketing campaign that generates a lot of inbound replies (email volume jumps)
  • Enabling DNS hosting on a popular domain (DNS query volume jumps)
  • Someone discovering your catch-all alias and sending junk to random addresses (email volume, though the blocklist handles this)

In all of these cases, the spending alert gives you time to react before the invoice lands.

Via API

The spending alert threshold is also configurable via the REST API if you want to manage it programmatically or integrate it with your own monitoring.

What billing transparency features do you wish more SaaS products had? We're open to adding more controls here if there's demand. Things like daily spending caps, per-dimension alerts, or weekly usage summaries have come up in conversations but we haven't built them yet.

u/Objective-Test-5374 — 11 days ago

We spent a lot of time on the onboarding experience because first impressions matter, and email services have a reputation for painful setup processes. Here's what a new user sees from start to finish.

Step 1: Register

Name, email, password, optional promo code. Email validation checks that the domain actually has MX records, so typos like "gmial.com" get caught at the form level. If there's a current Terms of Service, you review and accept it before the form appears.

Step 2: Verify your email

Standard verification link sent to your inbox. Click it, you're verified.

Step 3: Activate your subscription

You're redirected to Stripe Checkout to add a payment method. If you entered a promo code during registration, the discount is applied automatically. You still need a card on file, but if you have a promo code you won't be charged until the promotional period ends.

Step 4: Add your first domain

The onboarding wizard walks you through adding a domain. Two options:

  • Host DNS with RacterMX: point your registrar's nameservers to us and we handle everything. MX, SPF, DKIM, DMARC, and MTA-STS records are auto-provisioned. No copy-pasting.
  • Keep your existing DNS: we show you exactly which records to add at your current provider, with copy buttons for each value.

Step 5: DNS verification

This is where most services make you wait and manually click "verify" over and over. We auto-poll your DNS records every few seconds. As soon as propagation completes, the wizard advances automatically. No clicking, no refreshing, no guessing whether it worked.
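The auto-advance behavior is a simple poll-until-match loop. A sketch with the DNS lookup injected as a callable so the loop itself is testable (in real code that would be an actual resolver query):

```python
import time

def wait_for_record(lookup, name, expected, timeout=120, interval=3):
    """Poll `lookup(name)` until it returns `expected`, or give up at
    `timeout` seconds. Returns True once propagation is observed."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if lookup(name) == expected:
            return True
        time.sleep(interval)
    return False
```

The wizard runs the equivalent of this server-side, so the user never clicks "verify".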

Step 6: Send a test email

The wizard prompts you to send a test email to your new alias. When it arrives, you see it in the email logs in real time. Confirmation that the whole chain works: DNS records are correct, mail is being accepted, forwarding is active, and delivery to your real inbox succeeded.

That's it

The whole process takes about 60 seconds if you're using our DNS hosting (no records to copy). If you're keeping your existing DNS provider, add maybe 2-3 minutes depending on how fast their propagation is.

For users who already have domains configured (maybe they were invited to an existing organization), the wizard detects that and auto-completes so they go straight to the dashboard.

What we skipped

We intentionally left out a product tour, tooltip walkthrough, or "click here to learn about this feature" overlay. If you're the kind of person who manages email infrastructure, you don't need us to explain what a domain tree is. The command palette (Cmd+K) is there if you need to find something.

What's the worst onboarding experience you've had with an email service? We're always looking for friction points to eliminate.

u/Objective-Test-5374 — 12 days ago

Quick show of hands: how many of your domains have MTA-STS configured?

If you're not sure what MTA-STS is, that kind of proves the point. It's one of those standards that's important for email security but painful enough to set up that almost nobody does it.

What MTA-STS does

When a mail server wants to deliver email to your domain, it looks up your MX records and connects over SMTP. SMTP supports STARTTLS for encryption, but it's opportunistic by default. A man-in-the-middle can strip the STARTTLS offer from the connection and force a plaintext downgrade. The sending server doesn't know the difference because there's no way to signal "I require TLS" through DNS alone (DANE does this, but requires DNSSEC on both sides, which limits adoption).

MTA-STS (RFC 8461) solves this. You publish a policy file at a specific HTTPS URL on a subdomain of your domain, plus a DNS TXT record that tells sending servers to fetch the policy. The policy says "when delivering mail to my domain, you must use TLS, and here are the MX hostnames you should expect." If the TLS connection fails or the MX doesn't match, the sending server should refuse to deliver rather than fall back to plaintext.
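Concretely, the policy served at `https://mta-sts.yourdomain.com/.well-known/mta-sts.txt` looks like this (hostnames illustrative; field names per RFC 8461):

```
version: STSv1
mode: enforce
mx: mx1.example.com
mx: mx2.example.com
max_age: 604800
```

and the companion DNS record that tells senders a policy exists:

```
_mta-sts.example.com.  IN TXT  "v=STSv1; id=20250101T000000"
```

The `id` is an opaque value senders use to detect policy changes; `max_age` is how long (in seconds) they may cache the policy.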

Why nobody sets it up

To configure MTA-STS manually, you need to:

  1. Create a subdomain: mta-sts.yourdomain.com
  2. Get a valid TLS certificate for that subdomain
  3. Configure a web server to serve the policy file over HTTPS
  4. Write the policy file with the correct MX hostnames and mode
  5. Publish a _mta-sts TXT record in DNS with a version identifier
  6. Renew the certificate before it expires
  7. Update the policy file and DNS record if your MX records change

That's a lot of moving parts for something most people have never heard of.

What we do

When you enable DNS hosting for a domain on RacterMX, we handle all of it automatically:

  • Provision the mta-sts subdomain
  • Obtain and renew the TLS certificate via Let's Encrypt
  • Generate and serve the policy file with the correct MX hostnames
  • Publish the _mta-sts DNS TXT record
  • Keep everything in sync if your MX configuration changes

You don't configure anything. You don't even know it's there unless you look for it. Your domain just quietly gets protection against TLS downgrade attacks.

The provisioning runs every 6 hours

New domains get picked up automatically. Certificate renewals happen before expiration. If your MX records change, the policy file updates to match. It's a background process that you never have to think about.

Checking your domains

If you're curious whether your domains have MTA-STS, you can check with:

curl https://mta-sts.yourdomain.com/.well-known/mta-sts.txt
dig TXT _mta-sts.yourdomain.com

If both return valid results, you're covered. If not, you're vulnerable to downgrade attacks on inbound mail delivery.

How many of your domains have MTA-STS configured today? And for those that do, did you set it up manually or does your provider handle it?

u/Objective-Test-5374 — 13 days ago

Retention policy is one of those things that sounds boring until you need it. Either an auditor asks "how long do you keep email metadata?" and you don't have an answer, or you realize a service has been storing your communication patterns for years with no way to force deletion.

We let you set your own retention policy and we actually enforce it.

How it works

In your dashboard under Email Service > Retention, you configure two independent values:

  • Metadata retention: how long we keep the log entry itself (sender, recipient, subject, timestamps, delivery status, authentication results). Set anywhere from 0 days to 2,555 days (about 7 years).
  • Content retention: the maximum time we retain an email that is queued for forwarding. In the typical cycle, mail sits on our servers for between 1 and 8 seconds, but if for some reason your email provider stops accepting mail, we keep retrying delivery for up to a maximum of 30 days.

You can also set per-event overrides. Maybe you want to keep bounce records for 90 days for debugging but only keep successful delivery records for 30 days. That's configurable.
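Resolving the retention window for a given log entry is a small lookup: per-event override first, account default otherwise. A sketch (the names and default values here are illustrative, not our actual schema):

```python
from datetime import datetime, timedelta, timezone

DEFAULT_METADATA_DAYS = 90  # account-wide default for log entries

def retention_cutoff(event_type, now=None, overrides=None):
    """Return the timestamp before which entries of `event_type` are
    permanently deleted. `overrides` maps event types (e.g. "bounced")
    to day counts; anything not overridden uses the account default."""
    now = now or datetime.now(timezone.utc)
    days = (overrides or {}).get(event_type, DEFAULT_METADATA_DAYS)
    return now - timedelta(days=days)
```

The daily cleanup job then deletes everything older than its type's cutoff.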

What happens when retention expires

The data is permanently deleted. Not archived to cold storage. Not soft-deleted with a "we might still have it" asterisk. Deleted. The cleanup job runs daily and removes everything past its retention window.

Why this matters

Different situations call for different retention periods:

  • GDPR says minimize data collection and retention. If you don't need it, don't keep it.
  • SOX, HIPAA, and financial regulations may require you to retain communication records for specific periods.
  • Some organizations want maximum retention for incident response and forensics.
  • Privacy-conscious individuals want minimum retention because they don't trust anyone to hold their data longer than necessary.

We don't pick a side. You set the policy, we enforce it. The default is 90 days for metadata and 30 days for content, but you can change it to whatever your situation requires.

Compliance exports

Before data ages out, you can export your logs as CSV. The export respects your current filters (domain, date range, status) so you can pull exactly what you need for an audit without downloading everything.

Via API

The retention policy is readable and writable via the REST API, so you can manage it programmatically or integrate it into your compliance automation.

What retention policies do you run for your email infrastructure? Are you driven by compliance requirements, privacy preferences, or just "whatever the default is"? We're curious how people actually think about this in practice.

u/Objective-Test-5374 — 14 days ago

We built the API before we built the dashboard. Everything in the UI is an API call under the hood, which means anything you can do by clicking around in the dashboard, you can automate.

What the API covers

  • Domains: list, create, verify, update settings, delete, check DNS health
  • Aliases: list, create, update, delete, import/export, enable/disable
  • DNS Records: full CRUD on zone records for DNS-hosted domains
  • Email Logs: search, filter, export, get delivery details
  • Blocklist: add, remove, list blocked senders and patterns
  • Webhooks: create, update, delete, test, view delivery logs
  • SMTP Credentials: create, delete, reset password, toggle sender privacy
  • API Keys: create, revoke, list
  • Security: get domain security score, list findings, trigger scan, apply fixes
  • DMARC: get compliance reports, score history
  • Retention: get and update log retention policies
  • Anonymous Replies: list proxy addresses, disable individual proxies

Authentication

API keys with granular scopes. When you create a key, you pick exactly what it can access:

  • domains:read / domains:manage
  • aliases:read / aliases:manage
  • email:read / email:send
  • smtp:read / smtp:manage
  • webhooks:read / webhooks:manage
  • blocklist:read / blocklist:manage

Read scopes let you query data. Manage scopes let you create, update, and delete. You can create a read-only key for monitoring dashboards and a separate key with manage scopes for your provisioning scripts.

Keys also support IP allowlisting and expiration dates.
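An authorization check against these scopes is tiny. One caveat: whether a `*:manage` scope implies the matching `*:read` scope is an assumption in this sketch, not something the scope list above guarantees:

```python
def key_allows(key_scopes, required):
    """Check an API key's scope set against a required scope string
    like "domains:read". Assumes `resource:manage` implies
    `resource:read`; everything else must match exactly."""
    if required in key_scopes:
        return True
    resource, action = required.split(":")
    return action == "read" and f"{resource}:manage" in key_scopes
```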

Rate limiting

60 requests per minute per API key by default, with a per-tenant ceiling of 600 across all keys. Enough for automation, not enough for abuse. Rate limit headers in every response so your code can back off gracefully.
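Backing off gracefully usually means preferring the server's hint and falling back to capped exponential delay. A sketch; the header names here are assumptions, so check what the API actually returns before relying on them:

```python
def backoff_delay(headers, attempt, base=1.0, cap=60.0):
    """Seconds to wait before retrying a rate-limited request.
    Uses the server's hint if present, else capped exponential backoff."""
    for name in ("Retry-After", "X-RateLimit-Reset-After"):  # assumed names
        if name in headers:
            return float(headers[name])
    return min(cap, base * (2 ** attempt))
```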

Documentation

Interactive Swagger UI at /api/documentation. You can try endpoints directly from the browser with your API key. OpenAPI spec available for code generation if you want typed clients.

Example: provision a new domain with aliases

POST /api/v2/domains        { "name": "example.com" }
POST /api/v2/domains/42/verify
POST /api/v2/aliases        { "domain_id": 42, "local_part": "hello", "forward_to": "me@gmail.com" }
POST /api/v2/aliases        { "domain_id": 42, "local_part": "*", "forward_to": "me@gmail.com", "is_catchall": true }

Four calls and you have a domain with a named alias and a catch-all, ready to receive mail.

What API features matter most to you for email infrastructure? We've been thinking about adding batch endpoints (create 50 aliases in one call) and async operations with status polling for long-running tasks like bulk DNS changes. Would either of those be useful?

u/Objective-Test-5374 — 15 days ago

Simple feature, but it comes up a lot: sender blocklisting.

How it works

Add an email address or a wildcard pattern to your blocklist. Some examples:

  • spammer@example.com blocks that specific address
  • *@annoying-newsletter.com blocks everything from that domain
  • *@*.marketing.example.com blocks all subdomains under their marketing infrastructure
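Pattern matching with these wildcards maps cleanly onto shell-style globbing. A sketch of the semantics (not the server's actual matcher, which runs during the SMTP conversation):

```python
from fnmatch import fnmatch

def is_blocked(sender, blocklist):
    """Match a sender address against blocklist patterns, where `*` is
    a wildcard as in the examples above. Case-insensitive."""
    sender = sender.lower()
    return any(fnmatch(sender, pattern.lower()) for pattern in blocklist)
```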

Blocked messages are rejected at the SMTP level during the initial connection. The sending server gets a 5xx rejection. The message never gets forwarded, never hits your inbox, never consumes your email volume quota, and never shows up in your logs as a delivered message. It's gone before it enters the pipeline.

Why SMTP-level rejection matters

Some services accept the message first and then filter it on the backend. That means the message was already processed, stored temporarily, and counted against your usage before being discarded. With SMTP-level rejection, the sending server knows immediately that delivery failed. If it's a legitimate sender you accidentally blocked, they'll get a bounce notification right away instead of their message silently disappearing.

Management

You can manage the blocklist from three places:

  • The dashboard (add/remove entries, see the full list)
  • The REST API (programmatic management, bulk operations)
  • The MCP server (tell your AI agent "block everything from spammer.com")

What we track

The blocklist itself is simple, but we log rejection events so you can see how many messages are being blocked and from which senders. Useful for spotting patterns or confirming that a block rule is actually catching what you intended.

What it doesn't do

The blocklist is for explicit sender blocking. It's not a spam filter. We run rspamd for content-based spam filtering separately. The blocklist is for when you know exactly who you don't want to hear from and you want them rejected hard at the door.

What's your approach to blocking unwanted email at the infrastructure level? Do you prefer sender-based blocking, content filtering, or a combination? We've been thinking about adding support for header-based rules (block messages with specific X-headers or subject patterns) but haven't decided if that's worth the complexity.

u/Objective-Test-5374 — 16 days ago

Most email service dashboards look like every other SaaS admin panel. Sidebar with menu items, a main content area, maybe some cards with numbers. Fine for simple use cases, but if you're managing 10+ domains with aliases, DNS records, security scores, and email logs, that layout falls apart fast.

We built our dashboard for people who spend their day in VS Code, IntelliJ, or a terminal. The mental model is an IDE, not a settings page.

Domain tree (left sidebar)

Your domains are organized in a tree view grouped by organization. Each domain shows its security score badge (A through F) right in the tree. Click a domain to open it in a tab. Right-click for a context menu with quick actions. You can pin frequently accessed domains to the top, filter by name, or sort by security score (worst first, so the problems float up).

The sidebar is resizable with a drag handle. Width persists across sessions.

Tabbed workspace

Domains open in tabs, just like files in an editor. You can have multiple domains open at once and switch between them. Each domain tab has subtabs for Mail, DNS, Security, Stats, Reputation, and DMARC. The tab state persists so you pick up where you left off.

Command palette

Hit Cmd+K (or Ctrl+K) and you get a search bar that queries across domains, aliases, email logs, and commands. Fuzzy matching with typo tolerance. Start typing a domain name and it shows up. Type "bounced" and it pulls up recent bounced emails. Type "add alias" and it gives you the command. Keyboard navigation throughout.

Email logs panel

The bottom of the screen has a collapsible log panel, similar to a terminal output pane. Live tail mode, filtering by domain/status/date, search across sender/recipient/subject. It's always one click away while you're working on domain configuration.

Keyboard shortcuts

Cmd+K for command palette. Cmd+L to toggle the log panel. Standard tab navigation. We tried to make it so you can manage your email infrastructure without touching the mouse if you prefer keyboard-driven workflows.

Dark mode by default

The whole thing is dark-themed by default with a light mode toggle. We figured if you're the kind of person who manages email infrastructure, you probably already have dark mode on everything else.

Under the hood

The frontend is Alpine.js and CSS Grid. Domain subtabs are Livewire components that update in real time. No jQuery, no heavy JS framework, no build step for the dashboard itself. And of course, at the end of the day, you can just use VS Code, Claude, or Kiro with the MCP plugin if that's what you're more comfortable with.

What UX patterns from your code editor do you wish more web apps would adopt? We're always looking for ways to make the dashboard faster to navigate for power users.

u/Objective-Test-5374 — 16 days ago

We put our MCP server on npm: @ractermx/mcp-server

If you haven't run into MCP yet, it's Model Context Protocol. It's a standard that lets AI agents (Claude, Cursor, Kiro, etc.) call tools and APIs through a structured interface. Instead of the AI guessing at curl commands or writing API calls from documentation, it gets a typed tool catalog with parameter schemas and can execute operations directly.

What it can do

Our MCP server exposes the full RacterMX API surface. Some examples:

  • "Add a catch-all alias for staging.example.com that forwards to the team inbox"
  • "Show me all bounced emails from the last 24 hours"
  • "Create a new domain and set up DNS hosting"
  • "Block everything from spammer.com"
  • "What's the security score for example.com?"
  • "List all aliases on my domain and export them"

The agent handles the API calls, authentication, and error handling. You just describe what you want in plain English.

How to set it up

Install it globally or in your project:

npm install -g @ractermx/mcp-server

Configure it in your AI tool's MCP settings with your RacterMX API key. The server auto-discovers all available tools and presents them to the agent.
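For Claude Desktop and most MCP clients, the config entry looks roughly like this. Treat it as a sketch: the `RACTERMX_API_KEY` env var name is our assumption here, so check the package README for the exact setting:

```json
{
  "mcpServers": {
    "ractermx": {
      "command": "npx",
      "args": ["-y", "@ractermx/mcp-server"],
      "env": { "RACTERMX_API_KEY": "your-api-key" }
    }
  }
}
```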

Why we built it

We already had a REST API with 60+ endpoints. Wrapping it in MCP was straightforward and it makes the API accessible to people who don't want to write code. Your AI agent becomes your email infrastructure admin.

What we're seeing

People are using it for bulk operations mostly. Adding 20 aliases at once, auditing DNS records across all domains, pulling log summaries for incident response. Things that would take a dozen API calls or a lot of clicking in the dashboard.

Anyone else experimenting with MCP for infrastructure management? What operations would you want an AI agent to handle for your email setup? We're curious what use cases we haven't thought of.

u/Objective-Test-5374 — 23 days ago