u/Ancient_Course4287

I've been sketching a new iPad notation app and I want reactions before I sink real time into it. Specifically, I want to hear what's wrong with the idea, what existing tools already do this well, and whether the people I think I'm building for actually want what I'm describing.

The frustration I keep hitting is that handwriting recognition in StaffPad-style apps is unreliable enough that I end up fighting it. At the same time, piano roll editors in DAWs are great for pitch and rhythm but useless for expressive markings like slurs, dynamics, and articulations.

So the idea is to flip the input model. The piano roll becomes the primary editing surface, where you tap with a finger to place notes and drag to set length, the same way you would in Logic or FL Studio. The Apple Pencil draws expressive marks directly on top of the roll. A slur is literally an arc drawn over two or more notes. A staccato is a dot above a note. A crescendo is a wedge under a range. The Pencil never has to interpret pitch or rhythm, only expression (a rough data-model sketch follows below).

The engraved score lives as a second view that stays live-synced. You swipe in from the right edge to see your piano roll as a real engraved score, then swipe back to keep editing. The recogniser only has to handle about 12 well-defined gestures, the ones you draw constantly. Everything else, like ornaments, pedal, octave displacement, and repeats, lives in a radial palette summoned by a two-finger double tap.

MVP would be solo piano only. Grand staff, premium piano sound, AUv3 host so you can route to your own samples, MusicXML import and export, iCloud sync, offline first. No multi-instrument, no ensemble sync, no live collaboration in v1.

A few questions I actually want answered. Is this solving a real problem for you, or am I describing a product no one needs because Dorico and StaffPad already handle the workflows you have? If you write piano music on iPad now, what is the thing that makes you put the iPad down and go to your laptop? Is the piano roll plus drawn expression model something you would actually use, or does it sound like a worse version of pure notation entry? And who am I missing in my target audience, beyond cue writers, theater MDs, school directors, and DAW composers who do not sight-read fluently?

I have UI sketches and an architecture doc if anyone wants to dig deeper. Mostly I want to hear what is wrong with this before I commit a year of my life to it.
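To make the split concrete, here is a minimal Swift sketch of the data model the post implies: the piano roll owns pitch and rhythm, the Pencil layer owns expression, and the engraved view renders from both. Every type, property, and case name here is an illustrative assumption, not taken from the actual app.

```swift
import Foundation

/// A note as entered on the piano roll: a finger tap sets pitch and onset,
/// a drag sets length. Ticks let the roll and the engraved view stay in sync.
struct RollNote: Identifiable, Codable {
    let id: UUID
    var midiPitch: UInt8        // 0...127
    var startTick: Int
    var durationTicks: Int
}

/// The small set of Pencil gestures the recogniser classifies. Rarer symbols
/// (ornaments, pedal, octave displacement, repeats) come from the radial
/// palette, so they never go through ink recognition at all.
enum ExpressiveMark: Codable {
    case slur(noteIDs: [UUID])                     // arc over two or more notes
    case staccato(noteID: UUID)                    // dot above a note
    case accent(noteID: UUID)
    case crescendo(startTick: Int, endTick: Int)   // wedge under a range
    case decrescendo(startTick: Int, endTick: Int)
    case dynamic(marking: String, atTick: Int)     // "p", "mf", "ff", ...
}

/// One document, two views: the piano roll edits `notes`, the Pencil layer
/// appends to `marks`, and the engraved score view is derived from both.
struct PianoScore: Codable {
    var notes: [RollNote] = []
    var marks: [ExpressiveMark] = []
}
```

The useful property under this assumption is that the recogniser's output is just an ExpressiveMark case plus the notes or tick range it attaches to; pitch and rhythm never pass through ink recognition.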

reddit.com
u/Ancient_Course4287 — 7 days ago
r/ipadmusic

I’m working on an iPad-only music app with a deliberately narrow scope — not trying to be a full Finale/Dorico replacement on day one, but to nail a few things:

What I'm actually building:
- Handwriting recognition that's actually usable — staff-aware, with clear error feedback and fast ways to fix mistakes (not "throw away the whole bar because something's wrong").
- Solo composer mode — write your score, hear it back, export when you need to finish elsewhere.
- Ensemble / rehearsal mode — conductor or librarian drives page turns (or musical position); each player's iPad shows only their assigned part after they claim their instrument/seat in the session (a rough message sketch follows this post). Built for "we're in the room with our iPads," not a generic PDF reader bolt-on.
- Playback — the notation drives audio; MIDI is in the picture, with sensible piano hand-splitting for imports/recording; the goal is lots of instrument sounds (starting from solid GM, with room to grow).

Why I'm posting:
Apps in this space get ripped for flaky recognition, abandonment, or half-baked workflows. I'm trying to stay scope-locked — solo writing + good pen entry + synced ensemble reading + playback — and resist feature creep until those feel solid.

Questions for you:
- If you compose on iPad, what's the #1 thing that makes you bail on pen-based apps today?
- For ensemble use: is "follow the conductor" enough for v1, or do you need synced playback/click in-session from day one?
- Hand splitting for piano MIDI — fixed split point, or do you care about per-note "move to other hand" immediately?

Happy to hear gut reactions, feature traps to avoid, or "I'd pay for X if Y." Thanks.
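Since the ensemble mode implies a small sync protocol, here is a hedged Swift sketch of what the session messages could look like; the message set, field names, JSON encoding, and transport are my assumptions, not the app's actual design. The hand-split question at the end of the post maps to a one-line default, also shown.

```swift
import Foundation

/// Messages a conductor or librarian device might broadcast and players might
/// send back. Purely illustrative; the transport (Multipeer, WebSocket, ...)
/// and the real message set are open design questions.
enum SessionMessage: Codable {
    case pageTurn(pageIndex: Int)                   // conductor drives page turns
    case seek(measure: Int, beat: Double)           // ...or musical position
    case claimSeat(instrument: String, seat: Int, playerID: UUID)
    case assignPart(playerID: UUID, partID: UUID)   // that player then renders only this part
}

func encode(_ message: SessionMessage) throws -> Data {
    try JSONEncoder().encode(message)
}

/// Naive fixed-split-point hand assignment for imported piano MIDI: below
/// middle C (MIDI 60) goes to the left hand. A per-note "move to other hand"
/// edit would simply override this default.
enum Hand { case left, right }

func defaultHand(forMIDIPitch pitch: UInt8, splitPoint: UInt8 = 60) -> Hand {
    pitch < splitPoint ? .left : .right
}
```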

reddit.com
u/Ancient_Course4287 — 8 days ago

Been coding this for the last few weeks. It's called Scrim. The pitch: you describe what your app should do in plain English, and an AI agent opens a real Chromium browser and tests it the way a user would — clicking, typing, signing in with stored credentials, taking screenshots, judging whether things actually work.

What I think makes it different from Mabl / Octomind / Cypress is that the same agent can also wait for emails and webhooks, so a single test verifies the whole chain — signup → welcome email arrives → click magic link → land on the dashboard. Not just "did the button click."
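To show what "the whole chain" means as data rather than prose, here is a purely hypothetical Swift sketch of the steps one signup test would cover. This is my own illustration of the flow described above; it is not Scrim's format or API, and every name and URL in it is invented.

```swift
/// One end-to-end check expressed as a sequence of agent steps. Step names
/// and fields are invented for illustration only.
enum TestStep {
    case browse(url: String)
    case act(instruction: String)                 // "fill the signup form and submit"
    case awaitEmail(to: String, subjectContains: String)
    case followLinkInEmail(matching: String)      // e.g. the magic link
    case expect(outcome: String)                  // judged from the rendered page
}

let signupChain: [TestStep] = [
    .browse(url: "https://app.example.test/signup"),
    .act(instruction: "sign up with a fresh email address"),
    .awaitEmail(to: "new-user@example.test", subjectContains: "Welcome"),
    .followLinkInEmail(matching: "magic link"),
    .expect(outcome: "the dashboard loads for the newly signed-in user"),
]
```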

I've also wired up adversarial security testing (prompt injection / jailbreak attacks against any AI features in your app), and voice testing where AI personas with realistic voices call into voice agents like Vapi or Retell. All of it lives in one chat-style interface, and failed runs auto-file GitHub issues with screenshots and repro.

Before I commit more time to it — would you actually use this? What's missing for it to be worth paying for? Brutal honesty welcome.

reddit.com
u/Ancient_Course4287 — 13 days ago