What is the threshold where a local LLM is no longer viable for coding?
I have read many of the posts in this subreddit on this subject but I have a personal perspective that leads me to ask this question again.
I am a sysadmin professionally, with only limited scripting experience in that domain. However, I've recently realized what Claude Code allows me to do as an amateur in terms of generating much more advanced code. My assumption is that we are in a loss-leader phase and this service will not be available at $20/mo forever. So I am curious whether there is any point in exploring whether smallish local models can meet my very introductory needs in this area, or if that would simply be disappointing and a waste of money on hardware.
Specifically, my expertise level is limited to things like creating scrapers and similar tools to collect and record information from various sources about events (sports, arts, music, food, etc.), then using an LLM to infer whether to notify me, based on a preference system built for this purpose. Who knows what I might want to build in the future, but that is where I'm starting, which I'm assuming is a basic difficulty level.
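For concreteness, here is a minimal sketch of the kind of pipeline described above: collect events, run each through a preference check, and decide whether to notify. The `Event` structure, the preference weights, and `should_notify` are all hypothetical placeholders; in a real version the decision step would be a prompt to a local or hosted LLM rather than a keyword lookup.

```python
from dataclasses import dataclass

@dataclass
class Event:
    title: str
    category: str  # e.g. "sports", "music", "food" (hypothetical taxonomy)

def should_notify(event: Event, preferences: dict, threshold: float = 0.5) -> bool:
    """Stand-in for the LLM inference step: notify when the preference
    weight for the event's category clears a threshold. A real build
    would replace this with a call to a local model."""
    return preferences.get(event.category, 0.0) >= threshold

# Hypothetical scraped events and preference weights
preferences = {"music": 0.9, "sports": 0.2}
events = [
    Event("Jazz night downtown", "music"),
    Event("Minor-league game", "sports"),
]

notifications = [e.title for e in events if should_notify(e, preferences)]
print(notifications)  # only the high-preference music event survives the filter
```

The point of the sketch is that the scraping and filtering scaffolding is plain Python; the only piece that actually needs an LLM is the judgment call inside `should_notify`, which is also the piece whose quality depends on model size.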
Using local models that fit in 64 GB of VRAM (or unified memory), would I be able to generate this kind of code somewhat comparably to how well I can with Claude Code now, or is that completely unrealistic?