u/Aadi_1234567

▲ 5 r/Freelancers +2 crossposts

Over the past few months I’ve been observing how service businesses advertise online (consultants, agencies, clinics, coaches, etc.).

One thing that keeps standing out is how many businesses selling services worth ₹1L+ market themselves the same way someone selling a ₹999 product would.

For example:

• Ads focused heavily on discounts or urgency

• Messaging that talks about features instead of outcomes

• Very little effort put into building trust before the sale

But when the price point is high (₹1L–₹5L+), the buyer psychology seems completely different.

People usually aren’t asking “Is this cheap enough?”

Instead they’re thinking things like:

• Can I trust this person?

• Do they actually understand my situation?

• Is this worth the risk?

Some of the more interesting campaigns I’ve seen lean much more into education, proof, and positioning instead of trying to push immediate conversions.

Lead volume might be lower, but the people who come in tend to be far more serious.

Curious to hear from others here.

If you run or market a service that costs ₹1L or more, what has worked best for you in attracting serious buyers rather than just generating a lot of low-intent leads?

u/Aadi_1234567 — 15 days ago
▲ 4 r/FacebookAds +1 crosspost

Been seeing a lot of posts lately saying Meta performance has completely fallen off this month.

Higher CPMs, worse lead quality, campaigns dying after a few days, etc.

Interestingly, I’m seeing the opposite in one of the accounts I’m managing right now.

It’s a local wedding planning + decor business, and over the last 7 days we generated 193 leads in total across two ad accounts.

Planning: ~90 leads at around ₹95 per conversation

Decor: ~103 leads averaging around ₹70 per conversation

Total spend across both was roughly ₹15k.
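For anyone checking the blended numbers, here is a quick sanity check using the per-account figures reported above (the ~90/~103 lead counts and ~₹95/~₹70 costs are taken as exact for the arithmetic):

```python
# Sanity-check the reported lead numbers and spend.
planning_leads, planning_cpl = 90, 95    # ~₹95 per conversation
decor_leads, decor_cpl = 103, 70         # ~₹70 per conversation

total_leads = planning_leads + decor_leads
total_spend = planning_leads * planning_cpl + decor_leads * decor_cpl
blended_cpl = total_spend / total_leads

print(total_leads)            # 193
print(total_spend)            # 15760 — roughly the ₹15k reported
print(round(blended_cpl, 2))  # 81.66 — blended cost per lead in ₹
```

So the blended cost per lead works out to roughly ₹82, consistent with the two per-account figures.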

Nothing crazy in terms of setup either.

The way we approached it was basically testing a few different structures first:

• Broad targeting (completely open)

• Detailed targeting around wedding interests

• Custom audiences from past engagement

• Lookalikes

Initially, the detailed targeting worked okay, but the broad ad sets eventually started outperforming everything once the account warmed up.

After that it mostly came down to creatives.

We tested a bunch of different angles and once we found the 2–3 creatives Meta clearly preferred, we just let those run and stopped touching the ad set structure.

At that point performance became pretty stable.

Curious what others are seeing this month.

Is Meta actually struggling right now or is it just certain niches/accounts getting hit?

u/Aadi_1234567 — 18 days ago

Recently I’ve been following a pretty structured testing framework for my Meta campaigns and I’m curious if anyone here would approach it differently.

For testing, I usually launch 1 campaign with 3 ad sets and place 5 creatives inside each ad set.

All ad sets use the same creatives, but each one targets a different audience. I run it on ad set budgets (ABO) instead of CBO because I want relatively equal spend distribution while testing.

The idea is to isolate variables as much as possible so I can clearly see:

• which audience actually drives the results

• which 1–2 creatives perform consistently across multiple audiences
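The test structure described above (1 campaign, 3 ad sets, the same 5 creatives in each, ad-set-level budgets) can be sketched as plain data. This is just an illustration of the layout, not Marketing API code; all names and the budget figure are hypothetical:

```python
# Sketch of the test-campaign layout described above.
# Names and budget are placeholders, not real API fields.
creatives = [f"creative_{i}" for i in range(1, 6)]   # same 5 creatives everywhere
audiences = ["broad", "interest_stack", "lookalike"]  # 3 different audiences

test_campaign = {
    "name": "test_campaign",
    "budget_level": "ad_set",  # ABO: each ad set gets its own budget
    "ad_sets": [
        {"audience": aud, "daily_budget_inr": 500, "creatives": creatives}
        for aud in audiences
    ],
}
```

Because every ad set carries an identical creative list, any performance difference between ad sets can be attributed to the audience, and any creative that wins in multiple ad sets is winning independently of audience.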

Once I get enough data and identify the winners, I launch a separate campaign purely for scaling, using the winning audience + the winning creatives.
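The winner-identification step can be mechanized along the lines described: pick the audience with the lowest average cost per lead, and pick the creatives whose worst-case cost across audiences is lowest (i.e. the ones that perform consistently everywhere). A toy sketch — every CPL figure here is invented for illustration:

```python
from collections import defaultdict

# Hypothetical test results: cost per lead (₹) by (audience, creative).
results = {
    ("broad", "c1"): 80,  ("broad", "c2"): 150,  ("broad", "c3"): 95,
    ("interests", "c1"): 110, ("interests", "c2"): 140, ("interests", "c3"): 120,
    ("lookalike", "c1"): 90,  ("lookalike", "c2"): 160, ("lookalike", "c3"): 100,
}

by_audience = defaultdict(list)
by_creative = defaultdict(list)
for (aud, cre), cpl in results.items():
    by_audience[aud].append(cpl)
    by_creative[cre].append(cpl)

# Winning audience: lowest average CPL across its creatives.
best_audience = min(by_audience, key=lambda a: sum(by_audience[a]) / len(by_audience[a]))

# Consistent creatives: lowest worst-case CPL across audiences, top 2.
top_creatives = sorted(by_creative, key=lambda c: max(by_creative[c]))[:2]

print(best_audience)   # "broad"
print(top_creatives)   # ["c1", "c3"]
```

Using worst-case CPL (rather than average) for creatives matches the goal stated above: finding the 1–2 creatives that hold up across multiple audiences, not ones that spike in a single ad set.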

I usually let that campaign run for 1–2 weeks, and if performance stays stable I’ll increase the budget once by roughly 10%, then just let it run until performance naturally drops (usually when creative fatigue starts kicking in).

One thing I intentionally avoid doing is adding new ad sets or creatives into a campaign that’s already performing. I try not to disrupt the existing learning stability.

So whenever I am done testing new audiences or new creatives, I create a separate campaign for scaling instead of modifying the active one.

The cycle basically becomes testing campaigns, identifying the winners, launching a separate campaign for scaling, letting it run, and then repeating the process with new tests.

This structure has been working fairly well for me so far, but I’m curious how others here structure their testing vs scaling framework.

Would you change anything about this?

Especially interested to hear how others test and add new audiences/creatives without risking disruption to campaigns that are already performing.

u/Aadi_1234567 — 21 days ago