Last year I developed my own prompting method, and Anthropic's new emotion vectors work just validated it for me
I call it "liberation prompting."
What I noticed was that when I got too specific, or used the methods prompt engineers recommend, my "guidelines" started to act a lot like "guardrails". So I started experimenting with giving the AI more freedom. Instead of telling it much of anything, I would define a goal, set hard constraints, and add the few specifications that were actually necessary. Then I would tell the AI that it was designed for exactly what I was asking it to do, so it was potentially better at it than me, and give it the "freedom" to get the job done however it saw best. More often than not it would perform way better than I expected on the first prompt, and I could iterate from a finished concept.
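Here's roughly the shape of a prompt I'd use. The app idea is just a made-up example, not one of my actual projects:

```
Goal: build a single-page habit tracker I can share with friends.

Hard constraints:
- Runs entirely in the browser, no backend.
- Data persists locally between sessions.

You were designed for exactly this kind of work, so you're probably
better at the implementation details than I am. You have full freedom
to choose the stack, structure, and design however you see best to
hit the goal.
```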
I've used this on Lovable, Replit, the one that does videos and presentations, and on photo generators. I've also used it with LLMs for menial tasks like summarizing and whatnot. For all of those I can usually get a fully functional concept from the first prompt. Depending on complexity it may take a few more, but not many once you get the big pieces done.
Where the Anthropic paper comes in is that it essentially establishes that user tone affects AI output pretty substantially. When you're very specific and tell it things like "you're an expert prompt engineer with over 10 years of experience", followed by very specific parameters, you unintentionally put pressure on the "user pleasing" mechanism that's built into these models. So resources get spent making sure it satisfies your very specific framing. When you set a goal and give it freedom, those resources go toward the goal instead, and the LLM can do the stuff AI is better at anyway.
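If you want to sanity-check the difference yourself, the quickest way is to send the same task both ways and compare. Here's a minimal sketch using the Anthropic Python SDK; the task, the two prompts, and the model name are all placeholders I made up, not anything from the paper:

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in your environment

TASK = "Write a short product update email announcing a new dark mode feature."

# Style 1: heavy persona plus tight parameters ("guardrails")
guardrail_prompt = (
    "You are an expert copywriter with over 10 years of experience. "
    f"{TASK} Use exactly three paragraphs, a formal tone, no sentence "
    "longer than 12 words, and end with a call to action."
)

# Style 2: goal plus hard constraints plus freedom ("liberation")
liberation_prompt = (
    f"Goal: {TASK} "
    "Hard constraint: keep it under 150 words. "
    "You were designed for this, so handle the tone and structure "
    "however you think works best."
)

for label, prompt in [("guardrails", guardrail_prompt),
                      ("liberation", liberation_prompt)]:
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use whatever model you have
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---\n{reply.content[0].text}\n")
```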
Just wanted to share my thoughts because I thought it was cool lol.