Prompt Shaping: Measuring the Impact of Prompt Modifiers on Output Size and Format
What if you could guide large language models (LLMs) to output shorter, clearer, or more structured responses just by adjusting the prompt?
This experiment set out to answer a simple yet valuable question:
Can we shrink or reshape LLM output just by...
metawake.hashnode.dev · 4 min read
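The experiment's setup can be sketched roughly as follows. This is a minimal illustration, not the article's actual harness: `call_llm` is a placeholder for whatever model client you use, the modifier strings are illustrative, and word count stands in as a simple proxy for output size.

```python
# Illustrative sketch of measuring how prompt modifiers shape output size.
# Assumption: `call_llm` is any function mapping a prompt string to a
# model's response string (swap in a real client).

MODIFIERS = {
    "baseline": "",
    "concise": "Answer in one short sentence.",
    "bullets": "Answer as a bulleted list.",
}

def shaped_prompt(base: str, modifier: str) -> str:
    """Append a shaping modifier to the base prompt."""
    return f"{base} {modifier}".strip()

def output_size(text: str) -> int:
    """Measure response size in words (a crude proxy for tokens)."""
    return len(text.split())

def run_experiment(base: str, call_llm) -> dict:
    """Return the response word count for each modifier variant."""
    return {
        name: output_size(call_llm(shaped_prompt(base, mod)))
        for name, mod in MODIFIERS.items()
    }
```

Comparing the per-modifier counts against the baseline then shows whether a given modifier actually shrinks or restructures the output.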