A Review of Prompt Engineering Techniques for Optimizing Large Language Model Inference
Keywords:
prompt-based learning; pretrained language models; few-shot learning; AutoPrompt; PET; LM-BFF; prefix-tuning; prompt tuning

Abstract
Prompt-based learning has emerged as a major paradigm shift in natural language processing, enabling pretrained language models to perform downstream tasks through input reformulation rather than full task-specific retraining. This review synthesizes influential studies published up to 2021 and evaluates how prompt-based methods have improved inference effectiveness, few-shot adaptation, and parameter efficiency.

