The performance of a large language model (LLM) is sensitive to the way it is prompted. Automated prompt engineering methods aim to find suitable prompts for a given task by sampling several candidate prompts and evaluating them. Existing methods either do not generate sufficiently diverse candidates or rely on a collection of meta-prompting tricks to achieve the desired results. In this thesis, we will use a prompt-selection method that directly optimises for diversity and estimated performance by exploiting so-called determinantal point processes (DPPs). The thesis will involve comparing this technique to state-of-the-art prompt engineering methods such as PromptBreeder from DeepMind.
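To make the selection idea concrete, below is a minimal sketch (not the method to be developed in the thesis) of greedy maximum a posteriori (MAP) subset selection under a DPP. It assumes candidate prompts have already been embedded and that a per-prompt quality estimate (e.g. validation accuracy) is available; all names such as `dpp_greedy_select`, `embeddings`, and `quality` are illustrative.

```python
import torch


def dpp_greedy_select(embeddings: torch.Tensor, quality: torch.Tensor, k: int) -> list[int]:
    """Greedily pick k prompts under a DPP whose kernel trades off
    per-prompt quality against pairwise semantic similarity."""
    # Cosine-similarity kernel between prompt embeddings
    emb = torch.nn.functional.normalize(embeddings, dim=1)
    S = emb @ emb.T
    # Quality-diversity decomposition: L_ij = q_i * S_ij * q_j
    L = quality.unsqueeze(1) * S * quality.unsqueeze(0)

    n = L.shape[0]
    selected: list[int] = []
    for _ in range(k):
        best_gain, best_i = -float("inf"), None
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            sub = L[idx][:, idx]
            # log-determinant of the selected submatrix rewards high quality
            # and penalises redundant (similar) prompts; jitter keeps it stable
            gain = torch.logdet(sub + 1e-6 * torch.eye(len(idx)))
            if gain > best_gain:
                best_gain, best_i = gain, i
        selected.append(best_i)
    return selected


if __name__ == "__main__":
    torch.manual_seed(0)
    emb = torch.randn(20, 64)    # e.g. sentence embeddings of 20 candidate prompts
    q = torch.rand(20) + 0.5     # estimated performance of each prompt
    print(dpp_greedy_select(emb, q, k=5))
```

The kernel L = diag(q) S diag(q) is the standard quality-diversity decomposition of a DPP kernel: the log-determinant of a selected submatrix grows with the estimated quality of the chosen prompts and shrinks when they are semantically redundant, which is exactly the trade-off the thesis aims to exploit.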
Expectations
- Excellent and long-standing interest in and knowledge of mathematics
- Good programming skills in Python; experience with PyTorch is optional