function-based prompts

Using functions to fine-tune LLM responses is not a new concept. This implementation focuses on concise, consistent responses to improve both productivity and reproducibility. By using functions, we can standardize outputs based on the function contents, ensuring consistent response quality. The approach also saves time by removing the need to find a fresh way to phrase the LLM's task on every request. Overall, it makes the conversation more efficient and effective.

an example?
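One way such a prompt function might be written, using Python-like notation (the name, parameter, and wording here are illustrative, not a fixed syntax):

```
seo_title(title):
    take the input idea `title` and rewrite it as an
    SEO-optimised version of that title; return only
    the rewritten title
```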

Such a function takes an input idea title and transforms it into an SEO-optimised version of that title.

Because the notation is reminiscent of Python's function syntax, entering the function definition on its own may cause the LLM to respond with programming help rather than perform the task.

We therefore propose first declaring the function so that it is remembered for the duration of the conversation: a statement that outlines the purpose of the function syntax about to be entered.
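A minimal sketch of how the declaration and subsequent calls could be assembled as plain prompt strings. The helper names, declaration wording, and `seo_title` example are assumptions for illustration, not part of any library:

```python
def declare(fn_signature: str, purpose: str) -> str:
    """Build the one-time declaration sent at the start of the
    conversation, telling the model how to treat the function syntax."""
    return (
        "For the rest of this conversation, when I write a call in the "
        "function syntax below, apply the described transformation and "
        "reply with the result only - do not respond with programming "
        "help.\n"
        f"{fn_signature}: {purpose}"
    )

def call(fn_name: str, *args: str) -> str:
    """Render a call to a previously declared prompt function."""
    quoted = ", ".join(f'"{a}"' for a in args)
    return f"{fn_name}({quoted})"

# First message of the conversation: the declaration.
declaration = declare(
    "seo_title(title)",
    "rewrite the input idea title as an SEO-optimised title",
)
# Every later request is just a short, standardized call.
message = call("seo_title", "my trip to the alps")
```

Once the declaration has been sent, each follow-up request is a one-line call, which is where the time savings and standardization below come from.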

why?

time savings - properly wording a request can take ten seconds or more each time. A declared function removes the need to think of a new phrasing for the LLM's task, and it means you are not repeating yourself.

standardization - function outputs are standardized by the function contents. We recommend keeping task details to no more than a few sentences and operations, since longer definitions reduce the LLM's ability to produce responses of consistent quality.