Lessons from a Year of LLM Development

July 08, 2024

I’ve been using LLMs as part of my workflow and developing LLM-based solutions over the past year. Here are some of the lessons I’ve learned:

  1. Like any software or data solution, start by getting a basic end-to-end solution working. Small, iterative cycles show progress and leave room to redirect the solution.
  2. Get user or SME feedback throughout the process. This will help inform your prompts and evaluation criteria, and it will help you identify where your solution might need human interaction (human-in-the-loop).
  3. Define your inputs – what data is needed for accurate responses, where it resides, and how you access it. This could mean creating a vector store, ingesting files, or hard-coding text in API calls. But remember to keep it simple in the beginning – don't add operational complexity unless it's necessary.
  4. Break up large prompts into smaller, detailed prompts. This makes the tasks more manageable and can lead to more accurate responses; not all LLMs are good at following long, complex instructions. For prompt organization, check out fabric by @danielmiessler.
  5. AWS Step Functions are a natural choice for building prompt-chaining workflows because they support the different ways prompts need to be chained – sequentially, in parallel, or iteratively – passing state data from one task to the next.
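To make the chaining idea in points 4 and 5 concrete, here is a minimal sketch of sequential prompt chaining: each step gets a small, focused prompt and passes its output forward as state, which is the same pattern a Step Functions workflow applies at the orchestration level. The `call_llm` function is a hypothetical stand-in for your model client (stubbed here so the sketch runs without API access); the prompts and state keys are illustrative, not from any particular project.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. via your provider's SDK)."""
    return f"[response to: {prompt[:40]}]"


def run_chain(document: str) -> dict:
    # State dict plays the role of Step Functions state passed between tasks.
    state = {"document": document}

    # Step 1: a narrow extraction prompt instead of one giant instruction.
    state["facts"] = call_llm(
        f"List the key facts in this document:\n{state['document']}"
    )

    # Step 2: the next prompt consumes the previous step's output.
    state["summary"] = call_llm(
        f"Write a two-sentence summary using these facts:\n{state['facts']}"
    )
    return state


result = run_chain("Quarterly revenue grew 12% driven by new contracts.")
```

Each step is independently testable and easy to swap out, which is what makes the small-prompt approach flexible when the solution needs redirecting.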


Written by:
David Curry
Software & AI Technologist

© 2024, Stack Assembly