Show HN: Standardizing build, experiment, and deployment for LLM Development
4 by asif_ | 0 comments on Hacker News.
*Motivation*

Hi hackers, I'm Asif. I know we dislike premature standardization, but hear me out. LLM application development is extremely iterative, more so than most other kinds of application development, so we need a process that lets us iterate faster. On top of the usual work of regular software development, we also have to make the LLM application accurate and reduce hallucination. That means trialling many combinations of LLM models, prompt templates (e.g., few-shot, chain-of-thought), prompt context built with different RAG architectures, and possibly multi-agent architectures. There are thousands of permutations to try, and we want to experiment with them easily and have a process to objectively judge LLM performance, so we can iterate toward our accuracy goals (see the sketches at the end of this post).

*Solution*

I have been working in AI since 2021: first on ML at a FAANG company, then with LLMs at start-ups since early 2023. Along the way I have talked with many companies, both successful and unsuccessful with AI development. Drawing on those learnings, I am working on an open-source framework to standardize the build, experiment, and deploy process for LLM development. The goal of this framework is to optimize for rapid iteration: it enforces a modular build of the LLM application layer so you can easily test different configurations of your application, while keeping full flexibility to use any external tools you want. It also includes tools to benchmark your accuracy and improve your application's performance in a data-driven way. Finally, everything is deployable as a Docker image.

*Getting Involved*

If you're curious, check us out on GitHub. You can get fully set up with a single command. Stars help with visibility and are much appreciated.
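To make "thousands of permutations" concrete, here is a minimal sketch in plain Python (hypothetical axes and names, not our actual API) of how fast the configuration space grows from just four small axes:

    from itertools import product

    # Hypothetical axes of an LLM application's configuration.
    MODELS = ["gpt-4o", "claude-sonnet", "llama-3-70b"]
    PROMPT_STYLES = ["zero-shot", "few-shot", "chain-of-thought"]
    RETRIEVAL = [
        {"rag": None},                 # no retrieval
        {"rag": "dense", "top_k": 5},
        {"rag": "dense", "top_k": 20},
        {"rag": "hybrid", "top_k": 5},
    ]
    TEMPERATURES = [0.0, 0.7]

    configs = [
        {"model": m, "prompt": p, "temperature": t, **r}
        for m, p, r, t in product(MODELS, PROMPT_STYLES, RETRIEVAL, TEMPERATURES)
    ]
    print(len(configs), "configurations from four small axes")  # 72

Add one more prompt variable or a reranker and you are into the hundreds, which is why ad-hoc tweaking stops scaling and a process is needed.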
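And continuing that sketch (same hypothetical names, exact-match scoring only for brevity), the data-driven evaluation loop: score every configuration against a small labeled eval set and let the numbers pick the winner, rather than eyeballing outputs:

    def run_app(config, question):
        # Stand-in for the real application; wire in your model,
        # prompt template, and retrieval here. Returns a canned
        # answer so the sketch runs end to end.
        return "2014"

    def evaluate(config, eval_set):
        # Fraction of eval examples answered correctly under `config`.
        correct = sum(
            run_app(config, ex["question"]).strip() == ex["expected"].strip()
            for ex in eval_set
        )
        return correct / len(eval_set)

    eval_set = [
        {"question": "What year was the company founded?", "expected": "2014"},
        {"question": "Who is the current CEO?", "expected": "Jane Doe"},
    ]

    # Sweep the `configs` list from the sketch above.
    best_score, best_config = max(
        ((evaluate(c, eval_set), c) for c in configs),
        key=lambda pair: pair[0],
    )
    print(f"best accuracy {best_score:.0%} with {best_config}")

Exact match is the crudest possible judge; in practice you would swap in task-specific scoring or an LLM-as-judge, but the shape of the loop is the point.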