🧰 LLMSmith

LLMSmith is a lightweight, unopinionated Python library for building functionality powered by Large Language Models (LLMs). It lets developers integrate generative AI capabilities into any kind of application, whether web, GUI, or something else entirely.

What does LLMSmith do?

LLMSmith provides clean, concise abstractions for building AI workflows. As a developer, this is what you’ll do when using LLMSmith:

  • define each Task in an AI workflow (which can be an LLM call, a document retrieval from a vector DB, etc.).

  • create a Job and add the above tasks to it.

  • run the Job sequentially or concurrently, depending on the use case.
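To make the steps above concrete, here is a minimal sketch of the Task/Job pattern in plain Python. Note that the class and method names (`Task`, `Job`, `add_task`, `run`) are illustrative placeholders, not LLMSmith’s actual API; see the Examples section for real usage.

```python
# Conceptual sketch of the Task/Job workflow pattern described above.
# NOT LLMSmith's actual API -- names here are illustrative placeholders.

class Task:
    """A single unit of work in an AI workflow,
    e.g. an LLM call or a vector DB lookup."""

    def __init__(self, name, func):
        self.name = name
        self.func = func  # callable that transforms the previous task's output

    def run(self, payload):
        return self.func(payload)


class Job:
    """Holds an ordered list of tasks and runs them sequentially."""

    def __init__(self):
        self.tasks = []

    def add_task(self, task):
        self.tasks.append(task)
        return self  # allow chaining

    def run(self, payload):
        # Each task's output feeds the next task's input
        for task in self.tasks:
            payload = task.run(payload)
        return payload


# A two-task workflow: retrieve documents, then "answer" with them
retrieve = Task("retrieve", lambda q: f"docs for: {q}")
answer = Task("answer", lambda docs: f"answer based on ({docs})")

job = Job().add_task(retrieve).add_task(answer)
print(job.run("what is LLMSmith?"))
# → answer based on (docs for: what is LLMSmith?)
```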

Have a look at the Examples section if you want to see how it’s done in code.

Some design decisions

  • LLMSmith has no in-built prompts

    At this stage, prompts are very “sensitive”: changing even a single word can have a considerable impact on the response generated by the LLM. Moreover, each LLM has its own prompting conventions, so the same prompt will not behave identically across different LLMs. Having in-built prompts at this stage might therefore do more harm than good.

  • LLMSmith will not create LLM/DB client instances within the library

    LLMSmith expects the developer to pass the LLM or DB client instance as a parameter when creating a Task. Each application has its own way of managing the lifecycle of these client instances (especially DB clients), so it’s better for LLMSmith to refrain from interfering with that lifecycle.
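This dependency-injection style can be sketched as follows. The `FakeLLMClient` and `LLMTask` names below are hypothetical stand-ins for illustration, not LLMSmith’s real classes; the point is only that the application creates and owns the client, while the task merely borrows it.

```python
# Sketch of the dependency-injection style described above: the application
# owns the client; the task only borrows it. Names are illustrative
# placeholders, not LLMSmith's actual API.

class FakeLLMClient:
    """Stand-in for a real LLM SDK client (e.g. one with connection pooling)."""

    def complete(self, prompt: str) -> str:
        return f"completion for: {prompt}"


class LLMTask:
    """A task that receives an already-constructed client as a parameter."""

    def __init__(self, name: str, llm_client):
        self.name = name
        self.client = llm_client  # borrowed; never created or closed here

    def run(self, prompt: str) -> str:
        return self.client.complete(prompt)


# The application controls the client's lifecycle (creation, reuse, cleanup)
client = FakeLLMClient()
task = LLMTask("summarize", llm_client=client)
print(task.run("hello"))
# → completion for: hello
```

Because the task never constructs or tears down the client, the same client instance can be shared across many tasks, and the application remains free to manage pooling and cleanup however it already does.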

  • LLMSmith does not have ETL capabilities (text wrangling, document loading, embedding, etc.)

    ETL of documents into a vector store is a problem of its own, and several battle-tested tools are already available for it.
