I’ve created a package for orchestrating agents, chat models, and RAG across different backends within .NET applications. It uses LlamaSharp as the local backend for on-device LLM capabilities, and it can also integrate with other LLM providers such as Ollama, Groq, and OpenAI. The aim of the package is to enable complex LLM (and other) workflows with minimal boilerplate, making it easier to experiment with different LLM integrations.
While the implementation is still a bit raw, I hope it's already good enough for prototyping and quick PoCs.
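To make the "different backends behind one interface" idea concrete, here is a minimal, self-contained C# sketch of that pattern. It is purely illustrative: the `IChatBackend`, `LocalStubBackend`, and `HttpBackend` types below are hypothetical names invented for this example and are not part of MaIN.NET or LLamaSharp; the actual package exposes its own API.

```csharp
// Illustrative sketch only: a minimal backend abstraction of the kind described above
// (swapping local and hosted LLM providers behind one interface).
// None of these types come from MaIN.NET or LLamaSharp; they are hypothetical.
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public interface IChatBackend
{
    Task<string> CompleteAsync(string prompt);
}

// Stand-in for a local backend (e.g. a LlamaSharp-hosted model).
public sealed class LocalStubBackend : IChatBackend
{
    public Task<string> CompleteAsync(string prompt) =>
        Task.FromResult($"[local model] echo: {prompt}");
}

// Stand-in for a hosted provider reached over HTTP (Ollama/OpenAI-style endpoint).
public sealed class HttpBackend : IChatBackend
{
    private readonly HttpClient _client = new();
    private readonly string _endpoint;

    public HttpBackend(string endpoint) => _endpoint = endpoint;

    public async Task<string> CompleteAsync(string prompt)
    {
        // Posts the prompt as JSON and returns the raw response body.
        var response = await _client.PostAsJsonAsync(_endpoint, new { prompt });
        return await response.Content.ReadAsStringAsync();
    }
}

public static class Program
{
    public static async Task Main()
    {
        // The calling code depends only on the interface, so a local model
        // and a hosted provider can be swapped without changing workflow code.
        IChatBackend backend = new LocalStubBackend();
        Console.WriteLine(await backend.CompleteAsync("Hello from .NET"));
    }
}
```

The point is only the shape of the abstraction: workflow code targets the interface, so a local LlamaSharp-backed model and a remote provider can be exchanged freely.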
README.md: 1 addition & 1 deletion
@@ -65,7 +65,7 @@ There are integrations for the following libraries, making it easier to develop
 - [kernel-memory](https://github.com/microsoft/kernel-memory): a multi-modal AI Service specialized in the efficient indexing of datasets through custom continuous data hybrid pipelines, with support for RAG ([Retrieval Augmented Generation](https://en.wikipedia.org/wiki/Prompt_engineering#Retrieval-augmented_generation)), synthetic memory, prompt engineering, and custom semantic memory processing.
 - [BotSharp](https://github.com/SciSharp/BotSharp): an open source machine learning framework for AI Bot platform builder.
 - [Langchain](https://github.com/tryAGI/LangChain): a framework for developing applications powered by language models.
-
+- [MaIN.NET](https://github.com/wisedev-code/MaIN.NET): simplistic approach to orchestrating agents/chats from different (llm) providers

 The following examples show how to build APPs with LLamaSharp.