Llmpm
NPM for LLMs
38 tools found
Full-stack LLMOps platform to monitor, manage, and improve LLM-based apps.
Load and run LLMs locally to use in your terminal or to build your apps.
Evaluate, test, and ship LLM applications with a suite of observability tools to calibrate language model outputs across your dev and production lifecycle.
Route, track, and debug all LLM traffic.
Multilingual, multimodal, scalable AI tool; open-source.
An open-source framework for building production-grade LLM applications. It unifies an LLM gateway, observability, optimization, evaluations, and experimentation.
Build and deploy AI apps easily; no-code, multi-model integration.
European frontier AI models with open and closed options.
Automates and monitors LLMs for quality, compliance, and performance.
Open-source drag-and-drop LLM app and chatbot builder.
Developer platform
APIPark is an open-source enterprise API developer portal designed to streamline the management of large language models (LLMs) in production environments...
A Word Add-in that lets you use local LLM servers in Microsoft Word. An alternative to "Copilot in Word" that runs completely locally.
Open-source LLM observability for developers.
Developed by Stanford, Alpaca is an instruction-following model based on LLaMA, fine-tuned for improved performance on specific tasks. Alpaca can be deployed locally, allowing researchers to work with large language models offline for greater privacy and control.
Agents-Flex is a powerful Java framework designed for large language model (LLM) applications. This lightweight and elegantly simple framework provides de...
Open-source LLM engineering platform that helps teams collaboratively debug, analyze, and iterate on their LLM applications. [#opensource](https://github.com/langfuse/langfuse)
An AI platform providing a range of tools for language model fine-tuning, deployment, and usage.
An open-weight model that competes with LLaMA, offering a balance between performance and efficiency. Mistral can run on local servers or personal hardware for high-performance, private deployment of language models.
Local LLM Red-Teaming Tool
API / platform