
Vicuna-13B
An open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.

Fine-tuned on real user conversations: trained on authentic ShareGPT dialogues for natural conversational ability.
13 billion parameters: an optimized model size balancing performance with computational efficiency.
Multi-turn dialogue support: maintains context across extended conversations.
Open-source architecture: fully accessible codebase and weights for research and customization.
Instruction following: trained to understand and execute complex user requests and prompts.
Community-driven development: benefits from ongoing improvements and contributions from the open-source community.
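The multi-turn support above works by serializing the whole conversation history into a single prompt that the model completes. A minimal sketch of building such a prompt is below; the system message and the "USER:"/"ASSISTANT:" turn markers follow the commonly published Vicuna v1.1 conversation template, which is an assumption here, so check the model card for the exact format before relying on it.

```python
# Sketch of a Vicuna-style multi-turn prompt builder.
# The template (system message, "USER:", "ASSISTANT:", "</s>" turn terminator)
# is assumed from the widely used v1.1 format, not taken from this page.

SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions."
)

def build_prompt(turns):
    """turns: list of (user_message, assistant_reply) pairs; use None as the
    reply for the final turn so the model generates the answer."""
    parts = [SYSTEM]
    for user_message, reply in turns:
        parts.append(f"USER: {user_message}")
        if reply is None:
            parts.append("ASSISTANT:")  # model continues from here
        else:
            parts.append(f"ASSISTANT: {reply}</s>")
    return " ".join(parts)

prompt = build_prompt([
    ("What is fine-tuning?",
     "Fine-tuning adapts a pretrained model to new data."),
    ("Give an example.", None),
])
print(prompt)
```

Because earlier turns stay in the prompt, the model can resolve references like "Give an example." against its own previous answer, which is how context is maintained across a conversation.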
Typical use cases include:
Research and academic projects exploring conversational AI and language models
Custom chatbot development for specific domains or organizational needs
Educational tools for learning about LLM architecture and fine-tuning
Privacy-focused applications where data cannot be sent to external APIs
Integration into applications requiring offline or on-premise AI capabilities