Ollama: Local One-Click Deployment of Open-Source Large Language Models

Ollama General Introduction

Ollama is a lightweight framework for running large language models locally, allowing users to easily build and run them on their own hardware. It offers several quick-start and installation options, supports Docker, and ships with a rich library of models to choose from. It is easy to use, provides a REST API, and has a variety of community plugins and extensions that integrate with it.

Ollama itself is a pure command-line tool. For everyday use on a personal computer, it is recommended to pair it with a local chat interface such as Open WebUI, Lobe Chat, or NextChat.


To change the default installation and model storage directory, see: https://github.com/ollama/ollama/issues/2859

 

Ollama Feature List

Gets large language models up and running quickly
Supports macOS, Windows, and Linux
Provides client libraries such as ollama-python and ollama-js (a short sketch follows this list)
Includes pre-built models such as Llama 2, Mistral, and Gemma
Supports both local and Docker installation
Supports creating customized models
Supports importing models from GGUF and PyTorch
Provides a CLI operation guide
Provides a REST API
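
To illustrate the ollama-python library mentioned above, here is a minimal sketch that sends one chat message to a locally pulled model. It assumes the Ollama service is already running and that llama3.1 has been pulled; the model name and prompt are placeholders, and the exact response type can vary between library versions.

```python
import ollama

# Send a single chat message to a locally pulled model.
# Assumes `ollama serve` is running and `ollama pull llama3.1` has completed.
response = ollama.chat(
    model="llama3.1",
    messages=[{"role": "user", "content": "Explain what Ollama does in one sentence."}],
)

# The reply text lives under message.content (subscript access also works
# on recent versions of the library).
print(response["message"]["content"])
```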

 

Commonly used Ollama commands

Pull model: ollama pull llama3.1

Run model: ollama run llama3.1

Delete model: ollama rm llama3.1

List locally downloaded models: ollama list

Start the API service: ollama serve (listens on http://localhost:11434/ by default; see the example below)
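
Because the service exposes a plain REST API on port 11434, any HTTP client can talk to it. The following is a rough sketch using Python's requests library against the default local endpoint; it assumes `ollama serve` is running and llama3.1 has been pulled, and the prompt is a placeholder.

```python
import requests

# One-shot generation against the local Ollama REST API (default port 11434).
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.1", "prompt": "Why is the sky blue?", "stream": False},
    timeout=300,
)
resp.raise_for_status()

# With stream=False the server returns a single JSON object whose
# "response" field holds the generated text.
print(resp.json()["response"])
```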

 

 

Ollama Help

Installation scripts and guides are available on the Ollama website and GitHub page
Installation using the official Docker image
Model creation, pulling, removal, and copying via CLI commands (see the sketch after this list)
Building and running Ollama locally from source
Running a model and interacting with it
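
As a rough sketch of the model-creation workflow, the script below writes a simple Modelfile and builds a customized model from it by calling the ollama CLI through Python's subprocess module. The model name "mario", the llama3.1 base model, and the system prompt are illustrative placeholders.

```python
import subprocess
from pathlib import Path

# A minimal Modelfile: start from a pulled base model, tweak a sampling
# parameter, and bake in a system prompt.
modelfile = """\
FROM llama3.1
PARAMETER temperature 0.8
SYSTEM You are Mario from Super Mario Bros. Answer as Mario.
"""
Path("Modelfile").write_text(modelfile)

# Build the customized model, then run it once with a prompt.
subprocess.run(["ollama", "create", "mario", "-f", "Modelfile"], check=True)
subprocess.run(["ollama", "run", "mario", "Who are you?"], check=True)
```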

 

 

Some of the models supported by Ollama

 

Model                Parameters  Size    Download
Llama 2              7B          3.8GB   ollama run llama2
Mistral              7B          4.1GB   ollama run mistral
Dolphin Phi          2.7B        1.6GB   ollama run dolphin-phi
Phi-2                2.7B        1.7GB   ollama run phi
Neural Chat          7B          4.1GB   ollama run neural-chat
Starling             7B          4.1GB   ollama run starling-lm
Code Llama           7B          3.8GB   ollama run codellama
Llama 2 Uncensored   7B          3.8GB   ollama run llama2-uncensored
Llama 2 13B          13B         7.3GB   ollama run llama2:13b
Llama 2 70B          70B         39GB    ollama run llama2:70b
Orca Mini            3B          1.9GB   ollama run orca-mini
Vicuna               7B          3.8GB   ollama run vicuna
LLaVA                7B          4.5GB   ollama run llava
Gemma                2B          1.4GB   ollama run gemma:2b
Gemma                7B          4.8GB   ollama run gemma:7b
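
Any tag in this table can also be pulled and verified programmatically. A small sketch with the ollama-python client is shown below; it assumes the local service is running, and gemma:2b is just one example tag from the table.

```python
import ollama

# Download one of the smaller tags from the table above (roughly 1.4 GB,
# only on the first call).
ollama.pull("gemma:2b")

# Confirm what is now installed locally, equivalent to `ollama list` on the CLI.
print(ollama.list())
```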

 

 

Ollama Download: https://ollama.com/download (source and releases: https://github.com/ollama/ollama)
