We use Dify as our LLM app development platform. Its intuitive interface combines AI workflows, RAG pipelines, agent capabilities, model management, observability features, and more, making it possible to take apps from prototype to production. Here’s a list of the core features:
1. Workflow: Build and test powerful AI workflows on a visual canvas, leveraging all the following features and beyond.
2. Comprehensive model support: Seamless integration with hundreds of proprietary / open-source LLMs from dozens of inference providers and self-hosted solutions, covering GPT, Mistral, Llama3, and any OpenAI API-compatible models. A full list of supported model providers is available in Dify’s documentation.
3. Prompt IDE: Intuitive interface for crafting prompts, comparing model performance, and adding additional features such as text-to-speech to a chat-based app.
4. RAG Pipeline: Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-box support for text extraction from PDFs, PPTs, and other common document formats.
5. Agent capabilities: You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion and WolframAlpha.
6. LLMOps: Monitor and analyze application logs and performance over time. You can continuously improve prompts, datasets, and models based on production data and annotations.
7. Backend-as-a-Service: All of Dify’s offerings come with corresponding APIs, so you can effortlessly integrate Dify into your own business logic.
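As a sketch of the Backend-as-a-Service point, here is how a chat request to a Dify app’s API could be constructed in Python. The endpoint path and payload shape follow Dify’s chat-messages API; the base URL and the API key are placeholders you would take from your own deployment:

```python
import json
import urllib.request

DIFY_API_BASE = "http://localhost/v1"  # assumed self-hosted endpoint
API_KEY = "app-your-api-key"           # hypothetical app API key from Dify

def build_chat_request(query: str, user: str) -> urllib.request.Request:
    """Build (but do not send) a request to Dify's chat-messages endpoint."""
    payload = {
        "inputs": {},
        "query": query,
        "response_mode": "blocking",
        "user": user,
    }
    return urllib.request.Request(
        f"{DIFY_API_BASE}/chat-messages",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the request with `urllib.request.urlopen` (or `requests.post`) returns the app’s answer as JSON.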
Open the sign-in page and enter the admin email address and password that you created when you set up the system via the installation page http://localhost/install

Then click the “Sign in” button to access the system. Next, you need to set up the system to add Large Language Models: click the top-right corner and select Settings.

This brings you to the Settings window. Click the “Model Provider” entry at the top left to show the model providers compatible with Dify, then hover over the Models section until the “Add Model” button appears and click it to add an LLM.

Here we use Azure OpenAI as an example. After you click the “Add Model” button in the “Azure OpenAI Service” section, the “Add Azure OpenAI Service Model” window opens.

Now enter the model information in the required fields:
Model Type == “LLM”
Deployment Name == “……”
API Endpoint URL == “……”
API Key == “……”
API Version == “……”
Base Model == “……”
Open Azure OpenAI Studio to get the “Deployment Name”, “API Endpoint URL”, and “API Key”. Before you can use the Azure OpenAI service, you need to create a deployment there.

Enter the data you obtained from Azure OpenAI Studio. When you copy your API key into Dify, it is encrypted and stored using PKCS1_OAEP.
For API Version and Base Model, select from the drop-down boxes. In this example we set the values as follows:
API Version == 2024-05-01-preview
Base Model == gpt-4o
Then click “Save”.
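Taken together, the values entered above amount to a configuration like the following. The deployment name, endpoint, and key are hypothetical placeholders, and the field names here are illustrative rather than Dify’s internal keys:

```python
# Illustrative summary of the Azure OpenAI model settings entered above.
azure_model_config = {
    "model_type": "LLM",
    "deployment_name": "my-gpt-4o-deployment",          # placeholder
    "api_endpoint_url": "https://my-resource.openai.azure.com",  # placeholder
    "api_key": "<your-azure-openai-api-key>",           # placeholder
    "api_version": "2024-05-01-preview",
    "base_model": "gpt-4o",
}
```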

Congratulations! You just added a model, and you can add more than one, including open-source models served through runtimes such as Ollama, which you can run on a laptop.
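For comparison, a local Ollama server exposes a simple HTTP API of its own. Assuming Ollama is running on its default port with a model such as llama3 pulled, a completion request could be built like this (the model name is a placeholder for whatever you have installed):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_ollama_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a completion request for a local Ollama server."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

When you register Ollama as a provider in Dify, Dify talks to this same local endpoint on your behalf.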
