LocalAI
LocalAI is a free, open-source alternative to OpenAI. This guide shows how to connect LocalAI to the ONLYOFFICE editors on Linux.
Please note that installation via Docker is also possible. To learn more, refer to the official LocalAI guide.
Step 1: Installing LocalAI
Hardware requirements
| Component | Requirement |
| --- | --- |
| CPU | Multicore processor |
| RAM | 8 GB minimum; 16 GB or more recommended |
| Storage | 20 GB minimum; SSD recommended |
| GPU | Not required; optional GPU acceleration is supported (NVIDIA, AMD, Intel) |
| Network | Not required to run; recommended for downloading models and updates |
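Before installing, you can check your machine against these minimums. The sketch below reads total RAM from /proc/meminfo and free space on the root filesystem; the 8 GB and 20 GB thresholds come from the table above.

```shell
# Compare available resources with the documented minimums (Linux).
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
disk_kb=$(df -k / | awk 'NR==2 {print $4}')
[ "$mem_kb" -ge $((8 * 1024 * 1024)) ] && echo "RAM: OK" || echo "RAM: below 8 GB minimum"
[ "$disk_kb" -ge $((20 * 1024 * 1024)) ] && echo "Storage: OK" || echo "Storage: below 20 GB minimum"
```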
Installation
curl https://localai.io/install.sh | sh
Step 2: Installing the required model
Use the following command to install the required model:
local-ai models install name_of_the_model
where name_of_the_model is the name of the model you need.
Learn more about the available models on the official LocalAI website.
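As a sketch, the commands might look like this. The model name below is only an example; browse the LocalAI model gallery for the actual names, and note that `local-ai models list` is assumed here as a way to confirm the installation.

```shell
# Example model name -- replace with a model from the LocalAI gallery.
MODEL="llama-3.2-1b-instruct:q4_k_m"
if command -v local-ai >/dev/null 2>&1; then
  local-ai models install "$MODEL"
  local-ai models list   # confirm the model appears among the installed ones
else
  echo "local-ai not found; complete step 1 first."
fi
```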
Step 3: Launching LocalAI with the --cors flag
This step is required if you want to use your AI assistant not only locally, but also over the web.
local-ai run --cors
Alternatively, you can add the following line to the /etc/localai.env file:
CORSALLOWEDORIGINS="*"
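Once the server is running, you can verify it responds on the default port. This sketch assumes the default address 127.0.0.1:8080 and LocalAI's /readyz readiness endpoint.

```shell
# Probe the LocalAI server on its default port.
if curl -fsS http://127.0.0.1:8080/readyz >/dev/null 2>&1; then
  msg="LocalAI is ready"
else
  msg="LocalAI is not reachable on 127.0.0.1:8080"
fi
echo "$msg"
```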
Step 4: Configuring the AI plugin settings in ONLYOFFICE
- Please refer to our configuration guide for the initial setup.
- Specify LocalAI as the provider name.
- Enter the URL http://127.0.0.1:8080 if you did not change it when launching local-ai.
- The models will load into the list automatically; select the one you installed in step 2 of this guide.
- Click OK to save the settings.
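To confirm the connection end to end, you can send a request to the OpenAI-compatible endpoint that LocalAI exposes, the same one the plugin uses. A minimal sketch using only the Python standard library; the model name is an example and should match the model you installed in step 2.

```python
# Minimal check of LocalAI's OpenAI-compatible chat completions endpoint.
import json
import urllib.error
import urllib.request

url = "http://127.0.0.1:8080/v1/chat/completions"
payload = {
    "model": "llama-3.2-1b-instruct",  # example name; use your installed model
    "messages": [{"role": "user", "content": "Say hello"}],
}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        answer = json.load(resp)["choices"][0]["message"]["content"]
        print("LocalAI replied:", answer)
except (urllib.error.URLError, OSError) as exc:
    print("LocalAI is not reachable:", exc)
```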