Category:ML Server
Introduction
At the interaction station you can run various machine learning models on the station's server. The server hosts a service named LocalAI, which is compatible with OpenAI's API specification and can be used as a drop-in replacement for it. It lets you run large language models (LLMs), transcribe audio, generate images, and generate audio. The only thing you need to know is how to tell the server which model to run so that it can give you a response. This tutorial walks you through the process using the [Category:Python|Python] programming language. To follow along, some understanding of Python is advised.
As mentioned above, LocalAI is compatible with OpenAI's API specification, so you can also consult OpenAI's API Reference for more information. This guide borrows heavily from their documentation.
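Because LocalAI speaks the OpenAI API, a chat request is just an HTTP POST to an OpenAI-style endpoint. The sketch below builds such a request with Python's standard library only; the server address and model name are assumptions to be replaced with your station's actual values.

```python
# Minimal sketch of an OpenAI-style chat completion request to a LocalAI
# server. BASE_URL and the model name are assumptions; substitute the
# address of your station's server and a model it actually hosts.
import json
import urllib.request

BASE_URL = "http://localhost:8080"  # assumed LocalAI address

payload = {
    "model": "gpt-4",  # assumed model name; LocalAI maps this to a local model
    "messages": [
        {"role": "user", "content": "Hello, how are you?"},
    ],
}

def build_request(base_url: str, body: dict) -> urllib.request.Request:
    """Build a POST request for the /v1/chat/completions endpoint."""
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request(BASE_URL, payload)
# urllib.request.urlopen(req) would send the request; the JSON reply
# contains the model's answer under choices[0]["message"]["content"].
```

The same request shape works for the other capabilities (audio transcription, image generation), only the endpoint path and payload fields change, as described in OpenAI's API Reference.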
Pages in category "ML Server"
The following 4 pages are in this category, out of 4 total.