Category:ML Server

From Interaction Station Wiki
Revision as of 15:59, 23 September 2024 by Boris (talk | contribs)
Info: We are currently in the process of writing this category. Articles are unfinished and may change!

Introduction

At the Interaction Station you can run various machine learning models on the station's server. The server hosts a service named [LocalAI](https://localai.io/), a drop-in replacement that is compatible with OpenAI's API specification. It allows you to use large language models (LLMs), transcribe audio, generate images and generate audio. The only thing you need to know is how to tell the server which model to run, so that the server can give you a response. This tutorial walks you through the process using the Python programming language. To follow along, it is advised that you have some understanding of Python.

As mentioned before, LocalAI is compatible with OpenAI's API specification. That means you can also read [OpenAI's API Reference](https://platform.openai.com/docs/api-reference/introduction) for more information. This guide borrows heavily from their documentation.
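As a first impression of what such a request looks like, here is a minimal sketch of an OpenAI-style chat-completion request body. The server address and model name are placeholders, not the station's actual values; check the articles in this category for the real address and the list of available models.

```python
import json
import urllib.request

# Hypothetical address of the station's LocalAI server -- replace with the real one.
BASE_URL = "http://interactionstation.local:8080/v1"


def build_chat_request(model, prompt):
    """Build the JSON body for an OpenAI-style /chat/completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


payload = build_chat_request("some-model", "Hello, server!")

# To actually send the request (requires access to the station's network):
# req = urllib.request.Request(
#     BASE_URL + "/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

Because LocalAI follows the OpenAI specification, the same request body works with the official `openai` Python client by pointing its `base_url` at the station's server, which is the approach the tutorials in this category use.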

Pages in category "ML Server"

The following 4 pages are in this category, out of 4 total.