

Prerequisites


Last updated 1 year ago


Flexstack AI Host requires a system that meets the following criteria for stable and efficient operation.

  • Python: Version 3.10. Ensure that Python is properly installed and configured on your system. Use the command python --version to check your current Python version. If you don't have Python yet, this guide is helpful for installing it.

  • CUDA: Version 12.1 or compatible.

    Checking and Installing CUDA

    • Use the command nvcc --version in the terminal to check your current CUDA version. If you do not have CUDA installed, or if it is not version 12.1, please visit NVIDIA's official website to download and install CUDA 12.1.
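If you want to verify the CUDA version programmatically rather than by eye, the release number can be parsed out of the nvcc --version output. A minimal sketch, assuming nvcc's usual "Cuda compilation tools, release X.Y, ..." line:

```python
import re

def parse_cuda_release(nvcc_output):
    """Parse the CUDA release, e.g. (12, 1), from `nvcc --version` output.

    Returns None if no "release X.Y" token is found.
    """
    match = re.search(r"release (\d+)\.(\d+)", nvcc_output)
    if match is None:
        return None
    return int(match.group(1)), int(match.group(2))
```

To use it, capture the output of nvcc --version (for example with subprocess.run) and compare the result against (12, 1).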

  • G++ compiler is required.

    Checking and Installing G++

    • To check the current version of G++ installed, open a terminal and type g++ --version. If G++ is not installed on your system, you will need to install it via your operating system's package manager.

  • GPU: If your system includes a GPU, a minimum of 9GB VRAM is required for image generation tasks. For LoRA Training tasks, a minimum of 16GB VRAM is required.

    Checking VRAM

    • You can use a tool like nvidia-smi on systems with NVIDIA GPUs to check VRAM capacity.
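On NVIDIA systems, nvidia-smi's query mode reports total VRAM per GPU in MiB, which makes the check scriptable. A minimal sketch (the --query-gpu and --format flags are standard nvidia-smi options):

```python
import subprocess

def parse_vram_mib(csv_output):
    """Parse per-GPU memory totals (MiB) from nvidia-smi noheader/nounits CSV."""
    return [int(line) for line in csv_output.splitlines() if line.strip()]

def query_vram_mib():
    """Total VRAM per GPU via `nvidia-smi --query-gpu=memory.total`.

    Raises if nvidia-smi is missing or fails (non-NVIDIA systems).
    """
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_vram_mib(out)
```

Per the requirements above, image generation needs at least 9GB (9216 MiB) and LoRA Training at least 16GB (16384 MiB) on each GPU you intend to use.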

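Taken together, the checks above can be bundled into a small preflight script. This is a sketch, not an official Flexstack AI tool: the thresholds come from this page, the binary names (nvcc, g++, nvidia-smi) are the standard tools, and it only checks that each binary is present — the exact CUDA version and VRAM checks still need the commands described above.

```python
import shutil
import sys

def preflight():
    """Report whether each prerequisite on this page appears to be met."""
    results = {
        "Python >= 3.10": sys.version_info[:2] >= (3, 10),
        "nvcc (CUDA toolkit)": shutil.which("nvcc") is not None,
        "g++": shutil.which("g++") is not None,
        "nvidia-smi (NVIDIA GPU)": shutil.which("nvidia-smi") is not None,
    }
    for name, ok in results.items():
        print(f"{'OK     ' if ok else 'MISSING'} {name}")
    return results

if __name__ == "__main__":
    preflight()
```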