Create video

HTTP request

POST /ai/video_generation

Authorization

Include your ACCESS TOKEN in HTTP Authorization header

Authorization: Bearer Token
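A minimal sketch of assembling this request in Python. The base URL is a placeholder (substitute the actual Flexstack AI API host), and the helper name is illustrative, not part of the API:

```python
import json

# Placeholder host -- substitute the actual Flexstack AI API base URL.
BASE_URL = "https://api.flexstack.ai"

def build_create_video_request(access_token, prompt, **configs):
    """Assemble the URL, headers, and JSON body for POST /ai/video_generation."""
    return {
        "url": f"{BASE_URL}/ai/video_generation",
        "headers": {
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        "json": {"prompt": prompt, "configs": configs},
    }

req = build_create_video_request(
    "YOUR_ACCESS_TOKEN",
    "A serene mountain landscape at sunrise.",
    model="damo-text-to-video",
    width=512,
    height=512,
)
print(json.dumps(req["json"], indent=2))
```

The request body and the bearer token travel separately: the token goes in the `Authorization` header, while `prompt` and `configs` form the JSON payload described below.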

Request Parameters

| KEY | TYPE | VALUE |
| --- | --- | --- |
| prompt | String | A brief description or theme for the video you want to generate. Example: "A scenic landscape with rolling hills and a clear blue sky." |
| configs | JSON | A JSON object of settings that customize the video generation process. Its fields are described in the rows below. |
| model | String | The AI model used to generate the video. Default: "damo-text-to-video". |
| width | Integer | Width of the generated video frame in pixels. Must be a positive integer. Common range: 256 to 512. Default: 256. |
| height | Integer | Height of the generated video frame in pixels. Must be a positive integer. Common range: 256 to 512. Default: 256. |
| steps | Integer | Number of steps the model takes to refine each frame of the video. Common range: 25 to 50. Default: 50. |
| seed | Integer | Initial seed for the random number generator used in video generation. Any integer may be supplied. Default: -1. |
| fps | Integer | Number of frames per second in the generated video. Default: 8. |
| num_frames | Integer | Total number of frames in the generated video; together with fps, this determines the video's duration. Default: 16. |
| negative_prompt | String | A description of elements you specifically want to exclude from the video, used to refine the output to better match your expectations. Example: "No humans, animals". Not used by default. |
| enhance_prompt | Boolean | If true, the model uses language modeling to enhance and clarify the initial prompt, potentially producing more detailed and accurate results. Default: false. |
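Since num_frames and fps jointly determine the clip length, the duration in seconds is simply num_frames divided by fps. A quick check with the documented defaults:

```python
def video_duration_seconds(num_frames: int, fps: int) -> float:
    """Duration of the generated clip in seconds."""
    return num_frames / fps

# With the defaults (16 frames at 8 fps), the clip is 2 seconds long.
duration = video_duration_seconds(16, 8)
print(duration)  # 2.0
```

Raising num_frames lengthens the clip at the same playback speed, while raising fps makes motion smoother but shortens the clip for a fixed frame count.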

User Guide

  1. Craft a Detailed Prompt: Begin by crafting a clear and detailed prompt for the video you have in mind. Incorporate all pertinent details to steer the video generation effectively.

  2. Adjust Configuration Settings: Tailor the configuration settings to your preferences. This involves making choices regarding video frame resolution (width and height), the level of detail (steps), and the expected processing time, balancing these elements according to your needs.

  3. Experiment with Seeds: Leverage the seed parameter to explore different variations of your video. Altering the seed can yield distinct outcomes based on the same initial prompt, offering a range of possibilities.

  4. Refine Using Negative Prompts: Employ the negative_prompt field to eliminate any elements you do not wish to include in your video. This step helps in fine-tuning the results to more accurately reflect your vision.

  5. Enhance Your Prompt for Better Outcomes: Should the results not meet your expectations, consider activating the enhance_prompt feature. A more elaborate prompt may lead to improved video generation.
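The steps above can be sketched as a small config-building helper. This is a minimal illustration, not part of the API; the field names and defaults are taken from the parameter table above:

```python
def make_configs(model="damo-text-to-video", width=256, height=256,
                 steps=50, seed=-1, fps=8, num_frames=16,
                 negative_prompt=None, enhance_prompt=False):
    """Build a configs object using the documented defaults.

    negative_prompt is omitted unless provided, since it is unused by default.
    """
    configs = {
        "model": model, "width": width, "height": height,
        "steps": steps, "seed": seed, "fps": fps,
        "num_frames": num_frames, "enhance_prompt": enhance_prompt,
    }
    if negative_prompt is not None:
        configs["negative_prompt"] = negative_prompt
    return configs

# Step 3: vary the seed to explore variations of the same prompt.
variants = [make_configs(seed=s) for s in (1, 2, 3)]

# Steps 4-5: refine with a negative prompt and enable prompt enhancement.
refined = make_configs(negative_prompt="No humans, animals",
                       enhance_prompt=True)
```

Keeping the defaults in one place makes it easy to change a single knob (seed, steps, negative_prompt) per run while holding everything else fixed.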

Example Request

```json
{
  "prompt": "A serene mountain landscape at sunrise.",
  "configs": {
    "model": "damo-text-to-video",
    "negative_prompt": "No humans, worst quality",
    "enhance_prompt": true,
    "height": 512,
    "width": 512,
    "seed": 1000,
    "steps": 8,
    "frames": 16,
    "fps": 8
  }
}
```

Parrot API

```python
video_task = parrot.create_txt2vid(
    prompt, model, width, height, steps, seed,
    fps, num_frames, negative_prompt, enhance_prompt,
)
```

Response

On success, returns the ID of the created task along with the resolved configuration.

```json
{
  "data": {
    "task_id": "a4f50932587646cc95234defc4efe1d0",
    "prompt": "the man on beach",
    "negative_prompt": "worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated",
    "config": {
      "model": "modelscope-txt2vid",
      "negative_prompt": "worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated",
      "enhance_prompt": true,
      "height": 512,
      "width": 1024,
      "seed": -1,
      "steps": 25,
      "frames": 16,
      "fps": 8,
      "task_type": "TXT2VID",
      "queue_name": "txt2vid_modelscope_queue"
    }
  },
  "errors": [],
  "error_description": "",
  "start_time": "2024-03-02 20:44:15.818120",
  "end_time": "2024-03-02 20:44:17.802267",
  "host_of_client_call_request": "103.186.100.36",
  "total_time_by_second": 1.984159,
  "status": "success"
}
```
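The task_id from this response is what you pass to the Get Result endpoint to retrieve the finished video. A minimal sketch of pulling it out (the response shape is taken from the example above, abbreviated to the fields used here):

```python
# Abbreviated Create-video response, shaped like the example above.
response = {
    "data": {"task_id": "a4f50932587646cc95234defc4efe1d0"},
    "errors": [],
    "status": "success",
}

def extract_task_id(resp: dict) -> str:
    """Return the task_id, raising if the request did not succeed."""
    if resp.get("status") != "success" or resp.get("errors"):
        raise RuntimeError(f"video generation request failed: {resp.get('errors')}")
    return resp["data"]["task_id"]

task_id = extract_task_id(response)
# Pass task_id to the Get Result endpoint to fetch the generated video.
```

Checking both `status` and `errors` before reading `data` guards against partially failed requests.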