
Disclaimer

This project is intended for research and educational purposes only.
Please refrain from any commercial use and act responsibly when deploying or modifying this tool.


WebAI-to-API


WebAI-to-API is a modular web server built with FastAPI that allows you to expose your preferred browser-based LLM (such as Gemini) as a local API endpoint.


This project supports two operational modes:

  1. Primary Web Server (WebAI-to-API)

    Connects to the Gemini web interface using your browser cookies and exposes it as an API endpoint. This method is lightweight, fast, and efficient for personal use.

  2. Fallback Web Server (gpt4free)

    A secondary server powered by the gpt4free library, offering broader access to multiple LLMs beyond Gemini, including:

    • ChatGPT
    • Claude
    • DeepSeek
    • Copilot
    • HuggingFace Inference
    • Grok
    • ...and many more.

This design provides both speed and redundancy: the primary server stays lightweight and fast for everyday use, while the fallback broadens model coverage when you need it.


Features

  • 🌐 Available Endpoints:

    • WebAI Server:

      • /v1/chat/completions
      • /gemini
      • /gemini-chat
      • /translate
    • gpt4free Server:

      • /v1
      • /v1/chat/completions
  • 🔄 Server Switching: Easily switch between servers from the terminal.

  • 🛠️ Modular Architecture: Organized into clearly defined modules for API routes, services, configurations, and utilities, making development and maintenance straightforward.


Installation

  1. Clone the repository:

    git clone https://github.com/Amm1rr/WebAI-to-API.git
    cd WebAI-to-API

  2. Install dependencies using Poetry:

    poetry install

  3. Create and update the configuration file:

    cp config.conf.example config.conf

    Then, edit config.conf to adjust service settings and other options.

  4. Run the server:

    poetry run python src/run.py

Usage

Send a POST request to /v1/chat/completions (or any other available endpoint) with the required payload.

Example Request

{
  "model": "gemini-2.0-flash",
  "messages": [{ "role": "user", "content": "Hello!" }]
}

Example Response

{
  "id": "chatcmpl-12345",
  "object": "chat.completion",
  "created": 1693417200,
  "model": "gemini-2.0-flash",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Hi there!"
      },
      "finish_reason": "stop",
      "index": 0
    }
  ],
  "usage": {
    "prompt_tokens": 0,
    "completion_tokens": 0,
    "total_tokens": 0
  }
}
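
For a quick end-to-end test, the request above can be sent with a short Python script. The following is a minimal sketch; the base URL (http://localhost:8000) is an assumption, so adjust the host and port to match your run configuration.

import requests

# Base URL is an assumption; point it at wherever run.py binds the server.
BASE_URL = "http://localhost:8000"

payload = {
    "model": "gemini-2.0-flash",
    "messages": [{"role": "user", "content": "Hello!"}],
}

response = requests.post(f"{BASE_URL}/v1/chat/completions", json=payload)
response.raise_for_status()

# Extract the assistant's reply from the OpenAI-style response body.
print(response.json()["choices"][0]["message"]["content"])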

Documentation

WebAI-to-API Endpoints

GET /gemini

Initiates a new conversation with the LLM. Each request creates a fresh session, making it suitable for stateless interactions.

POST /gemini-chat

Continues a persistent conversation with the LLM without starting a new session. Ideal for use cases that require context retention between messages.

POST /translate

Designed for quick integration with the Translate It! browser extension.
Functionally identical to /gemini-chat, meaning it maintains session context across requests.
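
To illustrate the session behavior, the sketch below sends two consecutive messages to /gemini-chat and relies on the server to carry context between them. The request body shape ({"message": ...}) is an assumption for illustration only; the exact schema is defined in src/schemas/request.py.

import requests

BASE_URL = "http://localhost:8000"  # assumed host/port; adjust as needed

# Both requests hit the same persistent session, so the second
# message can refer back to information from the first.
first = requests.post(f"{BASE_URL}/gemini-chat", json={"message": "My name is Ada."})
second = requests.post(f"{BASE_URL}/gemini-chat", json={"message": "What is my name?"})

print(first.json())
print(second.json())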

POST /v1/chat/completions

A minimalistic implementation of the OpenAI-compatible endpoint.
Built for simplicity and ease of integration with clients that expect the OpenAI API format.


gpt4free Endpoints

These endpoints follow the OpenAI-compatible structure and are powered by the gpt4free library.
For detailed usage and advanced customization, refer to the official gpt4free documentation.

Available Endpoints (gpt4free API Layer)

GET  /                              # Health check
GET  /v1                            # Version info
GET  /v1/models                     # List all available models
GET  /api/{provider}/models         # List models from a specific provider
GET  /v1/models/{model_name}        # Get details of a specific model

POST /v1/chat/completions           # Chat with default configuration
POST /api/{provider}/chat/completions
POST /api/{provider}/{conversation_id}/chat/completions

POST /v1/responses                  # General response endpoint
POST /api/{provider}/responses

POST /api/{provider}/images/generations
POST /v1/images/generations
POST /v1/images/generate            # Generate images using selected provider

POST /v1/media/generate             # Media generation (audio/video/etc.)

GET  /v1/providers                  # List all providers
GET  /v1/providers/{provider}       # Get specific provider info

POST /api/{path_provider}/audio/transcriptions
POST /v1/audio/transcriptions       # Audio-to-text

POST /api/markitdown                # Markdown rendering

POST /api/{path_provider}/audio/speech
POST /v1/audio/speech               # Text-to-speech

POST /v1/upload_cookies             # Upload session cookies (browser-based auth)

GET  /v1/files/{bucket_id}          # Get uploaded file from bucket
POST /v1/files/{bucket_id}          # Upload file to bucket

GET  /v1/synthesize/{provider}      # Audio synthesis

POST /json/{filename}               # Submit structured JSON data

GET  /media/{filename}              # Retrieve media
GET  /images/{filename}             # Retrieve images
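
Because this layer is OpenAI-compatible, the official openai Python client can talk to it by overriding base_url. A minimal sketch, assuming the gpt4free server listens on port 1337 (verify the port your setup actually uses) and that the chosen model is available via GET /v1/models:

from openai import OpenAI

# base_url and api_key are assumptions; gpt4free typically ignores the key.
client = OpenAI(base_url="http://localhost:1337/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; list real ones via /v1/models
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)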

Roadmap

  • ✅ Maintenance

Configuration ⚙️

Key Configuration Options

Section      Option       Description                                 Example Value
[AI]         default_ai   Default service for /v1/chat/completions   gemini
[Browser]    name         Browser for cookie-based authentication    firefox
[EnabledAI]  gemini       Enable/disable Gemini service               true

The complete configuration template is available in WebAI-to-API/config.conf.example.
If the cookie values are left empty, the application automatically retrieves them from the default browser specified in [Browser].
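
The automatic retrieval reads the cookie store of the configured browser, in the spirit of app/utils/browser.py. The following standalone sketch uses the browser_cookie3 library to fetch Gemini's session cookies from Firefox; the exact helper the project calls may differ.

import browser_cookie3

# Read Firefox's cookie store for google.com; matching helpers exist
# for other browsers (browser_cookie3.chrome, .edge, .brave, ...).
jar = browser_cookie3.firefox(domain_name="google.com")
cookies = {c.name: c.value for c in jar}

# Gemini's web session is identified by these two cookies.
print("Found __Secure-1PSID:", "__Secure-1PSID" in cookies)
print("Found __Secure-1PSIDTS:", "__Secure-1PSIDTS" in cookies)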


Sample config.conf

[AI]
# Default AI service.
default_ai = gemini

# Default model for Gemini.
default_model_gemini = gemini-2.0-flash

# Gemini cookies (leave empty to use browser_cookie3 for automatic authentication).
gemini_cookie_1psid =
gemini_cookie_1psidts =

[EnabledAI]
# Enable or disable AI services.
gemini = true

[Browser]
# Default browser options: firefox, brave, chrome, edge, safari.
name = firefox
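
Since config.conf uses standard INI syntax, it can be read with Python's built-in configparser. A minimal sketch of loading the options above (the project's actual loader lives in app/config.py and may differ):

import configparser

config = configparser.ConfigParser()
config.read("config.conf")

# Fall back to sensible defaults when a key is missing.
default_ai = config.get("AI", "default_ai", fallback="gemini")
browser = config.get("Browser", "name", fallback="firefox")
gemini_enabled = config.getboolean("EnabledAI", "gemini", fallback=True)

print(default_ai, browser, gemini_enabled)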

Project Structure

The project follows a modular layout that separates configuration, business logic, API endpoints, and utilities:

src/
β”œβ”€β”€ app/
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ main.py                # FastAPI app creation, configuration, and lifespan management.
β”‚   β”œβ”€β”€ config.py              # Global configuration loader/updater.
β”‚   β”œβ”€β”€ logger.py              # Centralized logging configuration.
β”‚   β”œβ”€β”€ endpoints/             # API endpoint routers.
β”‚   β”‚   β”œβ”€β”€ __init__.py
β”‚   β”‚   β”œβ”€β”€ gemini.py          # Endpoints for Gemini (e.g., /gemini, /gemini-chat).
β”‚   β”‚   └── chat.py            # Endpoints for translation and OpenAI-compatible requests.
β”‚   β”œβ”€β”€ services/              # Business logic and service wrappers.
β”‚   β”‚   β”œβ”€β”€ __init__.py
β”‚   β”‚   β”œβ”€β”€ gemini_client.py   # Gemini client initialization, content generation, and cleanup.
β”‚   β”‚   └── session_manager.py # Session management for chat and translation.
β”‚   └── utils/                 # Helper functions.
β”‚       β”œβ”€β”€ __init__.py
β”‚       └── browser.py         # Browser-based cookie retrieval.
β”œβ”€β”€ models/                    # Models and wrappers (e.g., MyGeminiClient).
β”‚   └── gemini.py
β”œβ”€β”€ schemas/                   # Pydantic schemas for request/response validation.
β”‚   └── request.py
β”œβ”€β”€ config.conf                # Application configuration file.
└── run.py                     # Entry point to run the server.

Developer Documentation

Overview

The project is built on a modular architecture designed for scalability and ease of maintenance. Its primary components are:

  • app/main.py: Initializes the FastAPI application, configures middleware, and manages application lifespan (startup and shutdown routines).
  • app/config.py: Handles the loading and updating of configuration settings from config.conf.
  • app/logger.py: Sets up a centralized logging system.
  • app/endpoints/: Contains separate modules for handling API endpoints. Each module (e.g., gemini.py and chat.py) manages the routes specific to its functionality.
  • app/services/: Encapsulates business logic, including the Gemini client wrapper (gemini_client.py) and session management (session_manager.py).
  • app/utils/browser.py: Provides helper functions, such as retrieving cookies from the browser for authentication.
  • models/: Holds model definitions like MyGeminiClient for interfacing with the Gemini Web API.
  • schemas/: Defines Pydantic models for validating API requests.

How It Works

  1. Application Initialization:
    On startup, the application loads configurations and initializes the Gemini client and session managers. This is managed via the lifespan context in app/main.py (see the sketch after this list).

  2. Routing:
    The API endpoints are organized into dedicated routers under app/endpoints/, which are then included in the main FastAPI application.

  3. Service Layer:
    The app/services/ directory contains the logic for interacting with the Gemini API and managing user sessions, ensuring that the API routes remain clean and focused on request handling.

  4. Utilities and Configurations:
    Helper functions and configuration logic are kept separate to maintain clarity and ease of updates.
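
For reference, the lifespan pattern from step 1 looks roughly like the sketch below. This is schematic, not the project's actual code: the GeminiClient class here is a stand-in for the real services in app/services/.

from contextlib import asynccontextmanager

from fastapi import FastAPI

# Stand-in for the real service classes in app/services/;
# the name and methods are illustrative only.
class GeminiClient:
    async def init(self):
        print("Gemini client initialized")

    async def close(self):
        print("Gemini client closed")

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup: create shared services once and attach them to app.state.
    client = GeminiClient()
    await client.init()
    app.state.gemini = client
    yield
    # Shutdown: release resources cleanly.
    await client.close()

app = FastAPI(lifespan=lifespan)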


🐳 Docker Deployment Guide

For Docker setup and deployment instructions, please refer to the Docker.md documentation.



License 📜

This project is open source under the MIT License.


Note: This is a research project. Please use it responsibly, and be aware that additional security measures and error handling are necessary for production deployments.