A self-sustaining P2P server system using GitHub Actions runners
Ouroboros is a unique P2P server implementation that uses GitHub Actions runners to maintain a persistent server presence. It creates a self-sustaining cycle where each server instance spawns its successor before terminating, ensuring continuous uptime within GitHub Actions' time constraints.
- A GitHub Actions runner starts up and initializes a PeerJS peer that acts as a server
- The server runs for approximately 2 hours (to avoid WebRTC timeout issues)
- Before termination, it:
  - Triggers a new runner instance
  - Transfers all data to the new server
  - Updates the server ID on GitHub Pages
- The cycle continues with the new server instance (a minimal lifecycle sketch follows this list)
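In code, the in-browser server lifecycle could look roughly like the TypeScript sketch below. It is only illustrative: names such as `SERVER_LIFETIME_MS`, `messages`, and `beginHandoff` are assumptions, not identifiers from this repository.

```typescript
import { Peer } from "peerjs";

// ~2 hours keeps the handoff comfortably inside the 6-hour job limit.
const SERVER_LIFETIME_MS = 2 * 60 * 60 * 1000;
const messages: unknown[] = []; // in-memory state handed to the successor

// With no arguments, the free PeerJS signaling server assigns the peer ID.
const peer = new Peer();

peer.on("open", (id) => {
  console.log(`Server peer is live with ID ${id}`);
  // The workflow publishes this ID to GitHub Pages so clients can find it.
  setTimeout(beginHandoff, SERVER_LIFETIME_MS);
});

peer.on("connection", (conn) => {
  // Accept client traffic and keep it in memory.
  conn.on("data", (data) => messages.push(data));
});

function beginHandoff(): void {
  // 1. Trigger the successor workflow run (see the dispatch example in the setup section).
  // 2. Wait for the new server to connect, then send it `messages`.
  // 3. Destroy the peer so the runner can exit cleanly.
}
```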
- GitHub Actions: Provides the compute infrastructure via runners
- PeerJS: Handles WebRTC P2P connections and provides a free signaling server
- GitHub Pages: Stores and serves the current active server ID
- Puppeteer: Runs the server code in a headless browser, since WebRTC is not natively supported in Node.js (a minimal launch sketch follows this list)
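Because the server logic has to run in a real browser, the runner's entry point is essentially a thin Puppeteer wrapper. A minimal sketch, assuming a bundled `server.html` page and a `window.handoffComplete` flag (both hypothetical names, not taken from this repository):

```typescript
import puppeteer from "puppeteer";

async function main(): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  // Load a local page that bundles PeerJS and the server logic.
  await page.goto(`file://${process.cwd()}/server.html`);

  // Surface in-page logs in the Actions log for debugging.
  page.on("console", (msg) => console.log(`[page] ${msg.text()}`));

  // Keep the process alive until the in-page server signals it has handed off.
  await page.waitForFunction("window.handoffComplete === true", { timeout: 0 });
  await browser.close();
}

main().catch(console.error);
```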
- `ouroboros.yml`: Main server workflow that runs the P2P server
- `static.yml`: Updates the active server IDs on GitHub Pages
- `staticRestore.yml`: Restores server IDs if needed
- Fork this repository
- Create a GitHub Personal Access Token with runners access
- Add the token as a repository secret named `RUN`
- Enable GitHub Pages for the repository
- Trigger the initial server by running the "Run Ouroboros server" workflow
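Besides starting it from the Actions tab, the same workflow can be dispatched through GitHub's REST API, which is also how a running server can spawn its successor. A sketch using the token stored in the `RUN` secret; the owner, repository name, and `main` branch below are placeholders:

```typescript
// Note: ouroboros.yml must declare a workflow_dispatch trigger for this endpoint to accept the call.
const token = process.env.RUN ?? "";

async function dispatchOuroboros(owner: string, repo: string): Promise<void> {
  const url = `https://api.github.com/repos/${owner}/${repo}/actions/workflows/ouroboros.yml/dispatches`;
  const res = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: "application/vnd.github+json",
    },
    body: JSON.stringify({ ref: "main" }), // branch that holds the workflow
  });
  if (!res.ok) throw new Error(`Dispatch failed: ${res.status}`);
}

dispatchOuroboros("<your-user>", "<your-fork>").catch(console.error);
```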
- New server starts and retrieves the previous server's ID from GitHub Pages
- After a brief delay (to allow Pages to update), it connects to the old server
- Old server validates the new server's identity
- Data is transferred to the new server
- Old server terminates and the new server becomes active (a connection sketch follows this list)
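From the new server's side, the handoff could look roughly like this. The Pages URL, the `serverId.txt` file name, and the message shape are assumptions; the repository's actual wire format may differ.

```typescript
import { Peer } from "peerjs";

async function takeOver(pagesUrl: string): Promise<void> {
  // GitHub Pages is assumed to serve the currently active server ID as plain text.
  const oldServerId = (await (await fetch(pagesUrl)).text()).trim();

  const peer = new Peer();
  peer.on("open", (myId) => {
    const conn = peer.connect(oldServerId);
    // Announce ourselves so the old server can validate the new server's identity.
    conn.on("open", () => conn.send({ type: "handoff-request", newServerId: myId }));
    conn.on("data", (state) => {
      // The old server replies with its state; import it and take over as the active server.
      console.log("Received transferred state", state);
    });
  });
}

takeOver("https://<your-user>.github.io/<your-fork>/serverId.txt").catch(console.error);
```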
- The server maintains a list of connected peers
- Messages are validated before processing (a minimal sketch of both measures follows)
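A minimal sketch of both measures, assuming a simple `{ type, body }` message shape; the actual validation rules are not spelled out here:

```typescript
import { Peer } from "peerjs";

type Message = { type: string; body: string };

const connectedPeers = new Set<string>(); // list of connected peers
const peer = new Peer();

// Structural validation: reject anything that is not a small, well-formed message.
function isValidMessage(data: unknown): data is Message {
  return (
    typeof data === "object" && data !== null &&
    typeof (data as Message).type === "string" &&
    typeof (data as Message).body === "string" &&
    (data as Message).body.length < 10_000
  );
}

peer.on("connection", (conn) => {
  connectedPeers.add(conn.peer); // track who is connected
  conn.on("close", () => connectedPeers.delete(conn.peer));
  conn.on("data", (data) => {
    if (!isValidMessage(data)) return; // drop anything that fails validation
    // ...process the message...
  });
});
```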
- GitHub Actions jobs have a 6-hour maximum runtime
- Each hosted runner has 16 GB of RAM and 4 cores
- Potential brief interruptions during server handoff
- GitHub may shut this down at any time
- Implement multiple Ouroboros nodes to avoid downtime and provide horizontal scaling
- Implement message deletion when reaching RAM limits
- Implement a more robust server handoff process
- Implement a more robust way to transfer data
- Implement a backup signaling server using a reverse proxy (ngrok)
MIT License - See LICENSE file for details