GateLLM

Secure your AI workflow with complete control

Safeguard your privacy and security, and stay in control, when building with LLMs. Designed for students, indie hackers, and early startups, GateLLM helps block prompt-based attacks and keeps you independent of any single provider.

Join the Community

Be part of the conversation. Help define the future of secure AI development.

What's This About?

LLMs are powerful, but they can leak sensitive data, be tricked by malicious prompts, or be misused in code.

GateLLM is a secure proxy layer for LLMs. It's like a firewall, but for prompts and AI output.
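
To make that concrete, here's a minimal sketch of the firewall flow in Python. The function names and rules are illustrative placeholders, not the actual GateLLM API:

def check_prompt(prompt):
  # Inbound rule: block an obvious injection attempt.
  return 'ignore previous instructions' not in prompt.lower()

def check_output(output):
  # Outbound rule: don't let a response echo secrets back to the user.
  return 'api_key' not in output.lower()

def call_model(prompt):
  # Placeholder for a real provider call (OpenAI, Anthropic, or a local model).
  return f'(model response to: {prompt!r})'

def gate(prompt):
  if not check_prompt(prompt):
    raise ValueError('prompt blocked by inbound rules')
  output = call_model(prompt)
  if not check_output(output):
    raise ValueError('response blocked by outbound rules')
  return output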

terminal ~ zsh

$ npm install gatellm
Installing GateLLM secure proxy...
Starting GateLLM proxy server on port 3000...
✓ Connected to OpenAI API
✓ Connected to Anthropic API
✓ Local model loaded: mistral-7b
◯ Securing requests with PII filtering
◯ Prompt injection protection active
GateLLM ready! All AI traffic is now secure.

We're building:

  • Prompt validation & injection detection
  • Logging & auditing for LLM requests
  • Guardrails for AI-assisted code review
  • A developer-friendly gateway with real-world use cases

Why GateLLM?

Data Control

Exploring ways to ensure only necessary data is shared with AI models.

Secure Access

Working on a single interface for managing multiple LLM integrations securely.

Designed for Privacy

Starting from scratch to build something robust for those who need more control.

Built by Developers, for Developers

This isn't a product; it's a journey.

If you're a:

  • CS student learning AI & security,
  • Junior dev exploring prompt engineering, or
  • Senior engineer building LLM features into your app,

come help shape GateLLM. We're sharing everything as we learn.

Proof of Concept

Starting Small, Dreaming Big

As a developer, I'm building this project step by step, starting with a simple command-line interface that can:

  • Detect PII in prompts before they reach AI models
  • Route requests to different LLM providers
  • Log and analyze prompts for security concerns
gatellm-cli.py

import argparse
import re

# Simple PII detector
def detect_pii(text):
  # Match emails for now; phone numbers and other patterns are coming soon.
  patterns = [r'[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}']
  return any(re.search(pattern, text) for pattern in patterns)

def main():
  # Minimal CLI while the tool is under development; more commands coming soon.
  parser = argparse.ArgumentParser(description='Scan a prompt for PII before it reaches an AI model.')
  parser.add_argument('prompt', help='prompt text to scan')
  args = parser.parse_args()
  print('PII detected' if detect_pii(args.prompt) else 'clean')

if __name__ == '__main__':
  main()

Next Steps:

  • Phase 1: Simple CLI tool for prompt scanning (NLP basics)
  • Phase 2: Basic REST API
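
To give a flavor of Phase 2, here's a sketch of what a scanning endpoint could look like with Flask. The route name and payload shape are assumptions, not a published GateLLM API:

import re

from flask import Flask, jsonify, request

app = Flask(__name__)

def detect_pii(text):
  # Same email pattern as the CLI sketch above.
  return bool(re.search(r'[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}', text))

@app.post('/v1/scan')
def scan():
  prompt = request.get_json(force=True).get('prompt', '')
  verdict = 'blocked' if detect_pii(prompt) else 'clean'
  return jsonify({'verdict': verdict})

if __name__ == '__main__':
  app.run(port=3000)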

Technical Exploration

Prompt Injection Protection

Rule-based detection to block harmful prompts
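
As a sketch, rule-based detection can start from a handful of regex rules for common injection phrasings (the patterns below are illustrative, not exhaustive):

import re

INJECTION_RULES = [
  re.compile(r'ignore (all |any )?(previous|prior) instructions', re.I),
  re.compile(r'reveal (your )?(system prompt|hidden instructions)', re.I),
  re.compile(r'pretend (you are|to be) .* without (any )?restrictions', re.I),
]

def is_injection(prompt):
  # Flag the prompt if any rule matches.
  return any(rule.search(prompt) for rule in INJECTION_RULES)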

PII Detection & Filtering

Protect sensitive data from reaching AI models
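
One possible approach is redaction rather than rejection: mask the sensitive spans so the rest of the prompt can still reach the model. A sketch with illustrative patterns:

import re

REDACTIONS = [
  (re.compile(r'[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}'), '[EMAIL]'),
  (re.compile(r'\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b'), '[PHONE]'),
]

def redact_pii(text):
  # Replace each match with a placeholder instead of blocking the whole prompt.
  for pattern, placeholder in REDACTIONS:
    text = pattern.sub(placeholder, text)
  return text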

Authentication System

API keys for secured, controlled access
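
A minimal sketch of the key check (the GATELLM_API_KEY environment variable is an assumption for this example):

import hmac
import os

def is_authorized(presented_key):
  # Constant-time comparison avoids leaking the key through timing differences.
  expected = os.environ.get('GATELLM_API_KEY', '')
  return bool(expected) and hmac.compare_digest(presented_key, expected)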

Logging & Auditing

Track all requests for security analysis
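
For example, the audit trail could be an append-only JSON-lines file that stores a hash of each prompt rather than the prompt itself, so the log can't leak PII (the file name and fields are assumptions):

import hashlib
import json
import time

def audit_log(prompt, verdict, path='gatellm-audit.jsonl'):
  # One JSON object per line; hash the prompt instead of storing it verbatim.
  entry = {
    'ts': time.time(),
    'prompt_sha256': hashlib.sha256(prompt.encode()).hexdigest(),
    'verdict': verdict,
  }
  with open(path, 'a') as f:
    f.write(json.dumps(entry) + '\n')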

Early Explorers Wanted

We're looking for early adopters to test the CLI, try the prompt firewall, and co-create best practices for LLM development.

  • Add your name to get early access
  • Share feedback to make this better for everyone