# Linux Command GPT (lcg)

> Forked from https://github.com/asrul10/linux-command-gpt.git

Generate Linux commands from natural language. Supports Ollama and proxy backends, system prompts, tiered explanation levels (`v`/`vv`/`vvv`), and JSON history.
## Installation

Build from source:

```bash
git clone --depth 1 https://github.com/Direct-Dev-Ru/linux-command-gpt.git ~/.linux-command-gpt
cd ~/.linux-command-gpt
go build -o lcg

# Add the binary to your PATH
ln -s ~/.linux-command-gpt/lcg ~/.local/bin
```
## Quick start

```bash
lcg "I want to extract linux-command-gpt.tar.gz file"
```

After generation you will see a CAPS warning that the answer comes from an AI and must be verified, followed by the generated command and the action menu:

```
ACTIONS: (c)opy, (s)ave, (r)egenerate, (e)xecute, (v|vv|vvv)explain, (n)othing
```
Explanations:

- `v` — short
- `vv` — medium
- `vvv` — detailed, with alternatives

Clipboard support requires `xclip` or `xsel`.
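A quick way to check whether a clipboard helper is present before relying on the `(c)opy` action (a small sketch; `lcg` performs its own detection internally):

```shell
# Detect which clipboard helper is available for the (c)opy action
if command -v xclip >/dev/null 2>&1; then
  clip="xclip"
elif command -v xsel >/dev/null 2>&1; then
  clip="xsel"
else
  clip=""
fi
echo "${clip:-no clipboard helper found; install xclip or xsel}"
```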
## What's new in 2.0.14
- Authentication: JWT-based authentication with HTTP-only cookies
- CSRF protection: Full CSRF protection with tokens and middleware
- Security: Enhanced security with token validation and sessions
- Kubernetes deployment: Full set of manifests for Kubernetes deployment with Traefik
- Reverse Proxy: Support for working behind reverse proxy with cookie configuration
- Web interface: Improved web interface with modern design
- Monitoring: Prometheus metrics and ServiceMonitor
- Scaling: HPA for automatic scaling
- Testing: CSRF protection testing tools
## Environment

- `LCG_PROVIDER` (default `ollama`) — provider type: `ollama` or `proxy`
- `LCG_HOST` (default `http://192.168.87.108:11434/`) — base API URL
- `LCG_MODEL` (default `hf.co/yandex/YandexGPT-5-Lite-8B-instruct-GGUF:Q4_K_M`)
- `LCG_PROMPT` — default system prompt content
- `LCG_PROXY_URL` (default `/api/v1/protected/sberchat/chat`) — proxy chat endpoint
- `LCG_COMPLETIONS_PATH` (default `api/chat`) — Ollama chat endpoint (relative)
- `LCG_TIMEOUT` (default `300`) — request timeout in seconds
- `LCG_RESULT_FOLDER` (default `~/.config/lcg/gpt_results`) — folder for saved results
- `LCG_RESULT_HISTORY` (default `$(LCG_RESULT_FOLDER)/lcg_history.json`) — JSON history path
- `LCG_PROMPT_FOLDER` (default `~/.config/lcg/gpt_sys_prompts`) — folder for system prompts
- `LCG_PROMPT_ID` (default `1`) — default system prompt ID
- `LCG_BROWSER_PATH` — custom browser executable path for the `--browser` flag
- `LCG_JWT_TOKEN` — JWT token for the proxy provider
- `LCG_NO_HISTORY` — if `1`/`true`, disables history writes for the process
- `LCG_ALLOW_EXECUTION` — if `1`/`true`, enables command execution via the `(e)` action menu item
- `LCG_SERVER_PORT` (default `8080`), `LCG_SERVER_HOST` (default `localhost`) — HTTP server settings
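As an illustration, a local-Ollama setup might override a few of these variables before running `lcg`. The host, model, and timeout values below are examples, not the documented defaults:

```shell
# Example: point lcg at a local Ollama instance.
# These values are illustrative, not the tool's defaults.
export LCG_PROVIDER=ollama
export LCG_HOST="http://localhost:11434/"
export LCG_MODEL="llama3"      # any model pulled into your Ollama instance
export LCG_TIMEOUT=120         # seconds
export LCG_NO_HISTORY=1        # skip JSON history for this shell session
```

With these exported, a subsequent `lcg "your request"` call in the same shell picks up the local instance.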
## Flags

- `--file, -f` — read part of the prompt from a file
- `--sys, -s` — system prompt content or ID
- `--prompt-id, --pid` — choose a built-in prompt (1–5)
- `--timeout, -t` — request timeout (seconds)
- `--no-history, --nh` — disable writing/updating JSON history for this run
- `--debug, -d` — show debug information (request parameters and prompts)
- `--version, -v` — print version; `--help, -h` — help
## Commands

- `models`, `health`, `config`
- `prompts list|add|delete`
- `test-prompt <prompt-id> <command>`
- `update-jwt`, `delete-jwt` (proxy)
- `update-key`, `delete-key` (not needed for ollama/proxy)
- `history list` — list history from JSON
- `history view <index>` — view an entry by index
- `history delete <index>` — delete an entry by index (remaining entries are re-numbered)
- `serve` — start an HTTP server to browse saved results (`--port`, `--host`, `--browser`)
  - `/run` — web interface for executing requests
  - `/execute` — API endpoint for programmatic access via curl
## Saving results

Files are saved to `LCG_RESULT_FOLDER` (default `~/.config/lcg/gpt_results`).

- Command result: `gpt_request_<MODEL>_YYYY-MM-DD_HH-MM-SS.md`
  - `# <title>` — H1 with the original request (trimmed to 120 chars: first 116 + `...`)
  - `## Prompt`
  - `## Response`
- Detailed explanation: `gpt_explanation_<MODEL>_YYYY-MM-DD_HH-MM-SS.md`
  - `# <title>`
  - `## Prompt`
  - `## Command`
  - `## Explanation and Alternatives (model: <MODEL>)`
## History

- Stored as a JSON array in `LCG_RESULT_HISTORY`.
- On a new request, if the same command already exists in history, you will be prompted to view or overwrite it.
- Showing a result from history does not call the API; the standard action menu is still shown.
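Because the history is a plain JSON array, it can also be inspected with standard tools outside of `lcg`. A sketch, assuming entries carry `command` fields — the sample file below is illustrative, not real output, and the actual schema may differ:

```shell
# Illustrative history file; the real one lives at $LCG_RESULT_HISTORY
history_file=$(mktemp)
cat > "$history_file" <<'EOF'
[
  {"prompt": "create directory test", "command": "mkdir test"},
  {"prompt": "extract a.tar.gz", "command": "tar -xzf a.tar.gz"}
]
EOF

# List the stored commands (python3 is available on most Linux systems)
commands=$(python3 -c 'import json, sys
for entry in json.load(open(sys.argv[1])):
    print(entry["command"])' "$history_file")
echo "$commands"
rm -f "$history_file"
```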
## Browser Integration

The `serve` command supports automatic browser opening:

```bash
# Start server and open browser automatically
lcg serve --browser

# Use a custom browser
export LCG_BROWSER_PATH="/usr/bin/firefox"
lcg serve --browser

# Start on a custom host/port with browser
lcg serve --host 0.0.0.0 --port 9000 --browser
```
Supported browsers (in priority order):

- Yandex Browser (`yandex-browser`, `yandex-browser-stable`)
- Mozilla Firefox (`firefox`, `firefox-esr`)
- Google Chrome (`google-chrome`, `google-chrome-stable`)
- Chromium (`chromium`, `chromium-browser`)
## API Access

The `serve` command provides both a web interface and a REST API.

Web interface:

- Browse results at `http://localhost:8080/` (or `http://localhost:8080<BASE_PATH>/` if `LCG_BASE_URL` is set)
- Execute requests at `.../run`
- Manage prompts at `.../prompts`
- View history at `.../history`

Notes:

- Base path: set `LCG_BASE_URL` (e.g. `/lcg`) to prefix all routes and the API.
- Custom 404: unknown paths under the base path render a modern 404 page.
- Debug: enable via the `--debug` flag or the env var `LCG_DEBUG=1|true`.
REST API:

```bash
# Start the server
lcg serve

# Make an API request
curl -X POST http://localhost:8080/execute \
  -H "Content-Type: application/json" \
  -d '{"prompt": "create directory test", "verbose": "vv"}'
```
Response:

```json
{
  "success": true,
  "command": "mkdir test",
  "explanation": "The mkdir command creates a new directory...",
  "model": "hf.co/yandex/YandexGPT-5-Lite-8B-instruct-GGUF:Q4_K_M",
  "elapsed": 1.23
}
```
For complete API documentation, see API_GUIDE.md.
For the full guide in Russian, see USAGE_GUIDE.md.