Linux Command GPT (lcg)
This repository is a fork of https://github.com/asrul10/linux-command-gpt.git.
Generate Linux commands from natural language. Supports Ollama and Proxy backends, system prompts, different explanation levels (v/vv/vvv), and JSON history.
Installation
Build from source:
git clone --depth 1 https://github.com/Direct-Dev-Ru/linux-command-gpt.git ~/.linux-command-gpt
cd ~/.linux-command-gpt
go build -o lcg
# Add to your PATH
ln -s ~/.linux-command-gpt/lcg ~/.local/bin
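Since the install step above symlinks the binary into `~/.local/bin`, it is worth confirming that directory is actually on your `PATH`; a minimal check, assuming a POSIX shell:

```shell
# Make sure ~/.local/bin (the symlink target directory) is on PATH;
# many login shells already include it, but not all do.
export PATH="$HOME/.local/bin:$PATH"

# Prints the resolved path once the symlink exists, or a hint otherwise
command -v lcg || echo "lcg not found - check the symlink and your PATH"
```

Adding the `export` line to your shell profile makes the change permanent.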
Quick start
lcg "I want to extract linux-command-gpt.tar.gz file"
After generation you will see an all-caps warning that the answer comes from an AI and must be verified, followed by the generated command and an action menu:
ACTIONS: (c)opy, (s)ave, (r)egenerate, (e)xecute, (v|vv|vvv)explain, (n)othing
Explanations:
- `v` — short
- `vv` — medium
- `vvv` — detailed, with alternatives
Clipboard support requires xclip or xsel.
Environment
- `LCG_PROVIDER` (default `ollama`) — provider type: `ollama` or `proxy`
- `LCG_HOST` (default `http://192.168.87.108:11434/`) — base API URL
- `LCG_MODEL` (default `hf.co/yandex/YandexGPT-5-Lite-8B-instruct-GGUF:Q4_K_M`)
- `LCG_PROMPT` — default system prompt content
- `LCG_PROXY_URL` (default `/api/v1/protected/sberchat/chat`) — proxy chat endpoint
- `LCG_COMPLETIONS_PATH` (default `api/chat`) — Ollama chat endpoint (relative)
- `LCG_TIMEOUT` (default `300`) — request timeout in seconds
- `LCG_RESULT_FOLDER` (default `~/.config/lcg/gpt_results`) — folder for saved results
- `LCG_RESULT_HISTORY` (default `$(LCG_RESULT_FOLDER)/lcg_history.json`) — JSON history path
- `LCG_PROMPT_FOLDER` (default `~/.config/lcg/gpt_sys_prompts`) — folder for system prompts
- `LCG_PROMPT_ID` (default `1`) — default system prompt ID
- `LCG_JWT_TOKEN` — JWT token for the proxy provider
- `LCG_NO_HISTORY` — if `1`/`true`, disables history writes for the process
- `LCG_SERVER_PORT` (default `8080`), `LCG_SERVER_HOST` (default `localhost`) — HTTP server settings
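As a sketch of how these variables fit together, a shell profile snippet might look like the following; the host, model, and timeout values are illustrative assumptions, not recommendations:

```shell
# Illustrative ~/.bashrc snippet pointing lcg at a local Ollama instance.
# All values below are example assumptions; adjust them to your setup.
export LCG_PROVIDER="ollama"
export LCG_HOST="http://localhost:11434/"
export LCG_MODEL="hf.co/yandex/YandexGPT-5-Lite-8B-instruct-GGUF:Q4_K_M"
export LCG_TIMEOUT="120"      # fail faster than the 300 s default
export LCG_NO_HISTORY="1"     # skip JSON history in this shell
```

Variables set this way apply to every `lcg` invocation in the session; flags (below) override them per run.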
Flags
- `--file, -f` — read part of the prompt from a file
- `--sys, -s` — system prompt content or ID
- `--prompt-id, --pid` — choose a built-in prompt (1–5)
- `--timeout, -t` — request timeout (seconds)
- `--no-history, --nh` — disable writing/updating JSON history for this run
- `--debug, -d` — show debug information (request parameters and prompts)
- `--version, -v` — print the version
- `--help, -h` — show help
Commands
- `models`, `health`, `config`
- `prompts list|add|delete`
- `test-prompt <prompt-id> <command>`
- `update-jwt`, `delete-jwt` (proxy)
- `update-key`, `delete-key` (not needed for ollama/proxy)
- `history list` — list history entries from JSON
- `history view <index>` — view an entry by index
- `history delete <index>` — delete an entry by index (remaining entries are renumbered)
- `serve-result` — start an HTTP server to browse saved results (`--port`, `--host`)
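Because the history is stored as a plain JSON array (see the History section), it can also be inspected with standard tools outside of `history list`. The file path and field names below are illustrative assumptions, not the tool's documented schema:

```shell
# Write a sample history file mimicking the JSON-array layout.
# Field names "request" and "command" are assumptions for illustration only.
HIST=/tmp/lcg-demo-history.json
cat > "$HIST" <<'EOF'
[
  {"request": "extract linux-command-gpt.tar.gz", "command": "tar -xzf linux-command-gpt.tar.gz"}
]
EOF

# Count stored entries by counting "command" fields
grep -c '"command"' "$HIST"
```

This prints `1`, the number of entries in the sample file.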
Saving results
Files are saved to `LCG_RESULT_FOLDER` (default `~/.config/lcg/gpt_results`).

- Command result: `gpt_request_<MODEL>_YYYY-MM-DD_HH-MM-SS.md`
  - `# <title>` — H1 with the original request (trimmed to 120 chars: first 116 + `...`)
  - `## Prompt`
  - `## Response`
- Detailed explanation: `gpt_explanation_<MODEL>_YYYY-MM-DD_HH-MM-SS.md`
  - `# <title>`
  - `## Prompt`
  - `## Command`
  - `## Explanation and Alternatives (model: <MODEL>)`
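The timestamped naming scheme can be reproduced from the shell, which is handy when scripting around the results folder; `mymodel` is a stand-in model name:

```shell
# Rebuild the documented result-file name pattern (illustrative)
MODEL="mymodel"
STAMP=$(date +%Y-%m-%d_%H-%M-%S)
FILE="gpt_request_${MODEL}_${STAMP}.md"
echo "$FILE"
```

The same pattern with a `gpt_explanation_` prefix matches the explanation files.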
History
- Stored as a JSON array in `LCG_RESULT_HISTORY`.
- On a new request, if the same command already exists, you are prompted to view or overwrite it.
- Showing an entry from history does not call the API; the standard action menu is shown.
For the full guide in Russian, see USAGE_GUIDE.md.