# Linux Command GPT (lcg)

Get Linux commands in natural language with the power of ChatGPT.

This repo is forked from <https://github.com/asrul10/linux-command-gpt.git>

Generate Linux commands from natural language. Supports Ollama and Proxy backends, system prompts, different explanation levels (v/vv/vvv), and JSON history.

## Installation

Build from source:

```bash
git clone --depth 1 https://github.com/Direct-Dev-Ru/go-lcg.git ~/.linux-command-gpt
cd ~/.linux-command-gpt
go build -o lcg

# Add to your PATH
ln -s ~/.linux-command-gpt/lcg ~/.local/bin
```

Or you can [download the lcg executable](https://github.com/asrul10/linux-command-gpt/releases).

## Quick start

```bash
> lcg I want to extract linux-command-gpt.tar.gz file
Completed in 0.92 seconds

tar -xvzf linux-command-gpt.tar.gz

Do you want to (c)opy, (r)egenerate, or take (N)o action on the command? (c/r/N):
```

```bash
> LCG_PROMPT='Provide full response' LCG_MODEL=codellama:13b lcg 'i need bash script to execute some command by ssh on some array of hosts'
Completed in 181.16 seconds
```

The model returns a full response, for example:

Here is a sample Bash script that demonstrates how to execute commands over SSH on an array of hosts:

```bash
#!/bin/bash

hosts=(host1 host2 host3)

for host in "${hosts[@]}"; do
    ssh $host "echo 'Hello, world!' > /tmp/hello.txt"
done
```

This script defines an array `hosts` that contains the names of the hosts to connect to. The loop iterates over each element in the array and uses the `ssh` command to execute a simple command on the remote host. In this case, the command is `echo 'Hello, world!' > /tmp/hello.txt`, which writes the string "Hello, world!" to a file called `/tmp/hello.txt`.

You can modify the script to run any command you like by replacing the `echo` command with your desired command. For example, if you want to run a Python script on each host, you could use the following command:

```bash
ssh $host "python /path/to/script.py"
```

This will execute the Python script located at `/path/to/script.py` on the remote host.

You can also modify the script to run multiple commands in a single SSH session by using the `&&` operator to chain the commands together. For example:

```bash
ssh $host "echo 'Hello, world!' > /tmp/hello.txt && python /path/to/script.py"
```

This will execute both the `echo` command and the Python script in a single SSH session.

I hope this helps! Let me know if you have any questions or need further assistance.

```text
Do you want to (c)opy, (r)egenerate, or take (N)o action on the command? (c/r/N):
```

To use the "copy to clipboard" feature, you need to install either the `xclip` or `xsel` package.
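
For example, on Debian/Ubuntu-based systems one of them can be installed like this (the package manager command depends on your distribution and is not part of lcg itself):

```bash
# Install xclip so the "(c)opy" action can place the generated command on the clipboard.
sudo apt-get install xclip   # or: sudo apt-get install xsel
```
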
### Options

```bash
> lcg [options]

--help        -h   output usage information
--version     -v   output the version number
--file        -f   read command from file
--update-key  -u   update the API key
--delete-key  -d   delete the API key

# Ollama example
export LCG_PROVIDER=ollama
export LCG_HOST=http://192.168.87.108:11434/
export LCG_MODEL=codegeex4

lcg "I want to extract linux-command-gpt.tar.gz file"

# Proxy example
export LCG_PROVIDER=proxy
export LCG_HOST=http://localhost:8080
export LCG_MODEL=GigaChat-2
export LCG_JWT_TOKEN=your_jwt_token_here

lcg "I want to extract linux-command-gpt.tar.gz file"

lcg health

lcg config

lcg update-jwt
```

After generation you will see a CAPS warning that the answer comes from an AI and must be verified, followed by the generated command and the action menu:

```text
ACTIONS: (c)opy, (s)ave, (r)egenerate, (e)xecute, (v|vv|vvv)explain, (n)othing
```

Explanation levels:

- `v` — short; `vv` — medium; `vvv` — detailed with alternatives.

Clipboard support requires `xclip` or `xsel`.

## Environment

- `LCG_PROVIDER` (ollama|proxy), `LCG_HOST`, `LCG_MODEL`, `LCG_PROMPT`
- `LCG_TIMEOUT` (default 120), `LCG_RESULT_FOLDER` (default ./gpt_results)
- `LCG_RESULT_HISTORY` (default ${LCG_RESULT_FOLDER}/lcg_history.json)
- `LCG_JWT_TOKEN` (for proxy)
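
For example, these variables can be persisted in a shell profile. A minimal sketch assuming a local Ollama server; the specific values are illustrative, only the variable names come from the list above:

```bash
# ~/.bashrc (illustrative values)
export LCG_PROVIDER=ollama
export LCG_HOST=http://localhost:11434/
export LCG_MODEL=codegeex4
export LCG_TIMEOUT=180                                   # seconds (default 120)
export LCG_RESULT_FOLDER="$HOME/gpt_results"
export LCG_RESULT_HISTORY="$LCG_RESULT_FOLDER/lcg_history.json"
```
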
## Flags

- `--file, -f` read part of prompt from file
- `--sys, -s` system prompt content or ID
- `--prompt-id, --pid` choose built-in prompt (1–5)
- `--timeout, -t` request timeout (sec)
- `--version, -v` print version; `--help, -h` help
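
A hypothetical invocation combining several of these flags (the file name, prompt ID and request text are made up for illustration):

```bash
# Read part of the prompt from task.txt, use system prompt 2,
# and allow up to 60 seconds for the request.
lcg --file task.txt --sys 2 --timeout 60 "run it on every host listed in the file"
```
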
## Commands

- `models`, `health`, `config`
- `prompts list|add|delete`
- `test-prompt <prompt-id> <command>`
- `update-jwt`, `delete-jwt` (proxy)
- `update-key`, `delete-key` (not needed for ollama/proxy)
- `history list` — list history from JSON
- `history view <index>` — view an entry by index
- `history delete <index>` — delete an entry by index (remaining entries are re-numbered)
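
For example (the prompt ID and request text are illustrative; the command names are the ones listed above):

```bash
lcg health                               # backend health check
lcg config                               # current configuration
lcg prompts list                         # available system prompts
lcg test-prompt 2 "extract an archive"   # test prompt <prompt-id> against a request
```
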
## Saving results

Files are saved to `LCG_RESULT_FOLDER`.

- Command result: `gpt_request_<MODEL>_YYYY-MM-DD_HH-MM-SS.md`
  - `# <title>` — H1 with original request (trimmed to 120 chars: first 116 + `...`)
  - `## Prompt`
  - `## Response`
- Detailed explanation: `gpt_explanation_<MODEL>_YYYY-MM-DD_HH-MM-SS.md`
  - `# <title>`
  - `## Prompt`
  - `## Command`
  - `## Explanation and Alternatives (model: <MODEL>)`
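
As a sketch, a saved command result might look like this (the file name, timestamp and request are illustrative; the heading layout is the one listed above):

```text
gpt_request_codegeex4_2025-01-15_10-30-00.md

# I want to extract linux-command-gpt.tar.gz file

## Prompt
...

## Response
tar -xvzf linux-command-gpt.tar.gz
```
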
## History

- Stored as a JSON array in `LCG_RESULT_HISTORY`.
- On a new request, if the same command already exists in history, you are prompted to view the stored entry or overwrite it.
- Showing a result from history does not call the API; the standard action menu is still shown.
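
For instance (the index is illustrative):

```bash
lcg history list        # list saved requests with their indexes
lcg history view 3      # show the stored entry for index 3
lcg history delete 3    # delete index 3; remaining entries are re-numbered
```
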
For the full guide in Russian, see `USAGE_GUIDE.md`.