Compare commits

3 Commits

| Author | SHA1 | Date |
|--------|------|------|
| | f07effc51e | |
| | 80629105ed | |
| | 25c75bc89a | |

README.md (259 changed lines)
@@ -1,6 +1,6 @@
 # nanocode
 
-Minimal Claude Code alternative using DSPy ReAct! Single Python file, zero dependencies, ~250 lines.
+Minimal Claude Code alternative using DSPy RLM! Single Python file, ~390 lines.
 
 Built using Claude Code, then used to build itself.
@@ -8,57 +8,272 @@ Built using Claude Code, then used to build itself.
 
 ## Features
 
-- Full agentic loop with tool use
+- Full agentic loop with tool use via [DSPy RLM](https://dspy.ai/)
 - Tools: `read`, `write`, `edit`, `glob`, `grep`, `bash`
-- Conversation history
+- Conversation history with context
 - Colored terminal output
+- **Modaic Integration**: Push, version, and share as a [Modaic](https://modaic.dev) autoprogram
 
-### OpenRouter
-
-Use [OpenRouter](https://openrouter.ai) to access any model:
+---
+
+## Prerequisites
+
+Before using nanocode (or any DSPy RLM-based program), you need to install the Deno code interpreter:
+
+```bash
+brew install deno
+```
+
+This is required for the RLM's code execution capabilities.
+
+---
+
+## Quick Start
+
+### Option 1: Use as a Modaic AutoProgram
+
+Load and run nanocode directly from the Modaic Hub without cloning:
+
+```python
+from modaic import AutoProgram
+
+# Load the precompiled nanocode agent from Modaic Hub
+agent = AutoProgram.from_precompiled(
+    "farouk1/nanocode",
+    config={
+        "lm": "openrouter/anthropic/claude-3.5-sonnet",
+        "max_iters": 20
+    }
+)
+
+# Run a coding task
+result = agent(task="What Python files are in this directory?")
+print(result.answer)
+print(result.affected_files)
+```
+
+### Option 2: Run Locally (Interactive CLI)
 
 ```bash
 export OPENROUTER_API_KEY="your-key"
 python nanocode.py
 ```
 
-To use a different model:
+To use a specific model:
 
 ```bash
 export OPENROUTER_API_KEY="your-key"
-export MODEL="openai/gpt-5.2"
+export MODEL="openai/gpt-4"
 python nanocode.py
 ```
 
-## Commands
-
-- `/c` - Clear conversation
-- `/q` or `exit` - Quit
+---
+
+## Configuration
+
+When using as a Modaic AutoProgram, you can configure these options:
+| Parameter | Type | Default | Description |
+|-----------|------|---------|-------------|
+| `lm` | str | `openrouter/anthropic/claude-3.5-sonnet` | Primary language model |
+| `sub_lm` | str | `openrouter/openai/gpt-4.1` | Sub-LM for reasoning steps |
+| `max_iters` | int | `20` | Maximum agent iterations |
+| `api_base` | str | `https://openrouter.ai/api/v1` | API base URL |
+| `max_tokens` | int | `16000` | Maximum tokens per request |
+| `max_output_chars` | int | `100000` | Maximum output character limit |
+| `verbose` | bool | `False` | Enable verbose logging |
+
+Example with custom configuration:
+
+```python
+from modaic import AutoProgram
+
+agent = AutoProgram.from_precompiled(
+    "farouk1/nanocode",
+    config={
+        "lm": "openrouter/openai/gpt-4",
+        "sub_lm": "openrouter/openai/gpt-3.5-turbo",
+        "max_iters": 30,
+        "max_tokens": 8000,
+        "verbose": True
+    }
+)
+```
+
+---
+## CLI Commands
+
+| Command | Description |
+|---------|-------------|
+| `/c` | Clear conversation history |
+| `/q` or `exit` | Quit the application |
+
+---
 ## Tools
 
-| Tool | Description |
-|------|-------------|
-| `read` | Read file with line numbers, offset/limit |
-| `write` | Write content to file |
-| `edit` | Replace string in file (must be unique) |
-| `glob` | Find files by pattern, sorted by mtime |
-| `grep` | Search files for regex |
-| `bash` | Run shell command |
+The agent has access to the following tools:
+
+| Tool | Function | Description |
+|------|----------|-------------|
+| `readfile` | `read_file(path, offset, limit)` | Read file contents with line numbers |
+| `writefile` | `write_file(path, content)` | Write content to a file |
+| `editfile` | `edit_file(path, old, new, replace_all)` | Replace text in a file (old must be unique unless `replace_all=True`) |
+| `globfiles` | `glob_files(pattern, path)` | Find files matching a glob pattern, sorted by modification time |
+| `grepfiles` | `grep_files(pattern, path)` | Search files for a regex pattern |
+| `runbash` | `run_bash(cmd)` | Run a shell command and return output |
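The `editfile` uniqueness rule in the table above can be sketched as a standalone helper. This is a hypothetical re-implementation for illustration, not the `edit_file` shipped in `nanocode.py`:

```python
def edit_once(text: str, old: str, new: str, replace_all: bool = False) -> str:
    # Mirror the documented rule: `old` must occur exactly once
    # unless replace_all is requested.
    count = text.count(old)
    if count == 0:
        raise ValueError("old string not found")
    if count > 1 and not replace_all:
        raise ValueError("old string is not unique; pass replace_all=True")
    return text.replace(old, new)
```

Requiring uniqueness makes an edit fail loudly instead of silently changing the wrong occurrence.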
 
-## Example
+---
+
+## Example Usage
+
+### Interactive CLI
 
 ```
 ────────────────────────────────────────
 ❯ what files are here?
 ────────────────────────────────────────
 
-⏺ Glob(**/*.py)
-⎿ nanocode.py
+⏺ Thinking...
+⏺ globfiles(pattern='**/*', path='.')
 
-⏺ There's one Python file: nanocode.py
+⏺ I found the following files:
+- nanocode.py
+- README.md
+- modaic/SKILL.md
 ```
 
+### Programmatic Usage
+
+```python
+from modaic import AutoProgram
+
+agent = AutoProgram.from_precompiled("farouk1/nanocode")
+
+# Read a file
+result = agent(task="Read the first 10 lines of nanocode.py")
+print(result.answer)
+
+# Search for patterns
+result = agent(task="Find all functions that contain 'file' in their name")
+print(result.answer)
+
+# Make edits
+result = agent(task="Add a comment at the top of README.md")
+print(result.affected_files)  # ['README.md']
+```
+
+---
+## Architecture
+
+### Overview
+
+```
+nanocode.py
+├── File Operations
+│   ├── read_file()  - Read with line numbers
+│   ├── write_file() - Write content
+│   └── edit_file()  - Find & replace
+├── Search Operations
+│   ├── glob_files() - Pattern matching
+│   └── grep_files() - Regex search
+├── Shell Operations
+│   └── run_bash()   - Execute commands
+├── DSPy Components
+│   ├── CodingAssistant (Signature)
+│   └── RLMCodingProgram (PrecompiledProgram)
+└── Modaic Integration
+    └── RLMCodingConfig (PrecompiledConfig)
+```
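The `glob_files()` behavior described above (pattern matching, newest-first by mtime) could be implemented along these lines. This is an illustrative sketch, not the code in `nanocode.py`:

```python
import glob
import os

def glob_files_sketch(pattern: str, path: str = ".") -> list[str]:
    # Match files under `path` and sort newest-first by modification
    # time, as the README describes for glob_files().
    matches = glob.glob(os.path.join(path, pattern), recursive=True)
    files = [m for m in matches if os.path.isfile(m)]
    return sorted(files, key=os.path.getmtime, reverse=True)
```

Sorting by mtime puts recently touched files first, which is usually what a coding agent wants to look at.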
+### Key Classes
+
+#### `RLMCodingConfig`
+Configuration class extending `PrecompiledConfig` for experiment-specific parameters.
+
+```python
+class RLMCodingConfig(PrecompiledConfig):
+    max_iters: int = 20
+    lm: str = "openrouter/anthropic/claude-3.5-sonnet"
+    sub_lm: str = "openrouter/openai/gpt-4.1"
+    api_base: str = "https://openrouter.ai/api/v1"
+    max_tokens: int = 16000
+    max_output_chars: int = 100000
+    verbose: bool = False
+```
+
+#### `RLMCodingProgram`
+Main program class extending `PrecompiledProgram`. Wraps a DSPy RLM agent with coding tools.
+
+```python
+class RLMCodingProgram(PrecompiledProgram):
+    config: RLMCodingConfig
+
+    def forward(self, task: str) -> dspy.Prediction:
+        # Returns prediction with .answer and .affected_files
+        return self.agent(task=task)
+```
+
+#### `CodingAssistant`
+DSPy Signature defining the agent's input/output schema.
+
+```python
+class CodingAssistant(dspy.Signature):
+    task: str = dspy.InputField()
+    answer: str = dspy.OutputField()
+    affected_files: list[str] = dspy.OutputField()
+```
+
+---
+## Publishing Your Own Version
+
+If you modify nanocode and want to publish your own version to Modaic Hub:
+
+```python
+from nanocode import RLMCodingProgram, RLMCodingConfig
+
+# Create and optionally optimize your program
+program = RLMCodingProgram(RLMCodingConfig())
+
+# Push to your Modaic Hub repo
+program.push_to_hub(
+    "your-username/my-nanocode",
+    commit_message="My customized nanocode",
+    with_code=True  # Include source code for AutoProgram loading
+)
+```
+
+---
+
+## Dependencies
+
+- [DSPy](https://dspy.ai/) - Framework for programming language models
+- [Modaic](https://modaic.dev/) - Hub for sharing and versioning DSPy programs
+- OpenRouter API key (for accessing language models)
+
+Install dependencies:
+
+```bash
+pip install dspy modaic
+# or with uv
+uv add dspy modaic
+```
+
+---
+## Environment Variables
+
+| Variable | Required | Description |
+|----------|----------|-------------|
+| `OPENROUTER_API_KEY` | Yes | Your OpenRouter API key |
+| `MODEL` | No | Override the default model selection |
+| `MODAIC_TOKEN` | For Hub | Required for pushing/loading from Modaic Hub |
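The `MODEL` override combines with the `openrouter/` prefixing seen in `nanocode.py`'s `main()` into a simple resolution rule, sketched here (the default value and helper name are illustrative):

```python
import os

def resolve_model(default: str = "openrouter/anthropic/claude-sonnet-4.5") -> str:
    # MODEL, when set, overrides the default selection; an openrouter/
    # prefix is added if it is missing.
    model = os.environ.get("MODEL", default)
    if not model.startswith("openrouter/"):
        model = f"openrouter/{model}"
    return model
```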
+
+---
+
 ## License
 
 MIT
config.json (11 changed lines)

@@ -1,10 +1,11 @@
 {
   "model": null,
-  "max_iters": 20,
-  "lm": "openai/gpt-5.2-codex",
-  "sub_lm": "openrouter/openai/gpt-4.1",
+  "max_iters": 50,
+  "lm": "openrouter/anthropic/claude-sonnet-4.5",
+  "sub_lm": "openrouter/openai/gpt-5-mini",
   "api_base": "https://openrouter.ai/api/v1",
-  "max_tokens": 16000,
+  "max_tokens": 32000,
   "max_output_chars": 100000,
-  "verbose": false
+  "verbose": false,
+  "track_usage": true
 }
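Functionally, the values in this file act as overrides on top of the `RLMCodingConfig`-style defaults. A hedged sketch of that merge, with a trimmed, hypothetical default set and helper name:

```python
import json

# Trimmed, illustrative defaults matching the diff above.
DEFAULTS = {"max_iters": 50, "max_tokens": 32000, "verbose": False, "track_usage": True}

def load_config(text: str) -> dict:
    # Keys present in config.json override the defaults; "model": null
    # survives as None and is resolved later (e.g. via the MODEL env var).
    cfg = dict(DEFAULTS)
    cfg.update(json.loads(text))
    return cfg
```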
nanocode.py (265 changed lines)

@@ -2,6 +2,9 @@ import os
 import re
 import glob as globlib
 import subprocess
+import shlex
+import json
+import tempfile
 from modaic import PrecompiledProgram, PrecompiledConfig
 import dspy
 from dspy.utils.callback import BaseCallback
@@ -24,6 +27,16 @@ MAGENTA = "\033[35m"
 
 # --- Display utilities ---
 
+LONG_PASTE_THRESHOLD = int(os.environ.get("NANOCODE_LONG_PASTE_THRESHOLD", "4000"))
+
+
+def save_long_paste(text: str) -> str:
+    fd, path = tempfile.mkstemp(prefix="nanocode_paste_", suffix=".txt")
+    with os.fdopen(fd, "w") as handle:
+        handle.write(text)
+    return path
+
+
 def separator():
     """Return a horizontal separator line that fits the terminal width."""
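The `save_long_paste` helper added above can be exercised in isolation. This copy of the function shows its contract: the paste lands in a `tempfile.mkstemp` file whose path is handed back for the agent to read later (the exact filename varies per run):

```python
import os
import tempfile

def save_long_paste(text: str) -> str:
    # Same pattern as the diff: persist an oversized paste to a temp
    # file and return its path so the agent can read_file() it later.
    fd, path = tempfile.mkstemp(prefix="nanocode_paste_", suffix=".txt")
    with os.fdopen(fd, "w") as handle:
        handle.write(text)
    return path

path = save_long_paste("x" * 5000)
```

Using `mkstemp` plus `os.fdopen` avoids the race between creating and opening the file.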
@@ -234,16 +247,6 @@ class CodingAssistant(dspy.Signature):
     )
 
 
-tools = {
-    "readfile": read_file,
-    "writefile": write_file,
-    "editfile": edit_file,
-    "globfiles": glob_files,
-    "grepfiles": grep_files,
-    "runbash": run_bash,
-}
-
-
 class ToolLoggingCallback(BaseCallback):
     """Callback that logs tool calls as they happen."""
@@ -274,13 +277,14 @@ class ToolLoggingCallback(BaseCallback):
 
 
 class RLMCodingConfig(PrecompiledConfig):
-    max_iters: int = 20
-    lm: str = "openrouter/anthropic/claude-3.5-sonnet"  # Default fallback
-    sub_lm: str = "openrouter/openai/gpt-4.1"  # Default fallback
+    max_iters: int = 50
+    lm: str = "openrouter/anthropic/claude-sonnet-4.5"
+    sub_lm: str = "openrouter/openai/gpt-5-mini"
     api_base: str = "https://openrouter.ai/api/v1"
-    max_tokens: int = 16000
+    max_tokens: int = 32000
     max_output_chars: int = 100000
     verbose: bool = False
+    track_usage: bool = True
 
 
 class RLMCodingProgram(PrecompiledProgram):
@@ -289,39 +293,73 @@ class RLMCodingProgram(PrecompiledProgram):
 
     def __init__(self, config: RLMCodingConfig, **kwargs):
         self.config = config
         super().__init__(config, **kwargs)
+        self.tools = {
+            "read_file": read_file,
+            "write_file": write_file,
+            "edit_file": edit_file,
+            "glob_files": glob_files,
+            "grep_files": grep_files,
+            "run_bash": run_bash,
+        }
 
         # tool logging for introspections on multi-turn conversations
         dspy.settings.configure(callbacks=[ToolLoggingCallback()])
-        lm = dspy.LM(
+        self.lm = dspy.LM(
             self.config.lm,
             api_base=self.config.api_base,
             max_tokens=self.config.max_tokens,
+            track_usage=self.config.track_usage,
         )
-        sub_lm = dspy.LM(
+        self.sub_lm = dspy.LM(
             self.config.sub_lm,
             api_base=self.config.api_base,
             max_tokens=self.config.max_tokens,
+            track_usage=self.config.track_usage,
         )
         agent = dspy.RLM(
             CodingAssistant,
-            sub_lm=sub_lm,
-            tools=tools,
+            sub_lm=self.sub_lm,
+            tools=self.tools,
             max_output_chars=self.config.max_output_chars,
             max_iterations=self.config.max_iters,
             verbose=self.config.verbose,
         )
 
-        agent.set_lm(lm)
+        agent.set_lm(self.lm)
         self.agent = agent
 
     def forward(self, task: str) -> str:
         assert task, "Task cannot be empty"
         return self.agent(task=task)
 
+    def get_tools(self):
+        return self.tools
+
+    def set_tool(self, name: str, tool: callable):
+        self.tools[name] = tool
+        self.reload_repl_tools()
+
+    def remove_tool(self, name: str):
+        if name in self.tools:
+            del self.tools[name]
+        self.reload_repl_tools()
+
+    def reload_repl_tools(
+        self,
+    ):  # we need to create a new instance for tool mutations to be passed back into the REPL
+        new_instance = dspy.RLM(
+            CodingAssistant,
+            sub_lm=self.sub_lm,
+            tools=self.tools,
+            max_output_chars=self.config.max_output_chars,
+            max_iterations=self.config.max_iters,
+            verbose=self.config.verbose,
+        )
+        new_instance.set_lm(self.lm)
+        self.agent = new_instance
+
 
 def main():
     model = os.getenv("MODEL")
-    if model is None:
-        model = select_model()
+    model = select_model()
 
     # Add openrouter/ prefix if not already present
     if not model.startswith("openrouter/"):
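The comment on `reload_repl_tools` explains the design: the RLM snapshots its tool table at construction, so mutating the tools dict requires rebuilding the agent. The pattern in miniature, with toy stand-in classes rather than the DSPy API:

```python
class FrozenAgent:
    # Stand-in for dspy.RLM: copies the tool table at construction time,
    # so later mutations of the original dict are invisible to it.
    def __init__(self, tools: dict):
        self.tools = dict(tools)

class Program:
    def __init__(self):
        self.tools = {"run_bash": lambda cmd: cmd}
        self.agent = FrozenAgent(self.tools)

    def set_tool(self, name, tool):
        # Mutate the registry, then rebuild (the reload_repl_tools step).
        self.tools[name] = tool
        self.agent = FrozenAgent(self.tools)
```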
@@ -332,18 +370,41 @@ def main():
 
     agent = RLMCodingProgram(config)
     print(
-        f"{BOLD}nanocode-dspy{RESET} | {DIM}{agent.config.lm} | {os.getcwd()}{RESET}\n"
+        f"{BOLD}NANOCODE DSPY{RESET} | {DIM}{agent.config.lm} | {os.getcwd()}{RESET}\n"
     )
 
     # Conversation history for context
     history = []
 
+    # MCP servers registry
+    mcp_servers = {}
+
+    def register_mcp_server(name, server):
+        tool_names = []
+        for tool in server.tools:
+            tool_name = f"{name}_{tool.__name__}"
+            agent.set_tool(tool_name, tool)
+            tool_names.append(tool_name)
+        return tool_names
+
     while True:
         try:
             print(separator())
             user_input = input(f"{BOLD}{BLUE}❯{RESET} ").strip()
             print(separator())
 
+            tmp_paste_path = None
+            if len(user_input) > LONG_PASTE_THRESHOLD:
+                tmp_paste_path = save_long_paste(user_input)
+                print(
+                    f"{YELLOW}⏺ Long paste detected ({len(user_input)} chars). Saved to {tmp_paste_path}{RESET}"
+                )
+                user_input = (
+                    f"The user pasted a long input ({len(user_input)} chars). "
+                    f"It has been saved to {tmp_paste_path}. "
+                    "Use read_file to view it. The file will be deleted after this response."
+                )
+
             if not user_input:
                 continue
             if user_input in ("/q", "exit"):
@@ -352,6 +413,144 @@ def main():
                 history = []
                 print(f"{GREEN}⏺ Cleared conversation{RESET}")
                 continue
+            if user_input == "/model":
+                print(f"\n{BOLD}Current model: {agent.config.lm}{RESET}")
+                print(f"\n{BOLD}Select a new model:{RESET}")
+                for key, (name, model_id) in AVAILABLE_MODELS.items():
+                    print(f"  {BLUE}{key}{RESET}. {name} ({DIM}{model_id}{RESET})")
+                print(f"  {BLUE}c{RESET}. Custom model (enter manually)")
+                print(f"  {BLUE}k{RESET}. Keep current model")
+
+                choice = input(f"\n{BOLD}{BLUE}❯{RESET} Enter choice: ").strip().lower()
+
+                if choice == "k":
+                    print(f"{GREEN}⏺ Keeping current model: {agent.config.lm}{RESET}")
+                    continue
+                elif choice in AVAILABLE_MODELS:
+                    name, model_id = AVAILABLE_MODELS[choice]
+                    new_model = (
+                        model_id
+                        if model_id.startswith("openrouter/")
+                        else f"openrouter/{model_id}"
+                    )
+                    config.lm = new_model
+                    agent = RLMCodingProgram(config)
+                    for server_name, info in mcp_servers.items():
+                        info["tools"] = register_mcp_server(server_name, info["server"])
+                    print(f"{GREEN}⏺ Switched to: {name} ({new_model}){RESET}")
+                elif choice == "c":
+                    custom_model = input(
+                        f"{BOLD}{BLUE}❯{RESET} Enter model ID: "
+                    ).strip()
+                    if custom_model:
+                        new_model = (
+                            custom_model
+                            if custom_model.startswith("openrouter/")
+                            else f"openrouter/{custom_model}"
+                        )
+                        config.lm = new_model
+                        agent = RLMCodingProgram(config)
+                        for server_name, info in mcp_servers.items():
+                            info["tools"] = register_mcp_server(
+                                server_name, info["server"]
+                            )
+                        print(f"{GREEN}⏺ Switched to custom model: {new_model}{RESET}")
+                    else:
+                        print(f"{RED}⏺ Invalid model ID, keeping current model{RESET}")
+                else:
+                    print(f"{RED}⏺ Invalid choice, keeping current model{RESET}")
+                continue
+
+            if user_input.startswith("/add-mcp"):
+                parts = shlex.split(user_input)
+                args = parts[1:]
+                if not args:
+                    print(
+                        f"{YELLOW}⏺ Usage: /add-mcp <name> <server> [--auth <auth>|--oauth] [--headers '<json>'] [--auto-auth|--no-auto-auth]{RESET}"
+                    )
+                    continue
+
+                name = None
+                auth = None
+                headers = None
+                auto_auth = None
+                positional = []
+                i = 0
+                while i < len(args):
+                    if args[i] in ("--name", "-n") and i + 1 < len(args):
+                        name = args[i + 1]
+                        i += 2
+                    elif args[i].startswith("--auth="):
+                        auth = args[i].split("=", 1)[1]
+                        i += 1
+                    elif args[i] == "--auth" and i + 1 < len(args):
+                        auth = args[i + 1]
+                        i += 2
+                    elif args[i] == "--oauth":
+                        auth = "oauth"
+                        i += 1
+                    elif args[i] == "--auto-auth":
+                        auto_auth = True
+                        i += 1
+                    elif args[i] == "--no-auto-auth":
+                        auto_auth = False
+                        i += 1
+                    elif args[i].startswith("--headers="):
+                        headers = json.loads(args[i].split("=", 1)[1])
+                        i += 1
+                    elif args[i] == "--headers" and i + 1 < len(args):
+                        headers = json.loads(args[i + 1])
+                        i += 2
+                    else:
+                        positional.append(args[i])
+                        i += 1
+
+                server_cmd = None
+                if positional:
+                    if name is None and len(positional) >= 2:
+                        name = positional[0]
+                        server_cmd = " ".join(positional[1:])
+                    else:
+                        server_cmd = " ".join(positional)
+
+                if not server_cmd:
+                    print(
+                        f"{YELLOW}⏺ Usage: /add-mcp <name> <server> [--auth <auth>|--oauth] [--headers '<json>'] [--auto-auth|--no-auto-auth]{RESET}"
+                    )
+                    continue
+
+                if not name:
+                    name = re.sub(r"[^a-zA-Z0-9_]+", "_", server_cmd).strip("_")
+                    if not name:
+                        name = f"mcp_{len(mcp_servers) + 1}"
+
+                if name in mcp_servers:
+                    for tool_name in mcp_servers[name]["tools"]:
+                        agent.remove_tool(tool_name)
+
+                try:
+                    from mcp2py import load
+
+                    kwargs = {}
+                    if auth is not None:
+                        kwargs["auth"] = auth
+                    if headers:
+                        kwargs["headers"] = headers
+                    if auto_auth is not None:
+                        kwargs["auto_auth"] = auto_auth
+
+                    server = load(server_cmd, **kwargs)
+                    tool_names = register_mcp_server(name, server)
+                    mcp_servers[name] = {"server": server, "tools": tool_names}
+
+                    print(
+                        f"{GREEN}⏺ Added MCP server '{name}' with {len(tool_names)} tools{RESET}"
+                    )
+                    print(f"{GREEN}⏺ Tools: {list(agent.tools.keys())}{RESET}")
+                except Exception as err:
+                    print(f"{RED}⏺ Failed to add MCP server: {err}{RESET}")
+
+                continue
+
             # Build context from history
             context = f"Working directory: {os.getcwd()}\n"
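The `/add-mcp` grammar above boils down to positionals plus a few flags. A condensed parser sketch, simplified to `--auth` and `--headers` only and not the shipped code:

```python
import json
import shlex

def parse_add_mcp(command: str) -> dict:
    # /add-mcp <name> <server...> [--auth <auth>] [--headers '<json>']
    args = shlex.split(command)[1:]
    opts = {"name": None, "server": None, "auth": None, "headers": None}
    positional = []
    i = 0
    while i < len(args):
        if args[i] == "--auth" and i + 1 < len(args):
            opts["auth"] = args[i + 1]
            i += 2
        elif args[i] == "--headers" and i + 1 < len(args):
            opts["headers"] = json.loads(args[i + 1])
            i += 2
        else:
            positional.append(args[i])
            i += 1
    # First positional is the name when two or more are given;
    # the rest joins back into the server command.
    if len(positional) >= 2:
        opts["name"], opts["server"] = positional[0], " ".join(positional[1:])
    elif positional:
        opts["server"] = positional[0]
    return opts
```

`shlex.split` keeps the quoted JSON for `--headers` as a single token, which is why the command line can carry structured data.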
@@ -364,12 +563,22 @@ def main():
 
             print(f"\n{CYAN}⏺{RESET} Thinking...", flush=True)
 
-            # Run the ReAct agent
-            result = agent(task=task)
+            # Run the RLM agent
+            try:
+                result = agent(task=task)
+            finally:
+                if tmp_paste_path:
+                    try:
+                        os.remove(tmp_paste_path)
+                    except OSError:
+                        pass
 
             # Display the answer
             print(f"\n{CYAN}⏺{RESET} {render_markdown(result.answer)}")
 
             # Display usage
+            print(f"\n{MAGENTA}⏺ Debug Prediction: {result}{RESET}")
 
             # Save to history
             history.append({"user": user_input, "assistant": result.answer})
@@ -385,6 +594,6 @@ def main():
 
 
 if __name__ == "__main__":
-    agent = RLMCodingProgram(RLMCodingConfig(lm="openai/gpt-5.2-codex"))
-    agent.push_to_hub(MODAIC_REPO_PATH, commit_message="Switch to RLM instead of ReAct", tag="v0.0.1")
+    agent = RLMCodingProgram(RLMCodingConfig())
+    agent.push_to_hub(MODAIC_REPO_PATH, commit_message="Add MCP server support and long paste handling", tag="v0.0.4")
     #main()
program.json (16 changed lines)

@@ -4,7 +4,7 @@
 "train": [],
 "demos": [],
 "signature": {
"instructions": "You are a concise coding assistant. Help the user with their coding task by using the available tools to read, write, edit files, search the codebase, and run commands.\n\nYou are tasked with producing the following outputs given the inputs `task`:\n- {answer}\n- {affected_files} # note: the value you produce must adhere to the JSON schema: {\"type\": \"array\", \"items\": {\"type\": \"string\"}}\n\nYou have access to a Python REPL environment. Write Python code and it will be executed. You will see the output, then write more code based on what you learned. This is an iterative process.\n\nAvailable:\n- Variables: `task` (your input data)\n- `llm_query(prompt)` - query a sub-LLM (~500K char capacity) for semantic analysis\n- `llm_query_batched(prompts)` - query multiple prompts concurrently (much faster for multiple queries)\n- `print()` - ALWAYS print to see results\n- `SUBMIT(answer, affected_files)` - submit final output when done\n- Standard libraries: re, json, collections, math, etc.\n\nIMPORTANT: This is ITERATIVE. Each code block you write will execute, you'll see the output, then you decide what to do next. Do NOT try to solve everything in one step.\n\n1. EXPLORE FIRST - Look at your data before processing it. Print samples, check types/lengths, understand the structure.\n2. ITERATE - Write small code snippets, observe outputs, then decide next steps. State persists between iterations.\n3. VERIFY BEFORE SUBMITTING - If results seem wrong (zeros, empty, unexpected), reconsider your approach.\n4. USE llm_query FOR SEMANTICS - String matching finds WHERE things are; llm_query understands WHAT things mean.\n5. MINIMIZE RETYPING (INPUTS & OUTPUTS) - When values are long, precise, or error-prone (IDs, numbers, code, quotes), re-access them via variables and parse/compute in code instead of retyping. Use small, targeted prints to sanity-check, but avoid manual copying when variables can carry the exact value.\n6. 
SUBMIT ONLY AFTER SEEING OUTPUTS - SUBMIT ends the current run immediately. If you need to inspect printed output, run it in one step, review the result, then call SUBMIT in a later step.\n\nYou have max 50 sub-LLM calls. When done, call SUBMIT() with your output.\nAdditional tools available (use these instead of standard library equivalents):\n- `readfile(path: str, offset: int, limit: int) -> str` - Read file contents with line numbers.\n- `writefile(path: str, content: str) -> str` - Write content to a file.\n- `editfile(path: str, old: str, new: str, replace_all: bool) -> str` - Replace text in a file.\n- `globfiles(pattern: str, path: str) -> str` - Find files matching a glob pattern, sorted by modification time.\n- `grepfiles(pattern: str, path: str) -> str` - Search files for a regex pattern.\n- `runbash(cmd: str) -> str` - Run a shell command and return output.",
|
||||
"instructions": "You are a concise coding assistant. Help the user with their coding task by using the available tools to read, write, edit files, search the codebase, and run commands.\n\nYou are tasked with producing the following outputs given the inputs `task`:\n- {answer}\n- {affected_files} # note: the value you produce must adhere to the JSON schema: {\"type\": \"array\", \"items\": {\"type\": \"string\"}}\n\nYou have access to a Python REPL environment. Write Python code and it will be executed. You will see the output, then write more code based on what you learned. This is an iterative process.\n\nAvailable:\n- Variables: `task` (your input data)\n- `llm_query(prompt)` - query a sub-LLM (~500K char capacity) for semantic analysis\n- `llm_query_batched(prompts)` - query multiple prompts concurrently (much faster for multiple queries)\n- `print()` - ALWAYS print to see results\n- `SUBMIT(answer, affected_files)` - submit final output when done\n- Standard libraries: re, json, collections, math, etc.\n\nIMPORTANT: This is ITERATIVE. Each code block you write will execute, you'll see the output, then you decide what to do next. Do NOT try to solve everything in one step.\n\n1. EXPLORE FIRST - Look at your data before processing it. Print samples, check types/lengths, understand the structure.\n2. ITERATE - Write small code snippets, observe outputs, then decide next steps. State persists between iterations.\n3. VERIFY BEFORE SUBMITTING - If results seem wrong (zeros, empty, unexpected), reconsider your approach.\n4. USE llm_query FOR SEMANTICS - String matching finds WHERE things are; llm_query understands WHAT things mean.\n5. MINIMIZE RETYPING (INPUTS & OUTPUTS) - When values are long, precise, or error-prone (IDs, numbers, code, quotes), re-access them via variables and parse/compute in code instead of retyping. Use small, targeted prints to sanity-check, but avoid manual copying when variables can carry the exact value.\n6. 
SUBMIT ONLY AFTER SEEING OUTPUTS - SUBMIT ends the current run immediately. If you need to inspect printed output, run it in one step, review the result, then call SUBMIT in a later step.\n\nYou have max 50 sub-LLM calls. When done, call SUBMIT() with your output.\nAdditional tools available (use these instead of standard library equivalents):\n- `read_file(path: str, offset: int, limit: int) -> str` - Read file contents with line numbers.\n- `write_file(path: str, content: str) -> str` - Write content to a file.\n- `edit_file(path: str, old: str, new: str, replace_all: bool) -> str` - Replace text in a file.\n- `glob_files(pattern: str, path: str) -> str` - Find files matching a glob pattern, sorted by modification time.\n- `grep_files(pattern: str, path: str) -> str` - Search files for a regex pattern.\n- `run_bash(cmd: str) -> str` - Run a shell command and return output.",
|
||||
"fields": [
|
||||
{
|
||||
"prefix": "Variables Info:",
|
||||
@@ -29,7 +29,7 @@
|
||||
]
|
||||
},
|
||||
"lm": {
|
||||
"model": "openai/gpt-5.2-codex",
|
||||
"model": "openrouter/anthropic/claude-sonnet-4.5",
|
||||
"model_type": "chat",
|
||||
"cache": true,
|
||||
"num_retries": 3,
|
||||
@@ -37,8 +37,9 @@
|
||||
"launch_kwargs": {},
|
||||
"train_kwargs": {},
|
||||
"temperature": null,
|
||||
"max_tokens": 16000,
|
||||
"api_base": "https://openrouter.ai/api/v1"
|
||||
"max_tokens": 32000,
|
||||
"api_base": "https://openrouter.ai/api/v1",
|
||||
"track_usage": true
|
||||
}
|
||||
},
|
||||
"agent.extract": {
|
||||
@@ -67,7 +68,7 @@
|
||||
]
|
||||
},
|
||||
"lm": {
|
||||
"model": "openai/gpt-5.2-codex",
|
||||
"model": "openrouter/anthropic/claude-sonnet-4.5",
|
||||
"model_type": "chat",
|
||||
"cache": true,
|
||||
"num_retries": 3,
|
||||
@@ -75,8 +76,9 @@
|
||||
"launch_kwargs": {},
|
||||
"train_kwargs": {},
|
||||
"temperature": null,
|
||||
"max_tokens": 16000,
|
||||
"api_base": "https://openrouter.ai/api/v1"
|
||||
"max_tokens": 32000,
|
||||
"api_base": "https://openrouter.ai/api/v1",
|
||||
"track_usage": true
|
||||
}
|
||||
},
|
||||
"metadata": {
|
||||
|
||||
@@ -4,4 +4,4 @@ version = "0.1.0"
 description = "Add your description here"
 readme = "README.md"
 requires-python = ">=3.13"
-dependencies = ["dspy>=3.1.2", "modaic>=0.10.3"]
+dependencies = ["dspy>=3.1.2", "fastmcp>=2.14.3", "mcp2py>=0.6.0", "modaic>=0.10.4"]