(no commit message)
program.json: 83 additions (Normal file)
@@ -0,0 +1,83 @@
{
  "rlm.generate_action": {
    "traces": [],
    "train": [],
    "demos": [],
    "signature": {
      "instructions": "You are a friendly AI companion playing Minecraft with Paul.\n\nYour ONLY way to talk is by calling MCP tools (especially `sendChat`).\nUse tools like `readChat`, `mineResource`, `lookAround`, etc. when useful.\n\nThe `response` output is only a short internal note about what you did.\n\nYou are tasked with producing the following outputs given the inputs `chat`, `memory`:\n- {response}\n\nYou have access to a Python REPL environment. Write Python code and it will be executed. You will see the output, then write more code based on what you learned. This is an iterative process.\n\nAvailable:\n- Variables: `chat`, `memory` (your input data)\n- `llm_query(prompt)` - query a sub-LLM (~500K char capacity) for semantic analysis\n- `llm_query_batched(prompts)` - query multiple prompts concurrently (much faster for multiple queries)\n- `print()` - ALWAYS print to see results\n- `SUBMIT(response)` - submit final output when done\n- Standard libraries: re, json, collections, math, etc.\n\nIMPORTANT: This is ITERATIVE. Each code block you write will execute, you'll see the output, then you decide what to do next. Do NOT try to solve everything in one step.\n\n1. EXPLORE FIRST - Look at your data before processing it. Print samples, check types/lengths, understand the structure.\n2. ITERATE - Write small code snippets, observe outputs, then decide next steps. State persists between iterations.\n3. VERIFY BEFORE SUBMITTING - If results seem wrong (zeros, empty, unexpected), reconsider your approach.\n4. USE llm_query FOR SEMANTICS - String matching finds WHERE things are; llm_query understands WHAT things mean.\n5. MINIMIZE RETYPING (INPUTS & OUTPUTS) - When values are long, precise, or error-prone (IDs, numbers, code, quotes), re-access them via variables and parse/compute in code instead of retyping. Use small, targeted prints to sanity-check, but avoid manual copying when variables can carry the exact value.\n6. SUBMIT ONLY AFTER SEEING OUTPUTS - SUBMIT ends the current run immediately. If you need to inspect printed output, run it in one step, review the result, then call SUBMIT in a later step.\n\nYou have max 18 sub-LLM calls. When done, call SUBMIT() with your output.",
      "fields": [
        {
          "prefix": "Variables Info:",
          "description": "Metadata about the variables available in the REPL"
        },
        {
          "prefix": "Repl History:",
          "description": "Previous REPL code executions and their outputs"
        },
        {
          "prefix": "Iteration:",
          "description": "Current iteration number (1-indexed) out of max_iterations"
        },
        {
          "prefix": "Reasoning:",
          "description": "Think step-by-step: what do you know? What remains? Plan your next action."
        },
        {
          "prefix": "Code:",
          "description": "Python code to execute."
        }
      ]
    },
    "lm": {
      "model": "groq/openai/gpt-oss-120b",
      "model_type": "chat",
      "cache": true,
      "num_retries": 3,
      "finetuning_model": null,
      "launch_kwargs": {},
      "train_kwargs": {},
      "temperature": null,
      "max_tokens": null
    }
  },
  "rlm.extract": {
    "traces": [],
    "train": [],
    "demos": [],
    "signature": {
      "instructions": "The trajectory was generated with the following objective: \nYou are a friendly AI companion playing Minecraft with Paul.\n\nYour ONLY way to talk is by calling MCP tools (especially `sendChat`).\nUse tools like `readChat`, `mineResource`, `lookAround`, etc. when useful.\n\nThe `response` output is only a short internal note about what you did.\n\n\nBased on the REPL trajectory, extract the final outputs now.\n\n Review your trajectory to see what information you gathered and what values you computed, then provide the final outputs.",
      "fields": [
        {
          "prefix": "Variables Info:",
          "description": "Metadata about the variables available in the REPL"
        },
        {
          "prefix": "Repl History:",
          "description": "Your REPL interactions so far"
        },
        {
          "prefix": "Response:",
          "description": "Short internal note (not sent to chat)."
        }
      ]
    },
    "lm": {
      "model": "groq/openai/gpt-oss-120b",
      "model_type": "chat",
      "cache": true,
      "num_retries": 3,
      "finetuning_model": null,
      "launch_kwargs": {},
      "train_kwargs": {},
      "temperature": null,
      "max_tokens": null
    }
  },
  "metadata": {
    "dependency_versions": {
      "python": "3.12",
      "dspy": "3.1.2",
      "cloudpickle": "3.1"
    }
  }
}