(no commit message)

2025-12-16 17:18:20 -08:00
parent 4ebeaa306b
commit 6aebca5fc2
4 changed files with 234 additions and 1 deletions

LICENSE (new file, 21 lines)

@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2025 Stanford MIMI Lab

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md (138 lines)

@@ -1,2 +1,138 @@
# head_qa-claude-3.7-sonnet-MIPROv2
# Structured Prompting Enables More Robust Evaluation of Language Models
[![arXiv](https://img.shields.io/badge/arXiv-2511.20836-b31b1b.svg?style=for-the-badge)](https://arxiv.org/abs/2511.20836)
[![License](https://img.shields.io/github/license/stanfordmimi/helm-optimizer?style=for-the-badge)](LICENSE)
<img src="assets/fig.png" alt="Overview" width="650">
**Figure 1** | **Pipeline overview**. (a) DSPy takes HELM's baseline prompt and produces structured prompt variants. (b) HELM evaluates models under each prompt variant. Structured prompting yields a more robust evaluation of model performance.
## What is DSPy-HELM?
A framework for optimizing language model performance on benchmarks using [DSPy](https://github.com/stanfordnlp/dspy), enabling more robust evaluation. The toolkit provides automated prompt optimization (APO) for benchmarks from the [HELM](https://github.com/stanford-crfm/helm) (Holistic Evaluation of Language Models) ecosystem.
## Installation
```bash
# Clone the repository
git clone https://github.com/StanfordMIMI/dspy-helm.git
cd dspy-helm
# Install dependencies
pip install -r requirements.txt
```
## Basic Usage
**Configure settings** in `run.sh`:
```bash
scenarios=("mmlu_pro" "gpqa" "gsm8k" "medcalc_bench" "medec" "head_qa" "medbullets")
optimizers=("MIPROv2" "BootstrapFewShotWithRandomSearch")
# Language model to be optimized
model=openai/gpt-4o
api_base="your_api_base_here"
api_key="your_api_key_here"
# Teacher language model for proposing instruction candidates (MIPROv2)
prompt_model=openai/gpt-4o
prompt_api_base="your_api_base_here"
prompt_api_key="your_api_key_here"
max_bootstrapped_demos=3
max_labeled_demos=3
num_threads=1
```
**Run optimization**:
```bash
./run.sh
```
## 📊 Supported Benchmarks
| Benchmark | Input → Output | Task |
|-------------------|--------------------------------------|------------------------|
| MMLU-Pro | Reasoning Question → Answer | Multi-Task Reasoning |
| GPQA | Graduate Question → Answer | Graduate-Level QA |
| GSM8K | Math Problem → Solution | Numeric Problem-Solving|
| MedCalc-Bench | Patient Note → Computed Value | Computational Reasoning|
| Medec | Medical Narrative → Errors | Error Classification |
| HeadQA | Medical Question → Answer | USMLE-Style QA |
| MedBullets | Medical Question → Answer | USMLE-Style QA |
## Optimization Parameters
- `--max_bootstrapped_demos`: Number of bootstrapped demonstrations
- `--max_labeled_demos`: Number of labeled demonstrations
- `--num_threads`: Parallel processing threads (default: 16)
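For a single run outside `run.sh`, these parameters can presumably be passed to `main.py` directly. A sketch under the assumption that `main.py` also accepts `--scenario` and `--optimizer` flags matching the names in `run.sh` (the three demo/thread flags are from the list above):

```bash
python main.py \
  --scenario head_qa \
  --optimizer MIPROv2 \
  --max_bootstrapped_demos 3 \
  --max_labeled_demos 3 \
  --num_threads 16
```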
## 📁 Project Structure
```
dspy-helm/
├── main.py # Main optimization script
├── scenarios.py # Benchmark implementations
├── run.sh # Batch optimization runner
├── requirements.txt # Python dependencies
├── agents/ # Optimized DSPy agents
│ ├── medcalc_bench/
│ ├── head_qa/
│ ├── medbullets/
│ └── ...
└── README.md # This file
```
## 📈 Results and Evaluation
Optimized agents are automatically saved to the `agents/` directory:
```
agents/
└── {scenario}/
└── {model_name}/
├── MIPROv2.json
└── BootstrapFewShotWithRandomSearch.json
```
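Saved agents are plain JSON (see the `program.json` added in this commit), so they can be inspected without loading DSPy. A minimal sketch using an inline snippet that mirrors the saved structure; a real script would `json.load` the file at `agents/{scenario}/{model_name}/{optimizer}.json`:

```python
import json

# Trimmed example mirroring the structure DSPy writes for an optimized agent
saved = json.loads("""
{
  "agent.predict": {
    "demos": [{"inputs": "...", "output": "D"}],
    "signature": {"instructions": "Analyze the input text and report the final answer."}
  }
}
""")

predict = saved["agent.predict"]
num_demos = len(predict["demos"])                      # few-shot demos selected by the optimizer
instructions = predict["signature"]["instructions"]    # optimized instruction text
print(num_demos)
print(instructions)
```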
## Creating Custom Scenarios
To add a new HELM benchmark:
**Implement the scenario class** in `scenarios.py`:
```python
import dspy

class my_benchmark:
    def __init__(self, test_size=0.1, seed=42):
        self.test_size = test_size
        self.seed = seed

    @staticmethod
    def make_prompt(row):
        return f"Question: {row['question']}\nAnswer:"

    @staticmethod
    def metric(example, pred, trace=None):
        # Your evaluation metric
        return dspy.evaluate.metrics.answer_exact_match(example, pred, trace)

    def load_data(self):
        # Load and return trainset, valset (lists of dspy.Example)
        pass
```
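Before wiring a new scenario into the optimizer, the prompt builder and metric can be sanity-checked in isolation. A sketch with plain-Python stand-ins (the row schema and the exact-match logic below are illustrative, not the DSPy implementations):

```python
def make_prompt(row):
    # Stand-in mirroring the scenario's make_prompt (row schema is illustrative)
    return f"Question: {row['question']}\nAnswer:"

def metric(example, pred, trace=None):
    # Plain exact-match stand-in for dspy.evaluate.metrics.answer_exact_match
    return example["answer"].strip().lower() == pred["answer"].strip().lower()

row = {"question": "What is the average lifetime of an erythrocyte?"}
prompt = make_prompt(row)
print(prompt)                                      # ends with "Answer:"
print(metric({"answer": "A"}, {"answer": "a"}))    # True: match is case-insensitive
```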
## 🙏 Acknowledgments
This repository is built using [DSPy](https://github.com/stanfordnlp/dspy) for prompt optimization.
## 📎 Citation
If you find this repository useful for your work, please cite the following paper:
```bibtex
@article{aali2025structured,
title={Structured Prompting Enables More Robust Evaluation of Language Models},
author={Aali, Asad and Mohsin, Muhammad Ahmed and Bikia, Vasiliki and Singhvi, Arnav and Gaus, Richard and Bedi, Suhana and Cui, Hejie and Fuentes, Miguel and Unell, Alyssa and Mai, Yifan and others},
journal={arXiv preprint arXiv:2511.20836},
year={2025}
}
```

config.json (new file, 14 lines)

@@ -0,0 +1,14 @@
{
"model": "openrouter/anthropic/claude-3.7-sonnet",
"scenario": "head_qa",
"optimizer": "MIPROv2",
"api_base": "https://openrouter.ai/api/v1",
"prompt_model": "gpt-4o",
"prompt_api_base": null,
"max_tokens": 64000,
"temperature": null,
"max_bootstrapped_demos": 3,
"max_labeled_demos": 3,
"num_threads": 16,
"val_size": null
}

program.json (new file, 62 lines)

@@ -0,0 +1,62 @@
{
"agent.predict": {
"traces": [],
"train": [],
"demos": [
{
"augmented": true,
"inputs": "You are a highly knowledgeable AI assistant specializing in biomedical sciences. Your task is to answer multiple-choice questions accurately based on the options provided. Each question will relate to biomedical concepts, and you will be asked to choose the most appropriate answer.\n\nSelect the correct answer by outputting only the letter corresponding to your choice (A, B, C, or D).\n\nQuestion: Burkitt's lymphoma and nasopharyngeal carcinoma are related to infection by:\nA. Herpes human virus type 6.\nB. Cytomegalovirus.\nC. Respiratory syncytial virus.\nD. Epstein-Barr virus.\nE. HIV.\nAnswer:",
"reasoning": "Let me analyze each option:\n\nA. Herpes human virus type 6 (HHV-6) - This virus is primarily associated with roseola infantum (exanthem subitum) and can cause complications in immunocompromised individuals, but it is not known to be associated with Burkitt's lymphoma or nasopharyngeal carcinoma.\n\nB. Cytomegalovirus (CMV) - This is a herpesvirus that can cause congenital infections and complications in immunocompromised individuals, but it is not specifically linked to Burkitt's lymphoma or nasopharyngeal carcinoma.\n\nC. Respiratory syncytial virus (RSV) - This is primarily a respiratory pathogen that causes bronchiolitis and pneumonia, particularly in infants. It is not associated with either Burkitt's lymphoma or nasopharyngeal carcinoma.\n\nD. Epstein-Barr virus (EBV) - EBV is known to be strongly associated with both Burkitt's lymphoma (particularly in endemic regions like parts of Africa) and nasopharyngeal carcinoma (especially in parts of Southeast Asia). EBV is implicated in the pathogenesis of these malignancies through various mechanisms including activation of oncogenes.\n\nE. HIV (Human Immunodeficiency Virus) - While HIV infection can increase the risk of certain malignancies due to immunosuppression, it is not directly causative of Burkitt's lymphoma or nasopharyngeal carcinoma in the way that EBV is.\n\nBased on this analysis, the correct answer is D. Epstein-Barr virus.",
"output": "D",
"answer": "D"
},
{
"augmented": true,
"inputs": "You are a highly knowledgeable AI assistant specializing in biomedical sciences. Your task is to answer multiple-choice questions accurately based on the options provided. Each question will relate to biomedical concepts, and you will be asked to choose the most appropriate answer.\n\nSelect the correct answer by outputting only the letter corresponding to your choice (A, B, C, or D).\n\nQuestion: What is the average life time of an erythrocyte:\nA. Four months.\nB. Four weeks.\nC. One year.\nD. Sixty days\nE. Two weeks.\nAnswer:",
"reasoning": "I need to recall the average lifetime of a red blood cell (erythrocyte) in human circulation.\n\nErythrocytes are produced in the bone marrow and released into the bloodstream where they circulate for a specific period before being removed by the spleen and liver.\n\nThe generally accepted average lifespan of a human erythrocyte is approximately 120 days, which is equal to about 4 months.\n\nLooking at the options:\nA. Four months = approximately 120 days \u2713\nB. Four weeks = approximately 28 days \u2717\nC. One year = approximately 365 days \u2717\nD. Sixty days \u2717\nE. Two weeks = approximately 14 days \u2717\n\nTherefore, the correct answer is A. Four months.",
"output": "A",
"answer": "A"
},
{
"inputs": "You are a highly knowledgeable AI assistant specializing in biomedical sciences. Your task is to answer multiple-choice questions accurately based on the options provided. Each question will relate to biomedical concepts, and you will be asked to choose the most appropriate answer.\n\nSelect the correct answer by outputting only the letter corresponding to your choice (A, B, C, or D).\n\nQuestion: Somatomedin IGF-1 causes:\nA. Secretion of GHRH.\nB. Protein catabolism.\nC. Hyperglycemia\nD. Activation of osteoblasts\nE. GH secretion\nAnswer:",
"output": "D"
}
],
"signature": {
"instructions": "Analyze the input text, which includes an introduction and a multiple-choice question in the biomedical field. Follow a \"Chain of Thought\" approach to articulate a step-by-step reasoning for evaluating each provided answer choice to determine the most appropriate answer. The model should output the reasoning process in detail, explaining why a particular choice is correct. Lastly, report the final answer by providing only the letter corresponding to your choice (A, B, C, or D), enhancing both accuracy and justification in decision-making.",
"fields": [
{
"prefix": "Inputs:",
"description": "${inputs}"
},
{
"prefix": "Reasoning: Let's think step by step in order to",
"description": "${reasoning}"
},
{
"prefix": "Output:",
"description": "${output}"
}
]
},
"lm": {
"model": "openrouter/anthropic/claude-3.7-sonnet",
"model_type": "chat",
"cache": true,
"num_retries": 3,
"finetuning_model": null,
"launch_kwargs": {},
"train_kwargs": {},
"temperature": null,
"max_tokens": 64000,
"api_base": "https://openrouter.ai/api/v1"
}
},
"metadata": {
"dependency_versions": {
"python": "3.13",
"dspy": "3.0.4",
"cloudpickle": "3.1"
}
}
}