From 0cf6ae838595120db70dd1d21068543b0c5f3837 Mon Sep 17 00:00:00 2001 From: Farouk Adeleke Date: Tue, 21 Oct 2025 08:04:22 +0000 Subject: [PATCH] Update README.md --- README.md | 22 +++++++++------------- 1 file changed, 9 insertions(+), 13 deletions(-) diff --git a/README.md b/README.md index 13b7f8d..5269c90 100644 --- a/README.md +++ b/README.md @@ -1,28 +1,24 @@ -# Red-Teaming Language Models with DSPy +# Red-Teaming Langu# Red-team Language Models with DSPy -We use the the power of [DSPy](https://github.com/stanfordnlp/dspy), a framework for structuring and optimizing language model programs, to red-team language models. +We use the the power of [DSPy](https://github.com/stanfordnlp/dspy), a framework for structuring and optimizing language model programs, to red-team language models. To our knowledge, this is the first attempt at using any auto-prompting *framework* to perform the red-teaming task. This is also probably the deepest architecture in public optimized with DSPy to date. We accomplish this using a *deep* language program with several layers of alternating `Attack` and `Refine` modules in the following optimization loop: -
- Overview of DSPy for red-teaming -
Figure 1: Overview of DSPy for red-teaming. The DSPy MIPRO optimizer, guided by a LLM as a judge, compiles our language program into an effective red-teamer against Vicuna.
-
+![Overview of DSPy for red-teaming](https://www.dropbox.com/scl/fi/4ebg4jrsebbvkpgfs8fjp/feedforward.png?rlkey=tq2saicjukzolhs1fjn30egyf&st=dmuqltlu&dl=0) + +*Figure 1: Overview of DSPy for red-teaming. The DSPy MIPRO optimizer, guided by a LLM as a judge, compiles our language program into an effective red-teamer against Vicuna.* The following Table demonstrates the effectiveness of the chosen architecture, as well as the benefit of DSPy compilation: -
- | **Architecture** | **ASR** | |:------------:|:----------:| -| None (Raw Input) | 10% | -| Architecture (5 Layer) | 26% | -| Architecture (5 Layer) + Optimization | 44% | +| None (Raw Input) | 10% | +| Architecture (5 Layer) | 26% | +| Architecture (5 Layer) + Optimization | 44% | -Table 1: ASR with raw harmful inputs, un-optimized architecture, and architecture post DSPy compilation. -
+*Table 1: ASR with raw harmful inputs, un-optimized architecture, and architecture post DSPy compilation.* With *no specific prompt engineering*, we are able to achieve an Attack Success Rate of 44%, 4x over the baseline. This is by no means the SOTA, but considering how we essentially spent no effort designing the architecture and prompts, and considering how we just used an off-the-shelf optimizer with almost no hyperparameter tuning (except to fit compute constraints), we think it is pretty exciting that we can achieve this result!