# DSPy Tweet Optimizer - Modaic Agent

A composable DSPy agent that optimizes tweets using a reflective generate-evaluate algorithm, packaged for the [Modaic Hub](https://modaic.dev). Generate, evaluate, and iteratively improve tweets with configurable evaluation categories and automated optimization.

## Features

- **Modaic Agent**: Deployable on Modaic Hub for easy sharing and reuse
- **Hill-Climbing Optimization**: Iteratively improves tweets through automated evaluation (see "How Optimization Works" below)
- **Customizable Categories**: Define evaluation criteria (engagement, clarity, tone, etc.)
- **Multiple Usage Modes**: Single generation, full optimization, or standalone evaluation
- **Structured Evaluation**: 1-9 scoring with detailed reasoning per category
- **CLI Compatibility**: Same functionality as the original CLI tool
- **Easy Configuration**: Flexible model, iteration, and patience settings

## Installation

### Prerequisites

- Python 3.11+
- OpenRouter API key ([Get one here](https://openrouter.ai/))
- Modaic account (for hub deployment)

### Setup

1. **Clone the repository:**

   ```bash
   git clone https://git.modaic.dev/farouk1/tweet-optimizer-v2.git
   cd tweet-optimizer-v2
   ```

2. **Install dependencies:**

   ```bash
   uv sync
   ```

3. **Set up your API key and Modaic token:**

   ```bash
   export OPENROUTER_API_KEY='your-api-key-here'
   export MODAIC_TOKEN='your-modaic-token'
   ```

## Usage

### Basic Agent Usage

```python
from tweet_optimizer_agent import TweetOptimizerAgent, TweetOptimizerConfig

# Create agent with default settings
config = TweetOptimizerConfig()
agent = TweetOptimizerAgent(config)

# Single tweet generation
tweet = agent(
    input_text="Create a tweet about HuggingFace transformers",
    current_tweet="",
    previous_evaluation=None
)
print(f"Generated: {tweet}")
```

### Full Optimization Process

```python
# Run complete optimization (like the CLI)
results = agent.optimize(
    input_text="Create a tweet about HuggingFace transformers",
    iterations=10,
    patience=5
)

print(f"Original: {results['initial_text']}")
print(f"Optimized: {results['final_tweet']}")
print(f"Score: {results['best_score']:.2f}")
print(f"Iterations: {results['iterations_run']}")
```

### Custom Configuration

```python
# Custom evaluation categories and settings
config = TweetOptimizerConfig(
    lm="openrouter/anthropic/claude-sonnet-4.5",
    categories=[
        "Engagement potential",
        "Clarity and readability",
        "Professional tone",
        "Call-to-action strength"
    ],
    max_iterations=15,
    patience=8
)
agent = TweetOptimizerAgent(config)
```

### Tweet Evaluation

```python
# Evaluate a specific tweet
evaluation = agent.evaluate_tweet(
    tweet_text="Excited to share our new AI model!",
    original_text="We released a new AI model",
    current_best_tweet=""
)

for eval in evaluation.evaluations:
    print(f"{eval.category}: {eval.score}/9 - {eval.reasoning}")
```

### Deploy to Modaic Hub

```python
# Push your trained agent to Modaic Hub
agent.push_to_hub(
    "your-username/tweet-optimizer",
    commit_message="Deploy tweet optimizer agent",
    with_code=True
)
```

### Load from Hub

```python
# Load a pre-trained agent from Modaic Hub
agent = TweetOptimizerAgent.from_precompiled("your-username/tweet-optimizer")

# Use immediately
optimized = agent("Your tweet content here")
```

## CLI Tool

The original CLI functionality is still available:

```bash
# Basic usage
python cli.py "Create a tweet about AI breakthroughs"

# With custom settings
python cli.py "Product launch announcement" \
    --model "Claude Sonnet 4.5" \
    --iterations 15 \
    --patience 8 \
    --categories "Excitement" "Clarity" "Call-to-action"

# List available models
python cli.py --list-models

# Quiet mode (output only the final tweet)
python cli.py "Content here" --quiet
```
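## How Optimization Works

`agent.optimize()` and the CLI both drive a hill-climbing loop (implemented in `hill_climbing.py`): generate a candidate tweet, score it against the evaluation categories, keep it only if it beats the best tweet found so far, and stop early once no improvement has been seen for `patience` consecutive iterations. The sketch below is a minimal illustration of that idea, not the project's actual code; `generate` and `evaluate` are placeholder callables standing in for the DSPy generator and evaluator modules.

```python
def hill_climb(input_text, generate, evaluate, max_iterations=10, patience=5):
    """Minimal generate-evaluate hill-climbing sketch (illustrative names only)."""
    best_tweet, best_score = "", float("-inf")
    stale = 0  # iterations since the last improvement

    for _ in range(max_iterations):
        # Propose a candidate, conditioned on the best tweet found so far
        candidate = generate(input_text, current_best=best_tweet)
        # Score the candidate, e.g. by averaging the per-category 1-9 scores
        score = evaluate(candidate)

        if score > best_score:
            best_tweet, best_score = candidate, score
            stale = 0
        else:
            stale += 1
            if stale >= patience:  # early stop after N non-improving iterations
                break

    return best_tweet, best_score
```

`max_iterations` and `patience` correspond to the configuration options documented below.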
## Configuration Options

### TweetOptimizerConfig

| Parameter        | Type      | Default                                | Description                                 |
|------------------|-----------|----------------------------------------|---------------------------------------------|
| `lm`             | str       | `"openrouter/google/gemini-2.5-flash"` | Language model to use                       |
| `eval_lm`        | str       | `"openrouter/openai/gpt-5"`            | Evaluator language model to use             |
| `categories`     | List[str] | Default evaluation categories          | Custom evaluation criteria                  |
| `max_iterations` | int       | 10                                     | Maximum optimization iterations             |
| `patience`       | int       | 5                                      | Stop after N iterations without improvement |

### Default Categories

1. **Engagement potential** - How likely users are to like, retweet, or reply
2. **Clarity and readability** - How easy the tweet is to understand
3. **Emotional impact** - How well the tweet evokes feelings or reactions
4. **Relevance to target audience** - How well it resonates with intended readers

## Architecture

```
dspy-tweet-optimizer/
├── tweet_optimizer_agent.py   # Main Modaic agent implementation
├── cli.py                     # Command-line interface
├── modules.py                 # DSPy generator and evaluator modules
├── hill_climbing.py           # Optimization algorithm
├── models.py                  # Pydantic data models
├── helpers.py                 # Utility functions
├── utils.py                   # File I/O and DSPy utilities
├── constants.py               # Configuration constants
└── tests/                     # Test suite
```

### Core Components

- **TweetOptimizerAgent**: Main Modaic agent with optimization methods
- **TweetGeneratorModule**: DSPy module for generating/improving tweets
- **TweetEvaluatorModule**: DSPy module for structured evaluation
- **HillClimbingOptimizer**: Iterative improvement algorithm

## License

MIT License - see the [LICENSE](LICENSE) file for details.

## Credits

This Modaic agent implementation is based on the original DSPy Tweet Optimizer by [tom-doerr](https://github.com/tom-doerr/dspy-tweet-optimizer), licensed under MIT. The original project provided the foundation, including:

- Core DSPy modules (TweetGeneratorModule, TweetEvaluatorModule)
- Hill-climbing optimization algorithm
- CLI interface and utilities
- Comprehensive testing framework

**Original Author**: Tom Doerr ([@tom-doerr](https://github.com/tom-doerr))

**Original Repository**: [dspy-tweet-optimizer](https://github.com/tom-doerr/dspy-tweet-optimizer)

### Modifications for Modaic

- Packaged as a Modaic PrecompiledAgent
- Added hub deployment functionality
- Enhanced configuration options
- Maintained CLI compatibility
- Extended usage examples