<div align="center">
  <h1 align="center">Agents Evaluate Agents</h1>
  <img src="assets/devai_logo.png" alt="DevAI Logo" width="150" height="150">
  <p align="center">
    <a href="https://devai.tech"><b>Project</b></a> |
    <a href="https://huggingface.co/DEVAI-benchmark"><b>Dataset</b></a> |
    <a href="https://arxiv.org/pdf/2410.10934"><b>Paper</b></a>
  </p>
</div>

> [!NOTE]
> Current evaluation techniques are often inadequate for advanced **agentic systems**: they focus on final outcomes and require labor-intensive manual review. To overcome this limitation, we introduce the **Agent-as-a-Judge** framework.

## 🤠 Features

Agent-as-a-Judge offers two key advantages:

- **Automated Evaluation**: Agent-as-a-Judge can evaluate tasks during or after execution, saving 97.72% of the time and 97.64% of the cost compared to human experts.
- **Reward Signals**: It provides continuous, step-by-step feedback that can be used as reward signals for further agentic training and improvement (see the sketch below).

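To make the reward-signal idea concrete, here is a minimal sketch (ours, not the repository's API) of how per-requirement judge verdicts could be reduced to a scalar reward; the `Verdict` schema is hypothetical:

```python
# Minimal sketch (not the repository's API): turns hypothetical per-requirement
# judge verdicts into a scalar reward usable for agent training.
from dataclasses import dataclass


@dataclass
class Verdict:
    requirement: str  # e.g. "R1: loads the GDSC dataset"
    satisfied: bool   # the judge's binary decision for this requirement


def reward(verdicts: list[Verdict]) -> float:
    """Fraction of requirements the judge marked as satisfied."""
    if not verdicts:
        return 0.0
    return sum(v.satisfied for v in verdicts) / len(verdicts)


print(reward([Verdict("R1: data is loaded", True),
              Verdict("R2: an SVM is trained", True),
              Verdict("R3: metrics are saved", False)]))  # prints 0.666...
```
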
<div align="center">
  <img src="assets/demo.gif" alt="Demo GIF" style="width: 100%; max-width: 650px;">
</div>
<div align="center">
  <img src="assets/judge_first.png" alt="AaaJ" style="width: 95%; max-width: 650px;">
</div>

## 🎮 Quick Start

### 1. Install

```bash
git clone https://github.com/metauto-ai/agent-as-a-judge.git
cd agent-as-a-judge/
conda create -n aaaj python=3.11
conda activate aaaj
pip install poetry
poetry install
```

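To sanity-check the installation, you can ask Poetry to describe the environment it resolved (optional; these are standard Poetry commands):

```bash
poetry env info  # confirms the active environment and Python version
poetry show      # lists the dependencies Poetry installed
```
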
### 2. Set up LLM APIs

Before running, rename `.env.sample` to `.env` in the repository root and fill in the **required API keys and settings** to enable LLM calls. Calls are routed through `LiteLLM`, which supports a wide range of LLM providers.

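For illustration, a filled-in `.env` might look like the sketch below; the exact variable names are defined in `.env.sample`, and the keys shown here are just the standard `LiteLLM` provider variables:

```bash
# Example only: copy .env.sample and check it for the exact variables this repo reads.
OPENAI_API_KEY=sk-...          # standard LiteLLM variable for OpenAI models
ANTHROPIC_API_KEY=sk-ant-...   # standard LiteLLM variable for Anthropic models
```
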
### 3. Run

> [!TIP]
> See the [usage scripts](scripts/README.md) for more comprehensive examples.

#### Usage A: **Ask Anything** for any workspace

```bash
PYTHONPATH=. python scripts/run_ask.py \
  --workspace $(pwd)/benchmark/workspaces/OpenHands/39_Drug_Response_Prediction_SVM_GDSC_ML \
  --question "What does this workspace contain?"
```

This [example](assets/ask_sample.md) shows how **Ask Anything** works.

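Since **Ask Anything** works on any workspace, you can point `--workspace` at your own project; the path and question below are placeholders:

```bash
PYTHONPATH=. python scripts/run_ask.py \
  --workspace /path/to/your/project \
  --question "Which file trains the model?"
```
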
#### Usage B: **Agent-as-a-Judge** for **DevAI**

```bash
PYTHONPATH=. python scripts/run_aaaj.py \
  --developer_agent "OpenHands" \
  --setting "black_box" \
  --planning "efficient (no planning)" \
  --benchmark_dir $(pwd)/benchmark
```

💡 This [example](assets/aaaj_sample.md) shows how **Agent-as-a-Judge** collects evidence for judging.

## 🤗 DevAI Dataset

<div align="center">
  <img src="assets/dataset.png" alt="Dataset" style="width: 100%; max-width: 600px;">
</div>

> [!IMPORTANT]
> As a **proof-of-concept**, we applied **Agent-as-a-Judge** to code generation tasks using **DevAI**, a benchmark consisting of 55 realistic AI development tasks with 365 hierarchical user requirements. The results demonstrate that **Agent-as-a-Judge** significantly outperforms traditional evaluation methods, delivering reliable reward signals for scalable self-improvement in agentic systems.
>
> Check out the dataset on [Hugging Face 🤗](https://huggingface.co/DEVAI-benchmark).
> See how to use this dataset in the [guidelines](benchmark/devai/README.md).

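To pull the benchmark files locally, the sketch below uses `huggingface_hub`; the exact dataset id under the `DEVAI-benchmark` organization is an assumption here, so check the guidelines above for the canonical one:

```python
# Sketch: download the DevAI benchmark from the Hugging Face Hub.
# The repo_id is an assumption; see benchmark/devai/README.md for the real identifier.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="DEVAI-benchmark/DEVAI",  # hypothetical id under the DEVAI-benchmark org
    repo_type="dataset",
)
print(f"DevAI files downloaded to {local_dir}")
```
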
<!-- <div align="center">
  <img src="assets/sample.jpeg" alt="Sample" style="width: 100%; max-width: 600px;">
</div> -->

## Reference

Feel free to cite if you find the Agent-as-a-Judge concept useful for your work:

```bibtex
@article{zhuge2024agent,
  title={Agent-as-a-Judge: Evaluate Agents with Agents},
  author={Zhuge, Mingchen and Zhao, Changsheng and Ashley, Dylan and Wang, Wenyi and Khizbullin, Dmitrii and Xiong, Yunyang and Liu, Zechun and Chang, Ernie and Krishnamoorthi, Raghuraman and Tian, Yuandong and Shi, Yangyang and Chandra, Vikas and Schmidhuber, J{\"u}rgen},
  journal={arXiv preprint arXiv:2410.10934},
  year={2024}
}
```