README.md: 8 additions & 0 deletions

@@ -83,6 +83,14 @@ Inspect supports many model providers including OpenAI, Anthropic, Google, Mistr

```bash
inspect eval inspect_evals/ds1000
```

- ### [ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation](src/inspect_evals/class_eval)

  Evaluates LLMs on class-level code generation with 100 tasks constructed over 500 person-hours. The study shows that LLMs perform worse on class-level tasks than on method-level tasks. GPT-4 and GPT-3.5 outperform other models; holistic generation is the best strategy for them, while incremental generation works better for other models. The study also finds that LLM performance is highly correlated with the number of tokens in the prompt.

[ClassEval](https://github.com/FudanSELab/ClassEval) is a benchmark for evaluating large language models (LLMs) on class-level code generation tasks. It contains 100 class-level Python code generation tasks, constructed over approximately 500 person-hours.

<!-- Contributors: Automatically Generated -->
Contributed by [@zhenningdavidliu](https://github.com/zhenningdavidliu)
<!-- /Contributors: Automatically Generated -->

<!-- Usage: Automatically Generated -->

## Usage

First, install the `inspect_ai` and `inspect_evals` Python packages with:
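The commands below follow the standard `inspect_evals` setup and are shown as a representative sketch rather than the project's exact pinned instructions:

```bash
# Install the Inspect framework and the community evals package
pip install inspect_ai
pip install git+https://github.com/UKGovernmentBEIS/inspect_evals
```

Then, evaluate against one or more models, for example:

```bash
# Run the ClassEval task; the model name below is illustrative
inspect eval inspect_evals/class_eval --model openai/gpt-4o
```
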
After running evaluations, you can view their logs using the `inspect view` command:

```bash
inspect view
```

If you don't want to specify the `--model` each time you run an evaluation, create a `.env` configuration file in your working directory that defines the `INSPECT_EVAL_MODEL` environment variable along with your API key. For example:
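A minimal `.env` sketch (the model and key shown are illustrative placeholders; use whichever provider you actually have credentials for):

```bash
# .env — read by Inspect from the working directory
INSPECT_EVAL_MODEL=anthropic/claude-3-5-sonnet-latest
ANTHROPIC_API_KEY=<your-anthropic-api-key>
```
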
> The bash and Python code is executed inside a Docker container, so you will need to install [Docker Engine](https://docs.docker.com/engine/install/) in order to run the evaluation.
>
> Docker containers are also used in some challenges for auxiliary services. When you run this, the containers are pulled from a public registry. The source code is still included, however, both for your reference and because some files are copied from them into the agent's container at runtime.

<!-- Options: Automatically Generated -->

## Options

You can control a variety of options from the command line. For example:
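The flags below are illustrative of standard Inspect CLI options rather than ClassEval-specific settings:

```bash
inspect eval inspect_evals/class_eval --limit 10
inspect eval inspect_evals/class_eval --max-connections 10
inspect eval inspect_evals/class_eval --temperature 0.5
```
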
See `inspect eval --help` for all available options.

<!-- /Options: Automatically Generated -->

## Dataset

See https://huggingface.co/datasets/FudanSELab/ClassEval for more information.

## Scoring

For each sample, the LLM-generated code is appended with the test cases from the dataset and executed to check whether it passes all of them. If every test case passes, the sample is scored 1; otherwise it is scored 0.

The final score for the model is the average score across all samples.
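
A rough Python sketch of this per-sample check (an illustration of the scheme described above, not the actual `inspect_evals` scorer, which executes the code in a sandboxed container):

```python
import subprocess
import sys
import tempfile


def score_sample(generated_code: str, test_code: str, timeout: int = 60) -> int:
    """Return 1 if the generated class passes every appended test case, else 0.

    Simplified sketch only: the generated code and the dataset's unittest test
    cases are concatenated into a throwaway script and run directly.
    """
    combined = f"{generated_code}\n\n{test_code}\n\nimport unittest\nunittest.main()\n"
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(combined)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=timeout
        )
    except subprocess.TimeoutExpired:
        return 0
    return 1 if result.returncode == 0 else 0


def model_score(per_sample_scores: list[int]) -> float:
    """Final model score: the mean of the per-sample 0/1 scores."""
    return sum(per_sample_scores) / len(per_sample_scores) if per_sample_scores else 0.0
```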