Conversation

@savannahostrowski
Member

I discovered that we didn't have a benchmark in here for FastAPI, so I figured I'd add one to start. This is a pretty basic example of canonical FastAPI request handling.
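
Roughly the kind of app I mean; this is a sketch rather than the exact benchmark code, and the endpoints and `Item` model here are illustrative:

```python
# Sketch of "canonical FastAPI request handling" (illustrative, not the
# benchmark's actual code): path-parameter parsing, body validation via
# Pydantic, and JSON serialization.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Item(BaseModel):
    name: str
    price: float


@app.get("/items/{item_id}")
async def read_item(item_id: int) -> dict:
    # Path parameter is parsed and validated; the response is serialized to JSON.
    return {"item_id": item_id}


@app.post("/items")
async def create_item(item: Item) -> Item:
    # The request body is validated against the Item model before the handler runs.
    return item
```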

@albertedwardson

Hi! Just passing by and wanted to put in my two cents. I'm a bit unsure about adding this to the benchmark suite, though.

pyperformance already has an async HTTP benchmark with Tornado (bm_tornado_http) that covers Python async request handling. FastAPI, by contrast, pulls in extra layers, especially Pydantic, which does its validation in Rust.
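
To illustrate that last point: in Pydantic v2, model validation is dispatched to pydantic-core, a compiled Rust extension, so a call like the one in this sketch spends its validation time outside the CPython interpreter:

```python
# Minimal illustration (not from the PR): Pydantic v2 validation runs in
# pydantic-core, a compiled Rust extension.
from pydantic import BaseModel


class Payload(BaseModel):
    user_id: int
    tags: list[str]


# Parsing, coercion, and type checking happen in Rust, not in pure Python.
payload = Payload.model_validate({"user_id": "42", "tags": ["a", "b"]})
print(payload.user_id)  # 42, coerced from the string "42"
```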

I worry it could add noise rather than useful data about Python performance: if there's ever a regression in this benchmark, it might be hard to tell what caused it. Maybe this fits better as an external benchmark rather than part of the core suite?

Not trying to nitpick, just genuinely curious: what's the goal to measure here? Compared to a minimal real-world FastAPI app, this example lacks a database or business logic and only serves semi-static responses, so I'm not sure what Python-side behavior it's meant to represent.

@savannahostrowski
Member Author

I think there are a couple of things to consider here:

  • There's existing precedent for adding popular libraries/frameworks to the benchmark suite; see Django or, as you pointed out, Tornado. FastAPI is now the most popular Python web framework, so it seems relevant to consider adding a benchmark here.
  • To your point about FastAPI's dependencies potentially causing extra noise, the same could be said for Django or Tornado, which also have their own sets of dependencies. If we see a regression, the first step is always to investigate whether it's CPython or the dependencies...but IMO, that's true for any external framework benchmark.
  • I did consider adding more complexity to this benchmark, but candidly, most benchmarks in here are very simple. The goal is to track how Python changes affect FastAPI's core request handling and async patterns (roughly the loop sketched below). If we wanted more involved benchmarking, I'd rather add separate benchmarks to measure perf on other scenarios/features.
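
For what it's worth, here's a rough sketch of the kind of measurement loop I have in mind, driving the ASGI app in-process with no sockets involved; the `app` import and the endpoint are carried over from the sketch above and aren't the benchmark's actual code:

```python
# Hypothetical driver loop (illustrative, not the benchmark's actual code):
# exercises FastAPI's routing/validation/serialization path in-process.
import asyncio
import time

import httpx

from app import app  # the FastAPI app sketched above; module name is made up


async def bench(num_requests: int = 1000) -> float:
    transport = httpx.ASGITransport(app=app)
    async with httpx.AsyncClient(transport=transport, base_url="http://bench") as client:
        start = time.perf_counter()
        for i in range(num_requests):
            response = await client.get(f"/items/{i}")
            response.raise_for_status()
        return time.perf_counter() - start


if __name__ == "__main__":
    print(f"{asyncio.run(bench()):.3f}s")
```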
