|
103 | 103 | - [Stable vs Develop](#stable-vs-develop) |
104 | 104 | - [Release Schedule](#release-schedule) |
105 | 105 | - [Threads vs Threadless](#threads-vs-threadless) |
| 106 | + - [Threadless Remote vs Local Execution Mode](#threadless-remote-vs-local-execution-mode) |
106 | 107 | - [SyntaxError: invalid syntax](#syntaxerror-invalid-syntax) |
107 | 108 | - [Unable to load plugins](#unable-to-load-plugins) |
108 | 109 | - [Unable to connect with proxy.py from remote host](#unable-to-connect-with-proxypy-from-remote-host) |
|
115 | 116 | - [High level architecture](#high-level-architecture) |
116 | 117 | - [Everything is a plugin](#everything-is-a-plugin) |
117 | 118 | - [Internal Documentation](#internal-documentation) |
| 119 | + - [Read The Doc](#read-the-doc) |
| 120 | + - [pydoc](#pydoc) |
| 121 | + - [pyreverse](#pyreverse) |
118 | 122 | - [Development Guide](#development-guide) |
119 | 123 | - [Setup Local Environment](#setup-local-environment) |
120 | 124 | - [Setup Git Hooks](#setup-git-hooks) |
|
132 | 136 | - Fast & Scalable |
133 | 137 |
|
134 | 138 | - Scale up by using all available cores on the system |
135 | | - - Use `--num-acceptors` flag to control number of cores |
136 | 139 |
|
137 | 140 | - Threadless executions using asyncio |
138 | | - - Use `--threaded` for synchronous thread based execution mode |
139 | 141 |
|
140 | 142 | - Made to handle `tens-of-thousands` connections / sec |
141 | 143 |
|
|
186 | 188 | [200] 100000 responses |
187 | 189 | ``` |
188 | 190 |
|
| 191 | + Consult [Threads vs Threadless](#threads-vs-threadless) and [Threadless Remote vs Local Execution Mode](#threadless-remote-vs-local-execution-mode) to control the number of CPU cores utilized. |
| 192 | + |
189 | 193 | See [Benchmark](https://github.com/abhinavsingh/proxy.py/tree/develop/benchmark#readme) for more details and for how to run benchmarks locally. |
190 | 194 |
|
191 | 195 | - Lightweight |
@@ -1689,11 +1693,24 @@ optional arguments: |
1689 | 1693 |
|
1690 | 1694 | ## Internal Documentation |
1691 | 1695 |
|
1692 | | -Code is well documented. You have a few options to browse the internal class hierarchy and documentation: |
| 1696 | +### Read The Doc |
| 1697 | + |
| 1698 | +- Visit [proxypy.readthedocs.io](https://proxypy.readthedocs.io/) |
| 1699 | +- Build locally using: |
| 1700 | + |
| 1701 | +`make lib-doc` |
| 1702 | + |
| 1703 | +### pydoc |
| 1704 | + |
| 1705 | +Code is well documented. Grab the source code and run: |
| 1706 | + |
| 1707 | +`pydoc3 proxy` |
| 1708 | + |
| 1709 | +### pyreverse |
| 1710 | + |
| 1711 | +Generate class-level UML hierarchy diagrams for in-depth analysis: |
1693 | 1712 |
|
1694 | | -1. Visit [proxypy.readthedocs.io](https://proxypy.readthedocs.io/) |
1695 | | -2. Build and open docs locally using `make lib-doc` |
1696 | | -2. Use `pydoc3` locally using `pydoc3 proxy` |
| 1713 | +`make lib-pyreverse` |
1697 | 1714 |
|
1698 | 1715 | # Run Dashboard |
1699 | 1716 |
|
@@ -1893,6 +1910,20 @@ For `windows` and `Python < 3.8`, you can still try out threadless mode by start |
1893 | 1910 |
|
1894 | 1911 | If threadless works for you, consider sending a PR by editing `_env_threadless_compliant` method in the `proxy/common/constants.py` file. |
1895 | 1912 |
|
| 1913 | +## Threadless Remote vs Local Execution Mode |
| 1914 | + |
| 1915 | +The original threadless implementation used the `remote` execution mode, which is also depicted as ASCII art under [High level architecture](#high-level-architecture). |
| 1916 | + |
| 1917 | +Under `remote` execution mode, acceptors delegate incoming client connections to remote worker processes, by default in round-robin fashion. The worker processing a request may or may not be running on the same CPU core as the acceptor. This architecture scales well for high throughput, but results in spawning two processes per CPU core. |
| 1918 | + |
| 1919 | +For example, if there are N CPUs on the machine, by default N acceptors and N worker processes are started. You can tune the number of processes using the `--num-acceptors` and `--num-workers` flags. Depending upon your use case, you might want more workers than acceptors, or vice versa. |
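The round-robin delegation described above can be sketched as follows. This is a toy illustration with hypothetical `Acceptor`/`Worker` names, not proxy.py's actual implementation, which sends the accepted socket descriptor to a separate worker process:

```python
from itertools import cycle


class Worker:
    """Stand-in for a remote worker process."""

    def __init__(self, name):
        self.name = name
        self.handled = []

    def handle(self, conn):
        self.handled.append(conn)


class Acceptor:
    """Toy acceptor that hands each new connection to workers in round-robin order."""

    def __init__(self, workers):
        self._workers = cycle(workers)

    def delegate(self, conn):
        worker = next(self._workers)
        worker.handle(conn)
        return worker


workers = [Worker('worker-%d' % i) for i in range(3)]
acceptor = Acceptor(workers)
for conn_id in range(6):
    acceptor.delegate(conn_id)

# Each worker receives every 3rd connection.
print([w.handled for w in workers])  # → [[0, 3], [1, 4], [2, 5]]
```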
| 1920 | + |
| 1921 | +In v2.4.x, `local` execution mode was added, mainly to reduce the number of processes spawned by default. This model serves well for day-to-day single-user use cases and for developer testing scenarios. Under `local` execution mode, acceptors delegate client connections to a companion thread instead of a remote process. `local` execution mode also ensures CPU affinity, unlike `remote` mode, where the acceptor and worker might be running on different CPU cores. |
| 1922 | + |
| 1923 | +`--local-executor 1` was made the default in the v2.4.x series. Under `local` execution mode, the `--num-workers` flag has no effect, as no remote workers are started. |
| 1924 | + |
| 1925 | +To use `remote` execution mode, pass the `--local-executor 0` flag, then use `--num-workers` to tune the number of worker processes. |
| 1926 | + |
1896 | 1927 | ## SyntaxError: invalid syntax |
1897 | 1928 |
|
1898 | 1929 | `proxy.py` is strictly typed and uses Python `typing` annotations. Example: |
|