**Experimental Feature**: Dynamic pipelines are currently an experimental feature. There are known issues and limitations, and the interface is subject to change. This feature is only supported by the `local`, `kubernetes`, `sagemaker`, and `vertex` orchestrators. If you encounter any issues or have feedback, please let us know at [https://github.com/zenml-io/zenml/issues](https://github.com/zenml-io/zenml/issues).
{% endhint %}

{% hint style="info" %}
@@ -180,6 +180,39 @@ def unmapped_example():
    consumer.map(a=a, b=unmapped(b))
```

#### Unpacking mapped outputs

If a mapped step returns multiple outputs, you can split them into separate lists (one per output) using `unpack()`. This returns a tuple of lists of artifact futures, aligned by mapped invocation.

```python
from zenml import pipeline, step

@step
def create_int_list() -> list[int]:
    return [1, 2]

@step
def compute(a: int) -> tuple[int, int]:
    return a * 2, a * 3

@pipeline(dynamic=True)
def map_pipeline():
    ints = create_int_list()
    results = compute.map(a=ints)  # Map over [1, 2]

    # Unpack per-output across all mapped invocations
    double, triple = results.unpack()

    # Each element is an ArtifactFuture; load to get concrete values
    doubles = [f.load() for f in double]  # [2, 4]
    triples = [f.load() for f in triple]  # [3, 6]
```

Notes:
- `results` is a future that refers to all outputs of all steps, and `unpack()` works for both `.map(...)` and `.product(...)` (see the sketch below).
- Each list contains future objects that refer to a single artifact.
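
For illustration, here is a minimal sketch of unpacking a `.product(...)` call. The step and pipeline names are invented, and it assumes `product` accepts keyword-argument lists like `map` and runs the step once per element of their cross-product:

```python
from zenml import pipeline, step

@step
def combine(a: int, b: int) -> tuple[int, int]:
    return a * b, a + b

@pipeline(dynamic=True)
def product_pipeline():
    # Assumed semantics: run `combine` once per (a, b) pair in the
    # cross-product of the two lists (four invocations in total)
    results = combine.product(a=[1, 2], b=[10, 20])

    # unpack() splits the results per-output, exactly as with .map(...)
    products, sums = results.unpack()
```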

### Parallel Step Execution

Dynamic pipelines support true parallel execution using `step.submit()`. This method returns a `StepRunFuture` that you can use to wait for results or pass to downstream steps:
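
For illustration, a minimal sketch of submitting two steps concurrently and passing their futures downstream. The step names are invented, and keyword-style `submit(...)` calls that mirror a normal step invocation are an assumption:

```python
from zenml import pipeline, step

@step
def score_model(seed: int) -> float:
    return seed * 0.1

@step
def pick_best(a: float, b: float) -> float:
    return max(a, b)

@pipeline(dynamic=True)
def parallel_pipeline():
    # submit() schedules the step and returns a future immediately,
    # so both invocations can run concurrently
    future_a = score_model.submit(seed=1)
    future_b = score_model.submit(seed=2)

    # Passing the futures to a downstream step preserves parallelism;
    # the downstream step waits for its inputs before running
    pick_best(a=future_a, b=future_b)
```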
@@ -276,26 +309,11 @@ When running multiple steps concurrently using `step.submit()`, a failure in one
Dynamic pipelines are currently only supported by:
- `local` orchestrator
- `kubernetes` orchestrator
- `sagemaker` orchestrator
- `vertex` orchestrator

Other orchestrators will raise an error if you try to run a dynamic pipeline with them.

### Artifact Loading

When you call `.load()` on an artifact in a dynamic pipeline, it synchronously loads the data. For large artifacts or when you want to maintain parallelism, consider passing the step outputs (future or artifact) directly to downstream steps instead of loading them.
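
As an illustration of that trade-off, here is a minimal sketch (step names invented) contrasting a synchronous `.load()` with passing the output straight through:

```python
from zenml import pipeline, step

@step
def produce_data() -> list[int]:
    return list(range(1_000_000))

@step
def summarize(data: list[int]) -> int:
    return sum(data)

@pipeline(dynamic=True)
def loading_pipeline():
    data = produce_data()

    # Synchronous: .load() materializes the full artifact in the
    # pipeline process and blocks until the data is available
    preview = data.load()[:10]

    # Usually better for large artifacts: hand the output directly to
    # the downstream step and let it load the data where it runs
    summarize(data=data)
```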