
Commit 709022b

Merge branch 'main' into inline-comment-ui
2 parents 04fb43c + ab6bf35

31 files changed: +661 -30 lines

docs/README.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 
 <center>
 <img class="header-img" src="assets/header-getting-started.png" alt="Getting Started Header Image" >
-<p class="img-credit"> Image Credit: <a href="https://thenounproject.com/creator/ifkirianto.if" target="_blank" title="Iki">Iki</a> | <a href='mailto:info@ml5js.org'>Contribute ♥️</a> </p>
+<p class="img-credit"> Image Credit: <a href="https://thenounproject.com/creator/ifkirianto.if" target="_blank" title="Iki">Iki</a> | <a href='https://forms.gle/5EpwYabG8hLn4p926' target="contribute-form">Contribute ♥️</a> </p>
 </center>
 
 Welcome! We're going to walk through how to start using ml5.js by creating a simple image classification program.

docs/contributing/develop-contributor-notes.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 
 <center>
 <img class="header-img" src="assets/header-contributor-notes.png" alt="Develop Contributor Notes Header Image" >
-<p class="img-credit"> Image Credit: <a href="https://thenounproject.com/creator/fajarstudio/" target="_blank" title="Fajar Studio">Fajar Studio</a> | <a href='mailto:info@ml5js.org'>Contribute ♥️</a> </p>
+<p class="img-credit"> Image Credit: <a href="https://thenounproject.com/creator/fajarstudio/" target="_blank" title="Fajar Studio">Fajar Studio</a> | <a href='https://forms.gle/5EpwYabG8hLn4p926' target="contribute-form">Contribute ♥️</a> </p>
 </center>
 
 _last updated: 10, July 2024 [(source)](https://github.com/ml5js/ml5-next-gen/blob/main/CONTRIBUTING.md)_

docs/contributing/how-to-contribute.md

Lines changed: 8 additions & 6 deletions
@@ -2,19 +2,21 @@
 
 <center>
 <img class="header-img" src="assets/header-how-to-contribute.png" alt="How To Contribute Header Image" >
-<p class="img-credit"> Image Credit: <a href="https://thenounproject.com/creator/hassan2959/" target="_blank" title="hassan grewal">hassan grewal</a> | <a href='mailto:info@ml5js.org'>Contribute ♥️</a> </p>
+<p class="img-credit"> Image Credit: <a href="https://thenounproject.com/creator/hassan2959/" target="_blank" title="hassan grewal">hassan grewal</a> | <a href='https://forms.gle/5EpwYabG8hLn4p926' target="contribute-form">Contribute ♥️</a> </p>
 </center>
 
 Our community is always looking for enthusiasts to help in all different ways.
 
-- **Development**. [GitHub](https://github.com/ml5js/ml5-library) is the main place where code is collected, issues are documented, and discussions about code are had. Check out the [Contributor Notes](/contributing/develop-contributor-notes.md) to get started with developing the ml5js library and website.
+- **Development**. [GitHub](https://github.com/ml5js/ml5-library) is the main place where code is collected, issues are documented, and code discussions take place. Check out the [Contributor Notes](/contributing/develop-contributor-notes.md) to get started with developing the ml5js library and website.
 
 - **Documentation**. Quality documentation is crucial for the usability of our library. You can help by writing, updating, or translating documentation to ensure it's accessible to all users. We are also open for examples, demos, and tutorials that guide users through the features of each model. Refer to the [Contributor Notes](/contributing/documentation-contributor-notes.md) for more information.
 
-- **Glossary**. We aim to maintain a glossary of terms to help users understand key concepts and terminology related to machine learning and ml5js. This glossary is designed to be editable by any ml5 user. Add new terms or update existing ones by submitting the [ml5 Glossary Contribution Form](https://docs.google.com/forms/d/e/1FAIpQLSdPz0ICzTSVdLAteIKwJ-zFzX6dX5l3dOpjWGzm6LIZutKvlA/viewform)!
+- **Community Share**. You are encouraged to share your projects, tutorials, events, ideas, and stories within the ml5 community! You can submit via the [Community Contribution Form](https://forms.gle/5EpwYabG8hLn4p926)!
 
-- **Community Share**. You are encouraged to share your projects, tutorials, ideas, and stories within the ml5 community! You can submit via the [Community Contribution Form](https://docs.google.com/forms/d/e/1FAIpQLSdPz0ICzTSVdLAteIKwJ-zFzX6dX5l3dOpjWGzm6LIZutKvlA/viewform), tag [@ml5js on Twitter](https://twitter.com/ml5js?lang=en), or share in our [Discord forum](https://discord.gg/sUtHWmgg)!
+- **Glossary**. We aim to maintain a glossary of terms to help users understand key concepts and terminology related to machine learning and ml5js. This glossary is designed to be editable by any ml5 user. Add new terms or update existing ones by submitting the same [Contribution Form](https://forms.gle/5EpwYabG8hLn4p926) and selecting *ml5 Glossary*!
 
-- **Illustration**. Contribute to the visual appeal of the ml5 website by creating header images for our documentation or the hero sketch on the homepage. If your are intersted, email your work to <a href="mailto:info@ml5js.org">info@ml5js.org</a>!
 
-We welcome all forms of socially and technically driven contributions. No contribution is too small. If you need more information, please contact us at [@ml5js on twitter](https://twitter.com/ml5js), <a href="mailto:info@ml5js.org">info@ml5js.org</a>, [Discord](https://discord.gg/sUtHWmgg) or [Github](https://github.com/ml5js/ml5-library/issues).
+
+- **Illustration**. Contribute to the visual appeal of the ml5 website by creating header images for our documentation or the hero sketch on the homepage. If you are interested, email your work to <a href="mailto:info@ml5js.org">info@ml5js.org</a>!
+
+
+We welcome all forms of socially and technically driven contributions. No contribution is too small. If you need more information, please contact us at <a href="mailto:info@ml5js.org">info@ml5js.org</a> or via [Github](https://github.com/ml5js/ml5-library/issues).

docs/css/style-markdown.css

Lines changed: 8 additions & 8 deletions
@@ -450,14 +450,14 @@ body.close .sidebar-toggle {
 }
 
 .markdown-section p.tip {
-  color: var(--color-text-light);
-  background-color: var(--color-bg-code);
-  /* border-bottom-right-radius: 2px; */
-  /* border-top-right-radius: 2px; */
-  border-radius: var(--border-radius);
-  margin: -1rem 0;
-  padding: 0.75rem 1.5rem 0.75rem 1.875rem;
-  position: relative;
+  color: var(--color-text-light);
+  background-color: var(--color-bg-code);
+  /* border-bottom-right-radius: 2px; */
+  /* border-top-right-radius: 2px; */
+  border-radius: var(--border-radius);
+  margin: -2rem 0;
+  padding: 0.75rem 1.5rem 0.75rem 1.875rem;
+  position: relative;
 }
 
 .markdown-section p.tip:before {

docs/reference/body-segmentation.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 
 <center>
 <img class="header-img" src="assets/header-body-segmentation.png" alt="BodySegmentation Header Image" >
-<p class="img-credit"> Image Credit: <a href="https://thenounproject.com/creator/ibrandify/" target="_blank" title="ibrandify">ibrandify</a> | <a href='mailto:info@ml5js.org'>Contribute ♥️</a> </p>
+<p class="img-credit"> Image Credit: <a href="https://thenounproject.com/creator/ibrandify/" target="_blank" title="ibrandify">ibrandify</a> | <a href='https://forms.gle/5EpwYabG8hLn4p926' target="contribute-form">Contribute ♥️</a> </p>
 </center>
 
 ## Description

docs/reference/bodypose.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 
 <center>
 <img class="header-img" src="assets/header-bodypose.png" alt="BodyPose Header Image" >
-<p class="img-credit"> Image Credit: <a href="https://thenounproject.com/creator/sentyairma1/" target="_blank" title="sentya irma">sentya irma</a> | <a href='mailto:info@ml5js.org'>Contribute ♥️</a> </p>
+<p class="img-credit"> Image Credit: <a href="https://thenounproject.com/creator/sentyairma1/" target="_blank" title="sentya irma">sentya irma</a> | <a href='https://forms.gle/5EpwYabG8hLn4p926' target="contribute-form">Contribute ♥️</a> </p>
 </center>
 
 ## Description

docs/reference/depthestimation.md

Lines changed: 208 additions & 0 deletions
@@ -0,0 +1,208 @@ (new file)

# DepthEstimation

## Description

The ml5.js DepthEstimation module offers a pretrained model for inferring depth maps **from images of people**, estimating the distance between each pixel and the camera that captured the image. The model used is TensorFlow's [AR Portrait Depth](https://blog.tensorflow.org/2022/05/portrait-depth-api-turning-single-image.html), which is designed specifically for portrait images and does not perform well on other types of subjects.
## Quick Start

Get up and running with the [webcam example](https://editor.p5js.org/nasif-co/sketches/Pep6DjEtD), which shows a real-time depth map estimated from the webcam video.

<br/>

[DEMO](iframes/depthestimation ":include :type=iframe width=100% height=550px")

## Examples

- [Webcam](https://editor.p5js.org/nasif-co/sketches/Pep6DjEtD): Show the live depth map of the video captured by the webcam.
- [Video](https://editor.p5js.org/nasif-co/sketches/vifmzXg6o): Generate the depth map of a video file as it plays.
- [Single Image](https://editor.p5js.org/nasif-co/sketches/_TcZofgrt): Depth map of an image using single-shot estimation.
- [Mask Background](https://editor.p5js.org/nasif-co/sketches/Z_1xMhUPl): Showcases how to mask out the background from the depth result.
- [Point Cloud](https://editor.p5js.org/nasif-co/sketches/VbT8hEoDz): Creates a live 3D point cloud visualization of the webcam video.
- [Mesh](https://editor.p5js.org/nasif-co/sketches/X-e1DEZr4): Creates a live 3D mesh geometry of the webcam video.
## Step-by-Step Guide

### Initialization and options

Before starting, make sure you have included the ml5 library in your `index.html` file:

```html
<script src="https://unpkg.com/ml5@1/dist/ml5.js"></script>
```

?> For more information on importing the ml5 library, check out the [Getting Started](/?id=set-up-ml5js) page.

Create an instance of `ml5.depthEstimation` in your `preload()` function so the model loads before your sketch starts:

```js
function preload() {
  depthEstimator = ml5.depthEstimation({
    // Use options here to configure how the model behaves.
    // See the full list of options in the 'Methods' section of this reference page.
  });
}
```

For the full list of options, check out the [methods section](#ml5depthestimation) below!

#### p5.js 2.0

You can also use this module with p5.js 2.0! Instead of creating `ml5.depthEstimation` in `preload()`, do it in an async `setup()` with `await`:

```js
async function setup() {
  // Load the depth estimation model
  depthEstimator = await ml5.depthEstimation({
    // Options go here
  });

  // The rest of your setup goes here
}
```
### Estimating depth

As with many other ml5 models, you have two options to run depth estimation on the image, video, or webcam of your choice: _Continuous Estimation_ and _Single Shot Estimation_.

For either of these, make sure you first load the image or video, or start the webcam capture. This is the media we will pass to the model.

#### Continuous estimation

This method continuously estimates depth on every frame of a video or webcam feed.

```js
// Make sure to load the model in preload (or an async setup in p5.js 2.0)!
function setup() {
  // Create the video capture element
  webcam = createCapture(VIDEO);

  // Start continuous depth estimation on the webcam feed
  depthEstimator.estimateStart(webcam, gotResults);
}

function gotResults(result) {
  // The most recent depth map is in the result object!
}
```

Using this method, the depth estimator takes care of running estimation on a frame and waiting for it to finish before working on the next one. Any time a depth map is ready, it fires the callback function to deliver it.

#### Single shot estimation

This method estimates depth once, for a single image:

```js
// Make sure to load the image and the model in preload (or an async setup in p5.js 2.0)!
function setup() {
  // Estimate depth from the loaded image
  depthEstimator.estimate(img, gotResults);
}

function gotResults(result) {
  // The depth map is in the result object!
}
```

Because the estimation takes time, we pass in a callback function that fires when the estimation is ready. The `estimate` method is called in `setup()` because it **will only run once**. If calling it multiple times, wait for each operation to finish before starting the next one.
### Using the depth result

Whenever the callback function fires, we have access to the depth result, which contains all the depth information.

```js
let depthMap;

function gotResults(result) {
  // Save the depth result in a variable
  depthMap = result;
}
```

The `result` is a `DepthEstimationResult` object that contains the depth map and other relevant data. Save it to a variable so you can use it inside the p5 `draw()` loop!

For more information on the structure and data contained in the result, check out [DepthEstimationResult Structure](#depthestimationresult) below.
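Putting the steps above together, a minimal end-to-end sketch might look like this (the canvas size and the `webcam.hide()` call are illustrative choices, not requirements):

```js
let depthEstimator;
let webcam;
let depthMap;

function preload() {
  // Load the model with default options
  depthEstimator = ml5.depthEstimation();
}

function setup() {
  createCanvas(640, 480);

  // Start the webcam and hide the default video element
  webcam = createCapture(VIDEO);
  webcam.hide();

  // Continuously estimate depth on the webcam feed
  depthEstimator.estimateStart(webcam, gotResults);
}

function gotResults(result) {
  // Keep the latest DepthEstimationResult around for draw()
  depthMap = result;
}

function draw() {
  background(0);

  // Draw the depth map once the first result has arrived
  if (depthMap) {
    image(depthMap.image, 0, 0, width, height);
  }
}
```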
## Methods

### ml5.depthEstimation()

This method initializes the depth estimation object.

In p5.js 1.x, use it inside the `preload()` function:

```js
const depthEstimator = ml5.depthEstimation(?options);
```

In p5.js 2.0, use it in an `async setup()`:

```js
const depthEstimator = await ml5.depthEstimation(?options);
```

**Options:**
- `flipHorizontal`: Whether to mirror the depth map horizontally.
  - Default: `false`
  - Accepted values: `true`, `false` (boolean).
- `dilationFactor`: Sets how many pixels around the detected edges of a person should be ignored. This is useful because depth values are inaccurate and noisy around the contours.
  - Default: `4`
  - Accepted values: `0` to `10` (integer).
- `colormap`: Defines how the depth map is drawn: either grayscale, mapping depth from black (far) to white (close), or color, mapping depth across the whole range of hues.
  - Default: `'GRAYSCALE'`
  - Accepted values: `'GRAYSCALE'` or `'COLOR'` (string).
- `minDepth`: Sets the depth value that will map to the 'close' color.
  - Default: `0.2`
  - Accepted values: `0` to `1` (float). Must be less than `maxDepth`.
- `maxDepth`: Sets the depth value that will map to the 'far' color.
  - Default: `0.75`
  - Accepted values: `0` to `1` (float). Must be greater than `minDepth`.
- `normalizeDynamically`: Whether to map depth values using the fixed `minDepth` and `maxDepth` limits, or dynamically, recording the lowest and highest values detected in the depth map on every frame and using those as the mapping limits. With dynamic normalization, a given color will not always represent the same absolute distance from the camera.
  - Default: `false`
  - Accepted values: `true`, `false` (boolean). Setting this to `true` will ignore the `minDepth` and `maxDepth` options.
- `normalizationSmoothingFactor`: Only used when normalizing dynamically. Sets how much to smooth the varying maximum and minimum depth values detected during normalization. Higher values react faster to changes; lower values change more smoothly.
  - Default: `0.5`
  - Accepted values: `0` to `1` (float).

**Returns:**

- **Object**: A `depthEstimation` object that contains the methods to run estimation.
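As a concrete illustration, several of the documented options can be combined like this (the specific values are arbitrary choices, not recommendations):

```js
function preload() {
  depthEstimator = ml5.depthEstimation({
    flipHorizontal: true, // mirror the map to match a mirrored webcam view
    colormap: 'COLOR',    // draw depth across the full hue range
    minDepth: 0.1,        // arbitrary: depth value mapped to the 'close' color
    maxDepth: 0.9,        // arbitrary: depth value mapped to the 'far' color
    dilationFactor: 6,    // arbitrary: ignore a wider band around person edges
  });
}
```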
### depthEstimator.estimateStart()

This method is used for _Continuous Estimation_: estimating depth on a video or webcam feed continuously, frame by frame. Calling it initiates an estimation loop that runs until `depthEstimator.estimateStop()` is called.

```js
depthEstimator.estimateStart(media, callback);
```

**Parameters:**

- **media**: An HTML or p5.js image, video, or canvas element to continuously estimate a depth map for.
- **callback(result)**: A callback function that will be called *every time* an estimation result is available. The `result` is a `DepthEstimationResult` object; see the section below for details on its structure.

### depthEstimator.estimateStop()

This method stops an estimation loop that was previously started by a call to `depthEstimator.estimateStart()`.

```js
depthEstimator.estimateStop();
```
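A common pattern is to pair the two calls to pause and resume estimation. A small illustrative sketch fragment, assuming the `webcam` and `gotResults` names from the examples above:

```js
let estimating = true;

// Toggle depth estimation on and off with any key press
function keyPressed() {
  if (estimating) {
    depthEstimator.estimateStop();
  } else {
    depthEstimator.estimateStart(webcam, gotResults);
  }
  estimating = !estimating;
}
```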
### depthEstimator.estimate()

This method is used for _Single Shot Estimation_: estimating depth one time, on a single image or video/webcam frame.

```js
depthEstimator.estimate(media, callback);
```

**Parameters:**

- **media**: An HTML or p5.js image, video, or canvas element to estimate a depth map for.
- **callback(result)**: A callback function that will be called when the estimation is ready. The `result` is a `DepthEstimationResult` object; see the section below for details on its structure.
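For a complete single-shot example, load both the image and the model in `preload()`. A minimal sketch of this (the image path is hypothetical):

```js
let depthEstimator;
let img;
let depthMap;

function preload() {
  img = loadImage('portrait.jpg'); // hypothetical asset path
  depthEstimator = ml5.depthEstimation();
}

function setup() {
  createCanvas(640, 480);

  // Run estimation once on the loaded image
  depthEstimator.estimate(img, gotResults);
}

function gotResults(result) {
  depthMap = result;
}

function draw() {
  // Show the source image until the depth map is ready, then the depth map
  image(depthMap ? depthMap.image : img, 0, 0, width, height);
}
```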
### DepthEstimationResult

This is the object passed as an argument to the callback functions of `depthEstimator.estimateStart()` and `depthEstimator.estimate()`. It contains the result of the depth estimation process along with other useful data.

These are its properties:

- `image`: A p5 image of the depth map in the chosen colormap.
  - Type: `p5.Image` object.
- `getDepthAt(x, y)`: Function that returns the depth value of the pixel at `(x, y)`.
  - Type: Function.
  - Returns: Floating point number in the 0 to 1 range.
- `data`: The raw depth values for each pixel, as a two-dimensional array.
  - Type: 2D array of floating point numbers in the 0 to 1 range.
- `mask`: The mask of the people detected in the image and for whom depth values were estimated. It can be used directly with the `mask()` function in p5.
  - Type: `p5.Image` object.
- `sourceFrame`: The exact frame that was analyzed to create the depth map. Because depth estimation is not immediate, the result can fall out of sync with the source video. By using this value instead of the video, the depth data is guaranteed to be in sync. See a [demo sketch](https://editor.p5js.org/nasif-co/sketches/Z_1xMhUPl) showcasing the difference.
  - Type: `p5.Image` object.
- `width`: Width of the depth map.
  - Type: number (integer).
- `height`: Height of the depth map.
  - Type: number (integer).
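To make the properties concrete, here is a sketch fragment that reads the depth value under the mouse (it assumes `depthMap` holds the latest result, as in the earlier examples):

```js
function draw() {
  if (!depthMap) return;

  // Draw the colormapped depth image
  image(depthMap.image, 0, 0);

  // Read the raw depth value under the mouse, guarding the bounds
  if (mouseX >= 0 && mouseX < depthMap.width && mouseY >= 0 && mouseY < depthMap.height) {
    const d = depthMap.getDepthAt(mouseX, mouseY); // float in the 0-1 range
    fill(255);
    text(nf(d, 1, 2), mouseX + 10, mouseY);
  }
}
```

Similarly, `depthMap.mask` can be applied to `depthMap.sourceFrame` with p5's `mask()` function to cut the background away, as in the Mask Background example above.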
## Learn more

Check out the community article [Finding the Z-axis](https://ml5js.org/blog/bringing-depth-estimation/) to learn more about the way depth estimation was implemented in ml5.

docs/reference/facemesh.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 
 <center>
 <img class="header-img" src="assets/header-facemesh.png" alt="FaceMesh Header Image" >
-<p class="img-credit"> Image Credit: <a href="https://thenounproject.com/creator/pglen/" target="_blank" title="Paweł Gleń">Paweł Gleń</a> | <a href='mailto:info@ml5js.org'>Contribute ♥️</a> </p>
+<p class="img-credit"> Image Credit: <a href="https://thenounproject.com/creator/pglen/" target="_blank" title="Paweł Gleń">Paweł Gleń</a> | <a href='https://forms.gle/5EpwYabG8hLn4p926' target="contribute-form">Contribute ♥️</a> </p>
 </center>
 
 ## Description

docs/reference/handpose.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 
 <center>
 <img class="header-img" src="assets/header-handpose.png" alt="HandPose Header Image" >
-<p class="img-credit"> Image Credit: <a href="https://thenounproject.com/creator/dinosoftlab/" target="_blank" title="DinosoftLabs">DinosoftLabs</a> | <a href='mailto:info@ml5js.org'>Contribute ♥️</a> </p>
+<p class="img-credit"> Image Credit: <a href="https://thenounproject.com/creator/dinosoftlab/" target="_blank" title="DinosoftLabs">DinosoftLabs</a> | <a href='https://forms.gle/5EpwYabG8hLn4p926' target="contribute-form">Contribute ♥️</a> </p>
 </center>
 
 ## Description
