
Commit 1566563: Update README.md
1 parent 6e0d1b6


README.md

Lines changed: 30 additions & 18 deletions
@@ -9,6 +9,8 @@ Like the Dance Party repo, it is a standalone repo that is published as an [NPM
**AI for Oceans** was produced for the Hour of Code in 2019. This module provides the student experience for the 5 interactive levels in the **AI for Oceans** script at https://studio.code.org/s/oceans.

+ We have measured over one million unique [completions](https://twitter.com/codeorg/status/1385266661700288513) of the script.

![grid_comp](https://user-images.githubusercontent.com/2205926/165404102-87073dad-8d90-482a-ad68-bc475beb6b11.png)

# Design notes
@@ -19,7 +21,7 @@ These 5 levels are invoked with a "mode" (stored internally as `appMode`) parame
### `fishvtrash`

- The user trains the AI to differentiate between fish versus trash, and then examine the results.
+ The user trains the AI to differentiate between fish and trash, and then examines the results.

### `creaturesvtrashdemo`
@@ -35,7 +37,7 @@ In this mode, the user chooses from one of six adjectives and then categorizes f
### `long`

- In this mode, the user chooses from one of fifteen adjectives. With more subjectivity in this list, the user can explore more subtle implications of training and recognition.
+ In this mode, the user chooses from one of fifteen adjectives. With more subjectivity in this list, the user can explore more subtle implications of training and categorization.

## ML technology
@@ -47,24 +49,23 @@ Adapted from content at https://code.org/oceans:
## Scenes

- The **AI for Oceans** script presents a linear narrative structure. The app is designed to deliver the interactive levels for this script, one mode at a time, with no need to persist data to the browser or server between each level.
+ The **AI for Oceans** script presents a linear narrative structure. This app is designed to deliver the interactive levels for this script, one mode at a time, with no need to persist data to the browser or server between each level.

- The app itself contains a variety of "scenes", with each mode using a different subset. The scenes (known as `currentMode` internally) are as follows:
+ The app itself presents a variety of "scenes", with each mode using a different subset. The scenes (known as `currentMode` internally) are as follows:

### `loading`

<img width="1328" alt="loading" src="https://user-images.githubusercontent.com/2205926/165404296-5f5c71df-6650-476b-8ada-b4e277a25a51.png">

- A simple loading screen.
+ A simple "loading" screen, used when loading or processing data.

### `words`

<img width="1301" alt="short" src="https://user-images.githubusercontent.com/2205926/165404312-26e8ca9b-847d-4d75-81bd-97bd735a55b0.png">

<img width="1301" alt="words" src="https://user-images.githubusercontent.com/2205926/165404326-83af55e8-0aaf-4541-94b8-e6f28946a9f3.png">

- Select adjectives for the `short` & `long` modes.
+ The user selects from a list of adjectives for the `short` & `long` modes.

### `train`
@@ -84,7 +85,7 @@ The user watches A.I. (the "bot") categorizing items, one at a time.
<img width="1298" alt="pond-false" src="https://user-images.githubusercontent.com/2205926/165404481-6e36e7d2-c6db-4e69-b84c-afd28f6444ba.png">

- The user shows the results of the predictions. The user can toggle between the matching & non-matching sets. In short & long, the user can click each item to view additional information about the AI's recognition.
+ The user is shown the result of the predictions. The user can toggle between the matching & non-matching sets.

In the `short` and `long` modes, the pond also has a metapanel which can show general information about the ML processing, or, when a fish is selected, specific information about that fish's categorization:
@@ -94,27 +95,27 @@ In the `short` and `long` modes, the pond also has a metapanel which can show ge
## Graphics & UI

- The app uses two layers in the DOM. Underneath, a canvas provides the background and all the sprites. On top, a regular DOM uses HTML elements to provide the user interface. The HTML interface is implemented in React.
+ The app uses three layers in the DOM. Underneath, one canvas contains the scene's background image, while another canvas contains all the sprites. On top, the app uses React to render HTML elements for the user interface, implemented [here](https://github.com/code-dot-org/ml-activities/blob/c9d24c4b7a20ea12d5dc7a094094c5ef4dfbbde3/src/oceans/ui.jsx).
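For illustration, a minimal sketch of this layering; the element ids, styles, and component names here are hypothetical, not the repo's actual markup:

```jsx
// Hypothetical sketch only: two absolutely positioned canvases underneath,
// and a React-rendered HTML layer stacked on top of them.
import React from 'react';
import ReactDOM from 'react-dom';

const layerStyle = {position: 'absolute', top: 0, left: 0};

function App() {
  return (
    <div style={{position: 'relative'}}>
      <canvas id="background-canvas" width={1280} height={720} style={layerStyle} />
      <canvas id="sprite-canvas" width={1280} height={720} style={layerStyle} />
      <div id="ui-overlay" style={layerStyle}>
        {/* HTML user interface rendered by React on top of the canvases */}
        <button>Continue</button>
      </div>
    </div>
  );
}

ReactDOM.render(<App />, document.getElementById('root'));
```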
- The app is fully responsive by scaling the canvas and also scaling the size of the HTML elements correspondingly. The UI simply shrinks to match the underlying canvas.
+ The app is fully responsive by scaling the canvases and also scaling the size of the HTML elements correspondingly. This way, the UI simply shrinks to match the underlying canvases.
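A sketch of one way such proportional scaling can be done; the base dimensions and element ids are illustrative, not the app's actual values:

```js
// Hypothetical sketch: scale the canvases and the HTML overlay by the same factor
// so the UI shrinks to match the underlying canvases.
const BASE_WIDTH = 1280;
const BASE_HEIGHT = 720;

function onResize() {
  const scale = Math.min(
    window.innerWidth / BASE_WIDTH,
    window.innerHeight / BASE_HEIGHT
  );

  for (const id of ['background-canvas', 'sprite-canvas', 'ui-overlay']) {
    const el = document.getElementById(id);
    if (el) {
      el.style.transform = `scale(${scale})`;
      el.style.transformOrigin = 'top left';
    }
  }
}

window.addEventListener('resize', onResize);
onResize();
```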
## Animation

The animation is designed to be smooth and frame-rate independent.

- The prediction screen notably renders the progression based on the concept of a "current offset in time", making it possible to pause, and even reverse the animation, as well as adjust its the speed.
+ The prediction screen notably renders the progression based on the concept of a "current offset in time", making it possible to pause, and even reverse the animation, as well as adjust its speed.
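A minimal sketch of the "current offset in time" idea, with hypothetical names: the scene is drawn purely as a function of an elapsed-time value, so pausing, reversing, or changing speed only changes how that value advances each frame.

```js
// Hypothetical sketch: render the prediction scene purely from a time offset.
// Pausing, reversing, or changing speed only changes how the offset advances.
let timeOffset = 0;        // "current offset in time" into the animation, in seconds
let speed = 1;             // 1 = normal, 0 = paused, -1 = reversed, 2 = double speed
let lastTimestamp = null;

function drawPredictionScene(t) {
  // Placeholder: position and draw each item as a pure function of t.
}

function tick(timestamp) {
  if (lastTimestamp !== null) {
    timeOffset += ((timestamp - lastTimestamp) / 1000) * speed;  // frame-rate independent
  }
  lastTimestamp = timestamp;
  drawPredictionScene(timeOffset);
  requestAnimationFrame(tick);
}

requestAnimationFrame(tick);
```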
+ All items have simple "bobbing" animations, using offsets cycling in a sine loop, such as [here](https://github.com/code-dot-org/ml-activities/blob/f8a438628f9f5a0dba4a602f8ae0bbffb714ce35/src/oceans/renderer.js#L615-L618).
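A sketch of a sine-based bobbing offset in this spirit; the amplitude and period values are illustrative:

```js
// Hypothetical sketch of a sine-based bobbing offset added to an item's base position.
function bobbingOffset(timeMs) {
  const amplitude = 2;                       // pixels of drift
  const period = 2000;                       // milliseconds per full cycle
  const phase = (timeMs % period) / period;  // 0..1 through the cycle
  return {
    x: amplitude * Math.sin(phase * 2 * Math.PI),
    y: amplitude * Math.sin(phase * 2 * Math.PI + Math.PI / 2)  // out of sync with x
  };
}

// Usage: add the offset to an item's base position before drawing it each frame.
const offset = bobbingOffset(performance.now());
```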
- All items have a simple "bobbing" animation, using out of sync X and Y offsets cycling in a sine loop.
+ The fish pause under the scanner using a simple S-curve adjustment to their movement, implemented [here](https://github.com/code-dot-org/ml-activities/blob/f8a438628f9f5a0dba4a602f8ae0bbffb714ce35/src/oceans/renderer.js#L258).
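One plausible shape for such an S-curve, sketched below; the repo's actual curve in `renderer.js` may differ:

```js
// Hypothetical S-curve: map linear progress 0..1 to eased progress 0..1 that
// flattens around the midpoint, so the fish appears to pause under the scanner
// before moving on.
function pauseInMiddle(t) {
  const u = 2 * t - 1;           // remap progress to -1..1
  return (u * u * u + 1) / 2;    // cubic: steep at the ends, nearly flat in the middle
}

// Usage sketch: position the fish by eased progress rather than linear progress.
function fishX(linearProgress, startX, endX) {
  return startX + (endX - startX) * pauseInMiddle(linearProgress);
}
```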
## The Guide

- After initial playtests, we identified a need to slow the pacing of the tutorial and tell a clear story. The solution we adapted was pop-up text boxes with "typing" text, reminiscent of old-school computer games.
+ After initial playtests, we identified a need to slow the pacing of the tutorial and tell a clear story. The solution we adopted was text boxes with "typing" text, reminiscent of old-school computer games.

"The Guide" is the implementation of this solution, and was designed to be a simple but flexible system that allowed us to add a variety of text for every step and situation encountered in the tutorial.

- Each piece of Guide text is declared, along with the app state needed for it to show, which can even include code for more expressiveness.
- See the implementation at https://github.com/code-dot-org/ml-activities/blob/main/src/oceans/models/guide.js
+ Each piece of Guide text is declared, along with the app state needed for it to show (which can even include code for more expressiveness), [here](https://github.com/code-dot-org/ml-activities/blob/main/src/oceans/models/guide.js).
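An illustrative sketch of what such a declaration could look like; the field names are hypothetical, and the real schema is in `guide.js`:

```js
// Illustrative only; field names are hypothetical (see guide.js for the real schema).
// Each entry pairs its text with a predicate over the app state deciding when it shows.
const guideEntries = [
  {
    id: 'pond-click-hint',
    text: 'Click a fish to see why A.I. sorted it this way.',
    // Arbitrary code for expressiveness: only show in the pond scene of short/long modes.
    shouldShow: state =>
      state.currentMode === 'pond' &&
      (state.appMode === 'short' || state.appMode === 'long')
  }
];

// The Guide shows the first entry whose predicate matches the current app state.
function currentGuideText(state) {
  const entry = guideEntries.find(e => e.shouldShow(state));
  return entry ? entry.text : null;
}
```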
This simple system enabled the team to add a detailed narrative voice to the script, and allowed a variety of team members to contribute text.
@@ -124,16 +125,27 @@ This simple system enabled the team to add a detailed narrative voice to the scr
## Popups

- We also use popups to give extra information.
+ We also use modal popups to give extra information.

<img width="1311" alt="popup" src="https://user-images.githubusercontent.com/2205926/165404670-4b556c6e-18e7-4ec6-b3d2-19c025c5b108.png">

+ ## State
+
+ The app's runtime state is stored in a very simple module [here](https://github.com/code-dot-org/ml-activities/blob/c9d24c4b7a20ea12d5dc7a094094c5ef4dfbbde3/src/oceans/state.js). Updates to state trigger a React render, unless deliberately skipped.
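A minimal sketch of a state module in this style; the function names are hypothetical, and `state.js` is the real implementation:

```js
// Minimal sketch of a shared state module that re-renders on updates.
let state = {};
let onStateChange = null;   // the app registers a callback that triggers a React render

export function setOnStateChange(callback) {
  onStateChange = callback;
}

export function getState() {
  return state;
}

// Merge in updates and re-render, unless the caller deliberately skips the render.
export function setState(updates, {skipRender = false} = {}) {
  state = {...state, ...updates};
  if (!skipRender && onStateChange) {
    onStateChange(state);
  }
  return state;
}
```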
+ ## Host interface
+
+ The full functionality of this app is enabled when hosted by https://studio.code.org. The main repo loads this app via code [here](https://github.com/code-dot-org/code-dot-org/tree/c3325655902e82479d0a85d5adc73049810e5b66/apps/src/fish). Specific parameters passed in during initialization, [here](https://github.com/code-dot-org/code-dot-org/blob/c3325655902e82479d0a85d5adc73049810e5b66/apps/src/fish/Fish.js#L127-L136), include a foreground and background canvas, the `appMode`, a callback when the user continues to the next level, callbacks for loading & playing sound effects, and localized strings.
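A hypothetical sketch of the options a host might assemble and pass in; the real option names live in `Fish.js` and the module's entry point, and simple stubs stand in for host-side code here:

```js
// Hypothetical sketch of the host handoff; option names are illustrative only.
const oceansOptions = {
  backgroundCanvas: document.getElementById('background-canvas'),
  foregroundCanvas: document.getElementById('foreground-canvas'),
  appMode: 'fishvtrash',                                   // which of the 5 levels to run
  onContinue: () => console.log('advance to next level'),  // host-side navigation callback
  loadSound: url => fetch(url).then(r => r.arrayBuffer()), // host-provided sound loading
  playSound: name => console.log('play sound', name),      // host-provided sound playback
  strings: {continueButton: 'Continue'}                    // localized UI strings
};
// The host then hands these options to the published NPM module during initialization.
```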
+ ## Analytics
+
+ If Google Analytics is available on the page, the app generates a synthetic page view for each scene, allowing for an understanding of usage and duration of each scene in the script.
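A sketch of the general approach, assuming the classic `analytics.js` `ga()` interface; the app's actual calls and page names may differ:

```js
// Record a synthetic page view for the current scene if Google Analytics is present.
function reportSceneView(currentMode) {
  if (typeof window.ga === 'function') {
    // analytics.js style: send a pageview for a virtual path such as "/oceans/pond".
    window.ga('send', 'pageview', `/oceans/${currentMode}`);
  }
}

reportSceneView('pond');
```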
# Additional information

## Common operations

- The documentation for common operations for AI Lab is comprehensive and should apply to this project too: https://github.com/code-dot-org/ml-playground#common-operations
+ The documentation for common operations for **AI Lab** is comprehensive and should apply to this project too: https://github.com/code-dot-org/ml-playground#common-operations

## Getting started
