diff --git a/index.html b/index.html index 2d2c40e178..aac57e5698 100644 --- a/index.html +++ b/index.html @@ -7,215 +7,442 @@ * tree. --> -
- - - - - - - - - - -- This is a collection of small samples demonstrating various parts of the WebRTC APIs. The code for all - samples are available in the GitHub repository. + This is a collection of small samples demonstrating various parts of + the + WebRTC APIs. The code for all samples is available in the + GitHub repository.
-Most of the samples use adapter.js, a shim to insulate apps - from spec changes and prefix differences.
- -https://webrtc.org/getting-started/testing - lists command line flags useful for development and testing with Chrome.
- -Patches and issues welcome! See CONTRIBUTING.md - for instructions.
++ Most of the samples use + adapter.js, a shim to + insulate apps from spec changes and prefix differences. +
-Warning: It is highly recommended to use headphones when testing these - samples, as it will otherwise risk loud audio feedback on your system.
-+ https://webrtc.org/getting-started/testing + lists command line flags useful for development and testing with + Chrome. +
-+ Patches and issues welcome! See + CONTRIBUTING.md + for instructions. +
-+ Warning: It is highly recommended to use headphones + when testing these samples, as you otherwise risk loud audio + feedback on your system. +
+Access media devices
Query media devices
Stream from canvas or video elements
Controlling peer connectivity
Send arbitrary data over peer connections
Full-featured WebRTC application
API for processing media
The camera is captured to a video element, which is mapped onto a - canvas, and a red square is added.
-The canvas is then captured to an ImageData object, and painted - onto a second canvas.
-A stream is captured from the second canvas element using its
- captureStream() method and set as the srcObject of the video element.
The inputStream, source,
- canvasIn, canvasOut,
- result, and stream variables are in global
- scope, so you can
- inspect them from the browser console.
+ The camera is captured to a video element, which is mapped onto a + canvas, and a red square is added. +
++ The canvas is then captured to an ImageData object, and painted onto a + second canvas. +
+
+ A stream is captured from the second canvas element using its
+ captureStream() method and set as the
+ srcObject of the video element.
+
+ The inputStream, source,
+ canvasIn, canvasOut, result, and
+ stream variables are in global scope, so you can inspect
+ them from the browser console.
+
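A minimal sketch of that pipeline, assuming hypothetical element names (`inputVideo`, `canvasOut`, `outputVideo`) and an illustrative frame-rate hint:

```js
// Sketch: draw the camera video plus a red square onto a canvas,
// then expose the canvas as a MediaStream on an output video element.
const ctx = canvasOut.getContext('2d');

function draw() {
  ctx.drawImage(inputVideo, 0, 0, canvasOut.width, canvasOut.height);
  ctx.fillStyle = 'red';
  ctx.fillRect(10, 10, 80, 80);
  requestAnimationFrame(draw);
}
requestAnimationFrame(draw);

// captureStream() returns a MediaStream that follows the canvas contents.
const stream = canvasOut.captureStream(30); // optional frame-rate hint
outputVideo.srcObject = stream;
```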
Click and drag on the canvas element (on the left) to move the teapot.
- -This demo requires Firefox 47 or Chrome 52 (or later).
- -The teapot is drawn on the canvas element using WebGL. A stream is captured from the canvas using its captureStream()
- method and streamed via a peer connection to the video element on the right.
View the browser console to see logging.
- -Several variables are in global scope, so you can inspect them from the console: canvas,
- video, localPeerConnection, remotePeerConnection and stream.
-
For more demos and information about captureStream(), see Media Capture from Canvas Implementation.
For more information about RTCPeerConnection, see Getting - Started With WebRTC.
- - View source on GitHub - -+ Click and drag on the canvas element (on the left) to move the teapot. +
+ +This demo requires Firefox 47 or Chrome 52 (or later).
+ +
+ The teapot is drawn on the canvas element using WebGL. A stream is
+ captured from the canvas using its captureStream() method
+ and streamed via a peer connection to the video element on the right.
+
View the browser console to see logging.
+ +
+ Several variables are in global scope, so you can inspect them from the
+ console: canvas, video,
+ localPeerConnection, remotePeerConnection and
+ stream.
+
+ For more demos and information about captureStream(), see
+ Media Capture from Canvas Implementation.
+
+ For more information about RTCPeerConnection, see + Getting Started With WebRTC. +
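In outline, the wiring looks roughly like this; both connections live in the same page, so "signaling" is just passing descriptions across (names such as `canvas` and `remoteVideo` are illustrative):

```js
// Sketch: stream a canvas capture over an in-page RTCPeerConnection pair.
async function call() {
  const stream = canvas.captureStream();
  const pc1 = new RTCPeerConnection();
  const pc2 = new RTCPeerConnection();

  // Exchange ICE candidates directly between the two connections.
  pc1.onicecandidate = e => e.candidate && pc2.addIceCandidate(e.candidate);
  pc2.onicecandidate = e => e.candidate && pc1.addIceCandidate(e.candidate);
  pc2.ontrack = e => { remoteVideo.srcObject = e.streams[0]; };

  stream.getTracks().forEach(track => pc1.addTrack(track, stream));

  // Offer/answer exchange happens in-page.
  await pc1.setLocalDescription(await pc1.createOffer());
  await pc2.setRemoteDescription(pc1.localDescription);
  await pc2.setLocalDescription(await pc2.createAnswer());
  await pc1.setRemoteDescription(pc2.localDescription);
}
```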
+ + View source on GitHub +Click and drag on the canvas (on the left) to move the teapot.
+ ++ This demo requires Firefox 43 or above, Chrome 51 or above, or Chrome 50 + with Experimental Web Platform features enabled. +
+ +
+ The teapot is drawn on the canvas element using WebGL. A stream is
+ captured from the canvas element using its
+ captureStream() method, then recorded using the
+ MediaRecorder API.
+
+ The canvas, video, and
+ stream variables are in global scope, so you can inspect
+ them from the browser console.
+
+ For more demos and information about captureStream(), see
+ Media Capture from Canvas Implementation.
+
+ For more information, see the MediaStream Recording API + Editor's Draft. +
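A rough sketch of the record-and-replay step; the mimeType and timeslice are illustrative, not taken from the sample:

```js
// Sketch: record the canvas capture and play the result back in a video element.
const chunks = [];
const recorder = new MediaRecorder(canvas.captureStream(), {mimeType: 'video/webm'});
recorder.ondataavailable = e => { if (e.data.size > 0) chunks.push(e.data); };
recorder.onstop = () => {
  video.src = URL.createObjectURL(new Blob(chunks, {type: 'video/webm'}));
};
recorder.start(100); // deliver a data chunk roughly every 100ms
// ...when done:
// recorder.stop();
```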
+ View source on GitHubClick and drag on the canvas (on the left) to move the teapot.
- -This demo requires Firefox 43 or above, Chrome 51 or above, or Chrome 50 with Experimental Web Platform - features enabled.
- -The teapot is drawn on the canvas element using WebGL. A stream is captured from the canvas element using its
- captureStream() method then recorded using the MediaRecorder API.
The canvas, video, and stream variables are in global scope, so you can
- inspect them from the browser console.
For more demos and information about captureStream(), see Media Capture from Canvas Implementation.
For more information see the MediaStream Recording API Editor's Draft.
- View source on GitHub - -Click and drag on the canvas (on the left) to move the teapot.
- -This demo requires Firefox 47 or Chrome 52 (or later).
- -The teapot is drawn on the canvas element using WebGL. A stream is captured from the canvas element using its
- captureStream() method and set as the srcObject of the video element.
The canvas, video, and stream variables are in global scope, so you can
- inspect them from the browser console.
For more demos and information about captureStream(), see Media Capture from Canvas Implementation.
Click and drag on the canvas (on the left) to move the teapot.
+ +This demo requires Firefox 47 or Chrome 52 (or later).
+ +
+ The teapot is drawn on the canvas element using WebGL. A stream is
+ captured from the canvas element using its
+ captureStream() method and set as the
+ srcObject of the video element.
+
+ The canvas, video, and
+ stream variables are in global scope, so you can inspect
+ them from the browser console.
+
+ For more demos and information about captureStream(), see
+ Media Capture from Canvas Implementation.
+
+ This demo requires Chrome 57.0.2957.0 or later with + Experimental Web Platform features enabled from + chrome://flags. +
+ +
+ A stream is captured from the source video using the
+ captureStream() method. The stream is cloned and
+ transmitted via two separate PeerConnections using 50 kbps of video
+ bandwidth. This is insufficient to generate good quality in the encoded
+ bitstream, so trade-offs have to be made.
+
+ The transmitted stream tracks use + MediaStreamTrack Content Hints + to indicate characteristics in the video stream, which informs + PeerConnection how to encode the track (to prefer motion or + individual frame detail). +
+ ++ The text part of the clip shows a clear case for when + 'detail' is better, and the fighting scene shows a clear case + for when 'motion' is better. The spinning model, however, shows a + case where neither 'motion' nor 'detail' is a clear-cut + choice; even with good content detection, the preferred hint ultimately + depends on the user's priorities. +
+ ++ Other MediaStreamTrack consumers such as MediaStreamRecorder can also + make use of this information to guide encoding parameters for the stream + without additional extensions to the MediaStreamRecorder specification, + but this is currently not implemented in Chromium. +
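The hint itself is a single writable property on the track; a sketch of how the two transmitted copies might be labeled (variable names are illustrative):

```js
// Sketch: clone the captured track and give each copy a different hint.
const [track] = video.captureStream().getVideoTracks();
const motionTrack = track.clone();
const detailTrack = track.clone();
motionTrack.contentHint = 'motion'; // favor smooth motion under tight bandwidth
detailTrack.contentHint = 'detail'; // favor per-frame sharpness instead
// Each clone is then sent over its own RTCPeerConnection via addTrack().
```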
+ + View source on GitHubThis demo requires Chrome 57.0.2957.0 or later with Experimental Web Platform features enabled - from chrome://flags.
- -A stream is captured from the source video using the captureStream() method. The stream is cloned
- and transmitted via two separate PeerConnections using 50kbps of video bandwidth. This is insufficient to
- generate good quality in the encoded bitstream, so trade-offs have to be made.
The transmitted stream tracks are using MediaStreamTrack - Content Hints to indicate characteristics in the video stream, which informs PeerConnection on how to encode - the track (to prefer motion or individual frame detail).
- -The text part of the clip shows a clear case for when 'detail' is better, and the fighting scene shows a - clear case for when 'motion' is better. The spinning model however shows a case where 'motion' - or 'detail' are not clear-cut decisions and even with good content detection what's preferred depends - on what the user prefers.
- -Other MediaStreamTrack consumers such as MediaStreamRecorder can also make use of this information to guide - encoding parameters for the stream without additional extensions to the MediaStreamRecorder specification, but - this is currently not implemented in Chromium.
- - View source on GitHub - -+ This demo requires Firefox 47, or Chrome 53 with + Experimental Web Platform features enabled from + chrome://flags. +
+ +
+ A stream is captured from the video on the left using the
+ captureStream() method, and streamed via a peer connection
+ to the video element on the right.
+
View the browser console to see logging.
+ +
+ Several variables are in global scope, so you can inspect them from the
+ console: pc1, pc2 and stream.
+
+ For more information about RTCPeerConnection, see + Getting Started With WebRTC. +
+ + View source on GitHub +This browser does not support the video element.
-Press play on the left video to start the demo.
+Press play on the left video to start the demo.
-A stream is captured from the video element on the left using its captureStream() method and set as
- the srcObject of the video element on the right.
+ A stream is captured from the video element on the left using its
+ captureStream() method and set as the
+ srcObject of the video element on the right.
+
The stream variable are in global scope, so you can inspect them from the browser console.
+ The stream variable is in global scope, so you can inspect
+ it from the browser console.
+
The camera is captured to a MediaStreamTrack, which is turned into a - WHATWG Stream of ImageData objects by means of a canvas, and a red - square is added.
-The stream is sent to a Worker, which returns a new stream containing - the same video data.
-This is then mapped back to a MediaStream using another canvas.
++ The camera is captured to a MediaStreamTrack, which is turned into a + WHATWG Stream of ImageData objects by means of a canvas, and a red + square is added. +
++ The stream is sent to a Worker, which returns a new stream containing + the same video data. +
+This is then mapped back to a MediaStream using another canvas.
-The chief purpose of the demo is to demonstrate that this is doable, - but that performance can be improved significantly.
-NOTE: This works only on Chrome 76 and above with experimental Web - features enabled, since it depends on transferable Streams.
-A similar demo, without the worker process, is on the canvas filter demo.
- View source on GitHub ++ The chief purpose of the demo is to demonstrate that this is doable, but + that performance can be improved significantly. +
++ NOTE: This works only on Chrome 76 and above with experimental Web + features enabled, since it depends on transferable Streams. +
++ A similar demo, without the worker process, is on the + canvas filter demo. +
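The hand-off to the worker might look roughly like this, assuming a `frames` ReadableStream has already been built from the canvas (`frames` and `renderToCanvas` are hypothetical names):

```js
// main.js - sketch: move the frame stream to a Worker, read back the result.
const worker = new Worker('worker.js');
worker.postMessage({frames}, [frames]); // second argument transfers, not copies
worker.onmessage = ({data}) => renderToCanvas(data.processed);

// worker.js - sketch: return a stream carrying the same ImageData frames.
onmessage = ({data}) => {
  const processed = data.frames.pipeThrough(new TransformStream({
    transform(imageData, controller) {
      controller.enqueue(imageData); // per-frame processing would go here
    }
  }));
  postMessage({processed}, [processed]);
};
```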
+ View source on GitHub +This page generates and sends the specified amount of data via WebRTC datachannels.
- -To accomplish this in an interoperable way, the data is split into chunks which are then transferred via the - datachannel. The datachannel is reliable and ordered by default which is well-suited to filetransfers.
- -Send and receive progress is monitored using HTML5 progress elements.
- -+ This page generates and sends the specified amount of data via WebRTC + datachannels. +
+ ++ To accomplish this in an interoperable way, the data is split into + chunks which are then transferred via the datachannel. The datachannel + is reliable and ordered by default, which is well suited to + file transfers. +
+ ++ Send and receive progress is monitored using HTML5 + progress elements. +
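A sketch of the chunked sending loop, with backpressure via bufferedAmount; the chunk size, threshold, and `sendProgress` element are illustrative:

```js
// Sketch: send `totalBytes` of generated data in 16 KiB chunks without
// letting the channel's internal send buffer grow unboundedly.
const CHUNK = 16384;
const channel = localConnection.createDataChannel('data'); // reliable + ordered by default
channel.bufferedAmountLowThreshold = 65536;

let sent = 0;
function sendChunks(totalBytes) {
  while (sent < totalBytes) {
    if (channel.bufferedAmount > channel.bufferedAmountLowThreshold) {
      channel.onbufferedamountlow = () => sendChunks(totalBytes); // resume later
      return;
    }
    const n = Math.min(CHUNK, totalBytes - sent);
    channel.send(new Uint8Array(n));
    sent += n;
    sendProgress.value = sent; // HTML5 progress element
  }
}
```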
+View the console to see logging.
-The RTCPeerConnection objects localConnection and remoteConnection are
- in global scope, so you can inspect them in the console as well.
For more information about RTCDataChannel, see Getting Started With WebRTC.
-
+ The RTCPeerConnection objects
+ localConnection and remoteConnection are in
+ global scope, so you can inspect them in the console as well.
+
+ For more information about RTCDataChannel, see + Getting Started With WebRTC. +
+This page shows how to transfer a file via WebRTC datachannels.
- -To accomplish this in an interoperable way, the file is split into chunks which are then transferred via the datachannel. The datachannel is reliable and ordered by default which is well-suited to filetransfers.
- -Send and receive progress is monitored using HTML5 progress elements.
- -At the receiver, the file is reassembled using the Blob API and made available for download.
- -Note: real-world applications require a file transfer protocol to send metadata about the file (such as the filename, type, size, last modification date, hash, ...).This information can be conveyed either via the signaling channel or in-band. The demo elides this by assuming knowledge of the file size at the receiver and closes both the datachannel and the peerconnection when the correct amount of bytes has been received.
- -View the console to see logging.
- -The RTCPeerConnection objects localConnection and remoteConnection are in global scope, so you can inspect them in the console as well.
For more information about RTCDataChannel, see Getting Started With WebRTC.
-This page shows how to transfer a file via WebRTC datachannels.
+ ++ To accomplish this in an interoperable way, the file is split into + chunks which are then transferred via the datachannel. The datachannel + is reliable and ordered by default, which is well suited to + file transfers. +
+ ++ Send and receive progress is monitored using HTML5 + progress elements. +
+ ++ At the receiver, the file is reassembled using the Blob API and made + available for download. +
+ ++ Note: real-world applications require a file transfer protocol to send + metadata about the file (such as the filename, type, size, last + modification date, hash, ...). This information can be conveyed either + via the signaling channel or in-band. The demo elides this by assuming + knowledge of the file size at the receiver and closes both the + datachannel and the peerconnection when the correct number of bytes + has been received. +
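The receiving side, sketched under the same assumption that `expectedSize` is known out of band (`receiveProgress` and `downloadLink` are hypothetical elements):

```js
// Sketch: reassemble incoming chunks into a Blob and offer it for download.
const received = [];
let receivedBytes = 0;
channel.binaryType = 'arraybuffer';
channel.onmessage = ({data}) => {
  received.push(data);
  receivedBytes += data.byteLength;
  receiveProgress.value = receivedBytes; // HTML5 progress element
  if (receivedBytes === expectedSize) {
    downloadLink.href = URL.createObjectURL(new Blob(received));
    downloadLink.download = 'received.file'; // real apps get the name via metadata
    channel.close();
  }
};
```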
+View the console to see logging.
+ +
+ The RTCPeerConnection objects
+ localConnection and remoteConnection are in
+ global scope, so you can inspect them in the console as well.
+
+ For more information about RTCDataChannel, see + Getting Started With WebRTC. +
+This page shows how to send text messages via WebRTC datachannels.
-Enter a message in one text box and press send and it will be transferred to the "remote" peer over a - datachannel.
- -+ Enter a message in one text box and press send, and it will be + transferred to the "remote" peer over a datachannel. +
+ -View the console to see logging.
-For more information about RTCDataChannel, see Getting Started With WebRTC.
-+ For more information about RTCDataChannel, see + Getting Started With WebRTC. +
+Get available audio, video sources and audio output devices from mediaDevices.enumerateDevices()
- then set the source for getUserMedia() using a deviceId constraint.
Note: without permission, the browser will restrict the available devices to at most one per type.
- -
+ Get the available audio and video sources and audio output devices from
+ mediaDevices.enumerateDevices(), then set the source for
+ getUserMedia() using a deviceId constraint.
+
+ Note: without permission, the browser will restrict the available + devices to at most one per type. +
+ ++ Note: If you hear a reverb sound, your microphone is picking up + the output of your speakers/headset; lower the volume and/or move the + microphone further away from your speakers/headset. +
+ + View source on GitHubNote: If you hear a reverb sound your microphone is picking up the output of your - speakers/headset, lower the volume and/or move the microphone further away from your speakers/headset.
- - View source on GitHub -This browser does not support the video element.
-This browser does not support the video element.
++ This demo must be run from localhost or over HTTPS. It requires Chrome 49 or later; + Firefox is not supported yet. +
+ + View source on GitHubThis demo must be run from localhost or over HTTPS Chrome 49 or later, Firefox is not supported yet.
- - View source on GitHub -- - -
-- - -
-- - -
-- - -
-- -
- - - + ++ + +
++ + +
++ + +
++ + +
++ +
+ + + diff --git a/src/content/extensions/svc/index.html b/src/content/extensions/svc/index.html index a14a4faaf6..1780cb1414 100644 --- a/src/content/extensions/svc/index.html +++ b/src/content/extensions/svc/index.html @@ -7,79 +7,104 @@ * tree. --> - - - - - - - - - - - -This sample shows how to setup a connection between two peers using - RTCPeerConnection and - choose the preferred video codec to use and scalability mode when the Scalable Video Coding (SVC) Extension is available. -
- -+ This sample shows how to set up a connection between two peers using + RTCPeerConnection + and choose the preferred video codec and scalability mode to use when + the + Scalable Video Coding (SVC) Extension + is available. +
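When the extension is available, the scalability mode can be requested per encoding as the track is added; a sketch (the L3T3 mode is just an example):

```js
// Sketch: ask for 3 spatial x 3 temporal layers on the video sender.
const pc = new RTCPeerConnection();
pc.addTransceiver(localStream.getVideoTracks()[0], {
  sendEncodings: [{scalabilityMode: 'L3T3'}]
});
// After negotiation, the applied mode can be read back:
const {encodings} = pc.getSenders()[0].getParameters();
console.log(encodings[0].scalabilityMode);
```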
+ +Warning: if you're not using headphones, pressing play will cause feedback.
- -Render the audio stream from an audio-only getUserMedia() call with an audio element.
The MediaStream object stream passed to the getUserMedia()
- callback is in global scope, so you can inspect it from the console.
+ Warning: if you're not using headphones, pressing play will cause + feedback. +
+ +
+ Render the audio stream from an audio-only
+ getUserMedia() call with an audio element.
+
+ The MediaStream object stream passed
+ to the getUserMedia() callback is in global scope, so you
+ can inspect it from the console.
+
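The core of the sample is only a couple of lines, sketched here inside an async function:

```js
// Sketch: route a microphone-only capture into an <audio> element.
const stream = await navigator.mediaDevices.getUserMedia({audio: true, video: false});
document.querySelector('audio').srcObject = stream;
```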
Draw a frame from the video onto the canvas element using the drawImage() method.
The variables canvas, video and stream are in global scope, so you can
- inspect them from the console.
+ Draw a frame from the video onto the canvas element using the
+ drawImage() method.
+
+ The variables canvas, video and
+ stream are in global scope, so you can inspect them from
+ the console.
+
+ Display the video stream from getUserMedia() in a video
+ element and control exposureMode, exposureTime and exposureCompensation
+ if the camera supports it.
+
+ The MediaStreamTrack object track is in global
+ scope, so you can inspect it from the console.
+
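A sketch of probing and applying exposure constraints, run inside an async function (the chosen exposureTime is arbitrary):

```js
// Sketch: switch to manual exposure if the camera advertises support.
const [track] = stream.getVideoTracks();
const capabilities = track.getCapabilities();
if (capabilities.exposureMode && capabilities.exposureMode.includes('manual')) {
  await track.applyConstraints({
    advanced: [{
      exposureMode: 'manual',
      exposureTime: capabilities.exposureTime.max / 2 // capability exposes a min/max range
    }]
  });
}
```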
Display the video stream from getUserMedia() in a video
- element and control exposureMode, exposureTime and exposureCompensation if camera supports it.
The MediaStreamTrack object track is in
- global scope, so you can inspect it from the console.
+ Draw a frame from the getUserMedia video stream onto the canvas element, + then apply CSS filters. +
-Draw a frame from the getUserMedia video stream onto the canvas element, then apply CSS filters.
+
+ The variables canvas, video and
+ stream are in global scope, so you can inspect them from
+ the console.
+
The variables canvas, video and stream are in global scope, so you can
- inspect them from the console.
+ Display the screensharing stream from getDisplayMedia() in
+ a video element.
+
Display the screensharing stream from getDisplayMedia() in a video element.
Warning: if you're not using headphones, pressing play will cause feedback.
- -Display the video stream from getUserMedia() in a video element.
The MediaStream object stream passed to the getUserMedia() callback is in
- global scope, so you can inspect it from the console.
+ Warning: if you're not using headphones, pressing play + will cause feedback. +
+ +
+ Display the video stream from getUserMedia() in a video
+ element.
+
+ The MediaStream object stream passed to the
+ getUserMedia() callback is in global scope, so you can
+ inspect it from the console.
+
+ Display the video stream from getUserMedia() in a video
+ element and control pan, tilt, and zoom if the camera supports
+ Pan-Tilt-Zoom.
+
+ The MediaStreamTrack object track is in global
+ scope, so you can inspect it from the console.
+
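PTZ access has to be requested in getUserMedia() and then driven through applyConstraints(); a sketch, inside an async function:

```js
// Sketch: request pan/tilt/zoom permission, then pan if supported.
const stream = await navigator.mediaDevices.getUserMedia({
  video: {pan: true, tilt: true, zoom: true}
});
const [track] = stream.getVideoTracks();
const capabilities = track.getCapabilities();
if ('pan' in capabilities) {
  await track.applyConstraints({advanced: [{pan: capabilities.pan.min}]});
}
```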
Display the video stream from getUserMedia() in a video
- element and control pan, tilt, and zoom if camera supports Pan-Tilt-Zoom.
The MediaStreamTrack object track is in
- global scope, so you can inspect it from the console.
For more information see the MediaStream Recording API Editor's Draft.
- -+ For more information see the MediaStream Recording API + Editor's Draft. +
+ +Echo cancellation:
-+ Echo cancellation: +
+This example uses constraints.
- -Click a button to call getUserMedia() with appropriate resolution.
Measure the volume of a local media stream using WebAudio.
- -Measure the volume of a local media stream using WebAudio.
+ +The 'instant' volume changes approximately every 50ms; the 'slow' volume approximates the average volume over - about a second.
-Note that you will not hear your own voice; use the local audio rendering demo for that. -
-The audioContext, stream and soundMeter variables are in global scope, so
- you can inspect them from the console.
+ The 'instant' volume changes approximately every 50ms; the 'slow' volume + approximates the average volume over about a second. +
++ Note that you will not hear your own voice; use the + local audio rendering demo for that. +
+
+ The audioContext, stream and
+ soundMeter variables are in global scope, so you can
+ inspect them from the console.
+
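The same idea can be sketched with an AnalyserNode computing an RMS level; the sample's own SoundMeter helper differs in detail, and `instantMeter` is a hypothetical meter element:

```js
// Sketch: poll an RMS volume level from the microphone stream.
const audioContext = new AudioContext();
const source = audioContext.createMediaStreamSource(stream);
const analyser = audioContext.createAnalyser();
source.connect(analyser);

const samples = new Float32Array(analyser.fftSize);
setInterval(() => {
  analyser.getFloatTimeDomainData(samples);
  const rms = Math.sqrt(samples.reduce((sum, s) => sum + s * s, 0) / samples.length);
  instantMeter.value = rms; // the 'instant' reading, refreshed every 50ms
}, 50);
```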
This sample shows how to perform processing on an audio stream using the experimental - insertable streams API. - It applies a low-pass filter in realtime to audio recorded from a microphone and plays it - back. -
- - + This sample shows how to perform processing on an audio stream using the + experimental + insertable streams + API. It applies a low-pass filter in real time to audio recorded from a + microphone and plays it back. +
+ + +Warning: if you're not using headphones, pressing Start will cause feedback.
- - -View the console to see logging. The audio, processor,
- generator, transformer, stream and
- processedStream variables are in global scope, so you can inspect them from the
- console. You may also adjust the level of filtering by assigning to cutoff.
- Note: This sample is using an experimental API that has not yet been standardized. As - of 2021-09-29, this API is available in Chrome M94. -
- View source on GitHub - -+ Warning: if you're not using headphones, pressing Start will cause + feedback. +
+ + +
+ View the console to see logging. The audio,
+ processor, generator,
+ transformer, stream and
+ processedStream variables are in global scope, so you can
+ inspect them from the console. You may also adjust the level of
+ filtering by assigning to cutoff.
+
+ Note: This sample is using an experimental API that has not yet + been standardized. As of 2021-09-29, this API is available in Chrome + M94. +
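The pipeline is built from a track processor, a TransformStream, and a track generator; a pass-through sketch of this experimental Chrome API, with the actual low-pass filtering of each AudioData frame elided and `audioElement` hypothetical:

```js
// Sketch: pipe microphone audio through a transform and play the result.
const [track] = stream.getAudioTracks();
const processor = new MediaStreamTrackProcessor({track});
const generator = new MediaStreamTrackGenerator({kind: 'audio'});
const transformer = new TransformStream({
  transform(audioData, controller) {
    controller.enqueue(audioData); // a real filter would rewrite the samples
  }
});
processor.readable.pipeThrough(transformer).pipeTo(generator.writable);
audioElement.srcObject = new MediaStream([generator]);
```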
+ View source on GitHub +-
+ Crypto key: Encrypt first bytes: + +
-
This sample shows how Insertable Streams can be used to analyze - the encoded form of a video track. -
++ This sample shows how Insertable Streams can be used to analyze the + encoded form of a video track. +
-View the console to see logging. -
-View the console to see logging.
+This sample shows how to perform cropping on a video stream using the experimental - mediacapture-transform API - in a Worker. -
++ This sample shows how to perform cropping on a video stream using the + experimental + mediacapture-transform + API in a Worker. +
-- Note: This sample is using an experimental API that has not yet been standardized. As - of 2022-11-21, this API is available in the latest version of Chrome based browsers. -
- View source on GitHub ++ Note: This sample is using an experimental API that has not yet + been standardized. As of 2022-11-21, this API is available in the latest + version of Chrome based browsers. +
+ View source on GitHub ++ This sample shows how to perform processing on a video stream using the + experimental + insertable streams + API. There are options for the source of the input stream, the + destination of the output stream, and the API used to transform the + stream. There is also the option to duplicate the source stream to a + video element on the page, which may affect the source FPS. +
-This sample shows how to perform processing on a video stream using the experimental - insertable streams API. - There are options for the source of the input stream, the destination of the output stream, - and the API used to transform the stream. There is also the option to duplicate the source - stream to a video element on the page, which may affect the source FPS. -
+ - - -View the console to see logging.
+View the console to see logging.
-- Note: This sample is using an experimental API that has not yet been standardized. - This API is available in Chrome 94 or later. -
- View source on GitHub ++ Note: This sample is using an experimental API that has not yet + been standardized. This API is available in Chrome 94 or later. +
+ View source on GitHub +This sample shows how to render multiple video streams to canvas using the insertable streams and WebGPU APIs. There are options to either process the - rendering on the main thread or on a worker thread. -
- -
- Note: This sample is using WebGPU API that is in Origin Trial as
- of 2021-09-21 and is available in Chrome M94 if the experimental code is enabled on
- the command line with
- --enable-unsafe-webgpu.
-
+ This sample shows how to render multiple video streams to canvas using + the + insertable streams + and WebGPU APIs. There + are options to either process the rendering on the main thread or on a + worker thread. +
+ +
+ Note: This sample is using the WebGPU API, which is in Origin Trial as
+ of 2021-09-21 and is available in Chrome M94 if the experimental code is
+ enabled on the command line with --enable-unsafe-webgpu.
+
| Opus | iSAC 16K | G722 | PCMU | Browsers Tested | -
|---|---|---|---|---|
| ~40 kbps / Muted : Same, ~50 Packets, Muted : Same or slight drop | -~30 kbps / Muted : Same, ~33 Packets, Muted : Same or slight drop | +|||
| + ~40 kbps / Muted : Same, ~50 Packets, Muted : Same or slight drop + | ++ ~30 kbps / Muted : Same, ~33 Packets, Muted : Same or slight drop + | ~70 kbps / Muted : Same, ~50 Packets, Muted : Same | ~70 kbps / Muted : Same, ~55 Packets, Muted : Same | Tested in Chrome, Not tested in Opera, Firefox, Safari, Edge | -
This sample shows how to setup a connection between two peers using - RTCPeerConnection and - choose the preferred video codec to use (when that functionality is available.) -
- -+ This sample shows how to setup a connection between two peers using + RTCPeerConnection + and choose the preferred video codec to use (when that functionality is + available.) +
+ +
+ View the console to see logging. The MediaStream object
+ localStream, and the RTCPeerConnection objects
+ pc1 and pc2 are in global scope, so you can
+ inspect them in the console as well.
+
+ For more information about RTCPeerConnection, see + Getting Started With WebRTC. +
+ + View source on GitHubView the console to see logging. The MediaStream object localStream, and the RTCPeerConnection
- objects pc1 and pc2 are in global scope, so you can inspect them in the console as
- well.
For more information about RTCPeerConnection, see Getting - Started With WebRTC.
- - - View source on GitHub - -This sample shows how to setup a connection between two peers in different tabs using - RTCPeerConnection - and Broadcast Channel -
+ This sample shows how to set up a connection between two peers in + different tabs using + RTCPeerConnection + and + Broadcast Channel.
-Click the start button in two tabs (of the same browser; can be in different windows) to make a call
+ Click the start button in two tabs (of the same browser; can be in + different windows) to make a call.
- View source on GitHub + View source on GitHub +This demo shows ways to use constraints and statistics in WebRTC applications.
-Set camera constraints, and click Get media to (re)open the camera with these included.
- Click Connect to create a (local) peer connection. The RTCPeerConnection objects localPeerConnection
- and remotePeerConnection can be inspected from the console.
Setting a value to zero will remove that constraint.
-The lefthand video shows the output of getUserMedia(); on the right is the video after being
- passed through the peer connection. The transmission bitrate is displayed below the righthand video.
+ This demo shows ways to use constraints and statistics in WebRTC + applications. +
+
+ Set camera constraints, and click Get media to
+ (re)open the camera with these included. Click
+ Connect to create a (local) peer connection. The
+ RTCPeerConnection objects localPeerConnection and
+ remotePeerConnection can be inspected from the console.
+
Setting a value to zero will remove that constraint.
+
+ The left-hand video shows the output of getUserMedia(); on
+ the right is the video after being passed through the peer connection.
+ The transmission bitrate is displayed below the right-hand video.
+
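The displayed bitrate can be derived by sampling getStats() periodically and differencing bytesSent; a sketch:

```js
// Sketch: estimate the send bitrate from outbound-rtp statistics.
let lastBytes = 0;
let lastTimestamp = 0;
setInterval(async () => {
  const stats = await localPeerConnection.getStats();
  stats.forEach(report => {
    if (report.type === 'outbound-rtp' && report.kind === 'video') {
      if (lastTimestamp > 0) {
        const kbps = 8 * (report.bytesSent - lastBytes) /
            (report.timestamp - lastTimestamp); // timestamps are in milliseconds
        console.log(`~${Math.round(kbps)} kbps`);
      }
      lastBytes = report.bytesSent;
      lastTimestamp = report.timestamp;
    }
  });
}, 1000);
```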
This page tests the createOffer() method. It creates a peer connection, then prints out the SDP
- generated by createOffer(), with the number of desired audio MediaStreamTracks and the
- checked constraints. Currently, only audio tracks can be added, as there is no programmatic way to generate
- video tracks. (Web Audio is used to generate the audio tracks.)
+ This page tests the createOffer() method. It creates a peer
+ connection, then prints out the SDP generated by
+ createOffer(), with the number of desired audio
+ MediaStreamTracks and the checked constraints. Currently,
+ only audio tracks can be added, as there is no programmatic way to
+ generate video tracks. (Web Audio is used to generate the audio tracks.)
+
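Generating an audio track with Web Audio and printing the offer looks roughly like this, inside an async function (the oscillator source is arbitrary):

```js
// Sketch: synthesize an audio MediaStreamTrack and inspect the createOffer() SDP.
const audioContext = new AudioContext();
const oscillator = audioContext.createOscillator();
const destination = audioContext.createMediaStreamDestination();
oscillator.connect(destination);
oscillator.start();

const pc = new RTCPeerConnection();
destination.stream.getTracks().forEach(t => pc.addTrack(t, destination.stream));
const offer = await pc.createOffer();
console.log(offer.sdp);
```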
+ View the console to see logging and to inspect the
+ MediaStream object localStream.
+
+ For more information about RTCPeerConnection, see + Getting Started With WebRTC. +
+ + View source on GitHubView the console to see logging and to inspect the MediaStream object localStream.
For more information about RTCPeerConnection, see Getting - Started With WebRTC.
- - - View source on GitHub - -