The MediaStream API was designed to make it easy to access media streams from local cameras and microphones. The getUserMedia() method is the primary way to access local input devices.
The API has a few key points −
A real-time media stream is represented by a stream object in the form of video or audio
It provides security through user permissions, asking the user for consent before a web application can start fetching a stream
The selection of input devices is handled by the MediaStream API (for example, when there are two cameras or microphones connected to the device)
Each MediaStream object includes several MediaStreamTrack objects. They represent video and audio from different input devices.
Each MediaStreamTrack object may include several channels (for example, the right and left channels of a stereo audio track). Channels are the smallest units defined by the MediaStream API.
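As a quick sketch, assuming the stream variable holds a MediaStream obtained via getUserMedia(), you can inspect a track and its channel count through the track's settings −

//sketch: inspect the first audio track of a MediaStream
//assumes "stream" holds a MediaStream obtained via getUserMedia()
var audioTrack = stream.getAudioTracks()[0];
console.log(audioTrack.kind);  // "audio"
console.log(audioTrack.label); // e.g. the microphone's name
//channelCount is reported by browsers that support this setting, e.g. 2 for stereo
console.log(audioTrack.getSettings().channelCount);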
There are two ways to output MediaStream objects. First, we can render the output into a video or audio element. Second, we can send the output to an RTCPeerConnection object, which then sends it to a remote peer.
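As a minimal sketch of both paths (assuming stream holds a MediaStream; the signaling the peer connection needs is omitted, and srcObject and addTrack are the modern, unprefixed calls) −

//1) render the stream locally in a <video> element
var video = document.querySelector('video');
video.srcObject = stream;

//2) hand each track to an RTCPeerConnection to send it to a remote peer
var pc = new RTCPeerConnection();
stream.getTracks().forEach(function (track) {
   pc.addTrack(track, stream);
});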
Let's create a simple WebRTC application. It will show a video element on the screen, ask the user for permission to use the camera, and show a live video stream in the browser. Create an index.html file −
<!DOCTYPE html>
<html lang = "en">
   <head>
      <meta charset = "utf-8" />
   </head>

   <body>
      <video autoplay></video>
      <script src = "client.js"></script>
   </body>
</html>
Then create the client.js file and add the following −
function hasUserMedia() {
   //check if the browser supports WebRTC
   return !!(navigator.getUserMedia || navigator.webkitGetUserMedia ||
      navigator.mozGetUserMedia);
}

if (hasUserMedia()) {
   navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia ||
      navigator.mozGetUserMedia;

   //enabling video and audio channels
   navigator.getUserMedia({ video: true, audio: true }, function (stream) {
      var video = document.querySelector('video');

      //inserting our stream into the video tag
      video.src = window.URL.createObjectURL(stream);
   }, function (err) {
      console.log(err);
   });
} else {
   alert("WebRTC is not supported");
}
Here we create the hasUserMedia() function, which checks whether WebRTC is supported. Then we call getUserMedia, where the second parameter is a callback that accepts the stream coming from the user's device. Finally, we load the stream into the video element using window.URL.createObjectURL, which creates a URL representing the object given as a parameter.
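Note that the prefixed getUserMedia functions and window.URL.createObjectURL(stream) are deprecated in current browsers. A minimal sketch of the same flow using the modern promise-based API −

//modern equivalent: promise-based getUserMedia and srcObject
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
   .then(function (stream) {
      var video = document.querySelector('video');
      //attach the stream directly instead of creating an object URL
      video.srcObject = stream;
   })
   .catch(function (err) {
      console.log(err);
   });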
Now refresh your page, click Allow, and you should see your face on the screen.
Remember to run all your scripts using a web server. We have already installed one in the WebRTC Environment Tutorial.
MediaStream.active (read only) − Returns true if the MediaStream is active, or false otherwise.
MediaStream.ended (read only, deprecated) − Returns true if the ended event has been fired on the object, meaning that the stream has been completely read, or false if the end of the stream has not been reached.
MediaStream.id (read only) − A unique identifier for the object.
MediaStream.label (read only, deprecated) − A unique identifier assigned by the user agent.
You can inspect how the above properties look in your own browser.
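For a quick sketch, assuming the stream variable from the earlier client.js, log them to the console −

//inspect the MediaStream properties once a stream is available
console.log(stream.active); // true while at least one track is live
console.log(stream.id);     // a unique, browser-assigned identifier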
MediaStream.onactive − A handler for an active event that is fired when a MediaStream object becomes active.
MediaStream.onaddtrack − A handler for an addtrack event that is fired when a new MediaStreamTrack object is added.
MediaStream.onended (deprecated) − A handler for an ended event that is fired when streaming terminates.
MediaStream.oninactive − A handler for an inactive event that is fired when a MediaStream object becomes inactive.
MediaStream.onremovetrack − A handler for a removetrack event that is fired when a MediaStreamTrack object is removed from it.
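These handlers are assigned like any other DOM event property. A minimal sketch, again assuming the stream variable from the earlier example −

//react when the browser adds or removes tracks on the stream
stream.onaddtrack = function (event) {
   console.log("track added: " + event.track.kind);
};

stream.onremovetrack = function (event) {
   console.log("track removed: " + event.track.kind);
};

Note that these events fire when the browser itself changes the track set; tracks added or removed from script via addTrack() and removeTrack() do not trigger them.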
MediaStream.addTrack() − Adds the MediaStreamTrack object given as argument to the MediaStream. If the track has already been added, nothing happens.
MediaStream.clone() − Returns a clone of the MediaStream object with a new ID.
MediaStream.getAudioTracks() − Returns a list of the audio MediaStreamTrack objects from the MediaStream object.
MediaStream.getTrackById() − Returns the track by ID. If the argument is empty or the ID is not found, it returns null. If several tracks have the same ID, it returns the first one.
MediaStream.getTracks() − Returns a list of all MediaStreamTrack objects from the MediaStream object.
MediaStream.getVideoTracks() − Returns a list of the video MediaStreamTrack objects from the MediaStream object.
MediaStream.removeTrack() − Removes the MediaStreamTrack object given as argument from the MediaStream. If the track has already been removed, nothing happens.
To test the above APIs, change the index.html file in the following way −
<!DOCTYPE html>
<html lang = "en">
   <head>
      <meta charset = "utf-8" />
   </head>

   <body>
      <video autoplay></video>
      <div><button id = "btnGetAudioTracks">getAudioTracks()</button></div>
      <div><button id = "btnGetTrackById">getTrackById()</button></div>
      <div><button id = "btnGetTracks">getTracks()</button></div>
      <div><button id = "btnGetVideoTracks">getVideoTracks()</button></div>
      <div><button id = "btnRemoveAudioTrack">removeTrack() - audio</button></div>
      <div><button id = "btnRemoveVideoTrack">removeTrack() - video</button></div>
      <script src = "client.js"></script>
   </body>
</html>
We added a few buttons to try out several MediaStream APIs. Now we should add event handlers for the newly created buttons. Modify the client.js file this way −
var stream;

function hasUserMedia() {
   //check if the browser supports WebRTC
   return !!(navigator.getUserMedia || navigator.webkitGetUserMedia ||
      navigator.mozGetUserMedia);
}

if (hasUserMedia()) {
   navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia ||
      navigator.mozGetUserMedia;

   //enabling video and audio channels
   navigator.getUserMedia({ video: true, audio: true }, function (s) {
      stream = s;
      var video = document.querySelector('video');

      //inserting our stream into the video tag
      video.src = window.URL.createObjectURL(stream);
   }, function (err) {
      console.log(err);
   });
} else {
   alert("WebRTC is not supported");
}

btnGetAudioTracks.addEventListener("click", function () {
   console.log("getAudioTracks");
   console.log(stream.getAudioTracks());
});

btnGetTrackById.addEventListener("click", function () {
   console.log("getTrackById");
   console.log(stream.getTrackById(stream.getAudioTracks()[0].id));
});

btnGetTracks.addEventListener("click", function () {
   console.log("getTracks()");
   console.log(stream.getTracks());
});

btnGetVideoTracks.addEventListener("click", function () {
   console.log("getVideoTracks()");
   console.log(stream.getVideoTracks());
});

btnRemoveAudioTrack.addEventListener("click", function () {
   console.log("removeAudioTrack()");
   stream.removeTrack(stream.getAudioTracks()[0]);
});

btnRemoveVideoTrack.addEventListener("click", function () {
   console.log("removeVideoTrack()");
   stream.removeTrack(stream.getVideoTracks()[0]);
});
Now refresh your page. Click on the getAudioTracks() button, then click on the removeTrack() - audio button. The audio track should now be removed. Then do the same for the video track.
If you click the getTracks() button, you should see all MediaStreamTrack objects (all connected video and audio inputs). Then click on the getTrackById() button to get the audio MediaStreamTrack.
In this chapter, we created a simple WebRTC application using the MediaStream API. Now you should have a clear overview of the various MediaStream APIs that make WebRTC work.