|From learn-bounces-at-nylxs.com Sat Feb 11 19:02:26 2017
Received: from www.mrbrklyn.com (www.mrbrklyn.com [18.104.22.168])
by mrbrklyn.com (Postfix) with ESMTP id 1566F16131E;
Sat, 11 Feb 2017 19:02:26 -0500 (EST)
Received: from [10.0.0.62] (flatbush.mrbrklyn.com [10.0.0.62])
by mrbrklyn.com (Postfix) with ESMTP id 9F114160E77;
Sat, 11 Feb 2017 19:02:20 -0500 (EST)
To: Hangout, "learn-at-nylxs.com"
From: Ruben Safir
Date: Sat, 11 Feb 2017 19:02:20 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101
Subject: [Learn] WebRTC coding in html5
Content-Type: text/plain; charset="utf-8"
WebRTC enables peer-to-peer communication, but it still needs servers:

- For clients to exchange metadata to coordinate communication: this is
  called signaling.
- To cope with network address translators (NATs) and firewalls.
In this article we show you how to build a signaling service, and
how to deal with the quirks of real-world connectivity by using STUN and
TURN servers. We also explain how WebRTC apps can handle multi-party
calls and interact with services such as VoIP and PSTN (aka telephones).
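As a concrete sketch of where STUN and TURN fit in: an application hands
RTCPeerConnection a list of ICE servers when it constructs the connection.
The server URLs and credentials below are placeholders, not real services:

```javascript
// Sketch: a typical RTCPeerConnection configuration with STUN and TURN.
// All URLs and credentials here are placeholders for illustration.
const iceConfig = {
  iceServers: [
    // STUN: lets a client discover its public IP address and port.
    { urls: 'stun:stun.example.org:3478' },
    // TURN: relays media when a direct peer-to-peer path is blocked.
    {
      urls: 'turn:turn.example.org:3478',
      username: 'webrtc-user',   // placeholder credential
      credential: 'secret'       // placeholder credential
    }
  ]
};

// In a browser this configuration is passed to the constructor:
//   const pc = new RTCPeerConnection(iceConfig);
```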
If you're not familiar with the basics of WebRTC, we strongly
recommend you take a look at Getting Started With WebRTC before reading
this article.

What is signaling?
Signaling is the process of coordinating communication. In order for a
WebRTC application to set up a 'call', its clients need to exchange
information:

- Session control messages used to open or close communication.
- Media metadata, such as codecs and codec settings, bandwidth and
  media types.
- Key data, used to establish secure connections.
- Network data, such as a host's IP address and port as seen by the
  outside world.
This signaling process needs a way for clients to pass messages back and
forth. That mechanism is not implemented by the WebRTC APIs: you need to
build it yourself. We describe below some ways to build a signaling
service. First, however, a little context...
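To make "pass messages back and forth" concrete, here is a minimal
in-memory sketch of a signaling channel: clients join a room and relay
opaque payloads (offers, answers, candidates) to the other party. In a
real application this logic would sit behind WebSocket or long polling;
the class and method names here are illustrative, not a standard API:

```javascript
// Sketch of a signaling channel: each client registers a callback,
// and any message sent is delivered to everyone except the sender.
// Purely in-memory for illustration; a real service would use
// WebSocket, XHR polling, or a messaging platform as transport.
class SignalingRoom {
  constructor() {
    this.clients = new Map(); // client name -> message callback
  }
  join(name, onMessage) {
    this.clients.set(name, onMessage);
  }
  send(from, payload) {
    // Relay to every client except the sender; the payload is opaque
    // to the signaling layer (it could be an offer, answer or candidate).
    for (const [name, cb] of this.clients) {
      if (name !== from) cb({ from, payload });
    }
  }
}

// Usage: Alice sends an offer; Eve receives it.
const room = new SignalingRoom();
const received = [];
room.join('alice', msg => received.push(msg));
room.join('eve', msg => received.push(msg));
room.send('alice', { type: 'offer', sdp: 'v=0 ...' });
```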
Why is signaling not defined by WebRTC?
To avoid redundancy and to maximize compatibility with established
technologies, signaling methods and protocols are not specified by
WebRTC standards. This approach is outlined by JSEP, the JavaScript
Session Establishment Protocol:
The thinking behind WebRTC call setup has been to fully specify and
control the media plane, but to leave the signaling plane up to the
application as much as possible. The rationale is that different
applications may prefer to use different protocols, such as the existing
SIP or Jingle call signaling protocols, or something custom to the
particular application, perhaps for a novel use case. In this approach,
the key information that needs to be exchanged is the multimedia session
description, which specifies the transport and media configuration
information necessary to establish the media plane.
JSEP's architecture also avoids a browser having to save state: that is,
to function as a signaling state machine. This would be problematic if,
for example, signaling data was lost each time a page was reloaded.
Instead, signaling state can be saved on a server.
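A sketch of what "saved on a server" can mean in practice: keep the
latest session descriptions keyed by room, so a client that reloads the
page can re-fetch them and resume. The store and function names below
are illustrative, not part of any WebRTC API:

```javascript
// Sketch: persisting signaling state server-side, keyed by room,
// so a reloading client can recover the current offer instead of
// the browser having to act as a signaling state machine.
const sessionStore = new Map(); // roomId -> { offer, answer }

function saveOffer(roomId, offerSdp) {
  const state = sessionStore.get(roomId) || {};
  state.offer = offerSdp;
  sessionStore.set(roomId, state);
}

function loadState(roomId) {
  // Returns null when no signaling state exists for the room.
  return sessionStore.get(roomId) || null;
}

saveOffer('room-42', 'v=0 ...offer...');
const recovered = loadState('room-42'); // would survive a page reload
```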
[JSEP architecture diagram]
JSEP requires the exchange between peers of an offer and an answer: the
media metadata mentioned above. Offers and answers are communicated in
Session Description Protocol (SDP) format, which looks like this:

o=- 7614219274584779017 2 IN IP4 127.0.0.1
a=group:BUNDLE audio video
m=audio 1 RTP/SAVPF 111 103 104 0 8 107 106 105 13 126
c=IN IP4 0.0.0.0
a=rtcp:1 IN IP4 0.0.0.0
Want to know what all this SDP gobbledygook actually means? Take a look
at the IETF examples.
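The gobbledygook is more regular than it looks: every SDP line is a
single-character type, an equals sign, and a value. A minimal sketch of
reading it (illustrative only, not a full SDP parser):

```javascript
// Minimal sketch of reading SDP: each line is "<type>=<value>",
// where <type> is one character (v, o, s, m, a, c, ...).
// This ignores section structure and is for illustration only.
function parseSdp(sdp) {
  return sdp.trim().split(/\r?\n/).map(line => ({
    type: line[0],        // e.g. 'a' for attribute, 'm' for media
    value: line.slice(2)  // everything after "x="
  }));
}

const lines = parseSdp(
  'o=- 7614219274584779017 2 IN IP4 127.0.0.1\n' +
  'a=group:BUNDLE audio video\n' +
  'm=audio 1 RTP/SAVPF 111 103 104 0 8 107 106 105 13 126'
);
```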
Bear in mind that WebRTC is designed so that the offer or answer can be
tweaked before being set as the local or remote description, by editing
the values in the SDP text. For example, the preferAudioCodec() function
in apprtc.appspot.com can be used to set the default codec and bitrate.
There has been some discussion about whether future versions of WebRTC
should use JSON instead, but there are some advantages to sticking with SDP.
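To show what "editing the values in the SDP text" looks like, here is a
hypothetical codec-preference function in the same spirit: it moves a
chosen payload type to the front of the m=audio line before the offer is
set as the local description. This is our own illustration, not the
actual apprtc implementation:

```javascript
// Sketch: prefer one audio payload type by moving it to the front of
// the m=audio line. Browsers treat the first listed payload type as
// the preferred codec. Illustrative only, not the apprtc code.
function preferPayloadType(sdp, pt) {
  return sdp.split('\n').map(line => {
    if (!line.startsWith('m=audio')) return line;
    const parts = line.split(' ');
    // parts: ['m=audio', port, proto, pt1, pt2, ...]
    const header = parts.slice(0, 3);
    const pts = parts.slice(3).filter(p => p !== String(pt));
    return header.concat([String(pt)], pts).join(' ');
  }).join('\n');
}

const munged = preferPayloadType('m=audio 1 RTP/SAVPF 111 103 104', 103);
// munged is now 'm=audio 1 RTP/SAVPF 103 111 104'
```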
RTCPeerConnection + signaling: offer, answer and candidate
RTCPeerConnection is the API used by WebRTC applications to create a
connection between peers and communicate audio and video.
To initialise this process, RTCPeerConnection has two tasks:

- Ascertain local media conditions, such as resolution and codec
  capabilities. This is the metadata used for the offer and answer mechanism.
- Get potential network addresses for the application's host, known as
  candidates.
Once this local data has been ascertained, it must be exchanged via a
signaling mechanism with the remote peer.
Imagine Alice is trying to call Eve. Here's the full offer/answer
mechanism in all its gory detail:
1. Alice creates an RTCPeerConnection object.
2. Alice creates an offer (an SDP session description) with the
   RTCPeerConnection createOffer() method.
3. Alice calls setLocalDescription() with her offer.
4. Alice stringifies the offer and uses a signaling mechanism to send
   it to Eve.
5. Eve calls setRemoteDescription() with Alice's offer, so that her
   RTCPeerConnection knows about Alice's setup.
6. Eve calls createAnswer(), and the success callback for this is
   passed a local session description: Eve's answer.
7. Eve sets her answer as the local description by calling
   setLocalDescription().
8. Eve then uses the signaling mechanism to send her stringified answer
   back to Alice.
9. Alice sets Eve's answer as the remote session description using
   setRemoteDescription().
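The ordering of those calls can be sketched with mock peer objects
standing in for RTCPeerConnection. In a browser, createOffer() and
friends are the real (asynchronous) RTCPeerConnection methods; the
synchronous mocks below exist only to make the sequence runnable:

```javascript
// Mock peers standing in for RTCPeerConnection, to show the order of
// the offer/answer calls. Real browser methods return promises; these
// synchronous stand-ins are for illustration only.
function makeMockPeer(name) {
  return {
    name,
    localDescription: null,
    remoteDescription: null,
    createOffer()  { return { type: 'offer',  sdp: `offer-from-${name}` }; },
    createAnswer() { return { type: 'answer', sdp: `answer-from-${name}` }; },
    setLocalDescription(d)  { this.localDescription = d; },
    setRemoteDescription(d) { this.remoteDescription = d; }
  };
}

const alice = makeMockPeer('alice');
const eve = makeMockPeer('eve');

const offer = alice.createOffer();             // Alice creates an offer
alice.setLocalDescription(offer);              // ...and sets it locally
const wire1 = JSON.stringify(offer);           // stringified, sent via signaling
eve.setRemoteDescription(JSON.parse(wire1));   // Eve learns Alice's setup
const answer = eve.createAnswer();             // Eve creates her answer
eve.setLocalDescription(answer);               // ...and sets it locally
const wire2 = JSON.stringify(answer);          // sent back via signaling
alice.setRemoteDescription(JSON.parse(wire2)); // Alice learns Eve's setup
```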
So many immigrant groups have swept through our town
that Brooklyn, like Atlantis, reaches mythological
proportions in the mind of the world - RI Safir 1998
DRM is THEFT - We are the STAKEHOLDERS - RI Safir 2002
http://www.nylxs.com - Leadership Development in Free Software
http://www2.mrbrklyn.com/resources - Unpublished Archive
http://www.coinhangout.com - coins!
Being so tracked is for FARM ANIMALS and extermination camps,
but incompatible with living as a free human being. -RI Safir 2013
Learn mailing list