WebRTC

The WebRTC Handler. Does the basic linking between the WebRTCClient and the UI/events, as well as most WebRTC-related logic.

Constructor

new WebRTC(settings)

Parameters:
  • settings (WebRTCSettings): The WebRTC Settings object to use

Members

_broadcastingAudio :Boolean

Cached setting for whether we are broadcasting our audio. If we unmute our microphone while in PTT mode, we need to know whether to broadcast our audio to every user or not.

Type:
  • Boolean

_connected :Boolean

Flag to determine whether we are connected to the signalling server or not. Required for synchronization between connection and reconnection attempts, and to know whether disconnection events are due to a loss of connection or to an intentional disconnect.

Type:
  • Boolean

_pttHandlers :Object.<function()>

Push-To-Talk handlers. Key event handlers for push-to-talk need to be bound and cannot be anonymous functions if we want to use removeEventListener. Because .bind returns a new function every time, a freshly bound function cannot be passed to removeEventListener, so we save bound copies of the event handlers and add/remove those as needed when the voice detection mode is modified.

Type:
  • Object.<function()>
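
An illustrative sketch of this pattern (hypothetical class and listeners, not the actual implementation):

    class PTTExample {
      constructor() {
        // Save bound copies once so addEventListener and removeEventListener
        // receive the exact same function references.
        this._pttHandlers = {
          keydown: this._onKeyDown.bind(this),
          keyup: this._onKeyUp.bind(this)
        };
      }
      activate() {
        window.addEventListener("keydown", this._pttHandlers.keydown);
        window.addEventListener("keyup", this._pttHandlers.keyup);
      }
      deactivate() {
        // Works because these are the same references registered above.
        window.removeEventListener("keydown", this._pttHandlers.keydown);
        window.removeEventListener("keyup", this._pttHandlers.keyup);
      }
      _onKeyDown(event) { /* start broadcasting */ }
      _onKeyUp(event) { /* schedule the stop */ }
    }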

_pttMuteTimeout :Number

Push-To-Talk timeout ID for muting. When using Push-To-Talk, we need to delay disabling our microphone after the user releases their PTT key, but if they press it again we need to cancel that timeout, so we save its ID here.

Type:
  • Number
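
A minimal sketch of this delayed-mute pattern (assuming webrtc is an instance of this handler and releaseDelay is a configured delay in milliseconds):

    let pttMuteTimeout = null;
    const releaseDelay = 100;   // hypothetical delay in milliseconds

    function onPTTRelease() {
      // Delay the cut-off so quick re-presses do not clip the audio.
      pttMuteTimeout = setTimeout(() => webrtc.broadcastMicrophone(false), releaseDelay);
    }

    function onPTTPush() {
      // Cancel any pending cut-off before broadcasting again.
      clearTimeout(pttMuteTimeout);
      webrtc.broadcastMicrophone(true);
    }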

_speakingData :Object

Object to keep track of which users are speaking and their volume histories. The format is:

{ id: { speaking: Boolean, volumeHistories: Array.<Number> } }

Type:
  • Object
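
For illustration, with a hypothetical user ID the structure might look like:

    const speakingData = {
      "user1234": {                              // hypothetical user ID
        speaking: true,
        volumeHistories: [-52.1, -38.4, -35.0]   // recent volume samples (units depend on the audio level API)
      }
    };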

client :WebRTCInterface

WebRTC Implementation to use for all signalling and connecting logic. Must implement the WebRTCInterface interface.

Type:
  • WebRTCInterface

config :AVConfig

Configuration sheet for the Audio/Video settings.

Type:
  • AVConfig

settings :WebRTCSettings

Configuration for all settings used by the WebRTC Implementation.

Type:
  • WebRTCSettings

Methods

(async) _closeLocalStream()

Closes our local stream. Stop listening to audio levels from our existing stream, then close it.

_enableMediaTracks(tracks, enable)

Enables or disables media tracks. See https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamTrack/enabled

Parameters:
  • tracks (Array): The tracks to enable/disable
  • enable (boolean): Whether to enable or disable the tracks
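
A minimal sketch of what this typically looks like with the standard MediaStreamTrack API (not necessarily the exact implementation):

    // Enable or disable every track in the given list.
    function enableMediaTracks(tracks, enable) {
      for (const track of tracks) {
        // A disabled track keeps running but produces silence/black frames.
        track.enabled = enable;
      }
    }

    // Example: mute all audio tracks of a stream.
    // enableMediaTracks(stream.getAudioTracks(), false);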

(async) _initLocalStream()

Initialize the local stream according to the A/V mode and user settings. Tries to fall back to using audio only or video only if capturing both fails.
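
A sketch of the fallback idea (constraints and error handling simplified; not the exact implementation):

    async function initLocalStream(wantAudio, wantVideo) {
      const tryCapture = (constraints) =>
        navigator.mediaDevices.getUserMedia(constraints).catch(() => null);

      // Try audio + video first, then degrade gracefully.
      let stream = await tryCapture({ audio: wantAudio, video: wantVideo });
      if (!stream && wantAudio && wantVideo) {
        // Audio-only fallback, then video-only as a last resort.
        stream = await tryCapture({ audio: true, video: false });
        if (!stream) stream = await tryCapture({ audio: false, video: true });
      }
      return stream;   // may be null if no capture is possible
    }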

_pttBroadcast(stream, broadcast)

Starts or stops broadcasting audio when in Push-to-Talk mode. If our local stream is muted, this only sets the broadcasting state without actually broadcasting any audio.

Parameters:
  • stream (MediaStream): Our local stream
  • broadcast (Boolean): Whether to broadcast or not

_pttPush()

Push the Push-To-Talk trigger. This will enable or disable broadcasting (if we are not muted), depending on whether we are in push-to-talk or push-to-mute mode.

_pttRelease()

Release the Push-To-Talk trigger. After the configured delay, the broadcasting state set by _pttPush is reverted, unless the trigger is pressed again before the timeout expires.

_resetSpeakingHistory(userId)

Resets the speaking history of a user. If the user was considered speaking, then mark them as not speaking.

Parameters:
  • userId (string): The ID of the user

_setupVoiceDetection()

Set up voice detection for our local stream. Depending on the settings, this will either add an audio level listener or key/mouse event listeners for push-to-talk handling.

Note that if the microphone is disabled (muted), then we never broadcast our audio regardless of the voice detection algorithm in use.

Valid voice detection modes are (see the dispatch sketch after this list):

  • "always": Broadcasting of audio is always on
  • "activity": Broadcasting only if the audio level is above a threshold
  • "ptt": Broadcasting while a keyboard or mouse button is pressed
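
A minimal dispatch sketch over those modes (webrtc, listenToAudioLevel, pttHandlers and threshold are hypothetical names, not the actual API):

    function setupVoiceDetection(mode, stream, threshold) {
      switch (mode) {
        case "always":
          webrtc.broadcastMicrophone(true);               // always broadcast
          break;
        case "activity":
          // Broadcast only while the measured audio level exceeds the threshold.
          listenToAudioLevel(stream, (level) => {
            webrtc.broadcastMicrophone(level >= threshold);
          });
          break;
        case "ptt":
          // Bound handlers are stored so they can be removed later.
          window.addEventListener("keydown", pttHandlers.keydown);
          window.addEventListener("keyup", pttHandlers.keyup);
          break;
      }
    }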

_stopVoiceDetection()

Stop listening to the local stream for voice detection and push-to-talk.

broadcastMicrophone(broadcast)

Enable or disable the broadcasting of our own microphone to the other users without changing the enabled state of the master stream. This is to be used with the push-to-talk or voice activity detection methods.

Parameters:
  • broadcast (Boolean): Whether to broadcast our microphone to the other users or not
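
For example (assuming webrtc is an instance of this handler and "KeyM" is an arbitrary hotkey), a push-to-mute style toggle could be wired up as:

    // Stop sending our microphone audio while the hotkey is held,
    // without touching the enabled state of the master stream.
    window.addEventListener("keydown", (event) => {
      if (event.code === "KeyM") webrtc.broadcastMicrophone(false);
    });
    window.addEventListener("keyup", (event) => {
      if (event.code === "KeyM") webrtc.broadcastMicrophone(true);
    });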

(async) connect()

Connect to the WebRTC server and initialize our local stream. If the connection fails, capture resources will be released.

debug(…args)

Display debug messages on the console if debugging is enabled.

Parameters:
  • args (*, repeatable): Arguments to console.log

disableCamera()

Disables the sending of our own video camera capture to all users.

disableMicrophone()

Disables the sending of our own audio capture to all users.

disableStreamAudio(stream)

Disables the audio tracks in a stream.

Parameters:
  • stream (MediaStream): The stream to modify

disableStreamVideo(stream)

Disables the video tracks in a stream.

Parameters:
  • stream (MediaStream): The stream to modify

(async) disconnect() → {Boolean}

Disconnect from the WebRTC server and close our local stream. Prevent the disconnection from appearing as if it was a lost connection. Signal the disconnection only if we had been connected.

Returns:
  • Boolean: Whether there was a valid connection that was terminated

enableCamera(enable)

Enables or disables the sending of our own video camera capture to all users.

Parameters:
  • enable (boolean, optional, default true): Whether to enable the camera or not

enableMicrophone(enable)

Enables or disables the sending of our own audio capture to all users. Enables/disables the master stream so that it affects new users joining in, and also enables/disables the individual local streams on each connected call.

Parameters:
  • enable (Boolean, optional, default true): Whether to enable the microphone or not

enableStreamAudio(stream, enable)

Enable or disable the audio tracks in a stream.

Disabling a track represents what a typical user would consider muting it. We use the term 'enable' here instead of 'mute' to match the MediaStreamTrack field name and to avoid confusion with the 'muted' read-only field of the MediaStreamTrack, as well as the video element's muted field which only stops playing the audio.

Muting by definition stops rendering any of the data, while a disabled track in this case is still rendering its data, but is simply generating disabled content (silence and black frames).

See https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamTrack/enabled

Parameters:
  • stream (MediaStream): The stream to modify
  • enable (boolean, optional, default true): Whether to enable or disable the tracks
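
To make the enabled/muted distinction concrete, a small sketch using only standard MediaStreamTrack fields (hypothetical helper name):

    // "Mute" a stream's audio by disabling its tracks.
    function setStreamAudioEnabled(stream, enable = true) {
      for (const track of stream.getAudioTracks()) {
        track.enabled = enable;  // a disabled track still renders data, but as silence
        // track.muted is read-only and reflects the source, not this switch.
      }
    }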

enableStreamVideo(stream, enable)

Enable or disable the video tracks in a stream

Disabling a track represents what a typical user would consider muting it. We use the term 'enable' here instead of 'mute' to match the MediaStreamTrack field name and to avoid confusion with the 'muted' read-only field of the MediaStreamTrack as well as the video element's muted field which only stops playing the audio.

Muting by definition stops rendering any of the data, while a disabled track in this case is still rendering its data, but is simply generating disabled content (silence and black frames).

See https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamTrack/enabled

Parameters:
  • stream (MediaStream): The stream to modify
  • enable (boolean, optional, default true): Whether to enable or disable the tracks

(async) initialize()

Initialize the WebRTC implementation. After initialization, it will automatically connect and set up the calls.

isStreamAudioEnabled(stream) → {Boolean}

Checks if a stream has any audio tracks enabled.

Parameters:
  • stream (MediaStream): The stream to check

Returns:
  • Boolean

isStreamVideoEnabled(stream) → {boolean}

Checks if a stream has any video tracks enabled.

Parameters:
  • stream (MediaStream): The stream to check

Returns:
  • boolean

(async) onDisconnect()

Notify of a disconnection with the server. Only consider it a connection loss if we were supposed to still be connected.

onError(error)

Notify of an error from the WebRTC library.

Parameters:
  • error (string): The error string

onLocalStreamCreated(userId, stream)

Notify when a new local stream is created for a user's call. When the client creates a new local stream to share with a user, this method is called to notify the implementation of it.

In voice activity mode, all of the users' local streams are disabled unless the audio level is above the threshold, but the master stream is not disabled so we can detect our own audio level. New users joining would get a copy of the master stream's tracks, which would therefore be enabled even if voice activity is below the threshold, so we need to disable them if our last voice level events were inactive.

Parameters:
  • userId (String): ID of the user for whom this stream was created
  • stream (MediaStream): Our new local stream
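
A sketch of that last step (voiceMode and selfSpeaking are hypothetical names standing in for the handler's own state; not the exact implementation):

    // Disable the new stream's audio tracks if our last voice-activity
    // events indicated we were not speaking.
    function onNewLocalStream(userId, stream) {
      if (voiceMode === "activity" && !selfSpeaking) {
        for (const track of stream.getAudioTracks()) {
          track.enabled = false;   // new copies of master tracks start enabled
        }
      }
    }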

onSettingsChanged(changed)

Notify of changes to the WebRTC-related settings. Handle settings changes in order of importance: if the mode changed, we reload the page and can ignore any other setting change; if the server changed and we need to reconnect, we can ignore anything that has to do with our own stream (since we'll recreate it) or with the other users' settings.

Parameters:
  • changed (Object): Object of {key: value} pairs of all changed settings
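
A simplified sketch of that precedence (the setting keys and helper functions here are hypothetical):

    function onSettingsChanged(changed) {
      const keys = Object.keys(changed);
      if (keys.includes("mode")) {                      // hypothetical key name
        window.location.reload();                       // mode change: reload, ignore the rest
        return;
      }
      if (keys.some((k) => k.startsWith("server"))) {   // hypothetical key prefix
        reconnect();                                    // server change: reconnect; our stream
        return;                                         // and peer settings are redone anyway
      }
      applyLocalStreamChanges(changed);                 // otherwise, apply changes in place
    }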

onUserStreamChange(userId, stream)

Notify a change of a user's stream.

Parameters:
  • userId (string): The ID of the user whose stream has changed
  • stream (MediaStream): The new stream of the user, or null if the user is not sending data anymore

render()

Renders the WebRTC streams. This should be called by the UI after it's done rendering, so the WebRTC implementation can set all video elements to the appropriate streams.

setVideoStream(userId, stream)

Connects a stream to a video element.

Parameters:
  • userId (String): The ID of the user to whom the stream belongs
  • stream (MediaStream): The media stream to set for the user. Can be null if the user has no stream
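
Attaching a MediaStream to a video element is typically done through srcObject; a minimal sketch (the data-user attribute used to find the element is hypothetical):

    // Find the user's <video> element and attach (or detach) the stream.
    function attachVideoStream(userId, stream) {
      const video = document.querySelector(`video[data-user="${userId}"]`);
      if (!video) return;
      video.srcObject = stream;                     // null detaches the previous stream
      if (stream) video.play().catch(() => {});     // autoplay may require a user gesture
    }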

streamHasAudio(stream) → {boolean}

Checks whether a stream has any audio tracks.

Parameters:
  • stream (MediaStream): The stream to check

Returns:
  • boolean

streamHasVideo(stream) → {boolean}

Checks whether a stream has any video tracks.

Parameters:
  • stream (MediaStream): The stream to check

Returns:
  • boolean