The master Audio/Video controller instance. This is available as the singleton game.webrtc.

Param: settings

The Audio/Video settings to use

Properties

client: AVClient

The Audio/Video client class

broadcasting: boolean

A flag to track whether the current user is actively broadcasting their microphone.

_connected: boolean

Flag to determine if we are connected to the signalling server or not. This is required for synchronization between connection and reconnection attempts.

Methods

  • Connect to the Audio/Video client.

    Returns Promise<boolean>

    Was the connection attempt successful?

  • Disconnect from the Audio/Video client.

    Returns Promise<boolean>

Was an existing connection terminated?

  • Callback actions to take when the user becomes disconnected from the server.

    Returns Promise<void>

  • A user can broadcast audio if the AV mode is compatible and if they are allowed to broadcast.

    Parameters

    • userId: string

    Returns boolean

  • A user can share audio if they are allowed to broadcast and if they have not muted themselves or been blocked.

    Parameters

    • userId: string

    Returns boolean

  • A user can broadcast video if the AV mode is compatible and if they are allowed to broadcast.

    Parameters

    • userId: string

    Returns boolean

  • A user can share video if they are allowed to broadcast and if they have not hidden themselves or been blocked.

    Parameters

    • userId: string

    Returns boolean
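
Taken together, the four predicates above can be sketched as pure functions over a user's AV state. This is a minimal illustration only: the state shape and every name in it (canBroadcast, muted, hidden, blocked, and the mode-compatibility flags) are assumptions for the sake of the example, not the actual API.

```typescript
// Illustrative AV state for a single user; the field names are assumptions.
interface AVUserState {
  canBroadcast: boolean;  // the user is allowed to broadcast
  muted: boolean;         // the user muted their own microphone
  hidden: boolean;        // the user hid their own camera
  blocked: boolean;       // the user's feed was blocked
}

// Broadcasting requires an AV mode that supports the medium at all.
function canUserBroadcastAudio(modeSupportsAudio: boolean, user: AVUserState): boolean {
  return modeSupportsAudio && user.canBroadcast;
}

function canUserShareAudio(user: AVUserState): boolean {
  return user.canBroadcast && !user.muted && !user.blocked;
}

function canUserBroadcastVideo(modeSupportsVideo: boolean, user: AVUserState): boolean {
  return modeSupportsVideo && user.canBroadcast;
}

function canUserShareVideo(user: AVUserState): boolean {
  return user.canBroadcast && !user.hidden && !user.blocked;
}
```

Note the asymmetry the descriptions imply: broadcast eligibility depends on the AV mode being compatible, while sharing additionally respects the user's own mute/hide state and any block.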

  • Trigger a change in the audio broadcasting state when using a push-to-talk workflow.

    Parameters

    • intent: boolean

      The user's intent to broadcast. Whether an actual broadcast occurs will depend on whether or not the user has muted their audio feed.

    Returns any

  • Activate voice detection tracking for a userId on a provided MediaStream. Currently only a MediaStream is supported because MediaStreamTrack processing is not yet supported cross-browser.

    Parameters

    • stream: MediaStream

      The MediaStream which corresponds to that User

    • Optional ms: number

      A number of milliseconds which represents the voice activation volume interval

    Returns void

  • Actions which the orchestration layer should take when a peer user disconnects from the audio/video service.

    Returns void

Reset the speaking history of a user. If the user was considered speaking, mark them as not speaking.

    Returns void

  • Handle activation of a push-to-talk key or button.

    Parameters

    • context: KeyboardEventContext

      The context data of the event

    Returns boolean

  • Handle deactivation of a push-to-talk key or button.

    Parameters

    • context: KeyboardEventContext

      The context data of the event

    Returns boolean
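
The two handlers above pair with the broadcast-intent method described earlier: key-down signals intent to broadcast, key-up withdraws it, and whether an actual broadcast occurs also depends on the user's mute state. A hedged sketch of that workflow; the class and member names here are illustrative, not the actual implementation:

```typescript
// A minimal push-to-talk controller, assuming a simple "muted" flag.
class PushToTalkController {
  broadcasting = false;

  constructor(private muted: boolean) {}

  /** Trigger a change in the audio broadcasting state. */
  broadcast(intent: boolean): void {
    // A muted user never actually broadcasts, regardless of intent.
    this.broadcasting = intent && !this.muted;
  }

  /** Handle activation of a push-to-talk key or button. */
  onKeyDown(): boolean {
    this.broadcast(true);
    return true;  // the keybinding was handled
  }

  /** Handle deactivation of a push-to-talk key or button. */
  onKeyUp(): boolean {
    this.broadcast(false);
    return true;
  }
}
```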

  • Render the audio/video streams to the CameraViews UI. Assign each connected user to the correct video frame element.

    Returns void

  • Respond to changes which occur to AV Settings. Changes are handled in descending order of impact.

    Parameters

    • changed: object

      The object of changed AV settings

    Returns Promise<boolean>

  • Private

    Initialize the local broadcast state.

    Returns void

  • Private

    Set up audio level listeners to handle voice activation detection workflow.

    Parameters

    • mode: string

      The currently selected voice broadcasting mode

    Returns void

  • Private

    Periodic notification of user audio level

This function uses the audio level (in dB) of the audio stream to determine whether the user is speaking, and notifies the UI of such changes.

The user is considered speaking if they are above the decibel threshold in any of the history values. This marks them as speaking as soon as they register a high enough volume, and marks them as not speaking only after they drop below the threshold in every history entry (the last 4 volumes, i.e. 200 ms).

More optimal approaches are possible, for example taking into account whether the user was already considered speaking, in order to eliminate short bursts of audio (coughing, for example).

    Parameters

    • dbLevel: number

      The audio level in decibels of the user within the last 50ms

    Returns any