Methods
(async) initialize() → {kMdpsError}
Check licensing status and, if valid, initialize MDPS.
- See: Constants.kMdpsError
Returns:
Any error
- Type: kMdpsError
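A minimal usage sketch, assuming the SDK is exposed through an object named mdps and that Constants.kMdpsError has a kNoError success value (both names are illustrative, not confirmed by this reference):

```js
// Hypothetical `mdps` handle and `kNoError` member, shown for illustration only.
const err = await mdps.initialize();
if (err !== Constants.kMdpsError.kNoError) {
  console.error('MDPS licensing check or initialization failed:', err);
}
```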
(async) setAudioNodes(input, output, context)
Create a processing graph using the given input and output nodes, adding the DPS worklet node in between.
Parameters:
Name | Type | Description |
---|---|---|
input | AudioNode | the input audio node |
output | AudioNode | the output audio node |
context | AudioContext | optional AudioContext in which to apply the audio processing; if undefined, the default context from the window is used |
Returns:
the created AudioWorkletNode
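A sketch of building a graph around the worklet, assuming an initialized SDK object named mdps (an illustrative handle, not defined by this reference):

```js
// Hypothetical `mdps` handle; route an oscillator through the MDPS worklet
// to the speakers.
const ctx = new AudioContext();
const source = ctx.createOscillator();
const workletNode = await mdps.setAudioNodes(source, ctx.destination, ctx);
source.start();
// `workletNode` now sits between `source` and `ctx.destination`.
```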
setAudioStream(stream, context)
Set the audio media stream. This function will apply MDPS processing to the audio signal automatically according
to the current settings. Processing is performed in the audio processing thread via an audio worklet.
Parameters:
Name | Type | Description |
---|---|---|
stream | MediaStream | the MediaStream object (e.g. from a MediaStreamAudioDestinationNode) |
context | AudioContext | optional AudioContext in which to apply the audio processing; if undefined, the default context from the window is used |
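A sketch using a MediaStreamAudioDestinationNode as the source of the stream, with an assumed mdps SDK object (illustrative name):

```js
// Hypothetical `mdps` handle; process audio routed into a destination node.
const ctx = new AudioContext();
const destNode = ctx.createMediaStreamDestination();
// ...connect your source nodes to destNode here...
mdps.setAudioStream(destNode.stream, ctx);
```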
setMicStream(stream, context) → {MediaStream}
Set the microphone media stream. This function will apply MDPS processing to the mic signal automatically according
to the current settings.
Parameters:
Name | Type | Description |
---|---|---|
stream | MediaStream | the MediaStream object from the RTCPeerConnection |
context | AudioContext | optional AudioContext in which to apply the audio processing; if undefined, the default context from the window is used |
Returns:
the destination stream for the mic
- Type: MediaStream
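A sketch of processing the local microphone before sending it to the remote peer; mdps and pc (an RTCPeerConnection) are assumed to already exist, and the mic capture here comes from getUserMedia purely for illustration:

```js
// Hypothetical `mdps` and `pc`; send the MDPS-processed mic stream to the peer.
const micStream = await navigator.mediaDevices.getUserMedia({ audio: true });
const processed = mdps.setMicStream(micStream, new AudioContext());
processed.getAudioTracks().forEach(track => pc.addTrack(track, processed));
```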
setSpeakerStream(stream, context) → {ScriptProcessorNode}
Set the speaker media stream. This function will apply MDPS processing to the speaker signal automatically according
to the current settings.
NOTE: For WebRTC, the remote video or audio element needs to be muted so that the MDPS-processed stream
overrides the original stream. This can be done via the following code on that element:
```js
el.onloadedmetadata = function(e) {
  el.play();
  el.muted = true;
};
```
Parameters:
Name | Type | Description |
---|---|---|
stream | MediaStream | the MediaStream object (e.g. from the RTCPeerConnection) |
context | AudioContext | optional AudioContext in which to apply the audio processing; if undefined, the default context from the window is used |
Returns:
AudioNode for the speaker audio context
- Type: ScriptProcessorNode
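A sketch of applying MDPS to the remote audio of a WebRTC call, including the muting workaround described above; mdps and pc are assumed to exist, and the element id is illustrative:

```js
// Hypothetical `mdps` and `pc`; process remote audio and mute the element
// so the MDPS-processed stream overrides the original one.
pc.ontrack = (event) => {
  const el = document.getElementById('remoteAudio');
  el.srcObject = event.streams[0];
  el.onloadedmetadata = function () {
    el.play();
    el.muted = true;
  };
  mdps.setSpeakerStream(event.streams[0], new AudioContext());
};
```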
setupDataChannel(peerConnection, isCaller, dataChannel)
Set up the peer connection data channel for MDPS state communication between peers. If isCaller is true,
a data channel is created on the given connection (or the given one is used); if isCaller is false,
the remote data channel is used via the peerConnection's ondatachannel callback.
Parameters:
Name | Type | Description |
---|---|---|
peerConnection | RTCPeerConnection | the RTCPeerConnection object |
isCaller | Boolean | whether this is the originating connection; false if it is the responding connection |
dataChannel | RTCDataChannel | optional data channel to use; otherwise one is created if isCaller is true |
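A sketch of wiring the state channel on both ends of a call; mdps is an assumed SDK handle, and each call below runs on its own peer:

```js
// Hypothetical `mdps` handle.
// Caller side: MDPS creates (or reuses) a data channel on the connection.
const callerPc = new RTCPeerConnection();
mdps.setupDataChannel(callerPc, true);

// Callee side: MDPS picks up the remote channel via pc.ondatachannel.
const calleePc = new RTCPeerConnection();
mdps.setupDataChannel(calleePc, false);
```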