
BaseAudioContext

The BaseAudioContext interface acts as a supervisor of audio-processing graphs. It provides key processing parameters such as the current time, the output destination, and the sample rate, and it is responsible for creating nodes and managing the audio-processing graph's lifecycle. BaseAudioContext itself cannot be used directly; instead, its functionality is accessed through one of its derived interfaces: AudioContext and OfflineAudioContext.

Audio graph

An audio graph is a structured representation of audio-processing elements and their connections within an audio context. The graph consists of various types of nodes, each performing a specific audio operation, connected in a network that defines the audio signal flow. In general, we can distinguish four types of nodes:

  • source nodes (e.g. OscillatorNode, AudioBufferSourceNode), which produce audio;
  • effect nodes (e.g. GainNode, BiquadFilterNode), which process the audio passing through them;
  • analysis nodes (e.g. AnalyserNode), which inspect the signal without modifying it;
  • the destination node (AudioDestinationNode), which represents the final output.
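For illustration, the sketch below wires a small graph: an oscillator source, a gain effect, and the context's destination. The import name, the connect method, and the Web Audio-style AudioParam properties (frequency, gain) are assumptions; the create* methods and the destination property are documented on this page.

import { AudioContext } from 'react-native-audio-api'; // assumed import

const audioContext = new AudioContext();

// Source node: a 440 Hz tone.
const oscillator = audioContext.createOscillator();
oscillator.frequency.value = 440;

// Effect node: halve the volume.
const gain = audioContext.createGain();
gain.gain.value = 0.5;

// Define the signal flow: oscillator -> gain -> destination (the speakers).
oscillator.connect(gain);
gain.connect(audioContext.destination);

oscillator.start(audioContext.currentTime);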

Rendering audio graph

Audio graph rendering is done in blocks of sample-frames. The number of sample-frames in a block is called the render quantum size, and the block itself is called a render quantum. By default, the render quantum size is 128 sample-frames and it is constant.

The AudioContext rendering thread is driven by a system-level audio callback. Each callback requests a system-level audio callback buffer size: a varying number of sample-frames that must be computed in time for the next system-level audio callback. The render quantum size does not have to be a divisor of the system-level audio callback buffer size; for example, if a callback asks for 480 frames, the context renders four 128-frame quanta (512 frames) and carries the extra 32 frames over to the next callback.

info

The concept of a system-level audio callback does not apply to OfflineAudioContext.

Properties

  • currentTime (number, read-only): Double value representing an ever-increasing hardware time in seconds, starting from 0.
  • destination (AudioDestinationNode, read-only): Final output destination associated with the context.
  • sampleRate (number, read-only): Float value representing the sample rate (in samples per second) used by all nodes in this context.
  • state (ContextState, read-only): Enumerated value representing the current state of the context.

Methods

createAnalyser

Creates AnalyserNode.

Returns AnalyserNode.

createRecorderAdapter

Creates RecorderAdapterNode.

Returns RecorderAdapterNode.

createWorkletNode
Mobile only

Creates WorkletNode.

  • worklet ((Array<Float32Array>, number) => void): The worklet to be executed.
  • bufferLength (number): The size of the buffer that will be passed to the worklet on each call.
  • inputChannelCount (number): The number of channels that the node expects as input (it will get min(expected, provided)).
  • workletRuntime (AudioWorkletRuntime): The kind of runtime to use for the worklet. See worklet runtimes for details.

Errors

  • Error: react-native-worklet is not found as a dependency.
  • NotSupportedError: bufferLength is less than 1.
  • NotSupportedError: inputChannelCount is not in the range [1, 32].

Returns WorkletNode.
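A minimal sketch of creating a worklet node, under a few assumptions not stated on this page: the package import name, that the worklet runtime has already been created as described in the worklet runtimes documentation, and that worklet functions use the 'worklet' directive from react-native-worklets. The worklet body below simply scales the incoming samples.

import { AudioContext, type AudioWorkletRuntime } from 'react-native-audio-api'; // assumed package and type export

// Assumed to already exist: a worklet runtime obtained as described in the
// worklet runtimes documentation (its creation is not covered on this page).
declare const workletRuntime: AudioWorkletRuntime;

const audioContext = new AudioContext();

const workletNode = audioContext.createWorkletNode(
  (channels: Array<Float32Array>, framesToProcess: number) => {
    'worklet';
    // Scale every incoming sample down to 50% (how the buffers are consumed
    // afterwards is an assumption based on the parameter description).
    for (const channel of channels) {
      for (let i = 0; i < framesToProcess; i += 1) {
        channel[i] *= 0.5;
      }
    }
  },
  128,            // bufferLength: sample-frames passed to the worklet per call
  2,              // inputChannelCount: expects stereo input
  workletRuntime  // workletRuntime: see worklet runtimes for details
);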

createWorkletSourceNode
Mobile only

Creates WorkletSourceNode.

  • worklet ((Array<Float32Array>, number, number, number) => void): The worklet to be executed.
  • workletRuntime (AudioWorkletRuntime): The kind of runtime to use for the worklet. See worklet runtimes for details.

Errors

  • Error: react-native-worklet is not found as a dependency.

Returns WorkletSourceNode.

createWorkletProcessingNode
Mobile only

Creates WorkletProcessingNode.

  • worklet ((Array<Float32Array>, Array<Float32Array>, number, number) => void): The worklet to be executed.
  • workletRuntime (AudioWorkletRuntime): The kind of runtime to use for the worklet. See worklet runtimes for details.

Errors

  • Error: react-native-worklet is not found as a dependency.

Returns WorkletProcessingNode.

createBuffer

Creates AudioBuffer.

  • numOfChannels (number): An integer representing the number of channels of the buffer.
  • length (number): An integer representing the length of the buffer in sample-frames; a two-second buffer has a length equal to 2 * sampleRate.
  • sampleRate (number): A float representing the sample rate of the buffer.

Errors

  • NotSupportedError: numOfChannels is outside the nominal range [1, 32].
  • NotSupportedError: sampleRate is outside the nominal range [8000, 96000].
  • NotSupportedError: length is less than 1.

Returns AudioBuffer.
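For example, a two-second stereo buffer at the context's sample rate can be created and filled sample by sample. The sketch below assumes the package import name and Web Audio-style numberOfChannels and getChannelData members on AudioBuffer.

import { AudioContext } from 'react-native-audio-api'; // assumed import

const audioContext = new AudioContext();

// Two-second stereo buffer: length = 2 * sampleRate sample-frames.
const buffer = audioContext.createBuffer(
  2,
  2 * audioContext.sampleRate,
  audioContext.sampleRate
);

// Fill every channel with a 440 Hz sine wave.
for (let channel = 0; channel < buffer.numberOfChannels; channel += 1) {
  const data = buffer.getChannelData(channel);
  for (let i = 0; i < data.length; i += 1) {
    data[i] = Math.sin((2 * Math.PI * 440 * i) / audioContext.sampleRate);
  }
}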

createBufferSource

Creates AudioBufferSourceNode.

  • options (AudioBufferBaseSourceNodeOptions, optional): Dictionary object that specifies whether pitch correction should be available.

Returns AudioBufferSourceNode.
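Continuing the createBuffer sketch above, and assuming Web Audio-style buffer, connect and start members on the returned node, the buffer can be played back like this:

// `audioContext` and `buffer` come from the createBuffer sketch above.
const source = audioContext.createBufferSource();
source.buffer = buffer; // assumed Web Audio-style property

// Route the source straight to the speakers and start playback immediately.
source.connect(audioContext.destination);
source.start(audioContext.currentTime);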

createBufferQueueSource
Mobile only

Creates AudioBufferQueueSourceNode.

  • options (AudioBufferBaseSourceNodeOptions, optional): Dictionary object that specifies whether pitch correction should be available.

Returns AudioBufferQueueSourceNode.

createGain

Creates GainNode.

Returns GainNode.

createConvolver

Creates ConvolverNode.

  • options (ConvolverNodeOptions, optional): Dictionary object that specifies the associated buffer and normalization.

Errors

  • NotSupportedError: numOfChannels of the buffer is not 1, 2 or 4.

Returns ConvolverNode.
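A minimal sketch, assuming the package import name, a Web Audio-style buffer property on ConvolverNode, and a placeholder path to an impulse response file on the device:

import { AudioContext } from 'react-native-audio-api'; // assumed import

const audioContext = new AudioContext();

// Decode an impulse response stored on the device (placeholder path).
const impulseResponse = await audioContext.decodeAudioData('/path/to/impulse-response.wav');

const convolver = audioContext.createConvolver();
convolver.buffer = impulseResponse; // assumed Web Audio-style property

// Anything routed through the convolver is heard with the reverb applied.
convolver.connect(audioContext.destination);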

createOscillator

Creates OscillatorNode.

Returns OscillatorNode.

createStreamer
Mobile only

Creates StreamerNode.

Returns StreamerNode.

createConstantSource

Creates ConstantSourceNode.

Returns ConstantSourceNode.

createPeriodicWave

Creates PeriodicWave. This waveform specifies a repeating pattern that an OscillatorNode can use to generate its output sound.

  • real (Float32Array): An array of cosine terms.
  • imag (Float32Array): An array of sine terms.
  • constraints (PeriodicWaveConstraints, optional): An object that specifies whether normalization is disabled. When normalization is enabled, the periodic wave is scaled so that its maximum peak value is 1 and its minimum peak value is -1.

Errors

  • InvalidAccessError: the real and imag arrays do not have the same length.

Returns PeriodicWave.
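A minimal sketch, assuming the package import name and a Web Audio-style setPeriodicWave method on OscillatorNode. Index 0 of each array is the DC term, index 1 the fundamental, and higher indices are harmonics; the optional constraints argument is omitted here.

import { AudioContext } from 'react-native-audio-api'; // assumed import

const audioContext = new AudioContext();

// Fundamental plus a half-strength second harmonic, built from sine terms only.
const real = new Float32Array([0, 0, 0]);   // cosine terms
const imag = new Float32Array([0, 1, 0.5]); // sine terms

const wave = audioContext.createPeriodicWave(real, imag);

const oscillator = audioContext.createOscillator();
oscillator.setPeriodicWave(wave); // assumed to mirror the Web Audio API
oscillator.connect(audioContext.destination);
oscillator.start(audioContext.currentTime);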

createStereoPanner

Creates StereoPannerNode.

Returns StereoPannerNode.

createBiquadFilter

Creates BiquadFilterNode.

Returns BiquadFilterNode.

caution

Supported file formats:

  • aac
  • flac
  • m4a
  • mp3
  • mp4
  • ogg
  • opus
  • wav

decodeAudioData

Decodes audio data from either a file path or an ArrayBuffer. The optional sampleRate parameter lets you resample the decoded audio. If not provided, the audio will be automatically resampled to match the audio context's sampleRate.

  • input (ArrayBuffer | string): An ArrayBuffer with audio data, or a path to an audio file located on the device.
  • sampleRate (number, optional): Target sample rate for the decoded audio.

Returns Promise<AudioBuffer>.

Example: decoding audio data held in memory
const url = ... // url to an audio file

const buffer = await fetch(url)
  .then((response) => response.arrayBuffer())
  .then((arrayBuffer) => this.audioContext.decodeAudioData(arrayBuffer))
  .catch((error) => {
    console.error('Error decoding audio data source:', error);
    return null;
  });
Example: decoding a bundled asset with the expo-asset library
import { Asset } from 'expo-asset';

const buffer = await Asset.fromModule(require('@/assets/music/example.mp3'))
  .downloadAsync()
  .then((asset) => {
    if (!asset.localUri) {
      throw new Error('Failed to load audio asset');
    }
    return this.audioContext.decodeAudioData(asset.localUri);
  });

decodePCMInBase64

Decodes base64-encoded PCM audio data.

  • base64String (string): Base64-encoded PCM audio data.
  • inputSampleRate (number): Sample rate of the input PCM data.
  • inputChannelCount (number): Number of channels in the input PCM data.
  • isInterleaved (boolean, optional): Whether the PCM data is interleaved. Defaults to true.

Returns Promise<AudioBuffer>.

Example: decoding PCM data in base64 format
const data = ... // data encoded as a base64 string
// The data is not interleaved (Channel1, Channel1, ..., Channel2, Channel2, ...)
const buffer = await this.audioContext.decodePCMInBase64(data, 48000, 2, false);

Remarks

currentTime

  • The timer starts when the context is created and stops when the context is suspended.

ContextState

Acceptable values:

  • suspended: The audio context has been suspended (with suspend or OfflineAudioContext.suspend).
  • running: The audio context is running normally.
  • closed: The audio context has been closed (with the close method).
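For illustration, a sketch of how the state transitions, assuming the package import name and the AudioContext suspend and close methods mentioned above (documented separately from this page):

import { AudioContext } from 'react-native-audio-api'; // assumed import

const audioContext = new AudioContext();
console.log(audioContext.state); // 'running' (or 'suspended', depending on the platform)

await audioContext.suspend();
console.log(audioContext.state); // 'suspended'

await audioContext.close();
console.log(audioContext.state); // 'closed'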