BaseAudioContext
The BaseAudioContext interface acts as the supervisor of an audio-processing graph. It exposes key processing parameters such as the current time, the output destination, and the sample rate, and it is responsible for creating nodes and managing the graph's lifecycle.
However, BaseAudioContext itself cannot be used directly; its functionality must be accessed through one of its derived interfaces: AudioContext or OfflineAudioContext.
Audio graph
An audio graph is a structured representation of audio-processing elements and their connections within an audio context. The graph consists of various types of nodes, each performing a specific audio operation, connected in a network that defines the audio signal flow. In general, we can distinguish four types of nodes:
- Source nodes (e.g. AudioBufferSourceNode, OscillatorNode)
- Effect nodes (e.g. GainNode, BiquadFilterNode)
- Analysis nodes (e.g. AnalyserNode)
- Destination nodes (e.g. AudioDestinationNode)
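The node types above can be combined into a simple graph. A minimal sketch, assuming the react-native-audio-api package (whose API mirrors the Web Audio API): an oscillator source routed through a gain effect into the context's destination.

```tsx
import { AudioContext } from 'react-native-audio-api';

const audioContext = new AudioContext();

const oscillator = audioContext.createOscillator(); // source node
const gain = audioContext.createGain();             // effect node

gain.gain.value = 0.5;                  // halve the amplitude
oscillator.connect(gain);               // source -> effect
gain.connect(audioContext.destination); // effect -> destination

oscillator.start(audioContext.currentTime);
oscillator.stop(audioContext.currentTime + 1); // play for one second
```

This sketch runs only inside a React Native runtime, so treat it as an illustration of graph wiring rather than a standalone script.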
Rendering audio graph
Audio graph rendering is done in blocks of sample-frames. The number of sample-frames in a block is called the render quantum size, and the block itself is called a render quantum. By default, the render quantum size is 128 and it is constant.
The AudioContext rendering thread is driven by a system-level audio callback. Each callback has a buffer size: a varying number of sample-frames that must be computed in time, before the next system-level audio callback arrives. The render quantum size does not have to be a divisor of the system-level audio callback buffer size.
The concept of a system-level audio callback does not apply to OfflineAudioContext.
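Because the quantum size need not divide the callback buffer size, the renderer must produce enough whole quanta to cover each callback. A small arithmetic sketch (the helper name is illustrative, not part of the API):

```typescript
// Number of 128-frame render quanta needed to cover one system callback buffer.
function quantaPerCallback(
  callbackBufferSize: number,
  renderQuantumSize: number = 128
): number {
  return Math.ceil(callbackBufferSize / renderQuantumSize);
}

// A 192-frame callback buffer needs 2 quanta (256 frames); the 64 surplus
// frames are buffered and consumed by the next callback.
```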
Properties
Name | Type | Description | Access
---|---|---|---
currentTime | number | Double value representing an ever-increasing hardware time in seconds, starting from 0. | Read only
destination | AudioDestinationNode | Final output destination associated with the context. | Read only
sampleRate | number | Float value representing the sample rate (in samples per second) used by all nodes in this context. | Read only
state | ContextState | Enumerated value representing the current state of the context. | Read only
Methods
createAnalyser
The above method lets you create an AnalyserNode.
Returns AnalyserNode.
createRecorderAdapter
The above method lets you create a RecorderAdapterNode.
Returns RecorderAdapterNode.
createBuffer
The above method lets you create an AudioBuffer.
Parameters | Type | Description |
---|---|---|
numOfChannels | number | An integer representing the number of channels of the buffer. |
length | number | An integer representing the length of the buffer in sample-frames. A two-second buffer has a length equal to 2 * sampleRate. |
sampleRate | number | A float representing the sample rate of the buffer. |
Errors
Error type | Description |
---|---|
NotSupportedError | numOfChannels is outside the nominal range [1, 32]. |
NotSupportedError | sampleRate is outside the nominal range [8000, 96000]. |
NotSupportedError | length is less than 1. |
Returns AudioBuffer.
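Since length counts sample-frames rather than seconds, converting a duration to a buffer length is a simple multiplication. A sketch (the helper name is illustrative, not part of the API):

```typescript
// Convert a duration in seconds to an AudioBuffer length in sample-frames.
function bufferLength(durationSeconds: number, sampleRate: number): number {
  return Math.round(durationSeconds * sampleRate);
}

// A two-second buffer at 44100 Hz has 2 * 44100 = 88200 sample-frames.
```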
createBufferSource
The above method lets you create an AudioBufferSourceNode.
Parameters | Type | Description |
---|---|---|
pitchCorrection Optional | AudioBufferSourceNodeOptions | Dictionary object that specifies whether pitch correction should be available. |
Returns AudioBufferSourceNode.
createBufferQueueSource
Mobile only
The above method lets you create an AudioBufferQueueSourceNode.
Returns AudioBufferQueueSourceNode.
createGain
The above method lets you create a GainNode.
Returns GainNode.
createOscillator
The above method lets you create an OscillatorNode.
Returns OscillatorNode.
createStreamer
Mobile only
The above method lets you create a StreamerNode.
Returns StreamerNode.
createPeriodicWave
The above method lets you create a PeriodicWave. This waveform specifies a repeating pattern that an OscillatorNode can use to generate its output sound.
Parameters | Type | Description |
---|---|---|
real | Float32Array | An array of cosine terms. |
imag | Float32Array | An array of sine terms. |
constraints Optional | PeriodicWaveConstraints | An object that specifies if normalization is disabled. |
Errors
Error type | Description |
---|---|
InvalidAccessError | real and imag arrays do not have the same length. |
Returns PeriodicWave.
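For example, a square wave is built from odd sine harmonics with amplitudes 4/(kπ). Packing those coefficients into equally sized real and imag arrays (required to avoid InvalidAccessError) can be sketched as follows; the helper name is illustrative:

```typescript
// Build real/imag coefficient arrays for the first `harmonics` harmonics of a
// square wave. Index 0 is the DC offset; index k is the k-th harmonic.
function squareWaveCoefficients(harmonics: number): {
  real: Float32Array;
  imag: Float32Array;
} {
  const real = new Float32Array(harmonics + 1); // cosine terms: all zero here
  const imag = new Float32Array(harmonics + 1); // sine terms
  for (let k = 1; k <= harmonics; k += 2) {
    imag[k] = 4 / (k * Math.PI); // only odd harmonics contribute
  }
  return { real, imag };
}
```

The resulting arrays could then be passed to createPeriodicWave(real, imag).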
createStereoPanner
The above method lets you create a StereoPannerNode.
Returns StereoPannerNode.
createBiquadFilter
The above method lets you create a BiquadFilterNode.
Returns BiquadFilterNode.
Supported file formats:
- mp3
- wav
- flac
- opus
- ogg
- m4a
- aac
- mp4
decodeAudioData
Example
const url = ... // url to an audio
const buffer = await fetch(url)
.then((response) => response.arrayBuffer())
.then((arrayBuffer) => this.audioContext.decodeAudioData(arrayBuffer))
.catch((error) => {
console.error('Error decoding audio data source:', error);
return null;
});
The above method lets you decode audio data. It decodes an in-memory block of audio data.
Parameters | Type | Description |
---|---|---|
arrayBuffer | ArrayBuffer | ArrayBuffer with audio data. |
Returns Promise<AudioBuffer>.
decodeAudioDataSource
Example using expo-asset library
import { Asset } from 'expo-asset';
const buffer = await Asset.fromModule(require('@/assets/music/example.mp3'))
.downloadAsync()
.then((asset) => {
if (!asset.localUri) {
throw new Error('Failed to load audio asset');
}
return this.audioContext.decodeAudioDataSource(asset.localUri);
});
The above method lets you decode an audio file located on the device.
Parameters | Type | Description |
---|---|---|
sourcePath | string | Path to audio file located on the device. |
Returns Promise<AudioBuffer>.
decodePCMInBase64Data
Mobile only
Example
const data = ... // data encoded in base64 string
const buffer = await this.audioContext.decodePCMInBase64Data(data);
The above method lets you decode audio data. It decodes PCM audio data encoded as a Base64 string.
Parameters | Type | Description |
---|---|---|
base64 | string | Base64 string with audio data. |
playbackRate Optional | number | A number representing the audio speed to be applied during decoding. |
Returns Promise<AudioBuffer>.
Remarks
currentTime
- The timer starts when the context is created and stops when the context is suspended.
ContextState
Details
Acceptable values:
- suspended: The audio context has been suspended (with one of suspend or OfflineAudioContext.suspend).
- running: The audio context is running normally.
- closed: The audio context has been closed (with the close method).
AudioBufferSourceNodeOptions
Details
AudioBufferSourceNodeOptions is a dictionary object that specifies whether the pitch correction algorithm should be available.
interface AudioBufferSourceNodeOptions {
pitchCorrection: boolean
}
PeriodicWaveConstraints
Details
PeriodicWaveConstraints is a dictionary object that specifies whether normalization should be disabled when creating the periodic wave. If not specified, normalization is enabled.
If normalized, the periodic wave will have a maximum peak value of 1 and a minimum peak value of -1.
interface PeriodicWaveConstraints {
disableNormalization: boolean;
}
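Conceptually, normalization rescales the waveform so its largest absolute peak becomes 1. A minimal sketch of that behaviour (not the library's internal code):

```typescript
// Scale samples so the maximum absolute value is 1, as normalization does.
function normalize(samples: number[]): number[] {
  const peak = Math.max(...samples.map(Math.abs));
  return peak === 0 ? samples : samples.map((s) => s / peak);
}
```

With disableNormalization set to true, the wave instead keeps the raw amplitudes implied by the Fourier coefficients.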