Decoding
You can decode audio data independently, without creating an AudioContext, using the exported functions decodeAudioData and
decodePCMInBase64.
On the web, however, decoding can only be done through an AudioContext.
If you already have an audio context, you can decode audio data directly with its decodeAudioData function;
the decoded audio is then automatically resampled to match the context's sampleRate.
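A minimal sketch of the context-based approach (the URL is a placeholder):

```ts
import { AudioContext } from 'react-native-audio-api';

const context = new AudioContext();

// Fetch the raw bytes yourself and hand them to the context;
// the decoded buffer is resampled to context.sampleRate automatically.
const arrayBuffer = await fetch('https://example.com/audio.mp3') // placeholder URL
  .then((response) => response.arrayBuffer());
const buffer = await context.decodeAudioData(arrayBuffer);
```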
Supported file formats:
- flac
- mp3
- ogg
- opus
- wav
- aac
- m4a
- mp4
The last three formats are decoded with FFmpeg on mobile; see for more info.
decodeAudioData
Decodes audio data from either a file path or an ArrayBuffer. The optional sampleRate parameter lets you resample the decoded audio;
if not provided, the original sample rate from the file is used.
| Parameter | Type | Description |
|---|---|---|
| input | ArrayBuffer | ArrayBuffer with audio data. |
|  | string | Path to a remote or local audio file. |
|  | number | Asset module id. Mobile only. |
| sampleRate (optional) | number | Target sample rate for the decoded audio. |
| fetchOptions (optional) | RequestInit | Additional fetch options (e.g. headers) used when a URL is passed as input. |
Returns Promise<AudioBuffer>.
If you pass a number to the decode function, bear in mind that it internally uses the Image component provided by React Native, which by default supports only the .mp3, .wav, .mp4, .m4a, and .aac audio file formats. If you want to use other types, refer to this section for more info.
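For instance, on mobile you can pass the module id that require returns for a bundled asset (a sketch; the asset path is a placeholder):

```ts
import { decodeAudioData } from 'react-native-audio-api';

// Mobile only: require() resolves a bundled asset to a numeric module id.
const buffer = await decodeAudioData(require('./assets/example.mp3'));
```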
Example: decoding a remote URL

```ts
import { decodeAudioData } from 'react-native-audio-api';

const url = ...; // url to an audio file
const buffer = await decodeAudioData(url);
```
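A sketch passing the optional parameters as well, assuming they are positional in the order listed above (the URL and header value are placeholders):

```ts
import { decodeAudioData } from 'react-native-audio-api';

const buffer = await decodeAudioData(
  'https://example.com/audio.mp3',                  // placeholder URL
  44100,                                            // resample the decoded audio to 44.1 kHz
  { headers: { Authorization: 'Bearer <token>' } }  // forwarded to fetch
);
```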
Internally, decoding local files uses the Image component to retrieve the asset URI, but this does not work on the web platform. There you can use the expo-asset library for this purpose, or retrieve an ArrayBuffer on your own and pass it to the decoding function.
Example: using the expo-asset library

```ts
import { Asset } from 'expo-asset';
import { AudioContext } from 'react-native-audio-api';

const uri = await Asset.fromModule(require('@/assets/music/example.mp3'))
  .downloadAsync()
  .then((asset) => {
    if (!asset.localUri) {
      console.error('Failed to load audio asset');
    }
    return asset.localUri;
  });

const context = new AudioContext();

if (uri) {
  const buffer = await fetch(uri)
    .then((response) => response.arrayBuffer())
    .then((arrayBuffer) => context.decodeAudioData(arrayBuffer));

  console.log('Audio buffer loaded:', buffer);
}
```
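Once you have an AudioBuffer and a context, playback follows the usual Web Audio pattern; continuing the example above:

```ts
// Play the decoded buffer through the context's output.
const source = context.createBufferSource();
source.buffer = buffer;
source.connect(context.destination);
source.start();
```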
decodePCMInBase64
Decodes base64-encoded PCM audio data.
| Parameter | Type | Description |
|---|---|---|
| base64String | string | Base64-encoded PCM audio data. |
| inputSampleRate | number | Sample rate of the input PCM data. |
| inputChannelCount | number | Number of channels in the input PCM data. |
| isInterleaved (optional) | boolean | Whether the PCM data is interleaved. Default is true. |
Returns Promise<AudioBuffer>.
Example: decoding data in base64 format

```ts
import { decodePCMInBase64 } from 'react-native-audio-api';

const data = ...; // PCM data encoded as a base64 string
// data is interleaved (Channel1, Channel2, Channel1, Channel2, ...)
const buffer = await decodePCMInBase64(data, 4800, 2, true);
```
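If the PCM data is planar instead (all Channel1 samples, then all Channel2 samples), pass false for isInterleaved, for example:

```ts
import { decodePCMInBase64 } from 'react-native-audio-api';

const data = ...; // PCM data encoded as a base64 string
// data is planar / non-interleaved (Channel1..., then Channel2...)
const buffer = await decodePCMInBase64(data, 4800, 2, false);
```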