
See your sound

In this section, we will get familiar with the capabilities of the AnalyserNode interface, focusing on how to extract audio data in order to create a simple real-time visualization of sound.

Base application

To kick-start things a bit, let's use code based on the previous tutorials. It is a simple application that can load and play a sound from a file. As before, if you would like to code along with the tutorial, copy and paste the code provided below into your project.

import React, { useState, useEffect, useRef } from 'react';
import {
  AudioContext,
  AudioBuffer,
  AudioBufferSourceNode,
} from 'react-native-audio-api';
import { ActivityIndicator, View, Button } from 'react-native';

const AudioVisualizer: React.FC = () => {
  const [isPlaying, setIsPlaying] = useState(false);
  const [isLoading, setIsLoading] = useState(false);

  const audioContextRef = useRef<AudioContext | null>(null);
  const bufferSourceRef = useRef<AudioBufferSourceNode | null>(null);
  const audioBufferRef = useRef<AudioBuffer | null>(null);

  const handlePlayPause = () => {
    if (isPlaying) {
      bufferSourceRef.current?.stop();
    } else {
      if (!audioContextRef.current) {
        return;
      }

      // A buffer source can only be started once, so we create
      // a fresh one for every playback.
      bufferSourceRef.current = audioContextRef.current.createBufferSource();
      bufferSourceRef.current.buffer = audioBufferRef.current;
      bufferSourceRef.current.connect(audioContextRef.current.destination);

      bufferSourceRef.current.start();
    }

    setIsPlaying((prev) => !prev);
  };

  useEffect(() => {
    if (!audioContextRef.current) {
      audioContextRef.current = new AudioContext();
    }

    const fetchBuffer = async () => {
      setIsLoading(true);
      audioBufferRef.current = await fetch(
        'https://software-mansion.github.io/react-native-audio-api/audio/music/example-music-02.mp3'
      )
        .then((response) => response.arrayBuffer())
        .then((arrayBuffer) =>
          audioContextRef.current!.decodeAudioData(arrayBuffer)
        );

      setIsLoading(false);
    };

    fetchBuffer();

    return () => {
      audioContextRef.current?.close();
    };
  }, []);

  return (
    <View style={{ flex: 1 }}>
      <View style={{ flex: 0.2 }} />
      <View
        style={{ flex: 0.5, justifyContent: 'center', alignItems: 'center' }}>
        {isLoading && <ActivityIndicator color="#FFFFFF" />}
        <View
          style={{
            justifyContent: 'center',
            flexDirection: 'row',
            marginTop: 16,
          }}>
          <Button
            onPress={handlePlayPause}
            title={isPlaying ? 'Pause' : 'Play'}
            disabled={!audioBufferRef.current}
            color={'#38acdd'}
          />
        </View>
      </View>
    </View>
  );
};

export default AudioVisualizer;

Create an analyzer to capture and process audio data

To obtain frequency and time-domain data, we need to use the AnalyserNode. It is an AudioNode that passes audio data unchanged from input to output, while allowing us to extract that data in two domains: time and frequency.

We will use two specific AnalyserNode methods:

  • getByteTimeDomainData
  • getByteFrequencyData

These methods will allow us to acquire the necessary data for our analysis.
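
Before diving into the full component, here is a minimal sketch of how these two calls relate to the analyser's configuration (assuming audioContext is an already-created AudioContext):

const analyser = audioContext.createAnalyser();
analyser.fftSize = 512;

// Time-domain data: one byte per sample, fftSize values in total.
const timeData = new Uint8Array(analyser.fftSize);
analyser.getByteTimeDomainData(timeData);

// Frequency-domain data: one byte per frequency bin;
// frequencyBinCount is always fftSize / 2.
const freqData = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(freqData);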

/* ... */

import {
  AudioContext,
  AudioBuffer,
  AudioBufferSourceNode,
  AnalyserNode,
} from 'react-native-audio-api';

/* ... */

const FFT_SIZE = 512;

const AudioVisualizer: React.FC = () => {
  const [isPlaying, setIsPlaying] = useState(false);
  const [isLoading, setIsLoading] = useState(false);
  const [times, setTimes] = useState<Uint8Array>(
    new Uint8Array(FFT_SIZE).fill(127)
  );
  const [freqs, setFreqs] = useState<Uint8Array>(
    new Uint8Array(FFT_SIZE / 2).fill(0)
  );

  const audioContextRef = useRef<AudioContext | null>(null);
  const bufferSourceRef = useRef<AudioBufferSourceNode | null>(null);
  const audioBufferRef = useRef<AudioBuffer | null>(null);
  const analyserRef = useRef<AnalyserNode | null>(null);

  const handlePlayPause = () => {
    if (isPlaying) {
      bufferSourceRef.current?.stop();
    } else {
      if (!audioContextRef.current || !analyserRef.current) {
        return;
      }

      // Route the source through the analyser instead of connecting
      // it directly to the destination.
      bufferSourceRef.current = audioContextRef.current.createBufferSource();
      bufferSourceRef.current.buffer = audioBufferRef.current;
      bufferSourceRef.current.connect(analyserRef.current);

      bufferSourceRef.current.start();

      requestAnimationFrame(draw);
    }

    setIsPlaying((prev) => !prev);
  };

  const draw = () => {
    if (!analyserRef.current) {
      return;
    }

    const timesArrayLength = analyserRef.current.fftSize;
    const frequencyArrayLength = analyserRef.current.frequencyBinCount;

    const timesArray = new Uint8Array(timesArrayLength);
    analyserRef.current.getByteTimeDomainData(timesArray);
    setTimes(timesArray);

    const freqsArray = new Uint8Array(frequencyArrayLength);
    analyserRef.current.getByteFrequencyData(freqsArray);
    setFreqs(freqsArray);

    requestAnimationFrame(draw);
  };

  useEffect(() => {
    if (!audioContextRef.current) {
      audioContextRef.current = new AudioContext();
    }

    if (!analyserRef.current) {
      analyserRef.current = audioContextRef.current.createAnalyser();
      analyserRef.current.fftSize = FFT_SIZE;
      analyserRef.current.smoothingTimeConstant = 0.8;

      // The analyser passes the audio through unchanged, so it sits
      // between the source and the destination.
      analyserRef.current.connect(audioContextRef.current.destination);
    }

    const fetchBuffer = async () => {
      setIsLoading(true);
      audioBufferRef.current = await fetch(
        'https://software-mansion.github.io/react-native-audio-api/audio/music/example-music-02.mp3'
      )
        .then((response) => response.arrayBuffer())
        .then((arrayBuffer) =>
          audioContextRef.current!.decodeAudioData(arrayBuffer)
        );

      setIsLoading(false);
    };

    fetchBuffer();

    return () => {
      audioContextRef.current?.close();
    };
  }, []);

  return (
    <View style={{ flex: 1 }}>
      <View style={{ flex: 0.2 }} />
      <View
        style={{ flex: 0.5, justifyContent: 'center', alignItems: 'center' }}>
        {isLoading && <ActivityIndicator color="#FFFFFF" />}
        <View
          style={{
            justifyContent: 'center',
            flexDirection: 'row',
            marginTop: 16,
          }}>
          <Button
            onPress={handlePlayPause}
            title={isPlaying ? 'Pause' : 'Play'}
            disabled={!audioBufferRef.current}
            color={'#38acdd'}
          />
        </View>
      </View>
    </View>
  );
};

export default AudioVisualizer;

We utilize the requestAnimationFrame method to continuously fetch and update real-time audio visualization data.
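
One thing worth noting: as written above, the draw loop keeps scheduling new frames even while playback is paused. If you would rather stop it on pause, a minimal sketch of one possible refinement (the rafIdRef name is our own, not part of the tutorial code) looks like this:

const rafIdRef = useRef<number | null>(null);

const draw = () => {
  /* ...read the analyser data as above... */

  // Remember the handle so the loop can be cancelled later.
  rafIdRef.current = requestAnimationFrame(draw);
};

// In handlePlayPause, when stopping playback:
if (rafIdRef.current !== null) {
  cancelAnimationFrame(rafIdRef.current);
  rafIdRef.current = null;
}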

Visualize time-domain and frequency data

To render both the time-domain and frequency-domain visualizations, we will use our beloved graphics library, react-native-skia.

If you would like to learn more about what the time and frequency domains are, have a look at the Time domain vs Frequency domain section of the AnalyserNode documentation, which explains those terms in detail; otherwise, here is the code:

Time domain

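A minimal sketch of how the times array could be drawn as a waveform with react-native-skia; the TimeChart name, its props, and the styling values are assumptions of this sketch rather than the original example's API:

import React, { useMemo } from 'react';
import { Canvas, Path, Skia } from '@shopify/react-native-skia';

interface TimeChartProps {
  data: Uint8Array; // time-domain bytes, 0-255, 127 ≈ silence
  width: number;
  height: number;
}

const TimeChart: React.FC<TimeChartProps> = ({ data, width, height }) => {
  const path = useMemo(() => {
    const p = Skia.Path.Make();
    const step = width / data.length;

    data.forEach((value, index) => {
      // Map the byte range [0, 255] onto the canvas height.
      const x = index * step;
      const y = (value / 255) * height;

      if (index === 0) {
        p.moveTo(x, y);
      } else {
        p.lineTo(x, y);
      }
    });

    return p;
  }, [data, width, height]);

  return (
    <Canvas style={{ width, height }}>
      <Path path={path} style="stroke" strokeWidth={2} color="#38acdd" />
    </Canvas>
  );
};

export default TimeChart;

Inside the visualizer it could be rendered as, for example, <TimeChart data={times} width={300} height={120} />.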

Frequency domain

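And a matching sketch for the frequency bars; again, the FreqChart name and props are assumptions of this sketch rather than the original example's API:

import React from 'react';
import { Canvas, Rect } from '@shopify/react-native-skia';

interface FreqChartProps {
  data: Uint8Array; // frequency bytes, 0-255 per bin
  width: number;
  height: number;
}

const FreqChart: React.FC<FreqChartProps> = ({ data, width, height }) => {
  const barWidth = width / data.length;

  return (
    <Canvas style={{ width, height }}>
      {Array.from(data).map((value, index) => {
        // Louder bins get taller bars; bars grow upward
        // from the bottom edge of the canvas.
        const barHeight = (value / 255) * height;

        return (
          <Rect
            key={index}
            x={index * barWidth}
            y={height - barHeight}
            width={barWidth * 0.8}
            height={barHeight}
            color="#38acdd"
          />
        );
      })}
    </Canvas>
  );
};

export default FreqChart;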

Summary

In this guide, we have learned how to extract audio data using the AnalyserNode, what types of data we can obtain, and how to visualize them. To sum up:

  • AnalyserNode is a "sniffer" node that extracts audio data without modifying the stream.
  • There are two domains of audio data: frequency and time.
  • We have learned how to use that data to create a simple animation.

What's next?

I’m not sure, but give yourself a pat on the back – you’ve earned it! More guides are on the way, so stay tuned! 🎼