Visualize your music with React, a sound design experiment


React is a great library for designing user interfaces, but what about rendering sounds? In this talk we'll dive deeper into handling sound, building a transformation that converts any input vibration into images.


Hi everyone, I'm really grateful to join React Advanced today to present my sound design project. I'm Claudia Bressi and I'm a senior front-end and mobile developer. I live near Venice, in Italy, and I work for a consulting agency in Milan where we build applications using React, and React Native for mobile. I'm really passionate about both music and computer science: as a child I started learning piano, and I also started using the computer in my father's office. Today I still love playing instruments like electronic keyboard and guitar, listening to my favorite playlists, and going to gigs and festivals. That's why I tried to combine a front-end project with music.

Let's talk about the project. The main goal is to visualize music using the React library. The application can handle any type of audio file, for example MP3 or WAV files, and as a result we generate a visual component for each sound spectrum.

Let's dive deeper into the project details. The code is React and TypeScript, for easier maintainability. For handling global state throughout the application I've set up Redux Toolkit, which cuts down on boilerplate code. For animating components I've opted for the Framer Motion library, which is a modern and straightforward package. For global UI components like buttons, inputs, and typography I've chosen the well-known Material UI, which many of you probably already use in your React applications.

The Web Audio API is an amazing library that lets you deal with audio data. It gives us developers a rich set of methods to analyze, filter, and visualize sounds. There are also multiple options for adding 3D spatial effects, panning, multi-channel splitting, and much more. For this project we mainly use two interfaces: the AnalyserNode, which lets us read specific audio information such as frequencies, and the AudioWorklet, which lets us add new audio-processing modules that execute off the main thread.
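As a rough sketch of how the AnalyserNode fits in, the setup below wires an audio element through an analyser into the speakers. This is not the talk's actual code: the function names and the `fftSize` value are assumptions, and the browser-only part (`AudioContext`, `HTMLMediaElement`) is shown only for shape.

```typescript
// Hypothetical setup: route an <audio> element through an AnalyserNode.
// Browser-only; the names here are assumptions, not the project's code.
function createAnalyser(ctx: AudioContext, audioEl: HTMLMediaElement): AnalyserNode {
  const source = ctx.createMediaElementSource(audioEl);
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;            // yields 1024 frequency bins
  source.connect(analyser);
  analyser.connect(ctx.destination);  // the analyser passes audio through unaltered
  return analyser;
}

// Pure helper: average byte amplitude (0-255) of one frame of frequency data,
// the kind of per-frame number a visual component could react to.
function averageAmplitude(data: Uint8Array): number {
  if (data.length === 0) return 0;
  return data.reduce((sum, v) => sum + v, 0) / data.length;
}
```

Each animation frame you would fill a `Uint8Array` via `analyser.getByteFrequencyData(...)` and feed a summary like `averageAmplitude` into component props.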
At a high level, every time you work with the Web Audio API the result is a directed acyclic graph, where each node is either an audio source, a filter effect, or a destination target. With the Web Audio API, visualizations are achieved by sampling audio parameters as they vary over time; these variables are most often gain, pitch, and frequencies. The Web Audio API provides the already mentioned AnalyserNode, which leaves the audio signal passing through it unaltered, while outputting audio data that can be passed to a visualization technology such as an HTML canvas or, in our case, React functional components. An unexpected fun fact is browser compatibility: as you can see, Internet Explorer is not supported at all. Luckily for us developers, this browser will be retired next year, on June 15. To improve the project's modularity and reuse throughout the codebase, I've built some custom hooks.
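The source → filter → destination graph described above can be sketched as follows. This is an illustrative assumption, not the talk's code: the filter type, cutoff, and function names are made up, and the graph-building part requires a browser `AudioContext`.

```typescript
// Hypothetical graph: source -> filter effect -> destination.
// Assumes an existing AudioContext and a decoded AudioBuffer (browser-only).
function buildGraph(ctx: AudioContext, buffer: AudioBuffer): AudioBufferSourceNode {
  const source = ctx.createBufferSource();  // audio source node
  source.buffer = buffer;
  const filter = ctx.createBiquadFilter();  // filter-effect node
  filter.type = "lowpass";
  filter.frequency.value = 1000;            // attenuate content above ~1 kHz
  source.connect(filter);                   // edges of the acyclic graph
  filter.connect(ctx.destination);          // destination target node
  return source;                            // call source.start() to play
}

// Pure visualization step: scale one frame of byte frequency data
// (0-255 per bin) into pixel heights, e.g. for a bar-style component.
function toBarHeights(bins: Uint8Array, maxHeight: number): number[] {
  return Array.from(bins, (v) => (v / 255) * maxHeight);
}
```

Since connections only ever flow from sources toward the destination, the graph stays acyclic by construction; the analyser from earlier would slot in as just another pass-through node on this chain.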

useFrequency extracts the frequency values from the audio file. useWaveform gets the whole sound spectrum in terms of time-domain data. usePitch reads the current pitch value of the sound. Finally, useGain pulls the volume out of the input. Having a rich data visualization is fundamental, and to achieve it I split the work into four areas: for giving components a specific shape I've used the pitch audio parameter; for coloring components I've applied the useFrequency custom hook; for animating the UI I've opted for the Framer Motion API; and I've used the HTML audio tag to provide playback of the already uploaded audio files.
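To make the hooks concrete, here is a guess at the kind of pure arithmetic that could sit inside them: an RMS volume estimate such as useGain might compute, and a frequency-to-hue mapping such as the useFrequency-based coloring might use. All names and formulas here are assumptions, not the project's actual implementation.

```typescript
// Convert one byte of time-domain data (0-255, 128 = silence) to -1..1.
function byteToSample(v: number): number {
  return (v - 128) / 128;
}

// Root-mean-square of a frame: a simple volume estimate,
// the sort of value a useGain hook could expose.
function rms(frame: Uint8Array): number {
  if (frame.length === 0) return 0;
  let sum = 0;
  for (const v of frame) {
    const s = byteToSample(v);
    sum += s * s;
  }
  return Math.sqrt(sum / frame.length);
}

// Map a dominant frequency (Hz) onto an HSL hue (0-360) for coloring
// components, as the useFrequency-driven coloring might do.
function frequencyToHue(freq: number, maxFreq = 20000): number {
  const clamped = Math.min(Math.max(freq, 0), maxFreq);
  return Math.round((clamped / maxFreq) * 360);
}
```

A component could then render with `hsl(${frequencyToHue(freq)}, 80%, 50%)` and scale itself by the RMS value, re-running on every animation frame as the hooks push new data.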

Having tools to inspect and troubleshoot our code is a crucial part of development. Audion is a useful Chrome plugin for debugging any Web Audio project. Then we have the Web Audio DevTools extension, which comes in handy for inspecting the generated audio graph. As a next step I would improve the whole project by enabling it to run as a progressive web application, thus having a universally accessible tool. I would also enhance the features by adding some machine learning algorithms, server-side rendering, and the ability to use workers for performance optimizations. Finally, I leave you with some reflections. React is a great library for expressing web development in a creative way, so I suggest you experiment as much as possible in your personal projects, in particular by coding with audio data and the Web Audio API, which is very simple to apply. If you ever try to code on this topic, please share your thoughts using the hashtag I've created below. It would be great to see your creations. So yeah, thanks for listening. I hope you found this topic interesting. Enjoy the rest of the conference. Bye!

9 min
22 Oct, 2021
