React is a great library for designing user interfaces, but what about rendering sounds? In this talk we'll dive deeper into handling sounds, building a transformation that converts any input audio signal into images.
Visualize your music with React, a sound design experiment
AI Generated Video Summary
Claudia Bressi, a senior front-end and mobile developer, combines music and computer science in her sound design project using React and the Web Audio API. The project supports different audio file formats and utilizes powerful methods for analyzing and visualizing sounds. Custom hooks are used for modularity and code reuse. The project is divided into four areas: shape, color, animation, and playback. Future improvements include making it a progressive web app, adding ML algorithms, server-side rendering, and optimizing performance with workers.
1. Introduction to the Sound Design Project
I'm Claudia Bressi, a senior front-end and mobile developer from Italy. I'm passionate about music and computer science, and I've combined them in my sound design project using React. The project visualizes music using the Web Audio API and supports different audio file formats. We use React, TypeScript, Redux Toolkit, Framer Motion, and Material UI for development. The Web Audio API provides powerful methods for analyzing, filtering, and visualizing sounds, and we use its AnalyserNode and AudioWorklet interfaces in this project. Browser compatibility is a consideration, and we have built custom hooks for modularity and code reuse.
Hi everyone. I'm really grateful to join React Advanced today to present my sound design project. I'm Claudia Bressi, and I am a senior front-end and mobile developer. I live near Venice, in Italy, and I work for a consulting agency in Milan, where we build applications using React, and React Native for mobile.
I'm really passionate about both music and computer science. As a child, I started learning piano and using the computer in my father's office. Today, I still love playing instruments like electronic keyboard and guitar, listening to my favorite playlists, and going to gigs and festivals. That's why I tried to combine a front-end project with music.
Let's talk about the project. The main goal is to visualize music using the React library. The application can handle any type of audio file, for example MP3 or WAV, and as a result we generate a visual component for each sound spectrum. Let's dive deeper into the project details. The code is written in React and TypeScript for easier maintainability. For handling global state throughout the application, I've set up Redux Toolkit. For animating components, I've opted for the Framer Motion library, which is a modern and straightforward package. And for global UI components, like buttons, inputs, and typography, I've chosen the well-known Material UI, which many of you probably already use in your React applications.
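The talk doesn't show the store setup itself, but as a rough sketch of how Redux Toolkit might hold global player state in a project like this (the slice name, fields, and actions below are assumptions, not the project's actual code):

```ts
// Hypothetical sketch of a Redux Toolkit slice for player state.
import { configureStore, createSlice, PayloadAction } from "@reduxjs/toolkit";

interface PlayerState {
  fileName: string | null; // currently loaded audio file
  isPlaying: boolean;
}

const initialState: PlayerState = { fileName: null, isPlaying: false };

const playerSlice = createSlice({
  name: "player",
  initialState,
  reducers: {
    // Load a new file and pause playback until the user hits play.
    loadFile(state, action: PayloadAction<string>) {
      state.fileName = action.payload;
      state.isPlaying = false;
    },
    togglePlayback(state) {
      state.isPlaying = !state.isPlaying;
    },
  },
});

export const { loadFile, togglePlayback } = playerSlice.actions;
export const store = configureStore({ reducer: { player: playerSlice.reducer } });
```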
The Web Audio API is an amazing library that lets you deal with audio data. It gives us developers a rich set of methods for analyzing, filtering, and visualizing sounds. There are also multiple options for adding 3D spatial effects, panning, multichannel splitting, and much more. For this project, we mainly use two interfaces: the AnalyserNode, which gives us access to specific audio information such as frequencies, and the AudioWorklet, which lets us add new modules that execute off the main thread. At a high level, every time you work with the Web Audio API, the generated result is a directed acyclic graph, where each node is either an audio source, a filter effect, or a destination target. Through the Web Audio API, visualizations are achieved by sampling audio parameters as they change over time; most of the time these variables are gain, pitch, and frequency. The already mentioned AnalyserNode leaves the audio signal passing through it unaltered, while outputting audio data that can be passed to a visualization technology such as the HTML canvas or, in our case, React function components. An unexpected fun fact is browser compatibility: as you can see, Internet Explorer is not supported at all. Luckily for us developers, this browser will be retired next year, on June 15.
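As a minimal sketch of the graph just described (not the project's actual code), here is how an AnalyserNode can sit between an audio element and the speakers, leaving the signal untouched while exposing frequency data:

```ts
// Minimal audio graph: source -> analyser -> destination.
const audioElement = document.querySelector("audio")!;
const audioContext = new AudioContext();

const source = audioContext.createMediaElementSource(audioElement);
const analyser = audioContext.createAnalyser();
analyser.fftSize = 2048; // yields 1024 frequency bins

source.connect(analyser);
analyser.connect(audioContext.destination); // signal still reaches the speakers

// Read the current spectrum (one byte per bin, 0-255) on every frame.
const bins = new Uint8Array(analyser.frequencyBinCount);
function draw() {
  analyser.getByteFrequencyData(bins);
  // ...feed `bins` into a React component or a canvas here
  requestAnimationFrame(draw);
}
draw();
```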
2. Improving the Project and Final Thoughts
To improve our project, I've built custom hooks for modularity and reuse. useFrequency extracts frequency values, useWaveform gets sound spectrum data, usePitch reads pitch values, and useGain pulls out volume. We split the project into four areas: shape driven by pitch, color driven by useFrequency, animation with Framer Motion, and playback via the HTML audio element. Tools like the Audion Chrome extension and the Web Audio DevTools extension are useful for debugging and inspecting the audio graph. Next, we'll make the project a progressive web app, add ML algorithms, server-side rendering, and optimize performance with workers. Experiment with React, audio data, and the Web Audio API in your projects. Share your thoughts with the hashtag I've created. Thanks for listening and enjoy the conference!
In order to improve our project, I've built some custom hooks to improve modularity and reuse throughout the codebase. useFrequency extracts the frequency values from the audio file. useWaveform gets the whole sound spectrum as time-domain data. usePitch reads the current pitch value. And finally, useGain pulls the volume out of the input.
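The hooks themselves aren't shown in the talk; here is a hypothetical sketch of what a hook like useFrequency could look like, assuming an AnalyserNode is created elsewhere and passed in:

```tsx
// Hypothetical sketch only: the hook's name matches the talk, but its shape is an assumption.
import { useEffect, useRef, useState } from "react";

export function useFrequency(analyser: AnalyserNode | null): Uint8Array {
  const [data, setData] = useState<Uint8Array>(new Uint8Array(0));
  const frame = useRef(0);

  useEffect(() => {
    if (!analyser) return;
    const bins = new Uint8Array(analyser.frequencyBinCount);

    const tick = () => {
      analyser.getByteFrequencyData(bins);
      setData(new Uint8Array(bins)); // copy so React sees a new reference
      frame.current = requestAnimationFrame(tick);
    };
    frame.current = requestAnimationFrame(tick);

    return () => cancelAnimationFrame(frame.current);
  }, [analyser]);

  return data;
}
```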
Having a rich data visualization is fundamental, and to achieve it I split the work into four areas. For giving components a specific shape, I've used the pitch audio parameter. For coloring them, I've applied the useFrequency custom hook. For animating the UI, I've opted for the Framer Motion API. And I've used the HTML audio tag to provide playback of the already uploaded audio files.
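Again hypothetically, the color and animation areas could meet in a component like the one below, where the average spectrum energy drives the hue and Framer Motion animates the shape (the component and prop names are my own, not the talk's):

```tsx
// Hypothetical sketch: average frequency energy mapped to color and scale.
import { motion } from "framer-motion";

function FrequencyBlob({ bins }: { bins: Uint8Array }) {
  // Average spectrum energy (0-255) mapped onto a hue (0-360).
  const avg = bins.length
    ? bins.reduce((sum, v) => sum + v, 0) / bins.length
    : 0;
  const hue = Math.round((avg / 255) * 360);

  return (
    <motion.div
      animate={{ scale: 1 + avg / 255, backgroundColor: `hsl(${hue}, 80%, 50%)` }}
      transition={{ duration: 0.1, ease: "linear" }}
      style={{ width: 120, height: 120, borderRadius: "50%" }}
    />
  );
}
```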
Tooling is a crucial part of development, letting us inspect and troubleshoot our code. Audion is a useful Chrome plugin for debugging any audio project. Then, the Web Audio DevTools extension comes in handy for inspecting the generated audio graph.
As a next step, I would improve the whole project by enabling it to work as a progressive web application, making it a universally accessible tool. I would also enhance the features by adding some machine learning algorithms, server-side rendering, and the ability to use workers for performance optimization.
Finally, I'll leave you with some reflections. React is a great library for expressing web development in a creative way, so I suggest you try and experiment as much as possible in your personal projects, in particular coding with audio data and the Web Audio API, which is very simple to apply. If you ever try coding on this topic, please share your thoughts using the hashtag I've created below. It would be awesome to see your creations. So yeah, thanks for listening. I hope you found this topic interesting, and enjoy the rest of the conference.