This project explores what music would look like as a fractal. It consists of two source files, one written in Python and one in C++. The Python code takes a song or sound in .wav format as input and analyzes it offline, producing a list of parameters that is passed to the C++ program, which renders a series of fractal images that respond to the sound. Once all the images are created, they are fed into ffmpeg along with the sound file to produce the final video.
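As a rough sketch of the final muxing step, the ffmpeg invocation can be assembled from Python before the C++ renderer's frames are stitched together with the audio. The file names, frame pattern, and codec choices below are assumptions for illustration, not the project's actual values; only the 43-frames-per-second rate comes from the analysis described later.

```python
import subprocess

# Hypothetical paths and naming scheme; the real project's files may differ.
FRAME_PATTERN = "frames/frame_%05d.png"
AUDIO_FILE = "input.wav"
OUTPUT_FILE = "output.mp4"
FPS = 43  # one video frame per analysis chunk (43 chunks per second)

def build_ffmpeg_command(frame_pattern, audio_file, output_file, fps):
    """Assemble the ffmpeg call that muxes the rendered fractal
    frames with the original audio track."""
    return [
        "ffmpeg",
        "-framerate", str(fps),  # frame rate must match the analysis rate
        "-i", frame_pattern,     # numbered image sequence from the C++ renderer
        "-i", audio_file,        # original .wav file
        "-c:v", "libx264",       # a common, widely supported video codec
        "-pix_fmt", "yuv420p",   # pixel format most players accept
        "-shortest",             # stop when the shorter input ends
        output_file,
    ]

cmd = build_ffmpeg_command(FRAME_PATTERN, AUDIO_FILE, OUTPUT_FILE, FPS)
# subprocess.run(cmd, check=True)  # uncomment to actually invoke ffmpeg
```

Building the command as a list (rather than a shell string) avoids quoting issues when paths contain spaces.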
The frequency content was analyzed with a set of filters. Each filter was applied to the entire array of samples, without first splitting it into chunks. Once the filtering was done, the array was divided into 43 chunks per second, and the largest absolute value within each chunk was taken as that chunk's representative value.
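The chunking step can be sketched as follows. This is a minimal numpy-based illustration, not the project's actual code: it assumes the filtered signal is already a 1-D array, and it simply drops any trailing samples that do not fill a complete chunk.

```python
import numpy as np

CHUNKS_PER_SECOND = 43

def chunk_peaks(filtered, sample_rate):
    """Split a filtered signal into 43 chunks per second and return the
    peak absolute value of each chunk (its representative value)."""
    chunk_len = sample_rate // CHUNKS_PER_SECOND   # e.g. 44100 // 43 = 1025 samples
    n_chunks = len(filtered) // chunk_len
    trimmed = filtered[: n_chunks * chunk_len]     # drop incomplete trailing chunk
    chunks = trimmed.reshape(n_chunks, chunk_len)
    return np.abs(chunks).max(axis=1)              # largest |value| per chunk

# Example: one second of a 5 Hz sine wave at a 44100 Hz sample rate.
sr = 44100
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 5 * t)
peaks = chunk_peaks(signal, sr)   # 43 peak values, one per chunk
```

Taking the absolute value before the maximum matters: a chunk dominated by a large negative excursion would otherwise report a misleadingly small peak.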
The largest value was chosen because it gave better visual results. At first, the results from all three filters were scaled to fit in the 0-255 color range, with 0 mapped to the smallest value found in the chunk and 255 to the largest. However, this produced numbers that were too close to each other, leaving the image nearly gray most of the time. So instead of scaling each filter's output independently, I scaled all three against the lowest and highest absolute values found in any of the three filters. This method produced better colors and visual results.
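The joint scaling described above can be sketched like this. The helper name and the sample per-chunk values are hypothetical; the point is that the minimum and maximum are computed over all three filters together, so differences between the channels survive into the final colors.

```python
import numpy as np

def scale_to_color(filter_outputs):
    """Scale three filters' chunk values into 0-255 jointly: the smallest
    absolute value across ALL filters maps to 0 and the largest maps to
    255, instead of scaling each filter against its own min and max."""
    stacked = np.abs(np.stack(filter_outputs))   # shape (3, n_chunks)
    lo, hi = stacked.min(), stacked.max()        # global extremes, not per-filter
    if hi == lo:                                 # avoid division by zero on silence
        return np.zeros_like(stacked, dtype=np.uint8)
    scaled = (stacked - lo) / (hi - lo) * 255
    return scaled.astype(np.uint8)

# Hypothetical per-chunk peak values for three filters over three chunks.
low  = np.array([0.1, 0.4, 0.9])
mid  = np.array([0.2, 0.5, 0.3])
high = np.array([0.15, 0.6, 0.45])
rgb = scale_to_color([low, mid, high])   # one 0-255 channel per filter
```

With per-filter scaling, each row would span the full 0-255 range on its own and the three channels would track each other closely, which is exactly the near-gray effect described above; the joint scaling preserves the contrast between filters.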