How Digital Tools Can Help Us Understand Music Analysis

I have always had a deep interest in understanding how music works. Although my relationship with music and the way I play my instrument are personal, studying famous musical works deepens my understanding of how other musicians and composers communicate through their music. I have completed undergraduate and master's degrees studying music and the science behind it, yet even now, in my Ph.D. research, I am constantly reminded of how much more there is to learn.

Despite having many years of music study under my belt, I sometimes feel alienated when I can't follow an analysis of a famous Classical composition because I do not know what the music in question actually sounds like. I also find myself wondering how much an analyst's descriptions reflect their own subjective impressions versus quantifiable musical patterns.

This curiosity led me to wonder whether statistical methods can clarify musical patterns in analyses of well-known works. Specifically, I was interested in examining the cues associated with music’s emotional meaning (such as pitch, timing, and loudness). Evidently, I wasn’t the only one mulling over this question. Because modern technology allows researchers to maintain digital repositories of music in various forms (written music, audio files, and digitized renditions), they can conduct ambitious analyses chronicling how music has changed across history (Weiß et al., 2019; Horn & Huron, 2015; Mauch et al., 2015). However, despite the growing contribution of these musical analyses to our understanding of how music works, I still understand descriptions best when I listen to the music in question.  

During my graduate residency with the Lewis & Ruth Sherman Centre for Digital Scholarship, I decided to develop a web application that supports interactive musical analysis. The app lets users listen directly to musical examples while exploring data about pitch, timing, and loudness. This can facilitate musical discoveries while mitigating the interpretive biases that can arise from analyzing visual information without hearing its auditory counterpart. To create the application, I adopted datasets of encoded timing and pitch information from musical excerpts by Bach and Chopin, annotated by my colleague Max Delle Grazie under the supervision of Dr. Michael Schutz at McMaster's MAPLE Lab. Max and I also collected information about each piece's loudness by examining the average sound level in performances by Pietro De Maria (performing Bach) and Vladimir Ashkenazy (performing Chopin); the excerpts of these performances can be heard by clicking on the labels in the first two plots. The app can also be explored here.
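For readers curious about the loudness measure, the sketch below shows one straightforward way to estimate a recording's average root-mean-squared (RMS) amplitude. It is a minimal illustration in Python using the NumPy and soundfile libraries; the file name and frame size are hypothetical, and our actual measurement pipeline differed in its details.

```python
import numpy as np
import soundfile as sf

def mean_rms(path, frame_size=2048):
    """Estimate average loudness as the mean frame-wise RMS amplitude."""
    samples, sample_rate = sf.read(path)
    if samples.ndim > 1:                       # mix stereo down to mono
        samples = samples.mean(axis=1)
    # Split the signal into equal frames, compute each frame's RMS,
    # then average across the whole excerpt.
    n_frames = len(samples) // frame_size
    frames = samples[: n_frames * frame_size].reshape(n_frames, frame_size)
    return np.sqrt((frames ** 2).mean(axis=1)).mean()

# Hypothetical file name, for illustration only.
print(mean_rms("bach_excerpt.wav"))
```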

These musical features connect directly to perception: we can easily distinguish low versus high passages, slow versus fast ones, and soft versus loud ones. A fourth cue, called mode, is an abstract yet important organizational one, associated with our perception of whether the music sounds happy or sad. This happy/sad description of mode is somewhat simplistic but often accurate to how we perceive a piece of music (Gagnon & Peretz, 2003; Crowder, 1984). A distinctive feature of the analyzed repertoire is that it contains an equal number of pieces in the Western major and minor modes (typically associated with happy and sad emotional connotations, respectively). This balance in mode lets users compare how the composers vary pitch, timing, and loudness in major versus minor keys.

After deciding on which cues to analyze, I looked for a suitable statistical method. In my readings, I noticed that many music studies employ a method called cluster analysis to describe differences between musical works from different historical periods. Cluster analysis is considered an “unsupervised” machine learning method: although it doesn't know what the objects it groups together actually are, it uses patterns of similarity across their features to reveal structure in a dataset. An analyst can then listen to the musical works within each cluster to interpret their unifying characteristics. There are many different cluster analysis algorithms, however, and each may discover qualitatively different patterns. Eliminating interpretive bias is just about impossible, especially with abstract stimuli like music, where there is no “ground truth” about precisely how two pieces are similar. However, informed decisions and a thorough understanding of the clustering method can help us discover interesting musical patterns.
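To make this concrete, here is a minimal sketch of hierarchical (agglomerative) clustering, the family of methods behind dendrograms like the one in the app. The cue values and piece labels below are invented for illustration, and Python with SciPy is simply my choice here; the app itself may be built differently.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

# Invented cue values for six pieces (uppercase = major, lowercase = minor).
pieces = ["A", "b", "C", "d", "E", "f"]
cues = np.array([
    # pitch height, attack rate, RMS amplitude
    [67.0, 5.2, 0.12],
    [62.5, 2.1, 0.05],
    [70.1, 6.8, 0.15],
    [58.3, 6.5, 0.14],
    [64.0, 3.3, 0.07],
    [61.2, 2.5, 0.06],
])

# Standardize each cue so no single feature dominates the distance
# calculation, then build the cluster tree with Ward's linkage.
Z = linkage(zscore(cues, axis=0), method="ward")

# Cut the tree into a chosen number of clusters and report membership.
labels = fcluster(Z, t=2, criterion="maxclust")
for piece, label in zip(pieces, labels):
    print(piece, "-> cluster", label)
```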

Researchers must make several decisions about how best to measure distances between pieces and between clusters, beginning with how many clusters to study in the first place. Because scholars often use visualization methods to determine the optimal number of clusters (Friedman et al., 2001), I did the same and found that the best number of clusters for demarcating each composer's compositions was two. However, this approach did not yield particularly interesting findings: for both composers, two clusters simply separated the major pieces from the minor ones.
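There are several ways to visualize cluster quality across candidate numbers of clusters. The sketch below uses the silhouette score, which is one common option rather than necessarily the method I used; it reuses the invented cue matrix from the previous sketch.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore
from sklearn.metrics import silhouette_score

# The same invented cue matrix as in the clustering sketch above.
cues = np.array([[67.0, 5.2, 0.12], [62.5, 2.1, 0.05], [70.1, 6.8, 0.15],
                 [58.3, 6.5, 0.14], [64.0, 3.3, 0.07], [61.2, 2.5, 0.06]])
X = zscore(cues, axis=0)
Z = linkage(X, method="ward")

# Score each candidate number of clusters and plot the results;
# higher silhouette scores indicate more coherent clusters.
ks = range(2, 6)
scores = [silhouette_score(X, fcluster(Z, t=k, criterion="maxclust"))
          for k in ks]

plt.plot(list(ks), scores, marker="o")
plt.xlabel("Number of clusters (k)")
plt.ylabel("Silhouette score")
plt.show()
```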

Hoping to discover more nuanced and helpful cue patterns for understanding expressive groupings, I doubled the number of clusters to four. The results of this cluster analysis can be seen above in the interactive application. The visualizations I developed include a dendrogram (a tree-like visualization of the cluster analysis, described in detail below), as well as 2- and 3-dimensional scatter plots of the analyzed cues, where timing and loudness are referred to by their technical names (attack rate and root-mean-square [RMS] amplitude, respectively). In the 2-dimensional scatter plot, you can change the axes yourself; the 3-dimensional scatter plot visualizes all three cues at once, and you can use your mouse to rotate and zoom it.
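For anyone who wants to reproduce a static version of such a tree at home, the sketch below draws a dendrogram from the same invented cue matrix used in the earlier sketches; the app's interactive plots were built separately.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.stats import zscore

# The same invented pieces and cue matrix as in the earlier sketches.
pieces = ["A", "b", "C", "d", "E", "f"]
cues = np.array([[67.0, 5.2, 0.12], [62.5, 2.1, 0.05], [70.1, 6.8, 0.15],
                 [58.3, 6.5, 0.14], [64.0, 3.3, 0.07], [61.2, 2.5, 0.06]])
Z = linkage(zscore(cues, axis=0), method="ward")

# Branch heights reflect how dissimilar the joined pieces or clusters are.
dendrogram(Z, labels=pieces)
plt.ylabel("Linkage distance")
plt.title("Pieces clustered by pitch, attack rate, and RMS amplitude")
plt.tight_layout()
plt.show()
```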

Both composers’ pieces divide into two major clusters and two minor clusters. The dendrogram (first plot) shows which pieces are most similar. Specifically, the lengths of the horizontal segments (or branches) connecting individual pieces indicate which are most similar when considering timing, pitch, loudness, and mode jointly. On the left side of the dendrogram (showing Bach’s compositions), ‘b’ and ‘c’ in the yellow cluster are most closely connected. In the 2D and 3D scatterplots you can see that these observations are very similar in terms of pitch, timing (called attack rate), and loudness (referred to as RMS amplitude). Both pieces are also in a minor key (as indicated by the lowercase letters, distinguishing them from the major pieces in uppercase letters). Because of these similarities in cue features, the pieces’ closeness in the dendrogram makes sense. Conversely, large differences in branch length indicate that two pieces are dissimilar. Again looking at Bach’s compositions, the branches of ‘a’ and ‘d’ in the gold-coloured cluster show a greater difference in height, suggesting these pieces are less similar than others, such as ‘a’ and ‘f’. You can see this difference in the 2D explorer by displaying only the pieces in Bach’s first cluster and changing the Y-axis to show “RMS” and “Pitch Height.” Although ‘a’ and ‘d’ are similar in terms of timing (i.e., attack rate), they differ markedly in both pitch and loudness (RMS amplitude). It can help to hear these differences by clicking on the excerpts.

To understand the unifying characteristics within each cluster, I used the 2D explorer to isolate individual clusters and modified the X and Y axes to see which features the pieces shared. I also clicked on each piece to hear these cue similarities for myself. For Bach, the yellow cluster features slow-to-walking-pace minor pieces with mid-high pitch and varied RMS amplitude (our loudness measure). The gold cluster features fast minor pieces that are typically loud (excluding ‘d’) and varied in pitch. The pink cluster contains near-walking-pace major pieces with mid-high pitch and varied loudness. Lastly, the red cluster features fast major pieces that are relatively loud and low-pitched.

For Chopin, the purple cluster includes fast minor pieces that are loud (and varied in pitch). In contrast, the light blue cluster features slow minor pieces that are softer (and likewise varied in pitch). The olive-green cluster features walking-pace major pieces with varied loudness and pitch. Finally, the dark green cluster features fast major pieces that are moderate in both loudness and pitch.

I hope this blog post highlights how digital tools can help us understand music analysis and communicate our findings, while also enabling us to listen critically to the works we analyze. Now that you know how to navigate the visualizations, I encourage you to embark on your own analytical journey and explore the cluster analyses more closely.

I am grateful to my colleagues at the Sherman Centre for their support and suggestions on the application. If you have any questions about this application, or would like to discuss cluster analysis, feel free to reach me at my email address (andersoc@mcmaster.ca) and check out some of the other interesting music research at our lab (https://maplelab.net). 

Acknowledgments:

This work would not have been possible without the support of the staff and residents of the Sherman Centre. In particular, I am grateful to Brianne Morgan for her feedback on this post, as well as Veronica Litt and Andrea Zeffiro for their helpful suggestions and mentorship over the course of my residency. I am also grateful to Jeffrey Demaine and John Fink for sharing their programming expertise with me, which led to improvements in the interactive app.

References 

Crowder, R. G. (1984). Perception of the major/minor distinction: I. Historical and theoretical foundations. Psychomusicology: A Journal of Research in Music Cognition, 4(1–2), 3.

Friedman, J., Hastie, T., & Tibshirani, R. (2001). The elements of statistical learning. New York: Springer.

Gagnon, L., & Peretz, I. (2003). Mode and tempo relative contributions to “happy-sad” judgements in equitone melodies. Cognition and Emotion, 17(1), 25–40.

Horn, K., & Huron, D. (2015). On the changing use of the major and minor modes 1750–1900. Music Theory Online, 21(1), 11.

Mauch, M., MacCallum, R. M., Levy, M., & Leroi, A. M. (2015). The evolution of popular music: USA 1960–2010. Royal Society Open Science, 2(5), 150081.

Weiß, C., Mauch, M., Dixon, S., & Müller, M. (2019). Investigating style evolution of Western classical music: A computational approach. Musicae Scientiae, 23(4), 486–507.
