A trip up to Durham for the Theory and Analysis Graduate Student Conference

I am recently back from a trip to Durham for this year’s Society for Music Analysis Theory and Analysis Graduate Students conference (TAGS). I enjoyed the trip not only as a delegate but as one of the SMA PGR representatives; it was exciting to engage with other PGRs and learn what their research is about. Our community is diverse, and the work everyone is doing is exciting and influential. I especially enjoyed presentations on SoundScape and on the combination of different music-theoretical models into a group model.

This was my first presentation on the results of my listening study (https://musicsimilarity.soton.ac.uk/). Here I analysed the results of two of the study’s questions and the musical materials used in them.

I enjoyed my trip to Durham: it is a beautiful place, with a really influential and growing music department. There is a strong community feel at the university, and it was a pleasure to be a part of that for the weekend.


To reflect upon the presentation I gave in Durham, I was happy with how I presented it overall. I am beginning to form a presentation style, which still needs development, but I am getting there. The nerves are reducing and I am getting into the swing of how to do this ‘academic conference’ lark.

I would say, however, that I tried to cover too much information, so the presentation felt rushed and was not as strong in content as my previous presentations have been. The most useful parts of conferences are the question sessions and the discussions with people afterwards. I found it useful to discuss features such as ‘rhythm’ that I had not considered, along with other aspects of my research. I enjoyed discussing objectivity and subjectivity in music analysis generally, and in my research in particular. I have learnt from this presentation how my research will be received by the Music Analysis community.

I aimed with this presentation to learn how the Music Analysis community would interact with my research. This was my first presentation to the community, and I feel my work was generally well received and that the benefits and applications of my work were understood. I am looking forward to presenting to the community again in July at CityMac, to which my paper has recently been accepted.


Hartley Residency: The Future of Digital Musicology

We were delighted to have Professor Simon McVeigh, from Goldsmiths, University of London, join us for our first Hartley Residency of the 2016-17 academic year. On the 15th-16th November Simon shared with us his work on concert life, with specific emphasis on his collaborative project ‘In Concert’. As this was my first Hartley Residency, I was interested to see how this eminent researcher’s work could influence my knowledge and perception of digital archiving, and I especially looked forward to the second day’s round table discussion on ‘Big Data, Small Data and the Challenges for Musicology’.

Tuesday began with a postgraduate seminar focused on a set of readings on digital archiving and interdisciplinarity, as well as introducing the ideas and concepts that would be prominent in Simon’s keynote. Particular emphasis was placed on Simon’s current project on Edwardian recitals, interrogating what it means to study concert life in the digital age. Discussion in the seminar centred on Simon’s collaborative project ‘In Concert’, which aims to develop a set of new standards for curation by taking a ‘fresh’ approach to building a digital archive. This approach included discussion of how to document such a large range and quantity of sources, including through crowdsourcing. We discussed how crowdsourcing would speed up digitisation, but add an element of risk to quality and consistency.

For me, the most interesting insights came from the use of digital methods in the ‘In Concert’ programme. Through a bottom-up approach, a richer picture of the diversity of music-making across London was created. In Simon’s presentation we were shown a variety of visualisations of how the places of concert-making in the eighteenth and nineteenth centuries changed. In our seminar, I highlighted how making this data digital enables us to combine it with other data through semantic web methods. Open data could let us compare our new knowledge of music-making in the eighteenth and nineteenth centuries with other data such as maps, the population distribution at the time, and the economic wealth of individuals across the city.

Simon McVeigh’s keynote, titled ‘Out of the Box into the Fire: Writing about Edwardian Musical Culture from Multiple Perspectives’, took us on a journey from his early collection of concert data, the ‘Calendar of London Concerts’, through to his recent and ongoing project ‘In Concert’. Interestingly, the end of his presentation introduced ideas of using hypertext fiction in the creation of his next book. He discussed placing datasets behind a ‘book’ that would enable you to travel through the space along a variety of threads, allowing both a breadth of story and individual narratives to be explored in Edwardian musical culture. This part of his talk led to great discussion between Simon and the audience. Popular hypertext fiction concepts such as ‘getting lost in hyperspace’ were debated. Overall, however, the emphasis was placed on developing and utilising the Web in a movement from the scholarly multi-chapter monograph to a ‘reader perspective’ text.

In the tradition of the Hartley Residency, the second day started with a presentation from a researcher in our department. This talk by Thomas Irvine, on ‘Anglo-German musical relations around 1900: After the transnational turn’, began with the question of whether nationalism haunts British music history. Examining the work of Hubert Parry, Tom discussed how the music we perceive as ‘British’ draws heavily on German, other European and oriental musical cultures. He explored how the transnational and global turns in history have cast doubt on the utility of national frameworks for understanding music of the Edwardian period.

To finish off the residency, a stimulating debate was established around Simon’s paper ‘Big Data, Small Data and the Challenges for Musicology’. Here resident lecturers David Bretherton, Richard Polfreman and Mark Everist provided responses to Simon’s paper. The discussion was framed around how we present musicology and how we do musicology. For me, some of the most interesting insights came from these discussions. David Bretherton raised how big data has changed the way we think about sonata theory: his example highlighted how peripheral repertoire such as concertos frequently does not adhere to standard ‘sonata theory’ form, and that Beethoven – on whom most of these theories are based – was the exception, not the norm. Richard Polfreman introduced the concept of ‘graveyards of data’ and how we are stuck with a large amount of our data being non-digital. We should therefore make sure our future data is digital, open and usable for everyone, incorporating ideas of the semantic web. He suggested that we should perhaps look to the future first, and then crowdsource the digitisation of our graveyards of data.

The concepts that resonated with me involved how we can shape the future. We must look at how we work now, and what we can do to make sure our data is reusable: keeping it open, and in standardised formats. Making sure my dissertation is encoded in TEI, the music I work with encoded in MEI, and any databases I collect published openly using semantic web standards – these are all ways in which I could create sustainable and reusable data for future musicologists to utilise and engage with. Academia is moving from the solitary researcher to a combined community, building and developing an interdisciplinary network of researchers.


A blog post written by myself for the University Blog: http://blog.soton.ac.uk/music/2016/11/19/future-digital-musicology-hartley-residency/


How can we establish the original score?

Historical musical pieces make their way to us through multiple documents, and these sources often introduce differences and variants into the music. In the 15th and 16th centuries music was not published, as patrons required composers to produce music for their sole use, not allowing public access to the scores. These restraints led many audience members to notate the music during or after performances, generating scores for the general public for playing and scholarly research. This has left us with multiple scores of music from these periods, with differences and variations clearly shown in the notation.

MeiView is a sample web application that displays the variants found in 15th and 16th century music scores. The user hovers over green dots on a score (in common Western music notation) to view the variations in notation for a particular bar; if there is no variation, there is no green dot above the bar. MeiView is an open-source development, using MEI-to-VexFlow and VexFlow (explained later in this post).



With great success, meiView begins to answer traditional research questions, such as “Which is the original notation of a piece of 15th century music?”. Scholars have long been interested in establishing the original versions of music from the 15th and 16th centuries. This application allows an academic, student or researcher to analyse the displayed data: they can see the variations and draw a detailed conclusion about what the original piece would have sounded like. This is the first application of its sort that I have come across. There are many music applications that display musical notation or written documents, or that allow the user to notate music; however, this is the first I have discovered that begins to answer research questions, and actually starts to compare music for you. This application therefore begins to fully embody the concept of digital humanities research.

MEI, the Music Encoding Initiative, was developed by the scholarly community to represent Western music notation relevant to the historical study of music. It is a platform-independent format based on XML standards, allowing for scholarly analysis and a variety of both digital and print rendering possibilities. Using XML standards like MEI provides a standardised way to transfer information that is understood at both ends. There are other XML-based music standards, such as MusicXML, but “MEI was conceived through a need for storage and exchange of scholarly manuscripts”. Though MusicXML and MEI both encode music notation (notes, staves, key signatures etc.) expressed in XML, MusicXML was devised to represent music notation for sheet music companies, to display music to be printed or played from online scores. MEI allows for the digitisation of sheet music, like MusicXML, but goes further, allowing information about the notation to be encoded in a controlled structure. The two formats were conceived by different groups of people, for different groups of users: MusicXML was made for publishers and those who want to view sheet music online, whereas MEI was conceived by music scholars for scholarly use. MEI is therefore not limited to common Western notation, but also supports historical Western notation systems such as mensural notation and neume notation.
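To make this concrete, here is a small Python sketch that reads a minimal MEI-style fragment. The note attributes (pname, oct, dur) follow MEI’s conventions, but the fragment is a simplified illustration rather than a complete MEI document (real MEI files declare the namespace http://www.music-encoding.org/ns/mei, among much else).

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical MEI-style fragment: one measure with two notes.
mei_fragment = """
<measure n="1">
  <staff n="1">
    <layer n="1">
      <note pname="c" oct="4" dur="4"/>
      <note pname="e" oct="4" dur="4"/>
    </layer>
  </staff>
</measure>
"""

root = ET.fromstring(mei_fragment)
# Pull out (pitch name, octave, duration) for each encoded note.
notes = [(n.get("pname"), n.get("oct"), n.get("dur")) for n in root.iter("note")]
print(notes)  # [('c', '4', '4'), ('e', '4', '4')]
```

Because the notation is structured data rather than a picture, an application like meiView can compare such note lists across sources rather than merely display them.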

Whilst looking into MEI, I discovered a list of its current and previous projects, which allowed me to establish that no other MEI project had used music notation software the way meiView has. MeiView not only displays data, but also displays analysis of scores. Past MEI projects include those which display pages from books, written accounts, scores and notation software. MeiView shows the ability of software to instigate research in musicology, which could allow for a more consistent set of data.

MeiView, along with being a part of the Music Encoding Initiative, also uses VexFlow, a library for rendering music notation within web applications. VexFlow includes sub-projects such as My VexFlow, which allows playable music notation to be embedded within blog posts; meiView uses the core VexFlow code to render its scores. Notably, neither the MEI nor the MusicXML webpages suggests an alternative, and VexFlow is frequently suggested as the best engraving engine for music notation on the web.

With meiView’s ability to display variants in scores and its developments in analysis, I feel there is much more we can hope for in the future of digital music technologies. If we could gain concrete statistics about the similarity between varying scores, the findings of musicologists could be seen as more ‘reliable’ and consistent. Rather than simply stating which we believe to be the correct piece of music, we could give the percentage chance that each variant is correct. Percentages would create a usable formula, consistent across all musical analysis, reducing individual bias and the variation introduced by each analyst’s use of language.

This application opens up the opportunity to analyse music in different ways; an example of structural analysis of music is the project SALAMI. Could analysis take a more mathematical approach, comparing statistics and relationships through figures? One route would be a deviation coefficient, working out how far apart the variant notes are. For example, if one source gives an F for a specific bar and another gives an F#, the difference could easily be put down to faults in the aural skills of the transcriber, as the notes are very close together. However, if the notes were a considerable distance apart, for example a 5th, it would be harder to determine the correct music for that bar. A mathematical approach to music analysis could allow for greater consistency between academics’ research, rather than the variation imposed by individuals’ use of language. This coefficient analysis would not cope with performances that were played wrongly or purposefully varied, although that in itself could generate an interesting analysis of the variations in performances of certain genres or composers. A more quantitative research approach would also allow for more interdisciplinary music research.
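As a sketch of how such a deviation coefficient might work, the following Python compares variant readings by their distance in semitones. The pitch table and the tritone-based weighting are my own illustrative assumptions, not an established formula.

```python
# Semitone position of each pitch class within an octave (illustrative).
SEMITONE = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
            "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def deviation(note_a, note_b):
    """Smallest semitone distance between two pitch classes."""
    d = abs(SEMITONE[note_a] - SEMITONE[note_b]) % 12
    return min(d, 12 - d)

def agreement_score(variants):
    """Crude 0-1 score: 1.0 when all variant readings of a bar agree,
    lower as they diverge. Scaling by a tritone (6 semitones) is an
    assumed weighting for illustration only."""
    pairs = [(a, b) for i, a in enumerate(variants) for b in variants[i + 1:]]
    if not pairs:
        return 1.0
    mean_dev = sum(deviation(a, b) for a, b in pairs) / len(pairs)
    return max(0.0, 1.0 - mean_dev / 6)

# F vs F#: one semitone apart, plausibly a transcription slip.
print(round(agreement_score(["F", "F#"]), 2))  # 0.83
# F vs C: a fifth apart, much harder to decide between.
print(round(agreement_score(["F", "C"]), 2))   # 0.17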

MeiView is a timely and useful application; however, its usability lets it down. It is an innovative idea, showing the similarities and differences between scores, but the sources of the music transcripts used as comparisons are not clearly stated. The list is unclear and does not follow a standard catalogue system, which you would expect from a program such as meiView. The application suffers from the lack of a simple database clearly showing the information known about these sources, such as the date each was notated, who notated it, and where the user could locate the original copy. Adding a simple database to the already open source code would allow easy additions to the information about each score, letting the program develop at the rate our knowledge of the music does. A database which clearly lays out the data would also offer a simple way to find trends and patterns within it. These gaps currently limit the suitability of the program for multiple audiences: clear titles, and the ability to play the music aloud, would give access to children and the general public, while named sources, and additional information on why certain sources were chosen as a ‘main source’, would allow humanities academics to understand the program’s use and how it was collated.
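A minimal version of the source catalogue suggested above could be sketched in Python with SQLite. The table layout, field names and the sample entry are all hypothetical, not drawn from meiView.

```python
import sqlite3

# An in-memory catalogue of sources; a real tool would use a file on disk.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sources (
        id INTEGER PRIMARY KEY,
        title TEXT NOT NULL,
        notated_by TEXT,
        date_notated TEXT,
        holding_library TEXT
    )
""")
conn.execute(
    "INSERT INTO sources (title, notated_by, date_notated, holding_library) "
    "VALUES (?, ?, ?, ?)",
    ("Example chanson (variant A)", "Unknown copyist", "c. 1520",
     "Hypothetical Library, Shelfmark X"),
)
# A clear tabular layout makes the provenance of each score easy to query.
for row in conn.execute("SELECT title, date_notated FROM sources"):
    print(row)
```

Even this simple structure would let users sort and filter sources by date or holding library, and spot trends in where and when variants were notated.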

Overall, this software marks a trend towards academic music software that answers current and traditional research questions. The program has a long way to go, with potential features such as a database and mathematical analysis making it much more powerful. MeiView currently just displays information; it does not yet become an interactive tool that begins to generate answers to questions independently. Programs such as Mendeley, a reference manager, have their own databases of papers: the software allows you to search the referencing details of papers you have downloaded, identifying each paper in an individual library. The brilliant thing about software like Mendeley is that it allows academics to share annotations with one another, enabling the building of ideas and interdisciplinary collaboration. MeiView could be developed to allow this kind of collaboration and sharing of academic information, by letting individuals add sources, edit details and write up their findings and opinions on which is the original music. There may then come a time when the question of which was the original piece of 15th or 16th century music can be answered based on established facts.

Can our brain directly generate music?

L. J. Rich’s recent article on the BBC discusses a project led by Professor Eduardo Miranda, who has made a living out of “music neurotechnology”. The ‘composer’ (the person whose brain is going to ‘create’ the music) puts on a brain cap with electrodes on the back, which pick up brainwaves from the visual cortex. The brainwaves are fed into a computer, which generates music influenced by the brainwaves received. The composer chooses a visual prompt to concentrate on: one of four moving checkered patterns. These patterns flicker at different rates, stimulating the brain to create a sympathetic electrical signal. The program then uses this sympathetic electrical signal to select from pre-composed musical phrases to complete the composition.
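The selection mechanism described could, in principle, look something like this Python sketch: detect which flicker rate dominates a signal and map it to one of four phrases. The sample rate, flicker rates, phrase names and synthetic signal are all illustrative assumptions; the actual system will be far more sophisticated.

```python
import math

SAMPLE_RATE = 256  # Hz, illustrative
FLICKER_RATES = [6.0, 7.5, 8.5, 10.0]  # Hz, one per checkered pattern (assumed)
PHRASES = ["phrase_a", "phrase_b", "phrase_c", "phrase_d"]

def dominant_flicker(signal):
    """Index of the flicker rate whose frequency component is strongest
    in the signal (a naive single-bin Fourier projection)."""
    powers = []
    for f in FLICKER_RATES:
        re = sum(s * math.cos(2 * math.pi * f * i / SAMPLE_RATE)
                 for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * f * i / SAMPLE_RATE)
                 for i, s in enumerate(signal))
        powers.append(re * re + im * im)
    return max(range(len(powers)), key=lambda i: powers[i])

# Synthetic 'brainwave': a pure sympathetic response to the 8.5 Hz pattern,
# sampled for two seconds.
signal = [math.sin(2 * math.pi * 8.5 * i / SAMPLE_RATE)
          for i in range(SAMPLE_RATE * 2)]
print(PHRASES[dominant_flicker(signal)])  # phrase_c
```

A real EEG signal would be far noisier, so the actual system presumably averages over time and uses more robust signal processing, but the principle of mapping a detected frequency to a musical choice is the same.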