Electronics and Electricity: The Fusion of Science and Art in Sound

“I’m more of an arts/humanities person than a math/science person,” say many high school students when taking standardized tests or choosing a college major (admittedly, I have been guilty of uttering this phrase myself). TV shows such as The Big Bang Theory use scientific characters to poke fun at people in the humanities for their flowery language and inability to hold down a stable job. Laws such as the COMPETES Act and the famous No Child Left Behind Act have determined which educational areas face cuts during economic crises, and one can’t help but notice the bitterness arts advocates and their scientific counterparts hold toward one another when the funding goes to the opposing field. Yet, more recently, people have developed methods to bridge the gap between the two disciplines, raising the question of whether they are more interconnected than most of us assume.


Physics and Music 

Have you ever heard of a Tesla coil? The device was originally conceived to transmit electricity wirelessly. It consists of two coupled coils, each paired with a capacitor (a component that stores electrical energy), connected through a spark gap: “a gap of air between two electrodes that generates [a] spark of electricity” (Dickerson). Enthusiasts have since taken the Tesla coil a step further by synchronizing its electricity with sound, turning it into a light show. Pitch is determined by how many times the air vibrates per second. Each spark from the coil rapidly heats the surrounding air, making it expand and vibrate, so switching the coil on and off at a chosen frequency produces a tone at that frequency. Varying the switching frequency yields different pitches, allowing the Tesla coil to act as a kind of plasma loudspeaker.
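To make the pitch idea concrete, here is a small sketch of the arithmetic behind a “singing” Tesla coil: the interrupter fires spark bursts at the frequency of the desired note. The MIDI-to-frequency formula is standard; the melody and the coil-control details are simplified illustrations, not a description of any particular coil’s electronics.

```python
# Sketch: mapping musical notes to spark-burst rates for a "singing" Tesla coil.
# Each spark burst heats the air; bursts repeated N times per second are heard
# as a tone of N hertz.

def midi_to_hz(midi_note):
    """Convert a MIDI note number to frequency in Hz (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

# To "play" middle C (MIDI note 60), the interrupter would fire spark
# bursts about 261.6 times per second.
melody = [60, 62, 64, 65, 67]  # C, D, E, F, G
spark_rates_hz = [round(midi_to_hz(n), 1) for n in melody]
print(spark_rates_hz)  # the burst rate needed for each note
```

The same formula underlies any digital instrument: pitch is just a repetition rate, whether the repeating event is a speaker-cone cycle or a spark.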

Environmental Science and Music 

Several musicians have been turning to scientific data to compose their works. One such instance took place at the University of Minnesota, where a student and his professor used data sonification (the transformation of quantitative data into non-speech sound) to convert surface temperatures measured from 1880 to 2012 into pitches, aiming to convey the scope of global warming. The lowest pitch the cello can produce represents the coldest year in the record. Furthermore, the mapping was calibrated so that each half step (the smallest interval, or distance, between pitches) corresponded to roughly 0.03 degrees Celsius of temperature increase. Data may be mapped to more than just pitch, however. Composer John Eacott used the speeds of tidal floods to design the rhythmic scheme of one of his works: wherever the data produced a 0, the music contained a rest, whereas a 1 signified a note to be played. Slow, steady tides called for the performer to take the piece at a slow tempo, while more rapidly moving tides signaled an increase in tempo.
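The temperature-to-pitch mapping described above can be sketched in a few lines. The anomaly values below are made-up illustrative numbers, not the actual climate dataset, and the choice of a low C as the base note is an assumption; only the “one half step per ~0.03 °C” rule comes from the piece itself.

```python
# Sketch of pitch-based data sonification: one half step per 0.03 degrees C.

SEMITONES_PER_DEGREE = 1 / 0.03  # the calibration described in the post
BASE_MIDI = 36                   # assumption: coldest year maps to a low cello C

def anomaly_to_midi(anomaly_c, coldest_c):
    """Map a temperature anomaly (deg C) to a MIDI pitch, relative to the coldest year."""
    semitones_up = round((anomaly_c - coldest_c) * SEMITONES_PER_DEGREE)
    return BASE_MIDI + semitones_up

# Hypothetical yearly anomalies, just to show the rising contour:
anomalies = [-0.20, -0.05, 0.10, 0.45, 0.62]
coldest = min(anomalies)
pitches = [anomaly_to_midi(a, coldest) for a in anomalies]
print(pitches)  # warmer years come out as higher notes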

Below is a demonstration of “A Song of Our Warming,” detailed above. The music stops at 3:20, so if you watch the video up to that point, you should be able to gain a general idea of what data sonification is.

Electronic Music

Nowadays, electronic music is associated with using a computer or other recording technology to create mashups of songs or genres or to synthesize voices. Although splicing clips and editing voices may be considered part of the genre of electronic music, there are other methods of creating music which involve computer mapping systems actually composing music through the reading of algorithms. Composers Hiller and Isaacson developed the Illiac Suite (shown below), a piece which employed the mapping system, Illiac to interpret codes set up for pitches, rhythms, and timbres (colors of specific instrument sounds). These codes were then subjected to operating instructions, based on the genre and/or style of music that was intended to be performed. There are advantages and drawbacks to computer mapping systems. Computers read exactly what is on the score of music, preventing articulation errors (such as putting an accent in the incorrect place) from occurring. However, they do not allow for rehearsals of particular sections, and may take away from the artistry or expression a live performer may add to the piece.

Feel free to discuss one or more of the following questions:

Taking into consideration the increasing reliance on machines to carry out the functions that humans normally complete onstage, do you think that technology has the potential to replace performers? Would seeing a computer or a Tesla Coil perform music take away from the expressive side one would hope to see from a musician while watching an opera or a symphony? Should music be left to chance (as scientific music production is left to natural phenomenons such as tide movement), or should it be pre-planned, as it has been in centuries leading up to this point? Should more efforts be made in the education sector to fuse the fields of art and science together?

Scroll to the VERY, VERY bottom of the page (past the Works Cited, until you can’t scroll anymore) or go to the “Electronics and Electricity…” section to leave your comments. Check out the “Help! What Do I Write About?” tab if you have questions about what to discuss musically, or even if you have writer’s block!

Works Cited:

Agent Utah. “The Musical Tesla Coils.” Physics Central. Physics Buzz Blog15 November 2007. Web. 24 July 2015.

Colman, Dan. “A Song of our Warming Planet: Cellist Turns 130 Years of Climate Change Data into Music.” Open Culture. Music, Science, 3 July 2013. Web. 23 July 2015.

Eacott, John. “Flood Tide See Further: Sonification as Musical Performance.” International Computer Music Conference Proceedings (2011): 69-74. Web. 24 July 2015.

Eric Goodchild. “‘Beethoven Virus”‘-Musical Tesla Coils.” YouTube. YouTube, 12 December 2011. Web. 23 July 2015.

Gerzso, Andrew. “Paradigms and Computer Music.” Leonardo Music Journal 2.1 (1992): 73-79. JSTOR. Web. 23 July 2015.

Institute on the Environment, University of Minnesota. “A Song of Our Warming Planet.” YouTube. YouTube, 28 June 2013. Web. 23 July 2015.

UPNA. “UPNA. Suite Illiac.” YouTube. YouTube, 2 November 2012. Web. 23 July 2015.

6 thoughts on “Electronics and Electricity: The Fusion of Science and Art in Sound

  1. Technology has definitely added to the enjoyment of music, but it will never replace the emotion created by listening to a live, human performance.

    P.S. I now see global warming in a different way!


    • I don’t think it’s meant to replace emotional context you’re talking about, it’s simply meant to enhance it. Microphones make it possible for larger audiences, electric guitars, synthesizers, etc. bring in new and unique sounds that expand the combinations one can make with a person/band performing.

      P.S. Even older instruments such as traditional violins, mandolins, etc, can be considered technology.


  2. Benjamin Franklin merged science and art to produce the -Glass Armonica- circa 1760.This involved glasses of various sizes and thickness rotated on a spindle operated by a foot treadle.


  3. I noticed the lower notes go down (towards the ground) on the Tesla Coils while the higher notes go up (towards the sky). Is this something that would occur naturally, or something that the person building it designed?


  4. “There are advantages and drawbacks to computer mapping systems. Computers read exactly what is on the score of music, preventing articulation errors (such as putting an accent in the incorrect place) from occurring. However, they do not allow for rehearsals of particular sections…”

    Speaking as someone who performed in my high school orchestra (although it’s been a while since I last picked up an instrument), I’d say that this system might have helped in terms of note accuracy, but would have taken away from the improvisation aspect as a player tries to keep up with the computer, either from the conductor or the player(s).

    Technology can definitely augment live performances when used correctly, but when overused can turn an otherwise decent song into a mess. When listening to contemporary songs, I can always tell when they’re using synthesizers when the singer’s voice pitch drastically changes frequency, either from extremely high to extremely low and vice versa (something outside the range of natural human abilities).


  5. Tailing off my last comment, this Nova clip from a while back showcases note-correction software (Autotune, etc). With science and technology applied to the music industry like this, is there much room for natural talent anymore? I honestly don’t have an answer…


Leave a Reply

Fill in your details below or click an icon to log in:

WordPress.com Logo

You are commenting using your WordPress.com account. Log Out /  Change )

Google photo

You are commenting using your Google account. Log Out /  Change )

Twitter picture

You are commenting using your Twitter account. Log Out /  Change )

Facebook photo

You are commenting using your Facebook account. Log Out /  Change )

Connecting to %s