Thoughts on Sound Programming

Artful Design Reflection #4

Jenny Wang
3 min read · Oct 12, 2020

I was watching a reality music show a few days ago in which a musician self-mockingly said that she can’t play most of her own music on the piano because it’s too hard. She is so used to writing music on a computer, which lets her write things that would be impossible to play on live instruments (she is a great pianist, but not at a master level) yet are perfectly possible on a computer. What Ge said about “design to the medium” reminded me of this funny little anecdote.

A new medium frees us to do what the old medium constrained. The music and emotion might stay the same, but the technology can change.

The piano is a technology. It was invented around 1700 as a way to structure sound. It was revolutionary at the time, letting people create sounds and sequence them in countless different ways. Programming music on a computer is a different technology, a more recent one that is still evolving.

With the piano, we know the constraints. For example, a person with tiny hands cannot stretch across a piano octave. The tone is static. There is a set range of sounds that can be played… With music programming, small hands are no longer a problem! So what are the new constraints?
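To make this concrete, here is a minimal sketch in Python (assuming only numpy and Python’s built-in wave module, nothing from the course): it synthesizes a single chord spanning five octaves, far wider than any hand could stretch, and writes it to a WAV file.

    # Synthesize one "impossible" chord: five notes spanning five octaves,
    # far beyond the reach of a single hand on a real piano.
    import wave
    import numpy as np

    SAMPLE_RATE = 44100
    DURATION = 2.0  # seconds

    # MIDI note numbers for C2, E3, G4, B5, D7.
    midi_notes = [36, 52, 67, 83, 98]

    t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
    chord = np.zeros_like(t)
    for note in midi_notes:
        freq = 440.0 * 2 ** ((note - 69) / 12)  # MIDI note -> Hz (A4 = 440)
        chord += np.sin(2 * np.pi * freq * t)

    # Normalize, add a gentle decay so it rings like a struck string,
    # and convert to 16-bit PCM samples.
    chord *= np.exp(-1.5 * t) / len(midi_notes)
    pcm = (chord * 32767).astype(np.int16)

    with wave.open("impossible_chord.wav", "wb") as f:
        f.setnchannels(1)   # mono
        f.setsampwidth(2)   # 16-bit samples
        f.setframerate(SAMPLE_RATE)
        f.writeframes(pcm.tobytes())

No fingers, no stretch, no physical limits: the computer plays it perfectly every time.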

Well, for one, I’m an awful programmer, so I find it hard to learn how this tool works.

The piano is not an easy tool either. I remember spending hours as a little kid playing the same song over and over to pass piano certificate exams, making sure I never hit a wrong note. So I wasn’t expecting programming to be easy; practice makes perfect.

But the downside of programming compared to the piano is that there are fewer visual and physical cues. With the piano, when you see where a key is located, you can roughly tell what it will sound like, and you can directly touch the keyboard to hear the sound. This interaction creates a mysteriously wonderful chemistry between the instrument and the musician. That’s why a lot of people give their instruments names and consider them dear friends they can talk to. When emotions run high, they improvise at the piano to express their feelings. These instruments have unique characters. For example, every cello has a gender, and they all sound different.

This live visual and physical interaction is somehow lost in music programming. When reading or typing a lot of code, I feel like a solo astronaut exploring the outer universe, discovering new space, but missing the human interaction, culture, and stories back on Earth. Code can do infinite things, but if one fully immerses in this infinity and gives up the physical engagement of music, then this infinity becomes the constraint of music programming. It becomes just lines of code. It becomes something mechanical and powerful, but untouchable. It becomes intimidating.

I want to end my thoughts today with a question to keep thinking about: how can we make programming music more like playing an instrument?
