Creative MIDI Sequencing Tips and “Sitsiritsit Alibangbang”

A beetle, the subject of the Filipino folk song "Sitsiritsit Alibangbang"

I’ve been working through my folk-song-inspired ideas for new tracks, and this latest one, “Sitsiritsit Alibangbang,” is one I’d always wanted to play with but had no idea how to turn into something electronic and modern.

The song is a silly, quite surreal folk song that makes no sense but is fun to sing to children. The full explanation of the song is on Wikipedia.

The track was born during the 2005 recording sessions for my EP “Kodomo,” when I asked vocalist Yu:Mi Calderon to sing any old Filipino folk song she knew by heart. She sang the first verse of “Sitsiritsit Alibangbang” into the microphone a cappella, with no backing track, no fixed tempo, and no designated key. I figured I might someday use it, though I wasn’t quite sure what I would do with it.

Here’s the a cappella recording that I sat on for 6 years:

And then sometime this year, there was the whole viral YouTube meme making the interweb rounds: Justin Bieber’s track “U Smile” slowed down by 800% using a piece of software called Paul’s Extreme Sound Stretch. The end result was a cinematic soundscape of stretched vowels and syllables becoming looooooooooong wordless epics. I thought it would be cool to use this effect in one of my tracks, but I never got to experiment with it till this week.

I figured that since the a cappella vocal isn’t tied to a specific tempo, I might as well use the effect to stretch it out and create an ambient, dreamy version of the folk song. Nice idea. It didn’t happen that way, though.
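If you want to approximate this step outside a dedicated tool, here’s a rough Python sketch. It uses librosa’s phase-vocoder stretch rather than Paulstretch’s spectral smearing, so the texture comes out less washed-out, and the filenames are just placeholders:

```python
import librosa
import soundfile as sf

# Hypothetical filename for the 2005 a cappella take.
y, sr = librosa.load("sitsiritsit_acapella.wav", sr=None, mono=True)

# "Slowed down by 800%" reads as 8x the length, i.e. a stretch rate of 1/8.
stretched = librosa.effects.time_stretch(y, rate=1 / 8)

sf.write("sitsiritsit_stretched.wav", stretched, sr)
```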

After stretching it out, I realized the best way to play around with the vocal and use it for percussive or rhythmic effects was to slice it up using Propellerheads ReCycle and then drop the REX file into a Dr.Rex sample playback module within Propellerheads Reason.

In ReCycle, I sliced the stretched vocal at random intervals, hence the rough edges in some of the clips.
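ReCycle does this with draggable slice markers and a REX export, but the basic move is easy to mimic. Here’s a hedged stand-in that chops the stretched file at random sample positions (filenames hypothetical):

```python
import numpy as np
import soundfile as sf

y, sr = sf.read("sitsiritsit_stretched.wav")  # output of the stretch step

rng = np.random.default_rng()
n_slices = 16

# Random cut points, standing in for the slice markers in ReCycle.
cuts = np.unique(rng.integers(1, len(y), size=n_slices - 1))
bounds = np.concatenate(([0], cuts, [len(y)]))

for i, (a, b) in enumerate(zip(bounds[:-1], bounds[1:])):
    sf.write(f"slice_{i:02d}.wav", y[a:b], sr)

# Cutting at arbitrary samples (not zero crossings) is exactly what leaves
# the rough, clicky edges on some slices -- which suited the percussive goal.
```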

Sitsiritsit sliced up in ReCycle

With Reason deployed and the sample loaded, I came to an early decision point: what BPM? I went with the hip-hop favorite, 91 beats per minute, loaded a UK underground drum kit into Redrum, and began sequencing a light beat I could use as a metronome for everything else. After a basic kick and snare, I sequenced a run of 16th-note hi-hats, which I realized could be the basis of my rhythmic vocal.
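For anyone who wants the pattern outside Reason, here’s a sketch of that one-bar, Redrum-style 16-step grid written out as a standard MIDI file with Python’s mido library. The exact hits are a plausible guess, not a transcription of my pattern:

```python
from mido import Message, MetaMessage, MidiFile, MidiTrack, bpm2tempo

TPB = 480          # ticks per beat
STEP = TPB // 4    # one 16th note
BPM = 91           # the hip-hop standby

# One bar, 16 steps: a light kick/snare plus running 16th-note hi-hats.
# General MIDI drum notes on channel 9: 36 kick, 38 snare, 42 closed hat.
pattern = {
    36: [1, 0, 0, 0,  0, 0, 0, 0,  1, 0, 1, 0,  0, 0, 0, 0],
    38: [0, 0, 0, 0,  1, 0, 0, 0,  0, 0, 0, 0,  1, 0, 0, 0],
    42: [1] * 16,
}

mid = MidiFile(ticks_per_beat=TPB)
track = MidiTrack()
track.append(MetaMessage("set_tempo", tempo=bpm2tempo(BPM), time=0))

events = []
for note, steps in pattern.items():
    for i, hit in enumerate(steps):
        if hit:
            events.append((i * STEP, Message("note_on", note=note,
                                             velocity=90, channel=9, time=0)))
            events.append((i * STEP + STEP // 2,
                           Message("note_off", note=note, velocity=0,
                                   channel=9, time=0)))

now = 0
for abs_time, msg in sorted(events, key=lambda e: e[0]):
    msg.time, now = abs_time - now, abs_time  # mido wants delta times
    track.append(msg)

mid.tracks.append(track)
mid.save("redrum_sketch.mid")
```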

I copied the MIDI notes I used for the hi-hats onto the Dr.Rex vocal track and then used one of my favorite techniques: applying PITCH>RANDOMIZE and constraining it to within 2 octaves. The notes were then sufficiently scattered around the spectrum. Listen to the excerpt:
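Reason does this in one click, but conceptually the move is tiny. A sketch of the idea (the base note is an assumption; the 2-octave window is from above):

```python
import random

# The 16 hi-hat hits copied onto the Dr.Rex lane all sit on one pitch.
# In Dr.Rex, each MIDI note triggers a different slice, so randomizing
# the pitch really randomizes WHICH vocal slice fires on each 16th note.
base_note = 60          # assumed centre; slices map chromatically from here
hat_rhythm = [base_note] * 16

# Pitch randomize, constrained to a 2-octave window:
scattered = [n + random.randint(-12, 12) for n in hat_rhythm]
print(scattered)
```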

The trick was to find a random part that made sense, isolate it, and then duplicate it across other instruments to create richer textures. I went with a bell-like synth so that it had some ethnic, percussive quality to it. Then I entered each note by hand, copying the pitches of the rhythmic vocal phrase I had chosen. Listen to the excerpt:
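Doubling a phrase this way amounts to reusing the same note events on a second track that points at a different sound source. A minimal mido sketch, with a made-up phrase and General MIDI patches standing in for the Dr.Rex vocal and the bell synth:

```python
from mido import Message, MidiFile, MidiTrack

TPB = 480  # ticks per beat

# Made-up stand-in for the isolated phrase: (MIDI note, start, duration).
phrase = [(60, 0, 120), (67, 120, 120), (63, 240, 120), (68, 480, 240)]

def phrase_track(notes, program, channel):
    """Play the same phrase on a given General MIDI program."""
    track = MidiTrack()
    track.append(Message("program_change", program=program,
                         channel=channel, time=0))
    events = []
    for note, start, dur in notes:
        events.append((start, Message("note_on", note=note, velocity=96,
                                      channel=channel, time=0)))
        events.append((start + dur, Message("note_off", note=note, velocity=0,
                                            channel=channel, time=0)))
    now = 0
    for abs_time, msg in sorted(events, key=lambda e: e[0]):
        msg.time, now = abs_time - now, abs_time  # convert to delta times
        track.append(msg)
    return track

mid = MidiFile(ticks_per_beat=TPB)
mid.tracks.append(phrase_track(phrase, program=52, channel=0))  # Choir Aahs
mid.tracks.append(phrase_track(phrase, program=14, channel=1))  # Tubular Bells
mid.save("doubled_phrase.mid")
```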

The random notes suddenly presented several opportunities for me chord-wise. They seemed to work best in the C major scale, with an option to go to A minor. But because of a stray G# note in the vocal/synth rhythm, I ended up with this pattern: C – F minor – D minor – F minor (alternate: A minor – F minor – D minor – F minor). Working within that framework, I added the bass parts, the guitar strums, and everything else. Listen:
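You can sanity-check why that stray G# pulls the progression toward F minor with a little pitch-class arithmetic: of the chords near C major and A minor, only F minor (F, Ab, C) contains Ab, the enharmonic twin of G#. A quick sketch:

```python
# Pitch classes: C=0, C#=1, ..., B=11
NOTE = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
        "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def triad(root, quality):
    """Pitch-class set of a major or minor triad."""
    r = NOTE[root]
    third = 4 if quality == "maj" else 3
    return {r % 12, (r + third) % 12, (r + 7) % 12}

stray = NOTE["G#"]  # the stray note from the randomized vocal phrase

# Candidate chords in or near C major / A minor
candidates = [("C", "maj"), ("D", "min"), ("E", "min"), ("F", "maj"),
              ("F", "min"), ("G", "maj"), ("A", "min")]

for root, quality in candidates:
    if stray in triad(root, quality):
        print(f"{root} {quality} contains G#/Ab")
# Only F min fires, which is why the borrowed iv chord slots into the
# otherwise C-major progression.
```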

End result? We have a track sung 6 years ago that was resurrected, stretched, and randomized until it gave birth to a pretty ordered structure (underlying chords included), and then prettified with beats and bass.
