Using HTML, CSS, and JavaScript, I built this website from scratch. I developed my skills in web programming and design through this project by researching on the web, using modern AI tools, and through plenty of trial and error. Debugging was a major part of the process.
I used templates I found on html5up.net as a starting point and then heavily customized them for my use. The templates were under the Creative Commons Attribution 3.0 license, and I made sure to credit the original creators. I also used jQuery for some of the interactive elements, Isotope for the portfolio grid, and Poptrox for the lightbox gallery.
i hope love calls to you like the sun calls the flowers to grow
(the audio is a live recording of a performance from Oct '23 called "Ambient Night" — the video was made afterwards as a complement and a visual interpretation)
The purpose of Ambient Night is to let the mind settle while listening to soundscapes, sitting or lying down. Eliott's part in the night was as a performer and mixer in a collaborative ambient performance, with Eliott on EDI and Eurorack modules while processing the other performers (Tan Monsoreenusorn on EWI, Nick Smucker on guitar) on Ableton Live. Eliott's act was the first in a 5-act show. The recording you are currently listening to is of that act, split into 3 sections, each with its own theme:
|| SECTION I: no drums (7m 16s) — SECTION II: Das Loblied (4m 30s) — SECTION III: LoSS HYPNoSIS (7m 5s) ||
unedited, by Ella Faye
Mar '25
album
I produced, mixed and mastered an album with 8 songs for singer-songwriter Ella Faye in 48 hours.
The overarching sentiment of the album is nostalgia: lo-fi elements, light editing, additional layered recordings and other minimal effects give the listener the feeling Ella Faye now has towards the songs she recorded as a child. Ella recorded the songs 5-10 years ago, during her tween/teenage years, and she dug up bounces of the songs (i.e. vocals and instrumentals already mixed together into single files) for me to produce, mix and master.
After discussing the nature of the project and how she relates to the songs now, I made some recommendations for the mix, and she readily agreed to try them. She wanted the songs to sound coherent and blended, playing continuously as one unit. I recommended we make the whole album sound as if played from a cassette, framed by a poem at the beginning and end as a synopsis of her present relationship to the songs.
The songs weren't well mixed to begin with, but I worked with what I had. Using Ableton stock plugins, iZotope Ozone 9 and iZotope Insight 2, I made the songs sound coherent, providing a narrative for the listener. Ella continuously gave her input as I made tweaks and changes. She sent me recordings of herself reciting the poem she wrote for the album's intro, and I edited and processed them to sound eloquent and mystical. The rest of the album sounds like it's played on a dusty cassette, everything from the "play" click, to the little lo-fi elements throughout (i.e. hiss and clicks, startup pitch bend, compression, etc.), to the final "stop" click. The final commentary on the last track is meant to sound like both the current Ella and the younger Ella coming to the same realization.
DJ set w/ music video: Hip Hop & Electronic
Dec '24 - Jan '25
DJ & video
I took a DJ class in my last semester at Berklee, and I made a music video from the lyrics of the songs I spun on the final set of the class. I performed this short set with two analog turntables, a mixer, and two control vinyls for playback on Serato.
My professor recorded the set, and I edited the video to match the music. I used the lyrics from the songs I spun to create a narrative for the video.
Here's the setlist:
0:00 R.A.P. Ferreira - listening
1:16 Leet, NEWSENSEi - Smoke the Technology
1:53 Daedelus - Special Re: Quest
3:04 R.A.P. Ferreira, Eldon Somers - rejoice
4:52 Thook - RUDE
5:46 Mochakk - Jealous
6:29 Mad Keys - Bear Good Fruit
A People's Tale
Oct - Dec '24
installation
"A People's Tale" is an immersive sound installation (composed in 4 channels of audio, with light visually evocative visual elements) describing the recent political experience of a people under an authoritarian regime. The installation sonically relates the political struggles of the people with the rainforest of their land. The work is split into three sections, each with drastically different moods:
|| I. The Roots of Silence — II. When Silence Breaks — III. When The Streets Rise ||
(The name of the people is not mentioned to avoid persecution by the regime)
generative microcomputer synths
Aug - Dec '24
audio programming
I made a set of generative microcomputer digital synthesizers using RNBO (from Max/MSP) for the software and Raspberry Pis for the hardware. The synths can be controlled by a MIDI controller or by OSC messages over the network, and the stereo audio can be output via an analog or Bluetooth connection.
I created different programs that can be installed on the Raspberry Pis to generate different sounds, most of which are generative: the user sets parameters that control how the synth uses randomness to generate notes over time on its own. The user also has manual control over the timbre of the sound being generated.
I also created one program meant to synchronize looped playback of audio files across multiple Raspberry Pis at once. This was mostly designed for multichannel sound work (i.e. work with more than two speakers) or installations where the multichannel sound needs to be played back in sync.
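For a sense of how that kind of synchronization can work, here is a minimal Python sketch of one common approach: a leader machine broadcasts a shared start time over OSC, and each Pi waits for that moment before starting its loop. The hosts, port, and /sync/start address here are illustrative assumptions, not the actual program's design, and it presumes the Pis' clocks are already aligned (e.g. via NTP).

```python
import time
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

OSC_PORT = 9000
PI_HOSTS = ["10.0.0.11", "10.0.0.12"]  # hypothetical Pi addresses

def start_all(delay_s=2.0):
    """Leader: tell every Pi to start looping at the same wall-clock moment."""
    start_at = time.time() + delay_s  # shared timestamp slightly in the future
    for host in PI_HOSTS:
        # sent as a string: OSC floats are 32-bit, too coarse for a unix timestamp
        SimpleUDPClient(host, OSC_PORT).send_message("/sync/start", str(start_at))

def on_start(address, start_at):
    """Each Pi: sleep until the shared moment, then trigger its loop."""
    time.sleep(max(0.0, float(start_at) - time.time()))
    # ...start looped playback here (e.g. the patch's transport)...

def listen():
    """Each Pi runs this to wait for the leader's start message."""
    dispatcher = Dispatcher()
    dispatcher.map("/sync/start", on_start)
    BlockingOSCUDPServer(("0.0.0.0", OSC_PORT), dispatcher).serve_forever()
```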
Chameleon (2024)
Aug - Nov '24
sound for film
I was the sound supervisor for the short film Chameleon (2024), directed by Taylor Morales. I worked with the director to create a sound design that complemented the film's visual style and narrative. I was in charge of the sound design and all audio post-production for the film, including audio editing, recording Foley and ADR, cleaning location sound, and mixing and mastering, all on Pro Tools, with Ableton Live for some audio processing.
Using sound design and mixing techniques, I intensified moments of tension and suspense in the film, and I amplified the characters' movements and actions with Foley and ADR.
I worked within the director's vision to execute precisely what she had in mind, pushing the sonic boundaries with some creative liberties.
Aurora, by Ella Faye
Sep '24
single
I was the mastering engineer for Ella Faye's single Aurora. The song is an intimate, soulful performance for piano and vocals. Using my own mastering setup with Logic Pro X, iZotope Ozone 9 and spectral scopes, I mastered the stereo bounce provided by the producer. I did some light EQing, compression and stereo widening of different frequency bands for a more balanced, dynamic sound. I reduced the scratchiness of the mix and brought out the vocals in the most powerful moments without diminishing the intensity of the piano, using Joni Mitchell's and Aretha Franklin's music as references.
audio plug-ins w/ Csound
Jun - Aug '24
audio & GUI programming
I created a set of audio plug-ins with Csound, including a synthesizer, an effects processor (with a filter, a delay and a noise generator), and a sampler (with speed/pitch change for each sample and a global delay).
Csound is a sound synthesis and signal processing language. The GUI was created with Cabbage, a code editor specifically designed for building GUIs for Csound. I was introduced to Csound in Dr. Richard Boulanger's class at Berklee College of Music, and I developed my audio programming and signal processing skills through this project. Using the documentation and the Cabbage forum, I learned how to make audio plug-ins that can be used in any DAW.
my first website
Apr - Aug '24
web programming
This was my first website, built using nothing more than HTML, CSS, and JavaScript. I did not use a template or AI tools for this project; it was fully hand-coded with tons and tons of research and trial and error.
You'll mostly find some of my YouTube videos posted there and some of my poetry. There are also some descriptions of my journey and brief thoughts on music and art.
I was the sound designer for the short film Touch With Eyes (2024), directed by Josephine Simonian. I worked with the director to create a sound design that complemented the film's visual style and narrative. I was in charge of the sound design and audio editing for the film, including Foley and cleaning dialog/location sound, on Pro Tools for 5.1 surround sound. I passed my work on to the mixer, whose task was to blend my sound and edits with the composer's music in 5.1, along with a stereo mixdown.
Over conference calls, I worked closely with the director, the composer and the mixer to execute the director's vision. The project was kept well organized on Pro Tools for 5.1 surround sound, and for more specific sound design edits, I used Ableton Live.
I worked on this project through a sound for film practicum at Berklee College of Music under the guidance of professor Brian McKeever.
build a pair of studio monitors
Jan - May '24
audio tech
I built a pair of Dayton Audio BR-1 studio monitors from a kit I bought online. The passive monitors have a 6.5" woofer, a 1.125" tweeter, and a PCB that applies a basic crossover to each driver. Once I built the monitors, I connected them to a stereo receiver with power amps, and they sound phenomenal. These monitors are a great investment for any audio engineer or producer.
These monitors were a project in a class at Berklee College of Music under the guidance of professor Michael Abraham.
I built an Austin Model 1 ribbon microphone from a kit I bought online. The microphone has an extremely thin, fragile, corrugated ribbon suspended between two very strong magnets glued a precise distance apart. Sound waves move the ribbon back and forth in the magnetic field, which induces a tiny electromagnetic signal; a transformer then steps that signal up to a usable voltage. The signal comes out of the microphone without the need for phantom power (48V).
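For the curious, the voltage the ribbon generates follows the textbook motional-EMF relation; this is an idealized simplification, and the mic's actual output also depends on the ribbon's mass, damping, and the transformer's turns ratio:

```latex
\varepsilon = B \, l \, v
% epsilon: voltage induced across the ribbon
% B: magnetic flux density between the magnets
% l: length of ribbon in the magnetic field
% v: velocity of the ribbon as sound waves move it
```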
The microphone was a project in a class at Berklee College of Music under the guidance of professor Michael Abraham.
This video is an audio visualization of an ambient song I created using a Max/MSP patch I made, with some synchronized changes made in Ableton Live. I edited the video on iMovie.
The Max/MSP patch I made is called cRaYoN SyNTH. It is a generative additive synthesizer that I use to create music and soundscapes. "Generative" means the notes are generated by the synthesizer based on the parameters I set. The logic behind the note generation is complex: chance and probability algorithms prioritize some notes over others, the rhythm of the harmony's note generation slowly shifts between rhythmic divisions over time, and the bass generates a new note every 1.5 measures. The timbre of the harmony also subtly shifts over time to seamlessly modulate up a perfect 5th in key every 2.5 minutes, at which point the bass's note generation boundaries also shift up a perfect 5th.
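To make that generative logic concrete, here is a rough Python sketch of the kind of process described above. The actual patch is Max/MSP; the scale, weights, and tempo here are illustrative assumptions, not the patch's real values.

```python
import random

SCALE = [60, 62, 64, 67, 69]   # MIDI pitches (C major pentatonic), assumed
WEIGHTS = [4, 1, 2, 3, 1]      # chance/probability weights favoring some notes

def generate(duration_s=600.0, bpm=60.0, beats_per_measure=4):
    """Yield (time_s, voice, midi_note) events for one generative run."""
    beat = 60.0 / bpm
    bass_period = 1.5 * beats_per_measure * beat  # bass note every 1.5 measures
    t, next_bass = 0.0, 0.0
    while t < duration_s:
        # up a perfect 5th (7 semitones) every 2.5 minutes; a real patch
        # would wrap this back into a playable range
        transpose = 7 * int(t // 150)
        yield (t, "harmony", random.choices(SCALE, weights=WEIGHTS)[0] + transpose)
        if t >= next_bass:
            yield (t, "bass", random.choice(SCALE) - 24 + transpose)
            next_bass += bass_period
        t += beat  # simplification: one harmony note per beat

for event in generate(duration_s=20.0):
    print(event)
```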
A basic reverb and delay were added to the mix in Ableton Live to create a more immersive experience. At certain points synchronized with the video, little clicks and pings are added for shock, intended as an unexpected unease to quickly cruise back into calmness from, which has parallels with my own life. The video is a slow-motion recording I took alongside the Charles River in Boston, where I found a cool wooden platform and placed a flower on it. I synchronized the movement of the camera with my breath. Afterwards, I decided to make each motion toward the flower a different memory to revisit.
I made this project under the guidance of professor Neil Leonard, with feedback from my peers in the Berklee Interdisciplinary Arts Institute.
clip from The Grand Budapest Hotel (2014)
Feb '24
sound re-design
I was the sound designer for this clip from The Grand Budapest Hotel (2014). I re-designed the sound of the clip to make it more immersive and engaging. During the ADR session, I recorded my own voice for most of the dialog and a friend's voice for the main character. I edited all the audio on Pro Tools with professional organization to pass on to the mix engineer.
I worked on this project through a sound for film practicum at Berklee College of Music under the guidance of professor Brian McKeever.
cRaYoN SyNTH 1.0
Aug '23 - Jan '24
programming
The cRaYoN SyNTH 1.0 is a generative additive synthesizer that I use to create music and soundscapes. "Generative" means the notes are generated by the synthesizer based on the parameters I set. The logic behind the note generation is complex: chance and probability algorithms prioritize some notes over others, and the rhythm of the note generation can slowly shift rhythmic divisions over time. The timbre of the harmony can also subtly shift over time to seamlessly modulate in key.
This project is a personal project I created to explore how minimal changes in the timbre and rhythm of soundscapes affect the psyche when key changes and rhythmic shifts are ever so subtle.
The synth can also just be used regularly as an additive synthesizer, bypassing all the generative features.
you're speshoL
Nov - Dec '23
audio-visual 8-ch composition
This audio-visual 8-channel surround sound composition is a piece I created in collaboration with Alice Cohen. We started by playing around in her room on keys and guitars going into Ableton and recording/layering the good ideas. We then took the project into a room with 12 speakers and 2 subwoofers in it and made use of 7 of the speakers and 1 subwoofer (for a total of 8 channels). We mixed what we already had in the project across the 8 channels and added more elements as we saw fit. We took turns as the producer; while one was listening in the space and shooting out ideas, the other would implement them. After two sessions, we were both satisfied with the result and decided the piece was finished. I then made an audio-reactive Jitter (Max/MSP) patch to control the visuals of the piece, incorporating eerie found footage with TV static and grainy videos. Then I showed Alice how to control the patch, and she performed the visual aspect of the piece in a live setting.
We worked on this project through a multichannel course at Berklee College of Music under the guidance of professor Lee Gilboa.
Modular Ensemble
Aug - Dec '23
performance (EDI)
I was one of six sound design musicians in a performing ensemble called Modular Ensemble. We rehearsed and performed with Eurorack modules in a 4-channel surround sound space. We each created our own patches on the Eurorack modules and routed the stereo signal to our own DAWs, where we applied our own light processing/effects and mixed the signal into the 4-channel surround sound space after cueing it on our headphones. For the most part, we improvised off of each other's ideas in the surround sound space, listening intently to each other's sounds and responding to them with our own.
Every week for 14 weeks we performed a 4-act show, each act lasting about 15 minutes, each with pre-discussed themes. Improving our skills in sound design, improvisation, intent listening, and performance every week, we were able to create a thoroughly unique and engaging experience for the audience by the last week.
As artists, we created a strong bond with each other in ways words fail to describe; it was impossible to distinguish who was who in the mix, considering everyone was designing sound and developing their role in the space at different points in time. It was a true experience of unity and collaboration.
This project was developed through a modular ensemble course at Berklee College of Music under the guidance of professor Matthew Davidson.
LoSS HYPNoSIS
Nov '23
8-ch composition
This 8-channel surround sound composition titled LoSS HYPNoSIS is a piece I created that gives the sense of being hypnotized to experience a state of loss. Although loss is quite a profound experience and may have a bad connotation, it can also be beautiful; the loss of a bitter grudge, a painful memory or a weakening prejudice is liberating. Whether good or bad, loss can be quite painful, so this piece is meant to ease and hypnotize the listener into accepting it, if the listener so chooses to listen.
I developed most of the soundscape with Eurorack modules, recorded and mixed across the 8 channels (7 speakers and 1 subwoofer) in Ableton Live. I layered one part over the next to create a sense of depth while preserving the simplicity of the emerging idea. I mixed some parts in the surround sound space to lightly whirl around the listener, while other parts were panned to the center or to specific channels. At the peak of the piece, one part is designated for the subwoofer, vibrating the room and the listener's body to create a sense of weight. Although a stereo mixdown of the piece is available, the piece is best experienced in a surround sound space.
This piece was developed through a multichannel techniques course at Berklee College of Music under the guidance of professor Lee Gilboa.
Ambient Night
Oct '23
performance & mixing
(the audio is a live recording of a performance from Oct '23 called "Ambient Night" — the video was made afterwards as a complement and a visual interpretation)
The purpose of Ambient Night is to let the mind settle while listening to soundscapes, sitting or lying down. Eliott's part in the night was as a performer and mixer in a collaborative ambient performance, with Eliott on EDI and Eurorack modules while processing the other performers (Tan Monsoreenusorn on EWI, Nick Smucker on guitar) on Ableton Live. Eliott's act was the first in a 5-act show. The recording you are currently listening to is of that act, split into 3 sections, each with its own theme:
|| SECTION I: no drums (7m 16s) — SECTION II: Das Loblied (4m 30s) — SECTION III: LoSS HYPNoSIS (7m 5s) ||
relax.ing.c
Jul - Aug '23
audio programming
I developed a program called relax.ing.c written in C (a general-purpose programming language) that allows a user to generate a random playlist of sounds from a database of audio files, each meant to be relaxing and soothing. When the user selects a playlist in the program, the sounds are played in a random order, with fade-ins and fade-outs to create a smooth transition between each sound. The playlists include: birds, rivers, thunderstorms, jazz, ambient and more.
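To show the core idea, here is a short Python sketch of the shuffle-and-crossfade logic for readability (the real program is written in C); the playlist names, file paths, and fade length are illustrative assumptions, and the audio I/O itself is elided.

```python
import random

PLAYLISTS = {
    "birds":        ["birds_01.wav", "birds_02.wav", "birds_03.wav"],
    "thunderstorm": ["storm_01.wav", "storm_02.wav"],
}
FADE_S = 3.0  # seconds of crossfade between consecutive sounds

def play_playlist(name):
    """Play one playlist's files in random order with smooth crossfades."""
    files = random.sample(PLAYLISTS[name], k=len(PLAYLISTS[name]))  # shuffle
    for i, path in enumerate(files):
        fade_in = 0.0 if i == 0 else FADE_S             # first sound starts dry
        fade_out = 0.0 if i == len(files) - 1 else FADE_S
        print(f"playing {path} (fade in {fade_in}s, fade out {fade_out}s)")
        # ...audio I/O elided: ramp gain 0->1 over fade_in, play the file,
        # then ramp 1->0 over fade_out while the next file's fade-in begins...

play_playlist("birds")
```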
The program was developed through a programming in C course at Berklee College of Music under the guidance of professor Akito van Troyer.
audio-visual instruments (for VJing, etc.)
May - Aug '23
visual programming
Here's a set of audio-visual instruments I made using Jitter on Max/MSP in the summer of 2023. Some of them are audio-reactive, and they all have elements that can be manually altered with a MIDI controller. These are great for live performances where a projector or a screen monitor is involved.
The .maxpat files (and any additional files needed) are on my GitHub repo:
You're obsessed with your craft, and it NEEDS to get done. Maybe it could wait till the morning, or maybe you could plan out the craft so you can get good sleep.
But the sporadic beat keeps pounding tonight, and you can't stop asking yourself the question: sLeeP oR CoFFee ?
The longer you wait, the more the answer will default to sleep. And this bugs you.
You drank coffee moments before the midnight bell tolled, so now you have to stay up all night... AGAIN. This isn't the first time this week...
Your mind needs to dream, to recall beautiful moments of your recent life, but that cup of coffee is distorting them all.
Are you asleep dreaming, or awake hallucinating? Whether it's a dream or a hallucination, this video is a result of continuous sleep deprivation. Both literally and allegorically.
(The stereo audio is a 3-section experimental piece I created in Ableton Live by intentionally chopping up and layering granular-processed samples. I developed the audio through a sound design course at Berklee College of Music under the guidance of professor Matthew Davidson.
The video was made afterwards as a visual interpretation of the audio. I used Jitter on Max/MSP to create a complex audio-reactive visuals patch, which creates the intricate visuals by audio-reactively mixing my own footage from over the years with generated visuals. I developed the video through a Jitter course at Berklee College of Music under the guidance of professor David Cardona.)
FiRe
Feb - May '23
audio-visual EDI program
FiRe is an audio-visual program of a campfire inside a cave decorated with handprint cave paintings on a starry night.
Each visual element has a sound associated with it. The campfire has adjustable random parameters that control its behavior, mimicking a real campfire. The sound of the campfire is harmonious and soothing, synchronized with the flare of the fire. The stars in the night sky are also animated, with a calm pad synth sound synchronized with their flickering. The cave paintings are animated as well, using live input as the sound source for their audio-reactive animation.
The program can be used as a performance tool and/or a visual tool for live performances. The program can be controlled with MIDI and OSC messages.
The program (a Max/MSP patch) was developed through a programming in Max/MSP course at Berklee College of Music under the guidance of professor Matthew Davidson.
Ableton Instrument & Drum Racks
Jan - May '23
sound design
I made a collection of Ableton Live Instrument and Drum Rack presets for music production. Through the rack presets I developed, I gained a better understanding of sound design and Ableton stock plugins. Some of the instrument rack presets include pads, leads and basses, while the drum racks include synthesized drums. Each rack preset has adjustable parameters to control the sound of the instrument or drums.
The collection was developed through a sound design on Ableton Live course at Berklee College of Music under the guidance of professor Jennifer Hruska.
clip from Gravity (2013)
May '23
sound re-design
I re-designed the sound of a clip from the movie "Gravity" (2013). The clip was of a scene where the main characters float in space making repairs to the space station. I re-created the sounds of the original film using synthesis and processed samples. I also re-recorded the dialog in an ADR session, using my own voice and friends' voices. I edited, mixed and mastered all the audio on Pro Tools.
I worked on this project through a sound re-design course at Berklee College of Music under the guidance of professor Ryan Page.
clip from Journey (video game)
Apr '23
sound re-design
I re-designed the sound of a clip from the video game "Journey" (2012). The clip was of a scene where the main character runs through the desert and climbs to the top of a metal platform to activate a magical event. I used synthesis and my imagination to create the sound of the clip. I edited, mixed and mastered all the audio on Pro Tools.
I worked on this project through a sound re-design course at Berklee College of Music under the guidance of professor Ryan Page.
Motorola Droid X ad
Nov '22
sound re-design
I re-designed the sound of a Motorola Droid X advertisement. I used synthesis, processed samples, and my imagination to create the sounds for the ad. I edited, mixed and mastered all the audio on Logic Pro X.
I worked on this project through a sound re-design course at Berklee College of Music under the guidance of professor Ryan Page.
Glorious Day
Feb '19
studio recording (bass)
This video is of a studio recording I did in high school with my band, of a song called Glorious Day. We performed this song regularly, and it was always a crowd favorite. I was the band's bassist for years, and the drummer and I had developed a strong feel for the song together.
The mentor for the band, Chris Henderson of Hendyamps, brought us to his studio to record a music video. We played the song as we always did and had a quick and successful session.