S&E: Experiment Diary 5

After countless trials and errors, I’ve come to a decision: it’s time to let go of the microphone, at least for now. While my initial goal was to create an audio system that captured and amplified real-time sound, the technical challenges, missing parts, and limited time have led me to rethink my approach. Instead, I’ve simplified the project and shifted focus to something achievable within my current constraints.

To regain some momentum, I decided to start small. Using Arduino, I programmed a simple audio output. The idea was to generate a fixed tone that could be output directly to the speaker via the PAM8403 amplifier. This approach eliminated the need for complex signal processing or external inputs and gave me a basic working system. While far from my original vision, it felt good to hear clean, consistent sound from the speaker.
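
For the record, the fixed-tone version is only a few lines of Arduino code. Here is a minimal sketch of the idea - the pin number and frequency are placeholders for illustration, not necessarily my exact wiring:

  // Minimal fixed-tone sketch (assumed pin and frequency, for illustration)
  const int TONE_PIN = 9;   // a PWM-capable pin feeding the PAM8403 input

  void setup() {
    tone(TONE_PIN, 440);    // start a continuous 440 Hz square wave
  }

  void loop() {
    // nothing to do here - tone() keeps running in the background
  }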


Building on this foundation, I incorporated an LM393 sound sensor. This module detects sound levels and outputs a digital signal when the threshold is exceeded. With the Arduino, I created a simple circuit where the sensor detects sound and triggers a pre-programmed audio tone. Essentially, the system reacts to sound by playing a specific, fixed sound.

Here’s how it works:

  1. The LM393 detects sound and sends a digital HIGH signal to the Arduino.
  2. When the Arduino receives this signal, it outputs a predefined tone via PWM.
  3. The PAM8403 amplifier then drives the speaker to produce the sound.

This setup is straightforward yet interactive, and while it’s not the real-time microphone-based system I initially envisioned, it serves as a functional proof of concept.
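
In code, the interaction boils down to something like the minimal sketch below; the pin numbers, tone frequency, and hold-off delay are placeholders rather than my exact values:

  const int SENSOR_PIN = 2;   // digital output of the LM393 sound sensor module (assumed pin)
  const int TONE_PIN   = 9;   // PWM pin driving the PAM8403 input (assumed pin)

  void setup() {
    pinMode(SENSOR_PIN, INPUT);
  }

  void loop() {
    if (digitalRead(SENSOR_PIN) == HIGH) {   // sensor threshold exceeded
      tone(TONE_PIN, 880, 300);              // play a fixed 880 Hz tone for 300 ms
      delay(300);                            // crude hold-off so one clap triggers one beep
    }
  }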

Plans for Upgrading

While this simplified project is a step forward, it’s clear that the real potential lies in going back to the drawing board with the proper tools and components. With the upcoming Christmas holiday, I plan to return to China, where I’ll have access to a wider range of parts and resources to rebuild and upgrade the system.

My goal is to refine this project during Element 2 and produce a fully functional, exhibition-ready piece. The key improvements I’ll focus on include:

  • Reintroducing the microphone with proper amplification and signal processing.
  • Exploring more advanced sound manipulation techniques using the ADAU1701 DSP.
  • Improving the overall stability and functionality of the system.

S&E: Experiment Diary 4

After several failed attempts with my setup, I decided to try something new—integrating the Bela Board into the signal chain. Bela, with its superior audio processing capabilities and real-time performance, seemed like the perfect candidate to handle the microphone signal and drive the PAM8403 amplifier. But, as much as I wanted this to work, I quickly realized that Bela might not be my cup of tea.

This time, I started by powering the MAX9814 module with a stable 12V supply, regulated through an LM7812 voltage regulator. Since the PAM8403 can only handle 5V, I had to carefully split the power:

  • I used a voltage divider circuit with two resistors to step down the 12V supply to 5V for the PAM8403 (rough numbers below).
  • While this was a temporary hack, the power supply seemed stable enough to proceed.
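
For the record, the unloaded divider relationship I was relying on is just the usual ratio; the resistor values below are illustrative, not necessarily the ones I used:

  Vout = Vin × R2 / (R1 + R2)
  5V ≈ 12V × 5kΩ / (7kΩ + 5kΩ)

This ratio only holds while the load draws very little current - an amplifier pulls the divider’s output down as soon as it draws any real current, which is a big part of why this could only ever be a temporary hack.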

Next, I introduced the Bela Board into the setup. Bela’s 3.5mm audio output provided a convenient way to interface with the PAM8403. The idea was to process the microphone signal through Bela’s ADC (audio input), then output the processed audio to the PAM8403 and finally to the speaker.
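
For context, the simplest version of that signal path on the Bela side is a straight pass-through, roughly like the minimal sketch below (written against Bela’s standard C++ API; any actual processing would go inside render()):

  #include <Bela.h>

  bool setup(BelaContext *context, void *userData)
  {
      return true;   // nothing to initialise for a plain pass-through
  }

  void render(BelaContext *context, void *userData)
  {
      for (unsigned int n = 0; n < context->audioFrames; n++) {
          float in = audioRead(context, n, 0);   // microphone signal on audio input 0
          audioWrite(context, n, 0, in);         // left channel of the 3.5mm output
          audioWrite(context, n, 1, in);         // right channel
      }
  }

  void cleanup(BelaContext *context, void *userData)
  {
  }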

But here’s where things got tricky. The Bela output, connected via a 3.5mm audio cable, had to interface with the breadboard and the PAM8403. I spent hours trying to debug the connections, ensuring the signal paths were clean and the power was stable. Despite all this effort, no sound came out of the speaker—not even a crackle or pop.

At this point, I had to ask myself: was this failure a result of my setup, or was it something about the Bela Board itself? Objectively, Bela is a fantastic tool:

  • Its real-time audio processing capabilities are far superior to Arduino.
  • The programming environment is straightforward and optimized for sound-related projects.

But for some reason, I just couldn’t connect with it. Perhaps it was the higher cost, which felt hard to justify compared to Arduino. Maybe it was the unfamiliar workflow, or maybe it was just me being stubborn. Regardless, I found myself disliking Bela on a personal level. Its strengths were undeniable, but it didn’t feel like the right fit for my project.


Reflecting on this experiment, I suspect the problem might have been the way I connected Bela’s 3.5mm output to the breadboard and PAM8403:

  • The breadboard isn’t designed for handling audio cables, and the connections likely introduced signal loss or noise.
  • It’s also possible that Bela’s output impedance wasn’t well-matched with the PAM8403 input, resulting in no signal amplification.

S&E: Experiment Diary 3 - So Close Yet So Far

After several failed attempts to connect the microphone to my speaker through different methods, I decided to use the MAX9814 module as a preamplifier for the microphone signal before passing it to the PAM8403 amplifier module.

This setup brought me the closest to success yet. For the first time, the speaker responded to the microphone’s input! I could actually hear sound output as I spoke into the microphone. However, it wasn’t quite what I expected. The output from the speaker wasn’t clear audio—it was a cacophony of crackling, popping, and snapping sounds, like a series of fireworks going off. While it wasn’t what I aimed for, it was encouraging to know the system was partially working.

Determined to pinpoint the cause of the noise, I began troubleshooting. My first suspicion was that the MAX9814 module’s output signal was still too weak for the PAM8403 amplifier to process effectively. To confirm this, I measured the voltage across the MAX9814’s OUT and GND pins while speaking into the microphone. The voltage fluctuated by only 0.4V, which was far below what the PAM8403 needed for clean amplification. Unfortunately, I didn’t have any other preamplifier chips on hand to test as an alternative.


Next, I focused on the gain settings of the MAX9814. By default, the module was set to 60dB (high gain), but I wanted to see if lowering the gain could stabilize the signal. I connected the GAIN pin to VDD, reducing the gain to 40dB. The result? The speaker’s volume increased noticeably, but the crackling persisted, and the output voltage fluctuation dropped to a mere 0.01V. In addition, the voltage readings from the MAX9814 output became highly unstable, making it even harder to determine what was going wrong.


At this stage, I suspect the instability is due to one or more of the following:

  1. Insufficient Signal: Even with the MAX9814, the microphone signal may still be too weak for clean amplification.
  2. Impedance Mismatch or Noise: The connection between the MAX9814 and PAM8403 might be introducing noise or failing to match the input requirements properly.

S&E: Experiment Diary 2

Attempt 2: Microphone to ESP32 ADC and DAC

This approach seemed promising in theory. I connected the microphone to the ESP32’s ADC (analog-to-digital converter) to process the input signal, then used its DAC (digital-to-analog converter) to output the processed signal to the speaker. The ESP32, with its built-in ADC and DAC, felt like the perfect choice for such a task.
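
On paper, the core of that signal chain is only a few lines with the ESP32 Arduino core. Here’s a rough sketch of what I was attempting - the pin numbers are assumptions for illustration, and note there is no proper sample-rate control, which is part of why this is much harder to get right than it looks:

  // ESP32 Arduino core - pins are assumptions, not necessarily my wiring
  const int MIC_PIN = 34;   // ADC1 input (input-only GPIO)
  const int DAC_PIN = 25;   // one of the ESP32's two 8-bit DAC pins

  void setup() {
    // analogRead() on the ESP32 returns 12-bit values (0-4095) by default
  }

  void loop() {
    int sample = analogRead(MIC_PIN);   // read the microphone/preamp output
    dacWrite(DAC_PIN, sample >> 4);     // scale the 12-bit reading to the 8-bit DAC
    // no fixed sample rate here - clean audio would need a timer interrupt or I2S
  }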

I spent hours adjusting the wiring, ensuring the connections were solid, and tinkering with the code. Despite my efforts, the ESP32 continuously threw unknown errors that I couldn’t resolve. Whether it was a software bug, a configuration issue, or just my own inexperience with ESP32’s intricacies, I couldn’t get it to work. Frustration mounted as I realized I needed more debugging time than I had.


Attempt 3: Direct Connection – Microphone to PAM8403

With the ESP32 option shelved, I decided to simplify everything. This time, I used the Arduino’s 5V pin purely as a power source and directly connected the microphone to the PAM8403 amplifier. No microcontrollers, no ADCs, just a straightforward analog chain: microphone → PAM8403 → speaker.

The result? Only static and electrical noise came out of the speaker. No matter how I adjusted the connections, spoke into the microphone, or modified the PAM8403 settings, the output didn’t change. The simplicity of this setup should have worked, but I suspect the microphone’s signal was either too weak for the PAM8403 to amplify or the input impedance mismatch rendered the setup ineffective.


Both approaches were disappointing, but they taught me valuable lessons. I’m not giving up yet. After all these experiments, it’s clear that the microphone needs a proper preamplifier stage before it reaches the PAM8403. Each misstep is a step closer to a working solution - or so I keep telling myself.

S&E: Experiment Diary 1

1.5 months. That’s how long it took for my long-awaited package to finally arrive after being stuck at customs forever. In my impatience, I had reordered the components, hoping to meet my deadline, but, alas, the replacements didn’t arrive in time either. And then, just as I was waiting for the results of my EC, the first package finally showed up.

I thought this marked the end of my struggles. “Now everything will fall into place,” I told myself. Little did I know, I was about to enter the next chapter of chaos.


First, I realized I had ordered the wrong versions of several crucial parts:

  • The ADAU1701, MT3608, and LM386 chips I received were all surface-mount devices (SMD) instead of the breadboard-friendly through-hole versions I needed.
  • My two 18650 batteries were not equipped with proper holders, leaving me unable to securely connect them to the circuit.

Desperate Measures: Fixing What I Could

Some parts weren’t entirely unusable but required modifications:

  • The speaker terminals didn’t fit the breadboard, so I had no choice but to cut off the connectors and solder two iron leads onto it. Surprisingly, this worked.
  • The PAM8403 amplifier module arrived without pre-soldered headers. My first attempt at soldering was a disaster: I ruined one board entirely. Later, during testing, I managed to burn another board alongside an LED. Ouch.

Mic Troubles

The microphone module wasn’t spared from my chaotic tinkering. Initially, I soldered thin wires to its pins, but the connections were fragile and unreliable. I finally replaced the wires with stiff iron leads, which improved the stability.

SMD Soldering: The Ultimate Test

The three critical SMD chips—ADAU1701, MT3608, and LM386—needed legs soldered to their minuscule pins. Time was short, so I began with the largest of the three: the LM386. Hours passed, and my soldering efforts ended in failure. With only frustration to show for my attempts, I had to admit defeat.


By now, it was clear I wouldn’t have time to reorder or even replace the damaged components. Every misstep added more pressure, and the clock was ticking faster than ever. Despite these setbacks, I resolved to make the best of what I had.

This project has been an ongoing lesson in perseverance, problem-solving, and patience—more than I could have ever anticipated.


Substitution Attempt 1

Facing the mounting challenges with the original plan, I turned to a replacement solution. I decided to use Arduino as an intermediary to process the microphone signal. The idea was simple: connect the microphone to Arduino’s analog input, use analogRead to capture the signal, and then output it through a PWM pin to simulate an analog signal for the speaker.
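
The sketch for this is only a few lines - roughly the version below, with placeholder pins rather than my exact wiring:

  const int MIC_PIN = A0;   // microphone signal into the analog input (assumed pin)
  const int PWM_PIN = 9;    // PWM output toward the speaker/amplifier (assumed pin)

  void setup() {
    pinMode(PWM_PIN, OUTPUT);
  }

  void loop() {
    int sample = analogRead(MIC_PIN);    // 10-bit reading, 0-1023
    analogWrite(PWM_PIN, sample >> 2);   // map to an 8-bit PWM duty cycle, 0-255
  }

One obvious catch with this approach: on an Uno, analogWrite on pin 9 uses a PWM carrier of roughly 490 Hz, which is itself well within the audible range, so the output was never going to be clean without filtering.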

At first, it felt like progress. I could finally hear something through the speaker—a sign that the signal chain was functional. However, the result was far from ideal. What came out of the speaker was an extremely loud mid-high frequency tone, more noise than meaningful audio. It was frustrating, but also fascinating—a clear reminder of how tricky real-time audio processing can be without proper filtering or amplification stages.

S&E: Project Overview and Preparation

The goal is a fixed installation for an exhibition—something interactive yet low-maintenance. It doesn’t need fancy controls or a screen but should handle sound input (a microphone), sound processing (a DSP chip), and sound output (a speaker). Here are the main requirements I jotted down:

  • Input: Microphone for sound capture.
  • Processing: Leave room for effects like reverb or delay in the future.
  • Output: A small speaker to amplify and play back the processed sound.
  • Power: Needs to run on batteries for portability.

The First Questions

  1. What kind of microphone should I use?
    I found two options:
    • Electret Microphone: Reliable and cost-effective, but not very sensitive.
    • Condenser Microphone: Better sensitivity, but requires phantom power.
    I decided to use the electret microphone, since phantom power felt like overkill, and I purchased an LM386 module for preamplification.
  2. How will I process the audio?
    I stumbled upon the ADAU1701 chip, which looked like a great solution. It’s a small DSP that can be programmed using SigmaStudio. I also plan to use an Arduino to integrate it.
  3. How do I amplify the output for the speaker?
    After some research, I settled on the PAM8403 module, which is simple and works well with 4Ω, 3W speakers.
  4. What about power?
    I already had two 18650 lithium batteries. With a TP4056 module for charging and an MT3608 module for boosting the voltage to 5V, this part seemed manageable.

Here’s what I ordered for the initial build:

  • Microphones: Both electret and condenser types.
  • LM386 Preamp: To boost the microphone signal.
  • ADAU1701 DSP Module: For audio processing.
  • PAM8403 Amplifier: To drive the speaker.
  • TP4056 Charging Module: To handle the lithium batteries.
  • MT3608 Boost Converter: To regulate power output.

I also grabbed breadboards, jumper wires, resistors, capacitors, and some basic tools. This should be enough to build a prototype.


I wanted the design to be modular, so each component could be replaced or upgraded later. For example:

  • The DSP can be programmed to add effects down the line.
  • The amplifier and speaker can be scaled up if needed.

I mentioned the acupuncture head model in my previous blog; I would have to get it from China, and delivery usually takes 10-15 days after purchase. After comparing prices, I ordered all these gadgets from China as well. I laid out this whole plan in early October, and I hope all the components arrive soon so I can start experimenting earlier.

S&E Follow-up ideas for the project

After the tutorial with Milo, I started to develop the idea of adding acupoints to my head sculpture. In my first inspiration blog for this project, I mentioned having the sculpture covered in hair; however, I think that version would be too ideological for my assessment - I still need to make something with sound!

I pitched my idea like this: a real-time interactive sculpture. In the first element, I will make mouths that speak and ears that listen. In the second element, in which this piece will be exhibited, I will add more acupuncture features: knobs placed on different acupoints, each controlling a different effect on the live-captured sound.

We’ve come across several experiments: audible pins, an LDR-controlled audible circuit, Pure Data, and Bela boards. Programming was new to me and interesting, but still, I would say, I prefer analogue to digital! It’s not because I built an analogue synthesiser over the summer; it’s that everything digital makes me feel like an idiot.

S&E Sound for Screen W2

I watched Gaze (dir. Farnoosh Samadi, 2017). The sound of the motorcycle and the footsteps built a sense of tension throughout the film, even without any background music. The contrast in ambience (busy traffic versus near silence) and the rhythm of the footsteps really contributed to a nervousness that needed no voiceover.

Brief: Please consider Akomfrah’s approach to sound in this film and write a paragraph about it on your blog. How does sound change POV? Include images in your blog.

Beyond the Archive: The Work of Remembrance in John Akomfrah’s The Nine Muses

  1. Emotional Resonance: The use of melancholic music underscores the emotional weight of migrant experiences, establishing a mood that enhances the viewer’s connection to the narratives being presented.
  2. Voice-over Narration: The narration often shifts perspectives, introducing personal stories that provide context and depth. This change in voice can alter the viewer’s understanding of a scene, prompting empathy and reflection.
  3. Contrast and Conflict: Akomfrah contrasts industrial sounds with the natural soundscapes of Alaska, creating a dialogue between the harsh realities of migrant labor and the beauty of their journeys. This duality invites viewers to grapple with the complexities of migration.
  4. Point of View Shift: The sound design often shifts the point of view, aligning the viewer with the experiences of the migrants. For example, when archival voices recount struggles, the accompanying sounds draw the audience into that moment, making it feel immediate and personal.
  5. Cultural References: The film incorporates music that resonates with the cultural backgrounds of the migrants, enriching the narrative and grounding it in specific histories and identities.

In The Nine Muses, Akomfrah’s innovative use of sound significantly alters the viewer’s point of view, creating a layered auditory experience that complements and deepens the visual narrative. The film’s soundscape intertwines melancholic music, industrial sounds, and poignant voice-over narration, effectively immersing the audience in the emotional weight of the migrants’ journeys. As the narration shifts perspectives, it personalizes the experiences being depicted, inviting empathy and reflection. For instance, when archival voices recount the struggles faced by migrants, the accompanying sounds evoke the historical context, making the past feel immediate and relevant. This interplay between sound and image not only enriches the narrative but also facilitates a nuanced understanding of the complex realities of migration, drawing viewers into a shared emotional landscape.

Images to Include

  1. Stills from the Film: Choose images that highlight key moments where sound enhances the emotional impact, such as scenes of migrants with accompanying music or evocative soundscapes.
  2. Soundwave Visualizations: Include graphical representations of sound waves to visually convey the layering of audio elements.
  3. Behind-the-Scenes Photos: If available, photos of Akomfrah and his team during the sound design process can provide insight into the film’s production.

S&E Sound for Screen W1

On the day of our first Sound for Screen session, I used a hydrophone to capture the sound of the water in the little area outside Castle Center. I recorded three clips and found that only the first one had any sound. At home I listened to it with headphones, visualised it, and generated an image that is very different from what came to my mind during the eye-closed listening exercise with Jessica.

The underwater sound captured by the hydrophone is like trains and metal striking, with sparkles of strings and bubbles, which makes me think of a pumpkin-shaped jellyfish full of beer foam while a train passes by. This picture was very different from what I usually visualise, which is more graphical and made up of fundamental shapes. What’s more, the sound quality and texture the hydrophone gives might not be as good as the mic on the pole when it comes to commercial film production: what the H5’s microphone captured sounds much closer to how we expect water to sound.

S&E Inspiration

When I was walking on the street today, amid the symphony of traffic and passersby, something suddenly came to mind. Sound is so all-pervasive and passive; in Chinese there is an expression for this that translates literally as “through every hole”, which reminds me of hair growing through pores - also unstoppable, able to spread across the whole body.

Obviously humans only grow hair in certain places, and we even take measures to get rid of the extra hair, so why are we so greedy with our desires? In Buddhism, greed is one of the reasons we suffer. From money to relationships, we feel jealous, envious, ignored, unsatisfied because of desires that can never be met. When we say “it could be better if…”, we are chasing a vision that can never be reached. More does not necessarily mean better. Experiencing the present is the best present I have.

I constructed an image of a human covered in hair, never shaved. On social media, hair is considered a symbol of liveliness, yet we feel only disgust when we see a person entirely covered in hair. That, I think, is what the Golden Mean is getting at.