Acoustic Source Localization with MEMS Microphone Arrays: Principles and Applications

By Wuxi Silicon Source Technology Co., Ltd.
Explore our MEMS Microphones | Explore our Sensor Modules

1. Introduction

Building upon our previous discussion on ultrasonic ranging, this article dives into how MEMS microphone arrays can be used for acoustic source localization — an essential technology in smart devices, robotics, and spatial audio systems.

When equipped with a high-performance MEMS microphone array, systems can detect the direction of arrival (DoA) or angle of arrival (AoA) of sound sources, enabling functions such as voice tracking, sound-based human-machine interaction, and indoor acoustic mapping.

2. What Is a Microphone Array?

A microphone array is a structured arrangement of multiple microphones, often in linear, rectangular, or hexagonal patterns.
These arrays capture sound waves arriving from different directions and allow computational models to estimate where a sound originates.

In principle, a microphone array functions much like an antenna array in wireless communication — but for sound waves.

For instance, a 4-microphone linear array can be used to detect the time difference of sound arrivals (TDOA) and calculate the AoA using geometry and signal processing.
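To make the geometry concrete, here is a minimal Python sketch (an illustration, not production code; the spacing and angle are assumed values) of the far-field arrival delays seen by a 4-microphone linear array:

```python
import numpy as np

c = 340.0                        # speed of sound (m/s)
spacing = 0.05                   # assumed element spacing (m)
theta = np.deg2rad(40.0)         # assumed source angle, measured from the array axis
mic_positions = np.arange(4) * spacing

# Far-field (plane-wave) model: arrival delay at each mic relative to mic 0
delays = mic_positions * np.cos(theta) / c
print(delays)                    # monotonically increasing, a few hundred microseconds apart
```

Measuring these pairwise delays (the TDOA) is the input to the angle estimate derived in the next section.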

Figure. Linear Microphone Array

3. AoA Localization Theory

Let’s assume a sound wave reaches two microphones spaced a distance d apart, arriving at an angle θ relative to the line connecting them.
The difference in the sound’s path length leads to a time delay Δt given by:

Δt = d·cos(θ) / c

where c is the speed of sound (~340 m/s).
By measuring Δt, we can invert this relation to compute the arrival angle:

θ = arccos(Δt·c / d)

This calculation can be extended to multi-mic arrays, which significantly improve spatial resolution.
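The two formulas can be checked numerically. The following Python sketch (with an assumed spacing of d = 0.15 m) applies the forward model to a 60° source and then recovers the angle from the resulting delay:

```python
import numpy as np

d, c = 0.15, 340.0                 # assumed mic spacing (m), speed of sound (m/s)
theta_true = 60.0                  # assumed arrival angle (degrees)

# Forward model: angle -> time delay
delta_t = d * np.cos(np.deg2rad(theta_true)) / c

# Inverse model: time delay -> angle (clip guards against rounding outside [-1, 1])
theta_est = np.degrees(np.arccos(np.clip(delta_t * c / d, -1.0, 1.0)))
print(delta_t, theta_est)          # roughly 2.2e-4 s, and 60.0 degrees recovered
```

Note that arccos only resolves angles in [0°, 180°]: a single microphone pair cannot distinguish a source from its mirror image across the array axis, which is one reason multi-mic arrays improve spatial resolution.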

Figure. Sound Propagation Model

4. Practical Implementation

In MATLAB or on embedded systems, the TDOA between two microphones is typically computed with a cross-correlation:

[data, fs] = audioread('array_record.wav');   % returns a samples-by-channels matrix
x = data(:, 1);                               % first microphone (column 1)
y = data(:, 4);                               % fourth microphone (column 4)
d = 0.15;                                     % spacing between mics 1 and 4 (m)
c = 340;                                      % speed of sound (m/s)

N = length(x);
X = fft(x); Y = fft(y);
correlation = fftshift(ifft(X .* conj(Y), 'symmetric'));   % cross-correlation via FFT
[~, idx] = max(correlation);
delta_t = (idx - (floor(N/2) + 1)) / fs;      % peak lag converted to seconds
theta = acosd(max(-1, min(1, delta_t * c / d)));   % arrival angle in degrees

This simple algorithm yields a directional estimate whose resolution is limited by the sampling rate and the microphone spacing; it is the same building block used in beamforming, noise reduction, and source separation.
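For readers working outside MATLAB, the same cross-correlation TDOA estimate can be sketched in Python with NumPy. A simulated two-channel signal stands in here for a real array recording, and all parameters are assumed values:

```python
import numpy as np

fs = 48000                              # assumed sample rate (Hz)
d, c = 0.15, 340.0                      # assumed mic spacing (m), speed of sound (m/s)

# Simulate two channels: y is x delayed by the TDOA of a 60-degree source
true_dt = d * np.cos(np.deg2rad(60.0)) / c
shift = int(round(true_dt * fs))        # delay quantized to whole samples
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
y = np.roll(x, shift)

# Cross-correlate and locate the peak lag
corr = np.correlate(x, y, mode="full")  # lags run from -(N-1) to N-1
lag = np.argmax(corr) - (len(x) - 1)    # peak lag in samples
delta_t = -lag / fs                     # delay of y relative to x, in seconds
theta = np.degrees(np.arccos(np.clip(delta_t * c / d, -1.0, 1.0)))
print(theta)                            # close to 60, limited by sample quantization
```

The recovered angle is off by roughly a degree because the delay is quantized to whole samples, which illustrates why wider spacings and higher sampling rates sharpen the estimate.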

5. Multi-Array Source Localization

By combining data from two or more microphone arrays, the system can pinpoint the 2D or 3D position of a sound source.
Each array estimates a direction, and their intersection determines the exact location — a principle also used in robotic auditory perception and conference audio systems.
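The intersection step above can be sketched in a few lines of Python. The array positions and bearings below are invented for illustration, with angles measured counterclockwise from the x-axis:

```python
import numpy as np

def bearing_intersection(pA, aA, pB, aB):
    """Intersect two direction-of-arrival rays from arrays at pA and pB."""
    uA = np.array([np.cos(aA), np.sin(aA)])   # unit vector along bearing A
    uB = np.array([np.cos(aB), np.sin(aB)])   # unit vector along bearing B
    # Solve pA + t*uA = pB + s*uB for the scalars t and s
    M = np.column_stack([uA, -uB])
    t, s = np.linalg.solve(M, np.array(pB, float) - np.array(pA, float))
    return np.array(pA, float) + t * uA

# Array A at the origin sees the source at 45 degrees; array B at (2, 0) sees it at 135 degrees
src = bearing_intersection((0, 0), np.deg2rad(45.0), (2, 0), np.deg2rad(135.0))
print(src)   # → [1. 1.]
```

In practice the bearings are noisy, so the rays rarely intersect exactly; real systems use a least-squares fit over several arrays rather than a single 2x2 solve.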

Figure. Sound Source Localization Model

6. Fingerprint-Based Localization

Beyond AoA, another emerging technique involves sound intensity fingerprinting — analyzing signal strength distributions of multiple frequencies across space.
With the help of neural networks, these sound “fingerprints” can be matched to positions within a predefined grid, achieving centimeter-level accuracy indoors.

This hybrid method combines acoustic propagation modeling with machine learning, enabling applications such as:

  • Indoor device positioning
  • Audio-based gesture recognition
  • Spatial acoustic analytics
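The core matching idea can be illustrated with a nearest-neighbor lookup. The grid cells and per-band intensity vectors below are invented toy data; a real system would train a neural classifier on measured fingerprints:

```python
import numpy as np

# Toy fingerprint database: per-grid-cell intensity vectors across 3 frequency bands
fingerprints = {
    (0, 0): np.array([0.9, 0.4, 0.1]),
    (0, 1): np.array([0.6, 0.7, 0.2]),
    (1, 0): np.array([0.3, 0.5, 0.8]),
}

def locate(measurement):
    """Return the grid cell whose stored fingerprint is nearest (Euclidean distance)."""
    return min(fingerprints, key=lambda cell: np.linalg.norm(fingerprints[cell] - measurement))

print(locate(np.array([0.85, 0.45, 0.15])))   # → (0, 0)
```

Replacing this nearest-neighbor rule with a learned classifier is what lets the fingerprint approach generalize between the surveyed grid points.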

7. Applications

  • Smart home devices: Voice command direction tracking
  • Industrial acoustics: Sound-based fault detection
  • Robotics: Environmental sound localization
  • Hearing aids: Directional noise suppression
  • Conference systems: Automatic speaker focus

For commercial integration, visit:
🔗 MEMS Microphones at Wuxi Silicon Source
🔗 Microphone Array Sensor Modules
