Neural-inspired Tinnitus Synthesizer

Objective

This project implements a neural-inspired tinnitus synthesizer in Python. The synthesizer generates audio based on current neurological models of musical tinnitus, focusing on the relationship between brainstem-generated noise and cortical filtering. The aim is to reproduce the complex, harmonic-rich structures reported in cases of musical tinnitus (my own included) while providing insight into the underlying neural mechanisms.

Theoretical Background

Neural Basis

Musical tinnitus is believed to arise from the interaction of three mechanisms:

  • Noise Generation: Spontaneous activity in the auditory (8th cranial) nerve and its brainstem nuclei behaves like white noise, providing the raw material for the tinnitus percept
  • Cortical Filtering: The auditory cortex applies complex filtering to this noise, producing harmonic-rich, musical percepts
  • Pattern Recognition: The filtered signals are matched against familiar musical patterns stored in auditory memory

Signal Processing Model

The synthesis process follows these key stages:

  • Source Signal: White noise generation, mimicking neural noise
  • Frequency Shaping: Multiple resonant bandpass filters create harmonic structures
  • Amplitude Modulation: 8 Hz tremolo, matching common tinnitus characteristics
  • Harmonic Processing: Material-specific scaling of harmonics to create different timbres
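
As a minimal sketch of these four stages (not the full synthesizer shown later; the 440 Hz center frequency, 30 Hz bandwidth and 25% tremolo depth are arbitrary example values):

import numpy as np
from scipy import signal

sample_rate = 44100
duration = 2.0
t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)

# Stage 1: source signal -- white noise standing in for neural noise
noise = np.random.normal(0, 1, len(t))

# Stage 2: frequency shaping -- one resonant bandpass filter (440 Hz center, 30 Hz wide)
b, a = signal.butter(2, [425, 455], btype='bandpass', fs=sample_rate)
shaped = signal.lfilter(b, a, noise)

# Stage 3: amplitude modulation -- 8 Hz tremolo at 25% depth
shaped *= 1 + 0.25 * np.sin(2 * np.pi * 8 * t)

# Stage 4: harmonic processing would add further filters at 2x, 3x, ... the
# center frequency with material-specific gains (see the full example below)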

Implementation Details

Core Components

  • Filename: `tinnitus_synthesizer.py`
  • Dependencies: `numpy`, `scipy`, `sounddevice`
  • Features:
    1. Neural Noise Generation: Simulates brainstem-generated noise
    2. Dynamic Filtering: Implements cortical filtering mechanisms
    3. Frequency Interpolation: Smooth transitions between musical notes
    4. Material Simulation: Different harmonic profiles for various timbres

Signal Processing Architecture

  • White Noise Generation: Base signal representing neural noise
  • Bandpass Filtering: Multiple parallel filters creating resonant peaks
  • Harmonic Scaling: Material-specific attenuation of odd/even harmonics
  • Amplitude Modulation: Tremolo effect at physiologically relevant frequencies
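
To see the material-specific scaling in isolation, the short sketch below reuses the odd/even exponents of the 'glass' preset from the full example and prints the first few scale factors, showing how even harmonics dominate:

def glass_scaling(n):
    # Even harmonics decay slowly, odd harmonics decay quickly
    return 1.0 / n ** 0.3 if n % 2 == 0 else 1.0 / n ** 1.5

for n in range(1, 7):
    print(n, round(glass_scaling(n), 3))
# prints (one pair per line): 1 1.0, 2 0.812, 3 0.192, 4 0.66, 5 0.089, 6 0.584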

Synthesis Parameters

  • Carrier Frequencies: Musical notes (e.g., D4, F4, E4)
  • Harmonic Structure: Up to 32 harmonics with material-specific scaling
  • Modulation: 8 Hz tremolo with adjustable depth
  • Filter Characteristics: Variable bandwidth and Q-factor
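
The carrier frequencies are passed to the synthesizer as plain values in Hz; a small helper like the hypothetical one below converts MIDI note numbers to the rounded values used in the code:

# Hypothetical helper: convert a MIDI note number to its frequency in Hz
def midi_to_hz(midi_note):
    return 440.0 * 2 ** ((midi_note - 69) / 12)

# D4 = 62, F4 = 65, E4 = 64, E3 = 52
print([round(midi_to_hz(n)) for n in (62, 65, 64, 52)])  # [294, 349, 330, 165]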

Usage Example

import numpy as np
from scipy import signal
import sounddevice as sd
 
def smooth_interpolate(freqs, t, duration):
    """
    Build a continuous frequency trajectory from a list of note frequencies,
    gliding exponentially from each note to the next over equal-length segments.
    """
    samples = len(t)
    freq_signal = np.zeros(samples)
    segments = len(freqs)

    for i in range(segments):
        # Sample range covered by this note segment
        start_idx = int(i * samples / segments)
        end_idx = int((i + 1) * samples / segments)

        # Current and next frequency (wrapping back to the first note)
        curr_freq = freqs[i]
        next_freq = freqs[(i + 1) % segments]

        # Normalized time within the segment (0 to 1)
        segment_t = np.linspace(0, 1, end_idx - start_idx)

        # Exponential glide from curr_freq towards next_freq
        tau = 0.1  # time constant, in units of segment length
        freq_signal[start_idx:end_idx] = (
            curr_freq + (next_freq - curr_freq) * (1 - np.exp(-segment_t / tau))
        )

    return freq_signal
 
def get_harmonic_scaling(harmonic_number, style='glass'):
    """
    Calculate harmonic scaling based on different organ pipe materials/styles
    """
    is_even = harmonic_number % 2 == 0
 
    if style == 'glass':
        # Glass pipes: strong even harmonics, quick decay of odds
        if is_even:
            return 1.0 / (harmonic_number ** 0.3)  # Slower decay for even harmonics
        else:
            return 1.0 / (harmonic_number ** 1.5)  # Quick decay for odd harmonics
 
    elif style == 'metal':
        # Metal pipes: strong upper harmonics, slight odd/even difference
        if is_even:
            return 1.0 / (harmonic_number ** 0.4)
        else:
            return 1.0 / (harmonic_number ** 0.6)
 
    elif style == 'crystal':
        # Crystal-like: very strong even harmonics, almost no odds
        if is_even:
            return 1.0 / (harmonic_number ** 0.25)  # Very strong even harmonics
        else:
            return 1.0 / (harmonic_number ** 2.0)  # Very weak odd harmonics
 
    return 1.0 / harmonic_number  # Default scaling
 
def generate_filtered_tinnitus(duration=20, sample_rate=44100, frequencies=None, style='glass'):
    """
    Filter white noise through a bank of resonant bandpass filters centered on
    the harmonics of a moving carrier, then apply a slow tremolo.
    """
    if frequencies is None:
        frequencies = [294, 349, 330, 165]  # D4, F4, E4, E3 in Hz
 
    t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
    white_noise = np.random.normal(0, 2, len(t))
 
    def create_resonant_filter(center_freq, bandwidth=30):
        low_freq = center_freq - bandwidth/2
        high_freq = center_freq + bandwidth/2
        low_freq = max(1, low_freq)
        high_freq = min(sample_rate/2 - 1, high_freq)
        b, a = signal.butter(2, [low_freq, high_freq], btype='bandpass', fs=sample_rate)
        return b, a
 
    current_freq = smooth_interpolate(frequencies, t, duration)
    n_harmonics = 32
    output_signal = np.zeros_like(white_noise)
 
    chunk_size = sample_rate
    n_chunks = len(t) // chunk_size + 1
 
    for chunk in range(n_chunks):
        start_idx = chunk * chunk_size
        end_idx = min((chunk + 1) * chunk_size, len(t))
        if start_idx >= len(t):
            break
 
        chunk_output = np.zeros(end_idx - start_idx)
        chunk_noise = white_noise[start_idx:end_idx]
        chunk_freq = current_freq[start_idx:end_idx]
 
        for harmonic in range(1, n_harmonics + 1):
            harmonic_freq = chunk_freq * harmonic
            if np.max(harmonic_freq) < sample_rate / 2:
                avg_freq = np.mean(harmonic_freq)
                # Tighter bandwidth for even harmonics
                bandwidth = 40 if harmonic % 2 == 0 else 60
                # Apply slight random detuning (+/-0.25%) to the filter center frequency
                detuning_factor = 1 + 0.005 * (np.random.random() - 0.5)
                b, a = create_resonant_filter(avg_freq * detuning_factor, bandwidth)
                harmonic_signal = signal.lfilter(b, a, chunk_noise)
 
                # Apply material-specific harmonic scaling
                scaling = get_harmonic_scaling(harmonic, style) * 2.0  # Amplified for more presence
                chunk_output += harmonic_signal * scaling
 
        output_signal[start_idx:end_idx] = chunk_output
 
    # Subtle tremolo
    mod_freq = 8
    mod_depth = 0.25  # Reduced tremolo for more stable tone
    tremolo = 1 + mod_depth * np.sin(2 * np.pi * mod_freq * t)
    modulated_signal = output_signal * tremolo
 
    # Less noise in the final mix
    final_signal = 0.99 * modulated_signal + 0.01 * white_noise
 
    # Normalize and apply gain
    final_signal = final_signal / np.max(np.abs(final_signal))
    final_signal *= 0.8
 
    return final_signal, sample_rate
 
# Generate and play with different styles
frequencies = [588, 698, 660, 588, 698, 660, 330]  # D5, F5, E5, D5, F5, E5, E4
 
# Try different styles: 'glass', 'metal', or 'crystal'
tinnitus_sound, sample_rate = generate_filtered_tinnitus(
    duration=20, 
    frequencies=frequencies,
    style='glass'  # Try changing this to 'metal' or 'crystal'
)
 
sd.play(tinnitus_sound, samplerate=sample_rate)
sd.wait()
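
If playback through sounddevice is not available, the generated signal can instead be written to a WAV file using scipy (already a dependency); the filename is just an example:

from scipy.io import wavfile

# Continue from the example above: scale to 16-bit integers and save
wavfile.write('tinnitus_glass.wav', sample_rate, (tinnitus_sound * 32767).astype(np.int16))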

Limitations

1. Computational Efficiency:

  • High computational load due to multiple parallel filters
  • Memory intensive for long durations
  • Chunk-based processing required for longer sequences

2. Physiological Accuracy:

  • Simplified model of actual neural processes
  • Does not account for individual variations in tinnitus perception
  • Limited to simulating musical tinnitus

3. Sound Design:

  • Fixed tremolo frequency
  • Limited material presets
  • No real-time parameter modification

Future Improvements

1. Enhanced Physiological Modeling:

  • Implementation of more complex neural filtering mechanisms
  • Addition of frequency-dependent phase relationships
  • Integration of adaptation and fatigue effects

2. Performance Optimization:

  • GPU acceleration for filter processing
  • More efficient filter implementation
  • Real-time parameter control

3. Extended Features:

  • Additional material presets
  • Interactive parameter adjustment
  • Visual representation of frequency content
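
As one possible direction for the real-time parameter control and interactive adjustment mentioned above, the hypothetical sketch below streams the precomputed signal through a sounddevice callback, which would let a parameter such as the output gain be changed during playback:

import numpy as np
import sounddevice as sd

gain = 0.8      # parameter that could be adjusted while audio is running
position = 0    # read position into the precomputed signal

def callback(outdata, frames, time, status):
    global position
    chunk = tinnitus_sound[position:position + frames]
    if len(chunk) < frames:                      # zero-pad the final block
        chunk = np.pad(chunk, (0, frames - len(chunk)))
    outdata[:, 0] = gain * chunk
    position += frames

with sd.OutputStream(samplerate=sample_rate, channels=1, callback=callback):
    sd.sleep(int(1000 * len(tinnitus_sound) / sample_rate))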

Final Note

This neural-inspired synthesizer provides a platform for experimenting with tinnitus-like sound generation based on current neurological models. Its modular design allows for easy extension and modification of various parameters, making it useful for both research and educational purposes.
