Lab Streaming Layer (LSL) for synchronizing multiple EEG data streams

Welcome! In this tutorial we’ll learn how to use Lab Streaming Layer (LSL) in Python to collect and synchronize Emotiv EEG data from multiple devices. It will require a basic working knowledge of the Python programming language.

We will learn:

  1. What Lab Streaming Layer is and why we would use it
  2. How to collect synchronized data using multiple Emotiv EEG devices
  3. How to import and look through the data


    What is LSL and what is it good for?

    Lab streaming layer (LSL) is an open-source software framework which can be used to send, receive and synchronize neural, physiological, and behavioral data streams from diverse sensor hardware.

    Increasingly capable, precise and mobile brain- and body-sensing hardware devices (like EMOTIV EEG systems) are bringing neuroscience outside the lab into the world of real-time data. Where brain measurements like EEG and MEG had once been confined to research labs, mobile devices let us collect multiple types of data in more naturalistic environments, and from multiple people at once. If a researcher is interested in physiological synchrony between two people listening to the same music, LSL can help them collect neural data from two EEG headsets separately that is also synchronized to the presentation of sound.

    Some examples of other uses for LSL:

    1. Add event markers from an experiment to an ongoing EEG data stream.
    2. Time-align data from multiple sources for a single participant (e.g. heart rate, EMG, EEG)
    3. Time-align data from multiple participants (e.g. EEG Hyperscanning Studies)


    How does it work?

    Lab Streaming Layer is a protocol for the real-time exchange of time-series data between multiple devices. LSL can be implemented using open-source libraries for programming languages like Python, MATLAB, C++, Java and others.


    The core functionality revolves around LSL data streams:

    1. An acquisition device/software collects data and creates a data stream 

    • Physiological data can be streamed to LSL from EEG recording devices, eye-trackers, motion capture systems, heart rate monitors, etc., including metadata (sampling rate, data type, channel information, etc.)
    • Event markers from experiments (e.g. using PsychoPy) can also be sent as a data stream using LSL.

    2. The data stream is published to the network

    • This is how data is sent using LSL; the data stream is “broadcast” to the network.
    • Published streams are available on the network and discoverable by other LSL supported devices on the same network
    • LSL assigns each data chunk or sample a timestamp based on a common clock, synchronized using a method similar to the Network Time Protocol.
    • The stream is pushed sample-by-sample (or chunk-by-chunk) through an “outlet”.

    3. Collection device(s) “subscribe” to data stream(s)

    • This is how data is received using LSL
    • Collection devices on the same network receive published data streams via “inlets”.
    • Each inlet receives the stream samples and metadata from only one outlet.

    4. Save data

    • Upon subscribing to a data stream, you can save it to a variable in your preferred programming language, or use LSL’s provided software LabRecorder to save it to a standard format such as .xdf.


    Tutorial overview

    In this tutorial, we'll take an example experimental setup and guide you through the necessary steps and code for implementing it using LSL in Python. 

    We’ll use a Python script to play a sound while collecting EEG data from two people wearing Emotiv headsets. We’ll use two computers, each running EmotivPRO, to collect the EEG data and broadcast each stream through a separate LSL outlet. We’ll use a Python library to play an audio file and simultaneously send a trigger each time the file starts.


    1. Use EmotivPRO to stream data through LSL outlets that include EEG data (and/or motion, contact quality, signal quality, etc.)
    2. Play an audio track using a Python script, and simultaneously send a trigger through another LSL outlet
    3. Use LabRecorder to capture and save all three data streams through LSL inlets


    STEP 1 - Setup and install

    1. You’ll need supported data acquisition devices for collecting data
      • All of EMOTIV’s brainware devices connect to LSL via EmotivPRO software
    2. Install EmotivPRO on your device(s). You will need a valid EmotivPRO license to use LSL.
    3. Install the Python LSL library with the following command: > pip install pylsl
    4. Download the LabRecorder software. This is a simple, free app that can be run from the command line or as a standalone download
    5. For our experiment: Install the necessary packages for playing an audio file using Python: > pip install sounddevice soundfile


    STEP 2 - Set up the EEG devices

    1. Fit your Emotiv device(s) on your participant(s), turn them on, and connect them to your computer(s) via Bluetooth
    2. Open EmotivPRO and ensure the EEG data quality is sufficient using the sensor checks

      See headset setup guides for EPOCX, INSIGHT


    STEP 3 - Send the data from EmotivPRO via an LSL stream

    1. Locate the “…” in the upper right corner of the app, navigate to Settings
    2. Find the ‘Lab Streaming Layer’ section and the ‘Outlet’ subsection
    3. Select all the datatypes that you would like to broadcast
    4. Select the data format (32-bit float or 64-bit double)
    5. Select whether to send data sample-by-sample or in chunks of samples
    6. Click ‘Start’ to broadcast an LSL data stream


    STEP 4 - Use a Python script to play audio and send triggers

    1. Copy the below code block into a Python script (a .py file) to play an audio file and send triggers.
    2. Locate an audio file (ideally a .wav file) you'd like to play and edit the script by changing the variable audio_filepath to the filepath of your audio file on your computer
    3. Open a command prompt to interact with the command line and navigate to the folder where your Python file is stored: cd <path/to/folder>
    4. Run the script: python3 <your_script_name>.py
      • Depending on your Python installation, you may need to use python instead of python3
    """LSL example - playing audio and sending a trigger
    This script shows minimal example code that allows a user to play an audio file and simultaneously send a trigger through an LSL stream that can be captured (for instance, using LabRecorder) and synchronized with other LSL data streams.
    Once run in the command line, this script immediately initiates an LSL stream. Whenever the 'Enter' key is pressed, it sends a trigger and plays an audio file."""
    import sounddevice as sd
    import soundfile as sf
    from pylsl import StreamInfo, StreamOutlet

    def wait_for_keypress():
        print("Press ENTER to start audio playback and send an LSL marker.")
        while True:  # This loop waits for a keyboard input
            input_str = input()  # Wait for input from the terminal
            if input_str == "":  # If the enter key is pressed, proceed
                break

    def AudioMarker(audio_file, outlet):  # function for playing audio and sending marker
        data, fs = sf.read(audio_file)  # Load the audio file
        print("Playing audio and sending LSL marker...")
        marker_val = [1]
        outlet.push_sample(marker_val)  # Send marker indicating the start of audio playback
        sd.play(data, fs)  # Play the audio
        sd.wait()  # Wait until audio is done playing
        print("Audio playback finished.")

    if __name__ == "__main__":  # MAIN LOOP
        # Setup LSL stream for markers
        stream_name = 'AudioMarkers'
        stream_type = 'Markers'
        n_chans = 1
        sr = 0  # Set to 0 sampling rate because markers are irregular
        chan_format = 'int32'
        marker_id = 'uniqueMarkerID12345'
        info = StreamInfo(stream_name, stream_type, n_chans, sr, chan_format, marker_id)
        outlet = StreamOutlet(info)  # create LSL outlet
        audio_filepath = "/path/to/your/audio_file.wav"  # replace with correct path to your audio file
        # Keep the script running and wait for ENTER key to play audio and send marker
        while True:
            wait_for_keypress()
            AudioMarker(audio_filepath, outlet)
            # After playing audio and sending a marker, the script goes back to waiting for the next keypress

     By running this file (even before playing any audio), you've initiated an LSL stream through an outlet. Now we'll view that stream in LabRecorder.


    STEP 5 - Use LabRecorder to view and save all LSL streams

    1. Open LabRecorder
    2. Press Update. The available LSL streams should be visible in the stream list
      • You should be able to see streams from both EmotivPRO instances (usually called "EmotivDataStream") and the marker stream (called "AudioMarkers")
    3. Click Browse to select a location to store data (and set other parameters)
    4. Select all streams and press Record to start recording
    5. Click Stop when you want to end the recording


    Working with the data

    LabRecorder outputs an XDF file (Extensible Data Format) that contains data from all the streams. XDF files are structured into streams, each with a header that describes what it contains (device name, data type, sampling rate, channels, and more). You can use the code block below to open your XDF file and display some basic information.

    """Example EEG preprocessing of EMOTIV data
    This example script demonstrates a few basic functions to import and annotate EEG data collected from EmotivPRO software. It uses MNE to load an XDF file, print some basic metadata, create an `info` object and plot the power spectrum."""
    import pyxdf
    import mne
    import matplotlib.pyplot as plt
    import numpy as np
    # Path to your XDF file
    data_path = '/path/to/your/xdf_file.xdf'
    # Load the XDF file
    streams, fileheader = pyxdf.load_xdf(data_path)
    print("XDF File Header:", fileheader)
    print("Number of streams found:", len(streams))
    for i, stream in enumerate(streams):
        print("\nStream", i + 1)
        print("Stream Name:", stream['info']['name'][0])
        print("Stream Type:", stream['info']['type'][0])
        print("Number of Channels:", stream['info']['channel_count'][0])
        sfreq = float(stream['info']['nominal_srate'][0])
        print("Sampling Rate:", sfreq)
        print("Number of Samples:", len(stream['time_series']))
        print("First 5 data points:", stream['time_series'][:5])
        if stream['info']['desc'][0] is not None:  # marker streams may lack channel metadata
            channel_names = [chan['label'][0] for chan in stream['info']['desc'][0]['channels'][0]['channel']]
            print("Channel Names:", channel_names)
            channel_types = 'eeg'
    # Create MNE info object
    info = mne.create_info(channel_names, sfreq, channel_types)
    data = np.array(stream['time_series']).T # Data needs to be transposed: channels x samples
    raw = mne.io.RawArray(data, info)  # create an MNE Raw object (channels x samples)
    raw.plot_psd(fmax=50)  # plot the power spectral density


    Additional resources

