Lab Streaming Layer (LSL) for synchronizing multiple data streams
Roshini Randeniya
Oct 1, 2025
"""
by Roshini Randeniya and Lucas Kleine

Operation:
Once run from the command line, this script immediately initiates an LSL stream. Whenever the Enter key is pressed, it sends a trigger and plays an audio file."""
import sounddevice as sd
import soundfile as sf
from pylsl import StreamInfo, StreamOutlet
def wait_for_keypress():
    print("Press ENTER to start audio playback and send an LSL marker.")
    while True:  # This loop waits for keyboard input
        input_str = input()  # Wait for input from the terminal
        if input_str == "":  # If the Enter key is pressed, proceed
            break

def AudioMarker(audio_file, outlet):  # Function for playing audio and sending a marker
    data, fs = sf.read(audio_file)  # Load the audio file
    print("Playing audio and sending LSL marker...")
    marker_val = [1]
    outlet.push_sample(marker_val)  # Send a marker indicating the start of audio playback
    sd.play(data, fs)  # Play the audio
    sd.wait()  # Wait until audio is done playing
    print("Audio playback finished.")

if __name__ == "__main__":  # MAIN LOOP
    # Set up the LSL stream for markers
    stream_name = 'AudioMarkers'
    stream_type = 'Markers'
    n_chans = 1
    sr = 0  # Sampling rate of 0 because markers arrive at irregular intervals
    chan_format = 'int32'
    marker_id = 'uniqueMarkerID12345'
    info = StreamInfo(stream_name, stream_type, n_chans, sr, chan_format, marker_id)
    outlet = StreamOutlet(info)  # Create the LSL outlet

    # Keep the script running and wait for the ENTER key to play audio and send a marker
    while True:
        wait_for_keypress()
        audio_filepath = "/path/to/your/audio_file.wav"  # Replace with the correct path to your audio file
        AudioMarker(audio_filepath, outlet)
        # After playing audio and sending a marker, the script returns to waiting for the next keypress

By running this file (even before playing any audio), you've initiated an LSL stream through an outlet. Now we'll view that stream in LabRecorder.

STEP 5 - Use LabRecorder to view and save all LSL streams

1. Open LabRecorder.
2. Press Update. The available LSL streams should appear in the stream list. You should see the streams from both EmotivPRO instances (usually called "EmotivDataStream") and the marker stream (called "AudioMarkers").
3. Click Browse to select a location to store the data (and set other parameters).
4. Select all streams and press Record to start recording.
5. Click Stop when you want to end the recording.

[Image: LabRecorder stream list - https://framerusercontent.com/images/HFGuJF9ErVu2Jxrgtqt11tl0No.jpg]

Working with the data

LabRecorder outputs an XDF (Extensible Data Format) file that contains the data from all the streams. XDF files are organized into streams, each with its own header describing what it contains (device name, data type, sampling rate, channels, and more). You can use the code block below to open your XDF file and display some basic information.
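Before pointing the loader at a real recording, it can help to see the shape `pyxdf.load_xdf()` returns: every XML metadata field is parsed into a single-element list, which is why the code below indexes with `[0]` so often. Here is a minimal synthetic sketch of one stream entry (the field values are invented for illustration, not taken from a real file):

```python
# A hand-built dictionary mimicking the nested shape pyxdf.load_xdf()
# returns for one stream (values are illustrative, not from a real file).
stream = {
    'info': {
        'name': ['AudioMarkers'],
        'type': ['Markers'],
        'channel_count': ['1'],
        'nominal_srate': ['0'],
        'desc': [{'channels': [{'channel': [{'label': ['MRK']}]}]}],
    },
    'time_series': [[1], [1], [1]],  # three marker samples
}

# Every XML field arrives as a single-element list, hence the [0] indexing.
name = stream['info']['name'][0]
srate = float(stream['info']['nominal_srate'][0])
labels = [ch['label'][0] for ch in stream['info']['desc'][0]['channels'][0]['channel']]

print(name, srate, labels, len(stream['time_series']))  # prints: AudioMarkers 0.0 ['MRK'] 3
```

The same nested access pattern works on any stream pyxdf returns, whether it is an EEG stream or the "AudioMarkers" stream.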
"""
This example script demonstrates a few basic functions for importing and annotating EEG data collected with EmotivPRO software. It uses pyxdf to load an XDF file and print some basic metadata, then uses MNE to create an info object and plot the power spectrum."""
import pyxdf
import mne
import matplotlib.pyplot as plt
import numpy as np
# Path to your XDF file
data_path = '/path/to/your/xdf_file.xdf'
# Load the XDF file
streams, fileheader = pyxdf.load_xdf(data_path)
print("XDF File Header:", fileheader)
print("Number of streams found:", len(streams))
for i, stream in enumerate(streams):
    print("\nStream", i + 1)
    print("Stream Name:", stream['info']['name'][0])
    print("Stream Type:", stream['info']['type'][0])
    print("Number of Channels:", stream['info']['channel_count'][0])
    sfreq = float(stream['info']['nominal_srate'][0])
    print("Sampling Rate:", sfreq)
    print("Number of Samples:", len(stream['time_series']))
    print("First 5 data points:", stream['time_series'][:5])
    channel_names = [chan['label'][0] for chan in stream['info']['desc'][0]['channels'][0]['channel']]
    print("Channel Names:", channel_names)
channel_types = 'eeg'

# Create the MNE info object
info = mne.create_info(channel_names, sfreq, channel_types)
data = np.array(stream['time_series']).T  # Data needs to be transposed: channels x samples
raw = mne.io.RawArray(data, info)
raw.plot_psd(fmax=50)  # Plot the power spectral density up to 50 Hz

Additional resources

- Download this tutorial as a Jupyter notebook from the EMOTIV GitHub.
- Check out the LSL online documentation, including the official README file on GitHub.
- You'll need one or more supported data acquisition devices for collecting data. All of EMOTIV's brainwear devices connect to EmotivPRO software, which has built-in LSL capabilities for sending and receiving data streams.
- Code to run LSL using EMOTIV's devices, with example scripts.
- A useful LSL demo on YouTube.
- The SCCN LSL GitHub repository for all associated libraries.
- A GitHub repository for a collection of submodules and apps.
- The HyPyP analysis pipeline for hyperscanning studies.
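Because LabRecorder timestamps every stream against the same LSL clock, the irregular "AudioMarkers" timestamps can be mapped onto EEG sample indices with a simple nearest-neighbor search. The sketch below uses synthetic timestamps (the array names and the 128 Hz rate are our assumptions for illustration; in practice you would use each stream's `time_stamps` array from pyxdf):

```python
import numpy as np

# Synthetic stand-ins for what pyxdf returns: per-sample timestamps for a
# 128 Hz EEG stream and two irregular marker timestamps (seconds, LSL clock).
sfreq = 128.0
eeg_times = 1000.0 + np.arange(1280) / sfreq   # 10 s of EEG starting at t = 1000
marker_times = np.array([1002.5, 1007.37])     # marker onsets inside the recording

# Map each marker onto the index of the nearest EEG sample.
marker_idx = np.array([int(np.argmin(np.abs(eeg_times - t))) for t in marker_times])

# Onset of each marker in seconds relative to the first EEG sample,
# e.g. for building MNE events or annotations later.
onsets = eeg_times[marker_idx] - eeg_times[0]
print(marker_idx)  # -> marker sample indices [320, 943]
print(onsets)
```

From here, `marker_idx` can feed an MNE events array and `onsets` an `mne.Annotations` object for epoching around each audio playback.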
© 2025 EMOTIV, All rights reserved.

*Disclaimer – EMOTIV products are intended to be used for research applications and personal use only. Our products are not sold as Medical Devices as defined in EU directive 93/42/EEC. Our products are not designed or intended to be used for diagnosis or treatment of disease.
