Welcome! In this tutorial we’ll learn how to use Lab Streaming Layer (LSL) in Python to collect and synchronize Emotiv EEG data from multiple devices. A basic working knowledge of the Python programming language is assumed.
We will learn:
- What Lab Streaming Layer is and why we would use it
- How to collect synchronized data using multiple Emotiv EEG devices
- How to import and look through the data
What is LSL and what is it good for?
Lab Streaming Layer (LSL) is an open-source software framework for sending, receiving, and synchronizing neural, physiological, and behavioral data streams from diverse sensor hardware.
Increasingly capable, precise and mobile brain- and body-sensing hardware devices (like EMOTIV EEG systems) are bringing neuroscience outside the lab into the world of real-time data. Where brain measurements like EEG and MEG had once been confined to research labs, mobile devices let us collect multiple types of data in more naturalistic environments, and from multiple people at once. If a researcher is interested in physiological synchrony between two people listening to the same music, LSL can help them collect neural data from two EEG headsets separately that is also synchronized to the presentation of sound.
Some examples of other uses for LSL:
- Add event markers from an experiment to an ongoing EEG data stream.
- Time-align data from multiple sources for a single participant (e.g. heart rate, EMG, EEG)
- Time-align data from multiple participants (e.g. EEG Hyperscanning Studies)
How does it work?
Lab Streaming Layer is a protocol for the real-time exchange of time-series data between multiple devices. LSL can be implemented using open-source libraries for programming languages like Python, MATLAB, C++, Java and others.
The core functionality revolves around LSL data streams:
1. An acquisition device/software collects data and creates a data stream
- Physiological data can be streamed to LSL from EEG recording devices, eye-trackers, motion capture systems, heart rate monitors, etc., including metadata (sampling rate, data type, channel information, etc.)
- Event markers from experiments (e.g. using PsychoPy) can also be sent as a data stream using LSL.
2. The data stream is published to the network
- This is how data is sent using LSL; the data stream is “broadcast” to the network.
- Published streams are available on the network and discoverable by other LSL supported devices on the same network
- LSL assigns each sample or chunk a timestamp from each machine’s high-resolution local clock, and continuously measures the offsets between clocks (much like the Network Time Protocol does) so streams can be aligned during or after recording.
- The stream is pushed sample-by-sample (or chunk-by-chunk) through an “outlet”.
3. Collection device(s) “subscribe” to data stream(s)
- This is how data is received using LSL
- Collection devices on the same network receive published data streams via “inlets”.
- Each inlet receives the stream samples and metadata from only one outlet
4. Save data
- Upon subscribing to a data stream, you can save it to a variable in your preferred programming language, or use LSL’s provided software LabRecorder to save it to a standard format such as .xdf.
Tutorial overview
In this tutorial, we'll take an example experimental setup and guide you through the necessary steps and code for implementing it using LSL in Python.
We’ll use a Python script to play a sound while collecting EEG data from two people wearing Emotiv headsets. We’ll use two computers, each running EmotivPRO, to collect the EEG data and broadcast each stream through a separate LSL outlet. We’ll use a Python library to play an audio file and simultaneously send a trigger each time the file starts.
STEPS:
- Use EmotivPRO to stream data through LSL outlets, including EEG data (and/or motion, contact quality, signal quality, etc.)
- Play an audio track using a Python script, and simultaneously send a trigger through another LSL outlet
- Use LabRecorder to capture and save all three data streams through LSL inlets
STEP 1 - Setup and install
- You’ll need supported data acquisition devices for collecting data
• All of EMOTIV’s Brainwear devices connect to LSL via the EmotivPRO software
- Install EmotivPRO on your device(s). You will need a valid EmotivPRO license to use LSL.
- Install the Python LSL library with the following command: > pip install pylsl
- Download the LabRecorder software. This is a simple, free app that can be run from the command line or as a standalone download
- For our experiment: Install the necessary packages for playing an audio file using Python: > pip install sounddevice soundfile
STEP 2 - Set up the EEG devices
- Fit your Emotiv device(s) on your participant(s), turn them on, and connect them to your computer(s) via Bluetooth
- Open EmotivPRO and ensure the EEG data quality is sufficient using the sensor checks
See headset setup guides for EPOCX, INSIGHT
STEP 3 - Send the data from EmotivPRO via an LSL stream
- Locate the “…” in the upper right corner of the app, navigate to Settings
- Find the ‘Lab Streaming Layer’ section and the ‘Outlet’ subsection
- Select all the datatypes that you would like to broadcast
- Select the data format (32-bit float or 64-bit double)
- Select whether to send data sample-by-sample or in chunks of samples
- Click ‘Start’ to broadcast an LSL data stream
STEP 4 - Use a Python script to play audio and send triggers
- Copy the code block below into a Python file (or download the example script) to play an audio file and send triggers.
- Locate an audio file (ideally a .wav file) you'd like to play and edit the script by changing the variable audio_filepath to the filepath of your audio file on your computer
- Open a command prompt to interact with the command line and navigate to the folder where your Python file is stored: cd <path/to/folder>
- Enter: python3 filename.py
- Depending on your Python install, you may need to use python instead of python3
"""LSL example - playing audio and sending a trigger This script shows minmimal example code that allows a user to play an audio file and simultaneously send a trigger through an LSL stream that can be captured (for instance, using LabRecorder) and synchronized with other LSL data streams. Operation: Once run in the command line, this script immediately initiates an LSL stream. Whenever the 'Enter' key is pressed, it sends a trigger and plays an audio file.""" import sounddevice as sd import soundfile as sf from pylsl import StreamInfo, StreamOutlet def wait_for_keypress(): print("Press ENTER to start audio playback and send an LSL marker.") while True: # This loop waits for a keyboard input input_str = input() # Wait for input from the terminal if input_str == "": # If the enter key is pressed, proceed break def AudioMarker(audio_file, outlet): # function for playing audio and sending marker data, fs = sf.read(audio_file) # Load the audio file print("Playing audio and sending LSL marker...") marker_val = [1] outlet.push_sample(marker_val) # Send marker indicating the start of audio playback sd.play(data, fs) # play the audio sd.wait() # Wait until audio is done playing print("Audio playback finished.") if __name__ == "__main__": # MAIN LOOP # Setup LSL stream for markers stream_name = 'AudioMarkers' stream_type = 'Markers' n_chans = 1 sr = 0 # Set to 0 sampling rate because markers are irregular chan_format = 'int32' marker_id = 'uniqueMarkerID12345' info = StreamInfo(stream_name, stream_type, n_chans, sr, chan_format, marker_id) outlet = StreamOutlet(info) # create LSL outlet # Keep the script running and wait for ENTER key to play audio and send marker while True: wait_for_keypress() audio_filepath = "/path/to/your/audio_file.wav" # replace with correct path to your audio file AudioMarker(audio_filepath, outlet) # After playing audio and sending a marker, the script goes back to waiting for the next keypress
By running this file (even before playing the audio), you’ve initiated an LSL stream through an outlet. Next, we’ll view that stream in LabRecorder.
STEP 5 - Use LabRecorder to view and save all LSL streams
- Open LabRecorder
- Press Update. The available LSL streams should be visible in the stream list
• You should be able to see streams from both EmotivPRO instances (usually called "EmotivDataStream") and the marker stream (called "AudioMarkers")
- Click Browse to select a location to store data (and set other parameters)
- Select all streams and press Record to start recording
- Click Stop when you want to end the recording
Working with the data
LabRecorder outputs an XDF (Extensible Data Format) file that contains data from all the streams. XDF files are structured into streams, each with a header that describes what it contains (device name, data type, sampling rate, channels, and more). You can use the code block below to open your XDF file and display some basic information.
"""Example EEG preprocessing of EMOTIV data This example script demonstrates a few basic functions to import and annotate EEG data collected from EmotivPRO software. It uses MNE to load an XDF file, print some basic metadata, create an `info` object and plot the power spectrum.""" import pyxdf import mne import matplotlib.pyplot as plt import numpy as np # Path to your XDF file data_path = '/path/to/your/xdf_file.xdf' # Load the XDF file streams, fileheader = pyxdf.load_xdf(data_path) print("XDF File Header:", fileheader) print("Number of streams found:", len(streams)) for i, stream in enumerate(streams): print("\nStream", i + 1) print("Stream Name:", stream['info']['name'][0]) print("Stream Type:", stream['info']['type'][0]) print("Number of Channels:", stream['info']['channel_count'][0]) sfreq = float(stream['info']['nominal_srate'][0]) print("Sampling Rate:", sfreq) print("Number of Samples:", len(stream['time_series'])) print("Print the first 5 data points:", stream['time_series'][:5]) channel_names = [chan['label'][0] for chan in stream['info']['desc'][0]['channels'][0]['channel']] print("Channel Names:", channel_names) channel_types = 'eeg' # Create MNE info object info = mne.create_info(channel_names, sfreq, channel_types) data = np.array(stream['time_series']).T # Data needs to be transposed: channels x samples raw = mne.io.RawArray(data, info) raw.plot_psd(fmax=50) # plot a simple spectrogram (power spectral density)
Additional resources
- Download this tutorial as a Jupyter notebook from EMOTIV GitHub
- Check out the LSL online documentation, including the official README file on GitHub
- Code to run LSL using Emotiv’s devices, with example scripts
- Useful LSL demo on YouTube
- SCCN LSL GitHub repository for all associated libraries
- GitHub repository for a collection of submodules and apps
- HyPyP analysis pipeline for Hyperscanning studies