API Reference#
- class spikewrap.Session(subject_path, session_name, file_format, run_names='all', output_path=None)[source]#
Bases: `object`
Represents an electrophysiological recording session, consisting of one or more runs.
Exposes `preprocess()`, `plot_preprocessed()`, and `save_preprocessed()` methods to handle preprocessing of all runs.
- Parameters:
subject_path (`Path | str`) – The path to the subject's directory. This should contain the `session_name` directory.
session_name (`str`) – The name of this session. Must match the session folder name within `subject_path`.
file_format (`Literal['spikeglx', 'openephys']`) – Acquisition software used for recording, either `"spikeglx"` or `"openephys"`. Determines how a session's runs are discovered.
run_names (`Literal['all'] | list[str]`) – Specifies which runs within the session to include. If `"all"` (default), includes all runs detected within the session. Otherwise, a list of specific run names; each name must correspond to a run folder within the session. The order passed determines the concatenation order.
output_path (`Path | None`) – The path where preprocessed data will be saved (in NeuroBlueprint style).
Notes
The responsibility of this class is to manage the processing of runs contained within the session. Runs are held in `self._runs`, a list of `SeparateRun` or `ConcatRun` objects. Runs are loaded from raw data as separate runs, and are converted to a `ConcatRun` if concatenated.
The attributes on this class (except for `self._runs`) are to be treated as constant for the lifetime of the instance. For example, the output path should not be changed during the instance's lifetime.
When `preprocess()` is called, all runs are re-loaded and all operations are performed from scratch. This cleanly handles concatenation of runs and possible splitting of multi-shank recordings, and is more robust than attempting to incrementally concatenate and split runs and shanks.
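A minimal usage sketch (the paths, session name, and config name below are illustrative, not part of the API):

```python
from pathlib import Path
import spikewrap

# Illustrative NeuroBlueprint-style paths; adjust to your own dataset.
session = spikewrap.Session(
    subject_path=Path("/data/rawdata/sub-001"),
    session_name="ses-001",
    file_format="spikeglx",
    run_names="all",
)

# "my_config" is a hypothetical stored config name; see preprocess() below.
session.preprocess(configs="my_config", concat_runs=True)
session.save_preprocessed(overwrite=True)
```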
- load_raw_data()[source]#
Load the raw data from each run into SpikeInterface recording objects.
This function can be used to check that data loads successfully, but it can be skipped, with `preprocess()` run directly.
Data is loaded lazily at this stage (recording objects are created, but no data is actually read from disk).
- Return type: `None`
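For example, to verify the raw data loads before committing to preprocessing (a sketch, continuing the `session` object from above):

```python
# Builds lazy SpikeInterface recording objects without reading samples from disk.
session.load_raw_data()
```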
- preprocess(configs, concat_runs=False, per_shank=False)[source]#
Preprocess recordings for all runs for this session.
This step is lazy: under the hood, the preprocessing steps from `configs` are set up on SpikeInterface recording objects, and preprocessing is performed on the fly only when required (e.g. when plotting, saving or sorting).
- Parameters:
configs (`dict | str | Path`) –
If a `str` is provided, expects the name of a stored configuration file. See `show_available_configs()` and `save_config_dict()` for details.
If a `Path` is provided, expects the path to a valid spikewrap config YAML file.
If a `dict` is provided, expects a spikewrap configs dictionary, either including the `"preprocessing"` level or being the `"preprocessing"` level itself. See the documentation for details.
concat_runs (`bool`) – If `True`, all runs will be concatenated together before preprocessing. Use `session.get_run_names()` to check the order of concatenation.
per_shank (`bool`) – If `True`, preprocessing is performed on each shank separately.
- Return type: `None`
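A sketch of the dictionary form (the step names and nested schema here are assumptions for illustration; use `show_supported_preprocessing_steps()` and the documentation to confirm the exact format):

```python
configs = {
    "preprocessing": {
        # Hypothetical numbered steps: each maps a step name to its kwargs.
        "1": ["phase_shift", {}],
        "2": ["bandpass_filter", {"freq_min": 300, "freq_max": 6000}],
        "3": ["common_reference", {"operator": "median"}],
    }
}

# Concatenate runs and preprocess each shank separately.
session.preprocess(configs, concat_runs=True, per_shank=True)
```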
- save_preprocessed(overwrite=False, chunk_duration_s=2, n_jobs=1, slurm=False)[source]#
Save preprocessed data for all runs in the current session.
This method iterates over each run in the session and invokes each run's save_preprocessed method to persist the preprocessed data. It supports overwriting existing data, controlling the chunk size used for writing, parallel processing, and submission to a SLURM job scheduler.
- Parameters:
overwrite (`bool`) – If `True`, existing preprocessed run data will be overwritten. Otherwise, an error will be raised if preprocessed data already exists.
chunk_duration_s (`float`) – Duration, in seconds, of the chunks that are separately preprocessed and written.
n_jobs (`int`) – Number of parallel jobs to run when saving preprocessed data. Sets SpikeInterface's `set_global_job_kwargs`.
slurm (`dict | bool`) – Configuration for submitting the save jobs to a SLURM workload manager. If `False` (default), jobs are run locally. If `True`, the job is run on SLURM with default arguments. If a `dict` is provided, it should contain SLURM arguments. See the tutorials in the documentation for details.
- Return type: `None`
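For example, a local parallel save versus a SLURM submission (a sketch; the SLURM options dict comes from `default_slurm_options()`, documented below):

```python
# Save locally, writing 2-second chunks across 4 parallel jobs.
session.save_preprocessed(overwrite=True, chunk_duration_s=2, n_jobs=4)

# Or submit to SLURM with the packaged defaults (see default_slurm_options()).
session.save_preprocessed(overwrite=True, slurm=spikewrap.default_slurm_options("cpu"))
```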
- plot_preprocessed(run_idx='all', mode='map', time_range=(0.0, 1.0), show_channel_ids=True, show=False, figsize=(10, 6))[source]#
Plot preprocessed run data.
One plot is generated for each run. If preprocessing was performed per-shank, each shank appears in its own subplot. Under the hood, this calls SpikeInterface's `plot_traces()` function.
- Parameters:
run_idx – If `"all"`, plots preprocessed data for all runs in the session. If an integer, plots preprocessed data for the run at that index in `self._runs`.
mode – Determines the plotting style: a heatmap-style (`"map"`) or line (`"line"`) plot.
time_range – Time range (start, end), in seconds, to plot, e.g. `(0.0, 1.0)`.
show_channel_ids – If `True`, displays the channel identifiers on the plots.
show – If `True`, displays the plots immediately. If `False`, the plots are generated and returned without being displayed.
figsize – The size of the figure in inches, as `(width, height)`.
- Returns: A dictionary mapping each run's name to its corresponding `matplotlib.Figure` object.
- Return type: `dict of {str: matplotlib.Figure}`
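For example, to generate the figures without displaying them and save each to disk (a sketch; the output filenames are illustrative):

```python
figures = session.plot_preprocessed(
    run_idx="all", mode="map", time_range=(0.0, 0.5), show=False
)
for run_name, fig in figures.items():
    fig.savefig(f"{run_name}_preprocessed.png")  # illustrative filename
```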
- spikewrap.get_example_data_path(file_format='spikeglx')[source]#
Get the path to the example data directory. This contains a very small example spikeglx dataset in NeuroBlueprint format.
- Returns: Path to the root folder of the example dataset.
- Return type: `Path`
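This makes it easy to try spikewrap without your own data. A sketch (the subject and session folder names inside the example dataset are assumptions; inspect the returned path to confirm them):

```python
import spikewrap

example_path = spikewrap.get_example_data_path()  # defaults to the "spikeglx" example

# Assumed NeuroBlueprint layout: <root>/rawdata/sub-001/ses-001/...
session = spikewrap.Session(
    subject_path=example_path / "rawdata" / "sub-001",
    session_name="ses-001",
    file_format="spikeglx",
)
```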
- spikewrap.get_configs_path()[source]#
Get the path to the folder, in the user's home directory, in which all spikewrap config YAMLs are stored.
- Returns: The path to the spikewrap configs directory.
- Return type: `Path`
- spikewrap.show_supported_preprocessing_steps()[source]#
Print the (currently supported) SpikeInterface preprocessing steps.
- Return type: `None`
- spikewrap.show_available_configs()[source]#
Print the file names of all YAML config files in the user config path.
- Return type: `None`
- spikewrap.load_config_dict(filepath)[source]#
Load a configuration dictionary from a YAML file.
- Parameters:
filepath (`Path`) – The full path to the YAML file, including the file name and extension.
- Returns: The configs dict loaded from the YAML file.
- Return type: `dict`
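For example (the filename is illustrative):

```python
import spikewrap

configs = spikewrap.load_config_dict(
    spikewrap.get_configs_path() / "my_config.yaml"  # hypothetical stored config
)
```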
- spikewrap.save_config_dict(config_dict, name, folder=None)[source]#
Save a configuration dictionary to a YAML file.
- Parameters:
config_dict (`dict`) – The configs dictionary to save.
name (`str`) – The name of the YAML file (with or without the .yaml extension).
folder (`Path | None`) – If `None` (default), the config is saved in the spikewrap-managed user configs folder. Otherwise, the config is saved in `folder`.
- spikewrap.default_slurm_options(partition='cpu')[source]#
Get a set of default SLURM job submission options based on the selected partition.
All arguments correspond to sbatch arguments except for:
wait – Whether to block the execution of the calling process until the job completes.
env_name – The name of the Conda environment to run the job in. Defaults to the active Conda environment of the calling process, or "spikewrap" if none is detected. To modify this, update the returned dictionary directly.
- Parameters:
partition (`Literal['cpu', 'gpu']`) – The SLURM partition to use.
- Returns: Dictionary of SLURM job settings.
- Return type: `dict`
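For example, to fetch the defaults, adjust them, and pass them to `save_preprocessed()` (a sketch; only the `wait` and `env_name` keys are documented above, and the remaining keys correspond to sbatch arguments):

```python
opts = spikewrap.default_slurm_options(partition="gpu")

opts["env_name"] = "my_env"  # run in a specific Conda environment
opts["wait"] = True          # block until the SLURM job completes

session.save_preprocessed(overwrite=True, slurm=opts)
```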