
Trigno Importer

The TrignoImporter class is responsible for importing EMG data from Delsys Trigno CSV files.

Class Documentation

emgio.importers.trigno

BaseImporter

Bases: ABC

Base class for EMG data importers.

Source code in emgio/importers/base.py
class BaseImporter(ABC):
    """Base class for EMG data importers."""

    @abstractmethod
    def load(self, filepath: str) -> EMG:
        """
        Load EMG data from file.

        Args:
            filepath: Path to the input file

        Returns:
            EMG: EMG object containing the loaded data
        """
        pass

load(filepath) abstractmethod

Load EMG data from file.

Args: filepath: Path to the input file

Returns: EMG: EMG object containing the loaded data

Source code in emgio/importers/base.py
@abstractmethod
def load(self, filepath: str) -> EMG:
    """
    Load EMG data from file.

    Args:
        filepath: Path to the input file

    Returns:
        EMG: EMG object containing the loaded data
    """
    pass
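
A minimal sketch of a custom importer built on this interface. The class name, file layout, and sampling rate below are illustrative assumptions, not part of emgio:

import pandas as pd

from emgio import EMG
from emgio.importers.base import BaseImporter


class TwoColumnCSVImporter(BaseImporter):
    """Hypothetical importer for a plain two-column CSV: time, one signal."""

    def load(self, filepath: str) -> EMG:
        df = pd.read_csv(filepath)           # assumes a time column followed by one signal column
        emg = EMG()
        emg.add_channel(
            label=df.columns[1],             # use the signal column name as the channel label
            data=df.iloc[:, 1].values,
            sample_frequency=1000.0,         # assumed sampling rate in Hz
            physical_dimension='mV',
            channel_type='EMG'
        )
        emg.set_metadata('source_file', filepath)
        return emg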

EMG

Core EMG class for handling EMG data and metadata.

Attributes: signals (pd.DataFrame): Raw signal data with time as index. metadata (dict): Metadata dictionary containing recording information. channels (dict): Channel information including type, unit, sampling frequency. events (pd.DataFrame): Annotations or events associated with the signals, with columns 'onset', 'duration', 'description'.

Source code in emgio/core/emg.py
class EMG:
    """
    Core EMG class for handling EMG data and metadata.

    Attributes:
        signals (pd.DataFrame): Raw signal data with time as index.
        metadata (dict): Metadata dictionary containing recording information.
        channels (dict): Channel information including type, unit, sampling frequency.
        events (pd.DataFrame): Annotations or events associated with the signals,
                               with columns 'onset', 'duration', 'description'.
    """

    def __init__(self):
        """Initialize an empty EMG object."""
        self.signals = None
        self.metadata = {}
        self.channels = {}
        # Initialize events as an empty DataFrame with specified columns
        self.events = pd.DataFrame(columns=['onset', 'duration', 'description'])

    def plot_signals(self, channels=None, time_range=None, offset_scale=0.8,
                    uniform_scale=True, detrend=False, grid=True, title=None,
                    show=True, plt_module=None):
        """
        Plot EMG signals in a single plot with vertical offsets.

        Args:
            channels: List of channels to plot. If None, plot all channels.
            time_range: Tuple of (start_time, end_time) to plot. If None, plot all data.
            offset_scale: Portion of allocated space each signal can use (0.0 to 1.0).
            uniform_scale: Whether to use the same scale for all signals.
            detrend: Whether to remove mean from signals before plotting.
            grid: Whether to show grid lines.
            title: Optional title for the figure.
            show: Whether to display the plot.
            plt_module: Matplotlib pyplot module to use.
        """
        # Delegate to the static plotting function in visualization module
        static_plot_signals(
            emg_object=self,
            channels=channels,
            time_range=time_range,
            offset_scale=offset_scale,
            uniform_scale=uniform_scale,
            detrend=detrend,
            grid=grid,
            title=title,
            show=show,
            plt_module=plt_module
        )

    @classmethod
    def _infer_importer(cls, filepath: str) -> str:
        """
        Infer the importer to use based on the file extension.
        """
        extension = os.path.splitext(filepath)[1].lower()
        if extension in {'.edf', '.bdf'}:
            return 'edf'
        elif extension in {'.set'}:
            return 'eeglab'
        elif extension in {'.otb', '.otb+'}:
            return 'otb'
        elif extension in {'.csv', '.txt'}:
            return 'csv'
        elif extension in {'.hea', '.dat', '.atr'}:
            return 'wfdb'
        else:
            raise ValueError(f"Unsupported file extension: {extension}")

    @classmethod
    def from_file(
            cls,
            filepath: str,
            importer: Literal['trigno', 'otb', 'eeglab', 'edf', 'csv', 'wfdb'] | None = None,
            force_csv: bool = False,
            **kwargs
    ) -> 'EMG':
        """
        The method to create EMG object from file.

        Args:
            filepath: Path to the input file
            importer: Name of the importer to use. Can be one of the following:
                - 'trigno': Delsys Trigno EMG system (CSV)
                - 'otb': OTB/OTB+ EMG system (OTB, OTB+)
                - 'eeglab': EEGLAB .set files (SET)
                - 'edf': EDF/EDF+/BDF/BDF+ format (EDF, BDF)
                - 'csv': Generic CSV (or TXT) files with columnar data
                - 'wfdb': Waveform Database (WFDB)
                If None, the importer will be inferred from the file extension.
                Automatic import is supported for CSV/TXT files.
            force_csv: If True and importer is 'csv', forces using the generic CSV
                      importer even if the file appears to match a specialized format.
            **kwargs: Additional arguments passed to the importer

        Returns:
            EMG: New EMG object with loaded data
        """
        if importer is None:
            importer = cls._infer_importer(filepath)

        importers = {
            'trigno': 'TrignoImporter',  # CSV with Delsys Trigno Headers
            'otb': 'OTBImporter',  # OTB/OTB+ EMG system data
            'edf': 'EDFImporter',  # EDF/EDF+/BDF format
            'eeglab': 'EEGLABImporter',  # EEGLAB .set files
            'csv': 'CSVImporter',  # Generic CSV/Text files
            'wfdb': 'WFDBImporter'  # Waveform Database format
        }

        if importer not in importers:
            raise ValueError(
                f"Unsupported importer: {importer}. "
                f"Available importers: {list(importers.keys())}\n"
                "- trigno: Delsys Trigno EMG system\n"
                "- otb: OTB/OTB+ EMG system\n"
                "- edf: EDF/EDF+/BDF format\n"
                "- eeglab: EEGLAB .set files\n"
                "- csv: Generic CSV/Text files\n"
                "- wfdb: Waveform Database"
            )

        # If using CSV importer and force_csv is set, pass it as force_generic
        if importer == 'csv':
            kwargs['force_generic'] = force_csv

        # Import the appropriate importer class
        importer_module = __import__(
            f'emgio.importers.{importer}',
            globals(),
            locals(),
            [importers[importer]]
        )
        importer_class = getattr(importer_module, importers[importer])

        # Create importer instance and load data
        return importer_class().load(filepath, **kwargs)

    def select_channels(
            self,
            channels: Union[str, List[str], None] = None,
            channel_type: Optional[str] = None,
            inplace: bool = False) -> 'EMG':
        """
        Select specific channels from the data and return a new EMG object.

        Args:
            channels: Channel name or list of channel names to select. If None and
                    channel_type is specified, selects all channels of that type.
            channel_type: Type of channels to select ('EMG', 'ACC', 'GYRO', etc.).
                        If specified with channels, filters the selection to only
                        channels of this type.

        Returns:
            EMG: A new EMG object containing only the selected channels

        Examples:
            # Select specific channels
            new_emg = emg.select_channels(['EMG1', 'ACC1'])

            # Select all EMG channels
            emg_only = emg.select_channels(channel_type='EMG')

            # Select specific EMG channels only, this example does not select ACC channels
            emg_subset = emg.select_channels(['EMG1', 'ACC1'], channel_type='EMG')
        """
        if self.signals is None:
            raise ValueError("No signals loaded")

        # If channel_type specified but no channels, select all of that type
        if channels is None and channel_type is not None:
            channels = [ch for ch, info in self.channels.items()
                        if info['channel_type'] == channel_type]
            if not channels:
                raise ValueError(f"No channels found of type: {channel_type}")
        elif isinstance(channels, str):
            channels = [channels]

        # Validate channels exist
        if not all(ch in self.signals.columns for ch in channels):
            missing = [ch for ch in channels if ch not in self.signals.columns]
            raise ValueError(f"Channels not found: {missing}")

        # Filter by type if specified
        if channel_type is not None:
            channels = [ch for ch in channels
                        if self.channels[ch]['channel_type'] == channel_type]
            if not channels:
                raise ValueError(
                    f"None of the selected channels are of type: {channel_type}")

        # Create new EMG object
        new_emg = EMG()

        # Copy selected signals and channels
        new_emg.signals = self.signals[channels].copy()
        new_emg.channels = {ch: self.channels[ch].copy() for ch in channels}

        # Copy metadata
        new_emg.metadata = self.metadata.copy()

        if not inplace:
            return new_emg
        else:
            self.signals = new_emg.signals
            self.channels = new_emg.channels
            self.metadata = new_emg.metadata
            return self

    def get_channel_types(self) -> List[str]:
        """
        Get list of unique channel types in the data.

        Returns:
            List of channel types (e.g., ['EMG', 'ACC', 'GYRO'])
        """
        return list(set(info['channel_type'] for info in self.channels.values()))

    def get_channels_by_type(self, channel_type: str) -> List[str]:
        """
        Get list of channels of a specific type.

        Args:
            channel_type: Type of channels to get ('EMG', 'ACC', 'GYRO', etc.)

        Returns:
            List of channel names of the specified type
        """
        return [ch for ch, info in self.channels.items()
                if info['channel_type'] == channel_type]

    def to_edf(self, filepath: str, method: str = 'both',
               fft_noise_range: tuple = None, svd_rank: int = None,
               precision_threshold: float = 0.01,
               format: Literal['auto', 'edf', 'bdf'] = 'auto',
               bypass_analysis: bool | None = None,
               verify: bool = False, verify_tolerance: float = 1e-6,
               verify_channel_map: Optional[Dict[str, str]] = None,
               verify_plot: bool = False,
               events_df: Optional[pd.DataFrame] = None,
               **kwargs
               ) -> Union[str, None]:
        """
        Export EMG data to EDF/BDF format, optionally including events.

        Args:
            filepath: Path to save the EDF/BDF file
            method: Method for signal analysis ('svd', 'fft', or 'both')
                'svd': Uses Singular Value Decomposition for noise floor estimation
                'fft': Uses Fast Fourier Transform for noise floor estimation
                'both': Uses both methods and takes the minimum noise floor (default)
            fft_noise_range: Optional tuple (min_freq, max_freq) specifying frequency range for noise in FFT method
            svd_rank: Optional manual rank cutoff for signal/noise separation in SVD method
            precision_threshold: Maximum acceptable precision loss percentage (default: 0.01%)
            format: Format to use ('auto', 'edf', or 'bdf'). Default is 'auto'.
                    If 'edf' or 'bdf' is specified, that format will be used directly.
                    If 'auto', the format (EDF/16-bit or BDF/24-bit) is chosen based
                    on signal analysis to minimize precision loss while preferring EDF
                    if sufficient.
            bypass_analysis: If True, skip signal analysis step when format is explicitly
                             set to 'edf' or 'bdf'. If None (default), analysis is skipped
                             automatically when format is forced. Set to False to force
                             analysis even with a specified format. Ignored if format='auto'.
            verify: If True, reload the exported file and compare signals with the original
                    to check for data integrity loss. Results are printed. (default: False)
            verify_tolerance: Absolute tolerance used when comparing signals during verification. (default: 1e-6)
            verify_channel_map: Optional dictionary mapping original channel names (keys)
                                to reloaded channel names (values) for verification.
                                Used if `verify` is True and channel names might differ.
            verify_plot: If True and verify is True, plots a comparison of original vs reloaded signals.
            events_df: Optional DataFrame with events ('onset', 'duration', 'description').
                      If None, uses self.events. (This provides flexibility)
            **kwargs: Additional arguments for the EDF exporter

        Returns:
            Union[str, None]: If verify is True, returns a string with verification results.
                             Otherwise, returns None.

        Raises:
            ValueError: If no signals are loaded
        """
        from ..exporters.edf import EDFExporter  # Local import

        if self.signals is None:
            raise ValueError("No signals loaded")

        # --- Determine if analysis should be bypassed ---
        final_bypass_analysis = False
        if format.lower() == 'auto':
            if bypass_analysis is True:
                logging.warning("bypass_analysis=True ignored because format='auto'. Analysis is required.")
            # Analysis is always needed for 'auto' format
            final_bypass_analysis = False
        elif format.lower() in ['edf', 'bdf']:
            if bypass_analysis is None:
                # Default behaviour: skip analysis if format is forced
                final_bypass_analysis = True
                msg = (f"Format forced to '{format}'. Skipping signal analysis for faster export. "
                       "Set bypass_analysis=False to force analysis.")
                logging.log(logging.CRITICAL, msg)
            elif bypass_analysis is True:
                final_bypass_analysis = True
                logging.log(logging.CRITICAL, "bypass_analysis=True set. Skipping signal analysis.")
            else:  # bypass_analysis is False
                final_bypass_analysis = False
                logging.info(f"Format forced to '{format}' but bypass_analysis=False. Performing signal analysis.")
        else:
            # Should not happen if Literal type hint works, but good practice
            logging.warning(f"Unknown format '{format}'. Defaulting to 'auto' behavior (analysis enabled).")
            format = 'auto'
            final_bypass_analysis = False

        # Determine which events DataFrame to use
        if events_df is None:
            events_to_export = self.events
        else:
            events_to_export = events_df

        # Combine parameters
        all_params = {
            'precision_threshold': precision_threshold,
            'method': method,
            'fft_noise_range': fft_noise_range,
            'svd_rank': svd_rank,
            'format': format,
            'bypass_analysis': final_bypass_analysis,
            'events_df': events_to_export,  # Pass the events dataframe
            **kwargs
        }

        EDFExporter.export(self, filepath, **all_params)

        verification_report_dict = None
        if verify:
            logging.info(f"Verification requested. Reloading exported file: {filepath}")
            try:
                # Reload the exported file
                reloaded_emg = EMG.from_file(filepath, importer='edf')

                logging.info("Comparing original signals with reloaded signals...")
                # Compare signals using the imported function
                verification_results = compare_signals(
                    self,
                    reloaded_emg,
                    tolerance=verify_tolerance,
                    channel_map=verify_channel_map
                )

                # Generate and log report using the imported function
                report_verification_results(verification_results, verify_tolerance)
                verification_report_dict = verification_results

                # Plot comparison using imported function if requested
                summary = verification_results.get('channel_summary', {})
                comparison_mode = summary.get('comparison_mode', 'unknown')
                compared_count = sum(1 for k in verification_results if k != 'channel_summary')

                if verify_plot and compared_count > 0 and comparison_mode != 'failed':
                    plot_comparison(self, reloaded_emg, channel_map=verify_channel_map)
                elif verify_plot:
                    logging.warning("Skipping verification plot: No channels were successfully compared.")

            except Exception as e:
                logging.error(f"Verification failed during reload or comparison: {e}")
                verification_report_dict = {
                    'error': str(e),
                    'channel_summary': {'comparison_mode': 'failed'}
                }

        return verification_report_dict

    def set_metadata(self, key: str, value: any) -> None:
        """
        Set metadata value.

        Args:
            key: Metadata key
            value: Metadata value
        """
        self.metadata[key] = value

    def get_metadata(self, key: str) -> any:
        """
        Get metadata value.

        Args:
            key: Metadata key

        Returns:
            Value associated with the key
        """
        return self.metadata.get(key)

    def add_channel(
            self, label: str, data: np.ndarray, sample_frequency: float,
            physical_dimension: str, prefilter: str = 'n/a', channel_type: str = 'EMG') -> None:
        """
        Add a new channel to the EMG data.

        Args:
            label: Channel label or name (as per EDF specification)
            data: Channel data
            sample_frequency: Sampling frequency in Hz (as per EDF specification)
            physical_dimension: Physical dimension/unit of measurement (as per EDF specification)
            prefilter: Pre-filtering applied to the channel
            channel_type: Channel type ('EMG', 'ACC', 'GYRO', etc.)
        """
        if self.signals is None:
            # Create DataFrame with time index
            time = np.arange(len(data)) / sample_frequency
            self.signals = pd.DataFrame(index=time)

        self.signals[label] = data
        self.channels[label] = {
            'sample_frequency': sample_frequency,
            'physical_dimension': physical_dimension,
            'prefilter': prefilter,
            'channel_type': channel_type
        }

    def add_event(self, onset: float, duration: float, description: str) -> None:
        """
        Add an event/annotation to the EMG object.

        Args:
            onset: Event onset time in seconds.
            duration: Event duration in seconds.
            description: Event description string.
        """
        new_event = pd.DataFrame([{'onset': onset, 'duration': duration, 'description': description}])
        # Use pd.concat for appending, ignore_index=True resets the index
        self.events = pd.concat([self.events, new_event], ignore_index=True)
        # Sort events by onset time for consistency
        self.events.sort_values(by='onset', inplace=True)
        self.events.reset_index(drop=True, inplace=True)

__init__()

Source code in emgio/core/emg.py
def __init__(self):
    """Initialize an empty EMG object."""
    self.signals = None
    self.metadata = {}
    self.channels = {}
    # Initialize events as an empty DataFrame with specified columns
    self.events = pd.DataFrame(columns=['onset', 'duration', 'description'])

add_channel(label, data, sample_frequency, physical_dimension, prefilter='n/a', channel_type='EMG')

Add a new channel to the EMG data.

Args: label: Channel label or name (as per EDF specification) data: Channel data sample_frequency: Sampling frequency in Hz (as per EDF specification) physical_dimension: Physical dimension/unit of measurement (as per EDF specification) prefilter: Pre-filtering applied to the channel channel_type: Channel type ('EMG', 'ACC', 'GYRO', etc.)

Source code in emgio/core/emg.py
def add_channel(
        self, label: str, data: np.ndarray, sample_frequency: float,
        physical_dimension: str, prefilter: str = 'n/a', channel_type: str = 'EMG') -> None:
    """
    Add a new channel to the EMG data.

    Args:
        label: Channel label or name (as per EDF specification)
        data: Channel data
        sample_frequency: Sampling frequency in Hz (as per EDF specification)
        physical_dimension: Physical dimension/unit of measurement (as per EDF specification)
        prefilter: Pre-filtering applied to the channel
        channel_type: Channel type ('EMG', 'ACC', 'GYRO', etc.)
    """
    if self.signals is None:
        # Create DataFrame with time index
        time = np.arange(len(data)) / sample_frequency
        self.signals = pd.DataFrame(index=time)

    self.signals[label] = data
    self.channels[label] = {
        'sample_frequency': sample_frequency,
        'physical_dimension': physical_dimension,
        'prefilter': prefilter,
        'channel_type': channel_type
    }
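
For example, adding a synthetic channel to an empty EMG object (the sampling rate and signal are illustrative):

import numpy as np
from emgio import EMG

emg = EMG()
fs = 2000.0                                             # assumed sampling rate in Hz
data = np.sin(2 * np.pi * 50 * np.arange(4000) / fs)    # 2 s of a synthetic 50 Hz signal
emg.add_channel(
    label='EMG1',
    data=data,
    sample_frequency=fs,
    physical_dimension='mV',
    channel_type='EMG'
)
print(emg.signals.head())    # time index is created from the sampling rate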

add_event(onset, duration, description)

Add an event/annotation to the EMG object.

Args: onset: Event onset time in seconds. duration: Event duration in seconds. description: Event description string.

Source code in emgio/core/emg.py
def add_event(self, onset: float, duration: float, description: str) -> None:
    """
    Add an event/annotation to the EMG object.

    Args:
        onset: Event onset time in seconds.
        duration: Event duration in seconds.
        description: Event description string.
    """
    new_event = pd.DataFrame([{'onset': onset, 'duration': duration, 'description': description}])
    # Use pd.concat for appending, ignore_index=True resets the index
    self.events = pd.concat([self.events, new_event], ignore_index=True)
    # Sort events by onset time for consistency
    self.events.sort_values(by='onset', inplace=True)
    self.events.reset_index(drop=True, inplace=True)
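
For example, marking a sync pulse and a contraction on an EMG object (times are illustrative):

from emgio import EMG

emg = EMG()
emg.add_event(onset=1.5, duration=2.0, description='contraction')
emg.add_event(onset=0.5, duration=0.0, description='sync pulse')
print(emg.events)    # events are kept sorted by onset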

from_file(filepath, importer=None, force_csv=False, **kwargs) classmethod

The method to create EMG object from file.

Args:

  • filepath: Path to the input file
  • importer: Name of the importer to use. One of: 'trigno' (Delsys Trigno EMG system, CSV), 'otb' (OTB/OTB+ EMG system, OTB/OTB+), 'eeglab' (EEGLAB .set files, SET), 'edf' (EDF/EDF+/BDF/BDF+ format, EDF/BDF), 'csv' (generic CSV or TXT files with columnar data), 'wfdb' (Waveform Database, WFDB). If None, the importer will be inferred from the file extension. Automatic import is supported for CSV/TXT files.
  • force_csv: If True and importer is 'csv', forces using the generic CSV importer even if the file appears to match a specialized format.
  • **kwargs: Additional arguments passed to the importer

Returns: EMG: New EMG object with loaded data

Source code in emgio/core/emg.py
@classmethod
def from_file(
        cls,
        filepath: str,
        importer: Literal['trigno', 'otb', 'eeglab', 'edf', 'csv', 'wfdb'] | None = None,
        force_csv: bool = False,
        **kwargs
) -> 'EMG':
    """
    The method to create EMG object from file.

    Args:
        filepath: Path to the input file
        importer: Name of the importer to use. Can be one of the following:
            - 'trigno': Delsys Trigno EMG system (CSV)
            - 'otb': OTB/OTB+ EMG system (OTB, OTB+)
            - 'eeglab': EEGLAB .set files (SET)
            - 'edf': EDF/EDF+/BDF/BDF+ format (EDF, BDF)
            - 'csv': Generic CSV (or TXT) files with columnar data
            - 'wfdb': Waveform Database (WFDB)
            If None, the importer will be inferred from the file extension.
            Automatic import is supported for CSV/TXT files.
        force_csv: If True and importer is 'csv', forces using the generic CSV
                  importer even if the file appears to match a specialized format.
        **kwargs: Additional arguments passed to the importer

    Returns:
        EMG: New EMG object with loaded data
    """
    if importer is None:
        importer = cls._infer_importer(filepath)

    importers = {
        'trigno': 'TrignoImporter',  # CSV with Delsys Trigno Headers
        'otb': 'OTBImporter',  # OTB/OTB+ EMG system data
        'edf': 'EDFImporter',  # EDF/EDF+/BDF format
        'eeglab': 'EEGLABImporter',  # EEGLAB .set files
        'csv': 'CSVImporter',  # Generic CSV/Text files
        'wfdb': 'WFDBImporter'  # Waveform Database format
    }

    if importer not in importers:
        raise ValueError(
            f"Unsupported importer: {importer}. "
            f"Available importers: {list(importers.keys())}\n"
            "- trigno: Delsys Trigno EMG system\n"
            "- otb: OTB/OTB+ EMG system\n"
            "- edf: EDF/EDF+/BDF format\n"
            "- eeglab: EEGLAB .set files\n"
            "- csv: Generic CSV/Text files\n"
            "- wfdb: Waveform Database"
        )

    # If using CSV importer and force_csv is set, pass it as force_generic
    if importer == 'csv':
        kwargs['force_generic'] = force_csv

    # Import the appropriate importer class
    importer_module = __import__(
        f'emgio.importers.{importer}',
        globals(),
        locals(),
        [importers[importer]]
    )
    importer_class = getattr(importer_module, importers[importer])

    # Create importer instance and load data
    return importer_class().load(filepath, **kwargs)
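
Typical calls, assuming the files exist at the (illustrative) paths shown:

from emgio import EMG

# Importer inferred from the extension ('.csv' maps to the generic CSV importer)
emg = EMG.from_file('session01.csv')

# Explicitly request the Trigno importer for a Delsys Trigno export
emg = EMG.from_file('trigno_export.csv', importer='trigno')

# Force the generic CSV importer even if the file looks like a specialized format
emg = EMG.from_file('ambiguous.csv', importer='csv', force_csv=True)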

get_channel_types()

Get list of unique channel types in the data.

Returns: List of channel types (e.g., ['EMG', 'ACC', 'GYRO'])

Source code in emgio/core/emg.py
def get_channel_types(self) -> List[str]:
    """
    Get list of unique channel types in the data.

    Returns:
        List of channel types (e.g., ['EMG', 'ACC', 'GYRO'])
    """
    return list(set(info['channel_type'] for info in self.channels.values()))

get_channels_by_type(channel_type)

Get list of channels of a specific type.

Args: channel_type: Type of channels to get ('EMG', 'ACC', 'GYRO', etc.)

Returns: List of channel names of the specified type

Source code in emgio/core/emg.py
def get_channels_by_type(self, channel_type: str) -> List[str]:
    """
    Get list of channels of a specific type.

    Args:
        channel_type: Type of channels to get ('EMG', 'ACC', 'GYRO', etc.)

    Returns:
        List of channel names of the specified type
    """
    return [ch for ch, info in self.channels.items()
            if info['channel_type'] == channel_type]
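
For example, listing the available types and then selecting only the accelerometer channels (file path and channel names are illustrative):

from emgio import EMG

emg = EMG.from_file('trigno_export.csv', importer='trigno')
print(emg.get_channel_types())           # e.g. ['EMG', 'ACC', 'GYRO']
acc_channels = emg.get_channels_by_type('ACC')
print(acc_channels)                      # e.g. ['ACC1_X', 'ACC1_Y', 'ACC1_Z']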

get_metadata(key)

Get metadata value.

Args: key: Metadata key

Returns: Value associated with the key

Source code in emgio/core/emg.py
def get_metadata(self, key: str) -> any:
    """
    Get metadata value.

    Args:
        key: Metadata key

    Returns:
        Value associated with the key
    """
    return self.metadata.get(key)

plot_signals(channels=None, time_range=None, offset_scale=0.8, uniform_scale=True, detrend=False, grid=True, title=None, show=True, plt_module=None)

Plot EMG signals in a single plot with vertical offsets.

Args: channels: List of channels to plot. If None, plot all channels. time_range: Tuple of (start_time, end_time) to plot. If None, plot all data. offset_scale: Portion of allocated space each signal can use (0.0 to 1.0). uniform_scale: Whether to use the same scale for all signals. detrend: Whether to remove mean from signals before plotting. grid: Whether to show grid lines. title: Optional title for the figure. show: Whether to display the plot. plt_module: Matplotlib pyplot module to use.

Source code in emgio/core/emg.py
def plot_signals(self, channels=None, time_range=None, offset_scale=0.8,
                uniform_scale=True, detrend=False, grid=True, title=None,
                show=True, plt_module=None):
    """
    Plot EMG signals in a single plot with vertical offsets.

    Args:
        channels: List of channels to plot. If None, plot all channels.
        time_range: Tuple of (start_time, end_time) to plot. If None, plot all data.
        offset_scale: Portion of allocated space each signal can use (0.0 to 1.0).
        uniform_scale: Whether to use the same scale for all signals.
        detrend: Whether to remove mean from signals before plotting.
        grid: Whether to show grid lines.
        title: Optional title for the figure.
        show: Whether to display the plot.
        plt_module: Matplotlib pyplot module to use.
    """
    # Delegate to the static plotting function in visualization module
    static_plot_signals(
        emg_object=self,
        channels=channels,
        time_range=time_range,
        offset_scale=offset_scale,
        uniform_scale=uniform_scale,
        detrend=detrend,
        grid=grid,
        title=title,
        show=show,
        plt_module=plt_module
    )
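
For example, plotting two EMG channels over the first five seconds (file path and channel names are illustrative):

from emgio import EMG

emg = EMG.from_file('trigno_export.csv', importer='trigno')
emg.plot_signals(
    channels=['EMG1', 'EMG2'],
    time_range=(0, 5),
    detrend=True,
    title='First 5 s of EMG'
)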

select_channels(channels=None, channel_type=None, inplace=False)

Select specific channels from the data and return a new EMG object.

Args: channels: Channel name or list of channel names to select. If None and channel_type is specified, selects all channels of that type. channel_type: Type of channels to select ('EMG', 'ACC', 'GYRO', etc.). If specified with channels, filters the selection to only channels of this type.

Returns: EMG: A new EMG object containing only the selected channels

Examples:

# Select specific channels
new_emg = emg.select_channels(['EMG1', 'ACC1'])

# Select all EMG channels
emg_only = emg.select_channels(channel_type='EMG')

# Select specific EMG channels only, this example does not select ACC channels
emg_subset = emg.select_channels(['EMG1', 'ACC1'], channel_type='EMG')
Source code in emgio/core/emg.py
def select_channels(
        self,
        channels: Union[str, List[str], None] = None,
        channel_type: Optional[str] = None,
        inplace: bool = False) -> 'EMG':
    """
    Select specific channels from the data and return a new EMG object.

    Args:
        channels: Channel name or list of channel names to select. If None and
                channel_type is specified, selects all channels of that type.
        channel_type: Type of channels to select ('EMG', 'ACC', 'GYRO', etc.).
                    If specified with channels, filters the selection to only
                    channels of this type.

    Returns:
        EMG: A new EMG object containing only the selected channels

    Examples:
        # Select specific channels
        new_emg = emg.select_channels(['EMG1', 'ACC1'])

        # Select all EMG channels
        emg_only = emg.select_channels(channel_type='EMG')

        # Select specific EMG channels only, this example does not select ACC channels
        emg_subset = emg.select_channels(['EMG1', 'ACC1'], channel_type='EMG')
    """
    if self.signals is None:
        raise ValueError("No signals loaded")

    # If channel_type specified but no channels, select all of that type
    if channels is None and channel_type is not None:
        channels = [ch for ch, info in self.channels.items()
                    if info['channel_type'] == channel_type]
        if not channels:
            raise ValueError(f"No channels found of type: {channel_type}")
    elif isinstance(channels, str):
        channels = [channels]

    # Validate channels exist
    if not all(ch in self.signals.columns for ch in channels):
        missing = [ch for ch in channels if ch not in self.signals.columns]
        raise ValueError(f"Channels not found: {missing}")

    # Filter by type if specified
    if channel_type is not None:
        channels = [ch for ch in channels
                    if self.channels[ch]['channel_type'] == channel_type]
        if not channels:
            raise ValueError(
                f"None of the selected channels are of type: {channel_type}")

    # Create new EMG object
    new_emg = EMG()

    # Copy selected signals and channels
    new_emg.signals = self.signals[channels].copy()
    new_emg.channels = {ch: self.channels[ch].copy() for ch in channels}

    # Copy metadata
    new_emg.metadata = self.metadata.copy()

    if not inplace:
        return new_emg
    else:
        self.signals = new_emg.signals
        self.channels = new_emg.channels
        self.metadata = new_emg.metadata
        return self
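
A short example of the inplace option, which modifies the object instead of returning a new one (file path is illustrative):

from emgio import EMG

emg = EMG.from_file('trigno_export.csv', importer='trigno')
emg_only = emg.select_channels(channel_type='EMG')       # new object; emg is unchanged
emg.select_channels(channel_type='EMG', inplace=True)    # emg itself now holds only EMG channels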

set_metadata(key, value)

Set metadata value.

Args: key: Metadata key value: Metadata value

Source code in emgio/core/emg.py
def set_metadata(self, key: str, value: any) -> None:
    """
    Set metadata value.

    Args:
        key: Metadata key
        value: Metadata value
    """
    self.metadata[key] = value
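
For example, attaching and retrieving a subject identifier:

from emgio import EMG

emg = EMG()
emg.set_metadata('subject', 'S001')
print(emg.get_metadata('subject'))    # 'S001'
print(emg.get_metadata('missing'))    # None for keys that were never set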

to_edf(filepath, method='both', fft_noise_range=None, svd_rank=None, precision_threshold=0.01, format='auto', bypass_analysis=None, verify=False, verify_tolerance=1e-06, verify_channel_map=None, verify_plot=False, events_df=None, **kwargs)

Export EMG data to EDF/BDF format, optionally including events.

Args:

  • filepath: Path to save the EDF/BDF file
  • method: Method for signal analysis ('svd', 'fft', or 'both'). 'svd' uses Singular Value Decomposition for noise floor estimation, 'fft' uses the Fast Fourier Transform for noise floor estimation, and 'both' uses both methods and takes the minimum noise floor (default).
  • fft_noise_range: Optional tuple (min_freq, max_freq) specifying the frequency range for noise in the FFT method
  • svd_rank: Optional manual rank cutoff for signal/noise separation in the SVD method
  • precision_threshold: Maximum acceptable precision loss percentage (default: 0.01%)
  • format: Format to use ('auto', 'edf', or 'bdf'). Default is 'auto'. If 'edf' or 'bdf' is specified, that format will be used directly. If 'auto', the format (EDF/16-bit or BDF/24-bit) is chosen based on signal analysis to minimize precision loss while preferring EDF if sufficient.
  • bypass_analysis: If True, skip the signal analysis step when format is explicitly set to 'edf' or 'bdf'. If None (default), analysis is skipped automatically when the format is forced. Set to False to force analysis even with a specified format. Ignored if format='auto'.
  • verify: If True, reload the exported file and compare signals with the original to check for data integrity loss. Results are printed. (default: False)
  • verify_tolerance: Absolute tolerance used when comparing signals during verification (default: 1e-6)
  • verify_channel_map: Optional dictionary mapping original channel names (keys) to reloaded channel names (values) for verification. Used if verify is True and channel names might differ.
  • verify_plot: If True and verify is True, plots a comparison of original vs reloaded signals.
  • events_df: Optional DataFrame with events ('onset', 'duration', 'description'). If None, uses self.events. (This provides flexibility)
  • **kwargs: Additional arguments for the EDF exporter

Returns: Union[str, None]: If verify is True, returns a string with verification results. Otherwise, returns None.

Raises: ValueError: If no signals are loaded

Source code in emgio/core/emg.py
def to_edf(self, filepath: str, method: str = 'both',
           fft_noise_range: tuple = None, svd_rank: int = None,
           precision_threshold: float = 0.01,
           format: Literal['auto', 'edf', 'bdf'] = 'auto',
           bypass_analysis: bool | None = None,
           verify: bool = False, verify_tolerance: float = 1e-6,
           verify_channel_map: Optional[Dict[str, str]] = None,
           verify_plot: bool = False,
           events_df: Optional[pd.DataFrame] = None,
           **kwargs
           ) -> Union[str, None]:
    """
    Export EMG data to EDF/BDF format, optionally including events.

    Args:
        filepath: Path to save the EDF/BDF file
        method: Method for signal analysis ('svd', 'fft', or 'both')
            'svd': Uses Singular Value Decomposition for noise floor estimation
            'fft': Uses Fast Fourier Transform for noise floor estimation
            'both': Uses both methods and takes the minimum noise floor (default)
        fft_noise_range: Optional tuple (min_freq, max_freq) specifying frequency range for noise in FFT method
        svd_rank: Optional manual rank cutoff for signal/noise separation in SVD method
        precision_threshold: Maximum acceptable precision loss percentage (default: 0.01%)
        format: Format to use ('auto', 'edf', or 'bdf'). Default is 'auto'.
                If 'edf' or 'bdf' is specified, that format will be used directly.
                If 'auto', the format (EDF/16-bit or BDF/24-bit) is chosen based
                on signal analysis to minimize precision loss while preferring EDF
                if sufficient.
        bypass_analysis: If True, skip signal analysis step when format is explicitly
                         set to 'edf' or 'bdf'. If None (default), analysis is skipped
                         automatically when format is forced. Set to False to force
                         analysis even with a specified format. Ignored if format='auto'.
        verify: If True, reload the exported file and compare signals with the original
                to check for data integrity loss. Results are printed. (default: False)
        verify_tolerance: Absolute tolerance used when comparing signals during verification. (default: 1e-6)
        verify_channel_map: Optional dictionary mapping original channel names (keys)
                            to reloaded channel names (values) for verification.
                            Used if `verify` is True and channel names might differ.
        verify_plot: If True and verify is True, plots a comparison of original vs reloaded signals.
        events_df: Optional DataFrame with events ('onset', 'duration', 'description').
                  If None, uses self.events. (This provides flexibility)
        **kwargs: Additional arguments for the EDF exporter

    Returns:
        Union[str, None]: If verify is True, returns a string with verification results.
                         Otherwise, returns None.

    Raises:
        ValueError: If no signals are loaded
    """
    from ..exporters.edf import EDFExporter  # Local import

    if self.signals is None:
        raise ValueError("No signals loaded")

    # --- Determine if analysis should be bypassed ---
    final_bypass_analysis = False
    if format.lower() == 'auto':
        if bypass_analysis is True:
            logging.warning("bypass_analysis=True ignored because format='auto'. Analysis is required.")
        # Analysis is always needed for 'auto' format
        final_bypass_analysis = False
    elif format.lower() in ['edf', 'bdf']:
        if bypass_analysis is None:
            # Default behaviour: skip analysis if format is forced
            final_bypass_analysis = True
            msg = (f"Format forced to '{format}'. Skipping signal analysis for faster export. "
                   "Set bypass_analysis=False to force analysis.")
            logging.log(logging.CRITICAL, msg)
        elif bypass_analysis is True:
            final_bypass_analysis = True
            logging.log(logging.CRITICAL, "bypass_analysis=True set. Skipping signal analysis.")
        else:  # bypass_analysis is False
            final_bypass_analysis = False
            logging.info(f"Format forced to '{format}' but bypass_analysis=False. Performing signal analysis.")
    else:
        # Should not happen if Literal type hint works, but good practice
        logging.warning(f"Unknown format '{format}'. Defaulting to 'auto' behavior (analysis enabled).")
        format = 'auto'
        final_bypass_analysis = False

    # Determine which events DataFrame to use
    if events_df is None:
        events_to_export = self.events
    else:
        events_to_export = events_df

    # Combine parameters
    all_params = {
        'precision_threshold': precision_threshold,
        'method': method,
        'fft_noise_range': fft_noise_range,
        'svd_rank': svd_rank,
        'format': format,
        'bypass_analysis': final_bypass_analysis,
        'events_df': events_to_export,  # Pass the events dataframe
        **kwargs
    }

    EDFExporter.export(self, filepath, **all_params)

    verification_report_dict = None
    if verify:
        logging.info(f"Verification requested. Reloading exported file: {filepath}")
        try:
            # Reload the exported file
            reloaded_emg = EMG.from_file(filepath, importer='edf')

            logging.info("Comparing original signals with reloaded signals...")
            # Compare signals using the imported function
            verification_results = compare_signals(
                self,
                reloaded_emg,
                tolerance=verify_tolerance,
                channel_map=verify_channel_map
            )

            # Generate and log report using the imported function
            report_verification_results(verification_results, verify_tolerance)
            verification_report_dict = verification_results

            # Plot comparison using imported function if requested
            summary = verification_results.get('channel_summary', {})
            comparison_mode = summary.get('comparison_mode', 'unknown')
            compared_count = sum(1 for k in verification_results if k != 'channel_summary')

            if verify_plot and compared_count > 0 and comparison_mode != 'failed':
                plot_comparison(self, reloaded_emg, channel_map=verify_channel_map)
            elif verify_plot:
                logging.warning("Skipping verification plot: No channels were successfully compared.")

        except Exception as e:
            logging.error(f"Verification failed during reload or comparison: {e}")
            verification_report_dict = {
                'error': str(e),
                'channel_summary': {'comparison_mode': 'failed'}
            }

    return verification_report_dict
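
For example, a fast export with a forced format and a verified export (file paths are illustrative):

from emgio import EMG

emg = EMG.from_file('trigno_export.csv', importer='trigno')

# Forced format: signal analysis is skipped by default for a faster export
emg.to_edf('session01.bdf', format='bdf')

# Automatic format selection with verification against the exported file
report = emg.to_edf('session01.edf', format='auto', verify=True, verify_tolerance=1e-6)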

TrignoImporter

Bases: BaseImporter

Importer for Delsys Trigno EMG system data.

Source code in emgio/importers/trigno.py
class TrignoImporter(BaseImporter):
    """Importer for Delsys Trigno EMG system data."""

    def _analyze_csv_structure(self, csv_path: str) -> Tuple[List[str], int, str]:
        """
        Analyze the CSV file structure to identify metadata and data sections.

        Args:
            csv_path: Path to the CSV file

        Returns:
            Tuple containing:
                - List of metadata lines
                - Line number where data starts
                - Header line
        """
        metadata_lines = []
        data_start_line = 0
        header_line = None

        with open(csv_path, 'r') as f:
            for i, line in enumerate(f):
                line = line.strip()
                if not line:  # Skip empty lines
                    continue

                if 'X[s]' in line:  # This is the header line
                    header_line = line
                    data_start_line = i + 1
                    break

                metadata_lines.append(line)

        return metadata_lines, data_start_line, header_line

    def _parse_metadata(self, metadata_lines: List[str]) -> dict:
        """
        Parse metadata lines to extract channel information.

        Args:
            metadata_lines: List of metadata lines from the file

        Returns:
            Dictionary containing channel information
        """
        channel_info = {}

        for line in metadata_lines:
            if line.startswith('Label:'):
                # Extract channel name
                name_part = line[line.find('Label:') + 6:line.find('Sampling')].strip()

                # Extract sampling frequency
                freq_str = line[line.find('frequency:') + 10:].split()[0]
                sampling_freq = float(freq_str)

                # Extract unit
                unit = line[line.find('Unit:') + 5:line.find('Domain')].strip()

                channel_info[name_part] = {
                    'sample_frequency': sampling_freq,
                    'physical_dimension': unit
                }

        return channel_info

    def load(self, filepath: str) -> EMG:
        """
        Load EMG data from Trigno CSV file.

        Args:
            filepath: Path to the Trigno CSV file

        Returns:
            EMG: EMG object containing the loaded data
        """
        # Create EMG object
        emg = EMG()

        # Analyze file structure
        metadata_lines, data_start, header_line = self._analyze_csv_structure(filepath)

        # Parse metadata
        channel_info = self._parse_metadata(metadata_lines)

        # Read data section
        df = pd.read_csv(filepath, skiprows=data_start-1)

        # Clean up column names
        df.columns = [col.replace('X[s]', '').strip('"') for col in df.columns]

        # Get valid channel names (excluding time columns and extra columns)
        channel_labels = [col for col in df.columns if col and not col.startswith('.')]

        # Create time index
        time_col = df.columns[0]  # First column is time
        df.set_index(time_col, inplace=True)

        # Add channels to EMG object
        for label in channel_labels:
            if label in channel_info:
                info = channel_info[label]

                # Determine channel type
                if 'EMG' in label:
                    ch_type = 'EMG'
                elif 'ACC' in label:
                    ch_type = 'ACC'
                elif 'GYRO' in label:
                    ch_type = 'GYRO'
                else:
                    ch_type = 'OTHER'

                emg.add_channel(
                    label=label,
                    data=df[label].values,
                    sample_frequency=info['sample_frequency'],
                    physical_dimension=info['physical_dimension'],
                    channel_type=ch_type
                )

        # Add file metadata
        emg.set_metadata('source_file', filepath)
        emg.set_metadata('device', 'Delsys Trigno')

        return emg

load(filepath)

Load EMG data from Trigno CSV file.

Args: filepath: Path to the Trigno CSV file

Returns: EMG: EMG object containing the loaded data

Source code in emgio/importers/trigno.py
def load(self, filepath: str) -> EMG:
    """
    Load EMG data from Trigno CSV file.

    Args:
        filepath: Path to the Trigno CSV file

    Returns:
        EMG: EMG object containing the loaded data
    """
    # Create EMG object
    emg = EMG()

    # Analyze file structure
    metadata_lines, data_start, header_line = self._analyze_csv_structure(filepath)

    # Parse metadata
    channel_info = self._parse_metadata(metadata_lines)

    # Read data section
    df = pd.read_csv(filepath, skiprows=data_start-1)

    # Clean up column names
    df.columns = [col.replace('X[s]', '').strip('"') for col in df.columns]

    # Get valid channel names (excluding time columns and extra columns)
    channel_labels = [col for col in df.columns if col and not col.startswith('.')]

    # Create time index
    time_col = df.columns[0]  # First column is time
    df.set_index(time_col, inplace=True)

    # Add channels to EMG object
    for label in channel_labels:
        if label in channel_info:
            info = channel_info[label]

            # Determine channel type
            if 'EMG' in label:
                ch_type = 'EMG'
            elif 'ACC' in label:
                ch_type = 'ACC'
            elif 'GYRO' in label:
                ch_type = 'GYRO'
            else:
                ch_type = 'OTHER'

            emg.add_channel(
                label=label,
                data=df[label].values,
                sample_frequency=info['sample_frequency'],
                physical_dimension=info['physical_dimension'],
                channel_type=ch_type
            )

    # Add file metadata
    emg.set_metadata('source_file', filepath)
    emg.set_metadata('device', 'Delsys Trigno')

    return emg

Usage Example

from emgio import EMG
from emgio.importers.trigno import TrignoImporter

# Method 1: Using EMG.from_file (recommended)
emg = EMG.from_file('data.csv', importer='trigno')

# Method 2: Using the importer directly
importer = TrignoImporter()
emg = importer.load('data.csv')

File Format Requirements

The importer expects a Delsys Trigno CSV export with:

  1. A metadata section at the top of the file, with one line per channel starting with 'Label:' that gives the channel name, its sampling frequency ('Sampling frequency: ...') and its unit ('Unit: ... Domain: ...')
  2. A header line containing 'X[s]', which marks the start of the data section
  3. A data section in which time columns ('X[s]') are interleaved with the channel data columns

Example (values are illustrative):

Label: EMG1 Sampling frequency: 2000 Unit: mV Domain: Time
Label: ACC1_X Sampling frequency: 148.1 Unit: g Domain: Time
"X[s]","EMG1","X[s]","ACC1_X"
0.0000,0.0102,0.0000,0.0511
0.0005,0.0204,0.0068,0.0623
...

Channel Type Detection

The Trigno importer automatically assigns channel types based on the channel label:

  • Labels containing 'EMG' are classified as 'EMG'
  • Labels containing 'ACC' are classified as 'ACC'
  • Labels containing 'GYRO' are classified as 'GYRO'
  • All other labels are classified as 'OTHER'

Matching is case-sensitive, and only channels that also appear in the metadata section ('Label:' lines) are added to the EMG object.

Parameters

The load() method takes a single argument:

  • filepath (str): Path to the Trigno CSV file

The metadata section, header line, and per-channel sampling frequencies and units are detected automatically from the file; no additional keyword arguments are accepted.

Return Values

The load() method returns an EMG object with:

  1. signals (pandas.DataFrame): Signal data indexed by time, with channels as columns
  2. channels (dict): Per-channel information (sampling frequency, physical dimension, channel type)
  3. metadata (dict): Recording metadata, including 'source_file' and 'device'
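
A quick way to inspect what the importer produced (file path is illustrative):

from emgio import EMG

emg = EMG.from_file('trigno_export.csv', importer='trigno')
print(emg.signals.shape)              # samples x channels
print(list(emg.channels.keys()))      # channel labels
print(emg.get_metadata('device'))     # 'Delsys Trigno'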