Annotation Garden Initiative homepage showing the collaborative annotation platform

Annotation Garden Initiative: Collaborative Annotation for Neuroscience

AGI addresses a critical gap: naturalistic stimuli used in neuroscience are growing increasingly complex, yet standardized, reusable annotations for them remain scarce. Through GitHub-based version control and modern web interfaces, AGI enables researchers to share, refine, and build upon stimulus annotations across studies.

Image Annotation tool interface showing VLM-based annotation of NSD images

Image Annotation Tool: VLM-Based Annotation for Neuroscience

An annotation bank for the NSD Shared 1000 images, generated with open-source Vision Language Models (VLMs). The tool supports batch processing across multiple VLMs, with plans to integrate Hierarchical Event Descriptors (HED) for standardized semantic annotations.
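
A rough sketch of what batch captioning across several open-source VLMs could look like, using the Hugging Face transformers image-to-text pipeline; the model names, image directory, and output file below are illustrative assumptions rather than the tool's actual configuration.

```python
# Hypothetical sketch: batch-annotate a folder of images with several
# open-source VLMs via the Hugging Face "image-to-text" pipeline.
import json
from pathlib import Path
from transformers import pipeline

# Assumed model identifiers and paths -- placeholders, not the tool's real config.
MODELS = ["Salesforce/blip-image-captioning-base", "microsoft/git-base-coco"]
IMAGE_DIR = Path("nsd_shared1000")
OUTPUT = Path("annotations.json")

annotations = {}
for model_name in MODELS:
    captioner = pipeline("image-to-text", model=model_name)
    for image_path in sorted(IMAGE_DIR.glob("*.png")):
        caption = captioner(str(image_path))[0]["generated_text"]
        annotations.setdefault(image_path.name, {})[model_name] = caption

OUTPUT.write_text(json.dumps(annotations, indent=2))
```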

The overall design of the Lab Streaming Layer (LSL) for synchronized data recording.

The Lab Streaming Layer for Synchronized Multimodal Recording

The Lab Streaming Layer (LSL) is a software-based solution for synchronizing data streams across multiple instruments in neurophysiological research. Using per-sample time stamps and LAN-based time synchronization, LSL ensures accurate, continuous recording despite varying device clocks, automatically correcting for network delay and jitter and maintaining data integrity through disruptions. Supporting over 150 device classes and offering bindings for numerous programming languages, LSL has become a vital tool for integrating diverse data acquisition systems. Its robustness and adaptability have extended its use beyond research into art, performance, and commercial settings, making it a cornerstone of multimodal data collection and synchronization.
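
As a rough illustration of this model, the pylsl sketch below declares a stream outlet, resolves it from an inlet, and applies the estimated clock offset to a pulled sample. The stream name, rate, and channel count are arbitrary example values, and in practice the outlet and inlet would normally run in separate processes or on separate machines.

```python
# Minimal pylsl sketch: publish a stream and pull a time-corrected sample.
# Stream name, rate, and channel count are placeholder example values.
from pylsl import StreamInfo, StreamOutlet, StreamInlet, resolve_byprop

# Outlet side: declare a 2-channel stream at a nominal 100 Hz.
info = StreamInfo(name="ToyEEG", type="EEG", channel_count=2,
                  nominal_srate=100, channel_format="float32",
                  source_id="toy-device-001")
outlet = StreamOutlet(info)

# Inlet side (normally a separate process or machine): resolve the stream by type.
streams = resolve_byprop("type", "EEG", timeout=5)
inlet = StreamInlet(streams[0])
offset = inlet.time_correction()      # estimated clock offset to the sender

outlet.push_sample([0.1, 0.2])        # each sample is time-stamped on push
sample, timestamp = inlet.pull_sample(timeout=5)
print(sample, timestamp + offset)     # timestamp mapped into the receiver's clock
```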

Health-Specific Evaluation for AI Systems

Learn how to evaluate AI systems in healthcare using specialized metrics and frameworks that address clinical validity, FDA regulatory requirements, bias detection, safety assessment, and practical implementation strategies. This comprehensive guide provides insights into designing robust evaluation pipelines for health AI applications.
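
As a small illustration of the subgroup-level checks such a pipeline might include, the sketch below computes sensitivity and specificity per demographic group with scikit-learn; the labels, predictions, and group assignments are toy data, not drawn from the guide.

```python
# Illustrative subgroup check: sensitivity/specificity per demographic group.
# Labels, predictions, and groups are fabricated toy data.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 1])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = group == g
    tn, fp, fn, tp = confusion_matrix(y_true[mask], y_pred[mask], labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"group {g}: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```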

Statistical Analysis for Evaluation

Learn how to apply statistical methods for robust evaluation of models, including power analysis, mixed-effects models, bootstrap confidence intervals, multiple comparison corrections, and effect size calculations. This guide provides practical algorithms and Python code snippets to help researchers ensure their evaluations are statistically sound and meaningful.
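
As one concrete example, a percentile-bootstrap confidence interval for the difference in mean scores between two models takes only a few lines of NumPy; the per-item scores below are fabricated for illustration.

```python
# Percentile bootstrap CI for a difference in mean scores (toy data).
import numpy as np

rng = np.random.default_rng(0)
scores_a = rng.normal(0.78, 0.05, size=200)   # fabricated per-item scores, model A
scores_b = rng.normal(0.74, 0.05, size=200)   # fabricated per-item scores, model B

n_boot = 10_000
diffs = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, len(scores_a), size=len(scores_a))  # resample items with replacement
    diffs[i] = scores_a[idx].mean() - scores_b[idx].mean()    # paired resampling over items

low, high = np.percentile(diffs, [2.5, 97.5])
print(f"mean difference = {scores_a.mean() - scores_b.mean():.3f}, 95% CI [{low:.3f}, {high:.3f}]")
```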

LLM Evaluation Methods

Learn about various methods for evaluating large language models (LLMs), including automatic metrics like BLEU and ROUGE, the LLM-as-judge paradigm, human-in-the-loop strategies, and specialized approaches for health-related applications. This comprehensive guide also covers best practices for benchmark design, red teaming, and scaling evaluations.
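
For a concrete sense of the automatic metrics, here is a small sketch computing BLEU and ROUGE with the nltk and rouge-score packages on an invented reference/candidate pair.

```python
# Toy reference/candidate pair scored with BLEU (nltk) and ROUGE (rouge-score).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference = "the patient should take the medication twice daily with food"
candidate = "take the medication two times a day with food"

# BLEU expects tokenized text: a list of references and one hypothesis.
bleu = sentence_bleu([reference.split()], candidate.split(),
                     smoothing_function=SmoothingFunction().method1)

# ROUGE works on raw strings.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, candidate)

print(f"BLEU = {bleu:.3f}")
print(f"ROUGE-1 F1 = {rouge['rouge1'].fmeasure:.3f}, ROUGE-L F1 = {rouge['rougeL'].fmeasure:.3f}")
```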

Human Evaluation & Psychometrics for AI Systems

This post provides a detailed overview of human evaluation and psychometrics in the context of AI systems, covering key concepts, reliability metrics, scale design, and practical implementation strategies. It includes algorithms and code snippets to help practitioners design robust evaluation frameworks.
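
As one concrete example of a reliability metric, the sketch below computes Cohen's kappa (unweighted and quadratic-weighted) for two raters using scikit-learn; the ratings are fabricated.

```python
# Inter-rater agreement between two annotators on the same items (toy ratings).
from sklearn.metrics import cohen_kappa_score

rater_1 = [3, 4, 2, 5, 4, 3, 1, 2, 4, 5]
rater_2 = [3, 4, 3, 5, 4, 2, 1, 2, 5, 5]

kappa = cohen_kappa_score(rater_1, rater_2)                         # unweighted
kappa_w = cohen_kappa_score(rater_1, rater_2, weights="quadratic")  # suited to ordinal scales
print(f"kappa = {kappa:.2f}, quadratic-weighted kappa = {kappa_w:.2f}")
```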

NEMAR Dataset Citations Analysis Dashboard

This dashboard provides a comprehensive analysis of dataset citations within the NEMAR ecosystem, revealing collaboration patterns, research trends, and the impact of open neuroscience data sharing on the research community.

Creating Interactive Dashboards in Hugo: A Complete Guide

Learn how to transform your Hugo static site into an interactive dashboard powerhouse using Chart.js, structured JSON data, and modern web development practices.
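
Since the charts are driven by structured JSON, here is a hedged Python sketch of the data-preparation step: it writes a Chart.js-style dataset into Hugo's static/ directory, where a page script could fetch and render it. The file path and values are illustrative assumptions.

```python
# Illustrative data-prep step: write a Chart.js-style dataset as JSON
# that a Hugo page's JavaScript could fetch. Path and numbers are assumptions.
import json
from pathlib import Path

monthly_counts = {"2024-01": 12, "2024-02": 18, "2024-03": 25}  # fabricated example numbers

chart_data = {
    "labels": list(monthly_counts.keys()),
    "datasets": [{
        "label": "New datasets",
        "data": list(monthly_counts.values()),
    }],
}

out = Path("static/data/datasets_per_month.json")  # Hugo serves static/ at the site root
out.parent.mkdir(parents=True, exist_ok=True)
out.write_text(json.dumps(chart_data, indent=2))
```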

HBN Data Insights Dashboard

HBN Dataset Insights Dashboard

A data visualization dashboard for exploring the Healthy Brain Network (HBN) EEG dataset. It features age and sex distributions, task availability metrics, mental health correlations, and per-release analysis across 11 dataset releases with over 3,600 participants.
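
A hedged pandas sketch of the kind of per-release summary such a dashboard is built on; the participants file and column names are assumptions about the data layout, not the dataset's actual schema.

```python
# Illustrative per-release summary of a participants table.
# The file name and column names are assumptions about the dataset layout.
import pandas as pd

df = pd.read_csv("participants.tsv", sep="\t")  # assumed columns: release, age, sex

summary = (
    df.groupby("release")
      .agg(n_participants=("age", "size"),
           mean_age=("age", "mean"),
           pct_female=("sex", lambda s: (s == "F").mean() * 100))
      .round(1)
)
print(summary)
```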