Chinese Journal of Information Fusion, 2024, Volume 1, Issue 3: 183-211

Code (Data) Available | Free to Read | Review Article | 15 December 2024
1 School of Computer Science and Technology, East China Normal University, Shanghai 200062, China
* Corresponding Authors: Yuanbin Wu, [email protected]; Xiaoling Wang, [email protected]
Received: 31 July 2024, Accepted: 10 December 2024, Published: 15 December 2024  
Abstract
In recent years, the field of electroencephalography (EEG) analysis has witnessed remarkable advancements, driven by the integration of machine learning and artificial intelligence. This survey aims to encapsulate the latest developments, focusing on emerging methods and technologies that are poised to transform our comprehension and interpretation of brain activity. The paper is organized according to the categorization used within the machine learning community, with representation learning as the foundational concept encompassing both discriminative and generative approaches. We delve into self-supervised learning methods that yield robust representations of brain signals, representations that are fundamental to a variety of downstream applications. Within the realm of discriminative methods, we explore advanced techniques such as graph neural networks (GNN), foundation models, and approaches based on large language models (LLMs). On the generative front, we examine technologies that leverage EEG data to produce images or text, offering novel perspectives on the visualization and interpretation of brain activity. This survey provides an extensive overview of these cutting-edge techniques, their current applications, and the profound implications they hold for future research and clinical practice. The relevant literature and open-source materials have been compiled and are consistently updated at https://github.com/wpf535236337/LLMs4TS.
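
To make the self-supervised objectives surveyed above concrete, the sketch below shows a minimal SimCLR-style contrastive loss applied to two augmented views of a batch of EEG windows. It is an illustrative example under stated assumptions, not the article's method: it assumes PyTorch, and the encoder and augmentation functions mentioned in the comments are hypothetical placeholders for any EEG backbone and augmentation pipeline.

    import torch
    import torch.nn.functional as F

    def info_nce(z1, z2, temperature=0.1):
        # z1, z2: (batch, dim) embeddings of two augmented views of the same EEG windows.
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature                   # pairwise cosine similarities
        labels = torch.arange(z1.size(0), device=z1.device)  # matching views lie on the diagonal
        return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

    # Hypothetical usage: `encoder` maps raw EEG windows (batch, channels, time) to embeddings,
    # and `augment` applies perturbations such as channel dropout, jitter, or cropping:
    #   loss = info_nce(encoder(augment(x)), encoder(augment(x)))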

Keywords
electroencephalography (EEG)
self-supervised learning (SSL)
graph neural networks (GNN)
foundation models
large language models (LLMs)
generative models

Funding
This work was supported by the NSFC under grants 62136002 and 62477014, the Ministry of Education Research Joint Fund Project under grant 8091B042239, and the Shanghai Trusted Industry Internet Software Collaborative Innovation Center.

Cite This Article
APA Style
Wang, P., Zheng, H., Dai, S., Wang, Y., Gu, X., Wu, Y., & Wang, X. (2024). A Comprehensive Survey on Emerging Techniques and Technologies in Spatio-Temporal EEG Data Analysis. Chinese Journal of Information Fusion, 1(3), 183–211. https://doi.org/10.62762/CJIF.2024.876830

Publisher's Note
IECE stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions
IECE or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Chinese Journal of Information Fusion

ISSN: 2998-3371 (Online) | ISSN: 2998-3363 (Print)

Email: [email protected]

Portico

All published articles are preserved here permanently:
https://www.portico.org/publishers/iece/

Copyright © 2024 Institute of Emerging and Computer Engineers Inc.