NIRS data analysis (GLM and visualization)

3 min read

[last updated: 2019/11/20]

Also check out NIRS data analysis (time series)

Environment requirement

  1. MATLAB
  2. SPM 5 or 8
  3. xjView 8
    xjview can be downloaded for free from https://www.alivelearn.net/xjview/
    (If you are inside CIBSR, xjview is located in /fs/fmrihome/fMRItools/Xjview)
    Add xjview to path by addpath(genpath('/fs/fmrihome/fMRItools/Xjview'))
  4. The nirs2img function used in this article can be obtained from https://www.alivelearn.net/?p=2230 [Link updated on 2019/10/02] or https://www.alivelearn.net/?p=1574. It is located in the nirs folder.
  5. NFRI toolbox (for standard brain registration)
    Download from https://alivelearn.net/20180320_nfri_functions.zip [Link updated on 2021/07/15] and save it in a directory whose name contains no spaces (e.g. not in something like c:\program files\…).

Preparation

  1. Convert the NIRS data file to csv format using the ETG4000 program.
  2. Copy the 3D positioning data (00X.pos). If you didn’t measure 3D positioning data, jump to step 5.
  3. Use the NFRI toolbox (Pepe Dan, Japan; http://www.jichi.ac.jp/brainlab/tools.html) to get the MNI coordinates of each probe. Detailed information on how to use this toolbox can be found in its manual. [Update 2015-07-13. A video tutorial: https://www.alivelearn.net/?p=1726]
    1. Convert the 00?.pos file to a csv file using
      pos2csv
    2. Convert the 3D positioning data into MNI space coordinates using
      nfri_mni_estimation
    3. You will get an xls file containing the positions. There are several sheets in that file; use the sheet called “WShatC”, which contains the positions on the cortical surface.
  4. Find channel positions based on probe positions using probe2channel.m
    probe2channel(probe, config)
    Download probe2channel.m here
  5. If you don’t have 3D positioning data, you may use the template channel positions located in xjview/nirs_data_sample/templateMNI.mat:
    load xjview/nirs_data_sample/templateMNI.mat
    (Please note, this file was created by Xu based on a single subject’s data. It is useful for a quick data review, but it will be inaccurate for your own subjects, so you should create your own file for formal analysis. [update 2019-02-01])
    You will find 6 variables in the MATLAB workspace: channelMNI3x11, channelMNI3x5, channelMNI4x4, probeMNI3x11, probeMNI3x5, probeMNI4x4. They are all Nx3 matrices. A combined sketch of steps 3–5 appears after this list.
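
Here is a minimal sketch tying steps 3–5 together, assuming the NFRI toolbox and probe2channel.m are on the MATLAB path. The xls file name, the column layout of the “WShatC” sheet, and the config argument passed to probe2channel are assumptions to adjust for your own setup.

    % Sketch: from NFRI output to channel MNI coordinates.
    % Steps 3.1-3.2 (pos2csv, nfri_mni_estimation) are run in the NFRI toolbox
    % and produce an xls file; read the probe positions from its "WShatC" sheet.
    probe = xlsread('subject001_nfri.xls', 'WShatC');   % hypothetical file name
    probe = probe(:, 1:3);                              % keep the [x y z] columns (sheet layout may differ)

    % Step 4: derive channel positions from probe positions for your montage
    channel = probe2channel(probe, '4x4');              % config argument is an assumption; see help probe2channel

    % Step 5 (no 3D positioning data): fall back to the template channel positions
    load('xjview/nirs_data_sample/templateMNI.mat');    % provides channelMNI4x4, channelMNI3x5, channelMNI3x11, ...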

Read data and do GLM

  1. Use readHitachData.m to read the data file (csv format). Type help readHitachData to see how to use it. Note that if your input is two files (for the 4×4 and 3×5 configurations), this script will automatically concatenate the data.
    [hbo,hbr,mark] = readHitachData({'XC_tap_MES_Probe1.csv','XC_tap_MES_Probe2.csv'});
  2. Prepare the event onset timings, durations, etc. from the mark data, or from external data you have, for the GLM analysis in step 3. The format is:
    • onset: onset timing of every event. A cell array; each element is a numeric vector for one event type. Unit: seconds.
    • duration: duration of every event. Same format as onset, except the numbers represent durations. If the event is a punctate event, use 0 as the duration. Unit: seconds.
    • modulation (optional): modulation of each event. Events of the same type may have different intensities; for example, your event might be a flash with 5 levels of intensity. You can use modulation to encode the intensity. The format is exactly the same as onset. (A combined sketch of steps 1–5 follows this list.)
  3. Run the GLM analysis using glm. Type help glm for more info.
    [beta, T, pvalue] = glm(hbdata, onset, duration, modulation);
  4. You may want to save the data for future use.
  5. (If you want to view the result on a standard brain) Convert the values (T, beta, or contrasts) to an image file with nirs2img. Try help nirs2img for more information.
    nirs2img(imgFileName, mni, value, doInterp, doXjview)
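
The sketch below strings steps 1–5 together. The csv file names, the event timings, and the choice of channel positions are placeholders, and the column layout of the glm outputs is assumed; adapt them to your own data.

    % Sketch: read data, build the design, run the GLM, export an image
    [hbo, hbr, mark] = readHitachData({'XC_tap_MES_Probe1.csv', 'XC_tap_MES_Probe2.csv'});
    % mark holds the stimulus marks recorded by the ETG4000; use it (or your own
    % logs) to build the onset/duration cell arrays. Values below are placeholders.
    onset    = {[30 90 150], [60 120 180]};   % one cell per event type, in seconds
    duration = {[20 20 20],  [20 20 20]};     % block durations; use 0 for punctate events
    % modulation is optional and has the same format as onset

    [beta, T, pvalue] = glm(hbo, onset, duration);
    save('subject001_glm.mat', 'beta', 'T', 'pvalue');   % keep results for group analysis

    % Optional: write a standard-brain image of the T values. mni must be an Nx3
    % matrix with one row per channel of hbo (from probe2channel or templateMNI.mat).
    load('xjview/nirs_data_sample/templateMNI.mat');
    mni = channelMNI4x4;                                 % placeholder; must match your channel count
    nirs2img('subject001_T.img', mni, T(:,1), 1, 1);     % T(:,1): first condition (assumed layout)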

Visualization

  1. plotTopoMap will plot data on a plane. The data can be T, beta, or other values. Type help plotTopoMap for more info. Here is an example (note that the data is spline-smoothed):
    plotTopoMap(randn(24,1), '4x4');
  2. nirs2img will convert your data to an image file which can be visualized by many fMRI functional image programs (such as xjview). Here is an example of visualizing the image with xjview. Note that after the xjview window launches, you need to check “render view”, and then you may choose between the new or old style.
    nirs2img('nirs_test.img', mni, value, 1, 1);
  3. You can also visualize the result with NFRI’s nfri_mni_plot (in the NFRI toolbox). You need to prepare the plot data in Excel format beforehand. More information can be found in Readme.doc in the NFRI toolbox. A short combined example of plotTopoMap and nirs2img follows this list.
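
For a quick end-to-end check, here is a short sketch combining plotTopoMap and nirs2img. The file names, the 24-channel 4×4 montage, and the column layout of T are assumptions.

    % Sketch: quick look at GLM results (assumes a 24-channel 4x4 montage)
    load('subject001_glm.mat');                        % hypothetical file with beta, T, pvalue from the GLM step
    load('xjview/nirs_data_sample/templateMNI.mat');   % template channel positions (or use your own)

    tvals = T(:,1);                                    % first condition (assumed column layout)
    plotTopoMap(tvals, '4x4');                         % spline-smoothed 2D topographic map
    nirs2img('subject001_T.img', channelMNI4x4, tvals, 1, 1);   % write the image and open it in xjview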

Group analysis

  1. For each individual subject, perform GLM and save the beta values for each condition and subject.
  2. Compute contrasts for each subject. Contrasts are simply differences of beta values; for example, the contrast between the 1st and 2nd conditions is simply c = beta(:,1) - beta(:,2); Then save each subject’s contrast to an image file using nirs2img. You end up with a set of contrast images, one per subject (see the sketch after this list).
  3. Perform a one-sample T test on the contrast images using onesampleT.m, or use SPM to do the one-sample T test if you prefer. You will get a T-test image file.
  4. Visualize the T-test image with xjview (or SPM).
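
Below is a minimal sketch of this group pipeline. The subject file names, the use of the template channel positions, and the exact onesampleT interface are assumptions; check help onesampleT before running it.

    % Sketch: contrast images per subject, then a one-sample T test
    load('xjview/nirs_data_sample/templateMNI.mat');       % or your own channel positions
    subjects = {'subject001', 'subject002', 'subject003'}; % placeholder subject IDs

    contrastFiles = cell(1, numel(subjects));
    for s = 1:numel(subjects)
        S = load([subjects{s} '_glm.mat'], 'beta');        % beta saved during each subject's GLM
        c = S.beta(:,1) - S.beta(:,2);                     % contrast: condition 1 minus condition 2, per channel
        contrastFiles{s} = [subjects{s} '_con.img'];
        nirs2img(contrastFiles{s}, channelMNI4x4, c, 1, 0); % write the contrast image; do not launch xjview
    end

    % One-sample T test across subjects (signature is an assumption; see help onesampleT)
    onesampleT(contrastFiles, 'group_T.img');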




39 Replies to “NIRS data analysis (GLM and visualization)”

  1. Dear Xu,

    I have been reading your writing on NIRS_SPM for a while. Thank you for contributing 🙂

    Just a short question: do you know if there is any help forum for NIRS-SPM? I have been trying to use the program but I encountered some annoying errors which I couldn’t solve.

    Hope you may point me to the right direction. Thanks in advance!

    – Lee from Singapore

  2. Hi Xu,
    I have been following your writings on NIRS_SPM. Your contributions have been a great help for me.
    I have been trying to learn NIRS_SPM, but it keeps giving me errors. Is there any other material that I can use to learn NIRS_SPM apart from its user manual, or is there any help forum for NIRS_SPM?
    Hope to get help from you.
    Thanks in advance.
    Sabin

  3. I haven’t been using NIRS_SPM for a while. I don’t know if there are any other resources, but I do know the authors are very helpful. You might want to contact them directly.

  4. Why can’t I find these three functions (glm.m, nirs2img.m, onesampleT.m) in the environment requirements you suggested?

  5. Hi Xu,
    I just downloaded the code described in this section and want to play around with an ETG dataset that I did not create position data for. I’m trying to find the template position files that are referenced in:

    “If you don’t have 3D positioning data, you may use the template channel positions located in
    load xjview/nirs_data_sample/templateMNI.mat”

    But I’m having trouble. I don’t have a folder called xjview, just the xjview.m. I can’t seem to find templateMNI.mat. This may be just an issue I’m having w/ matlab, as I haven’t worked with .mat files before. I’ve only worked with .m files. Can you be more specific about how to find these template datafiles? Thanks–I’m really excited to play around with the code you have posted here:)!!

  6. Hi Xu,
    Could you please also send me the copy of those three files? glm.m, nirs2img.m, and onesampleT.m. Thank you a lot!

  7. We used “readHitachData” to read files from two probes, but an error appeared:
    “Undefined function or variable “timeindex”.
    Error in loadHitachiText (line 42)
    data(:, timeindex) = timedata;
    Error in readHitachData (line 48)
    [data, variablename] = loadHitachiText(filenames{fileindex});

    Our data have a “Time” column. Could you tell me what the possible reasons are?

  8. Dear Prof Xu
    Hi

    Could you help me?
    I have some NIRS data. I want to read the data and process it with wavelets. Do you have any book or paper that would help me read the data into MATLAB and use wavelets?

    Thank you a lot!

  9. Dear Prof Xu,
    Hi,
    Thank you sooo much for sharing this Visualization tool of fNIRS.
    But I have a problem about the input parameter of nirs2img.
    In the 5th step of the “Read data and do GLM” process, “(if you want to view the result in a standard brain) Convert the values (T or beta or contrasts) to an image file by nirs2img(imgFileName, mni, value, doInterp, doXjview)”, I don’t know why beta or contrasts can work and give a P value in xjview, because NIRS_SPM just uses the T to get the P value. Could you please help me understand why beta and contrasts can work?

    Thank you so much!!

  10. @Yanchun Zheng
    Yanchun, nirs2img converts your data (whatever it is) to an image file which can be opened by xjview. It does not convert to a p-value. In xjview, you can view the image (beta, T, contrast, etc.). If it’s beta or contrast, then the p-value in xjview is meaningless.

  11. Hi, I am at the GLM stage and cannot continue because of this error that I am getting:

    [beta, T, pvalue, covb] = glm(hbo, {onset}, {duration});
    Undefined function ‘max’ for input arguments of type ‘cell’.

    Can you please help me?

    Thanks

  12. @Avi
    See the source code of glm around lines 84–86; this is where the function ‘max’ appears. You may set a breakpoint there to see what’s going on.

  13. Dear Prof. Cui
    I have a question concerning the following step:
    “GLM analysis using glm. Type help glm for more info. [beta, T, pvalue] = glm(hbdata, onset, duration, modulation)”. Shouldn’t we correct the serial correlation of the Hb data before we estimate the beta coefficients? Hb data usually shows strong serial correlation after low-pass filtering, also considering its relatively high sampling rate compared to fMRI.

  14. Hi, Cui Xu,
    For the nirs2img function, I have a question about “value” in [nirs2img(imgFileName, mni, value, doInterp, doXjview, bilateral)].
    The note says “value: Nx1 matrix, each row is the value corresponding to mni”, but it still confuses me a lot.

    Is the value related to the griddata function? I have tried it several times, but still fail…

    How should I define this parameter?

  15. Thanks, I got it!! I could also define these values using the t-value or F-value, right? Thanks again for your help.

  16. Hi There Prof. Cui,

    Can you please direct me to the probe2channel.m script? Is it in xjView? I cannot seem to find it.

    Regards,
    Hayden

  17. Hi Xu
    Could you please send me the script that includes the ‘plotTopoMap’ function? I can’t find this function in the tips. Thank you a lot!

  18. Hello Professor Cui, I used the complete link to xjview that you provided in the comments, intending to perform a GLM analysis. However, the downloaded files do not contain the ‘glm.m’ script. Could you please specify which script in your xjView is used for GLM analysis?

      1. Thank you very much! But I have checked the files in this download package, and there are no glm.m or plotTopoMap.m functions.

  19. Dear Professor Xu,

    In your group analysis, you used c = beta(:,1) - beta(:,2) to represent the difference in effects between two conditions. I would like to clarify whether you assumed there is only one channel, or if you compared the differences in all channels between different conditions to determine the effects of different conditions on the entire brain (covering areas where sources and detectors are placed). If I want to understand the differential effects of different conditions on each subject across every channel, should I modify the mentioned code to c = beta(i,1) - beta(i,2)? Furthermore, if I have M conditions and N channels, should each subject’s contrast result be N * (M!/(M!*(M-2)!))? Is this understanding correct?

    1. The contrast is for one channel of one subject; you calculate the contrast for each channel and subject, then do the group analysis.
