Computing and reporting the effect size

Example statistics cluster meg-language

It is good practice to compute and report the size of the effect that you are studying: see for example the 2013 Good practice for conducting and reporting MEG research guidelines, or the 2019 OHBM Committee on Best Practice in Data Analysis and Sharing (COBIDAS) recommendation Best Practices in Data Analysis and Sharing in Neuroimaging using MEEG.

The effect size is a way of quantifying the magnitude of an effect in your data. It can be quantified in different ways, e.g., as the µV difference in ERP amplitude on a specific channel at a specific latency following stimulus presentation, or as a standardized measure such as Cohen's d.

This specific example starts with a ROI that is based on visual inspection, i.e. picking the channel and time window with the largest effect. Note, however, that this is only for didactical reasons. In reality it would be inappropriate to test only the largest observed effect. Rather, in the absence of an a priori region and/or latency of interest, you should test all channels and time points and correct for multiple comparisons to ensure that you are controlling the false alarm rate.

If you are doing hypothesis-driven research, you should not guide your statistical analysis by a visual inspection of the data; you should state your hypothesis up front and avoid data dredging or p-hacking. On the other hand, if you are doing exploratory research, you should not compute p-values.
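To make the standardized effect size concrete, here is a minimal sketch of Cohen's d for a two-condition comparison of ERP amplitudes. The amplitude values are made up for illustration; the function itself is just the standard pooled-standard-deviation formula, not a FieldTrip routine.

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference using the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = statistics.fmean(group_a), statistics.fmean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    # pooled variance weights each sample variance by its degrees of freedom
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# hypothetical per-subject ERP amplitudes (in uV) for two conditions,
# taken from one channel at one latency
cond1 = [2.1, 2.5, 1.9, 2.8, 2.3, 2.6]
cond2 = [1.2, 1.6, 1.1, 1.8, 1.4, 1.5]
print(round(cohens_d(cond1, cond2), 2))
```

The same per-subject difference could equally be reported in raw µV; Cohen's d simply rescales it by the pooled variability so that effects are comparable across studies.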
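As an illustration of the principle of multiple-comparison correction when testing all channels and time points, here is a Benjamini-Hochberg false-discovery-rate sketch in plain Python. Note that this is a generic procedure, not the cluster-based permutation correction that is typically used for MEG/EEG data; the p-values are invented for the example.

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return a per-test rejection decision controlling the FDR at alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # find the largest rank k with p_(k) <= (k / m) * alpha
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k_max = rank
    # reject all hypotheses up to and including that rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

# hypothetical uncorrected p-values, one per channel-time pair
pvals = [0.001, 0.008, 0.039, 0.041, 0.2, 0.6]
print(benjamini_hochberg(pvals))
```

With these inputs only the two smallest p-values survive correction, whereas an uncorrected threshold of 0.05 would also have declared the 0.039 and 0.041 tests significant.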