From the perspective of text features, the code above may seem simple to understand, but from the perspective of a human programmer it can be hard to follow. Measurements based purely on text features are therefore unreliable in some situations. To address this problem, \cite{Buse_2010} uses human annotators to obtain a readability measurement that reflects human perception. However, the model operates as a binary classifier: although it succeeded in classifying snippets as ``readable'' or ``not readable'' in more than 80\% of the cases, it would be more helpful if the model could also point out which problems exist in a snippet and help developers improve their code's readability.
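To make the approach concrete, the following is a minimal sketch (in Python with scikit-learn) of such an annotator-driven binary readability classifier. The surface features and toy snippets here are hypothetical stand-ins for illustration only, not the feature set of \cite{Buse_2010}.

\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def text_features(snippet):
    # Hypothetical surface features: average/maximum line length,
    # average indentation, and number of comment lines.
    lines = snippet.splitlines() or [""]
    lengths = [len(l) for l in lines]
    indents = [len(l) - len(l.lstrip()) for l in lines]
    comments = sum(l.lstrip().startswith(("//", "#")) for l in lines)
    return [np.mean(lengths), max(lengths), np.mean(indents), comments]

# Toy data: snippets paired with binary annotator labels
# (1 = rated "readable", 0 = rated "not readable").
snippets = [
    "int add(int a, int b) {\n    // sum two values\n"
    "    return a + b;\n}",
    "int f(int a,int b){int c=a;while(b){c+=1;b-=1;}return c;}",
]
labels = [1, 0]

X = np.array([text_features(s) for s in snippets])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))  # predicted readability labels
\end{verbatim}

In practice, such a classifier would be trained on a large set of annotated snippets and evaluated with cross-validation; the point is that its output is a single label, with no explanation of what makes a snippet hard to read.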
fMRI is a relatively new technique in software engineering research, and it has great potential for evaluating new tools, software, and similar artifacts. fMRI is closely related to EEG (electroencephalography): EEG has a higher temporal resolution, while fMRI has a higher spatial resolution. Combining EEG and fMRI can therefore provide complementary spatio-temporal information for studying brain activity. With this combination, we can evaluate aspects that are hard to measure with surveys alone, such as task difficulty \cite{Fritz_2014}.
To help developers better understand and review code, \cite{Barnett_2015} introduced ClusterChange, a tool that automatically decomposes changesets into related partial changes. Inspired by \cite{Siegmund_2017}, we could use fMRI to evaluate whether such tools (e.g., CodeBubbles, ClusterChange) reduce activation intensity during program comprehension.
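As a toy illustration of the decomposition idea (not ClusterChange's actual algorithm), the sketch below groups diff hunks into independent partial changes by computing connected components over shared identifiers; the hunk contents are made up for the example.

\begin{verbatim}
import re
from itertools import combinations

# Hypothetical diff hunks from a single changeset.
hunks = {
    "h1": "- logger.warn(msg)\n+ log.warning(msg)",
    "h2": "- logger = get_logger()\n+ log = get_logger()",
    "h3": "+ def fetch(url, timeout=30):",
}

def idents(text):
    # Extract identifier-like tokens from a hunk.
    return set(re.findall(r"[A-Za-z_]\w*", text))

# Union-find: merge hunks that share at least one identifier.
parent = {h: h for h in hunks}
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

for a, b in combinations(hunks, 2):
    if idents(hunks[a]) & idents(hunks[b]):
        parent[find(a)] = find(b)

groups = {}
for h in hunks:
    groups.setdefault(find(h), []).append(h)
print(list(groups.values()))  # [['h1', 'h2'], ['h3']]
\end{verbatim}

Presenting each group as a separate, self-contained partial change is the kind of review support whose effect on comprehension could then be evaluated with fMRI.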