In many human brain network studies, we do not have a sufficient number (n) of images relative to the number (p) of voxels, owing to the prohibitive cost of scanning enough subjects. Brain network models therefore usually suffer from the small-n large-p problem. This problem is often remedied by sparse network models, which are typically obtained numerically by optimizing L1 penalties. Unfortunately, the computational bottleneck of L1 optimization makes it impractical to apply such methods to learning large-scale brain networks. In this paper, we introduce a new sparse network model based on cross-correlations that bypasses this computational bottleneck. Our model can build sparse brain networks at the voxel level with p > 25000. Instead of using a single sparsity parameter, which may not be optimal across studies and datasets, we propose to analyze the collection of networks over every possible sparsity parameter in a coherent mathematical framework using graph filtrations. The method is subsequently applied to determine, for the first time, the extent of genetic effects on functional brain networks at the voxel level using twin fMRI.
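The graph-filtration idea described above can be illustrated with a minimal sketch: threshold a correlation matrix at every observed edge weight and track a simple topological summary (here, the number of connected components) across the whole range of sparsity parameters, rather than committing to one threshold. This is an illustrative toy in Python with NumPy/SciPy, not the authors' implementation; the function name `betti0_filtration` and the choice of connected-component counts as the summary are assumptions for the example, and the brute-force loop is O(p^2) per threshold, far from the paper's large-p scale.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def betti0_filtration(X):
    """Toy graph filtration over a correlation network.

    X : (p, n) array of p node time series of length n.
    Returns a list of (threshold, n_components) pairs, one per
    distinct edge weight, with thresholds in decreasing order.
    """
    C = np.corrcoef(X)                      # p x p correlation matrix
    p = C.shape[0]
    iu = np.triu_indices(p, k=1)
    thresholds = np.sort(np.unique(C[iu]))[::-1]  # every sparsity level

    curve = []
    for t in thresholds:
        A = (C >= t).astype(int)            # keep edges at or above t
        np.fill_diagonal(A, 0)              # no self-loops
        n_comp, _ = connected_components(csr_matrix(A), directed=False)
        curve.append((t, n_comp))
    return curve

# Usage: 10 nodes, 50 time points of synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 50))
curve = betti0_filtration(X)
# As the threshold decreases, edges accumulate and components merge,
# so the component count is monotonically non-increasing.
```

Because component counts change monotonically along the filtration, the whole curve (rather than the network at one arbitrary threshold) can serve as the network descriptor, which is the motivation for analyzing all sparsity parameters jointly.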
from cs.AI updates on arXiv.org http://ift.tt/1Fhseht
via IFTTT