To manage large volumes of viewed videos, an automatic method for detecting highly attended video segments during viewing is needed so that those segments can be accessed for fine-grained sharing and rewatching. Most studies of user state analysis have addressed attention recognition under a specific task condition, such as drowsiness during driving, attention during learning, and mental fatigue during task execution. Unlike attention under a specific task, attention during video viewing is not clearly distinguishable from inattention, and the goal is to detect a few highly attended segments rather than to classify attention versus inattention: normal attention is of little use to viewers for managing viewed videos, whereas high attention paid to a video segment indicates that the segment is worthwhile. We therefore propose a method that detects the top 20% of viewer attention independent of video genre. A subject-independent attention detection model using interval EEG features from 14 channels achieved the best average F1 score of 39.79% with an average accuracy of 52.96%. In addition, we propose a channel-based feature selection method that considers both the performance of single-channel models and their physical locations in order to investigate which groups of channels are relevant to attention detection. Attention detection models using interval EEG features from all four, or a subset of, channels located in the fronto-central, parietal, temporal, and occipital lobes of the left hemisphere achieved the best F1 score of 39.60% with an average accuracy of 48.70%. These models outperformed both models using features from all or some of the symmetric channels in the right hemisphere and models using features from six channels located in the anterior-frontal and frontal lobes of the two hemispheres.
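The channel-based selection idea described above can be sketched in code. This is a minimal, hypothetical illustration only: it assumes channels are ranked by their single-channel model F1 scores and then grouped by hemisphere and lobe. The channel names follow a common 14-channel EEG montage, and the location map, scores, and function are illustrative assumptions, not the paper's actual procedure or results.

```python
# Hypothetical sketch of channel-based feature selection: rank channels by
# single-channel model F1, then restrict to a spatial group (hemisphere).
# Channel names/locations follow a common 14-channel montage; the scores
# below are made up for illustration and are NOT the paper's results.

CHANNEL_LOCATIONS = {
    "FC5": ("left", "fronto-central"),
    "P7":  ("left", "parietal"),
    "T7":  ("left", "temporal"),
    "O1":  ("left", "occipital"),
    "FC6": ("right", "fronto-central"),
    "P8":  ("right", "parietal"),
    "T8":  ("right", "temporal"),
    "O2":  ("right", "occipital"),
}

def select_channels(single_channel_f1, hemisphere, top_k=4):
    """Pick the top_k channels in one hemisphere by single-channel F1."""
    candidates = [ch for ch, (hemi, _) in CHANNEL_LOCATIONS.items()
                  if hemi == hemisphere and ch in single_channel_f1]
    candidates.sort(key=lambda ch: single_channel_f1[ch], reverse=True)
    return candidates[:top_k]

# Illustrative single-channel F1 scores (invented for the example).
scores = {"FC5": 0.38, "P7": 0.37, "T7": 0.35, "O1": 0.34,
          "FC6": 0.31, "P8": 0.30, "T8": 0.29, "O2": 0.28}

print(select_channels(scores, "left", top_k=2))
```

A multi-channel attention detection model would then be trained on the interval EEG features of the selected subset and compared against its mirror-image subset in the opposite hemisphere, as the abstract describes.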