Over the last few decades, many papers on video categorization have been published. Despite the rich information in videos, previous algorithms for video categorization rely mainly on fusing multiple visual features that capture static and motion information. In other words, previous models do not utilize audio information. In this paper, we propose a video categorization framework that utilizes both visual and auditory information from a given video, and we investigate different types of deep features. The framework consists of a feature extractor for each modality and a fusion step that generates an audiovisual feature. For the visual feature, we fine-tuned AlexNet to obtain more discriminative features and measured its performance. Two methods for capturing audio information from videos are used and evaluated: a 1D-CNN and a bag-of-words representation. The highest mean average precision is achieved by audiovisual features that combine the fine-tuned AlexNet with a bag-of-words representation of MFCCs. These results show that audiovisual features help categorize videos without any degradation in performance.
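The fusion step described above can be sketched as feature-level concatenation of the two modality descriptors. The sketch below is a minimal illustration, not the paper's exact pipeline: the 4096-dimensional visual vector (the size of AlexNet's fc7 layer) and the 200-word MFCC bag-of-words histogram are assumed dimensions, and the per-modality L2 normalization is a common convention, not taken from the text.

```python
import numpy as np

def l2_normalize(v, eps=1e-8):
    # Scale a feature vector to unit L2 norm to balance modalities.
    return v / (np.linalg.norm(v) + eps)

def fuse_audiovisual(visual_feat, audio_feat):
    # Feature-level fusion: normalize each modality, then
    # concatenate into a single audiovisual descriptor.
    return np.concatenate([l2_normalize(visual_feat),
                           l2_normalize(audio_feat)])

# Toy example with assumed sizes: a 4096-d CNN feature (AlexNet fc7)
# and a 200-codeword MFCC bag-of-words histogram.
rng = np.random.default_rng(0)
visual = rng.random(4096)
audio = rng.random(200)
fused = fuse_audiovisual(visual, audio)
print(fused.shape)  # (4296,)
```

The fused vector can then be fed to any standard classifier; concatenation keeps each modality's contribution separable, which is why it is a common baseline for audiovisual fusion.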