The Medical Out-of-Distribution Analysis Challenge (MOOD) aims to build an anomaly detection algorithm given only normal CT images of the brain and abdomen. It comprises two tasks: a sample-level task that detects an abnormal image (i.e., an Out-of-Distribution (OOD) image), and a pixel- (voxel-) level task that localizes the abnormal region. Since the two tasks are dependent, in that a local abnormality in an image causes the image-level anomaly, we design the model to perform both tasks at once. We use U-Net [2] as the base network, which receives 3D patches as inputs. The pixel-level task is performed through the decoder of the U-Net, and the sample-level task is performed by attaching a classification module to the bottom (bottleneck) of the network.
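The dual-headed design described above can be sketched as follows. This is a minimal, illustrative PyTorch example (class name, layer widths, and the single encoder/decoder level are assumptions, not the authors' actual implementation): the decoder emits a voxel-level anomaly map, while a small classification head attached to the bottleneck emits a sample-level OOD score.

```python
import torch
import torch.nn as nn

class DualHeadUNet3D(nn.Module):
    """Illustrative sketch of a 3D U-Net with two heads: a decoder for the
    voxel-level task and a classification module at the bottleneck for the
    sample-level task. Hypothetical names and sizes, not the challenge code."""

    def __init__(self, in_ch: int = 1, base: int = 8):
        super().__init__()
        # Encoder: one convolutional level (a real U-Net would stack several)
        self.enc = nn.Sequential(
            nn.Conv3d(in_ch, base, 3, padding=1), nn.ReLU(),
        )
        # Downsampling to the bottleneck ("bottom" of the U)
        self.down = nn.Sequential(
            nn.MaxPool3d(2),
            nn.Conv3d(base, base * 2, 3, padding=1), nn.ReLU(),
        )
        # Sample-level head attached to the bottleneck features
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(base * 2, 1),
        )
        # Decoder: upsample and fuse with the skip connection
        self.up = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec = nn.Sequential(
            nn.Conv3d(base * 2, base, 3, padding=1), nn.ReLU(),
            nn.Conv3d(base, 1, 1),  # per-voxel anomaly logit
        )

    def forward(self, x):
        e = self.enc(x)
        b = self.down(e)
        cls_logit = self.cls_head(b)                     # sample-level score
        d = self.up(b)
        seg_logit = self.dec(torch.cat([d, e], dim=1))   # voxel-level map
        return seg_logit, cls_logit

patch = torch.randn(2, 1, 32, 32, 32)  # a batch of 3D input patches
model = DualHeadUNet3D()
seg, cls = model(patch)
# seg has the same spatial size as the input; cls is one score per sample
```

Both outputs are produced in a single forward pass, so the two losses (voxel-level and sample-level) can be optimized jointly, which is what lets the model exploit the dependency between the two tasks.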