With the increasing prevalence of foundation models, the need for a precise understanding of pre-trained neural networks and their predictions has become paramount. This dissertation aims to improve post-hoc analysis methods for pre-trained neural networks. Among the various post-hoc analysis methods, it identifies and addresses limitations of the flat minima hypothesis, a parameter-based post-hoc analysis, and of the influence function, a sample-based post-hoc analysis. First, this dissertation demonstrates that the Laplace Approximation in Bayesian deep learning is not invariant under reparameterization, a weakness it shares with the flat minima hypothesis; we then resolve this problem through a new concept called connectivity. Second, this dissertation shows that the additivity of the Graph Influence Function breaks down for neighboring edges; based on this observation, we propose an efficient algorithm that removes noisy edges that degrade the generalization of graph neural networks. Lastly, this dissertation shows that the bilinearity of the influence function induces a unimodal distributional bias in self-influence, and we propose a novel non-linear influence approximation based on Geometric Ensemble to mitigate this bias.
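For context, a minimal sketch of the first-order influence function in the sense of Koh and Liang (2017), whose bilinear form underlies the third contribution; the notation below is ours and need not match the dissertation's:

\[
\mathcal{I}(z, z') \;=\; -\,\nabla_\theta L(z', \hat\theta)^{\top} \, H_{\hat\theta}^{-1} \, \nabla_\theta L(z, \hat\theta),
\qquad
H_{\hat\theta} \;=\; \frac{1}{n} \sum_{i=1}^{n} \nabla^2_\theta L(z_i, \hat\theta),
\]

where \(\hat\theta\) is the trained parameter and \(L\) the per-sample loss. Self-influence is the special case \(z' = z\), a quadratic form in a single sample's gradient; since the expression is bilinear in its two gradient arguments, when \(H_{\hat\theta}\) is positive definite the self-influence cannot change sign, which is one way such a form can skew the resulting distribution.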