In this talk, we introduce a unified convergence analysis of a second-order method of multipliers (i.e., a second-order augmented Lagrangian method) for solving conventional nonlinear conic optimization problems. Specifically, the algorithm we investigate incorporates a specially designed nonsmooth (generalized) Newton step to furnish a second-order update of the multipliers in the augmented Lagrangian method. We show, in a unified fashion, that under a few abstract assumptions the proposed method is locally convergent and possesses a (nonasymptotic) superlinear convergence rate, even if the penalty parameter is fixed and/or strict complementarity fails. Subsequently, we demonstrate that, in three typical scenarios, namely classical nonlinear programming, nonlinear second-order cone programming, and nonlinear semidefinite programming, these abstract assumptions are exactly the implications of the canonical sufficient conditions that have been used to establish the Q-linear convergence rate of the method of multipliers without assuming strict complementarity.
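To fix ideas, here is a minimal sketch of the contrast between the first-order and the second-order multiplier updates in the equality-constrained case; this simplified instance is illustrative only, and the notation ($f$, $h$, $\sigma$, $\theta_\sigma$, $W_k$) is ours rather than that of the talk. For the problem $\min_x f(x)$ subject to $h(x)=0$, the augmented Lagrangian with penalty parameter $\sigma>0$ is
\[
\mathcal{L}_\sigma(x,\lambda) \;=\; f(x) \;+\; \langle \lambda,\, h(x)\rangle \;+\; \frac{\sigma}{2}\,\|h(x)\|^2 .
\]
The classical method of multipliers computes $x^{k+1} \approx \operatorname*{arg\,min}_x \mathcal{L}_\sigma(x,\lambda^k)$ and then takes the first-order dual ascent step $\lambda^{k+1} = \lambda^k + \sigma\, h(x^{k+1})$. A second-order method of multipliers instead updates the multiplier via a Newton-type step on the dual function $\theta_\sigma(\lambda) := \min_x \mathcal{L}_\sigma(x,\lambda)$, schematically
\[
\lambda^{k+1} \;=\; \lambda^k \;-\; W_k^{-1}\, \nabla\theta_\sigma(\lambda^k),
\qquad W_k \in \partial^2 \theta_\sigma(\lambda^k),
\]
where $\nabla\theta_\sigma(\lambda^k) = h(x^{k+1})$ at an exact minimizer and $\partial^2\theta_\sigma$ denotes a generalized Hessian, since $\theta_\sigma$ is in general only once continuously differentiable. In the conic setting, the nonsmoothness arises from the projection onto the cone, which is why a nonsmooth (generalized) Newton step is required.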