In this paper I assess the ethical and epistemic utility of explainable AI algorithms. I first distinguish between different types of outputs that AI can have. The first class of outputs is verifiable, either by a third party or in virtue of contributing to a win in a game scenario; that is, there is a way to independently verify the outputs of the model. The second class of outputs is non-verifiable and includes outputs such as ideals (finding the best of something) and the outputs of generative AI. While there is some epistemic value gained by explaining the outputs of verifiable AI, there is no intersection between the explanations offered by xAI and ethically useful explanations. Therefore, if there is an ethical problem with the use of opaque AI systems, explainable AI will not be able to help solve it.