The rise of AI has led many regulators around the world to realize that they must intervene to safeguard their citizens from the potential harms of this technology. The “poster child” of these regulatory efforts is the EU’s AI Act, whose first draft was published in April 2021. The Act employs a risk-based approach, listing specific AI applications and ranking them according to their potential harm. The rapid adoption of ChatGPT signaled to legislators that generative AI should not be left outside the Act, and the resulting changes to the 2021 draft required intensive negotiations that concluded only in December 2023. A new risk thus arises: the regulation may become outdated even before the law is enacted, as newer forms of AI are likely to require additions and amendments to the AI Act. This is the challenge addressed in this article. Its aim is to propose a map of AI risks that future AI regulation should address, so that it can handle new risks, actual and potential alike. The proposed mapping is based on the postphenomenological relations originally formulated by Don Ihde and on their variants developed in the context of AI. These variants assume that the intentionality arrow, which usually points from the experiencing “I”, is likely to be reversed in the presence of AI. Each AI-oriented relation is examined through an ethical-political analysis in light of the works of Robert Rosenberger and Peter-Paul Verbeek. The result is a set of recommendations for future AI legislation.