Reliable short-term aircraft trajectory prediction is essential for safety and efficiency in Air Traffic Management (ATM). This work introduces a generative framework for probabilistic 4D trajectory forecasting based on Conditional Flow Matching (CFM), a recent deep generative modeling approach that combines stable likelihood-based training with efficient sampling. The model is trained on historical ADS–B data from the OpenSky Network to predict aircraft motion over a 60 s horizon, conditioned on the preceding 60 s of observations. The model generates ensembles of realistic future trajectories that capture the inherent uncertainty of aircraft motion and enable probabilistic assessment of potential conflicts. As an application, we estimate the probability of mid-air collision during a loss-of-separation event using Monte Carlo simulation over the generated trajectories, providing a quantitative risk measure. The results demonstrate that flow-based generative modeling offers a principled foundation for uncertainty-aware trajectory prediction and safety analysis in ATM.
Reliable short-term aircraft trajectory prediction is fundamental to safe and efficient Air Traffic Management (ATM). Operational safety nets such as TCAS II [Munoz et al. 2013] and Short-Term Conflict Alert (STCA) [2017] rely on linear extrapolations to generate collision alerts. While simple and robust, such deterministic approaches cannot capture the uncertainty and variability inherent in real-world trajectories.
Effective short-term trajectory prediction (STTP) algorithms have immediate benefit to Air Navigation Service Providers (ANSPs) and regulators. A central task for ANSPs is assessing risk for numerous airspace occurrences, such as a loss of separation (LOS) or TCAS events, as well as thousands of conflicts detected by data mining all surveillance tracks. An essential component of this assessment is determining whether each detected conflict is real or a false positive, for instance in cases where aircraft were expected to turn as part of a published procedure before any potential collision.
As illustrated schematically in Figure 1, linear extrapolation can indicate a high-risk situation if no deviation occurs, yet it is often unclear whether the aircraft had intended to turn as part of its standard path. Historical trajectories can reveal whether an aircraft was following an established procedure or deviating from it, thus determining whether the risk was genuine or merely apparent. In practice, this distinction is rarely binary: large-scale surveillance data exhibit significant variability, and data-driven models are needed to capture the range of plausible futures consistent with observed intent.
In recent years, research has increasingly explored data-driven prediction methods to address these challenges. Liu and Hansen [Liu and Hansen 2018] proposed a deep generative convolutional recurrent network for multimodal trajectory prediction, while Krauth et al. [Krauth et al. 2021] introduced multivariate density models to synthesize realistic aircraft trajectories. Jarry et al. [Jarry et al. 2019] employed a Generative Adversarial Network (GAN) to learn the probability distributions of real aircraft approach paths, enabling the generation of realistic trajectories and the detection of atypical flight behaviors. Zeng et al. [Zeng et al. 2022] provide a comprehensive review of trajectory prediction techniques, emphasizing both progress and remaining challenges. Despite these advances, most models still predict a single deterministic trajectory, making uncertainty quantification difficult.
To address this limitation, Krauth et al. [Krauth et al. 2025] recently proposed a multi-objective CNN–LSTM architecture that predicts not only the expected trajectory but also spatio-temporal confidence areas, enabling the construction of 95% prediction intervals for each state component. These developments highlight a growing recognition that uncertainty-aware prediction is essential for robust ATM applications.
Building on these efforts, this paper introduces a generative framework for probabilistic short-term trajectory forecasting based on Conditional Flow Matching (CFM) [Lipman et al. 2023], which learns to transform random noise into plausible trajectories via ordinary differential equations (ODEs). Given one minute of observation, a Transformer-based conditional flow estimates the distribution of the trajectory for the next minute conditioned on observed inputs. The one-minute prediction horizon is chosen to align with the operational timescales of airborne safety nets such as TCAS, whose alerting logic typically operates within a 30–45 s look-ahead window [Munoz et al. 2013]. In practice, the proposed framework is not limited to this horizon and can be extended to longer prediction intervals as required by specific Air Traffic Management applications.
This formulation allows the generation of multiple plausible future trajectories that capture the stochastic nature of real aircraft motion, offering a data-driven means to assess both real and false conflicts within a probabilistic risk-assessment framework. We favor CFM over GANs [Goodfellow et al. 2020], VAEs [Kingma and Welling 2013], or standard diffusion models [Ho et al. 2020] because it affords explicit likelihood training and hence naturally calibrated uncertainties, uses a stable regression-based objective that avoids the adversarial instabilities of GANs and the heavy noise-schedule simulation burden of diffusion models, and enables faster inference via simpler flow paths with fewer integration steps [Tong et al. 2023].
Flow Matching (FM) is a framework for training generative models via continuous flows [Lipman et al. 2023; Liu et al. 2022]. The key idea is to describe the transformation from a simple base distribution (e.g., Gaussian noise) to a complex data distribution (e.g., aircraft trajectories) as the solution of an ordinary differential equation (ODE) driven by a time-dependent vector field $u_t$. A flow $\psi_t$, defined as the solution to this ODE, maps samples from the prior to the data space. Flow Matching provides a simulation-free method to learn this vector field by regressing it against a target vector field that generates a desired probability path connecting a prior distribution $p_0$ to a target data distribution $p_1$.
This section summarizes the Flow Matching and Conditional Flow Matching results introduced by Lipman et al. [Lipman et al. 2023] and further developed in subsequent lecture notes [Holderrieth and Erives 2025].
Let $p_0$ be a simple, tractable prior distribution (e.g., a standard normal $\mathcal{N}(0, I)$) and let $p_1$ be the target data distribution from which we can draw samples. We consider a probability path $(p_t)_{t \in [0,1]}$ such that $p_{t=0} = p_0$ and $p_{t=1}$ approximates $p_1$. This path is generated by an unknown target vector field $u_t$. The goal is to train a neural network $v_\theta(t, x)$ to approximate $u_t$.
The Flow Matching (FM) objective is a regression loss defined as:
$$\mathcal{L}_{\mathrm{FM}}(\theta) = \mathbb{E}_{t \sim \mathcal{U}[0,1],\, x \sim p_t} \left\| v_\theta(t, x) - u_t(x) \right\|^2. \label{eq:fm-loss}$$
Here $\|\cdot\|$ denotes the Euclidean ($\ell_2$) norm. Minimizing this objective forces the learned vector field $v_\theta$ to match the target field $u_t$. At inference, we can generate new samples by solving the initial value problem $\dot{x}(t) = v_\theta(t, x(t))$ from $t = 0$ to $t = 1$, with $x(0) \sim p_0$. However, this objective is intractable because both the marginal path $p_t$ and its vector field $u_t$ are generally unknown.
CFM reformulates the problem to be solvable in practice. The core idea is to construct the intractable marginal path $p_t(x)$ by marginalizing over a set of simpler, per-sample conditional probability paths $p_t(x \mid x_1)$:
$$p_t(x) = \int p_t(x \mid x_1)\, p_1(x_1)\, dx_1.$$
Each conditional path is designed to start from the prior at $t = 0$ (i.e., $p_0(x \mid x_1) = p_0(x)$) and end in a distribution concentrated around a specific data sample $x_1$ at $t = 1$. The corresponding marginal vector field $u_t(x)$ can also be expressed as an aggregation of the conditional vector fields $u_t(x \mid x_1)$.
The key insight of CFM is that the gradients of the intractable FM objective [eq:fm-loss] are identical to the gradients of a much simpler objective that uses the conditional paths directly. The CFM objective is:
$$\mathcal{L}_{\mathrm{CFM}}(\theta) = \mathbb{E}_{t,\, x_1 \sim p_1,\, x \sim p_t(\cdot \mid x_1)} \left\| v_\theta(t, x) - u_t(x \mid x_1) \right\|^2. \label{eq:cfm-loss}$$
This loss is tractable because sampling from $p_t(\cdot \mid x_1)$ and evaluating its vector field $u_t(x \mid x_1)$ can be done in closed form for well-chosen conditional paths.
A general and effective choice for the conditional paths are Gaussian paths of the form:
$$p_t(x \mid x_1) = \mathcal{N}\!\left(x;\, \mu_t(x_1),\, \sigma_t(x_1)^2 I\right),$$
where the time-dependent mean and standard deviation satisfy the boundary conditions $\mu_0(x_1) = 0$, $\sigma_0(x_1) = 1$ and $\mu_1(x_1) = x_1$, $\sigma_1(x_1) = \sigma_{\min}$, with $\sigma_{\min}$ being a small positive constant. This defines a smooth probability path from the base distribution at $t = 0$ (typically standard normal noise) to a distribution concentrated around a particular data example $x_1$ at $t = 1$. Intuitively, $\mu_t$ controls the drift toward $x_1$ while $\sigma_t$ controls how quickly uncertainty is removed along the path. In the remainder of this paper, we use the common simplifying choice $\sigma_{\min} = 0$ (deterministic endpoint at $t = 1$), which yields the linear interpolation in Eq. [eq:fm_target]. The vector field that generates this path is given by:
$$u_t(x \mid x_1) = \frac{\sigma_t'(x_1)}{\sigma_t(x_1)}\left(x - \mu_t(x_1)\right) + \mu_t'(x_1). \label{eq:cfm-vectorfield}$$
Here, primes denote derivatives with respect to the scalar flow-time $t$ (holding $x_1$ fixed).
A particularly powerful instance uses linear schedules for the mean and standard deviation, $\mu_t(x_1) = t\, x_1$ and $\sigma_t(x_1) = 1 - (1 - \sigma_{\min})\, t$, which corresponds to the Optimal Transport (OT) displacement interpolant between the Gaussians at $t = 0$ and $t = 1$. With this choice, the target vector field in Equation [eq:cfm-vectorfield] simplifies to:
$$u_t(x \mid x_1) = \frac{x_1 - (1 - \sigma_{\min})\, x}{1 - (1 - \sigma_{\min})\, t}.$$
This vector field has a direction that is constant over time, making it simpler for a neural network to learn. The resulting paths move along straight-line trajectories from noise to data, leading to more efficient training and sampling.
We train a vector-field network $v_\theta(t, x)$ (optionally with context $c$) that predicts the instantaneous velocity of a sample along a probability path from a simple prior to the data distribution. Conceptually, $v_\theta$ replaces the unknown target field $u_t$ and encodes “how to move” data at each time $t$.
Learning is pure regression: minimize the Conditional Flow Matching loss in [eq:cfm-loss], an MSE between the network output and a closed-form target field defined by the chosen conditional path (e.g., the Gaussian/OT path of [eq:cfm-vectorfield]). This objective provides unbiased gradients for the intractable FM loss and requires no likelihoods, scores, or simulation of trajectories during training.
At each iteration, draw a random time $t \sim \mathcal{U}[0, 1]$, a data example $x_1 \sim p_1$, and a synthetic point $x \sim p_t(\cdot \mid x_1)$ from the conditional path; compute the analytic target $u_t(x \mid x_1)$; and regress $v_\theta(t, x)$ toward it with MSE. Repeat over mini-batches with any standard optimizer.
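The per-iteration recipe above can be sketched in a few lines (a minimal numpy illustration of the OT path with $\sigma_{\min} = 0$; the helper name and array shapes are ours, not from the paper's code):

```python
import numpy as np

def cfm_training_pair(x1, rng):
    """Build one CFM regression pair per batch element for data samples x1.

    Uses the OT conditional path with sigma_min = 0:
        x_t = (1 - t) * x0 + t * x1,   target u = x1 - x0.
    """
    x0 = rng.standard_normal(x1.shape)       # draw from the standard-normal prior
    t = rng.uniform(size=(x1.shape[0], 1))   # one flow-time per batch element
    xt = (1.0 - t) * x0 + t * x1             # point on the conditional path
    target = x1 - x0                         # analytic target vector field
    return t, xt, target

rng = np.random.default_rng(0)
x1 = rng.standard_normal((4, 7))             # four samples of a 7-D state
t, xt, target = cfm_training_pair(x1, rng)
# training would minimize mean((v_theta(t, xt) - target) ** 2)
```

Because the target is available in closed form, each step is an ordinary supervised regression update, with no ODE simulation during training.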
After training, generate by integrating the learned ODE $\dot{x}(t) = v_\theta(t, x(t))$ from $t = 0$ to $t = 1$, starting at $x(0) \sim p_0$ (e.g., standard normal). Any standard ODE solver (Euler/Heun/RK) with a modest number of steps suffices; $x(1)$ is the synthesized sample.
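Sampling then reduces to standard numerical ODE integration. A forward-Euler sketch (with a toy constant vector field standing in for the trained network; all names are illustrative):

```python
import numpy as np

def sample_ode_euler(v_field, x0, n_steps=20):
    """Integrate dx/dt = v_field(t, x) from t = 0 to t = 1 with forward Euler."""
    x = x0.copy()
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        x = x + dt * v_field(t, x)   # one Euler step along the learned flow
    return x

# Toy check: for the OT path with fixed endpoints, the exact target field
# u(t, x) = x1 - x0 is constant, so Euler transports x0 exactly onto x1.
x1 = np.array([1.0, -2.0, 0.5])
x0 = np.zeros(3)
xT = sample_ode_euler(lambda t, x: x1 - x0, x0)
```

With straighter flow paths (as in the OT instantiation above), fewer integration steps are needed for a given accuracy, which is one of the practical advantages of CFM.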
We use one month of ADS–B surveillance data from the OpenSky Network [Schäfer et al. 2014], restricted to flights above FL195 within the Swiss Free Route Airspace (FRA) and collected with the traffic library [Olive 2019]. All trajectories are resampled at 1 Hz.
From ADS–B state vectors, we derive a consistent kinematic representation. Latitude and longitude are projected to the Swiss projected grid (CH1903+/LV95; EPSG:2056), yielding planar coordinates with Easting $x$ and Northing $y$; altitude $z$ is converted to meters. Groundspeed $v$ and track angle $\chi$ (clockwise from North) define the horizontal velocity components $v_x = v \sin\chi$ and $v_y = v \cos\chi$, while the vertical rate provides $v_z$ (ft/min converted to m/s). A turn-rate proxy $\dot{\chi}$ is computed from the unwrapped angular change between successive horizontal velocity vectors, divided by the sampling interval (1 s), and clipped to a symmetric bound to suppress outliers. Each trajectory point in the global frame is thus a 7-dimensional state encoding position and motion.
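The feature derivation can be sketched as follows (unit constants are standard; the clip bound and function names are illustrative assumptions, since the paper's exact clipping threshold is not restated here):

```python
import numpy as np

KT_TO_MS = 0.514444    # knots to m/s
FPM_TO_MS = 0.00508    # ft/min to m/s

def kinematic_features(gs_kt, track_deg, vrate_fpm):
    """Velocity components from ADS-B groundspeed, track, and vertical rate.

    Track is measured clockwise from North, so East (x) uses sin and
    North (y) uses cos.
    """
    gs = np.asarray(gs_kt) * KT_TO_MS
    trk = np.deg2rad(np.asarray(track_deg))
    vx = gs * np.sin(trk)                     # Easting component
    vy = gs * np.cos(trk)                     # Northing component
    vz = np.asarray(vrate_fpm) * FPM_TO_MS
    return vx, vy, vz

def turn_rate_proxy(vx, vy, clip=5.0):
    """Angular change (deg/s) between successive velocity vectors at 1 Hz,
    unwrapped then clipped; the clip value here is a placeholder."""
    ang = np.unwrap(np.arctan2(vy, vx))       # any consistent angle convention works
    rate = np.rad2deg(np.diff(ang))           # degrees per 1 s sample
    return np.clip(rate, -clip, clip)

vx, vy, vz = kinematic_features(100.0, 90.0, 600.0)   # 100 kt due East, climbing
rates = turn_rate_proxy(np.full(5, 50.0), np.zeros(5))  # straight flight -> zero rate
```

Unwrapping before differencing avoids spurious ±360° jumps when the track crosses North.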
Examples are constructed as sliding windows across flights. Each sample comprises 60 s of history sampled at 1 Hz and a 60 s prediction horizon; futures are down-sampled every 5 s, yielding 12 targets per window. Splits are performed at flight level to eliminate leakage between train, validation, and test sets. The training set contains input–output pairs, while the validation and test sets each contain samples.
To ensure exposure to maneuvering behavior, at least of the training and validation samples contain a turn, defined as consecutive steps with occurring in the history and/or future portion of the window; remaining samples are drawn uniformly to preserve overall traffic statistics. The test set is sampled uniformly without turn constraints.
To reduce variance and improve generalization, we transform each window into an aircraft-centric frame fixed by the last observed state. The last observed position defines the origin, and the last horizontal velocity vector defines the forward axis; the frame does not rotate over the prediction horizon. We denote aircraft-centric quantities with tildes. The per-timestep input is the 7-dimensional state $(\tilde{x}, \tilde{y}, z, \tilde{v}_x, \tilde{v}_y, v_z, \dot{\chi})$, where $(\tilde{x}, \tilde{y})$ and $(\tilde{v}_x, \tilde{v}_y)$ are positions and velocities expressed in this fixed local frame, while the turn rate is unchanged.
In addition, we provide an 8-dimensional context vector that captures the absolute reference state at the history endpoint, $c = (x, y, z, \cos\chi, \sin\chi, v, v_z, \dot{\chi})$, where $(x, y, z)$ are absolute LV95 coordinates (m), $(\cos\chi, \sin\chi)$ encode the track angle, $v$ is ground speed, $v_z$ is vertical speed, and $\dot{\chi}$ is the final turn rate. Thus, the model ingests (i) a 7-D aircraft-centric trajectory sequence capturing local dynamics and (ii) an 8-D global context anchoring the sequence in absolute space and orientation.
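A minimal 2D sketch of the aircraft-centric transformation (function and variable names are ours): rotate so that the last horizontal velocity points along the local forward axis, and translate the last position to the origin.

```python
import numpy as np

def to_aircraft_frame(xy, vxy):
    """Map a window of 2D positions/velocities into the frame fixed by the
    LAST observed state: origin at the last position, forward axis along
    the last horizontal velocity (track angle chi, clockwise from North)."""
    origin = xy[-1]
    chi = np.arctan2(vxy[-1, 0], vxy[-1, 1])   # track from (East, North) components
    c, s = np.cos(chi), np.sin(chi)
    R = np.array([[c, -s], [s, c]])            # rotates the last velocity onto +North
    return (xy - origin) @ R.T, vxy @ R.T

v = 100.0 / np.sqrt(2.0)                       # 100 m/s on a 45-degree track
xy = np.array([[0.0, 0.0], [10.0, 5.0], [20.0, 20.0]])
vxy = np.tile([v, v], (3, 1))
xy_loc, vxy_loc = to_aircraft_frame(xy, vxy)   # last point at origin, heading forward
```

Because the frame is fixed (it does not co-rotate during the forecast), the inverse mapping at sampling time is a single rotation plus translation per window.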
Both the sequence and the context are standardized using their own mean and variance, estimated on the training set and applied to all splits.
Our predictor is a Transformer encoder–decoder that learns to map an observed flight history to a distribution of future trajectories. The model operates on three types of input:
History sequence: the last s of aircraft motion (7 features per timestep).
Context vector: an 8-D descriptor of the aircraft’s absolute position and orientation at the end of the history.
Noisy future: a sequence of future states during training (interpolated between Gaussian noise and target trajectory); during inference, this starts as pure Gaussian noise and gets progressively denoised to generate predictions.
The 60 history steps (7 features each) are first linearly projected to 512 dimensions and enriched with positional encodings. The 8-D global context vector is mapped to 512 dimensions and prepended as an extra token at the front of the sequence, so that the Transformer can jointly attend to context and history (akin to a [CLS] token [Devlin et al. 2019]). This combined sequence of 61 tokens (each 512-D) is processed by six Transformer encoder layers, producing a latent representation of the past trajectory that serves as memory for the decoder.
The flow-matching process depends on a scalar time variable $t$, which indicates how far we are between pure noise ($t = 0$) and the true future ($t = 1$). To make this information usable by the Transformer, $t$ is first expanded into sinusoidal features using 64 frequencies (yielding 128 features: sine and cosine). These are passed through a small MLP: a fully connected layer maps the 128 inputs to 256 hidden units with SiLU activation, followed by a second fully connected layer mapping 256 to 512 units. The resulting 512-D time embedding is added to both the noisy future tokens and the encoded history, so the model always knows “when” in the flow it is operating.
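The sinusoidal expansion feeding the MLP can be sketched as follows (the log-spaced frequency schedule and its maximum value are a common choice and an assumption on our part):

```python
import numpy as np

def time_embedding(t, n_freqs=64):
    """Sinusoidal features of the scalar flow-time t: n_freqs sine plus
    n_freqs cosine values (128 features for 64 frequencies), as fed to the
    embedding MLP. Frequency spacing here is illustrative."""
    freqs = np.exp(np.linspace(0.0, np.log(1000.0), n_freqs))  # log-spaced
    angles = np.asarray(t)[..., None] * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

emb = time_embedding(0.5)   # 128-D feature vector for t = 0.5
```

Spanning several octaves of frequency lets the downstream MLP resolve both coarse and fine differences in flow time from a single scalar.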
The noisy future sequence is projected into the latent space and processed by an eight-layer Transformer decoder with self-attention across future tokens and cross-attention to the encoded history. Finally, a linear layer maps the decoder output back to 7 physical features per step, representing the predicted vector field $v_\theta$.
In essence, the encoder compresses the past 60 s of motion into a latent memory, the time embedding guides how noise is transformed along the flow, and the decoder denoises 12 future steps conditioned on both history and context. The complete architecture is illustrated in Figure 2.
Let the normalized future be $x_1 \in \mathbb{R}^{T \times 7}$ and partition the channel dimension as $x = (x^{\mathrm{pos}}, x^{\mathrm{vel}}, x^{\dot{\chi}})$ with shapes $T \times 3$, $T \times 3$, and $T \times 1$, respectively, where $T = 12$ denotes the number of time steps predicted.
We instantiate the OT-style Gaussian conditional path of Section 2.3 with $\sigma_{\min} = 0$. Sample $x_0 \sim \mathcal{N}(0, I)$ and $t \sim \mathcal{U}[0, 1]$, and form
$$x_t = (1 - t)\, x_0 + t\, x_1,$$
and define the conditional target vector field
$$u_t(x_t \mid x_1) = x_1 - x_0.$$
This corresponds to the OT-style Gaussian path with $\sigma_{\min} = 0$ (so $x_t$ is a convex combination of noise and data), for which $u_t$ is constant in $t$.
The network outputs $\hat{u}_\theta(t, x_t; h, c)$ and we minimize a weighted MSE expressed in target-space components:
$$\mathcal{L} = \lambda_{\mathrm{pos}}\, \mathrm{SSE}\!\left(\hat{u}^{\mathrm{pos}} - u^{\mathrm{pos}}\right) + \lambda_{\mathrm{vel}}\, \mathrm{SSE}\!\left(\hat{u}^{\mathrm{vel}} - u^{\mathrm{vel}}\right) + \lambda_{\dot{\chi}}\, \mathrm{SSE}\!\left(\hat{u}^{\dot{\chi}} - u^{\dot{\chi}}\right),$$
with weights $\lambda_{\mathrm{pos}}$, $\lambda_{\mathrm{vel}}$, and $\lambda_{\dot{\chi}}$ chosen empirically to balance the different scales and importance of position, velocity, and turn-rate errors during training. Here $\mathrm{SSE}(\cdot)$ denotes the sum of squared errors over tokens and channels.
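The weighted objective can be sketched as follows (the default weight values are placeholders, not the paper's tuned values; the channel split follows the 3/3/1 partition above):

```python
import numpy as np

def weighted_cfm_loss(pred, target, w_pos=1.0, w_vel=1.0, w_turn=1.0):
    """Weighted sum of squared errors over the 7 channels, split as
    positions (3), velocities (3), and turn rate (1).

    pred, target: arrays of shape (T, 7). Weight values are illustrative.
    """
    err2 = (pred - target) ** 2
    return (w_pos * err2[..., :3].sum()      # position channels
            + w_vel * err2[..., 3:6].sum()   # velocity channels
            + w_turn * err2[..., 6:].sum())  # turn-rate channel
```

Per-group weighting compensates for the different numerical scales of normalized position, velocity, and turn-rate targets.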
We train with AdamW, a warmup+cosine learning-rate schedule, and dropout, and maintain an exponential moving average (EMA) of the parameters for evaluation and checkpointing.
At test time, we sample from the learned flow by initializing $x(0) \sim \mathcal{N}(0, I)$ and integrating the ODE
$$\dot{x}(t) = \hat{u}_\theta(t, x(t); h, c)$$
from $t = 0$ to $t = 1$ using a predictor–corrector scheme (Heun’s method; an explicit trapezoidal / second-order Runge–Kutta integrator) with a fixed number of steps. This yields a 12-token aircraft-centric future. Samples are then denormalized and mapped back to the global frame by (i) inverse-rotating positions and velocities using the track angle $\chi$ from the context (fixed frame), and (ii) translating by the absolute reference position. Repeating the sampling procedure produces ensembles of plausible 60 s futures conditioned on the observed 60 s history.
We evaluate both point accuracy and probabilistic calibration of the proposed CFM forecaster on held-out test windows. Unless noted otherwise, we report results over $N = 512$ windows with $T = 12$ forecast steps, i.e., a 5 s stride over a 60 s horizon.
Given a history $h$, a context $c$, and the learned vector field $\hat{u}_\theta$, we draw $K$ forecast samples by integrating the ODE with distinct initial noises in the aircraft-centric normalized frame. Each trajectory is then denormalized and mapped back to the global LV95 frame using the inverse of the per-window normalization and the fixed-frame transformation. We denote the $k$-th sampled global position (Easting, Northing, altitude) at forecast step $j$ by $\hat{p}_k(t_j)$ and the ground truth by $p^{*}(t_j)$, for $j = 1, \dots, T$.
We evaluate geometric prediction error at each forecast horizon (in seconds) using two metrics: mean absolute error (MAE) and root mean square error (RMSE). We compare three predictors:
Model (mean): the ensemble mean $\bar{p}(t_j) = \frac{1}{K} \sum_{k=1}^{K} \hat{p}_k(t_j)$.
Model (best-of-$K$): a diagnostic lower bound selecting the sample closest to ground truth, $\hat{p}_{\mathrm{best}}(t_j) = \arg\min_{\hat{p}_k} \left\| \hat{p}_k(t_j) - p^{*}(t_j) \right\|$, which probes ensemble coverage.
Constant-velocity baseline: Linear extrapolation in global coordinates using the last observed ground and vertical speeds.
For any predictor $\hat{p}$, errors over the $N$ test windows are
$$\mathrm{MAE}(t_j) = \frac{1}{N} \sum_{i=1}^{N} \left\| \hat{p}^{(i)}(t_j) - p^{*(i)}(t_j) \right\|, \qquad \mathrm{RMSE}(t_j) = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left\| \hat{p}^{(i)}(t_j) - p^{*(i)}(t_j) \right\|^2}.$$
We report $\mathrm{MAE}(t_j)$ and $\mathrm{RMSE}(t_j)$ for all three predictors as functions of the horizon $t_j = 5j$ s, for $j = 1, \dots, 12$ (i.e., 5–60 s).
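These per-horizon metrics can be sketched as follows (array shapes and the per-step best-of-$K$ selection are our reading of the text; a per-trajectory selection would be equally plausible):

```python
import numpy as np

def mae_rmse(pred, truth):
    """Per-horizon MAE and RMSE of Euclidean position errors.

    pred, truth: arrays of shape (N, T, 3) over N windows and T steps.
    Returns two length-T arrays.
    """
    d = np.linalg.norm(pred - truth, axis=-1)        # (N, T) Euclidean errors
    return d.mean(axis=0), np.sqrt((d ** 2).mean(axis=0))

def best_of_k(samples, truth):
    """Distance of the closest ensemble member per (window, step).

    samples: (K, N, T, 3) ensemble; truth: (N, T, 3).
    A diagnostic lower bound probing ensemble coverage.
    """
    d = np.linalg.norm(samples - truth, axis=-1)     # (K, N, T)
    return d.min(axis=0)                             # (N, T)
```

Feeding `best_of_k` distances into the same aggregation as `mae_rmse` yields the best-of-$K$ curves reported alongside the ensemble mean.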
To assess whether the model’s forecast distribution matches empirical frequencies, we use a Probability Integral Transform (PIT) diagnostic. For each coordinate and each forecast step, we compute a sample-based PIT value from the empirical rank of the observation within the ensemble, normalized so that values remain strictly inside $(0, 1)$ even under ties in finite ensembles. For a calibrated univariate predictive distribution, the PIT values should be uniformly distributed on $[0, 1]$. We therefore aggregate over windows and forecast steps and report axis-wise histograms; deviations from uniformity indicate under-/over-dispersion or bias.
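One common rank-based PIT construction consistent with this description (the exact normalization and tie-handling used in the paper may differ) is:

```python
import numpy as np

def pit_rank(samples, obs, rng):
    """Sample-based PIT via the rank of the observation among K ensemble
    members. Ties are broken uniformly at random, and the (r + 1)/(K + 2)
    normalization keeps values strictly inside (0, 1). This is one common
    choice, shown for illustration."""
    below = int(np.sum(samples < obs))
    ties = int(np.sum(samples == obs))
    r = below + rng.integers(0, ties + 1)   # randomized rank within the tie group
    return (r + 1) / (len(samples) + 2)
```

Aggregating such values over windows and steps and histogramming them gives the calibration diagnostic: a flat histogram indicates calibration, a central peak indicates over-dispersion.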
To illustrate the operational use of the proposed approach, we analyze a real encounter extracted from ADS–B data (Figure 3). In this event, one aircraft was maintaining level flight at FL310 while the other was descending through FL310 on a converging path. The recorded data show that the minimum horizontal and vertical spacings fell below the prescribed separation minima (<5 nautical miles horizontal and <1,000 feet vertical), resulting in an actual LOS.
Based on the observed histories of both aircraft, we generated stochastic future trajectories for each one using the CFM model. Every combination of one sampled future from each aircraft was resampled from 0.2 Hz (5 s resolution) to 1 Hz by linear interpolation and then examined to determine whether standard separation minima were breached or a MAC occurred at any point within the prediction horizon.
Throughout the forecast, we monitored the horizontal and vertical spacing between the two aircraft. A situation was classified as a LOS when, at any moment, both the horizontal and vertical spacing simultaneously fell below the separation minima. To capture more critical encounters, we also defined a mid-air collision (MAC) proxy, corresponding to predicted cases where the aircraft approached closer than 0.03 nautical miles horizontally and 55 feet vertically.
By counting the proportion of trajectory pairs that met either of these criteria, we obtained straightforward Monte Carlo estimates of the probabilities of a future LOS or MAC within the prediction window.
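The Monte Carlo estimate amounts to counting threshold crossings over all sampled pairs; a sketch using the thresholds stated above (ensemble sizes and helper names are ours):

```python
import numpy as np

NM_TO_M = 1852.0
FT_TO_M = 0.3048

def conflict_probabilities(ens_a, ens_b,
                           los_h=5 * NM_TO_M, los_v=1000 * FT_TO_M,
                           mac_h=0.03 * NM_TO_M, mac_v=55 * FT_TO_M):
    """Monte Carlo LOS/MAC probabilities over all pairs of sampled futures.

    ens_a, ens_b: (K, T, 3) position ensembles (Easting, Northing, altitude; m).
    A pair counts as LOS (resp. MAC) if horizontal AND vertical spacing drop
    below the thresholds simultaneously at any time step.
    """
    diff = ens_a[:, None] - ens_b[None, :]             # (Ka, Kb, T, 3) pairwise
    dh = np.linalg.norm(diff[..., :2], axis=-1)        # horizontal distance
    dv = np.abs(diff[..., 2])                          # vertical distance
    los = ((dh < los_h) & (dv < los_v)).any(axis=-1)   # (Ka, Kb) per-pair flags
    mac = ((dh < mac_h) & (dv < mac_v)).any(axis=-1)
    return los.mean(), mac.mean()

ens_a = np.zeros((1, 4, 3))                            # A holds position at origin
near = np.tile([1000.0, 0.0, 0.0], (4, 1))             # 1 km abeam, same level -> LOS
far = np.tile([20000.0, 0.0, 0.0], (4, 1))             # ~10.8 NM away -> no conflict
ens_b = np.stack([near, far])
p_los, p_mac = conflict_probabilities(ens_a, ens_b)
```

The fraction of flagged pairs is the straightforward Monte Carlo estimator described above; confidence intervals (e.g., Clopper–Pearson) follow from the binomial counts.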
Figure 4 shows ensemble forecasts for three representative flights. Each panel displays the 60 s observed history (black), the 60 s ground-truth future (red), 128 sampled futures from the CFM model (blue), and the ensemble mean trajectory (yellow).
In the left panel, most samples follow a curved path while some continue straight. In the middle panel, all samples form a narrow bundle along the observed flight direction. In the right panel, the true continuation is straight, and several samples deviate slightly toward a right-hand branch. Across the three examples, the ensemble spread increases with prediction horizon, and the ensemble mean remains near the center of the sampled futures.
Figure 5 illustrates the temporal evolution of the learned conditional flow for one example case. The three panels correspond to integration times $t = 0$ (noise), an intermediate $t$ (intermediate state), and $t = 1$ (final prediction). Each map shows predicted positions (blue), sample vectors (orange), and the grid vector field (purple), together with the observed history (black) and ground-truth future (red).
The orange sample vectors visualize the model’s denoising dynamics in flow time: they are finite-difference displacements of the predicted future tokens between two consecutive ODE integration steps (i.e., in the horizontal plane), and should not be interpreted as physical aircraft velocities in trajectory time. The purple grid vector field is obtained by evaluating the learned vector field on a spatial grid for a single synthetic future token (position channels set from the grid point, remaining channels set to zero), yielding a qualitative 2D slice of the full vector field.
At $t = 0$, sample vectors are randomly oriented. At the intermediate time, the flow begins to align spatially along the future path. At $t = 1$, the trajectories form a coherent pattern that overlaps with the true continuation. The grid vector field exhibits smooth directional changes between neighboring locations.
Figure 6 presents mean absolute error (MAE) and root-mean-square error (RMSE) as a function of prediction horizon. Metrics are computed over 512 test windows (prediction problems) for 3D Euclidean, horizontal, and vertical components. Each plot compares three estimators: ensemble mean, best-of-$K$ sample, and constant-velocity extrapolation (CV).
For all spatial components, errors increase monotonically with horizon. 3D and horizontal errors show similar growth patterns, while vertical errors remain smaller in magnitude.
The CV baseline yields consistently larger errors for both MAE and RMSE. At the 60 s horizon, the CFM model achieves a 3D MAE of about 220 m, compared with about 320 m for the CV baseline. For RMSE, the CFM model reaches approximately 500 m, whereas the CV baseline remains just below 800 m.
The best-of-$K$ curve remains consistently below the other curves across all horizons, with both MAE and RMSE staying below 50 m throughout the prediction window.
Figure 7 shows the PIT histograms for the aircraft-centric longitudinal and lateral coordinates and for altitude . Both and exhibit a central peak around 0.5 with lighter tails, indicating over-dispersion in the horizontal plane; the effect is more pronounced for the lateral component . This behavior is expected at short horizons because lateral motion is driven more strongly by turning intent than longitudinal motion, making it harder to infer from recent history alone. The component is closer to uniform, suggesting better calibration in the vertical dimension.
Using the ADS–B histories of the two aircraft, we generated stochastic futures for each trajectory and evaluated all combinations of predicted paths. Among all paired samples, 83% resulted in a predicted LOS, while 0.36% met the stricter MAC threshold. These results indicate that the model assigns a realistic, non-negligible probability to a future LOS, consistent with the outcome observed in the actual flight data. We also report the corresponding 95% Clopper–Pearson confidence interval for the collision probability.
Given one minute of observed flight history, the proposed CFM model can generate multiple plausible trajectories for the following minute. Each sample represents a distinct but realistic continuation of the aircraft’s motion, allowing the forecast to capture both the expected evolution and the uncertainty surrounding it. This ensemble property distinguishes the approach from deterministic predictors: instead of committing to a single extrapolated path, it provides a distribution of possible futures consistent with recent behavior.
The ensemble samples reflect context-dependent variability in aircraft motion: tight, low-spread ensembles emerge during stable, steady flight, while wider and occasionally multi-modal spreads appear in maneuvering phases such as turns or climbs. This adaptive spread indicates that the model has learned to represent uncertainty in a meaningful way. When motion is predictable, the ensemble converges; when intent is ambiguous, the model expresses multiple likely continuations.
The flow visualization indicates that the model learns a consistent vector field that continuously transforms random noise into structured trajectories.
Quantitatively, the CFM predictor consistently achieves lower MAE and RMSE than constant-velocity extrapolation, reducing 3D RMSE by roughly 40% at 60 s horizons. Vertical predictions are particularly accurate, reflecting the slower dynamics of en-route flight. The best-of-$K$ results confirm that the true trajectory is typically contained within the ensemble, suggesting that the generated variability captures the range of realistic futures.
We compare against constant-velocity extrapolation because it matches the linear-motion assumptions commonly used in short-term safety nets and in practical risk modeling. More advanced physics-based or learning-based baselines would be valuable, but are left for future work.
The PIT analysis shows that the CFM forecasts are over-dispersed, especially in the horizontal (, ) components. The central peak and light tails in their PIT histograms indicate that ensemble spreads are wider than the true variability in the test data. In contrast, the component is better calibrated, showing a more uniform distribution. Incorporating additional contextual features, such as flight plans or intent information, could help reduce over-dispersion and improve calibration. In operational settings, horizontal over-dispersion may be conservative for safety, but may also inflate uncertainty volumes and increase nuisance alerts; this motivates further work on calibration. In the aircraft-centric PIT analysis, the over-dispersion is more pronounced laterally () than longitudinally (), consistent with the stronger influence of turning intent on short-horizon lateral motion.
The real-world encounter case study further highlights the operational relevance of the approach. When applied to a pair of aircraft that actually experienced a LOS, the model predicted an LOS in approximately 83% of all sampled trajectory pairs and a MAC in 0.36%. By representing uncertainty through ensembles rather than deterministic paths, the model enables direct estimation of the likelihood and severity of potential conflicts—offering a data-driven complement to existing risk-assessment methods. Nevertheless, accurate quantitative risk evaluation still depends on good probabilistic calibration.
Despite these promising results, several limitations remain. A small fraction of generated samples show unrealistic oscillations or curvature, indicating that the learned flow occasionally violates physical motion constraints. Incorporating kinematic regularization or lightweight flight-dynamics priors could mitigate this issue. The fixed 60-second prediction horizon, although operationally meaningful for safety-net applications, also constrains performance and should be adapted to the intended use case. Finally, the model currently relies solely on motion-derived ADS-B features; integrating contextual data such as flight plans, weather fields, or nearby traffic would likely improve both accuracy and calibration.
This study demonstrates that Conditional Flow Matching (CFM) provides an effective generative formulation for short-term, uncertainty-aware aircraft trajectory prediction. Given one minute of observed motion, the model learns a continuous vector field that transforms stochastic perturbations into future trajectories over the following minute. The resulting ensembles capture both the expected continuation of flight and the uncertainty associated with short-term intent, producing forecasts that are more accurate and informative than conventional constant-velocity extrapolation.
Beyond predictive accuracy, the ensemble formulation of the CFM model offers a direct route from probabilistic forecasting to operational decision support. By representing future motion as a distribution rather than a single trajectory, it becomes possible to compute interpretable risk measures such as the probability of loss of separation or mid-air collision. In the presented case study, these probabilities aligned closely with the actual outcome, demonstrating that the model can provide early and quantitative evidence of potential conflicts. Such probabilistic indicators could complement existing safety nets by replacing binary thresholding with continuous risk levels, supporting more nuanced prioritisation and review of air traffic situations. To ensure operational reliability, however, ensemble calibration must be demonstrated.
In the longer term, models of this kind could be used to complement existing safety nets such as the STCA or the Airborne Collision Avoidance System (ACAS). By producing probabilistic forecasts that explicitly quantify future risk, they could provide an additional layer of context to existing deterministic alerting systems, helping distinguish genuine conflicts from expected manoeuvres. Equally, the same generative framework can be applied retrospectively for the forensic analysis of historical encounters, allowing quantitative reconstruction of uncertainty and intent in recorded loss-of-separation or near-miss events.
From a modeling perspective, several avenues of research emerge. First, the physical fidelity of generated trajectories can be improved by incorporating lightweight kinematic regularization terms or flight-dynamics priors to suppress oscillatory samples while preserving diversity. Second, extending the conditioning inputs beyond motion-derived ADS–B features to include flight plans, meteorological conditions, or surrounding traffic is likely to enhance intent inference and reduce lateral over-dispersion. Third, extending the prediction to longer horizons would broaden the model's applicability.
In summary, CFM provides a principled foundation for probabilistic trajectory forecasting in Air Traffic Management. It unifies deterministic accuracy, calibrated uncertainty, and interpretability within a single generative framework, offering tangible benefits for both safety analysis and decision support. While further work on calibration, dynamics regularization, and contextual conditioning remains, the presented results suggest that flow-based generative modeling represents a promising and operationally relevant step toward uncertainty-aware prediction and risk estimation in next-generation air traffic management systems.
First Author: Conceptualization, Data Curation, Formal Analysis, Investigation, Methodology, Software, Validation, Visualization, Writing (Original Draft), Writing (Review and Editing)
Second Author: Writing (Review and Editing)
Third Author: Visualization, Writing (Original Draft), Writing (Review and Editing)
This research was funded by the Swiss Federal Office of Civil Aviation, grant number 2022-046.
All the data used in this study can be downloaded from the OpenSky Network.
The source code used for model training, evaluation, and figure generation is publicly available at github.com/figuetbe/generative-flight-predictions.