The futures of the air transportation system: automated foresight scenarios generation and analysis

Camille Blanchard

Abstract

The aviation sector faces multiple challenges, whether mitigating its environmental impact, recovering from the sanitary crisis, or satisfying its customers. This paper presents a foresight tool to help make decisions by considering possible futures. It is designed to automatically and exhaustively generate all the possible futures of a system of agents, based on a formal model defining the system, its components, and the interactions between them. It is applied to the air transport system and to the questions an airline company could ask itself. It aims to limit the impact of past data and of the participants' cognitive biases found in classic scenario production methods, while still relying on qualitative data. Because the principles of the system's agents are taken into account, the tool adds a new perspective for decision-making and makes it possible to consider a notion of moral conflict. Indeed, the analysis of the generated scenarios shows that reaching a goal may require a compromise between principles or the definition of priorities. It also shows that an agent, whatever decisions they make, may face conflict situations because of other agents. The representations of the results allow a better understanding of the situation and analyses of the initial knowledge.

Introduction

For the past few years, the air transport sector has been facing multiple challenges: the Covid-19 crisis [Fleming et al. 2022], the rise of global warming concerns [IPCC 2023] (e.g., the flygskam movement) combined with the war in Ukraine and the energy crisis, all while keeping its customers satisfied. In this context, stakeholders have to make decisions under uncertainty. A possible and natural stance is therefore to consider the possible futures of aviation. This means taking into account the various stakeholders interacting in the sector, from governments and international organisations to airline companies, fuel suppliers, and populations. They all make decisions on their own, controlling various variables to achieve their own objectives. This may result in conflicts within a stakeholder or between stakeholders.

Many methods are available to help decision-making, relying on one or more possible futures; they can be divided into two broad categories: forecasting [Petropoulos et al. 2022] and foresight ([Amer et al. 2013], [Oliveira et al. 2018], [Cordova-Pozo and Rouwette 2023]). Scenarios are widely used, either with formal quantitative methods or with non-formal qualitative methods. In the domain of foresight, the aim is to anticipate possible futures. The literature abounds with methods and with recent proposals for classifying them [Crawford 2019] and defining fundamental terminology [Spaniol and Rowland 2019]. A number of recent works also highlight the value of formal modelling and the use of technological tools as a methodological framework [Ködding et al. 2023] for simplifying methods and mitigating certain biases. In the air transport sector, such methods have helped produce a large number of reports including scenarios, either global or on specific issues, considering uncertainties or following the trend. To the best of our knowledge, a fully formalized approach for generating foresight scenarios from qualitative data has not been proposed yet. Moreover, current scenario generation methods primarily target key variables characterizing stakeholder systems, overlooking the stakeholders themselves and their choices. Only a few studies address post-generation scenario utilisation or the inherent methodological biases.

Therefore, this paper focuses on generating foresight scenarios about the future of the aviation sector and analysing them. It offers a formal foresight scenario methodology characterised by an automated and exhaustive scenario generation tool, as well as analysis tools designed to help a user in a stakeholder system make a decision. To achieve this, our method uses the notion of principles to help the user develop strategies for their decisions, justify their choices, or better represent societal phenomena in order to anticipate them. Additionally, it places significant emphasis on the involvement of stakeholders to facilitate the utilisation and comprehension of the generated scenarios. It is also expected that cognitive and methodological biases that may distort manually built scenarios will be mitigated.

A first version of the formal model was presented in a previous paper (in French) [Blanchard et al. 2021]. Some updates of the model are presented in this paper, especially concerning the characterisation of the notion of decision, the associated functions, and the definitions related to moral conflict. In addition, this paper presents analysis tools and their application to the air transport system.

We first focus on the various forecasting and foresight scenarios about the future of the air transport system (Section 2). Many of them have been published, which highlights the concerns of the sector. This also gives a first outlook on the existing types of scenarios. For the reader to become more familiar with forecasting and foresight, an overview of the different approaches and methods to build and generate scenarios is presented in Section 3. Section 4 describes the formal approach to model a system of agents and generate scenarios, illustrated on initial knowledge relating to the air transport system. Section 5 deals with the generation process and the resulting foresight scenarios. Section 6 focuses on the analysis of the generated scenarios and their representation according to different criteria. We conclude with a discussion on the usual biases of current methods, on how some of them can be reduced, and on further possible developments of our work.

Scenarios for aviation future planning

Many organisations, both inside and outside the aviation sector, have built foresight scenarios about the future of aviation (see Table 1 for some of these studies).

Table 1: Some scenarios about the future of aviation
Title | Date | Agencies | Number of scenarios | Method | Previous version | Aim
EREA Vision Study - The Future of Aviation in 2050 [EREA 2021] | 2021 | EREA | 4 | Workshops with experts (spring 2020), analysis tools | 2010 [EREA 2010] | Decision making inside EREA
Global Market Forecast 2022-2041 [Shparberg and Lange 2022] | 2022 | Airbus | 1 | Airbus model, quantitative forecasting | 2019 [Airbus 2019] | Decision making inside Airbus, inform stakeholders
Environmental trends in aviation to 2050 [Fleming et al. 2022] | 2022 | ICAO | 1 to 4 | Quantitative models | 2019 [Fleming and Lépinay 2019] | Guide the aviation sector and all its stakeholders
Élaboration de scénarios de transition écologique du secteur aérien [environnement 2022] | 2022 | ADEME | 3 | Workshops on socio-economic issues and use of a quantitative model | No | Analyse ecological transition paths for aviation at the French national scale
Waypoint 2050 [Group 2021] | 2021 | ATAG | 3 + 1 "scenario 0" | Airbus model, forecasting | No | Strategic perspectives for decision makers
Scenarios for 2050 [Klöwer et al. 2021] | 2021 | Academic paper | 4 + 2 | ICAO data, equations developed in the paper, data from the aviation fuel emission literature | No | Support to quantify aviation's contribution to global warming

Narrative scenarios are produced through participatory methods, with workshops run by facilitators and surveys filled out by experts. For example, the EREA (Association of European Research Establishments in Aeronautics) has produced four narrative scenarios, covering variations in possible future technologies, states of the world, and ways to achieve sustainability. They result from workshops that took place over several months, attended by EREA R&D experts after they had evaluated the scenarios they had produced in 2010.

Similarly, the French public agency ADEME (French Agency for Ecological Transition) has produced three narrative scenarios. Based on a literature review, they grounded their work and assumptions in four relevant scenario studies ([NLR and Economics 2021], [Project and Decarbo 2021], [Delbecq et al. 2021], [Committee 2019]). They also conducted a two-month consultation, including three workshops involving stakeholders to enrich the work. However, the questions and answers of the consultation are not provided in the final report. Another set of workshops (attended only by the three scenario builders: ADEME, DGAC (French Civil Aviation Authority), and DGEC (French Energy and Climate Authority)) then took place to produce the scenarios. Assumptions were made on the economic context, aviation decarbonisation, and customers' uses.

For both studies, even though the methods are precisely described, little information is given concerning the actual use of these scenarios. They are, however, claimed to be used for decision-making inside the organisations that built them.

On the other hand, formal quantitative models are used, especially in the context of forecasting, with trend scenarios based on past data. ICAO (International Civil Aviation Organization), and especially its Committee on Aviation Environmental Protection, published a report about trend scenarios on environmental issues, including greenhouse gas emissions, noise, and local air quality [Fleming et al. 2022]. Many different computational models were used to build the scenarios, which are based on quantitative data on a wide range of factors (COVID-19 impact, fuel prices, global economic conditions, etc.). Scenarios are represented as quantitative curves, including the representation of uncertainties. No concrete applications of these scenarios are claimed in this work; they are, however, included in a larger report intended to guide international aviation.

With the same purpose, [Klöwer et al. 2021] published six forecasting scenarios about the aviation sector at the 2050 horizon. The authors focus on air traffic \(CO_2\) emissions and take into account the change of fuels with scenarios about zero-carbon fuels. We can also mention the work of [Dray et al. 2022] for their forecasting scenarios on the different fuel pathways for aviation to reach net-zero climate impact. Likewise, Airbus publishes a trend scenario every three years [Shparberg and Lange 2022]. It is composed of trend curves computed by quantitative models, mostly Airbus's own. This type of scenario is claimed to be exploratory, starting from the current state of the world and using past data. It includes projections on the possible evolution of the fuel price, air traffic and demand, and customers' uses. It is said to be used as a reference for the whole aviation sector (airlines, airports, investors, governments, etc.). In the same way, Boeing has its own trend scenario [Boeing 2022], as does Comac [Comac 2020].

Finally, the scenarios from the Waypoint 2050 report published by ATAG (Air Transport Action Group) aim to present the different paths to decarbonize air transport and thus focus on \(CO_2\) emissions. They present a baseline (scenario 0) and three backcasting scenarios (starting from a defined target in the future). These are built on assumptions about the use of new technologies like blended wing bodies, sustainable fuels, and electric aircraft. They are composed of trend curves with a large place given to uncertainties, along with texts highlighting changes, making a link between forecast and foresight (see Section 3). They are generated using various traffic forecasting models such as the ones cited above (Airbus, Boeing, ICAO, etc.). Cost of travel, changes in demand, acceptability, and policies are considered among other variables.

We can also mention the creation of a freely available tool called "CAST", which enables organisations of all types to create their own forecasting scenarios [Planès et al. 2021]. This tool can also be used to assess the impact of the scenarios on climate change.

To the best of our knowledge, the use of these different scenarios inside aviation companies is not documented in the literature, and there are very few narrative scenarios from other parts of the world regarding the future of aviation. However, as an example of the use of forecasting scenarios concerning the future of air traffic, we can mention the study conducted by [Grewe et al. 2021], which, among other sources, uses forecasts from Airbus and Boeing to estimate the aviation industry's contribution to climate change. This study then compares these new scenarios to another one based on a different approach (including expert opinions) and on forecasts of technological innovations in the sector.

From this limited but representative spectrum, we can first say that scenario building is highly topical. This is surely related to the many issues that the air transport sector is facing today and its objective to mitigate its environmental impact [Nations 2015]. In fact, many organisations focus their scenarios on \(CO_2\) emissions. However, non-\(CO_2\) effects must not be forgotten [Lee et al. 2021]. Moreover, even if aviation has almost fully recovered from the Covid-19 crisis, practices have changed, e.g., the way customers buy their tickets [Bulchand-Gidumal and Melian-Gonzalez 2021], teleworking, or flight shaming [Guillen-Royo 2022]. The sector has to balance this with keeping air transport affordable, safe, and efficient [Nations 2021]. Scenarios help to prepare for these disruptions, allowing companies, organisations, and governments to anticipate.

It is worth noticing that both groups mentioned above (scenarios from formal quantitative models and narrative scenarios produced through participatory methods) have their own biases, whether caused by participants' backgrounds and opinions or by past data that may blind models to breakthrough scenarios. Moreover, little information is available on the funders of these studies. Some of the organisations are both judge and jury, making scenarios for their own benefit. The methods used to generate those scenarios are part of an abundance of methods for planning the future; they are presented in the next section.

Future planning: a focus on scenario generation

The future of a system of agents can be considered in many different ways. K. Muiderman [Muiderman et al. 2020] suggests four categories to classify the different approaches to considering and anticipating the future:

  1. forecasting: also described as strategic planning;

  2. foresight: identifying likely futures and building possible futures;

  3. co-creation of futures;

  4. critical approaches.

Co-creation of futures, also called experimental approaches, is based on experimentation, collective creation, and the imagination of new, mostly radical, futures, with a complete disconnection from the present. For example, the Red Team is a French team of science-fiction writers and researchers who imagine and produce scenarios on possible future threats [armées 2021]. [Hajer and Pelzer 2018] also presents "Techniques of Futuring" to imagine fictional futures in working groups of agents and to trigger discussions on what strategies to adopt considering these imagined futures. These approaches do not involve any formal methods.
Critical approaches include various lines of thinking on how anticipation and studies of possible futures impact current government policies and choices; for example, how the expectation of seeing certain technological innovations in the future leads to taking them for granted in the present [Talberg et al. 2018]. These approaches (3 and 4) do not actually relate to scenario generation; therefore, they will not be developed further in this paper.

Forecasting and foresight consider the future in very different ways. They usually rely on the use of scenarios. However, defining the term "scenario" is not easy as many methods exist. The multiple attempts to define it are the origin of the term "methodological chaos" used in the literature of this field [Spaniol and Rowland 2018]. As there is a radical difference in its use in forecasting methods and foresight methods, we will give a proper definition for each of them (see 3.1 and 3.2).

Forecasting

This approach aims at foretelling the future [Li Vigni 2022]. Like all future planning approaches, it is used when decisions have to be made to answer specific questions rather than to conduct a wider thinking process (e.g., fixing prices or production levels, giving advice on investment or policies). In particular, these methods aim at optimizing the path to reach a given objective, most often by minimizing the risks. They can also be related to event or failure prediction.

Forecasting is mostly based on modelling through planning tools or specific models to assess a small number of probable relevant futures [Petropoulos et al. 2022]. Each model is usually built to answer a specific question (e.g., specific quantitative forecasting models such as demographic, economic, meteorological, or epidemiological models). These specific models are sometimes combined into global forecasting models to build scenarios for systems covering different issues and topics. Models rely on a very wide range of computational tools such as statistical methods, machine learning, or model combination [Januschowski et al. 2020]. They all depend on data sets composed of current and past knowledge to determine historical patterns leading to future trends. Moreover, methods using these models often include guidance to evaluate the results and the quality of the predictions. In fact, even if exact predictions are not expected, the use of probabilities to quantify risks and uncertainties is recommended; these are often represented as intervals.

Scenarios are one of the results computed by forecasting models. They are defined as "narratives about conceivable futures that are likely to happen" [Petropoulos et al. 2022]. They are considered a tool for organisations to share information and stimulate thinking and discussion. This definition highlights the main bias of forecasting, which is to focus on probable scenarios, keeping business as usual and not considering radical changes. Aircraft manufacturers and ICAO build these kinds of scenarios (see Section 2).

Foresight and scenario generation

The foresight approach is more exploratory but still mostly aims at decision aid. This approach deals with future uncertainties by anticipating different situations. These situations are usually explored through participatory methods but also thanks to quantitative models. Foresight is a multidisciplinary approach to identify the major issues of a system (a particular business field, for example) through collective thinking and action. It is also a proactive strategy through which a stakeholder can consider different possible futures leading to a previously defined objective.

Scenario planning [Amer et al. 2013] is one of the most popular methods of the foresight approach. [Spaniol and Rowland 2019] provides a definition that is mainly in line with the point of view proposed by the Intuitive Logic school (see Section 3.2.1): a scenario is future-oriented, often on a global subject, and is composed of a narrative description of a possible, even plausible, future. It is usually part of a set of systematically generated scenarios (generated through the same process). We can notice that there is no mention of the term "likely", contrary to the forecasting definition of a scenario. Scenario generation does not always rely on past data, and the emphasis is put on uncertainties and wild cards (nonlinear events with huge impacts that cannot be predicted).

The three main schools of scenario generation are divided into two groups and were developed in parallel in the 60s:

  • American schools

    • Intuitive Logic school;

    • Probabilistic Modified Trend school;

  • French school of La Prospective.

American schools

Intuitive Logic has been popular in industry since it was promoted by the Shell Group [Wack 1985] and is today widespread for prospective studies across the world. It has been used for transport in Europe [Keseru et al. 2021] or to evaluate the development of smart environments and their relation to the elderly in the United States [Withycombe Keeler and Bernstein 2021]. This method focuses on sequences of events and the decision-making process. It is mostly based on participatory workshops and on a qualitative, deductive analysis of a system, and almost never relies on mathematical or formal models. The process, however, follows specific steps (from 5 to about 15, depending on the method). A workshop is usually run by a qualified facilitation team, which plays a large part in the commitment and understanding of participants. The latter are mainly members of the organisation that initiated the study. Experts outside the organisation may also be involved [Mauksch et al. 2020].

Probabilistic Modified Trend is another school of scenario generation and analysis; it involves two matrix-based methodologies, Trend Impact Analysis and Cross Impact Analysis [Gordon]. It involves both quantitative and qualitative models. Quantitative trends are first generated, often with forecasting tools; when no data are available, qualitative trends are used instead. In Trend Impact Analysis, these extrapolations are then modified by the addition of qualitative factors and uncertain breakpoints to enrich the analysis. Cross Impact Analysis focuses on the relationships between key drivers. It uses conditional probabilities to characterise the causal links between the occurrences of several factors and returns matrices answering what-if exercises and assessing the importance of possible events. There are not many recent examples of the use of these methodologies, but we can cite [Khademi-Jolgehnejad et al. 2021], applied to hospital development and its supply chain.
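
To make the cross-impact idea concrete, here is a minimal sketch in Python (the events, probabilities, and impact factors are invented for illustration and are not taken from [Gordon] or the cited studies): it estimates adjusted occurrence frequencies of interdependent events by Monte Carlo simulation, multiplying an event's probability by an impact factor whenever a related event occurs.

```python
import random

# Hypothetical events with initial probabilities (illustrative values only).
initial_prob = {"OilShock": 0.3, "CarbonTax": 0.5, "TrafficDrop": 0.4}

# Cross-impact factors: impact[(A, B)] multiplies the probability of B if A occurs.
impact = {
    ("OilShock", "TrafficDrop"): 1.5,
    ("CarbonTax", "TrafficDrop"): 1.2,
    ("OilShock", "CarbonTax"): 0.9,
}

def run_once(rng):
    """Draw the events in a random order, adjusting the remaining probabilities
    whenever an event occurs."""
    occurred = set()
    prob = dict(initial_prob)
    for event in rng.sample(list(prob), len(prob)):
        if rng.random() < min(prob[event], 1.0):
            occurred.add(event)
            for other in prob:
                if other not in occurred:
                    prob[other] *= impact.get((event, other), 1.0)
    return occurred

def cross_impact_simulation(n_runs=10_000, seed=0):
    rng = random.Random(seed)
    counts = dict.fromkeys(initial_prob, 0)
    for _ in range(n_runs):
        for event in run_once(rng):
            counts[event] += 1
    return {event: count / n_runs for event, count in counts.items()}

print(cross_impact_simulation())  # adjusted occurrence frequencies for each event
```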

Intuitive Logic almost always produces scenarios. Even if both methods can do so, the Probabilistic Modified Trend school does not systematically produce them, as its results are often given as matrices. In neither case is a formal model implemented to generate the scenarios or to define the issues and systems.

French school of La Prospective

The French school of "La Prospective", also known as the French school of foresight, was initiated from a philosophical point of view by G. Berger [Berger 2007].

It is based on the Scenario Method [Godet 2007a]; even though it claims to provide participatory methods that support the decision-making process and lead to strategic actions and changes, in practice the production of scenarios has often been seen as an end in itself, and no concrete applications of the results have come up [Bootz et al. 2022].

The Scenario Method is composed of specific steps and relies on the use of scenario-building tools. Knowledge is gathered through workshops involving members of the organisation funding the study, a qualified animation team, and sometimes external experts.

[Godet 2007b] presents the six steps of this Scenario Method:

  1. Define the problem and specify the studied system (set of elements interacting with each other, all organized to reach a common goal [Godet 2007a], including variables, agents, and objectives);

  2. Identify key variables (variables characterizing agents’ actions, which they can monitor to a greater or lesser extent [Godet 2007b]. The key variables reflect the agents’ objectives and may highlight their strategies and power games inside the system);

  3. Establish stakes and strategic objectives;

  4. Explore the range of possibilities;

  5. Formulate assumptions about the future and sometimes assign probabilities to them;

  6. Construct scenarios.

Supporting tools such as MICMAC (Matrix-based Multiplication Applied to a Classification) or MACTOR (Matrix of Alliances and Conflicts: Tactics, Objectives, and Recommendations) are provided with this method to support the workshops. These tools allow the computation of information regarding the links between variables, their importance, and the dependencies between agents; however, the scenarios resulting from the combination of different hypotheses are handmade. Thus, although clearly structured, the method requires restricting the number of variables taken into account: too many agents or variables make the work of the group long, complex, and tedious. These computational tools are less used today.
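
To illustrate the kind of computation such matrix tools perform, the following sketch ranks variables by accumulating direct and indirect influences through successive powers of a direct-influence matrix, in the spirit of MICMAC; the matrix values are invented and the function is our simplified reconstruction, not the actual tool.

```python
import numpy as np

# Hypothetical direct-influence matrix M[i][j]: influence of variable i on variable j
# (0 = none ... 3 = strong); the values are invented for illustration.
variables = ["FlightSupply", "TicketPrice", "FlightDemand", "FuelSupply"]
M = np.array([
    [0, 2, 1, 0],
    [0, 0, 3, 0],
    [2, 1, 0, 0],
    [1, 3, 0, 0],
])

def micmac_ranking(M, max_power=4):
    """Accumulate direct and indirect influences by summing successive powers of M,
    then rank variables by total influence (row sums) and dependence (column sums)."""
    total = np.zeros_like(M)
    P = np.eye(len(M), dtype=M.dtype)
    for _ in range(max_power):
        P = P @ M
        total = total + P
    return total.sum(axis=1), total.sum(axis=0)

influence, dependence = micmac_ranking(M)
for name, inf, dep in zip(variables, influence, dependence):
    print(f"{name}: influence={inf}, dependence={dep}")
```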

The French school of foresight's method is today considered more as a structuring tool for debate, and more room is given to the consequences and follow-up of the studies, helping through the process of changing policies and managing strategies inside companies and other organisations. The emphasis is put on the link between scenarios and actions, and high priority is given to mobilisation inside organisations [Bootz et al. 2019]. Moreover, companies often need fast, reactive, and adaptive answers to their questions about the future of their field, which are unlikely to be provided by the Scenario Method and its long workshops.

Typology of scenario generation

The typology presented by [Crawford 2019] provides criteria to classify the different methods based on scenario generation. Below, we will select only the criteria we consider relevant to contextualize our work. This will allow us to specify the nature of the scenarios we want to generate.

  • The criterion Value/Reality evaluates the "desirability" of the scenarios:

    • Descriptive scenario: The scenarios are generated through exploration without any desirability consideration (they can be divided into two categories: hypothetical if the exploration is wide and sometimes far from reality, and plausible if the notion of probability is involved);

    • Normative scenario: The scenarios are generated to reach specific goals (two extreme categories exist: active if the focus is put on some stakeholders’ actions and strategy during the scenarios or passive if the stakeholder has only an observational role).

  • The starting point of the scenario [Ducot and Lubben 1980]:

    • The present, which involves inductive reasoning (likely futures: trend scenarios; what-if scenarios: exploratory scenarios);

    • The future (an ideal or a feared situation), which involves abductive reasoning (backcasting).

  • The time horizon of the scenarios;

  • The way time is taken into account: continuous or discrete time;

  • The scale of the variables: internal or external to the system;

  • The number of scenarios: less or more than two;

  • The study participants: the members of the organisation initiating the study, the shareholders, and decision-makers (referred to in the sequel as the "users");

  • The place of the organisation initiating the study over the system and its environment (internal or external to the system, actor or spectator, etc.). This criterion can help to define the limits of the studied system.

Table 2 summarises the methods presented in this section against some of these criteria.

Table 2: Sample of methods for future planning that usually rely on scenario generation
Approach | Objective | Process / Scenarios | Organisations | Tools | Data type
Global forecast [Petropoulos et al. 2022] | Guidelines and advice | One scenario or some thematic scenarios with uncertainty intervals | International organisations (e.g., ICAO), independent agencies | Global quantitative forecasting models | Quantitative
Strategic planning | Help stakeholders' decision-making | One forecasting scenario with uncertainty intervals | Companies (e.g., Airbus, Boeing) | Specific quantitative forecasting models | Quantitative
Intuitive Logic [Wack 1985] | Decision-making, scenario production | Focus and data collection among experts, workshops to develop logical exploratory scenarios (from 2 to 4) | All types of organisations | - | Qualitative
Probabilistic Modified Trend [Gordon] | Guidance, trend generation, causality highlighting | Key factor determination, trend extrapolations or probabilities on events, matrices of exploratory scenarios | Mostly companies | Trend Impact Analysis, Cross Impact Analysis | Quantitative with some qualitative additions
French Prospective (before 2015) [Godet 2007b] | System understanding | Workshops, system definition, key variables and relationships between components, exploratory scenarios (from 3 to 6) | All types of organisations | MICMAC, MACTOR, MORPHOL | Qualitative
"New" French Prospective [Bootz et al. 2019] | Focus on mobilisation and results implication | Workshops leading to exploratory scenarios (from 3 to 6) | All types of organisations | Morphological analysis, SWOT analysis | Qualitative
Massive Scenario Generation (MSG) [Davis et al. 2007] | Strategic planning | High number of scenarios | Companies, military organisations | MSG generator (ordinary sensitivity analysis, filtering) | Quantitative

We can observe from the approaches listed above that there is no formal model associated with qualitative data, except for calculation tools such as MICMAC. Approaches dealing with forecasting models use only quantitative data to produce one scenario on a specific issue. Approaches dealing with qualitative data generate from 2 to 6 scenarios; they also aim to guide organisations in a sector or to support the decision-making process of governments or companies. To the best of our knowledge, this use is not well-documented.

In the typology of [Crawford 2019], a distinction is made between the number of generated scenarios, whether lower or higher than two. However, we can also make another distinction between the majority of methods that produce a number of "hand-made" scenarios between four and six, and the "massive" generation of scenarios proposed by [Davis et al. 2007]. Massive generation uses quantitative data and aims at exploring more than "the obvious possibilities that are already in mind" (p.13).

The formalisation of scenario-based foresight methods is still relatively under-discussed today but is increasingly being debated [Schirrmeister et al. 2020]. Formalisation can help mitigate certain biases while providing other advantages. Some ambiguities are mitigated because every term is formally defined. Formalism may also help structure the initial knowledge and provide an overview of it. This reduces the effect of false consensus, as each participant in the foresight study has access to the entire project. Furthermore, formalisation also encourages a step back from the modeled system, thus decreasing the availability bias or the advocacy bias, which relies on information taken from memory and recent experiences.

Regarding analysis, the link between Big Data and scenarios has been highlighted by [Batrouni et al. 2018] with potential common points such as statistical methods, modelling and simulation, and multi-criteria decision analysis. Big Data approaches could help generate and analyse scenarios (especially massively) by being used for continuous variable discretisation, real-time consideration, heterogeneous data integration, or alerts for invalidation of key assumptions. However, that paper notices that there is a need for a "formal theoretical foundation in the scenario domain" and rigorous definitions for computational models applied in the generation and analysis of foresight scenarios.

Formal model for the generation of scenarios on the future

In Section 2, we have described forecasting scenarios for the future of air transportation based on quantitative data. We have also presented global foresight scenarios relying on qualitative data describing what the aviation sector could become. As noted in Section 2, the aviation sector is no exception when it comes to future planning and scenario generation: in fact, we could not find in this sector any use of models that both deal with qualitative data and generate a large number of scenarios automatically. This sector will be used in Section 4.1 to build a use case.

Referring to Section 3.3, we will present the general concepts constituting a formal model (see Section 4.2), implement it, and generate descriptive and hypothetical scenarios. However, the decision-support objective could lead to the simulation of precise paths leading to some desired situation, which would place our work in a normative and active perspective. The generation of exploratory scenarios requires positioning oneself in a logic of forward-casting, but it should not exclude working in more detail on a future chosen in advance. Regarding the time horizon of our scenarios, they will be limited by stopping criteria, which are defined in Section 5. Time will not be modeled explicitly, the scenarios being constructed as a sequence of states and events building the narrative of a possible future. Finally, as in [Davis et al. 2007], we wish to generate a large number of scenarios. However, our contribution differs from that work in that the nature and analysis of the scenarios will not be quantitative.

Our formal model presented in Section 4.2 is based upon some concepts that are supported by the French School of Prospective (see 3.2.2), such as agents and variables. In addition, we add principles to which agents are committed, something that is generally not considered in existing scenario methods. These general concepts will be illustrated with the Use Case, formalized to fit our model. Some definitions of the model have been revised compared to [Blanchard et al. 2021], and the various assumptions have been made explicit. Details of the changes in the definitions will be provided as footnotes.

Initial knowledge of selected Use Case

The issues the aviation sector has to deal with (Section 2) can be considered from many points of view. In this paper, we consider the viewpoint of a fictitious airline company, EasyFlight, wondering how to adapt to potential future changes and how to anticipate another possible crisis. This company will be the user of the foresight tool.

This company is part of the aviation system; the other agents considered are:

  • Customer;

  • GovernmentX;

  • SuperFuel: a conventional fuel supplier;

  • SARS-CoV-2: an external troublemaker who represents the sanitary crisis.

Each agent takes a stand on the following principles:

  • Wealth Creation;

  • Environmental Protection;

  • Customer Satisfaction.

Finally, we define variables of interest, representing key points of the aviation system. These variables allow us to characterise the limits of the chosen system.

The variables are:

  • Flight supply: the number of civil aviation flights departing from France;

  • Ticket price: the average price of flight tickets in civil aviation;

  • Flight demand: the number of customers looking for a flight departing from France;

  • Fuel supply: the quantity of fuel available for the civil aviation flight market;

  • Limitation policies: the policies concerning the civil aviation flight supply and especially those limiting it (for example environmental restrictions);

  • Sanitary crisis: a sanitary crisis at a big scale such as the SARS-COV-2 crisis.

Each agent is associated with one or several variables and can make decisions to change their values (e.g., EasyFlight can decide to increase the Flight Supply). Because all agents make decisions at the same time, conflicts can happen. For example, if EasyFlight chooses to increase the Flight Supply while SuperFuel decreases the Fuel Supply, this is considered a conflict of a logical nature. Moreover, a single agent can face internal conflicts when they have no other choice but to decide against one of their principles or policies. For example, GovernmentX may have to choose between creating Limitation Policies, thereby going against the Wealth Creation principle, or not creating Limitation Policies, thereby going against the Environmental Protection principle. GovernmentX is in favour of both principles, so whatever decision they make, a conflict of a moral nature will appear.

The initial knowledge will be formalized and extended (see Section 4.2) so that it can be processed by the algorithm we propose to generate foresight scenarios on the future of a simplified air transport system. The results are described in Section 5.

Model

System definition

Definition 1. (Principle - Set \(\Pi\)) \(\Pi\) is a set of elements \(\pi\) called principles. [Blanchard et al. 2021]

Use Case Example \[\Pi= \{WealthCreation, CustomerSatisfaction, EnvironmentalProtection\}\]

Definition 2. (Variable - Set \(\mathbf{\mathcal{V}}\)) \(\mathbf{\mathcal{V}}\) is a set of elements \(v\) called variables, each of them takes its values in a discrete set noted \(\mathbf{\mathcal{W}}_v\). We write \(\mathbf{\mathcal{W}}_\mathbf{\mathcal{V}}=\bigcup\limits_{v\in\mathbf{\mathcal{V}}}\mathbf{\mathcal{W}}_v\) [Blanchard et al. 2021].

Use Case Example \[\begin{array}{l} \mathbf{\mathcal{V}} = \{FlightSupply, FlightDemand, TicketPrice, \\ ~~~~~~~~FuelSupply, LimitationPolicies, SanitaryCrisis\} \end{array} ~~with~~ \begin{array}{l} \mathbf{\mathcal{W}}_{FlightSupply} = \{Low, Steady, High\}\\ \mathbf{\mathcal{W}}_{FlightDemand} = \{Low, Steady, High\}\\ \mathbf{\mathcal{W}}_{FuelSupply} = \{Low, Steady, High\}\\ \mathbf{\mathcal{W}}_{LimitationPolicies} = \{Yes, No\}\\ \mathbf{\mathcal{W}}_{TicketPrice} = \{Low, High\}\\ \mathbf{\mathcal{W}}_{SanitaryCrisis} = \{Yes, No\}\\ \end{array}\]

Definition 3. (Laws of the domain - Set \(\mathbf{\mathcal{C}}\)) Expression of the system constraints [Blanchard et al. 2021].

Among the laws of the domain are variable/value pairs that are incompatible with each other: the function \(incompatible\) returns True if a set of \((variable, value)\) pairs is incompatible.

Definition 4. (Function incompatible)[Blanchard et al. 2021] \[\textrm{incompatible} : P(\mathbf{\mathcal{V}} \times \mathbf{\mathcal{W}}_\mathbf{\mathcal{V}}) \rightarrow \{True, False\}\] with \(P\) the power set of \(\mathbf{\mathcal{V}} \times \mathbf{\mathcal{W}}_\mathbf{\mathcal{V}}\) (i.e., the set of all its subsets)

Use Case Example \[\begin{array}{l} \mathbf{\mathcal{C}} = \{incompatible((FlightSupply, High), (FlightDemand, Low)) = True\} \end{array}\]
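
As a concrete illustration of Definitions 1-4, the initial knowledge of the Use Case can be encoded as follows (a minimal Python sketch; the data structures are our own illustrative choices, not part of the formal model):

```python
# Principles (Definition 1)
PRINCIPLES = {"WealthCreation", "CustomerSatisfaction", "EnvironmentalProtection"}

# Variables and their discrete value domains W_v (Definition 2)
DOMAINS = {
    "FlightSupply": {"Low", "Steady", "High"},
    "FlightDemand": {"Low", "Steady", "High"},
    "FuelSupply": {"Low", "Steady", "High"},
    "LimitationPolicies": {"Yes", "No"},
    "TicketPrice": {"Low", "High"},
    "SanitaryCrisis": {"Yes", "No"},
}

# Laws of the domain (Definitions 3-4): sets of (variable, value) pairs that
# cannot hold together.
INCOMPATIBLE_SETS = [
    frozenset({("FlightSupply", "High"), ("FlightDemand", "Low")}),
]

def incompatible(pairs):
    """True if the given (variable, value) pairs contain an incompatible combination."""
    pairs = set(pairs)
    return any(bad <= pairs for bad in INCOMPATIBLE_SETS)

assert incompatible({("FlightSupply", "High"), ("FlightDemand", "Low")})
```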

Definition 5. (Agent - Set \(\mathbf{\mathcal{A}}\)) An agent \(a\) is defined by its identifier \(i_a\) and by the set \(\mathbf{\mathcal{V}}_a\) of variables it can control (in particular through decision-making) [Blanchard et al. 2021]. \[\forall a \in \mathbf{\mathcal{A}}, a = <i_a, \mathbf{\mathcal{V}}_a>\]

The following simplifying assumption is made here:

  • there is no variable that no agent can control.

Given these elements, we can define the system as:

Definition 6. (System \(\Sigma\)) A system is a quadruplet composed of a set \(\mathbf{\mathcal{A}}_I\) of agents which will be called internal, a set \(\Pi\) of principles, a set \(\mathbf{\mathcal{V}}\) of variables and a set of laws of the domain \(\mathbf{\mathcal{C}}\) [Blanchard et al. 2021]. \[\Sigma = <\mathbf{\mathcal{A}}_I, \Pi, \mathbf{\mathcal{V}}, \mathbf{\mathcal{C}}>\]

The following simplifying assumptions are made here:

  • the system is closed (the system’s components cannot change);

  • closed-world assumption (what is not known to be true is false; see [Reiter 1981]).
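
Continuing this illustrative encoding, agents and the system of Definitions 5-6 could be represented as below; the assignment of controlled variables to agents is a plausible reading of the Use Case (only the EasyFlight/FlightSupply, Customer/FlightDemand, and SARS-CoV-2/SanitaryCrisis pairings are explicit in the examples), so treat the rest as an assumption:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Agent:
    """An agent (Definition 5): an identifier and the set of variables it controls."""
    identifier: str
    controlled_variables: frozenset

@dataclass
class System:
    """The system Sigma (Definition 6): internal agents, principles, variables, laws."""
    internal_agents: set
    principles: set
    variables: set
    laws: list  # e.g. the INCOMPATIBLE_SETS above

easyflight = Agent("EasyFlight", frozenset({"FlightSupply", "TicketPrice"}))
superfuel = Agent("SuperFuel", frozenset({"FuelSupply"}))
government = Agent("GovernmentX", frozenset({"LimitationPolicies"}))
customer = Agent("Customer", frozenset({"FlightDemand"}))
sars_cov_2 = Agent("SARS-CoV-2", frozenset({"SanitaryCrisis"}))  # external agent, outside the system

system = System(
    internal_agents={easyflight, superfuel, government, customer},
    principles=PRINCIPLES,      # from the previous sketch
    variables=set(DOMAINS),
    laws=INCOMPATIBLE_SETS,
)
```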

We make a distinction between agents that are inside and outside the system.

Definition 7. (External agent - Set \(\mathbf{\mathcal{A}}_X\)) An external agent can initiate disturbances by acting on variables of the system. This category includes external agents that are human entities (for example, terrorists or an international organisation) and those that are not (e.g., a volcano, a health crisis, etc.). External agents do not belong to the system: \(\mathbf{\mathcal{A}}_I \cap \mathbf{\mathcal{A}}_X = \emptyset\)

Use Case Example

\[\begin{array}{l} \mathbf{\mathcal{A}}_X = \{SARS-CoV-2\} \end{array}\]

Definition 8. (Internal agent - Set \(\mathbf{\mathcal{A}}_I\)) An internal agent \(a_I\) is a stakeholder of the system. It is characterised by several attributes which will be defined later on [Blanchard et al. 2021].

Use Case Example

\[\begin{array}{l} \mathbf{\mathcal{A}}_I = \{EasyFlight, SuperFuel, GovernmentX, Customer\} \end{array}\]

with EasyFlight an airline company, SuperFuel a conventional fuel supplier, GovernmentX the government of a country X, and Customer a customer of air transport.

A system state is defined as follows:

Definition 9. (State of the system - Set \(\mathbf{E}\)) A state \(e\) of the system is composed of a set \(\mathbf{\mathcal{P}}_e\) of the positions (Definition 10) of the internal agents on the principles, a set \(\mathbf{\mathcal{O}}_e\) of the opinions (Definition 11) of the internal agents and a set \(\mathbf{\mathcal{I}}_e\) of the instantiated variables. The initial state of the system is given [Blanchard et al. 2021]. \[e = <\mathbf{\mathcal{P}}_e,\mathbf{\mathcal{O}}_e, \mathbf{\mathcal{I}}_e >\]

We assume that a variable cannot have two different values in a given state. \[\begin{array}{c} \forall v \in \mathbf{\mathcal{V}}, \forall (w_v, w_v') \in \mathbf{\mathcal{W}}_v \times \mathbf{\mathcal{W}}_v, \\ (v,w_{v}) \in \mathbf{\mathcal{I}}_e \wedge (v,w_{v}') \in \mathbf{\mathcal{I}}_e \wedge w_{v} \neq w_{v}' \Rightarrow incompatible((v,w_v),(v,w_v')) = True \end{array}\]

Use Case Example

Let \(e_0\) be the initial state of the system,

\[\begin{array}{l} \mathbf{\mathcal{I}}_{e_0}=\{(FlightSupply, Steady), (FlightDemand, Steady),(LimitationPolicies, No), (FuelSupply, Steady), \\ ~~~~~~~~~(TicketPrice, Low), (SanitaryCrisis, No) \} \end{array}\]
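Under the same encoding, the instantiated variables of the initial state \(e_0\) can be stored as a mapping from variables to values, which enforces by construction that a variable has a single value in a state (one possible layout among others):

```python
# Instantiated variables I_{e0} of the initial state (Definition 9)
initial_instantiation = {
    "FlightSupply": "Steady",
    "FlightDemand": "Steady",
    "LimitationPolicies": "No",
    "FuelSupply": "Steady",
    "TicketPrice": "Low",
    "SanitaryCrisis": "No",
}

assert all(value in DOMAINS[var] for var, value in initial_instantiation.items())
```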

An internal agent is qualified with the following functions:

Definition 10. (Function position) The function \(position\) specifies the view of the internal agent \(a_I\) on the principles of the system in a given state \(e\) (see Definition 9). The agent can support (\(+\)), be indifferent to (\(=\)) or be opposed to (\(-\)) a principle [Blanchard et al. 2021].
\[position_{a,e} : \Pi \rightarrow \{+,=,-\}\]

The set of the positions of the agents on the principles, given by the function \(position\) in a state \(e\), is written \(\mathbf{\mathcal{P}}_e\): \(\mathbf{\mathcal{P}}_e \subset \mathbf{\mathcal{A}}_I \times \Pi \times \{+,=,-\}\)
The subset of all the positions of a unique agent \(a\) in a state \(e\) is written \(\mathbf{\mathcal{P}}_{a,e}\). All the internal agents must have a position (even if indifferent) on each of the principles. The set of positions depends on the state \(e\) of the system: for example, in a sanitary crisis situation, a Health principle would have more importance than in a regular situation.

Definition 11. (Function opinion) The function \(opinion\) returns the stated opinion of an internal agent \(a_I\) on how the value of a variable is positioned with respect to a principle in a given state \(e\). The agent may consider that the value of the variable is in line with the principle (\(1\)), that it is not related to the principle (\(0\)), or that it is in contradiction with the principle (\(-1\)) [Blanchard et al. 2021].
\[opinion_{a,e} : \mathbf{\mathcal{V}} \times \mathbf{\mathcal{W}}_\mathbf{\mathcal{V}} \times \Pi \rightarrow \{1,0,-1\}\]

The set of the stated opinions of the internal agents, given by the function \(opinion_a\) in a state \(e\), is written \(\mathbf{\mathcal{O}}_e\) and the subset of the opinion of a unique agent \(a\) in a state \(e\) is written \(\mathbf{\mathcal{O}}_{a,e}\).

Use Case Example

Let \(e\) be a given state (see Definition 9), and let us have:

\[\begin{array}{l} position_{EasyFlight,e}(CustomerSatisfaction)=~+ \\ opinion_{EasyFlight,e}((FlightSupply, Low),CustomerSatisfaction)=~-1 \end{array}\]

This means that, in the state \(e\), the company EasyFlight supports the principle CustomerSatisfaction and considers that if the value of the variable FlightSupply is Low, it does not respect the principle CustomerSatisfaction.
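
The position and opinion functions can likewise be encoded, purely for illustration, as dictionaries keyed by agent and state; the entries below reproduce the stated examples, while the EnvironmentalProtection position and the default opinion of 0 are illustrative assumptions:

```python
# position_{a,e}: Pi -> {+, =, -} (Definition 10), here for EasyFlight in state e0
position = {
    ("EasyFlight", "e0"): {
        "CustomerSatisfaction": "+",
        "WealthCreation": "+",
        "EnvironmentalProtection": "=",  # illustrative assumption
    },
}

# opinion_{a,e}: (variable, value, principle) -> {1, 0, -1} (Definition 11)
opinion = {
    ("EasyFlight", "e0"): {
        ("FlightSupply", "Low", "CustomerSatisfaction"): -1,
        ("FlightSupply", "Low", "WealthCreation"): -1,
    },
}

def get_opinion(agent, state_name, variable, value, principle):
    """Opinions not explicitly stated default to 0 (not related to the principle)."""
    return opinion.get((agent, state_name), {}).get((variable, value, principle), 0)
```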

Decision and Action

Internal agents can decide to modify (or not) the values of the variables they can control. They can perform actions and, contrary to external agents, they have the ability to make decisions.

Definition 12. (Decision - Set \(\mathbf{\mathcal{D}}\)) A decision \(d_{a,v,e}\) is the choice of an internal agent \(a_i\) to do something about a variable \(v\) in a state \(e\) [Blanchard et al. 2021].

A decision can be:

  • a desire to act (change or maintain the value of the variable);

  • do nothing about this variable (i.e. let the other agents do what they want). In the case where an agent is the only one who can act on a variable, the decision to do nothing is equivalent to the decision to maintain the state of the variable.

Let \(\mathbf{\mathcal{D}}_e\) be the set of decisions considered in state \(e\), \(\mathbf{\mathcal{D}}_{v}\), the subset of all possible decisions on a variable \(v\), \(\mathbf{\mathcal{D}}_{a,e}\), the subset of decisions considered only by the agent \(a\) in the state \(e\) and \(\mathbf{\mathcal{D}}_{a,v,e}\), the subset of decisions considered by agent \(a\) on variable \(v\) in state \(e\).

Use Case Example

\[\begin{array}{l} \mathbf{\mathcal{D}}_{EasyFlight,FlightSupply,e_0}=\{IncreaseSupply, DecreaseSupply, DoNothingSupply...\}\\ \mathbf{\mathcal{D}}_{SARS-CoV-2,SanitaryCrisis,e_0}=\{StartSanitaryCrisis, NoChangeSanitaryCrisis...\} \end{array}\]

Definition 13. (Function h) Function \(h\) returns the result of a decision \(d_{a,v,e}\) that replaces the value \(w_v\) of a variable \(v\) with the value \(w_v'\) [Blanchard et al. 2021]. \[h : \mathbf{\mathcal{D}}\times \mathbf{\mathcal{V}} \times \mathbf{\mathcal{W}}_\mathbf{\mathcal{V}} \rightarrow \mathbf{\mathcal{V}} \times \mathbf{\mathcal{W}}_\mathbf{\mathcal{V}}\]
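
For the Use Case, \(h\) can be sketched as a lookup table from (decision, variable, current value) to the resulting instantiation; the entries below only cover the decisions used in the examples of this section, and "do nothing" decisions are assumed to leave the value unchanged:

```python
# h: (decision, variable, current value) -> (variable, new value)  (Definition 13)
H_TABLE = {
    ("IncreaseSupply", "FlightSupply", "Steady"): ("FlightSupply", "High"),
    ("DecreaseSupply", "FlightSupply", "Steady"): ("FlightSupply", "Low"),
    ("DecreaseDemand", "FlightDemand", "Steady"): ("FlightDemand", "Low"),
    ("StartSanitaryCrisis", "SanitaryCrisis", "No"): ("SanitaryCrisis", "Yes"),
}

def h(decision, variable, value):
    """Result of applying a decision; unknown or 'do nothing' decisions keep the value."""
    return H_TABLE.get((decision, variable, value), (variable, value))
```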

A decision can be characterised as favorable or unfavorable from the perspective of an agent. The following table outlines the conditions that determine the use of these qualifiers.

Characterisation of a decision on a variable \(v\) to instantiate it with the value \(w_v\), from the perspective of user \(u\), regarding a principle \(p\), in a state \(e\)
Type of decision | Position (\(\textrm{position}_{u,e}(p)\)) | Opinion (\(\textrm{opinion}_{u,e}((v,w_v),p)\))
Favorable | + | 1
Favorable | - | -1
Unfavorable | - | 1
Unfavorable | + | -1

For instance, the third data row of the table should be read as: "a decision concerning a variable is considered unfavorable if the agent holds a positive opinion (1) regarding the consequences of the decision in relation to a principle while being opposed to that principle (-)".
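
The table boils down to comparing the signs of the position and of the opinion; a minimal helper (our own, not part of the formal model) could be:

```python
def characterise_decision(position_value, opinion_value):
    """Classify a decision as favorable, unfavorable, or neutral for an agent, given its
    position on a principle (+, =, -) and its opinion (1, 0, -1) on the resulting
    (variable, value) pair with respect to that principle."""
    sign = {"+": 1, "=": 0, "-": -1}[position_value]
    if sign == 0 or opinion_value == 0:
        return "neutral"  # cases the table does not qualify
    return "favorable" if sign * opinion_value > 0 else "unfavorable"

assert characterise_decision("+", 1) == "favorable"
assert characterise_decision("-", 1) == "unfavorable"
```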

Definition 14. (Action - Set \(\mathbf{\mathcal{A}}_c\)) An action enables transitioning from one instantiation \((v,w_v)\) to another instantiation \((v,w_v')\), where \(w_v \neq w_v'\). In the case of an internal agent or a human external agent, it represents the implementation of a decision. For an external non-human agent, it represents the actual impact of the disturbance.

The actions of the agents modify the state of the system.

Definition 15. (Event - Set \(\mathbf{\mathcal{E}}\)) An event is a variation of the state of the system by a change of values of one or more variables as a result of an action, or as a result of a change of positions of the internal agents on the principles or of opinions on the values of the variables [Blanchard et al. 2021].

The following simplifying assumptions are made here:

  • an agent is limited to one decision (or disturbance) per variable in each state of the system;

  • an agent knows the current values of all the variables they can control;

  • in each state, an agent must make decisions (or disturbances) on all the variables it can control;

  • there are no dynamics specific to the system: a variable value only changes under the action of an agent (Definition 14); therefore, the variables are independent of each other.

Non-transgression assumption
An internal agent cannot make decisions that go against the principles they support ("unfavorable decision"). They can, however, change their positions and opinions during the course of the scenario.

Special case of the user
It is assumed that the internal agent initiating the foresight study, i.e., the user, may make decisions that are contrary to their own views on the principles. In our example, the agent EasyFlight is the user. Indeed, the user, in addition to acting according to principles, may be guided by goals (see Definition 16 below):

Definition 16. (Goal \(\mathbf{\mathcal{G}}_u \subset (\mathbf{\mathcal{V}} \times \mathbf{\mathcal{W}}_\mathbf{\mathcal{V}})\)) Set of the values the user wants the variables to reach [Blanchard et al. 2021].

Use Case Example

\[\begin{array}{l} \mathbf{\mathcal{G}}_{EasyFlight} = \{(TicketPrice, High), (FlightSupply, High)\} \end{array}\]

Conflicts

As a result of their decisions in a given state, internal agents may face logical conflicts or moral conflicts. External agents may only face logical conflicts.

Logical conflict

A logical conflict occurs between two or more variables when:

  • agents seek to instantiate the same variable with different values or

  • agents seek to instantiate variables in a way that is defined as incompatible (see Definition 4).

Definition 17. (Logical conflict) : \[\begin{array}{l} \forall e, \forall \mathbf{\mathcal{D}}_e, logicalconflict(\mathbf{\mathcal{D}}_e, e) = True \iff \left[ \begin{array}{l} \exists \mathbf{\mathcal{H}}_e, \exists n, 0 < n \leq \vert \mathbf{\mathcal{V}} \vert, \exists (v^1, {w_v}^1),..., (v^n, {w_v}^n) \in \mathbf{\mathcal{H}}_e, \\ incompatible((v^1, {w_v}^1),..., (v^n, {w_v}^n)) = True \end{array} \right . \end{array}\] with \(\mathbf{\mathcal{H}}_e\) the partial state of the system resulting from the decisions of some agents in state \(e\). \[\mathbf{\mathcal{H}}_e = \{h(d_{a,v,e},v,w_v), a \in \mathbf{\mathcal{A}}, v \in \mathbf{\mathcal{V}}, w_v \in \mathbf{\mathcal{W}}_v, d_{a,v,e} \in \mathbf{\mathcal{D}}_e\}\]

Use Case Example
Let us consider the company EasyFlight's decision IncreaseSupply in the initial state \(e_0\), where FlightSupply is Steady: \[d_{EasyFlight, FlightSupply, e_0} = IncreaseSupply\]

If agent EasyFlight makes this decision, the value of variable FlightSupply will switch from Steady to High. \[h(IncreaseSupply, FlightSupply, Steady) = (FlightSupply, High)\]

Let us now consider the Customer agent in the initial state where FlightDemand is Steady: \[d_{Customer, FlightDemand, e_0} = DecreaseDemand\] If agent Customer makes this decision, the value of variable FlightDemand will switch from Steady to Low. \[h(DecreaseDemand, FlightDemand, Steady) = (FlightDemand, Low)\]

The law of the domain \[\mathbf{\mathcal{C}} = \{incompatible((FlightSupply, High), (FlightDemand, Low)) = True\}\] means that these two pairs are incompatible; therefore, EasyFlight's and Customer's decisions result in a logical conflict.
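
Reusing the helpers from the previous sketches, logical conflict detection over the partial state \(\mathbf{\mathcal{H}}_e\) resulting from the agents' considered decisions can be sketched as follows (an illustrative reconstruction, with an exhaustive subset check that is affordable for the small Use Case):

```python
from itertools import chain, combinations

def partial_state(decisions, instantiation):
    """Build H_e: the (variable, value) pairs resulting from the considered decisions."""
    return {h(d, v, instantiation[v]) for (d, v) in decisions}

def logical_conflict(decisions, instantiation):
    """Definition 17: True if the decided instantiations push one variable towards two
    different values or contain a combination declared incompatible by the laws."""
    H = partial_state(decisions, instantiation)
    variables = [v for (v, _) in H]
    if len(variables) != len(set(variables)):
        return True  # same variable instantiated with two different values
    subsets = chain.from_iterable(combinations(H, n) for n in range(1, len(H) + 1))
    return any(incompatible(subset) for subset in subsets)

# EasyFlight increases the supply while Customer decreases the demand: a logical conflict.
decisions = [("IncreaseSupply", "FlightSupply"), ("DecreaseDemand", "FlightDemand")]
print(logical_conflict(decisions, initial_instantiation))  # True
```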

Moral conflict

The definition of a moral conflict is inspired by the one provided by [Bonnemains 2019]. A moral conflict focuses on the principles and opinions of one single agent. An agent faces a moral conflict in a state \(e\) when each decision they could make on a variable is either contrary to their principles by nature or has negative consequences. These two possibilities will be defined respectively through the function \(NegPrinciple\) and the function \(BadConsequence\).

Function \(NegPrinciple\) returns True when the decision of an agent \(a\) is by nature against the principles supported by this agent in a state \(e\):

Definition 18. (Function NegPrinciple) \[NegPrinciple_{a,e} : \mathbf{\mathcal{D}}_{a,e} \times \Pi \rightarrow \{True, False\}\]

The fact that an agent’s decision is contrary to a principle is specified in the laws of the domain.

Example

In addition to the initial knowledge (Section 4.1), one could consider, for the sake of illustration, that: \[NegPrinciple_{EasyFlight, e_0}(PromoteAviationThroughGreenwashing, Honesty)=True\] The agent EasyFlight considers that the decision PromoteAviationThroughGreenwashing is by nature against the moral principle Honesty in the initial state \(e_0\).

Function \(BadConsequence\) returns True if the action resulting from a decision has negative consequences for an agent \(a\) in a state \(e\). "Consequences" here means a partial state \(\mathbf{\mathcal{H}}_a\) with instances of variables that go against at least one principle the agent adheres to.

Definition 19. (Function BadConsequence) \[BadConsequence_{a,e} : \mathbf{\mathcal{D}}_{a,v,e} \times \Pi \times \mathbf{\mathcal{P}}_{a,e} \times \mathbf{\mathcal{O}}_{a,e}\rightarrow \{True, False\}\]

Use Case Example 1
We consider the decision DecreaseSupply, whose potential outcome changes the variable FlightSupply from the value Steady to the value Low.

\[\textrm{h}(DecreaseSupply,FlightSupply,Steady)=(FlightSupply,Low)\] In the initial state \(e_0\), the agent EasyFlight aligns with the principle WealthCreation (+).

\[\textrm{position}_{\textit{EasyFlight},e_0}(\textit{WealthCreation})=~+\]

However, they hold a negative opinion (-1) about how this principle is upheld by the value Low of the variable FlightSupply.

\[\textrm{opinion}_{\textit{EasyFlight},e_0}((\textit{FlightSupply, Low}), \textit{WealthCreation})=~-1\]

The potential outcome of the decision DecreaseSupply contemplated by the agent EasyFlight, instantiating the variable FlightSupply with the value Low, has negative consequences for this agent.

\[\textrm{BadConsequence}_{\textit{EasyFlight},~e_0}(\textit{DecreaseSupply, WealthCreation}, +,-1)=\textrm{True}\]

We could also have a situation where an agent is against a principle (-) but acts according to it (1). The decision resulting in this situation would have negative consequences too.

Use Case Example 2

In addition to the initial knowledge (see Section 4.1), one could consider, for the sake of illustration, a variable Profits that can take the values No or Yes, transitioning from the former to the latter with the decision MakeProfits:

\[h(MakeProfits,Profits,No)=(Profits,Yes)\]

In this context, we can contemplate another initial state \(e_0\), where the NGO (Non-Governmental Organisation) agent is against the principle of WealthCreation (-).

\[\textrm{position}_{\textit{NGO}, e_0}(\textit{WealthCreation}) = -\]

Let’s assume, however, that they have a positive opinion (1) about how this principle is upheld by the value Yes of the variable Profits.

\[\text{opinion}_{\textit{NGO}, e_0}((\textit{Profits, Yes}), \textit{WealthCreation}) = 1\]

The potential outcome corresponding to the contemplated decision MakeProfits by the NGO agent to instantiate the variable Profits with the value Yes has negative consequences for this agent.

\[\textrm{BadConsequence}_{\textit{NGO}, e_0}(\textit{MakeProfits, WealthCreation}, -, 1) = \text{True}\]

Definition 20. (Moral conflict) : \[\begin{array}{l} \forall e \in \mathbf{E}, \forall a \in \mathbf{\mathcal{A}}_{I},\\ moralconflict(a,\mathbf{\mathcal{P}}_{a,e}, \mathbf{\mathcal{O}}_{a,e}, e) = True \\ \iff \exists v \in \mathbf{\mathcal{V}}, \forall d_{a,v,e} \in \mathbf{\mathcal{D}}_{a,v,e}, \exists \pi \in \Pi,\\ \left[ \begin{array}{l} NegPrinciple_{a,e}(d_{a,v,e}, \pi) = True ~\lor \\ BadConsequence_{a,e}(d_{a,v,e}, \pi, position_{a,e}(\pi), opinion_{a,e}(h(d_{a,v,e},v,w_v),\pi))= True \end{array} \right . \end{array}\]

Use Case Example

In addition to the initial knowledge (see Section 4.1), let us consider, for the sake of illustration, the initial state \(e_0\) where FlightSupply is Steady:

\[(\textit{FlightSupply, Steady}) \in \mathbf{\mathcal{I}}_{e_0}\]

In this state, the agent EasyFlight can make a decision on the variable FlightSupply within the following set:

\[\mathbf{\mathcal{D}}_{EasyFlight,FlightSupply,e_0}=\{DecreaseSupply, PromoteAviationThroughGreenwashing\}\]

However, we previously stated that:

\[\begin{array}{l} NegPrinciple_{EasyFlight, e_0}(PromoteAviationThroughGreenwashing, Honesty)=True \\ BadConsequence_{EasyFlight, e_0}(DecreaseSupply, WealthCreation, +, -1)=True \end{array}\]

Whatever the decision, the agent EasyFlight will compromise their principles, either by nature or by consequences: this is a moral conflict situation.
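
Reusing the previous sketches, moral conflict detection for one agent and one variable, in the spirit of Definitions 18-20, could look like this (NegPrinciple is encoded here as a simple set of (agent, decision, principle) triples, an illustrative choice):

```python
# Decisions considered by nature against a principle (Definition 18); illustrative only.
NEG_PRINCIPLE = {("EasyFlight", "PromoteAviationThroughGreenwashing", "Honesty")}

def bad_consequence(agent, state_name, decision, variable, value, principle):
    """Definition 19: the potential outcome of the decision is unfavorable, i.e. the
    agent's position on the principle and its opinion on the resulting pair disagree."""
    new_var, new_val = h(decision, variable, value)
    pos = position[(agent, state_name)].get(principle, "=")
    op = get_opinion(agent, state_name, new_var, new_val, principle)
    return characterise_decision(pos, op) == "unfavorable"

def moral_conflict(agent, state_name, variable, value, possible_decisions, principles):
    """Definition 20: every decision the agent can make on the variable is either
    against a principle by nature or has negative consequences."""
    def decision_is_bad(d):
        return any(
            (agent, d, p) in NEG_PRINCIPLE
            or bad_consequence(agent, state_name, d, variable, value, p)
            for p in principles
        )
    return all(decision_is_bad(d) for d in possible_decisions)

print(moral_conflict(
    "EasyFlight", "e0", "FlightSupply", "Steady",
    ["DecreaseSupply", "PromoteAviationThroughGreenwashing"],
    PRINCIPLES | {"Honesty"},
))  # True: whatever the decision, EasyFlight compromises a principle
```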

Scenario definition

A scenario is defined by the initial state of the system \(e_0\), a final state of the system \(e_f\), and a path \(c\) from \(e_0\) to \(e_f\).

Definition 21. (Scenario - Set \(\mathbf{\mathcal{S}}\))[Blanchard et al. 2021] \[\forall s \in \mathbf{\mathcal{S}}, s = <e_0, e_f, c> with~e_0,~e_f \in \mathbf{E}~and~c~the~path.\]

where \(c\) is a list composed of:

  • the different states of the system during the scenario;

  • the events associated with the changes of states;

  • information such as: decisions to do nothing so that the set of instantiated variables of a system state is not modified, or the justification for stopping a scenario.
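A possible in-memory encoding of such a scenario is sketched below. The field names, the representation of states as sets of (variable, value) pairs, and the example values are illustrative assumptions, not the paper's actual data structures.

    from dataclasses import dataclass, field
    from typing import Any, List

    @dataclass
    class Scenario:
        # Illustrative container for Definition 21: s = <e0, ef, c>.
        e0: frozenset                                   # initial state: set of (variable, value) pairs
        ef: frozenset                                   # final state
        path: List[Any] = field(default_factory=list)   # alternating states and decision sets,
                                                        # plus "do nothing" markers and the
                                                        # justification for stopping

    s = Scenario(e0=frozenset({("FlightSupply", "Steady")}),
                 ef=frozenset({("FlightSupply", "High")}))
    s.path = [s.e0, {"EasyFlight": "IncreaseSupply"}, s.ef, "stop: goals achieved"]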

Scenario generation

Algorithm

The algorithm has been implemented using the general concepts formalized in Section 4.2. It is, therefore, independent of any specific data, especially the one formalized for the Use Case.

The generation of a single scenario (see the SCENARIO flowchart figure) consists of generating a succession of states of the system of agents that the user is interested in.
The transition from one state to another results from the aggregation of the decisions made by the different agents of the system. Indeed, in each state, each internal agent compares their opinion (see Definition 11) with the value of the variables in the potential state of the system which would result from the decisions considered by the agents (defined in the model as \(\mathbf{\mathcal{H}}_e\), see Definition 17 and the dotted circle in Figure 1). For every agent but the user, decisions that are either opposed to the agent's principles or lead to negative consequences are rejected; if all of an agent's possible decisions on a variable are rejected, the agent faces a moral conflict. The remaining possible decisions of the internal agents are then aggregated with the decisions of the external agents (disturbances) and compared, which may lead to logical conflicts in this potential state. Once the aggregation is completed, the corresponding actions are performed, and a new state of the system is reached. The scenario ends with a final state (see \(e_f\) in Figure 1) characterised by one of the stopping criteria defined below.
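The following Python sketch summarises this single-scenario step under simplifying assumptions. Every callable (possible_decisions, is_rejected, external_actions, aggregate, apply_actions, stopped) is a placeholder standing for the corresponding part of the formal model; none of these names come from the paper's actual code.

    import random

    def scenario(state, agents, user, possible_decisions, is_rejected,
                 external_actions, aggregate, apply_actions, stopped, path=()):
        # Illustrative sketch of the single-scenario generation described above.
        chosen = {}
        for agent in agents:                                    # internal agents
            candidates = possible_decisions(agent, state)
            if agent != user:                                   # only the user may compromise
                candidates = [d for d in candidates if not is_rejected(agent, d, state)]
            if not candidates:                                  # moral conflict: stop
                return list(path) + [state, "stop: moral conflict"]
            chosen[agent] = random.choice(candidates)           # one decision per agent
        decisions = aggregate(chosen, external_actions(state))  # add disturbances
        next_state = apply_actions(state, decisions)            # may reveal a logical conflict
        path = path + (state, decisions)
        reason = stopped(next_state, path)                      # stopping criteria listed below
        if reason:
            return list(path) + [next_state, "stop: " + reason]
        return scenario(next_state, agents, user, possible_decisions, is_rejected,
                        external_actions, aggregate, apply_actions, stopped, path)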

Potential and final state of the scenarios

The stopping criteria of a scenario are:

  • the convergence of the scenario towards a state, characterised by:

    • a logical conflict between two or more variables;

    • a moral conflict for a single agent;

    • the achievement of the user’s goals;

    • the reaching of a predefined state of interest.

  • the convergence of the scenario towards a loop.

Figure: flowchart of the SCENARIO function (from the initial knowledge and the current state: list of the possible decisions of each agent; cancellation of the decisions that are against the agent's principles or that have bad consequences; choice and aggregation of all agents' decision combinations; realisation of the corresponding actions leading to the next state, to which SCENARIO is applied again, or end of the scenario).

The generation of the entire set of scenarios is accomplished through recursion with the structure of a tree, where nodes represent system states, and edges represent sets of decisions made by agents. In contrast to the previous algorithm, instead of selecting one decision for each variable and choosing a combination of possible decisions for each agent, all possibilities are explored in loops, leading to the simultaneous generation of the complete set of scenarios. Generation stops when all possible scenarios have been explored. The scenarios are individually stored as lists.

For analysis purposes, a dictionary is also maintained, including all system states within the generated scenarios.
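A hedged sketch of this exhaustive exploration is given below. As before, the callables are placeholders for the corresponding parts of the model, and the encoding of scenarios as lists is an assumption made for the example, not the paper's implementation.

    from itertools import product

    def generate_all(state, agents, admissible, apply_actions, stopped, prefix=()):
        # Illustrative exhaustive exploration: instead of picking one admissible
        # decision per agent, every combination is explored, producing the whole
        # scenario tree; each returned list is one scenario path.
        options = [admissible(a, state) for a in agents]
        if not all(options):                             # an agent with no admissible decision
            return [list(prefix) + [state, "stop: moral conflict"]]
        scenarios = []
        for combo in product(*options):                  # one tree edge per combination
            decisions = dict(zip(agents, combo))
            next_state = apply_actions(state, decisions)
            branch = prefix + (state, decisions)
            reason = stopped(next_state, branch)
            if reason:
                scenarios.append(list(branch) + [next_state, "stop: " + reason])
            else:
                scenarios.extend(generate_all(next_state, agents, admissible,
                                              apply_actions, stopped, branch))
        return scenarios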

Since it is assumed that the initially defined agents, variables, and principles cannot change during scenario generation, the number of possible scenarios is finite. It is bounded by the number given by the formula below, which arises from a recurrence on the depth of the tree (we refer to the depth of the tree as the maximum number of states included in the generated scenarios): \[\mathbf{\mathcal{N}}_{dep} \leq \Delta^{dep}- \sum_{i=0}^{dep-1} \phi_i \Delta^{i} \quad\textrm{with}\quad \Delta=\prod_{v=1}^{\mid \mathbf{\mathcal{V}}\mid} \mid \mathbf{\mathcal{D}}_{v} \mid\] with:

  • \(\mathbf{\mathcal{N}}_{dep}\) the number of scenarios for a tree of depth \(dep\);

  • \(dep\) the depth of the tree;

  • \(\mid \mathbf{\mathcal{D}}_{v} \mid\) the total number of decisions that can be made on variable \(v\);

  • \(\mid \mathbf{\mathcal{V}} \mid\) the total number of variables;

  • \(\phi_i\) the number of scenarios that have been stopped at depth \(i\), for \(i\) from 0 to \(dep-1\).

Because of the recursive exploration, the complexity of the problem is thus exponential: the number of scenarios grows with the tree depth as a power of \(\Delta\), the product of the numbers of decisions that can be made on each variable.
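For illustration, the bound can be evaluated with a few lines of Python (math.prod requires Python 3.8, the version used for the implementation); the input values in the example are made up.

    from math import prod

    def scenario_bound(decisions_per_variable, dep, stopped_at_depth):
        # Upper bound on the number of scenarios for a tree of depth `dep`
        # (formula above); inputs are illustrative.
        delta = prod(decisions_per_variable)             # Delta = product of the |D_v|
        return delta**dep - sum(stopped_at_depth[i] * delta**i for i in range(dep))

    # e.g., six variables with 2 or 3 possible decisions each, depth 3, and
    # made-up numbers of scenarios already stopped at depths 0, 1 and 2:
    print(scenario_bound([3, 2, 3, 2, 3, 2], 3, [0, 10, 50]))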

Results

This model has been implemented with Python 3.8. The code execution can be divided into three steps:

  • data gathering: the data can be provided by the user, for instance as the result of a morphological analysis; they could also be retrieved digitally;

  • generation of scenarios;

  • analysis of the results.

We use as input the formal expression of the knowledge of the Use Case presented in Section 4.1. To generate the scenarios, we have applied the stopping criteria defined above. In our situation, we have also limited the tree depth to three. In these conditions, the scenarios are generated in about 5 minutes. With this limitation, 26358 scenarios are generated. Among them, 11026 end with a logical conflict and 25 with a loop. 435 of them reach the goals fixed by the user (see Definition 16). Moreover, the fact that agents cannot make decisions compromising their principles (except the user) limits the total number of generated scenarios: for instance, only 27 states (combinations of variables and values) are reached out of the 216 possibilities resulting from the combination of all the variables and their different values.

One of the generated foresight scenarios is shown below.

When using the initial knowledge described in the case study in Section 4.1, the algorithm provides scenarios. They take the form illustrated in Figure 2. The structure of a scenario, as presented earlier, consists of a sequence of states in which all variables are instantiated, along with the decisions made by all agents regarding each variable, which lead to those states. At the end, the stopping criterion is mentioned.

Use Case Example

In the example of the scenario shown in Figure 2, we can observe the alternation of states and agents’ decision-making. The system of agents has reached a state (State 2) which is different from the initial state.

It can be observed that the agents’ decisions are not made in response to previous states (for example, after the onset of a health crisis, the company EasyFlight decides to increase flight availability). This is due to the generation of all scenarios and, therefore, of all possible combinations of states and sets of decisions, within the constraint of not violating the principles set by the agents (except the user). There is no consideration for "coherence" in the scenario, except for that ensured by the logical conflicts.

In this example, the scenario stopped because of a logical conflict, after the system reached two different states. The logical conflict is due to agent EasyFlight wanting to IncreaseSupply and to agent GovernmentX wanting to DoNothingPolicies and therefore keep limitation policies on air transport, which is incompatible with having a High FlightSupply.

Use Case Example of one generated scenario

Such conflicts, ending various generated scenarios, are used in the analysis phase (see Section 6). They can be made explicit to the user during this phase upon request.

Analysis

The analysis of the scenarios consists in answering the user’s questions (here EasyFlight). The user’s intention can be to reach a specific goal (see Definition 16) or to have a global view of the possible futures of the system for anticipation or guidance.

In this section, we will describe some tools to analyse scenarios generated with our generation algorithm. We will illustrate these tools on the air transport system scenarios generated with the conditions described in the previous section (tree depth limited to three).

Achieving goals and avoiding conflicts

We can first retrieve the values of the variables that are never reached. They may be the first explanation for objectives that cannot be achieved.

Use Case Example

Never reached values: {\(FuelSupply: [Low],~FlightDemand: [High]\)}
However, these values do not have any impact on the achievement of the user’s goals, which are to have a High FlightSupply and a High TicketPrice.

The next question that can be asked is: Do any principles have to be compromised to achieve a given goal? By retrieving the scenarios ending with the achievement of the user’s goals (specified in the initial knowledge), we can focus on the principles that are compromised in these specific scenarios.

Use Case Example

Percentage of scenarios reaching goals in which a principle is compromised

That means that, to satisfy the complete set of their goals, the user must go against the principle Customer Satisfaction and, in 92% of the scenarios, against Wealth Creation.

A user could want a more global overview of the produced scenarios to guide themselves or others in the next years or to be prepared to deal with major changes. In this situation, the user may not necessarily have personal goals. However, they may want to prepare for conflict situations or develop strategies to avoid them. To do this, we will first clarify what is meant by decisions or variables that are ’directly’ at the origin of a conflict.

Illustration of the concept of variables and decisions "directly" at the origin of a conflict

Figure 4 represents a scenario that ends with a conflict detected in potential state 3. Variables \(v_1\) and \(v_2\) are directly involved in the conflict insofar as they appear in the incompatibility relation causing the conflict (see Definition 17). In the same way, decisions \(d_{2,v_1}\), \(d_{2,v_2}\), and \(d_{2,v_3}\) are directly present in the conflict: they are made just before the incompatibility is detected in potential state 3.

Using an algorithm that runs through all the scenarios generated, we can identify the variables and decisions that are directly involved in conflict situations. If they are necessary to trigger a conflict, their percentage of presence before the conflicts will be 100%. This means that the variable or decision is directly present for all the logical conflicts detected in the scenarios generated. A decision would be sufficient to provoke a conflict if its presence would trigger one with a probability of 1 in the potential state directly following this decision.
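Such percentages could be computed along the following lines, assuming a simplified scenario encoding where a scenario is a list alternating states and decision dicts and ending with the string "logical conflict" right after the decisions that triggered it. This is an illustrative sketch, not the analysis code behind Figures 5 and 6.

    from collections import Counter

    def direct_conflict_rates(scenarios):
        # For every scenario ending with a logical conflict, count the decisions
        # made just before the conflict and report, for each decision, the share
        # of conflicts it directly precedes (100% means the decision is necessary).
        counts, n_conflicts = Counter(), 0
        for s in scenarios:
            if s and s[-1] == "logical conflict":
                n_conflicts += 1
                counts.update(set(s[-2].values()))   # decisions made just before the conflict
        return {d: 100 * c / n_conflicts for d, c in counts.items()} if n_conflicts else {}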

Use Case Example

Percentage of the occurrence of the decisions directly involved in logical conflicts

Figure 5 above shows the percentage of the occurrence of the decisions made just before the logical conflicts. In our case study, the decision to increase FlightSupply is made directly before 91% of the logical conflicts. It is therefore not necessary to trigger a logical conflict.

Percentage of the occurrence of the variables directly involved in logical conflicts

Figure 6 above shows the same type of result but for variables. The FlightSupply variable is directly present in 100% of the conflicts in our case study. It is therefore a necessary condition to trigger a logical conflict.

These results will be discussed in the following sections; we recall that they take into account only the variables directly involved in the conflicts and the decisions directly present before them.

We can then retrieve more meaningful results by using open-source data mining algorithms [Fournier-Viger et al. 2016]. RuleGrowth [Fournier-Viger et al. 2011] makes it possible to discover sequential rules in the sequences of a database and returns, for each sequential rule of the form (\(X \Rightarrow Y\)):

  • Confidence: the number of sequences that contain \(X\) before \(Y\), divided by the number of sequences that contain \(X\);

  • Support: the number of sequences that contain \(X\) before \(Y\), divided by the total number of sequences in the database.

In our situation, the database consists of the sequences of sets of decisions made between each state of the scenarios generated by the algorithm presented in Section 5 and the end of the scenario.
We can therefore find one or more sets of decisions that are responsible for the occurrence of a conflict (a sequential rule such as: a set of decisions \(\Rightarrow\) conflict). RuleGrowth returns every sequential rule existing in the set of generated scenarios. We have then processed the results of RuleGrowth with our own tool.
A set of decisions is said to be sufficient to cause a conflict when the rule set of decisions \(\Rightarrow\) conflict is valid, which means that the probability of the set of decisions causing a conflict equals 1 (confidence = 1). If there is no sufficient set of decisions, our tool allows us to retrieve the smallest set of decisions with the maximum probability of leading to a conflict. We can finally check whether the occurrence of this set is necessary for a conflict to happen, which means that this set of decisions is included in every scenario ending with a conflict.
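The two measures, together with the sufficiency and necessity checks, can be illustrated with the following simplified Python sketch. The actual analysis relies on the SPMF implementation of RuleGrowth; the encoding of sequences as lists of item sets and the function name are assumptions made here for readability.

    def rule_stats(sequences, X, Y="logicalconflict"):
        # Illustrative re-implementation of the measures reported for a rule X => Y:
        # X is a set of decisions, Y an event, a sequence is a list of sets of items.
        def x_before_y(seq):
            seen = set()
            for itemset in seq:
                if Y in itemset and X <= seen:
                    return True
                seen |= set(itemset)
            return False
        with_x = [s for s in sequences if X <= set().union(*map(set, s))]
        with_rule = [s for s in sequences if x_before_y(s)]
        with_y = [s for s in sequences if any(Y in i for i in s)]
        confidence = len(with_rule) / len(with_x) if with_x else 0.0
        support = len(with_rule) / len(sequences) if sequences else 0.0
        sufficient = bool(with_x) and confidence == 1.0                   # X always followed by Y
        necessary = bool(with_y) and all(x_before_y(s) for s in with_y)   # every conflict preceded by X
        return confidence, support, sufficient, necessary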

Use Case Example

Using the RuleGrowth algorithm with a minimum support (occurrence rate) of 0.005 and a minimum confidence of 0.6, we can retrieve most of the sequential rules existing in the scenarios generated in Section 5. Then we can exploit the results with our tool and we find that there is no set of decisions that is sufficient for a conflict to happen. Therefore, the smallest set with the maximum probability of producing a conflict is:

The above set leads to a conflict with a confidence of 92%. Because its support equals 659 and the number of conflicts is 11026 (which is given by general statistics returned at the end of the generation), this set causes only 6% of the total number of generated conflicts. Therefore it is not necessary for a conflict to happen.

Nevertheless, the algorithm can only give information on a set of decisions, without considering the order of the decisions within it. Such information can be obtained using the PrefixSpan [Pei et al. 2004] algorithm, which returns sequential patterns from our scenario database. We can then process these results to retrieve more information.
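The kind of ordered information PrefixSpan provides can be illustrated with a simple subsequence check; the function and the example sequence below are illustrative, not part of the actual tool.

    def contains_in_order(sequence, pattern):
        # Checks that the itemsets of `pattern` occur in `sequence` in the given order.
        it = iter(sequence)
        return all(any(set(p) <= set(itemset) for itemset in it) for p in pattern)

    # e.g., does this (made-up) decision sequence increase FuelSupply and later end in a conflict?
    seq = [{"IncreaseFuelSupply", "DoNothingPolicies"}, {"IncreaseSupply"}, {"logicalconflict"}]
    print(contains_in_order(seq, [["IncreaseFuelSupply"], ["logicalconflict"]]))   # True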

Use Case Example

We use the PrefixSpan algorithm with a minimum support of 0.1. Using our tool to analyse PrefixSpan results, we can find the sequence leading to most of the conflicts:

We can exploit PrefixSpan results further to retrieve complementary information about the decisions causing a conflict. Indeed, we can relate conflicts to decisions by considering not only the decisions made just before the conflict but all the decisions made in the scenarios. However, PrefixSpan gives no information on the sufficiency of a set of decisions to cause a conflict.

Use Case Example

Using the PrefixSpan algorithm again, we can find the decision leading to most of the conflicts: IncreaseFuelSupply \(\to\) logicalconflict: support = 11026 (which is equal to the total number of scenarios ending with a conflict). One decision is therefore necessary for a conflict to happen: \(IncreaseFuelSupply\).

We can also carry out the same analysis with sets or sequences of decisions that are sufficient and/or necessary to reach the user's goals. It however requires more computational resources because there are far fewer scenarios ending in the achievement of the objectives than scenarios ending in a conflict; the sequences of interest are therefore generally rare.

Graphical representation of the scenarios

There are different ways to represent the qualitative data that constitute the generated scenarios. The first one is a parallel categories diagram. Despite the huge number of generated scenarios, it is easy to represent only a small subset of them, selected according to some criteria.
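As an illustration, such a diagram could be drawn with plotly's parallel_categories function once the selected scenarios have been flattened into a pandas DataFrame; the paper does not prescribe a plotting library, and the column names and values below are made up.

    import pandas as pd
    import plotly.express as px

    # Made-up flattening: one row per selected scenario, one categorical column
    # per piece of information to display.
    df = pd.DataFrame({
        "FlightSupply": ["High", "Low", "Steady"],
        "TicketPrice":  ["High", "Low", "High"],
        "Policies":     ["NoLimitation", "Limitation", "NoLimitation"],
        "end":          ["goals achieved", "logical conflict", "loop"],
    })
    fig = px.parallel_categories(df, dimensions=["FlightSupply", "TicketPrice", "Policies", "end"])
    fig.show()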

Use Case Example

We could represent all the scenarios that do not include any decision unfavourable to the principle CustomerSatisfaction.

In particular, we can focus on scenario 39 (highlighted in red in Figure 7). For the user, decisions leading to Low FlightSupply, the presence of LimitationPolicies, High TicketPrice, and the existence of a SanitaryCrisis are considered unfavourable. It is worth noting that these variable values are never encountered, either in the highlighted scenario or in the others.

The parallel representation of scenarios never compromising the principle "CustomerSatisfaction" from the user’s perspective.

We can also offer the user the possibility to represent the set of calculated scenarios on a graph (see Figure 8).

Scenarios representation according to principles

The scenarios are represented according to the number of decisions that are positive or negative for the user regarding their principles (see Definition 19), as a proportion of the total number of decisions made by the user in each scenario. This representation allows us to find ideal scenarios regarding one or more principles for the user, e.g., with a minimal proportion of unfavourable decisions. The overlapping blue squares and pink crosses at the bottom right of the figure (located at (1,0)) can be seen as the best scenarios for the user EasyFlight regarding the principles CustomerSatisfaction and WealthCreation, respectively. Indeed, at this location, all the decisions of the user meet the principles (x=1) and none break them (y=0). However, these "best scenarios" are not necessarily the same for both principles, so some balance may be needed between them.
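The coordinates of a scenario for a given principle can be sketched as follows; the encoding of the user's decisions and of their opinions as a simple mapping is an illustrative assumption, not the paper's code.

    def principle_coordinates(user_decisions, opinion_of):
        # Illustrative coordinates of one scenario for one principle (Figure 8):
        # x = share of the user's decisions favourable to the principle,
        # y = share of unfavourable ones.
        n = len(user_decisions)
        favourable = sum(opinion_of(d) > 0 for d in user_decisions)
        unfavourable = sum(opinion_of(d) < 0 for d in user_decisions)
        return (favourable / n, unfavourable / n) if n else (0.0, 0.0)

    # e.g., two favourable, one neutral and one unfavourable decision for a principle:
    opinions = {"IncreaseSupply": 1, "DecreaseTicketPrice": 1,
                "DoNothingSupply": 0, "IncreaseTicketPrice": -1}
    print(principle_coordinates(list(opinions), opinions.get))   # (0.5, 0.25)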

This representation could be generated for each agent of the system by considering them as the "user" in turn. We could then compare their points of view, retrieving the scenarios of best interest for each of them. The set of scenarios represented could also be limited to the scenarios ending with the achievement of the user's goals: this allows us to retrieve the principles that need to be compromised to reach the goals.

This representation of the scenarios based on the principles of the agents could also question the initial knowledge and the way the principles and the decisions were chosen and written. We can see in the example that the scenarios are distributed over almost all possible combinations of the ratios of positive and negative decisions for the CustomerSatisfaction principle, i.e., the blue squares. It means that we have scenarios that are very unfavourable to this principle for the user (with only negative decisions), others that are very favourable, and some that are more balanced, with as many positive as negative decisions. Therefore, regarding CustomerSatisfaction, various possibilities are explored. Moreover, the location of the scenarios regarding WealthCreation, i.e., the pink crosses, shows that in the example there are no scenarios in which all the user's decisions break this principle (which would be located at (0,1)): they can all be found on the diagonal, between the coordinates (0.5, 0.5) and (1,0). This is due to the other agents, who cannot compromise their principles (see the main assumption of the model in Section 4). This results in decisions that cannot be taken and states of the system that will never be reached; the situations in which the user would take such unfavourable decisions therefore never happen. Finally, all the scenarios regarding the principle EnvironmentalProtection, i.e., the green circles, are at the origin of the graph. Indeed, the user EasyFlight is neutral towards this principle, therefore their decisions cannot be positive or negative towards EnvironmentalProtection. This assumption could be changed to observe the possible evolutions of the system.

Discussion

This section deals with the issues raised by this approach, whether caused by biases in the formal model, in the knowledge used, or in the questions addressed in the analysis, and with the methods offered to address them.

About the initial knowledge

The choice of the initial knowledge is not part of the work presented here, but it is worth discussing. It has been said earlier that, in a foresight process, knowledge comes from the past (usually to generate trends) and/or from workshops. In workshops, it can be chosen by experts, stakeholders, researchers, or a combination of those. These participants may have cognitive biases that come from their previous experiences and opinions and may be, intentionally or not, purpose-driven. Whatever the composition of the group, it is also worth noting that knowledge cannot be exhaustive and perfectly describe a system of agents, because no one is omniscient.

Our model also relies on initial knowledge that can be the results of workshops and affected by the biases mentioned above. In most situations, the user, i.e., the stakeholder that has initiated the foresight study, provides this initial knowledge. They select the variables, agents, and decisions of the system they want to explore. They can give their own positions (Definition 10) on the principles and opinions (Definition 11) but have to make assumptions about other agents’ situations. The user may know the positions or opinions of some other agents of the system, but they can also deduce this information from their past actions, for example.

To generate the scenarios introduced in Section 5, we have chosen initial knowledge based on facts and scenarios published in the aviation sector. In doing so, we may be influenced by the same biases as those described above. Moreover, we have arbitrarily attributed positions and opinions to the user and the other agents, taking into account what is expressed on their websites or their various publications; in particular, their internal and maybe "hidden" beliefs cannot be considered.

We have also made assumptions about the nature of a decision to illustrate the function \(NegPrinciple\) (see Definition 18). Here, the action related to the decision is judged, rather than its consequences. This judgment has been made arbitrarily and is influenced by our point of view. In the example, the decision PromoteAviationThroughGreenwashing has been considered against the principle of Honesty, even if the consequences of this decision would be to increase the FlightSupply and not to go against any principle. This reflects a deontological point of view.

In addition, because there is no causal effect between the variables (this subject will be discussed in the next section), it is easier for us (or the stakeholder gathering the knowledge) to assess the consequences of a decision for each agent. Such information is then used in a moral conflict situation when the function BadConsequence is called (see Definition 19). In fact, only the direct consequences of a decision are considered. For example, the decision IncreaseFlightSupply only changes the variable FlightSupply to the value High and has no impact on the value of the variable \(SARS-Cov-2\).

Finally, three different issues can be noticed concerning the choice of the principles:

  • The difference between values, principles, and goals is thin, and there is no consensus in the literature, which can confuse the provider of the knowledge;

  • It is difficult to select universal principles, as principles usually depend on the cultural and social context;

  • The choice may also be reduced to principles confirming the purpose of the foresight study (confirmation bias). We could have a situation where the user only selects principles that they support.

We have moreover chosen not to use probabilities in our initial data; indeed, they can introduce a methodological bias if not supported by verified scientific data. They can also originate from incomplete models and are not well-suited for characterizing societal changes or disruptive events.

To conclude, it is worth keeping in mind that the generated scenarios reflect the perspective of the provider of the initial knowledge.

About the formal model

Some concepts presented in this paper, such as systems, variables, or values, are also part of the Scenario Method of M. Godet, which has been used in various studies and fields. The scenario generation algorithm that implements the formal general model has been tested on knowledge about the future of the air transport sector (see our Use Case); however, it is intended to be general and to support knowledge describing any sector. Nonetheless, for some applications, it might be difficult to represent some concepts that have not been initially considered, even if the formalisation enables us to ask questions that may not have been raised, particularly by distinguishing the various types of agents, the specific decisions they can make, and by clarifying the different types of conflicts that can arise in a scenario.

Several assumptions may be discussed (see Section 4):

  • "There is no system dynamics" and "there is no variable that no agent can control": The first assumption implies that there is no causal effect between the variables and their changes of values; the variables are independent of one another. Therefore, they need the action of an agent to change their values. The second assumption prevents the current model from including a variable that could not change its values. Changing these assumptions would allow new features to be considered:

    • breaking the independence of the variables may allow the model to get closer to the reality where usually an action has consequences that are not limited to the values of the variables directly involved;

    • including variables whose values can be changed by causal effects could also be another way to represent disturbances in the system or global actions (such as global warming);

    • considering additional global variables may give the analysis a higher level of abstraction. For example, a global variable FlightSupply could summarise the value changes of ShortFlightSupply and LongHaulFlightSupply.

  • "The system is closed" means that no variable, decision, or principle can be added during the generation of the scenarios. However, in the real world, new decisions that have never been considered before can be made in a crisis situation, e.g., new variables can be considered after a technological breakthrough; new principles may be important in a war situation, for example, or after a deep change in society.

  • "The closed-world assumption" [Reiter 1981] means that what is not known to be true is false. Indeed, there are no consequences for actions that are not written in the initial knowledge. This is different in the real world where unpredictable consequences of actions may happen.

  • "In each state, the internal agents must make decisions (respectively actions if the agent is external)". In the real world, not making a decision differs from making a decision to do nothing. However, we choose to formally consider them both as decisions. This choice results in considering the decisions to do nothing the same as other decisions, and they are included in the scenario path. This information could be used afterward to identify the agents who caused a conflict (see 6). Indeed, deciding not to do anything or not deciding involves the responsibility of the agents and can have consequences that are worth noticing.

  • "Internal agents other than the user cannot make decisions that compromise their principles". As discussed in the previous section, the user, who is usually the knowledge provider, can attest to their own opinions and positions. However, they have to make assumptions about other agents’ information, such as their positions and opinions, to provide enough knowledge for the model to work. This supports the choice we have made to assume that no agent but the user could compromise their principles. Otherwise, it could increase the complexity. Indeed, much more scenarios would be generated upon these assumptions, but it would be complicated to capture foresight conclusions.

We have also assumed that the decisions made by the agents, apart from the decisions to do nothing, will always be turned into actions, as no agent can change their mind or prevent another agent from performing an action after the aggregation of all decisions. This may reduce the uncertainties faced in the real world where agents can both change their minds and/or prevent other agents from performing actions.

Moreover, we have chosen not to model uncertainties through probabilities to avoid the bias introduced by the attribution of arbitrary probabilities to decisions and events (see Definition 15).

External agents

It can be noticed (see Definition 7) that all external agents are put on the same level. A realistic viewpoint would be, however, to make a difference between a disturbance (e.g., a sanitary crisis, a natural disaster, a terrorist attack, etc.) and an external organisation constraining the system (e.g., an international organisation, a government, a company whose business sector would be external to the studied system, etc.).

Conflicts

A logical conflict (Definition 17) is an incompatibility between two or more variables. These incompatibilities of values are specified in the initial knowledge. However, no formal difference is made between an incompatibility defined by physical laws and an incompatibility judged as such by the knowledge provider. The latter may be subject to the biases discussed in the previous section. A moral conflict (Definition 20) focuses on a single agent who cannot make a decision on a variable in a given state without compromising their principles. It is based on [Bonnemains 2019], who already discussed its formalisation. The updated definition we have proposed offers only the user, not the other agents, the opportunity to solve the moral conflict (by ordering the principles, for example) during the scenario generation. As far as the user is concerned, if they are in a moral conflict situation, they can compromise their principles; indeed, when they face such a situation, every possible resolution of the conflict is explored, and a scenario is generated for each outcome.

About generation and analysis

Positions and opinions of the agents

Even if the model allows it, the positions and the opinions of the agents (see definitions 10 and 11) do not change during the generation of scenarios. This could be questioned; for example, in a sanitary crisis, more importance could be given to the principle HealthPreservation. An interactive interface would help to change this piece of knowledge while computing the scenarios.

Time

Time is not explicitly considered in our model. Indeed, we have chosen to consider sequences rather than attributing a duration to each state. The stopping criteria within the scenario generation are therefore specific events like conflicts and scenario patterns like loops, rather than the definition of any time horizon.

Complexity

An obvious limitation of this type of algorithm is its complexity, which is exponential. Due to limitations in computational resources, we have limited the depth of the scenario tree to three for the aviation scenarios generated here. Options to overcome this limitation are presented in the future works section. However, this high complexity may or may not be an issue depending on the use of the provided tool. Indeed, the tool could be used in two ways. The first one is the generation of scenarios to explore the possible futures of a system. In this situation, a high computational time to generate the scenarios may not be an issue. The second one is when an organisation wants a quick answer to make a decision or wants to be able to interact with the tool and explore possibilities by modifying or adding initial knowledge. In such a case, the high complexity of the algorithm may be an issue, as fast results would be needed.

Analysis

The causes of a logical conflict can be analysed in two ways: (i) considering the variables or decisions right before the conflict; and (ii) considering all the variables and decisions included in the whole scenario paths leading to the conflict. These two types of results can be different. The order of appearance of the decisions in the scenario could also be considered. When the analysis aims at recovering the scenarios leading to goal achievement, it must be highlighted that every result illustrates the point of view of an agent, particularly the user. Indeed, it is generally not possible to globally qualify a scenario as ideal for all agents. This notion will always depend on their opinions, as each of them can consider a decision as favourable or not to a principle.

Numbers

Attention must also be paid to the representation and the meaning of numbers. Indeed, a user should be very careful when manipulating and using numbers in the analysis section because the same number may have different meanings and be subject to interpretation. For example, considering the intent to find the best scenarios for the user, we had to characterise the term "best scenario" and how to calculate it. For instance, the ratio between the number of favourable decisions and the number of unfavourable decisions towards a principle for the user in a scenario could be used to identify the best scenarios (see 6.2). Consequently, scenario A with 3 favourable decisions and 5 unfavourable ones, and a scenario B with 6 favourable decisions and 10 unfavourable ones, will be judged the same; however, in scenario B, more negative decisions have been taken. Therefore, such metrics must be supplemented by other considerations on the scenarios. To conclude, let us stress again the fact that the results are knowledge-dependent and should not be taken as a reliable prediction of the future or the consequences of particular actions. As part of the foresight domain, this approach only intends to guide, prepare, and take a new perspective on a particular subject.

Conclusion and Future works

Conclusion

We have introduced in Section 2 of this paper a sample of reports and methods focusing on the generation of scenarios on the future of the air transportation system. For both main future planning approaches, namely forecasting and foresight, we have highlighted the different biases included in the existing methods and results:

  • biases coming from the initial knowledge: using past data may not produce scenarios outside the trend (e.g., no mention of a possible decrease in the aviation sector in the scenarios of airline companies);

  • methodological biases: a limited number of scenarios is usually produced; however, their combination is likely to answer a user’s expectations (e.g., Ademe’s scenarios);

  • cognitive biases: they go with the opinions and experiences of the participants, whether they are experts or not.

We have presented a formal and automated method for generating scenarios about the future of the air transportation system. This tool can be used for decision-making, guidance, or risk and conflict anticipation. Scenarios are produced as successions of states and decision sets made of qualitative data. They are enriched by taking into account the principles of the stakeholders, which are usually not considered in decision support. Our goal was to overcome existing biases in the current methods. Generating an exhaustive set of scenarios prevents excluding, intentionally or not, controversial scenarios, e.g., crisis scenarios.

As far as the scenario analysis is concerned, the representation of the results highlights the potential imbalance inside the resulting data: such a situation can easily be discussed and/or changed by modifying the initial database. Moreover, the formally generated data allow the use of data mining and data representation algorithms for results analysis. We can therefore give answers to users about how they can achieve their goals and avoid conflict situations due to some decisions. This type of result is usually not provided with qualitative scenarios because their production is usually seen as the objective, leaving the analysis to the user alone.

Considering the results themselves, the scenarios produced here show that the user, i.e., the stakeholder who has initiated the foresight process, must go against the principle Customer Satisfaction to achieve their goals, especially to have a High TicketPrice. This result may be obvious here, but in a more complex system, it could reveal contradictions and a need to prioritize some goals. The performed analysis allows us to say that the decisions directly responsible for conflict situations and under the control of the user (the airline company EasyFlight) are DoNothingTicketPrice and DecreaseTicketPrice. However, these decisions must be related to the other agents' own decisions. In fact, with the algorithm PrefixSpan, the whole paths of the scenarios are taken into account, revealing that the decisions causing conflicts are mostly taken by the agent SuperFuel.

Future works

Formal model assumption

For future works, our assumptions for the formal model may be reconsidered. A first modification could concern the consideration of time: creating a timeline could answer the question of a defined horizon. It would then be possible to change the stopping criteria of the scenarios. Furthermore, one could imagine, for instance, adding agents or variables during the course of a scenario. However, generating scenarios from a much larger amount of data (compared to the system studied here) raises new issues.

Combinatorial complexity

As the complexity is exponential, the generation requires a lot of time and computational resources. Several leads can be considered to overcome this challenge if the usage of the proposed tool requires it (see Section 7.3). The first one would be to generate a given number of random scenarios (stochastic sampling). Another one would be to define metrics to qualify whether a user gets closer to the achievement of their goals and to use methods such as Monte Carlo Tree Search, also used in decision-making. Overcoming this challenge could allow us to generate scenarios on the future of the air transport system with much more detail and including more agents. Providing a final analysis of the produced scenarios by grouping them according to similarity criteria could also be a way to see the big picture by considering a small number of groups of scenarios.

Uses

Any system of agents can be dealt with using our formal tool (e.g., in-orbit manufacturing, aviation, societal movements, changes in behaviours, or usage patterns...). The initial knowledge could be quantitative and come from forecasting models, as long as it is made qualitative so that the symbolic processing described in this article can be carried out.

Validation

Finally, the validation of the model is still to be done. However, few foresight studies include a validation process in their work, and it raises many questions:

  • Do the generated scenarios include the "true" future? Some ideas suggest modelling a past situation, generating the scenarios from it, and seeing whether the generated scenarios include the real past events. However, the aim of our model is not to foretell the future; it is almost certain that the generated scenarios will not include the real future. They may, however, help decision-makers to consider new strategies to reach their goals or to anticipate crises. Therefore, it may be more useful to validate the actual utility of the scenarios.

  • Are we not missing a really important situation? How can we be sure that we cover all the possible futures? First, it depends on the initial knowledge. Then, the exhaustive generation we offer here could be an answer, provided the required computational resources are available. Another answer could be the implementation of an empirical validation by experts; it would, however, be subject to the biases of those experts.

  • How can we assess the quality of the scenarios? Some criteria have been used in the literature to validate scenarios [Crawford 2019], but they have no formal definition. Moreover, formally defining our own criteria to validate our own work seems risky and biased.

  • How can we assess the actual impact of the scenarios inside organisations? The impact of foresight approaches in companies is indeed a research subject. Studies are conducted on how to measure the mental shifts of stakeholders after participating in a foresight study [Rhisiart et al. 2015], but also how to measure the profit for a company using such work [Rohrbeck and Kum 2018].

There is no consensus in the literature on how to answer these questions. The definition of validation criteria or the use of alternative approaches to answer them need to take into account the possible introduction of new and unanticipated biases. It must be done keeping in mind that foresight is not looking into a crystal ball but implementing a thinking process and helping to make decisions considering possible futures.

Acknowledgement

We are grateful to all the colleagues participating in the study who made it possible to obtain these outputs: Catherine Tessier, Claire Saurel, Claire Sarrat, Brieuc Danet, Thomas Chaboud (ONERA) and Isabelle Laplace (ENAC). Special thanks are addressed to Catherine Tessier and Claire Saurel for their reviews of the paper and their advice.

Funding statement

This work was carried out as part of the author’s doctoral research within the federation ONERA-ISAE SUPAERO-ENAC. We would like to thank ONERA for providing resources for this work and Region Occitanie for the contribution to the PhD grant.

Open data statement

The initial knowledge to elaborate the scenarios presented in this paper can be found on GitHub: https://github.com/onera/GAFS.git

Reproducibility statement

The source code can be found on GitHub: https://github.com/onera/GAFS.git.

It is available with the process used to elaborate the scenarios presented in this paper.

Airbus. 2019. Global market forecast 2019-2038.
Amer, M., Daim, T.U., and Jetter, A. 2013. A review of scenario planning. Futures 46.
Ministère des Armées. 2021. La red team dévoile ses nouveaux scénarios de menaces et de conflictualités à l’horizon 2030-2060.
Batrouni, M., Bertaux, A., and Nicolle, C. 2018. Scenario analysis, from BigData to black swan. Computer Science Review 28.
Berger, G. 2007. L’attitude prospective. In: P. Durance, ed., De la prospective. Textes fondamentaux de la prospective française 1955-1966. L’Harmattan.
Blanchard, C., Saurel, C., and Tessier, C. 2021. Futurs possibles d’un système d’acteurs : formalisation et génération automatique de scénarios. 115–122.
Boeing. 2022. Commercial market outlook 2022-2041.
Bonnemains, V. 2019. Formal ethical reasoning and dilemma identification in a human-artificial agent system.
Bootz, J.P., Michel, S., Pallud, J., and Monti, R. 2022. Possible changes of Industry 4.0 in 2030 in the face of uberization: Results of a participatory and systemic foresight study. Technological Forecasting and Social Change 184.
Bootz, J.P., Monti, R., Durance, P., Pacini, V., and Chapuy, P. 2019. The links between French school of foresight and organizational learning : An assessment of developments in the last ten years. Technological Forecasting and Social Change 140.
Bulchand-Gidumal, J. and Melian-Gonzalez, S. 2021. Post-Covid-19 behavior change in purchase of air tickets. Annals of Tourism Research 87.
Comac. 2020. Comac market forecast 2020-2039.
Committee on Climate Change. 2019. Net zero - technical report.
Cordova-Pozo, K. and Rouwette, E. 2023. Types of scenario planning and their effectiveness. A review of reviews. Futures 149.
Crawford, M.M. 2019. A comprehensive scenario intervention typology. Technological forecasting and social change 149.
Davis, P.K., Bankes, S.C., and Egner, M. 2007. Enhancing strategic planning with massive scenario generation. The RAND Corporation, National security research division.
Delbecq, S., Fontane, J., Gourdain, N., Mugnier, H., Planès, T., and Simatos, F. 2021. Référentiel ISAE-SUPAERO aviation et climat.
Dray, L., Schäfer, A.W., Grobler, C., et al. 2022. Cost and emissions pathways towards net-zero climate impacts in aviation. Nature Climate Change 12.
Ducot, G. and Lubben, G.J. 1980. A typology for scenarios. Futures 12.
environnement, I.C. 2022. Elaboration de scénarios de transition écologique du secteur aérien.
EREA. 2010. EREA vision for the future - towards the future generation of air transport system.
EREA. 2021. EREA vision study - the future of aviation in 2050.
Fleming, G.G. and Lépinay, I. de. 2019. Environmental trends in aviation to 2050. In: I.C.A. Organization, ed., Destination green, the next chapter: 2019 environmental report. 17–23.
Fleming, G.G., Lépinay, I. de, and Schaufele, R. 2022. Environmental trends in aviation to 2050. In: I.C.A. Organization, ed., Innovation for a green transition : 2022 environmental report. 24–31.
Fournier-Viger, P., Lin, C.W., Gomariz, A., et al. 2016. The SPMF open-source data mining library version 2. Springer LNCS 9853, 36–40.
Fournier-Viger, P., Nikambou, R., and Tseng, V.S. 2011. RuleGrowth: Mining sequential rules common to several sequences by pattern-growth. ACM Press, 954–959.
Godet, M. 2007a. Manuel de prospective stratégique: Une indiscipline intellectuelle. Dunod.
Godet, M. 2007b. Manuel de prospective stratégique: L’art et la méthode. Dunod.
Gordon, T.J. Trend impact analysis. In: Futures research methodology. 1–19.
Grewe, V., Rao, A., Grönstedt, T., et al. 2021. Evaluating the climate impact of aviation emission scenarios towards the Paris agreement including COVID-19 effects. Nature Communications 12.
Air Transport Action Group. 2021. Waypoint 2050.
Guillen-Royo, M. 2022. Flying less, mobility practices, and well-being: lessons from the Covid-19 pandemic in Norway. Sustainability: Science, Practice and Policy 18.
Hajer, M.A. and Pelzer, P. 2018. 2050-An Energetic Odyssey: Understanding ’Techniques of Futuring’ in the transition towards renewable energy. Energy Research and Social Science 44.
IPCC. 2023. Climate change 2023: Synthesis report. A Report of the Intergovernmental Panel on Climate Change. Contribution of Working Groups I, II; III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Core Writing Team, H. Lee; J. Romero (eds.)]., Geneva, Switzerland.
Januschowski, T., Gasthaus, J., Wang, Y., et al. 2020. Criteria for classifying forecasting methods. International Journal of Forecasting 36.
Keseru, I., Coosemans, T., and Macharis, C. 2021. Stakeholders’ preferences for the future of transport in Europe: Participatory evaluation of scenarios combining scenario planning and the multi-actor multi-criteria analysis. Futures 127.
Khademi-Jolgehnejad, A., Ahmadi-Kahnali, R., and Heyrani, A. 2021. Developing Hospital Resilient Supply Chain Scenario through Cross-Impact Analysis Method. Depiction of Health 12.
Klöwer, M., Allen, M., Lee, D., Proud, S., Gallagher, L., and Skowron, A. 2021. Quantifying aviation’s contribution to global warming. Environmental Research Letters 16.
Ködding, P., Koldewey, C., and Dumitrescu, R. 2023. Scenario-Based Foresight in the Age of Digital Technologies and AI. In: A. Shajek and E.A. Hartmann, eds., New digital work: Digital sovereignty at the workplace. Springer International Publishing, 51–67.
Lee, D.S., Fahey, D.W., Skowron, A., et al. 2021. The contribution of global aviation to the anthropogenic climate forcing for 2000 to 2018. Atmospheric Environment 244.
Li Vigni, F. 2022. Companion Modeling and "Committed Scenario-Building". For a Richer Taxonomy of Futures. Journal of Futures Studies 26.
Mangnus, A.C., Oomen, J., Vervoort, J.M., and Hajer, M.A. 2021. Futures literacy and the diversity of the future. Futures 132.
Mauksch, S., Gracht, H.A. von der, and Gordon, T.J. 2020. Who is an expert for foresight ? A review of identification methods. Technological Forecasting and Social Change 154.
Muiderman, K., Vervoort, J.M., Gupta, A., and Biermann, F. 2020. Identifying four approaches to anticipatory climate governance: Varying conceptions of the future and their implications for the present. Wiley Interdisciplinary Reviews: Climate Change 11, 6.
United Nations. 2015. Transforming our world: The 2030 agenda for sustainable development.
United Nations. 2021. Interagency report for second global sustainable transport conference.
NLR and SEO Amsterdam Economics. 2021. Destination 2050. A4E; ACI Europe; ASD; ERA; CANSO.
Oliveira, A., Barros, M. de, Carvalho Pereira, F. de, Gomes, C., and Costa, H. da. 2018. Prospective scenarios: A literature review on the Scopus database. Futures 100.
Pei, J., Han, J., Mortazavi-Asl, B., et al. 2004. Mining Sequential Patterns by Pattern-Growth: The PrefixSpan Approach. IEEE Transactions on Knowledge and Data Engineering 16.
Petropoulos, F., Apiletti, D., Assimakopoulos, V., et al. 2022. Forecasting: theory and practice. International Journal of Forecasting 38.
Planès, T., Delbecq, S., Pommier-Budinger, V., and Bénard, E. 2021. Simulation and evaluation of sustainable climate trajectories for aviation. Journal of Environmental Management 295.
The Shift Project and Supaéro Décarbo. 2021. Pouvoir voler en 2050 : quelle aviation dans un monde contraint ?
Reiter, R. 1981. ON CLOSED WORLD DATA BASES. In: B.L. Webber and N.J. Nilsson, eds., Readings in artificial intelligence. Morgan Kaufmann, 119–140.
Rhisiart, M., Miller, R., and Brooks, S. 2015. Learning to use the future: developing foresight capabilities through scenario processes. Technological Forecasting and social change 101.
Rohrbeck, R. and Kum, M.E. 2018. Corporate foresight and its impact on firm performance: A longitudinal analysis. Technological Forecasting and social change 129.
Schirrmeister, E., Göhring, A.-L., and Warnke, P. 2020. Psychological biases and heuristics in the context of foresight and scenario processes. Futures and Foresight Science 2.
Shparberg, S. and Lange, B. 2022. Global market forecast 2022-2041. Airbus.
Spaniol, M.J. and Rowland, N.J. 2018. The scenario planning paradox. Futures 95.
Spaniol, M.J. and Rowland, N.J. 2019. Defining scenario. Futures foresight science 1, 1.
Talberg, A., Thomas, S., Christoff, P., and Karoly, D. 2018. How geoengineering scenarios frame assumptions and create expectations. Sustainability Science 13, 1093–1104.
UNESCO. 2021. Recommendation on the Ethics of Artificial Intelligence.
Wack, P. 1985. Scenarios: Uncharted waters ahead. Harvard Business Review 85516.
Withycombe Keeler, L. and Bernstein, M.J. 2021. The future of aging in smart environments: Four scenarios of the United States in 2050. Futures 133.