The Making of International Communication Standards: Towards a Theory of Power in Standardization

Abstract: In this literature review, we provide an overview of the history of international communication (ICT) standards to argue that a comprehensive theory of standardization needs not only to be a theory of technology or institutional configuration(s), but also a theory of power. Such a theory should account for three forms of power, operating respectively in the realms of economy (control over capital), politics (control over practice), and ideology (control over rationalities). By reviewing the evolution of ICT standards from the telegraph to the Internet and wireless telecommunication technologies, we show how these different aspects of power play significant roles in standardization. Finally, we explore whether there is a role for the public interest in such a theory of power in standardization.


Introduction
International communication standards are the scaffolding of our digital life. They make possible several key daily activities, from accessing healthcare via digital identity services to working and studying remotely during the COVID-19 pandemic, from connecting Internet of Things home devices to communicating with friends and family in encrypted chat channels. Resulting from complex processes of negotiation and consensus-building, standards govern the compatibility and interoperation of the technological products of everyday use.
Unsurprisingly, standards have become a central mechanism in the contemporary global governance toolkit (Abbott and Snidal 2001). In Standards: Recipes for Reality, sociologist Busch argues that standards are the invisible infrastructure of 'our technical, moral, social and physical worlds' as they shape 'the physical world around us but also our social lives and even our selves' (2011, 2). Because of their ability to shape our world, standards are intimately connected to power. Although they are often invisible to end users, they do structure everyday life (Lampland and Star 2008). They both modify and interact with the world and the objects they govern (Higgins and Larner 2010a); they do not merely describe but actively prescribe and form reality. This led critical media theorists Lovink and Rossiter to observe that '[w]hoever sets the standard has the power' (2011, 426). Yet, despite the widespread adoption of standards across a myriad of human activities, standard setting is largely an opaque process that is hardly visible, let alone accessible, to the general public. Standard takers have been excommunicated from the processes that determine how people, for instance, communicate (Galloway, Thacker, and Wark 2013).
These observations raise some questions. How do international communication standards come into being? Who is involved in the process of developing reproducible technical standards? What 'object conflicts' (Hess 2005) and value struggles contribute to their making? This paper reviews existing literature on standard making in disciplines as diverse as sociology, media studies, geography, science and technology studies (STS), and law, with two main goals. First, it takes stock of the complex dynamics of standardization, exposing, among other things, how the process of creating standards, or standardization, creates new 'spaces of political rule' (Barry 2001, 63), how standards are a product of collective action, and the role played in the process by overt or covert power dynamics. Second, the paper sets the basis for a much-needed normative theory of standardization that centers on the power dimension and foregrounds standardization in the public interest. To develop our argument, we focus on so-called organized civil society, namely the realm of human action outside market and state dynamics, for it is the actor with the weakest standing (and the least power) in the standard-making process. The organized civil society sector includes organizations that represent the preferences of technology users and engage in what we have previously called 'bottom-up design', or the process of inscribing unconventional policy preferences, such as human rights or the public interest, into governance fora and institutions (Milan and ten Oever 2017).
Power is here defined, following sociologist Max Weber, as the ability to exercise one's will over others, or 'the probability that one actor within a social relationship will be in a position to carry out his own will despite resistance, regardless of the basis on which this probability rests' (1978, 53). At the core of this definition is the idea that power is inherently conflictual, and that relationships of power are distinct from other social relationships because of the conflict of interest they embed. With this (deliberately broad) definition of power in mind, and acknowledging the intrinsic link between standardization and power dynamics that other scholars have already exposed (see, e.g., Galloway 2006), we argue that a comprehensive theory of standardization needs not only to be a theory of technology or institutional configuration(s), but also a theory of power. In this paper, we outline the core tenets of such a theory of standardization and power, taking Weber's definition as a starting point but operationalizing it by building on the three-fold typology of power proposed by geographer Richard Peet (2007). Peet distinguished between three forms of power, intervening respectively in the realms of the economy (control over capital), politics (control over practice), and ideology (control over rationalities). We embrace geographic thinking because geographers help explore 'the rules governing the space of everyday life' (Easterling 2014, 11), and contribute to translating and connecting concepts from different disciplines to physical lifeworlds, of which information infrastructure is an example. We ground our argument on standards that relate to Information and Communication Technologies (herein, ICTs), for they are at the core of many socio-political and economic activities across the world: think, for example, of the 5th generation (5G) of cellular networks, but also of digital identity schemes and the popular digital vaccination certificates (Milan et al. 2021).
The paper is organized as follows. First, it surveys the history of ICT standards, highlighting the power struggles that have characterized it over time. Second, it defines standards by zooming in on the peculiarities of ICT standards. Third, it characterizes standardization actors and their values, with a focus on the public interest. Although frequently invoked in internet governance and standard-making processes, the notion of public interest remains a fuzzy one, also in the literature. For the sake of this article, we join DeNardis in associating the public interest with 'concerns such as privacy, access to knowledge, and freedom of expression' (2012, 721). Finally, it outlines the core tenets of a normative theory of standardization able to account for and understand the power dimension associated with ICT standards from a public interest perspective.

ICT standardization across time
Standardization has a long history. Standards historians have shown how standards were essential to trade because of their ability to reduce risk, transaction costs, and issues of incompatibility. For example, technical standards such as measurements and standardized exchange values facilitated trade in the Indus valley during the Harappan civilization, in what is currently known as Western India and Pakistan, since roughly 1800 B.C. (Thapar 1975; Khan and Lemmen 2013). However, despite the abundance of examples of standards throughout history, ranging from ancient China to Peru, histories of modern standardization processes often start with the industrial revolution and the telegraph, which was the first application of electricity. This paper is no exception. From the telegraph, it then moves to the telephone and radio transmissions, to describe the incipient telecommunication governance regime that evolved around these crucial technological innovations of modernity. Next, it reviews the standardization of the Internet and mobile telephony, focusing on the case of 5G.

The Telegraph
The first transatlantic cable was laid by the Atlantic Telegraph Company between Ireland and Newfoundland in 1858. The cable took two years to build, and it took 16.5 hours for the first message, from Queen Victoria of the UK to US President James Buchanan, to be transmitted. Interestingly, the demand for standardization was the result of the failure rather than the success of these early-day experiments. In the six weeks that followed the first telegraphed message, the signal deteriorated and eventually failed completely (Schwartz and Hayes 2008). The failure was investigated by a joint committee of the Atlantic Telegraph Company and the British government, which resulted in a report stating that future contracts for the laying of such cables should specify standard resistances. But for this to happen, standardized electrical units were needed. This led to the establishment in 1861 of the Committee on Electrical Standards at the British Association for the Advancement of Science (BAAS), which started releasing official standards in 1865 (Grant and Thompson 2019).
Next to scientific and technical questions, the telegraph also gave rise to distinct economic models to govern telecommunications. While in continental Europe the telegraph systems were in the hands of state agencies, in the US the telegraph network was operated by the Western Union Telegraph Company, which was responsible for about 90 percent of traffic as of 1866 (Wolff 2013). While the telegraph started as an industry-led invention, it was soon regarded in many countries as a public good and practically an equivalent of the postal system (Richardson 2015). As a result, standards were set not just for voltage, resistance, and insulation, but also for tariffs and revenue distribution (ibid.). Meanwhile, many countries saw the nationalization of the telegraph system (Hochfelder 2000) or the consolidation of companies into de facto national monopolies (Winseck 1999). In addition, the introduction of the telegraph contributed to the standardization of language. Since words were far more expensive by telegraph than by post, a more universalized '"scientific" language' was consolidated (Carey 1983, 310). But the innovation spearheaded by the telegraph did not stop there: it also contributed to disconnecting communication from transport, helped coordinate military operations, and impacted colonialism, with power shifting from local governors to imperial capitals (Carey 1983). The tension between countries and corporations that resulted from these developments has been described along the axes of territory, capital, and technology (Zajácz 2019). In Reluctant Power, Zajácz illustrates how regulation increasingly sought to shape and influence territory, capital, and technology in a reluctantly emerging information policy regime designed to allow one party an uninterrupted flow of information while denying the same to its enemy or competitor.

The Telephone
The development of the international telegraph system was relatively short-lived because soon an innovation emerged: the telephone. In 1876, Alexander Graham Bell received the US patent for the telephone and started the first commercial telephone services. In 1885 in Berlin, the International Telegraph Union (ITU) started developing international rules to support international telephone traffic. Telephone wires largely used the same network of poles as the telegraph, and the policy issues that arose were also similar, namely the negotiation of rates and the sharing of revenues. This gave rise, in 1925, to two committees assigned to solve these issues: the International Long-distance Telephone Consultative Committee (CCIF) and the International Telegraph Consultative Committee (CCIT), later merged into the International Telephone and Telegraph Consultative Committee (CCITT). These bodies oversaw the standardization and interconnection efforts of these emerging transnational networks, highlighting the interconnection between networking technologies, institutions, standards, topography, and topology.

Radio Transmissions
At the turn of the 20th century, the first transatlantic radio transmissions, including experimentation with voice and music, took place. Once more, communication failure paved the way for standardization. In 1902, the Prince of Prussia attempted to send a radio message to the US president from his vessel, but the message failed because the offshore transmission equipment was not compatible with the onshore equipment of a different nationality. This led Germany to call for the organization of the International Radio Telegraph (IRT) conference in Berlin in 1906 (Richardson 2015), where the International Radio Union (IRU) was established (Drake 2000). Up to 1932, the IRU and the ITU remained distinct entities, producing regulations for radio and cable telegraph transmission respectively. The Berlin conference combined standard setting for both telegraph and radio by outlining the rules for sending and receiving telegraph messages by radio signals, and coordinated the use of the radio spectrum. From then on, the international telegraph signal was no longer transported only via cables, but also via the air. The conference mainly focused on establishing interoperation between onshore and offshore signals from different manufacturers, exposing once again the role of governments and industry in early standardization, especially in matters of safety and security. This nascent international telecommunications regime lasted for about a century and a half. Some scholars call this coordination system the ancien régime (Drake 2000, 4).

Inside the ancien régime
The ancien régime was epitomized by the International Telecommunication Union, one of the world's oldest still-functioning international organizations, which was formed in 1932 as a merger of the International Telegraph Union and the International Radio Telegraph Convention. To this day, governments come together in the ITU to set standards in a multilateral framework that underpins interoperation between sovereign national networks. This approach contributed to reinforcing European domestic monopolies and imperialist power structures over overseas colonies. Private companies, most of which were UK-based, owned and operated intercontinental submarine cables.
Standardization in other fields was dominated up to the 1980s by national standards bodies such as the German Deutsches Institut für Normung (DIN) and the UK's British Standards Institution (BSI) (Mattli and Büthe 2003). Even though the International Organization for Standardization (ISO) and its sister organization, the International Electrotechnical Commission (IEC), were established in 1947 and 1906 respectively, the telecommunications regime, and the associated standard-setting dynamics, were part of a 'global' regime from early on. This was largely due to imperial control as well as economic reasons, and the fact that telecommunications were an inherent part of the military-industrial complex, but it was also because the epistemic community working on telecommunication services and equipment believed that national monopolies formed the firmest basis for international communications by telephone, telex, and telegram (Cowhey 1990).
The United States was never as fully part of the ancien régime as European countries were. The US only reluctantly developed governmental telecommunications policies (Zajácz 2019). Instead, the system was characterized by the de facto monopoly of Western Union for the operation of the telegraph, and of the Bell Company for telephone communications (Thompson 1972). The latter grew into a group of companies that became known as the Bell System, consisting of a manufacturing arm (Western Electric), a telecom operator (AT&T), a research arm (Bell Labs), and other operating companies. The Bell System functioned as a de facto standard setter. This led the management of Bell Labs to use standards as an organizational tool for the whole Bell System. These standards, introduced in 1905, were called 'General Engineering Circulars' (GECs). By 1914, over 400 GECs had been distributed through the Bell System (Russell 2014). Standards did not just play a role in the management of the Bell System itself, but also in its collaboration with other corporations, first ad hoc, then structurally through bodies such as the American Engineering Standards Committee (AESC) and the American Standards Association (ASA). Ironically, the central role played by Bell in standard setting attracted the attention of anti-trust regulators, which led to the break-up of the Bell imperium in the 1950s and the 1960s by the US Federal Communications Commission (FCC) (Abbate 1999; Russell 2014).

The Internet
The movement away from telecommunication monopolies and towards competition in the US laid fertile ground for the development of what would later be called 'the Internet'. The launch of the Sputnik satellite by the Soviet Union led to a technological scare in the US: the radio signal emitted by Sputnik was received around the world and was described by Time magazine as 'chilling beeps' (Veech 2019). This resulted in the establishment of the Advanced Research Projects Agency (ARPA) within the US Department of Defense in 1958. The funding made available to ARPA, and specifically to Joseph Licklider, the head of its Information Processing Techniques Office, enabled the interconnection of large mainframe computers, a network envisaged as the start of an 'Intergalactic Computer Network' (Abbate 1999). Concurrently, in France, networking technologies were developed under the leadership of Louis Pouzin, whose team was working on another packet-switched network called Cyclades. However, the ARPANET was better funded than Cyclades.
In 1972, international networking researchers from Britain, France, and the United States established the International Networking Working Group (INWG), chaired by Vinton Cerf. The INWG developed a document series called General Notes to 'select a design philosophy' (Russell 2014, 171) for interconnection between different international private and public networks. Here the INWG trod on the turf of the ITU, which had standardized and regulated the international interconnection between networks since 1865. The ITU's International Telegraph and Telephone Consultative Committee (CCITT) had made it clear that it wanted to standardize interconnection between international computer networks similarly to how it had done for telecommunication networks. Since the INWG was an informal body, its members accepted that it could not play a role in formal standardization, but rather in coordination. Therefore, the members of the INWG arranged an affiliation with the International Federation for Information Processing (IFIP), a federation of national professional societies, through which they could contribute to the CCITT. The CCITT planned to define standards for packet-switched networks under the name X.25 (see Egyedi 1996). Quickly, a debate ensued between INWG researchers and the CCITT leadership about a significant technical design choice: whether the networks would be based on datagrams (a name derived from the combination of data and telegram), later more commonly known as 'packets', or on virtual circuits. Virtual circuits echoed the design of telecom networks, where operators have control over the network and routes, whereas datagram networks are designed with less control over routes. Datagram networks deliver packets on a 'best effort' basis, transporting packets without a particular order or route, which makes datagram networks more resilient to failure but affords less control over network operation. Soon this conflict, known as 'IBM vs. telecommunications', escalated. X.25 got approved as a virtual circuit-based standard. The INWG's inability to influence this process put pressure on the INWG itself. The different projects tried to strike a compromise that would combine traits from the different networks, most notably the French Cyclades and the North American ARPANET. Such a compromise was reached, but it did not receive full consensus through a vote. Eventually, in the late 1970s, the French government stopped funding the Cyclades project, halting its development.

Mobile Telephony
Meanwhile, innovation also reached mobile telephony. The first generation of mobile technologies was developed in the 1980s after early experimentation in the prior decade. These early-generation mobile devices used analog radio signals combined with a digital control signal. This successful innovation led to the establishment of the Groupe Spécial Mobile (shortened to GSM) by the European Conference of Postal and Telecommunications Administrations (CEPT) in 1982 (Hillebrand 2001), which standardized the second generation of mobile technologies (2G) from the end of the 1980s to the late 1990s. 2G used digital signal processing paired with a digital control signal. It coincided with the liberalization of the telecommunications market in Europe and the establishment of the European Telecommunications Standards Institute (ETSI). At the same time, the European Union provided legal frameworks for competition in telecommunication terminal equipment (1988) and telecommunication services (1990), and established an internal market for telecommunications services (1990) (Lemstra 2018). In 1992, GSM was launched in seven European countries by 13 telecommunication providers (Manninen 2002). Meanwhile, in the US, a distinct standard for mobile telecommunication was developed and deployed, namely Code Division Multiple Access (CDMA), which functioned on other frequencies and was not compatible with GSM.
Since GSM was taking off beyond Europe as well, for its next iteration it was crucial to establish a body larger than ETSI. In part, this happened within the ITU through the International Mobile Telecommunications for the year 2000 standard (in short: IMT-2000), which led to the coordination of compatible frequencies managed through the ITU Radiocommunication Sector (ITU-R). The preparation of the IMT-2000 standard, to be standardized in the ITU Telecommunication Standardization Sector (ITU-T), happened largely through the Third Generation Partnership Project (3GPP). The 3GPP is not a formal entity but an umbrella organization of seven telecommunications standards development organizations from the US (ATIS), China (CCSA), Europe (ETSI), India (TSDSI), Korea (TTA), and two from Japan (the governmental TTC and the industry-led ARIB). The 3GPP organized its work in generations (e.g., 2G, 3G...); within these generations, specifications were grouped into releases. An example of an early release is release 97, which standardized the General Packet Radio Service (GPRS) that made packet-switched data possible next to voice over circuit-switched networks in GSM (which is why GPRS was dubbed 2.5G). In release 98, Enhanced Data Rates for GSM Evolution (EDGE) was standardized, which increased data transfer speeds. A significant breakthrough was represented by 3G, which introduced the Universal Mobile Telecommunications System (UMTS). UMTS allowed for significant data transfer speeds, which enabled the usage and growth of the smartphone. In the meantime, the US market was still functioning under other standards, in part because of the dominant role of Qualcomm. The alternative to UMTS in the US was called CDMA2000. Elsewhere, UMTS functioned on frequencies that were not yet in use by telecom providers. Both CDMA2000 and UMTS were standardized within IMT-2000, which roughly coincided with the first full standard releases of the 3GPP, namely High-Speed Packet Access (HSPA) (3GPP releases 5 and 6), HSPA+ (3GPP releases 7 and 8), and the Long Term Evolution (LTE) standard (3GPP release 8). After LTE, the 3GPP created LTE Advanced (or 4G), a fully digital IP-based network where video and audio streaming can work seamlessly, provided the backhaul network can supply sufficient bandwidth. 4G was the first unified standard between the US and the rest of the world. As evidenced by this truncated historical overview, the standardization of mobile telecommunication has been characterized by often confusing naming conventions, largely because the technologies that coincided with the releases did not always perfectly match the generation eras. Also, for a network to be called '4G', not all technologies standardized in the 4G period had to be implemented. And while the 3GPP, organizationally facilitated by ETSI, was working on UMTS, the competitor organization 3GPP2, based in the US and mostly supported by Qualcomm, continued to further develop CDMA. However, the standards developed by the 3GPP have obtained a much larger market share (Baron and Gupta 2018).

5G
The fifth generation of telecommunication networks developed in the 3GPP represents more than an incremental change or just a new phase. Whereas end users might mostly notice an increase in bandwidth and lower latencies, 5G introduces a significant change in Radio Access Networks (RAN). Because of technological advancements, mainly ever-increasing processor speeds and ever-cheaper memory, mobile networks are now implemented as Software Defined Networks, leveraged through paradigms of Network Function Virtualization. This means that much of the hardware implemented in 5G functions as general-purpose computing hardware, instead of dedicated hardware that can only perform one function. Furthermore, the RAN is split into different modular parts with open interfaces. This makes 5G architectures more configurable and enables the mixed usage of hardware and software from different implementers and vendors, rather than closed proprietary interfaces. This disaggregated RAN interface can be understood as the adoption of a kind of 'internet-style' networking in telco networks. This is also visible in the core networks of 5G, where a Service Based Architecture (Guttman and Ali 2018) is implemented that separates a control plane from a user plane (Sirotkin 2020). This configurable modular separation allows for more applications running within the network, which is the direct opposite of the end-to-end principle of the Internet (Saltzer, Reed, and Clark 1984; ten Oever 2021c), but fits with the wider trend toward 'edge computing'.
The industry presents several use cases for 5G. On the one hand, there is the Industrial Internet of Things, allowing factories to be automated and controlled through 5G networks. Such networks would function similarly to other wireless networks (e.g., WiFi-based ones), except that 5G networks provide better control over timing and latency. On the other hand, 5G is promoted as a deployment solution in urban areas, where it can provide high bandwidth and low latency for many users and devices at the same time. The technology that makes 5G amenable to use with a large number of devices is called Massive Multiple-Input Multiple-Output (or Massive MIMO). Furthermore, 'beam-forming' antenna technology supports energy-efficient performance. However, the potential of 5G in terms of higher bandwidth lies in higher frequencies, and because of the characteristics of millimeter waves, these signals do not carry as far. This means that many new 5G-NR (New Radio) installations need to be mounted in many new places, allowing for the introduction of a whole new networking environment.
The new functionalities of 5G networks (see, e.g., ten Oever 2021d) led to an increase in patent applications for 5G technologies and in Standard Essential Patents in 5G (Baron, Blind, and Pohlmann 2011; Baron and Pohlmann 2018). Most 5G patents have been registered by four Asian companies (Huawei, Samsung, ZTE, and LG); Nokia and Ericsson are fifth and sixth, and only after them comes the US-based Qualcomm (Pohlmann, Blind, and Heß 2020). The dominance of China in 5G development spurred concerns over security threats in the US and Europe (Kaska, Beckvard, and Minárik 2019; Rühlig and Björk 2020), but such alleged Chinese backdoors have not been found in international standards contributions by Chinese corporations. It has also spurred a debate about digital sovereignty in the EU and beyond (Couture and Toupin 2019).
This brief history of standardization has made apparent how power, broadly defined, is a constitutive dynamic within the standardization 'game'. If we use Peet's tripartite classification of power, we note how, from the telegraph to 5G, power has alternatively taken the shape of capital-related dynamics (e.g., industry preferences, policy choices at the national or supranational level), of politics-related dynamics (e.g., geopolitical struggles, user preferences), and of ideology-related dynamics (e.g., packet-switched vs. circuit-switched networks). This exposes a continuous tussle between capital, politics, and ideology through which technology is shaped in standardization processes. In the next section, we define the properties of the ICT standards whose development we have described here.

Situating ICT Standards
There are different definitions available in the burgeoning scholarly literature on standards. For this article, we refer to technical standards as 'established norms or requirements applied to technical systems' (Shin, Kim, and Hwang 2015, 152) or 'a publicly available definitive specification of procedures, rules, and requirements, issued by a legitimate and recognized authority' (Jakobs 2000, 11). In this section, we present key features of standards and standard making and describe selected aspects of ICT standards, to zoom in on aspects of power that will later be useful to our core argument.

National and International Standards Bodies, Private Standardization, Patents, and Regulation
As the above history of standards made clear, standardization often results from technological innovation or emerging societal needs, such as safety and security. The development of different implementations typically precedes standardization. However, if competing implementations have been developed and have attracted a significant user base, it can be hard to create a new standard to facilitate interoperation. A classic example of such standardization failure is the wide range of electrical plugs used around the world. If one implementation in the market outperforms other implementations, the dominant implementation can monopolize the market and become the de facto standard. This is the case with the QWERTY keyboard, although it is generally considered inferior in quality to competing designs (Farrell and Saloner 1985). When a formal standards body, such as a national entity, produces a standard that is binding by law, this is called a de jure standard (Shin, Kim, and Hwang 2015; Simcoe 2007), although de jure standards are not simply the opposite of de facto standards (see Abdelkafi et al. 2021). When a standard is developed through an informal standards body that produces voluntary standards, it is called a consortium-led standard (Hawkins 1999; Weiss and Cargill 1992).
Aside from market dynamics, the standard development process can also have a significant impact on both the formation of standards and their acceptance and uptake (Kanevskaia 2020a). On the one hand, the legitimacy of the standards body is known to affect the authority of a standard (Suchman 1995). When a standards body is a national standards body (henceforth NSB), or an international standards body that is recognized by state actors as an authoritative source, its standards can promulgate through the member states and be referred to in policies and regulations. For instance, the European Telecommunications Standards Institute (ETSI), the Comité Européen de Normalisation (CEN), and the Comité Européen de Normalisation Electrotechnique (CENELEC) are officially recognized by the European Commission. This means that the European Commission can reference the standards output by these bodies in its regulation and policies, although the EU has since changed its policy to also allow referencing standards that are not produced by formal standards bodies. Global ISO and IEC standards are not directly binding for member countries, but if the European Commission adopts a directive that sets forth 'essential requirements' for a product, a recognized European Standards Organization (ESO) can be asked to develop a standard (then called a 'European Norm' or EN) able to satisfy said essential requirements. Actors can then decide whether to adopt the EN in question; if they decide to favor another solution, the burden of proof of meeting the essential requirements of the directive rests upon the implementer. This shows how membership and membership criteria of standards bodies matter, as do the structuring of decision-making processes and the rules and procedures in standard setting (also known as institutional configurations). The combination of these configurations does not only set the rules of the game but is also constitutive in the formation of business communities that give rise to distinct cultures within epistemic communities (Hawkins 1999).
Patents are as interconnected with modern technologies as standards are. The invention and dissemination of the telegraph not only led to one of the first international standards bodies, but also to the first gigantic patent battles of the 19th century, namely over the Edison patents for the quadruplex telegraph system (Carey 1983). Patents provide a temporary government-granted monopoly on a specific technology or procedure, while standards describe a 'commonly accepted technique' (Baron and Pohlmann 2018, 505). Standards and patents directly collide when a standard cannot be implemented without the technology or procedure that is patented. In that case, a patent becomes a Standards Essential Patent, or SEP (Bekkers et al. 2017). This means that everyone who wants to implement the standard must obtain a license from the patent holder, which may require a licensing fee. One can see how this might drive actors in standard setting away from creating the best possible technical standards and towards abusing 'standard-setting processes to secure inclusion of their technology in industry standards' (Cary et al. 2010, 913). A related problem goes by the name of 'patent hold-up', namely a situation where a patent owner uses the lock-in of standard implementers to demand higher licensing fees for their essential patents than they could have demanded before the patent was selected for inclusion in the standard (see Bekkers 2017; Lemley and Shapiro 2007). Several authors have argued for legislative reform (Lemley 2007) or antitrust action to curb this behavior (Cary et al. 2010). However, standards bodies themselves have also acted on this behavior through the development of specific policies, more specifically mandatory intellectual property rights (IPR) disclosure (Bekkers et al. 2017), and fair, reasonable, and non-discriminatory (FRAND) licensing, or in some cases royalty-free licensing. There has been ample research into the effects of IPR disclosure and FRAND licensing policies on standard-setting processes in standards bodies (see, among others, Contreras 2015; Blind and Pohlmann 2013; Graham et al. 2013). It is worth noting that the co-existence of multiple patents on one technology can pose a barrier to technology implementation, as it can be complex for implementers to negotiate licensing terms with all patent holders. Patent pools have emerged as a response to this problem: they provide a one-stop shop for aggregated patents to a given technology (Shapiro 2000). The MP3 standard patent pool is a case in point (Blind 2003).

Selected Aspects of ICT Standards
Whereas many of the rules that apply to non-ICT standards also apply to ICT standards, three features of ICTs set their standardization apart. First, we note the rapid development of ICTs, which also translates into fast access to the market. This can make standardization in formal standards bodies too slow: by the time a technology is standardized, there is a good chance it has already passed its market peak. This has led to the emergence of informal standards consortia in the field of ICTs (Hawkins 1999; Delcamp and Leiponen 2014), tasked with developing voluntary standards (Marpet 1998), which makes the organization of buy-in for the standard and the legitimacy of the standards body even more important (Suchman 1995). It is worth noting that the outputs of formal standardization bodies also constitute voluntary standards unless their use is mandated by binding law, which is however a rare event. Many voluntary standards consortia use the term 'openness' to describe their standards and processes. One way in which an understanding of openness has been normalized between standards bodies has been through the development of the Open Stand principles. These principles have been adopted by the Internet Engineering Task Force (IETF), the Institute of Electrical and Electronics Engineers Standards Association (IEEE SA), the World Wide Web Consortium (W3C), and the Internet Architecture Board (IAB) (Aboba et al. 2013; Bartleson 2014). Another understanding of openness, which has been widely accepted by governments and regulators, was provided by the World Trade Organization's (WTO) Committee on Technical Barriers to Trade (TBT), which states that 'Membership of an international standardizing body should be open on a non-discriminatory basis to relevant bodies of at least all WTO Members', meaning that openness translates to participation in international standards bodies by members of national standards bodies. However, there continue to be differences in the interpretation of the notion of openness in standardization (Zafrilla Díaz-Marta and Ferrandis 2020).
Second, like other sectors (think of railways), ICTs are characterized by high switching costs. When a user or a user group has become familiar with a specific technology, replacing that technology with another one costs time and extra expenses, the so-called switching costs (Matzler et al. 2015; Zhu et al. 2006). High switching costs lock users into a specific technology, which is known as vendor lock-in (Callon 1990; Cowan and Gunby 1996) and increases path dependencies (Barnes, Gartland, and Stack 2004).
Third, network externalities are crucial to understanding ICT standardization, adoption, and diffusion. Network externalities refer to the effect by which every device, user, or service added to a network creates more value for all devices, users, or services already connected (Katz and Shapiro 1985; Viswanathan 2005). This characteristic also increases path dependence, making it harder to develop alternatives.
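A common, if contested, heuristic for this effect is Metcalfe's law. It is not drawn from the literature cited above, but it illustrates the intuition that value scales faster than network size:

```latex
% Metcalfe's law (an illustrative heuristic, not a claim from the cited sources):
% a network of n nodes admits n(n-1)/2 possible pairwise connections, so its
% value V is often approximated as growing quadratically with n.
V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^{2}}{2}
```

Because incumbent networks thus enjoy superlinear value advantages, a new entrant with few users starts far behind even if its technology is superior, which is one way to read the path dependence noted above.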
In the dialectic tacking back and forth between standards and implementations, there is a specific role for interoperability testing and industry collaboration. Because technology often develops faster than standards, and implementations can be more complex than a standardized reality, industry coordination can help the modular and interoperable development of technologies. For instance, if one were to implement a standards-compliant Domain Name System (DNS) server strictly according to the nearly 2,800 pages that make up its standards, the server would not work: DNS implementers have found that some standards decisions did not work well on the Internet and have implemented alternatives that work better in practice. To facilitate interoperation, relationships between competitors can be very important. These relationships are in part driven by network externalities, but also by trust relations that emerge in epistemic communities (Mathew 2014).
Standards bodies themselves can also stimulate collaboration by making it an (informal) precondition that there should be several competing implementations before a standard can be set (Dusseault and Sparks 2009). Even industry-wide agreement on a particular standard by no means guarantees its quick adoption, as seen with the slow uptake of IPv6 despite industry-wide consensus (DeNardis 2009; Dourish 2018). Ironically, agreement on adoption can lead to market inertia because no one wants to bear the costs of being the first adopter (Kuerbis and Mueller 2020). This leads to what Meek observed already in 1993: 'there are both too many standards and too few' (1993, 35). On the one hand, many standards are never implemented and others compete with each other; on the other hand, there is still a need for standards because of emerging societal needs and technological developments: think of environmental standards, or of 5G.
Standardization aims to make competitors collaborate, stimulating innovation and lowering prices. However, standardization efforts do not always succeed (Cargill 2011). Even when a standard is developed, those who think their preferences have not been met can create a competing standard in another standards body. This happened with the standardization of Transport Layer Security version 1.3 (TLS 1.3) in the IETF, which would, among other things, grant higher levels of privacy and security protection to end-users, but also force institutions to change their installations. This led to the subsequent standardization of eTLS in ETSI, which was the full implementation of TLS 1.3 minus this particular privacy and security feature (Mueller and Kiernan 2020). This is possible because standardization bodies are frequently in conflict with each other, whether over territoriality, spheres of influence, or ideological issues. Finally, global standards are defined by their transnational nature, but this does not mean that a standard is implemented everywhere in the same manner, nor that its impact is the same. In fact, 'standards are intensely local, in the sense that, despite their global reach, they touch very specific communities in very specific contexts' (Lampland and Star 2008, 16). It is therefore important to discuss the roles of different stakeholders and representation in standards processes.
Next, we delve into actors as they engage in standardization. We look at their values as they become enshrined in ICT standards, to explore the connection with the third dimension of power identified by Peet, namely that of ideology and associated rationalities.

Standardization Actors and their Values
Recent scholarly works on standards bodies emphasize the low diversity, aging community, and high entry barriers of standards organizations (Yates and Murphy 2019). The trend towards private and/or multistakeholder standard setting has led to an over-representation of industry, and an under-representation of public interest perspectives by government entities or organized civil society. When civil society is represented, it often serves to legitimize the existing ordering instead of having a meaningful impact on the policy processes (Carr 2015; Hofmann 2020). Governmental representation in ICT standardization occasionally includes representatives of law enforcement agencies, who are known to prioritize governmental access to user data over user privacy. An example of this is the Lawful Interception Working Group in the 3GPP. The classified US National Security Agency documents leaked by whistleblower Edward Snowden in 2013 exposed how standard setting, and specifically the weakening of encryption standards, has been used as a tool in cyberwarfare (Rogers and Eden 2017). This has contributed to weakening several actors' trust in the global private standards regime and has fostered the (re-)emergence of unilateral and multilateral standards regimes and processes (ten Oever 2021b), coupled with the resurgence of the digital sovereignty discourse (Braud et al. 2021; Allen 2021; Hong and Goodnight 2020; Stadnik 2019). In what follows, we focus on a specific set of values that are increasingly evoked in governance debates on standardization and internet governance in general (Cath 2019). As mentioned above, we focus on civil society actors for two reasons. First, it is civil society actors, be it users or umbrella non-governmental organizations, that most frequently evoke the public interest in standardization processes. Second, it is the set of actors that is more often than not positioned at the losing end of the equation.

In the Public Interest?
Standards organizations and industry associations regularly emphasize their aim to contribute to society, highlighting how standards are a public good (Berg 1989; Yates and Murphy 2019). From a public policy perspective, standards are a public good because, when they are available for use by all, the price of the good does not go up when one economic actor uses them, and when they are used more, other users gain as well through increased comparability and interchangeability (Kindleberger 1983); in short, they are non-excludable and non-rivalrous. This does not necessarily mean that standards work in the public interest, however. Standards can work well to create markets and reduce transaction costs between corporations: this might not benefit the greater public, but only the firms that engage in that market. This is why some authors argue for multiple stakeholder participation in standard setting (Balzarova and Castka 2012). Others emphasize the importance of including end-users of the technology in the process (Jakobs 2000), while also recognizing that there exist significant power differences between stakeholders. Many case studies highlight the over-representation of the private sector in standard setting (Perry and Nöelke 2005; Carmin, Darnall, and Mil-Homens 2003), or the over-representation of industry from specific geographic areas such as the US (Carr 2015; Scholte 2017), followed nowadays by Asian countries (Yates and Murphy 2019; Tang 2020). Other reasons that can explain the dominance of the private sector in standardization include the fact that this stakeholder group is especially incentivized to engage in standard setting because it is most likely to be directly impacted by standards (Olson 1971), as well as the availability of both subject-matter and standard-setting expertise and experience (Balzarova and Castka 2012; Hallstrom and Bostrom 2010). Even though the private sector has the largest interest in, and the most significant engagement with, standard setting, other stakeholders continue to play an important role, directly or indirectly. In the area of ICT standardization, there has long been a trend of deregulation and limited government engagement, after the IETF won over the government-backed ISO in the standardization of the architecture of the Internet (Russell 2006). However, there are indications that this trend might be reversed with the resurgence of government interest (Haggart, Tusikov, and Scholte 2021), even though the government-led ITU still plays a marginal role in the standardization of digital technologies (Balbi and Fickers 2020). As noted above, citizens, described in the literature and in standards processes as 'users' (Nottingham 2020) or 'consumers' (Farrell and Saloner 1985), often know very little about standardization (Healy and Pope 1996). Some authors even describe the distance between users and standardization processes as 'worlds apart' (Jakobs, Procter, and Williams 1996, 183). This can in part be attributed to the expertise, time, and resources that are needed to participate in the standards process, as well as the steep learning curve and the time it takes to build relationships and legitimacy in standards bodies.
There have been initiatives in standards bodies to develop standards for the positive societal impact of the standard-setting process, as well as standards to certify levels of 'corporate social responsibility' (Bryson and Winfield 2017). However, these norms never reached full standard or certification level: they exist only as guidance standards, and their implementation cannot be mandated. This can be attributed to the fact that the stakeholders that are already active in standards bodies, most notably corporate actors, have no direct interest in limiting their own activities (Balzarova and Castka 2012). Further, new standards developed within the framework of existing standards need to be in line with existing frameworks, which makes it harder to disrupt the status quo to improve the situation of those who are not directly represented in standard setting (Castka and Balzarova 2008).
RFC formally stands for Request for Comments, but the series describes internet protocols, architectures, and procedures for several internet infrastructure bodies, most notably the IETF. Organizations that produce RFCs, such as the IETF, have developed review processes to take the societal impact of protocols into account. Examples are an RFC published by the Internet Architecture Board on privacy considerations for protocols, which also describes guidelines for privacy reviews (Cooper et al. 2013); an informational document published by the Internet Research Task Force (IRTF), a sister research organization to the IETF, that describes the relationship between human rights and protocols and provides guidelines for human rights reviews of protocols (ten Oever and Cath 2017); and an RFC from the Internet Architecture Board describing how users should always be the prioritized stakeholder group in trade-offs in protocol development (Nottingham 2020). As in the ISO, these RFCs are only informational, meaning that they do not make normative changes to the RFC development process, in contrast to other considerations, such as security considerations, which are mandatory for every published RFC (Doty 2015).
The resistance to producing strong standards on corporate social responsibility and to incorporating impact on societal values in standards processes might be part of the visions that are connected with communication technology standardization. STS scholar Jasanoff argues that the visions that come with technologies are exactly what helps to produce them: sociotechnical imaginaries allow different groups with different knowledge, backgrounds, and discrete interests to work together towards a common goal, a process through which technology, institutions, and policies are developed (Jasanoff and Kim 2015). The imaginary connected to communication standardization typically relates to its nature as a non-excludable and non-rivalrous public good and takes safety and security considerations into account, thus representing the public interest. But this might not (yet) be the case for the standards it produces. There is often a tension between standards imaginary and ideology at play in standard setting more broadly, as evidenced by the notion of network ideologies (Bory 2020), which points to how ideologies are located and perpetuated in and through communication networks and their (infra)structure.
From this brief overview of recent attempts to wire the public interest into standard making, we note how ideology and associated rationalities, made visible in the sociotechnical imaginaries mobilized by civil society actors, have crept into standardization, adding to the economic and political concerns that were prevalent in the early history of standardization. This makes it even more urgent for scholars to develop an understanding of standardization that also includes concerns over power dynamics and power differences, which we attempt to do next.

Towards a Theory of Standardization and Power
Traditional standardization theories combine perspectives from law and economics to understand the strategies and power of firms, whether and how they can determine if a particular approach or policy will lead to success or failure, and whether the acceptance and uptake of a standard can be predicted (see Yates and Murphy 2019). Political science and international relations literature looks at how power is exerted in standard-setting processes, while STS, including infrastructure studies, focuses on how standards are used to exercise power outside of the standardization process. Historians of technology have shown how new technologies, and their standardization, are often met with hopes and fears (Sturken, Thomas, and Ball-Rokeach 2004) that are also shaped by popular myths. For instance, the telegraph, the telephone, radio and television transmission, as well as the Internet have been associated with a promise to end wars, foster education, and curb poverty (Mosco 2005, 80 onwards), but they variably ended up contributing to the establishment or consolidation of corporate oligopolies, also playing a significant role in geopolitics. The internet, then, has been associated with imaginaries of empowerment, thanks to its generative architecture (Zittrain 2008) that would function as an engine for economic growth and innovation (Van Schewick 2012), stimulate competition (Cowhey, Aronson, and Richards 2009), and resist centralized control (Chadwick 2006).
A wide range of authors has criticized this image of the Internet and its architecture (DeNardis 2009; Carr 2015; Mathew 2014; Musiani et al. 2016; Cath and Floridi 2017; ten Oever 2021c). In a nutshell, these authors contend that the Internet, as well as earlier transnational communication networks, reflects as well as shapes power relations between states and corporations. However, the literature has thus far not made explicit how such a network ideology could be shaped to function in the public interest. Meanwhile, the early promise of the Internet has lost steam, which has led, in part, to a global call for the regulation of the Internet by nation states and multilateral entities (Flonk, Jachtenfuchs, and Obendiek 2020), from California to Russia, from China to Brazil, in the EU and the United Nations. The rollout of 5G and the global consolidation of Internet infrastructure providers is both a challenge and an opportunity to develop an approach that increases the legitimacy and public interest aspects of standards and harnesses their 'power to do' for social good (Yates and Murphy 2019, 12). This is relevant not just for the standardization of the next generation of communication networks, but also for the practice of standardization at large. But what would a theory of standardization and power look like? What are good starting points for its development? Because of the intrinsic link between standardization and power, a comprehensive theory of standardization, and even more so of standardization in the public interest or any contribution in that direction, needs to be not only a theory of technology or institutional configurations, but also a theory of power. Several authors have argued that an interdisciplinary analysis of standards and standard making is necessary (Contreras 2019; Kanevskaia 2020b). Taking inspiration from this observation and building on the three-fold typology of power advanced by Peet (2007), in this section we sketch what such a theory of standardization and power could look like, identifying potential building blocks across disciplines.
For starters, such a theory necessarily needs to be an interdisciplinary one, able to tie together concerns over capital (represented in, e.g., macro-economic theory), policies (as understood by, among others, political science), and ideologies (as evidenced by, e.g., STS and the sociology of innovation, as well as media studies and the philosophy of technology). In particular, we argue, the role of ideologies and meaning making by standardization actors should be foregrounded, for they play a central role in current 'hot' debates (think of 5G, but also the possibility of Russia 'splitting the internet', accelerated by the conflict in Ukraine (Limonier et al. 2021; Musiani 2022)). This would allow us to zoom in on the conflictual nature of power as defined by Weber, exposing how conflicts of interest are played out in standardization processes. It would also partially make up for the fact that meaning making, ideologies, imaginaries, and associated rationalities have long been relegated to second place by dominant standardization theories emerging from economics, law, and public policy alike, which, we contend, jeopardizes the explanatory power of said theories.
Second, a theory of standardization and power needs to develop a sophisticated understanding of the actors involved in collective action for standard making, and of the complex relations that link them together (or not), considering actors that may not necessarily appear 'mainstream' or prominent if the process is observed from 'classical' perspectives on standardization. It should consider actors at the macro (e.g., multilateral organizations, multistakeholder processes), meso (e.g., private companies, thematic non-governmental organizations, governmental actors, as well as their coalitions and ad hoc alliances), and micro level (e.g., individual users and their imaginaries, and how the private sector contributes to articulate and justify them). With ICT standardization slowly becoming of concern also to the general public (think of the debate over the standards in contact-tracing apps designed to curb the pandemic, see Chan et al. 2020), the micro level is likely to play an increasingly important role in the way in which macro and meso actors articulate their preferences.
Third, a theory of standardization and power must embed a geopolitical perspective able to account for the complex negotiations and balancing exercises of the tech industry, nation states, and their blocs, as well as the socio-cultural aspects that, alongside economic and practice-related concerns, may determine their choices and preferences. This geopolitical perspective is increasingly relevant as the focus of standardization processes progressively moves away from the Western 'center' to what used to be (but is no longer) the 'periphery' of digital innovation: see, e.g., the growing role of Asian firms such as Huawei in the standardization of mobile technologies, and the related 'power' struggles that this originates (Tekir 2020; Dunajcsik and ten Oever 2021).
Finally, a theory of standardization and power should embed normative ambitions, if we are to contribute to the articulation of what 'public interest' might mean in practice in technology development and deployment, and to support the ongoing efforts by civil society actors to wire public values into the technology of everyday use (Zalnieriute and Milan 2019). This means that discussions about the public interest should not just take place in the realm of politics and practices of communication standardization, but also be integrated into the (network) ideologies that underpin the material technological structure of technologies. This might lead to a deeper understanding of how power is exercised through standardization by ideology, instead of over standardization through economics and politics.
In conclusion, this literature review sought to provide an overview of ICT standardization, focusing not only on economic and political factors but also on the subtending ideological build-up. More work needs to be done to develop this theory further and to answer the question of whether we can strengthen public interest and public values considerations in the development of future ICT standards, which are so central in shaping how power will be exercised in society.