Tiago Mesquita Carvalho1
Tiago Santos Pereira2

Chapter forthcoming in:
Ferreira, M. I. A. & Fletcher, S. (Eds.) (2021). The 21st Century Industrial Robot – When Tools become Collaborators. Springer, Switzerland.

 

Abstract

In this chapter we address the challenges that automation and robotization pose for labour and employment. Our take is that technology assessment (TA) can provide a ground both for ethical reflection and for social engagement towards participatory decision-making regarding the application of such technologies. The role of underlying narratives regarding robots and automation processes is also highlighted. The chapter discusses labour substitution as a dominant narrative in economic analysis, while also stressing the need to contextualise technological change and innovation regarding robots and automation in concrete work processes and tasks, bringing narratives closer to the ground. This discussion leads us to the second main theme of the chapter: the potential role of technology assessment in examining the development and use of robots in the workplace, their unanticipated consequences and the ethical and social tensions arising therein. Such approaches do not aim at complete or definitive predictions, but at building participatory and interdisciplinary processes. This chapter is thus about how we ought to live with and relate to technology.

 

Introduction

The fields of ethics and philosophy of technology, STS and Technology Assessment (TA) have long dealt with various topics concerning the use of robots at home, in the workplace and on the battlefield (Rudolph 2004). Ongoing topics of research include more direct matters such as safety, privacy and responsibility, and how far these concerns can be actively integrated into such artificial systems. The range and breadth of the impacts of robotic innovations can give way not only to generic social conflicts, but also to policy, legal and ethical challenges (van Est & Gerritsen 2017, Wallach 2014). In evaluating these matters, it should also be taken into account that robotics has for decades been a central theme for science fiction authors and other speculative works, which have undoubtedly helped raise questions about the nature of the interaction between humans and robots.

Our approach to these matters encompasses a broad criticism of technological determinism, while acknowledging the role that related narratives play in constituting the plural social meaning of robotic applications (Grunwald 2016). The stories underlying the development of new and emergent science and technologies (NEST) such as robotics, synthetic biology, nanotechnology, AI or geoengineering help shape collective imaginaries. The rejection of technological determinism and the importance of narrative carry consequences for how TA is conducted. For instance, future visions of AI, automation and robots have been associated with the dawn of a technological singularity (Kurzweil 2014), working almost as a spiritual beacon lighting the way to more promising expectations and applications. Among such narratives, the most popular is the fear of a complete or partial robot revolution or uprising. Even if it is highly unlikely that these fears materialize overnight, it is important to underscore how collective imaginaries and visions of the future shape present relationships, whether through anxiety or through unbridled hopes.

Concurrently, although progress in robotics currently proceeds at a relatively fast pace, there are in reality still many technological thresholds and emerging practical problems that call into question the straightforward path such narratives trace towards images of the future. These difficulties relate, for instance, to solving apparently simple tasks that are obvious to humans but that currently thwart the best available scientific and technological knowledge (Dreyfus 1997). Not all moral and intellectual capabilities that we usually attribute to human beings can be programmed into robots or machines. Unlike humans, robots cannot as yet recognize ethically charged situations, as researchers struggle to give them a world and a sense of what matters and what is useless.

Our contention is that these thresholds and limits illustrate epistemic gaps in the development phase of robotics where TA and attention to ethical, legal and social concerns can be introduced and made present. Granted that the social impact of such products cannot be entirely predicted, there are still opportunities for interdisciplinary teams to advance ingenious new material solutions and circumvent troublesome friction points.

The role of TA should hence not be regarded as necessarily hindering or dampening technological innovation, but as a thorough and engaged examination of the hurdles, issues and problems that may arise during and after development. The point is to see that ethical concerns and TA methods reflect legitimate social, legal and moral concerns that can potentially be integrated by engineers and designers into the very core of robots or automated machines. Regulations for certain robotic performances are thus not necessarily opposed to innovation, but can even contribute to opening new markets where human laws and values are upheld. Some of the challenges that ethics and TA raise about new and upcoming technologies can also encourage industry itself to include such considerations in the design process or in later phases.

Given the importance of technological development for economic growth, it might not be obvious in the context of liberal democracies that such development should be regulated and controlled at all. The underlying view is that the more institutions and professionals call forth a variety of questions and concerns, even apprehensions, over the pace and impact of technological innovation, the more cautious industry will be about employing and developing marketable emerging technologies. Some institutional landscapes can appear cumbersome to companies that already face economic scarcities of various sorts as they strive to operate, finish projects with demanding deadlines and at the same time present profitable and socially useful products. In a worst-case scenario, governmental regulations on innovative technological improvements can discourage business in a given geopolitical framework. Regulations may then be seen as part of a disadvantageous institutional landscape that affords competitors elsewhere an insurmountable advantage. There is thus a fine balance between avoiding or managing technological risks and uncertainties and providing the proper stimuli for a competitive and ingenious environment.

It must also be acknowledged that society is set to be rocked by many technological novelties emerging in the near future. Industrial robots have long been part of such developments, and together with other groundbreaking technologies they are set to raise not only hard public questions related to economic growth, health, safety and employment, but also softer ones. Either way, the real effects of robotics will be bounded by social hopes and fears that are more or less realistic. The social meaning of such applications in public awareness opens opportunities for debate about their goals and about the grounds on which their use can and should be criticized. It is at such junctures that philosophically and ethically minded questions in TA about technology's role in general, and industrial robots in particular, can play a role in shaping public narratives and the design of new forms of governance.

 

Technology, Robotics and Employment

Public concerns regarding robots in society have been largely framed by imaginaries of industrial processes, and the corresponding automation processes, going back to the very origin of the word ‘robot’ in Čapek’s play or to Chaplin’s Modern Times. While fears of robotic domination, and the imaginaries of confrontation between humans and machines they give rise to, may be less well founded, concerns regarding the substitution of human labour by automated processes have proliferated, characterizing these tensions more aptly as forms of displacement, translation or articulation. While sociology, anthropology and even management have been more concerned with the microlevel dynamics of such exchanges within organisations, the impact of robots, and of automation more generally, on employment has been an important concern in economic analysis.

Economists’ interest in technical change has largely pointed to its contribution to the reduction of resource use, and in particular to its labour-saving effect. Such discussion emerges regularly as new waves of technological change (Freeman and Louçã, 2001) sweep across different sectors of the economy. The wider discussions on the impacts of innovation on employment have not reached clear conclusions. Vivarelli (2013) reviewed different approaches to the ‘compensation theory’, which states that market forces should compensate the labour-saving effects of technological change, including job losses, as well as its critiques, and concluded that job losses cannot be guaranteed to be counterbalanced ex ante, highlighting the underlying risks of unemployment.

The current developments in automation, through the increased use of robotics or artificial intelligence (AI) technologies, have brought equally, if not more, intense debates on the potential displacement of jobs and the risks to the employment of many workers. Insights from previous studies are relevant to better understand such processes. In particular, the type of innovation, whether product or process innovation, is of central relevance in identifying potential expected impacts, with the former typically having a positive impact on employment and the latter a mostly negative one (Pianta, 2004). Another long-standing stylized fact about innovation and employment states that there is a ‘skill bias’ in employment resulting from innovation processes, with unskilled jobs declining while more skilled jobs increase as technological change advances (idem).

These conclusions have come partly into question with the recent wave of automation processes. While the first generations of robots were largely deployed in heavy industrial processes, the more recent wave of automation has seen a wider use of robots and AI technologies well beyond the traditional industrial setting, reaching into different organizational processes and even into the service sector, with a wider impact on process innovation and hence a potentially higher negative impact on employment. In particular, with the rise of AI technologies and their implementation in the service sector, there is a wider potential impact across more sectors of the economy. This increased attention to the potential impact on employment of the current wave of technological innovation through automation has led to a number of forecasts of such impacts. In particular, the work of Frey and Osborne (2013, 2017) has resonated widely, both for its methodological approach and for its dramatic predictions regarding the impacts of automation, concluding that 47% of jobs in the US economy are at risk of automation in the near future.

Frey and Osborne’s work follows the task-based approach to the study of work processes and labour markets which has characterized recent developments in the field (Autor, 2013). Rather than simply considering jobs as the unit of analysis, research started looking at the concrete tasks that workers need to perform in the course of their jobs. By addressing task content, relevant differentiations can be made, for example, between routine tasks, often seen as unsatisfactory and offering few rewards for workers, and more advanced tasks, with higher rewards but also requiring more advanced skills. This has concrete implications: such differences become clearly evident only under a task-based characterization of jobs. Similarly, the impacts on job substitution differ significantly between a jobs and a tasks perspective, as the identification of substitutable tasks does not necessarily translate into similar levels of job losses.

However, while Frey and Osborne (2013, 2017) followed a task-based approach, analysing task contents, they still considered the automation potential of occupations as a whole, and not of the corresponding tasks directly. Different critiques of Frey and Osborne’s approach emerged, reaching quite distinct conclusions regarding the potential impact on job losses in the economy. Arntz et al. (2016) used the same primary data characterizing the task content of different occupations in the US, but identified how automation remained dependent on human tasks, and reached a significantly different value of 9% of jobs in the US economy at potential risk of automation.
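To make this methodological contrast concrete, consider the following minimal sketch in Python. It is not a reproduction of either study: the task names, time shares and automatability scores are entirely hypothetical, and the threshold follows only the convention of treating scores above 0.7 as ‘high risk’. The point is that an occupation-level rule, in the spirit of Frey and Osborne, counts every worker in a ‘risky’ occupation as at risk, while a task-level rule, in the spirit of Arntz et al., counts only those workers whose own task mix crosses the threshold.

THRESHOLD = 0.70  # conventional cut-off for 'high risk of automation'

# One occupation, three workers with heterogeneous task mixes
# (hypothetical shares of working time per task).
workers = [
    {"data entry": 0.9, "customer contact": 0.1},
    {"data entry": 0.7, "customer contact": 0.3},
    {"data entry": 0.5, "customer contact": 0.5},
]

# Hypothetical probability that each task can be automated.
automatability = {"data entry": 0.95, "customer contact": 0.20}

def worker_risk(mix):
    """Time-weighted automatability of one worker's own task mix."""
    return sum(share * automatability[task] for task, share in mix.items())

# Occupation-level logic: score the occupation as a whole; if it crosses
# the threshold, every worker in it counts as at risk.
occupation_score = sum(worker_risk(w) for w in workers) / len(workers)
occupation_level = 1.0 if occupation_score >= THRESHOLD else 0.0

# Task-level logic: score each worker's own task mix, so workers who spend
# enough time on hard-to-automate tasks drop out of the at-risk count even
# inside a 'risky' occupation.
task_level = sum(worker_risk(w) >= THRESHOLD for w in workers) / len(workers)

print(f"occupation-level share at risk: {occupation_level:.2f}")  # 1.00
print(f"task-level share at risk:       {task_level:.2f}")        # 0.67

With these invented figures the occupation-level rule classifies all three workers as at risk, while the task-level rule classifies only two of them, illustrating in miniature how the two studies, working from the same primary data, could reach estimates as far apart as 47% and 9%.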

Besides distinguishing in greater detail the concrete tasks that were automatable and their centrality to the corresponding occupations, thereby going beyond a simple threshold for automation, Arntz et al. took a more dynamic approach to the automation process. Technological innovation is not implemented in a contextual void, as if it depended simply on the existence of a technological choice, almost in a deterministic model. On the contrary, technological innovation depends on a context of resources, financial, cultural or organizational, in addition to the concrete technological capabilities, that leads to its implementation. Such conditions give rise to differing assessment processes in different organisations. Criticising Frey and Osborne’s approach, Arntz et al. note that the process of automation does not depend simply on technological feasibility but also “on the relative price of performing tasks by either humans or machines”. Eurofound (2016) followed a similar approach, further differentiating the task content by work activity (e.g. physical, intellectual or more social activities) as well as by the tools and methods typically used, further thickening the characterisation of the milieu which will be subject to automation processes. Interestingly, they find that although there appears to be a small but significant decrease in routine jobs, there is a slightly larger, and also significant, increase in the routinization of tasks, one which reaches managers, professionals and clerical occupations most markedly, challenging received wisdom on the subject. They conclude that the polarization of the impacts of the automation process on skills is less clear from a task-based approach, pointing to new questions regarding the relationship between innovation, employment and skills.
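The adoption condition implicit in this critique can be sketched just as minimally, again with invented figures: technical feasibility alone does not trigger automation unless the machine also performs the task at a lower cost than the human worker.

def adopt_automation(wage_per_hour, machine_cost_per_hour, feasible):
    # Automation happens only when the task is both technically feasible
    # and cheaper to perform by machine than by a human; the hourly figures
    # below are hypothetical stand-ins for the 'relative price' Arntz et
    # al. point to.
    return feasible and machine_cost_per_hour < wage_per_hour

# A technically automatable task may remain human if machines are dearer.
print(adopt_automation(wage_per_hour=15.0, machine_cost_per_hour=22.0, feasible=True))  # False
print(adopt_automation(wage_per_hour=15.0, machine_cost_per_hour=9.0, feasible=True))   # True

Simple as this condition is, it already implies that the same technology may be adopted in one organisation or country and not in another, depending on local wages and capital costs.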

What this analysis makes clear is that automation, through the increased use of robots or AI in the workplace, does not simply depend on some isolated technological or economic potential of the available or envisioned technologies; rather, the changing organisation of work is a process of coproduction of technology and social and economic orders. As Moniz and Krings (2016) point out, new models of work organization framing human-robot interactions in industry are still in the making. In the processes through which robots and AI are developed and used, the reorganisation of jobs and work does not appear to amount either to the hyperbolic visions of the futurists, whereby technological change arrives at such speed that none of us is left immune, or to the catastrophic ones, under which we will not simply experience change but should also be prepared to collect its fully fledged consequences, namely through significant job losses. In reviewing these different positions, Wajcman (2017) points out, following John Urry, that “the social sciences must reclaim the terrain of future studies […] because future visions have enormously powerful consequences for society, carrying with them implicit ideas about public purposes and the common good.” It is in this terrain that TA emerges as an opportunity to engage all actors, futurists, technologists, workers, entrepreneurs, regulators and citizens, to discuss our different visions of the future of work and to engage in constructing futures which align technological expectations with our understandings of what works in which contexts and of what we consider undesirable outcomes.

 

Robotics, automation and the task of Technology Assessment

Robotics, automation and artificial intelligence systems (Boucher 2019) in general thus progress not along a linear path towards self-awareness, general machine intelligence or “the singularity”, but mostly through domestic, service and industrial applications where the chance of a revolutionary breakthrough, affordability and market demand may justify their employment. Such is the case with carebots (TAB 2018), neuroprosthetic limbs or autonomous vehicles, among others. By acknowledging the ambiguity of robotics, as of any other technology, one is led to believe that its good outcomes can be chosen and its hazards avoided. While industrial robots do not elicit the same range of social and ethical concerns as their domestic counterparts, they have problems of their own. TA builds on these matters and attempts to structure the discussion around alternative technological innovations and designs.

The core notion of an ethics of technology, and of the more thorough methodologies of TA, is that man-made artifacts, whether roads, buildings or gadgets such as smart watches or vacuum cleaners, are value-laden. This adds an inescapable normative dimension to the innovation, design, use and marketing of technologies. TA focuses not on the internal goods of the practices of makers, designers or engineers (MacIntyre [1981], 2007), nor on the regulations by which such professions abide (van de Poel 2006, 2010), but on drawing attention to how technological artifacts and systems influence consumers in their actions and perceptions, altering how they relate to themselves and take up the world (Borgmann 1984, Ihde 1990). In a word, TA acknowledges as philosophically and ethically relevant the hidden normativity of the contemporary technological way of being, without taking it as necessarily helpful or harmful until further examination. It proceeds through several systematic methods of inquiry into the nature and consequences of technology for society, encompassing many disciplines and methods related to risk assessment, decision theory and public policy, as well as to social conflicts related to large-scale projects such as dams, nuclear power plants, highways or oil pipelines.

Since the onset of modernity, science has advanced on the ontological assumption that facts and values are separate (Jonas 1983). Theoretical endeavors are about researching and establishing the lawfulness of natural phenomena considered as value-free, while politics and ethics concern themselves with all that is persistent in the human condition. Discussion of basic values and rights in an open and favorable institutional background may provide collective and individual progress towards the good life, but such progress is always fragile, as it depends on the long-term continuity of political regimes and must be enacted by each and every citizen. While scientific and technological advancements can be supportive, they should not be regarded as tantamount to actual moral and political progress. When they lose their connection to the ends of human flourishing, they can even spoil or forestall the process of internal human achievements.

Likewise, before paying closer attention to the nature of technological phenomena, TA started out five decades ago almost as a brand of economic analysis, balancing the impacts of technologies in terms of predictions and outcomes. Technology was mostly seen as neutral, and its impacts were assessed in order to provide politicians with choices about what should be favored or avoided. This entailed a positivistic understanding of science, in which society was taken as another, if complex, mechanical system that one could tinker with, observing the desired or undesired effects through various and powerful quantification methods (MacIntyre [1981], 2007; Porter 1996). TA would simply report its objective predictions to the decision-makers.

In this regard, TA is the institutional and practical result of historical developments in the rising complexity of the interaction between science, technology and society. It is charged with the task of bridging the realms of facts and values, of searching for adequate translations of theory into praxis. Instead of viewing cumulative progress in scientific knowledge as having an intrinsically positive effect on the whole of society through the means of technology, criticism of the split between facts and values now sets itself the task of shaping technology according to values and the purported ends of social development. This task is of course heterogeneous, especially in pluralistic, liberal-minded democracies. TA is thus most likely found in countries with a mature relation to scientific and technological development, usually against a post-industrial historical background where the road to growth is taken as more complex than simply applying science and expecting good results. Providing opportunities for pausing, reflecting and assessing what kind of choice, if any, society has regarding technology is the key approach.

Overarching concerns about scientific and technical advancements started being raised with momentum due to some spectacular failures or disasters, usually connected to health and environmental harm. Accidents like Bhopal, Chernobyl or the Exxon Valdez oil spill broke up the fabric of an apparently seamless reality in which technology always works according to what it was designed to do (Ihde 2008). TA does not concern itself only with avoiding such dramatic adverse effects, but also with exposing the more invisible ones by which everyday reality is transformed, while dealing with recurrent optimistic or pessimistic metaphors that reify what, for instance, robots or automation will bring about.

TA as a reflective engagement with all the phases of technological development is also a departure from earlier 19th- and 20th-century metaphysical readings of technology as a force of either liberation or damnation. The empirical turn (Verbeek 2011) and the influence of STS have given way to a more concrete analysis. Rather than seeking comprehensive structured readings of what technology is or does, TA pays greater attention to specific cases. While TA can nevertheless be operative, it should not be forgotten that, as a social engagement with technology, it should at least display the spectrum of the big philosophical questions and answers that are at stake. Those questions feed the narratives that frame public perceptions, worldviews and sense of self. Empirical analysis does not occur in a vacuum, and it would be naive to consider it disconnected from any other system of beliefs. TA has been evolving by welcoming some conceptual changes. Assessment has shifted from being settled on an outside perspective and tied to prediction to being more of an active inside engagement with developers, engineers and other social stakeholders (Grunwald 2000). On the one hand, the increasing complexity of technological developments incurs an artificial “techno-opacity” (Vallor 2016), while, on the other, moral agency has become largely emptied due to the ignorance that agents face when deciding about the outcomes of everyday banal actions (van de Poel 2011), thus requiring renewed forms of engaging with technological innovation and moral responsibility.

The dilemma that modern societies face can then be shortly depicted as putting all the eggs in one basket. Research and innovation receive significant amounts of public funding in the hope of discovering innovative ways of building a better future for all. Such efforts are nevertheless liable to backlashes, sometimes with significant unanticipated effects. An active and precautionary management of the risks and setbacks that innovation may kindle is needed, one that is simultaneously aware of its own epistemic limitations in establishing what the future will be.

TA now coexists with the notion that technology is socially constructed and that, although it never becomes a perfectly tamable and docile device, participatory processes can nudge it towards certain values by design in a more considerate way. Establishing a dialog between the public and the stakeholders is the first step towards facing and identifying problems. In pluralist societies, where the meaning of flourishing is mostly left to individual choices and preferences, it is nevertheless hard to argue how society as a whole should move forward. Only evident harms are set to be unanimously avoided, and these are often identified only after the resulting consequences. Setting ethical or acceptable limits in terms of risks is thus not a theoretical but an utterly practical matter of defining what is at stake while recognizing different values and readings of what ought to be done. Controversy surrounding different visions of the future is thus at the heart of the matter of assessing technological innovation.

 

Robotics and the contextual challenges of Technology Assessment

TA is thus not a way of solving once and for all the conundrums or quagmires of technological innovation, but a scientific and context-sensitive methodological examination of the complexities that overflow it. While it may contribute to problem-solving, it is mostly concerned with assisting the decision and development process, detailing the assumptions of the rationality involved in technoscientific reasoning and pointing to epistemic opacities. It attempts to highlight procedures coupling forward-looking responsibility towards the unknown with a society that holds innovation as a key to its survival, keeping at bay the optimistic fallacy that only a high level of individual moral responsibility can chart the “best” course of technological progress (Grunwald 1999). The unexpected or unforeseeable effects are of course related to social impacts. In that sense, every impact seems to call back and reinforce the role of the subject: unemployment, accidents, poisoning and other environmental hazards are among the most studied impacts, in part because of their measurable side. “Soft” impacts such as skills obsolescence, the sense of belonging or of being uprooted from one’s world, and the loss of meaning are far more difficult to account for; they are usually deemed to belong to the private sphere and are considered not as serious or as worthy of assessment as the “hard” impacts (van Lente 2017).

TA today has shifted towards a more comprehensive view, holding a guiding and advisory role to face the challenge of crossing the sea of the unknown technological effects. Participatory methods in general strengthen previous notions of deliberative democracy and emphasize how assessment of future consequences should not be just left to experts or politicians. Even if decisions later reveal themselves to be spurious and unwise, they have at least been made more legitimate and transparent (Grunwald 2016). Responsibility for decisions in this sense is at least shared and discussed and conflicts are allegedly represented, while civil society has a chance of asserting and renewing itself.

Constructive Technology Assessment (CTA), for instance, works as a real-time attempt to accompany the various phases of technological development in order to bridge the asymmetry between our power to act and our power to predict the outcomes of technology. This Promethean gap (Anders 1983) was restated by Hans Jonas ([1979], 1984), and according to Collingridge’s dilemma it points to the difficulty of controlling technology once it is unleashed, while not yet knowing how to adequately design it in the “laboratory”. Theoretically, CTA places itself in the initial design phase in order to favor participation by various stakeholders in the co-shaping of a given technology. Rather than accepting the paradox that the more exact a prediction about the future is, the more likely it is that deviations occur (Collingridge 1980), CTA opens up the process of construction of a technology, thus ‘constructing predictions’ along the way, about technology as well as about social order, whose fulfilment nevertheless remains an open question.
CTA is strongly dependent on an institutional and political context where stakeholders are able to cast doubts on a certain development or on a successful market on the brink of being born. Questions about which technologies should be debated and analyzed are framed according to the stakeholders who raise them. Such a framework places epistemic challenges on what is likely to be asked and on the criteria for understanding what is worth assessing beyond particular interests.

As seen above, the analysis of the impacts of automation processes on work and employment has been developed largely through an economic approach that underlines aggregate economic effects. Other approaches lead to distinct questions and issues. A sociological approach might highlight the extent to which changes in the work process on the shop floor due to the implementation of robots alienate workers or threaten their posts (Decker 2017). The nature of the problem is necessarily defined by perceptions of future gains and losses, by interests, and by narratives and groups that push their framing of the issues and of how these are affected by technology. TA has the task of balancing such views and exposing their contributions as well as their limitations, while recognizing the conditions that frame the questions in the exercise, namely its own limited resources in workforce, time and funding. For example, monetizing the phenomena in terms of the opportunities and threats, strengths and weaknesses at stake, as does the utilitarian ethics of cost-benefit analysis, leads to an easy and readily available numerical comparison that favors decision-making in terms of calculus, but has its own epistemic blind spot regarding the non- or less-quantifiable dimensions, such as moral values. This entails that TA is not undertaken from an outside or god’s eye view, but is rather a methodological approach that expands the decision-making process and its reflective dimension. It is a step, albeit a cautious one, in the reinforcement of democratic procedures, attempting to see through the usual opacity of technologies to allow a more collective, shared construction of technological futures.
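That blind spot can be made visible in a toy calculation, with all figures invented: whatever has no monetized entry simply drops out of the sum, however much it matters.

# A toy cost-benefit comparison for automating one production line.
# Every figure is hypothetical; the point is what the sum leaves out.
benefits = {
    "labour cost savings / year": 120_000.0,
    "fewer defects / year": 30_000.0,
}
costs = {
    "robot amortization / year": 90_000.0,
    "maintenance / year": 20_000.0,
}
# Dimensions with no agreed price carry no weight in the calculus.
unpriced = ["workers' sense of craft and belonging", "skills obsolescence"]

net_benefit = sum(benefits.values()) - sum(costs.values())
print(f"net benefit: {net_benefit:+,.0f}")  # +40,000: 'automate' wins
print("silently excluded:", ", ".join(unpriced))

However the unpriced dimensions are weighed in deliberation, they never enter this arithmetic, which is precisely why a calculus-friendly comparison cannot substitute for the reflective expansion of decision-making that TA proposes.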

Grunwald has recently argued that this expansion of TA should be taken as a broadening of responsibility towards the future, with an emphasis on the NEST fields in early stages of development. By committing to the unpredictability of technological effects and to how it usually thwarts our best expectations, the task dwells on the importance of interpreting and constructing meaning about inchoate futures and the ethical controversies they raise (van Lente 2017). It should nevertheless be noted that the institutional procedures gathered under umbrella terms such as RRI (Responsible Research & Innovation) belong to institutional contexts where funding and investment for breakthrough innovation, and for a concurrent assessment, is available and solidly grounded in a national scientific and technological system. Many national institutions bent on assessing certain products more thoroughly, beyond the more obvious issues (safety, privacy, health and the environment), are very frequently powerless, or out of step, when it comes to addressing all the development phases of technologies that are made outside their borders. Attention should be paid to how the dynamics between geopolitical centres and peripheries thus configure the institutional scope allowing TA to be applied before controversies are raised in the public sphere.

Corresponding to the variety of contexts where technological innovation occurs, a variety of methods have been devised to acknowledge the most appropriate ways of structuring discussion and identifying conflicts (Decker 2004). The kind of stakeholders involved, their interests, power and backgrounds, along with the State as a kind of guardian of collective interests, should help customize the methodological approach in order to deliver not a solution, but support for answers: knowledge, data and predictions about impacts and effects contribute to establishing the different practical syllogisms that exhibit why and how such means should or should not be employed, and according to how they promote which ends. Only then can communication establish the values in conflict according to a transparent structure of rationality. This entails and reinforces the normative orientation of any TA as an uncovering of the layers of values involved in a conflict.

Such remarks illustrate how the contextual essence of TA prevents it from being considered an algorithm-like process to be applied in every situation with only minor adjustments. Every assessment is an extension of practical reason, that is, of a process by which moral agency is constituted in praxis, and cannot be reduced to a universal theory of decision waiting to be applied and supplied with ever more theoretical knowledge (Jonas 1983). The contextual details, the where, how, why, when and to whom of moral agency, not only matter but define the appropriateness between knowledge and ends.

TA is then about extending democratic and participatory processes to the once more or less natural, automatic advancement of scientific and technological progress. Going outside the epistemic bubble of laboratories and industries can help scientists, engineers and developers to see that the meaning and design of technology is not only a shapeable and open matter, but something that always constitutes the social lifeworld of meaning and should hence engage the public (Habermas 1994). For industrial robots, for example, this means going beyond the question of how automation will impact jobs, employment and economic growth, in the abstract, to ask also what it means for employees and the public in general to co-exist more and more with robots and with artifacts made by robots. This meaning is cross-cultural and shaped not only by the real robots of factories and entertainment, but also by an entire industry of science fiction literature and films that helps imagination run its course against the background of scenarios that will shape everyday future life.

Visions and images of a likely future condition how technologies are dealt with in the present. While the term fiction suggests fantasy running wild, where “anything goes” regarding technology’s effects, such fictions actually contribute to structuring the questions and debates involving the social consequences and impacts of technology. This turn towards meaning is one of the shifts that TA has gone through, and it results from having to reconsider the connection between knowledge and praxis. The shortcomings of a predictive, positivist reading of how society develops through technology do not lead to neglecting the connection between technology and a better society. Rather, they draw more attention to how the complexity of that relation must be actively and attentively built. It is through these meanings that decisions can be reached which are supported by a deliberative process and which are, at the same time, not solid anticipations but visions of development.

 

Conclusion

In the face of considerable concerns regarding the impacts of the increased use of robots and automation processes in industry as well as in services, both in job substitution and in work organisation, TA can be an important resource to open up contemporary narratives about the future, explore ethical and social tensions and imagine collective futures which can engage actors in building joint visions centred on the public good. With the increased ubiquity of robotics and automation, it is important to develop credible institutions for public engagement, which bring together, hand in hand, both the images of their promises and the realities of their social situatedness. This is an important challenge at different levels. From the point of view of private organizations, workers’ unions and industrial relations, technology has largely been considered a given, which may be reacted upon but not necessarily questioned or discussed in its form. As such, the impacts of technology on employment are largely viewed from the economic perspective, reifying deterministic approaches. For TA practitioners, the assessment of the impacts of technologies on work has not been a central object of analysis either, being a particular subset of wider technological impacts on citizens. With many TA organisations linked to parliaments, and hence focused on citizen representation, the existence of specific fora where industrial relations issues are discussed and negotiated may provide little incentive for such a focus. In addition, TA organisations are often marred by limited funding and by pre-existing institutional designs which limit the development of workplace-oriented TA methods.

With this chapter we propose that, despite these constraints, the wider use of TA approaches in institutional contexts can contribute both to responding to business needs for a favorable stakeholder environment that supports innovation and, at the same time, to exploring the potential risks and impacts of these technologies, and how they can be addressed, with the participation of the different actors. Public concerns over these matters will help build and shape new collective narratives that grant a technology its space of emergence and development, and grant society the expectation of fulfilling objectives for the common good.

 


1 Collaborative Laboratory for Work, Employment and Social Protection (CoLABOR)
2 Collaborative Laboratory for Work, Employment and Social Protection (CoLABOR) and Centre for Social Studies, University of Coimbra (CES/UC)