Rather than being new, platform work extends pre-existing trends: greater control and surveillance, greater job precarity, and greater worker isolation and workplace fragmentation. Nevertheless, platform work distinguishes itself by its use of algorithmic management software to constantly monitor, organize, and evaluate workers. Together, these features adversely affect workers' physical and mental health. Platform workers are exposed to layers of risk: traditional occupational risks and risks arising from the use of artificial intelligence (AI) in the workplace. Although these risks are preventable, the widespread misclassification of platform workers as independent contractors shifts the legal and financial responsibility for preventing them onto the workers themselves, even though they lack the organizational means and power to do so. After mapping the risks to which platform workers are exposed, and the challenges they face in practice due to their fragmented employment setting (often combining offline work with platform work), this article examines the recent European Union (EU) initiatives affecting platform work: the Directive on improving working conditions in platform work, and the Artificial Intelligence Act (AI Act). Using a socio-legal methodology, the article aims to contribute to ongoing debates on the platform economy and AI by critically analyzing whether these two recent EU initiatives address, at a minimum, the challenges raised by the increased use of digital platforms and, in particular, whether they contain provisions that will effectively empower and protect platform workers. The article argues that the proposal for a Directive on platform work represents a potential step forward by recognizing the impact of algorithmic management on workers' health and safety, including by addressing psychosocial risks. The Directive, however, has its weaknesses and does not address all relevant issues: for example, it does not distinguish between psychosocial risk factors and work-related stress, or specify how to address them, and, in practice, there is a risk that only a limited number of workers will be able to benefit from its provisions. Meanwhile, the AI Act imposes additional requirements on the users of AI systems (the labor platforms) but does not provide additional rights for the end-users (the workers). Further, because the AI Act is a form of product safety regulation (horizontal regulation), it does not take into account the specificities of the employment relationship (e.g., the imbalance of power and subordination). The article concludes that these two EU initiatives show awareness of the issues currently arising from platform work but fail to address them effectively.