Sustainable Tech or Greenwashing? The IoT Industry’s E-Waste Reckoning
The Internet of Things (IoT) has become a defining feature of the modern digital economy, embedding smart devices into offices, factories and homes across Europe and the world. While the promise of interconnected devices is efficiency and innovation, the rapid proliferation of IoT has precipitated a mounting environmental crisis: electronic waste, or e-waste. The challenge for the tech industry is no longer simply to innovate, but to do so sustainably, ensuring that the benefits of connectivity do not come at the expense of the planet.

The scale of the problem is daunting. According to the Global E-Waste Monitor 2024, only 22.3% of the world’s e-waste was documented as formally collected and recycled, leaving the vast majority to contribute to pollution and resource depletion. The EU’s Circular Economy Action Plan and the forthcoming Ecodesign Regulation, effective from 2025, aim to address these challenges by imposing robust requirements on product design, repairability and recyclability. Under the new rules, connected devices must receive software updates for at least seven years, feature standardised repairability scores and meet high recyclability thresholds. These measures are designed to extend device lifespans, reduce waste and promote a circular economy in which materials are reused rather than discarded.

Legacy manufacturers face significant hurdles in meeting these new standards. Siemens Energy’s recent recall of 400,000 non-compliant building sensors is a case in point, illustrating the difficulties of retrofitting existing product lines to comply with evolving regulations. Meanwhile, innovative startups such as FairCircular are pioneering biodegradable smart sensors, demonstrating that sustainability and technological advancement can coexist.
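Design requirements of this kind lend themselves to automated compliance checks. The sketch below encodes the seven-year update rule described above; the repairability and recyclability floors are illustrative assumptions, since the actual values will be set in product-specific delegated acts.

```python
from dataclasses import dataclass
from datetime import date

# The seven-year update window comes from the rules described above;
# the repairability and recyclability floors are hypothetical values
# chosen purely for illustration.
MIN_UPDATE_YEARS = 7
MIN_REPAIRABILITY = 6.0   # assumed score out of 10
MIN_RECYCLABILITY = 0.75  # assumed fraction of recoverable material

@dataclass
class Device:
    name: str
    launch: date
    update_support_until: date
    repairability_score: float   # 0-10
    recyclable_fraction: float   # 0-1

def ecodesign_issues(d: Device) -> list[str]:
    """Return a list of (illustrative) ecodesign compliance gaps."""
    issues = []
    supported_years = (d.update_support_until - d.launch).days / 365.25
    if supported_years < MIN_UPDATE_YEARS:
        issues.append(f"updates guaranteed for only {supported_years:.1f} years")
    if d.repairability_score < MIN_REPAIRABILITY:
        issues.append(f"repairability {d.repairability_score} below {MIN_REPAIRABILITY}")
    if d.recyclable_fraction < MIN_RECYCLABILITY:
        issues.append(f"recyclable fraction {d.recyclable_fraction:.0%} too low")
    return issues

# A legacy sensor of the kind that would fail the new requirements.
sensor = Device("legacy-sensor", date(2023, 1, 1), date(2027, 1, 1), 4.5, 0.6)
print(ecodesign_issues(sensor))
```

Audits like the Siemens Energy recall suggest why such checks are worth running at design time rather than after a product line has shipped.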
The Circular WEEE Platform, launched to enhance the collection and recycling of e-waste across Europe, provides a user-friendly online tool for coordinating waste management, connecting producers, collectors and recyclers, and fostering the principles of a circular economy.

Supply chain transparency is another critical frontier. The Corporate Sustainability Due Diligence Directive (CSDDD) requires companies to audit their suppliers’ environmental practices rigorously, ensuring that sustainability is embedded throughout the value chain. Apple’s 2025 audit revealed that 61% of its Asian component suppliers failed to meet mineral sourcing standards, prompting a €23 billion overhaul of its supply chain. Such revelations underscore the need for robust due diligence and the risks of superficial “greenwashing” claims.

Innovative solutions are emerging to tackle these issues. Helsinki’s CircularID consortium employs blockchain technology to track device components throughout their lifecycle, enhancing accountability and facilitating recycling. In Barcelona, the GreenTech Hub retrains former oil industry workers in e-waste recycling, exemplifying how sustainability initiatives can also drive social impact. Urban mining, the recovery of valuable materials from discarded electronics, is gaining traction as a strategy for reducing resource exploitation and supporting the circular economy.

As environmental concerns become central to corporate strategy, companies must move beyond superficial claims and demonstrate measurable progress in sustainability. The IoT industry’s ability to deliver on circular economy promises will be a litmus test for responsible technology in the coming decade. The stakes are high: the world generated 62 million tonnes of e-waste in 2022, with projections reaching 82 million tonnes by 2030 if current trends continue.
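The component-tracking idea behind initiatives like CircularID can be approximated in a few lines: each lifecycle event is linked to the hash of the previous record, so any later edit is detectable. This is a simplified sketch of the concept only; a real deployment would use a distributed ledger rather than an in-memory list.

```python
import hashlib
import json

def record_event(chain: list[dict], component_id: str, event: str) -> list[dict]:
    """Append a tamper-evident lifecycle event, linked to the previous
    record's hash (a simplified stand-in for a blockchain ledger)."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = {"component": component_id, "event": event, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps({k: body[k] for k in ("component", "event", "prev")},
                   sort_keys=True).encode()).hexdigest()
    return chain + [body]

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    prev = "genesis"
    for rec in chain:
        body = {k: rec[k] for k in ("component", "event", "prev")}
        if rec["prev"] != prev:
            return False
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if expected != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

# Hypothetical lifecycle of one tracked component.
chain: list[dict] = []
for event in ("manufactured", "installed", "collected", "recycled"):
    chain = record_event(chain, "sensor-battery-001", event)
print(verify(chain))  # True
```

The property that matters for recycling accountability is that a recycler can verify the full provenance of a component without trusting the producer's current database.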
For technology companies, the message is clear: sustainability is no longer optional, and those who lead on this front will define the future of responsible innovation.

References:
- European Commission: Circular Economy Action Plan
- ENISA: Guidelines for Securing the Internet of Things
- University of Manchester: Centre for Data Science and AI
- Circular WEEE Platform
- Corporate Sustainability Due Diligence Directive (CSDDD)
- Waste from Electrical and Electronic Equipment (WEEE) – European Commission
Algorithmic Management in the Gig Economy: Balancing Efficiency and Equity
The gig economy has transformed the European labour market, offering millions of workers flexibility and autonomy that traditional employment cannot match. Yet this transformation has also ushered in a new era of algorithmic management, where opaque digital systems allocate tasks, monitor performance and even determine pay. For many gig workers, the algorithm has become the “invisible boss” – efficient, but often inscrutable and unaccountable.

The European Union’s Platform Work Directive, adopted in 2024, is a pioneering legislative response to these challenges. The Directive mandates unprecedented transparency for digital labour platforms, compelling them to disclose the criteria and variables that influence algorithmic decisions: how tasks are distributed, how pay is calculated and how performance ratings are assigned. The aim is to empower workers with the information they need to understand and contest automated decisions, countering the power imbalance that has long characterised platform work.

Research published by Eurofound earlier this year paints a stark picture: in Paris, 87% of food delivery couriers receive work instructions primarily through algorithms, yet the majority cannot explain how these decisions are made. This opacity has real-world consequences. In Lisbon, labour courts reported a 214% increase in “algorithmic grievance” cases, as workers challenge decisions they perceive as arbitrary or unfair. The Directive’s transparency provisions are designed to address these conflicts by fostering accountability and human oversight.

Spain’s 2021 “Ley Rider” offers a compelling case study in the practicalities of algorithmic management reform. The law requires delivery platforms to share real-time data on surge pricing, provide advance notice of changes to rating systems and establish worker-led audit committees.
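Requirements like these translate into a concrete engineering constraint: an allocation engine must expose the factors behind each assignment, not just the assignment itself. A minimal sketch of what that could look like (the factor names and weights are hypothetical, not any platform’s actual model):

```python
# Hypothetical scoring factors and weights for matching a courier to a
# job; a negative weight penalises distance, positive weights reward
# rating and acceptance rate. These numbers are purely illustrative.
WEIGHTS = {"distance_km": -0.5, "rating": 2.0, "acceptance_rate": 1.0}

def score_courier(courier: dict) -> tuple[float, dict]:
    """Score one courier and return the per-factor breakdown."""
    contributions = {f: WEIGHTS[f] * courier[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

def allocate(couriers: list[dict]) -> dict:
    """Pick the highest-scoring courier and attach the explanation a
    worker (or auditor) could use to contest the decision."""
    ranked = []
    for c in couriers:
        total, factors = score_courier(c)
        ranked.append({"id": c["id"], "score": total, "factors": factors})
    ranked.sort(key=lambda r: r["score"], reverse=True)
    return ranked[0]

couriers = [
    {"id": "a", "distance_km": 1.2, "rating": 4.6, "acceptance_rate": 0.9},
    {"id": "b", "distance_km": 0.4, "rating": 4.1, "acceptance_rate": 0.7},
]
decision = allocate(couriers)
print(decision["id"], decision["factors"])
```

The point of returning the factor breakdown alongside the decision is that disclosure becomes a by-product of the design rather than a report reconstructed after a grievance is filed.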
The results have been mixed but instructive: rider satisfaction improved significantly, but platforms like Glovo reported profit declines, highlighting the delicate balance between ethical management and business viability.

The Directive goes further than national laws, extending protections across the EU and introducing new rights for gig workers. These include the right to seek human intervention in automated decisions, the right to challenge and review those decisions, and limitations on the processing of personal data for algorithmic management purposes. The provisions are grounded in the General Data Protection Regulation (GDPR) but are tailored to the unique challenges of platform work, focusing on accountability, transparency, explainability and the prevention of bias.

Beyond compliance, some platforms are embracing transparency as a competitive advantage. Hilfr, a Copenhagen-based home-cleaning platform, involved workers in algorithm training, resulting in a 29% increase in retention rates. Such initiatives demonstrate that ethical algorithmic management can enhance trust, reduce turnover and support long-term sustainability.

However, challenges remain. The Directive’s effectiveness depends on robust enforcement and ongoing adaptation to technological change. Critics argue that some provisions remain ambiguous, risking loopholes that could undermine worker protections. There is also the question of scalability: while large platforms may have the resources to comply, smaller firms may struggle with the costs of transparency and audit requirements.

In conclusion, algorithmic management in the gig economy presents both opportunities and risks. The EU’s regulatory framework sets a high bar for transparency and fairness, challenging platforms to rethink their operational models. For workers, these changes promise greater agency and protection; for platforms, they signal a shift towards more responsible, human-centred innovation.
The coming years will test whether these reforms can deliver on their promise, but the direction of travel is clear: the era of the invisible, unaccountable algorithm is drawing to a close.

References:
- Eurofound: Regulatory Responses to Algorithmic Management in the EU
- European Parliament: Platform Work Directive
- Oxford Human Rights Hub: Improving Working Conditions in the Gig Economy
- FEPS: Algorithmic Management in Europe
- Kluwer Law: Regulating Algorithmic Management in the Platform Work Directive
The EU AI Act Decoded: A Compliance Imperative for HR Leaders
The European Union’s Artificial Intelligence Act (AI Act), which entered into force in August 2024, marks a watershed moment in the regulation of artificial intelligence across the continent and beyond. For HR leaders, this legislation is not simply a bureaucratic hurdle but a call to fundamentally rethink how AI is used in recruitment, talent management and workplace decision-making. As the first comprehensive legal framework of its kind, the Act’s implications for hiring practices are profound, with its risk-based approach, transparency requirements and human oversight obligations reshaping the landscape of digital recruitment.

At the heart of the AI Act lies a four-tier risk classification system: unacceptable, high, limited and minimal risk. AI systems used in recruitment, candidate screening, performance evaluation and employment decision-making are categorised as “high-risk”. This classification is not arbitrary; it reflects the recognition that automated decisions in these contexts can have life-changing consequences for individuals and significant reputational and legal risks for organisations. High-risk systems are subject to a suite of obligations, including rigorous risk assessments, transparency and documentation of decision logic, ongoing monitoring and, crucially, meaningful human oversight.

Transparency is a central pillar of the Act. Employers must now ensure that any AI system used in recruitment is explainable, with clear documentation of how decisions are made. The era of inscrutable “black box” algorithms is over. Instead, HR teams must be able to articulate – both internally and to candidates – the criteria, data and logic that underpin automated decisions. This not only enables candidates to request meaningful explanations of outcomes but also supports organisations in identifying and mitigating bias.
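In practice, a first compliance step is mapping each HR use case to its tier and the obligations that follow. The four tiers are taken from the Act; the mapping of specific use cases below is an illustrative assumption, not legal advice.

```python
# The four tiers come from the AI Act; which use case lands in which
# tier is an illustrative assumption and would need legal review.
HR_USE_CASES = {
    "emotion_recognition_at_work": "unacceptable",  # a prohibited practice
    "cv_screening": "high",
    "video_interview_scoring": "high",
    "performance_evaluation": "high",
    "hr_chatbot": "limited",        # transparency duties only
    "spam_filter": "minimal",
}

def obligations(use_case: str) -> list[str]:
    """Return a (sketched) obligation checklist for an HR use case.
    Unknown use cases default conservatively to high-risk."""
    tier = HR_USE_CASES.get(use_case, "high")
    if tier == "unacceptable":
        return ["prohibited - do not deploy"]
    if tier == "high":
        return ["risk assessment", "decision-logic documentation",
                "human oversight", "ongoing monitoring"]
    if tier == "limited":
        return ["disclose AI use to candidates"]
    return []  # minimal risk: no specific obligations

print(obligations("video_interview_scoring"))
```

A triage table like this is only a starting point, but it makes the key asymmetry visible: most AI in recruitment lands in the high-risk tier, where the full obligation set applies.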
For example, if an AI-powered video interview tool is used, candidates must be informed of its role and the system’s decision-making process must be open to scrutiny.

Human oversight is equally critical. The Act mandates that certified HR professionals or other qualified staff be involved in validating AI-driven decisions, especially in high-stakes scenarios such as promotions, terminations or performance appraisals. This requirement addresses a well-documented skills gap: a 2025 KPMG study found that 68% of European firms lacked sufficient expertise in AI governance within their HR departments. The AI Act’s AI literacy provision, effective from February 2025, obligates employers to ensure staff are adequately trained in the operation and risks of AI systems.

The Act’s extraterritorial reach is another game-changer. Any company deploying AI systems that process data of EU candidates, or operating within the EU market, must comply – regardless of where the provider or deployer is based. This has already prompted global firms to invest heavily in compliance; for example, Singapore’s GovTech reportedly spent €2.3 million adapting its global hiring platform to meet EU standards.

The timeline for compliance is phased, but the requirements for high-risk systems – including those used in recruitment – will be fully enforced by August 2026. Non-compliance is not a trivial risk: fines can reach up to €35 million or 7% of global annual turnover, whichever is higher. The consequences extend beyond financial penalties, however: failure to comply can lead to complaints, investigations, litigation and significant reputational damage, not to mention the operational restrictions that may be imposed by regulators.

Forward-thinking organisations are already moving beyond mere compliance.
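One increasingly common practice is publishing fairness metrics for screening tools. A minimal demographic-parity check might look like the sketch below; the outcome data is invented, and the 0.8 threshold (echoing the well-known “four-fifths rule”) is an illustrative choice rather than anything mandated by the Act.

```python
# Each outcome is (group label, advanced-to-next-round?). The data and
# the 0.8 flagging threshold are hypothetical.
def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Fraction of candidates in each group who advanced."""
    totals: dict[str, int] = {}
    passed: dict[str, int] = {}
    for group, advanced in outcomes:
        totals[group] = totals.get(group, 0) + 1
        passed[group] = passed.get(group, 0) + int(advanced)
    return {g: passed[g] / totals[g] for g in totals}

def parity_ratios(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Each group's selection rate relative to the most favoured group."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

outcomes = ([("A", True)] * 40 + [("A", False)] * 60 +
            [("B", True)] * 25 + [("B", False)] * 75)
ratios = parity_ratios(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios, flagged)
```

Demographic parity is only one lens on fairness, but even this simple ratio gives HR teams a number to monitor, publish and act on.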
Dutch fintech Adyen, for instance, publishes real-time bias metrics for its AI recruitment tools, while Barcelona-based JobFluent offers candidates “explanation scores” to foster transparency and trust. These initiatives demonstrate that the AI Act can be a catalyst for responsible innovation, helping companies build more inclusive, trustworthy and competitive hiring processes.

The EU AI Act demands a paradigm shift for HR leaders. It is no longer sufficient to prioritise efficiency and automation; fairness, transparency and human dignity must be at the centre of digital recruitment. By engaging proactively with the Act’s requirements, HR professionals have an opportunity not only to avoid regulatory pitfalls but to lead the way in building a more equitable and trustworthy future of work.

References:
- European Commission: Ethics Guidelines for Trustworthy AI
- Clifford Chance: What Does the EU AI Act Mean for Employers?
- Hunton Andrews Kurth: The Impact of the EU AI Act on Human Resources Activities
- Nature: Algorithmic Discrimination in Hiring