(AI) Denmark Draws a Red Line: AI Will Not Be Allowed to Interpret Emotions in the Workplace or the Classroom
As the new European framework for artificial intelligence is being rolled out, one issue is capturing the attention of legal professionals, compliance officers, and technical teams alike: what happens to systems that “read” emotions from faces, voices, or gestures? The Guidance on the prohibition of AI that infers emotions in workplaces and educational institutions, published by the Danish Agency for Digitalisation, offers one of the first interpretations of Article 5(1)(f) of Regulation (EU) 2024/1689. Its message is clear: it is prohibited to place on the market, put into service, or use AI systems intended to infer the emotions of natural persons in workplace or educational settings, except for medical or safety reasons. This is not merely a declaratory statement; it draws a legal boundary that will shape purchasing decisions, product design, and governance processes across public and private organisations throughout Europe.
The guidance focuses first and foremost on why a red line has been drawn here. Human emotions are contextual, cultural, and individual realities that cannot be reliably “objectified” through mathematical rules. A smile does not always mean happiness; a raised voice does not prove anger. For this reason, even when a system appears to be correct, its reliability is low, and its use can lead to bias and unfavourable treatment. When such use occurs within asymmetric power relationships (employer–employee, teacher–student), the risks to fundamental rights are multiplied. This underpins the specific prohibition in these contexts.
From a practical standpoint, the guidance structures the analysis around three cumulative conditions. First, there must be the placing on the market, putting into service, or use of an AI system with the specific purpose of inferring emotions. Second, the system must analyse biometric data (facial expressions, body language, voice tone or cadence) and, based on that observation, infer an identifiable emotional state (happiness, sadness, boredom, stress, enthusiasm, etc.). Third, the use must take place in a workplace or educational environment. If all three conditions are met, the practice is prohibited throughout the EU. If any one of them is missing, the case falls outside the scope of the prohibition, although it may still be unlawful for other reasons.
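To make the cumulative logic easier to scan, the sketch below models the triage as a simple boolean check: all three conditions must hold, and the narrow medical or safety exception must be absent, for a use case to fall under the prohibition. It is a minimal illustration only; the field names, the list of in-scope contexts, and the reduction of the first condition to a purpose flag are assumptions of this example, not terms defined in the guidance or the AI Act.

```python
# Minimal sketch of the three cumulative conditions as a triage check.
# Field names and the set of contexts are illustrative assumptions,
# not definitions taken from the guidance or Regulation (EU) 2024/1689.
from dataclasses import dataclass

WORK_OR_EDUCATION_CONTEXTS = {"workplace", "recruitment", "school", "university", "e-learning"}


@dataclass
class AIUseCase:
    infers_emotions: bool        # condition 1 (simplified): specific purpose of inferring emotions
    uses_biometric_data: bool    # condition 2: emotional state inferred from biometric data
    context: str                 # condition 3: setting in which the system is deployed
    medical_or_safety_purpose: bool = False  # narrow exception under Article 5(1)(f)


def is_prohibited(use_case: AIUseCase) -> bool:
    """Return True only if all three cumulative conditions are met and no exception applies."""
    in_scope_context = use_case.context in WORK_OR_EDUCATION_CONTEXTS
    all_conditions_met = (
        use_case.infers_emotions
        and use_case.uses_biometric_data
        and in_scope_context
    )
    return all_conditions_met and not use_case.medical_or_safety_purpose


# Example: meeting-room cameras scoring employee "enthusiasm" -> prohibited.
meeting_cameras = AIUseCase(infers_emotions=True, uses_biometric_data=True, context="workplace")
print(is_prohibited(meeting_cameras))  # True
```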
The first condition concerns the material scope of the rule. For a practice to be prohibited, the system must have been placed on the market, put into service, or used within the European Union with the specific purpose of inferring emotions. This means that the mere existence of a technology capable of doing so is not sufficient: it must have been deployed or made available on the market for that specific purpose. In this way, the prohibition applies both to those who design and sell such systems and to the organisations that implement them.
The second condition is the technical hinge of the analysis and deserves closer attention. It is not enough for a system to capture traits; it must explicitly infer an emotion. The guidance stresses that the concept of “inferring” requires interpretation: transforming physical or behavioural traits into an emotional conclusion such as “disinterest,” “frustration,” or “satisfaction.” It also clarifies that not all behavioural analysis falls within this scope: the mere “detection” of gestures or the counting of smiles, without emotional inference, does not by itself trigger the prohibition. What is prohibited is attributing an emotional state to a person based on biometric data.
The third condition defines the contexts. “Workplace” is interpreted broadly: offices, factories, warehouses, virtual environments such as Teams or Zoom, remote work from home, public spaces where an employment relationship exists, and recruitment and selection situations (interviews, tests). “Educational institution” is also understood expansively: primary and secondary education, universities, vocational training, adult education, and e-learning platforms when their use is mandatory. The focus is on individuals who are in a relationship of subordination or dependence vis-à-vis the organisation.
To ground the analysis, the guidance provides two illustrative scenarios that are readily recognisable today. In the first, a company installs AI-powered cameras in meeting rooms to analyse employees’ voices and faces in order to measure “enthusiasm” during presentations; the authority concludes that this case is prohibited: there is an AI system, emotional inference based on biometrics, and use in the workplace, with no medical or safety justification. In the second, schools use technology that monitors students’ faces in the classroom to inform teachers in real time whether a student is bored, tired, or frustrated; this is also prohibited, as all three conditions are met, with an additional imbalance given that minors are involved.
The guidance does not ignore the existence of exceptions. The AI Act allows the inference of emotions only where the purpose is medical or related to safety, and only where that objective is clearly justified and documented, and where no equally effective, less intrusive alternatives exist. Examples mentioned include therapeutic uses (support for people with autism) or accessibility uses (assistance for blind or deaf individuals). By contrast, providing management with a “thermometer” of employee satisfaction or measuring students’ “attention” does not qualify as safety or medicine and does not fall within the exception.
How should organisations respond?
The likely starting point is to review any AI initiative that touches on voice, image, or gestures in HR, recruitment, and performance evaluation; monitoring of customer service teams; learning analytics; or classroom surveillance. If the real purpose is to infer emotional states of staff or students, the legal conclusion is straightforward: stop the project, do not acquire the system, or do not deploy it. The prohibition applies from the outset: the practice cannot be made lawful as long as the intention to analyse emotions in these settings remains.
Second, organisations should rethink their requirements for manufacturers and suppliers. The guidance emphasises that a system may fall outside the prohibition if it does not infer emotions but instead limits itself, for example, to measuring objective indicators of interaction in meetings (speaking time, turn-taking, interruptions) or non-emotional pedagogical metrics in the classroom (submissions, observable participation without emotional labelling). This requires product redesign and contractual and technical guarantees that no covert emotional inference takes place. The nuance is important: even if emotional classification is not shown to the user, if it exists, its use in work or education remains prohibited.
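As an illustration of the kind of non-emotional indicators the guidance leaves outside the prohibition, the sketch below computes speaking time, turn-taking, and interruptions from diarised speaking segments without ever deriving an emotional label. The data model (speaker, start time, end time) and the metric definitions are assumptions made for this example, not requirements drawn from the guidance.

```python
# Illustrative sketch: objective interaction metrics from diarised speaking
# segments (speaker, start, end). No emotional label such as "enthusiasm"
# or "boredom" is inferred at any point. Data model and metrics are assumptions.
from collections import defaultdict

segments = [
    ("alice", 0.0, 42.5),
    ("bob", 40.0, 65.0),    # starts before alice finishes -> counted as an interruption
    ("alice", 65.0, 90.0),
]

speaking_time = defaultdict(float)
turns = defaultdict(int)
interruptions = defaultdict(int)

previous_speaker, previous_end = None, 0.0
for speaker, start, end in segments:
    speaking_time[speaker] += end - start
    turns[speaker] += 1
    if previous_speaker and speaker != previous_speaker and start < previous_end:
        interruptions[speaker] += 1
    previous_speaker, previous_end = speaker, end

print(dict(speaking_time), dict(turns), dict(interruptions))
```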
Third, it is essential to distinguish emotions from physical states. A system that detects drowsiness in professional drivers based on blinking and vehicle behaviour may qualify for the safety exception, provided it can be justified as protecting life and health and that no less intrusive alternative exists. By contrast, “detecting stress” in a call-centre operator through voice analysis to adjust their script is neither a safety measure nor a medical intervention; it is a prohibited use. Purpose and proportionality are not presumed; they must be demonstrated.
Fourth, at the governance level, the guidance aligns with the principle of proactive accountability: it is not enough to remove a reference to “mood”; organisations must audit functionalities and document that the system does not perform emotional inference in these contexts. In procurement, contracts should include clauses explicitly prohibiting the present or future activation of emotion-recognition modules in workplace or educational settings and enabling technical audits. Internal compliance programmes should update AI policies, catalogues of prohibited practices, and mechanisms for rejecting use cases that may be attractive from a business perspective but are legally untenable.
Fifth, it is important to remember that the AI Act coexists with the GDPR and labour and education laws. A case that does not fall within the prohibition of Article 5(1)(f) may still be unlawful for other reasons: processing biometric data without a legal basis, lack of transparency or data minimisation, or disproportionate impacts on equality or non-discrimination. The guidance makes this clear: its focus is solely on the category of prohibited emotional inference in work and education; it does not prejudge compliance with other applicable laws.
In conclusion, the Danish guidance on the prohibition of AI systems that infer emotions may serve as an early reference point for the application of the European AI Act. Its interpretation clarifies the boundaries of a practice that, due to its potential impact on privacy and dignity, is prohibited in workplace and educational environments except for medical or safety reasons.
The document reinforces the idea that organisations must review any technology that uses biometric data to infer emotions, ensuring that its uses strictly comply with the law and do not create power imbalances or intrusive forms of processing. In doing so, the guidance helps to consolidate a framework of trust in the development and deployment of artificial intelligence within the European Union.
