A recent essay in The Atlantic warned that so-called “emotion AI” is increasingly appearing in workplaces, with systems claiming to interpret human feelings through facial expressions, voice patterns and even keyboard behaviour. The question now being raised is whether this same technology could eventually find its way into shipping.
Over the past decade, maritime operations have become heavily digitised. Ships are tracked via AIS, engines are monitored through sensors, and port equipment is increasingly managed through cameras and connected systems. The next step being explored by some researchers and technology providers is more sensitive: applying similar tools to people themselves.
That would mean monitoring crew members’ mood, stress levels or “attitude” in real time using cameras, wearables or voice analysis, all in the name of safety and operational performance. In effect, the digital dashboards would extend beyond machinery and cargo into the emotional state of seafarers.
In maritime research, early versions of this idea already exist. One project has tested an “emotion recognition” system for ships combining facial analysis, speech signals and body sensors to detect when a crew member may be stressed or overloaded, potentially triggering alerts to the bridge or shore teams. Another study suggests that analysing tone of voice and speech patterns could help identify mental health issues earlier than traditional questionnaires.
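To see what such a system implies in practice, consider a deliberately simplified sketch of a multimodal fusion pipeline of the kind described above. It is hypothetical and not taken from any of the projects mentioned: the field names, weights and alert threshold are invented for illustration, and a real system would rely on trained models and validated evidence rather than fixed rules.

```python
from dataclasses import dataclass

@dataclass
class CrewSignals:
    """One sampling window of hypothetical per-person sensor readings."""
    facial_stress: float  # 0-1 output of a face-analysis model (assumed)
    vocal_strain: float   # 0-1 output of a speech/tone model (assumed)
    heart_rate: int       # beats per minute from a wearable

def fuse_stress_score(s: CrewSignals) -> float:
    """Blend the three channels into one 0-1 'overload' score.

    The weights and heart-rate normalisation are invented for this
    example; they have no clinical or operational validation.
    """
    hr_component = min(max((s.heart_rate - 60) / 60.0, 0.0), 1.0)
    return 0.4 * s.facial_stress + 0.3 * s.vocal_strain + 0.3 * hr_component

ALERT_THRESHOLD = 0.7  # arbitrary cut-off, chosen only for the demo

def maybe_alert(crew_id: str, s: CrewSignals) -> None:
    """Print an alert when the fused score crosses the threshold.

    A real deployment might notify the bridge or a shore team instead.
    """
    score = fuse_stress_score(s)
    if score >= ALERT_THRESHOLD:
        print(f"ALERT: {crew_id} overload score {score:.2f}")

maybe_alert("AB-2", CrewSignals(facial_stress=0.8, vocal_strain=0.7, heart_rate=118))
```

Even in this toy form, the core issue is visible: the output is a weighted guess over proxy signals, yet once it lands on a bridge or shore dashboard it can easily be read as fact.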
Given well-documented concerns around fatigue, stress and suicide at sea, it is not difficult to understand why such systems are attracting attention.
But there is a major complication. Research on workplace emotion-tracking consistently shows that many employees feel uncomfortable or misunderstood when algorithms attempt to interpret how they feel. There are concerns that incorrect readings could influence scheduling, pay decisions or even job security, while some workers report adapting their behaviour to appear “acceptable” to the system rather than acting naturally.
In shipping and terminal environments, that raises a particularly serious operational risk. Safe operations depend on people being able to speak openly about fatigue, uncertainty or near-miss situations. If crew members believe they are being continuously assessed on mood or attitude, there is a risk they may prioritise appearing composed over reporting genuine concerns.
That could create a situation where management dashboards look stable and reassuring, while the reality on board is more complex and potentially less safe.
Regulators are already beginning to respond. The EU's new Artificial Intelligence Act prohibits AI systems that infer emotions in the workplace, except for narrow medical or safety reasons, citing risks to privacy and fundamental rights. For shipowners and terminal operators connected to European jurisdictions, emotion-tracking technology is therefore no longer just an ethical question, but a regulatory one.
The key challenge for the industry is where to draw the line. There may be limited cases where AI can help identify clear signs of fatigue or distress in tightly controlled safety scenarios, particularly if crew members are involved in system design and strong safeguards are in place. However, using inferred emotions to evaluate performance, influence contracts or manage behaviour is widely seen as a boundary that should not be crossed.
The practical question for companies is becoming increasingly direct. One simple checkpoint, suggested in that debate, could be applied to every new digital or safety system proposal: does this tool attempt to measure or score how people feel? If the answer is yes, the decision may need to move beyond the IT department to board-level scrutiny.
Shipping has already learned that more data does not automatically lead to better decisions, even where cargo and equipment are concerned. By the same logic, extending data collection to human emotions may not improve safety either.
Research into workplace monitoring also highlights a quieter concern: employees often feel they have little real choice but to accept intrusive systems if they want to keep their jobs, even when they see them as a serious invasion of privacy. That underlying pressure to comply, without broader public debate, is what makes the rise of emotion AI in operational environments such as shipping particularly sensitive.