Guest post: Why investors should care about business and human rights in the digital age
Isabel Ebert, of the University of St Gallen, discusses why investors should care about business and human rights in the digital age.
Isabel is a speaker on the third module, Business and Human Rights, of the Law and Development Training Programme, which takes place on Saturday 9 June 2018. Registration for the individual module is still available.
Why investors should care about business and human rights in the digital age
Isabel Laura Ebert, University of St Gallen
When you think of business and human rights, you probably picture African children mining cobalt or South Asian women working in garment factories. Think twice: these are only two of the many important examples on the wide spectrum that business and human rights encompasses, and the digitization of business operations is changing the standard interpretation of the scope of human rights in the business context.
In general, the concept of business and human rights holds that companies should take responsibility for their activities’ impact on human rights and take action to prevent, mitigate and eradicate negative effects throughout their operations. This applies to their workforce and to their supply and value chains.
In the digital age, a business and human rights approach means that companies should care about privacy. A company should pay attention to whether privacy is systematically infringed in its operations, because systematic neglect of privacy can have repercussions on other human rights, such as the right to health. How do privacy and the right to health relate? One might seem like a classic first-world problem, while the other would most likely be associated with developing countries. Again, think twice.
Let’s assume a major health insurance provider uses data profiling to analyse potential clients and estimate whether to accept them as customers or whether their risk is too high for a high-quality health insurance program. Suddenly a potential privacy infringement disclosing genetic diseases becomes a health-access issue. Will potential patients be able to afford expensive, cutting-edge medical treatment if they are very unlikely to be accepted into an insurance scheme covering the necessary progressive care? Should investors in that very health insurer care? And how could a company collecting masses of data about its potential clients publicly legitimize such a privacy-infringing step without losing its clients’ trust?
The investor angle on business and human rights in a digital age is an interesting one. Call it Big Data, artificial intelligence or algorithms: your health care provider, your telecommunications provider, your internet provider, your online dating platform, your transport subscription service or your own employer is already collecting, or will very soon collect, increasing amounts of personal data about your private life, friends, family and hobbies, in order to make predictions about your behaviour and thus your market value for its business.
Data has become the new currency of the global market. Increasingly, this will affect how investors hold the companies in their portfolios to account for human rights abuses in the digital sphere, and how infringements lead to legal cases or to divestment. Take the Facebook and Cambridge Analytica case, which concerns the duty of care that data companies owe in handling privacy-related user rights. Investors are currently taking Facebook to court because the revelation that user data had been harvested without explicit permission and subsequently used by Cambridge Analytica, a political consulting firm connected to U.S. President Donald Trump, resulted in a temporary slump in Facebook’s share price. Suddenly privacy turns into a monetary issue. And it will continue to be one.
Moreover, studies show that employees’ trust in their company decreases when they feel under surveillance, and that their productivity sinks as a result. So privacy also matters from an internal company perspective in a digital age, even as modern surveillance technology becomes ever more affordable and easier to apply. In many service-intensive industries, a company’s value lies in the expertise and high qualifications of its workforce. As an executive, would you want to see your employees’ productivity decrease for the sake of infringing their privacy with tech-based monitoring practices that resemble surveillance techniques? Certainly not: it makes no sense from an economic perspective, even leaving the moral argument aside.
Scholars argue that the inclusive way forward, aligning workforce and business strategy for digitization, means being critically data literate, putting a participatory approach to tech-based analytics in place, and being ethically aware of the adverse effects of applying technology in staff management and business operations.
The UN Guiding Principles on Business and Human Rights, adopted in June 2011 by the UN Human Rights Council, remain relevant in the digital age. They serve as an important analytical framework for balancing companies’ responsibilities to protect and respect human rights in a digital business world.
*The views, opinions and positions expressed within these guest posts are those of the author alone and do not represent those of A4ID