HR Analytics brings powerful tools to businesses, and with those tools comes great responsibility.
Companies now have access to unprecedented volumes of data, along with the algorithms and tools that can help them process and interpret it.
This brings up questions of ethics for companies and institutions. The data they gather can be personal and revealing. It can influence decision-making, but is it always right?
What is legal isn’t always ethical
The legal responsibilities companies have for data privacy and protection were recently formalised in the GDPR, which makes their obligations for protecting data clear from a legal perspective. But over and above these legal responsibilities, companies have ethical ones.
What is the difference between legal and ethical? When we talk about ethics, we mean accepted social norms of behaviour. You’ll hear a business owner argue that they did nothing wrong from a legal perspective, yet people will still believe they didn’t do the “right thing.”
A common example (especially in the US) is the raising of prescription drug prices. Drug companies have the legal right to do so, but when the wider ethical context is examined, is it right? The company may already have recouped its development costs for the drug, and raising prices can put it out of reach for the patients who need it.
Archie Carroll developed the Corporate Social Responsibility Pyramid in the ’90s, outlining a framework for CSR. Ethical responsibilities form one pillar of the framework, which states: “the normative expectations of most societies hold that laws are essential but not sufficient.”
It further states: “Part of the ethical expectation is that businesses will be responsive to the ‘spirit’ of the law, not just the letter of the law. Another aspect of the ethical expectation is that businesses will conduct their affairs in a fair and objective fashion even in those cases when laws do not provide guidance or dictate courses of action.”
Good riddance to bad data
When it comes to HR Analytics and data protection, we’re all required to follow the letter of the law, but can you think of situations where data may be used legally, yet unethically? One of the problems I can see is that algorithms may be used for decision-making without clear recourse or explainability.
The value of the analyses an algorithm can provide is limited by the quality of the data and the skill of those conducting the analysis. What if the data used was garbage in the first place? In the context of HR, what if bad data were to affect the opportunities offered to certain people, or the benefits made available within an organisation?
A study by Sky News and Cardiff University examined the use of algorithms in councils and police departments across the UK. While those organisations stressed that the data collected didn’t make the final decision for them, critics remain concerned that it may heavily influence those decisions without clear explanation.
Crucially, they point out, the results you get from an algorithm can frame your perception of what is available. For example, when Amazon makes suggestions based on what other people bought, you might ignore those, but you also might be influenced to buy what “everyone else” has. The study found that there is almost no oversight in this new field and data is being used for a range of purposes, from deciding which crime reports to investigate, to analysing benefit entitlements.
“In the digital age, we are all data. But will we like the way we are being processed?” (Sky News Report)
AI and bad data
Another issue to consider is the use of AI, which typically relies on machine learning. This has already been found, at times, to reinforce stereotypes or biases. For example, Amazon discovered that its internal recruiting tool was rejecting female candidates. The data the AI was using came from analysing ten years of submitted résumés; given male dominance in the tech industry, this meant that the AI Amazon used was not gender neutral. (Beyond ethics, of course, this example potentially strays into legal territory: gender discrimination.)
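To see how this happens mechanically, here is a minimal sketch in Python. The data is synthetic and the feature names (such as the gendered_term flag) are invented for illustration, not taken from Amazon’s actual system; it shows how a model trained on historically skewed hiring decisions can learn to penalise a harmless proxy for gender, even though gender itself is never an input.

```python
# A minimal, self-contained sketch (synthetic data; hypothetical feature
# names, not Amazon's actual system) of how a screening model can absorb
# historical gender bias without ever seeing gender as an input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Candidate features: years of experience, plus a resume term that
# correlates with gender (e.g. a women's sports club) but not with ability.
experience = rng.normal(5.0, 2.0, n)
is_female = rng.random(n) < 0.2              # historically male-dominated pool
gendered_term = (is_female & (rng.random(n) < 0.8)).astype(float)

# Historical labels: past recruiters rewarded experience but also,
# unfairly, penalised female candidates.
hired = (experience - 1.5 * is_female + rng.normal(0.0, 1.0, n)) > 5.0

# Train on history, deliberately excluding gender from the inputs.
X = np.column_stack([experience, gendered_term])
model = LogisticRegression().fit(X, hired)

# The proxy term ends up with a negative weight: the model has learned
# the old discrimination from the data, not from any explicit rule.
print(dict(zip(["experience", "gendered_term"], model.coef_[0].round(2))))
```

Running this shows a clearly negative weight on the proxy term: nobody programmed the bias in; the model inferred it from the history it was given.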
In fact, bias of many sorts is inherent in humans. This is highlighted in a recent documentary, Bias, in which the filmmaker illuminates the nature of implicit biases by discovering her own. Bias can lead to unfair profiling in the workplace and in other areas of life. Naturally, if humans are doing the programming and the interpreting, bias can affect the results of data analysis. In the Sky News/Cardiff University report, this was a key criticism: algorithms may reinforce bad data from the past.
In terms of AI, a Technology Review article describes how bias can arise from three sources: how the problem is initially framed, how the data is collected, and how the data is prepared. For example, if a bank were to frame the problem as “selling the most profitable products,” the AI might well come up with answers such as subprime mortgages, an ethical nightmare. And if a company were to ask a recruiting AI to find candidates most like its previously successful hires, it could end up with gender bias in either direction, depending on the predominant gender in the industry.
In her book, Weapons of Math Destruction, Cathy O’Neil describes “the dark side of big data.” She discusses how many models being used today are opaque and unregulated, leading to the reinforcement of discrimination. Where a student’s ability to get a loan for university is dictated by their ZIP code, algorithms can add to a cycle of intergenerational poverty.
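O’Neil’s ZIP-code point is worth making concrete. The sketch below uses entirely synthetic data and invented names (group_b, in_zip_1), not any real lending system, to show that simply dropping a protected attribute from a model does not remove bias when another feature, such as a postcode, still encodes it.

```python
# A minimal sketch (entirely synthetic data, invented variable names) of
# the ZIP-code problem: excluding the protected attribute does not remove
# bias when another feature still encodes it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

# Group B is concentrated in one postcode area.
group_b = rng.random(n) < 0.3
in_zip_1 = np.where(group_b, rng.random(n) < 0.9, rng.random(n) < 0.1)

# Historical approvals tracked income but were also lower for group B;
# the postcode itself had no causal effect on repayment.
income = rng.normal(50.0, 10.0, n)
approved = (income - 10.0 * group_b + rng.normal(0.0, 5.0, n)) > 48.0

# Train "fairly": group membership excluded, postcode kept.
X = np.column_stack([income, in_zip_1])
model = LogisticRegression().fit(X, approved)

# The approval gap survives, because the postcode stands in for the group.
pred = model.predict(X)
print(f"group A approval rate: {pred[~group_b].mean():.2f}")
print(f"group B approval rate: {pred[group_b].mean():.2f}")
```

The predicted approval rates for the two groups remain far apart, even though the model never saw group membership directly.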
The European Commission has produced Ethics Guidelines for Trustworthy AI as a way to try to combat biases and other data concerns with AI. This report makes for interesting reading, describing trustworthy AI as having three components:
- It should be lawful, complying with all applicable laws and regulations;
- It should be ethical, ensuring adherence to ethical principles and values; and
- It should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm.
On ethics they note: “Laws are not always up to speed with technological developments, can at times be out of step with ethical norms or may simply not be well suited to addressing certain issues. For AI systems to be trustworthy, they should hence also be ethical, ensuring alignment with ethical norms.”
Ethical responsibilities
With issues such as these in mind, the term “corporate digital responsibility” has recently been coined. Companies are increasingly recognising these ethical responsibilities, ensuring that their internal policies reflect them and demonstrating this through measures such as publishing clear AI policies on their websites. Issues with data and ethics are now frequently reported in the news.
For example, I mentioned earlier the concern that data may be used without clear recourse or explainability; the same concern applies to how that data is collected.
Companies are also considering whether they have a duty to use their data and technologies for social good. For example, a recent PwC report investigates how AI technology can be applied to global challenges such as climate change, water security and clean air. This technology may well come from private enterprises that developed it for their own use.
At its core, one of the main ethical concerns with HR Analytics and use of algorithms in general is transparency. Are people aware of and giving permission for their data to be collected? Do they know how it is being used? Are decisions based on a fair process?
These are questions to ask yourself as you go forward with your HR Analytics journey. Good processes and ethical treatment of people and data are imperative.