Using data responsibly is a hot topic. Tech companies must ensure they don’t cause harm, for example by breaching consumer privacy or reinforcing bias through automated decisions. It’s both a moral and a legal question, but despite increased regulation it may be unclear how businesses and governments should approach their data responsibilities. We still see egregious data practices.
Alice Thwaite is a technology ethicist and the responsible owner for ethics at OmniGOV, MG OMD, which works with the UK government on communications and advertising. We chat about what data ethics means in practice.
Gemma: What does it mean to be a data ethicist today?
Alice: Ethics is the study of how to live. An ethicist takes a theory of how humans want to be treated and how we should interact with the wider world, and thinks about how an action or technology could contribute to that goal. We use ethical methodologies and knowledge about the consequences of a technology or business model to transform the business into one that prioritizes ethics.
Businesses have always operated in the realm of ethics, but usually without a systematic approach.
An organization may choose not to advertise on social media because of increased hate speech. That’s an ethical decision. But without a strategy for what ethical ‘good’ means, such decisions are emotive rather than accountable and measurable.
Ethicists co-create frameworks with stakeholders like customers, colleagues and wider society so that an organization’s values become critical in business decisions.
What does that look like in practice?
First, we help organizations understand where they’re making ethical decisions and show how they’ll need a more systematic process to reach their ethical goals. Second, we educate organizations on ethical methodology and issues. Third, we assess practice to understand what the company wants to prioritize and how current practice meets ethical goals, making suggestions like ‘ethical pilots.’ Finally, we implement and measure ethical pilots so they can become policy.
We don’t work in a vacuum – we must be in constant contact with various communities. For example, before establishing diversity, equity and inclusion (DEI) initiatives, a company might think racial equality is important but have no roadmap to get there. After an influence campaign, education, auditing and an implementation program, they may still have a way to go. But they’re now measuring the ethnicity pay gap and thinking about inclusive work environments for different people. They can keep following the ‘influence, educate, assess, implement’ process to fruition.
Where do organizations start? What tips do you have for those keen to start doing ethics?
Hire someone to build an ethical program and link them with a sponsor who can ensure the organization implements their recommendations.
There are few ethicists with experience right now. I’d start with upskilling someone you’re already working with who has an interest.
They may have a degree in philosophy, international relations, social science or community work experience. Ensure they can learn as much as possible about business ethics and support their attempts to transform the culture.
Those in data security or compliance should recognize that an ethicist follows different processes. The goals may be similar, but the methods align more with those of social science and the humanities.
What questions should organizations wanting a data ethicist ask?
Are we committed to change? Do we want a world prioritizing human values like democracy, dignity and freedom? Are we prepared for the discomfort that comes with changing procedures?
If you’re prepared to change, write a job description incorporating things I’ve mentioned and encourage your team members to apply.
Opinions reflect those of the expert. Interview edited for clarity.