Hiding behind data

Tags: Artificial Intelligence, Data Science, Equality, Racism, Ethics, Equity

As it’s Black History Month, it’s important to consider the growth of artificial intelligence (AI) and what this means for diversity online. For many organisations, this will be a month of celebrating the ongoing growth in their diverse hiring, and for some it will be a time to showcase their contributions to Black businesses and Black social enterprises. Much of this showcasing will be hyper-focused on the quantitative data with little regard for the qualitative human aspect: what I consider ‘hiding behind data’.

‘Hiding behind data’ is essentially a form of organisational procrastination: a means that creates no impactful end. This can be seen when organisations rave about having an executive board with 51% POC or BAME teams (always 51, never 52), only for the reader to realise, once past the headline, that the reported 51% is highly diminished because all “non-white” staff have been clustered together in the hope of ticking the diversity and inclusion box. Businesses and organisations have spent thousands burying their heads in the sand when it comes to diversity, declaring performative war against one another via data and numbers.

In a world that is ‘allegedly’ trying to dismantle racist structures and systemic racism, we have, for the first time, a blank canvas through our super-efficient friend (or foe), AI. We have a blank slate to create an online experience that is inclusive whilst being equitable, and that understands there are differences between us to be recognised and celebrated.

The reason why hiding behind data is something to consider and negate as quickly as possible is that machine learning (ML) and AI systems are designed by humans. If those humans are not diverse to start with, then neither will our AI companions be. Whether it’s ChatGPT or Siri, we need to ensure that we are being proactive about diversity in the AI and ML space.

Visibility is required at all levels to ensure we are creating an online space that is reflective of our offline world. We all have a job to do in ensuring that we are pumping just as much into engineering and software development as we are into identity, diversity and belonging departments. After all, we all know racism exists online just as it does offline - so what is it going to be? Are we going to build a series of AI assistants that develop computer-mediated microaggressions? Because I, for one, can just about deal with the ones in real life.

There is a need for robust moral agency through the development of ethical guidelines, frameworks and policies. In the future, we should also look to encourage the roll-out of AI ethics boards to monitor governance and product equity. In an ideal world, being intentional about diversifying ML staff should produce a diverse set of programming that encompasses the lived experience of different people. However, one person’s intersections cannot speak for all - for example, my experience as a Black woman cannot represent the experiences of another. Therefore, the responsibility still rests with organisations to consult diversity specialists to ensure that what needs to be done is being done. Let’s pause all the data crunching and just get back to basics - ensuring that the product models the beauty of human existence.
