Only a few years ago we talked about “going online”, acknowledging that the internet was a place separate from our lives. It was where we went to check something, buy something or get in contact with someone. From our phones to our homes, our world now overlaps with the digital environment to a point where it can no longer be considered a separate space – so much of our lives simply unfolds online. The question therefore becomes one of privacy and trust: what exactly goes on behind the pixels on your screen?
This overlap can of course be a good thing, as it comes with some obvious benefits. We no longer have to “go online” to buy something, find information or speak with our loved ones. We only need to let our virtual personal assistants know what we want, and it’s done. And what makes this possible is the intertwining of technology with code and Artificial Intelligence (AI).
For decades, the common rhetoric around machines, including computing devices, has been that they are superior to us humans – boasting better precision, greater objectivity and fewer errors. Thinking back to the industrial age and its mechanical machines, this certainly held true: they were incapable of forming judgements or exhibiting biases. Today’s machines, however, tell a different story altogether, running on algorithms that enable them to make calculated decisions. Programmed by people with natural biases, and trained on past data – itself biased by how it was obtained and recorded – these algorithms are not exempt from such influences.
So, what happens with your data?
We constantly leave a trail of data through our online activities. Our digital footprints don’t disappear; instead, companies treat our data like a corn crop: they harvest it, sort it, process it, and pack it into different offerings that are later sold to other companies to profit from – everything from HR firms to insurers to banks. This data is then used to predict the future.
Or rather, algorithms can approximate it.
These predictions by no means have to be perfect – they just need to be better than human prediction, which they often are. Because as much as we’d like to think that we are unique, there’s always someone else in the world with a similar lifestyle to ours, who has bought a specific pair of red shoes or opted for a particular loan. And the probability that we would do the same as someone similar to us is relatively high.
Now, if a firm holds this data, it becomes easier for them to show you a specific advertisement at the right time.
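To make this concrete, the look-alike logic described above can be sketched as a toy nearest-neighbour recommender. Everything here is hypothetical – the names, the purchase histories, and the similarity measure (Jaccard overlap) are illustrative choices, not a real company’s method:

```python
# Toy sketch: predict what a user might want next by finding the most
# similar other user and borrowing the purchases they don't yet share.
purchases = {
    "you":   {"red shoes", "hiking poles"},
    "alice": {"red shoes", "hiking poles", "rain jacket"},
    "bob":   {"video game", "headphones"},
}

def jaccard(a, b):
    """Overlap between two purchase histories (0 = nothing shared, 1 = identical)."""
    return len(a & b) / len(a | b)

def predict_next(target, purchases):
    """Suggest items bought by the most similar other user but not by the target."""
    others = {u: items for u, items in purchases.items() if u != target}
    nearest = max(others, key=lambda u: jaccard(purchases[target], others[u]))
    return others[nearest] - purchases[target]

print(predict_next("you", purchases))  # {'rain jacket'}
```

Here “you” overlap most with “alice” (two of three items), so her remaining purchase becomes the prediction – imperfect, but often better than a human guess, which is all it needs to be.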
You might argue that you’ve always wanted a pair of red shoes and that, quite frankly, there’s nothing actually wrong with an algorithm understanding your wants and suggesting matches for them. And yes, that’s a completely valid argument. But without straying too far into philosophical discussions on free will, it might be worth taking a step back and noticing that someone else just made that choice for you.
Because maybe you wanted blue shoes but were never given the option. If left to your own devices, would you have bought shoes from a different brand? Might you have spent your time differently, searching for companies that resonated more closely with your moral beliefs? With a machine-learning algorithm taking the reins, suggesting that particular ad with those particular shoes, these alternative realities are all but foregone – offered to the highest bidder instead. It’s money over mind.
Or shall we say, their money over your mind. But it doesn’t have to go this way.
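That “highest bidder” is not a figure of speech: ad slots are typically sold in real-time auctions. Below is a deliberately simplified sketch of one common design, a second-price auction, with made-up advertiser names and bids – real ad exchanges are far more complex:

```python
def run_auction(bids):
    """Toy second-price auction: the highest bidder wins the ad slot
    but pays the runner-up's bid (a common real-time-bidding design)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # With a single bidder there is no runner-up; they pay their own bid.
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Hypothetical advertisers bidding for the moment of your attention.
bids = {"RedShoeCo": 0.42, "BlueShoeCo": 0.31, "LoanCorp": 0.18}
winner, price = run_auction(bids)
print(winner, price)  # RedShoeCo 0.31
```

The point is simply that which shoes you see is decided in milliseconds by who paid most – not by what you might have chosen on your own.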
There is data to suggest that less intrusive ads actually perform better – a little less creepy and, in some circumstances, less psychologically harmful.
Where does that leave us?
Arguably, businesses have a moral obligation to safeguard their users, even before new laws and regulations make it a legal responsibility. Developers, too, carry a moral obligation when building online platforms. As much as the focus when writing code lies on approaching deadlines or decoupling, developers should be equally encouraged to explore, think about, and engage with these ethical questions.
A platform is, as the name suggests, a place where buyer and seller meet to exchange goods or information for money. If built around an ethical core of values, with enough transparency to allow users to make informed decisions about their data, the platform suddenly offers much more than a place to buy and sell. It offers a place where people and businesses can feel safe and valued – and, importantly, respected.
And it’s up to all of us – businesses, clients, and developers – to make that happen.