
Shaping our Minds: the Dangers of the Algorithm 

July 10, 2023

Big tech giants like Google and Meta have built their fortunes on the collection and monetization of user data, using sophisticated algorithms to analyse our behaviour and preferences. While this has led to the creation of highly targeted advertising, it has also raised concerns about the broader societal impacts of such data collection.

Shaping our behaviours

There are numerous examples of how data-driven big tech companies are shaping our behaviour and preferences. Social media algorithms are designed to show us the content we are most likely to engage with, personalised to our individual behaviour.

Now, an algorithm is not inherently harmful if all it is doing is feeding you more funny dog videos. However, these platforms can also push certain lifestyle choices and political agendas, creating echo chambers and reinforcing our existing beliefs.

Perhaps the most alarming example is the correlation between algorithm-driven technology and increased political polarisation. As users spend more of their time viewing and liking content within particular communities of like-minded individuals, the algorithm tends to suppress counter-attitudinal information. This means individuals who predominantly consume their news online (89% of 16-24-year-olds) will not see or hear a wide range of topics or ideas, which limits their capacity to reach common ground on various political issues.

But it is not just politics that has been impacted by the data collected by big tech companies. The food industry, for example, has also been heavily influenced by data-driven marketing. As we have explained in our previous articles, companies like Google and Meta have the capacity to track what we eat, where we buy it, and when we eat it, allowing food manufacturers to create highly targeted ads and promotions for particular groups of people. This means that unhealthy foods can be marketed to vulnerable populations, such as children and low-income communities. 

However, there are some difficult questions to be answered regarding whether these platforms should or can be held responsible for what the algorithm is promoting to its users. 

Regulating the algorithm – who pays?

For instance, can Meta be held directly responsible for its algorithm promoting dangerous or extreme content? Facebook is not quite comparable to a newspaper, where editors must approve which stories are printed or uploaded each day. Yet to say that it should bear no responsibility for what its algorithm promotes to its users also seems unsatisfactory.

Some have argued that the compromise is for regulators to increase the burden on these big tech firms to employ more human moderators to review the content being pushed by the machines.

According to a report by NYU Stern, around 3 million posts per day are flagged for review on Facebook (either by user reports or by AI). Facebook currently employs around 15,000 content moderators, meaning that each moderator must (correctly) review 200 posts per day. However, some of these posts are 15-minute videos, leaving a moderator sometimes only seconds to review the rest. Human moderators working at this pace are bound to make mistakes, with Mark Zuckerberg admitting that moderators "make the wrong call in more than one out of every 10 cases".
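The workload implied by these figures can be checked with a quick back-of-envelope calculation. The sketch below assumes an eight-hour working day, which is our assumption rather than a figure from the NYU Stern report:

```python
# Back-of-envelope check of the moderation figures cited above.
flagged_per_day = 3_000_000   # posts flagged for review each day (NYU Stern)
moderators = 15_000           # approximate Facebook moderator headcount
shift_minutes = 8 * 60        # assumed eight-hour shift (our assumption)

posts_per_moderator = flagged_per_day / moderators   # 200 posts per day
minutes_per_post = shift_minutes / posts_per_moderator

print(f"{posts_per_moderator:.0f} posts/day, {minutes_per_post:.1f} min/post")
```

On these assumptions, each moderator has roughly 2.4 minutes per post on average. A single 15-minute video therefore consumes the time budget of six other posts, which is how reviews end up taking only seconds.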

So, if human moderation is to be effective, lawmakers will have to compel the big tech firms to employ more people, and to provide them with adequate training and management to make the right decisions on whether posts stay up or come down.

Whatever the solution, it is clear that regulators need to step in to rein in how the big tech firms promote and moderate the content on their platforms.

Ultimately, the impact of big tech on our society is a multifaceted issue. As our reliance on technology continues to grow, with the emergence of AI platforms set to disrupt the data-driven big tech firms, it is important we consider the broader societal impacts of data-driven companies and work to create a more ethical and equitable digital landscape. 

So, how are regulators reacting? Please read our next article: 'The Push for Privacy: How Regulators are Tackling the Big Tech Problem'.
