The West Already Has A Social Credit System, It Just Isn’t Centralized

Much of the Western world is currently aghast at China’s rollout of its very own social credit system, which from May 1 will rank Chinese citizens according to the ‘good’ or ‘bad’ actions they perform. According to preliminary reports, those who spread misinformation online or board trains without valid tickets, among other misdeeds, will receive low scores, which could result in them being banned from certain services (e.g. public transport). There’s little doubt that the official launch of this system represents a watershed moment in authoritarian social engineering, yet few people are noting that it already exists in much of the West, albeit in a less developed and more decentralized form.

For sure, no Western government has a centralized system for assigning an explicit ‘trustworthiness’ score to its citizens, and no Western government is aiming to punish people for purportedly “spending too long playing video games.” Still, pretty much every Western nation produces at least as much social data on its citizens as China (if not more), and even before the emergence of the Sino-credit system much of this data was already being used by governments and businesses to determine how to treat people, who in turn are pressured into censoring their own behavior as a result.

But there’s more. Given the developed world’s continued progress towards an info-driven economy, in which automation imposes the need to create new jobs, markets and products, data on our behavior is not only likely to increase, but is increasingly likely to be used in ways that pressure us into modifying that very same behavior. Even if we never witness techno-autocratic governments, a system as monolithic as the Chinese one is likely to emerge via an ‘organic,’ economics- and tech-driven process. And in the process, we will become a little (or a lot) less free.

As an indication of how ‘social credit’ systems already have a long-standing history outside of China, it’s worth beginning with the questionable practice some employers have of checking a job applicant’s social media accounts. In April 2017, a UK survey conducted by YouGov found that one in five employers had rejected a candidate because of their online accounts. 75% of the employers surveyed reported that the use of offensive or aggressive language on social media would deter them from hiring someone, while 71% and 47% would be discouraged by references to drugs and by photos of drunken revelry, respectively. Put simply, people in the UK are effectively being punished for ‘bad’ social behavior by being deprived of job opportunities, a practice that has been in evidence everywhere from the United States to Europe since at least 2008.

And it’s not only job opportunities that are often determined by the implicit social credit scores we have in the West. To take another example, one report on China’s credit system ran with the headline, “Would you choose a partner based on their ‘citizen score’?” A provocative title perhaps, weakened only by the fact that there’s a long-standing tradition in Western countries of people Googling and ‘Facebook-stalking’ each other before choosing to head off on dates. There’s no hard data on just how prevalent the practice is, but given the number of articles dedicated to it, and given the billions of people on social media, it appears surprisingly common. Admittedly, individuals separately choosing to check out their prospective dates online isn’t quite the same thing as a state assigning a score that grants someone access to a dating app, but the implications are comparable. That is, aware that we’re on display, we begin modifying our behavior and what we post on social media, worried that we might otherwise miss out on meeting a new partner.

Other examples of implicit social credit are relatively easy to find. One prominent case involves insurance. In November 2016, Facebook famously prevented the insurance company Admiral from mining users’ social media data in order to set the price of premiums. The thing is, even though Admiral was blocked from mining Facebook, it nonetheless began using personality profiling to set the price of the premiums it offered younger drivers as part of its firstcarquote product. Not only does Admiral require applicants to complete a questionnaire when signing up, but that same year it began working with the UK-based Big Data Scoring, which prides itself on “tapping into the broadest source of information from across the Internet, using all publicly available information” in order to assess creditworthiness.

Such information includes “location based information, web search results, behavioural tracking, device technical details, mobile app data and much more.” While the company acknowledged in a 2017 blog post that Facebook’s guidelines no longer allow the use of its data to “make decisions about eligibility,” it ended the post by declaring, “There is so much more data out there these days about every individual on the planet that is both legally and technically accessible […] Moreover, the scorecards we build on other data sources nowadays are even stronger than the ones based on Facebook data.”

That Big Data Scoring counts “some of the largest banks in the world, payday lenders, P2P lending platforms, microfinance providers, leasing companies, insurance providers, e-commerce platforms and telecoms [companies]” among its customers should therefore provide some indication of the sheer scale and power of the social/behavioral credit scoring that goes on in the West. It should also provide some indication of the kind of indirect behavior control that could result, at least if enough people knew that their ability to receive credit, insurance and other products often depends on their “web search” and “behavioural tracking” results.
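To make the mechanics of this kind of scoring concrete, here is a minimal, purely hypothetical sketch of how a lender might fold behavioral signals like the ones listed above into a single ‘trustworthiness’ number. The feature names, weights and approval threshold are all invented for illustration; nothing here reflects Big Data Scoring’s (or any provider’s) actual model.

```python
# Hypothetical sketch of a behavioural credit score (illustrative only).
# Feature names, weights and the approval threshold are invented assumptions,
# not a description of any real provider's model.

BEHAVIOURAL_WEIGHTS = {
    "stable_location_history": 0.30,   # e.g. consistent home/work locations
    "reputable_search_results": 0.25,  # e.g. no red flags in public web results
    "regular_device_usage": 0.20,      # e.g. consistent device technical details
    "financial_app_installed": 0.15,   # e.g. budgeting or banking apps present
    "late_night_gambling_apps": -0.40, # a penalised behavioural signal
}

def behavioural_score(signals: dict[str, bool]) -> float:
    """Combine boolean behavioural signals into a score between -0.40 and 0.90."""
    return sum(weight for name, weight in BEHAVIOURAL_WEIGHTS.items() if signals.get(name))

def approve_credit(signals: dict[str, bool], threshold: float = 0.5) -> bool:
    """A lender-style yes/no decision driven purely by behavioural data."""
    return behavioural_score(signals) >= threshold

if __name__ == "__main__":
    applicant = {
        "stable_location_history": True,
        "reputable_search_results": True,
        "regular_device_usage": False,
        "financial_app_installed": True,
        "late_night_gambling_apps": False,
    }
    print(behavioural_score(applicant))  # 0.70
    print(approve_credit(applicant))     # True
```

The point of the toy example is not the particular numbers but the structure: once behavioral data exists, turning it into a gatekeeping score is trivial, and the person being scored typically has no visibility into the weights being applied to them.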

That personal data is being used to judge people in this way might at first glance seem like a historical accident, or the result of certain ‘unscrupulous’ actors using our social information to their advantage (and sometimes to our disadvantage). However, it’s an almost inevitable byproduct of the explosion in personal data the world has witnessed since the start of the century. Classical and neoliberal economics, for example, assume that market actors (e.g. businesses, individuals) operate with perfect information, or at least with information as close to perfect as they can obtain. It therefore stands to reason that, if our social data can help them draw nearer to perfect information about their markets, they will inexorably use it (unless legally prohibited from doing so).

This is evident in the phenomenon known as price discrimination (or customer profiling), whereby online retailers draw upon a customer’s digital info (e.g. browsing history, device) in order to set prices at a level they think she or he will accept. And it’s evident in the examples cited above, where actors harness the available information on an individual in order to make the best possible decision for themselves, but possibly one that has negative consequences for that individual, forcing them to change or censor their own behavior as a result. In other words, social data is social credit, since its very function is to alter how people, companies and institutions treat individuals, who in turn are pressured into altering themselves.
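As a purely illustrative sketch of how the price-discrimination side of this can work, the snippet below quotes a higher price to visitors whose device or browsing history suggests a bigger budget or stronger purchase intent. The signals and mark-up factors are assumptions made up for this example, not a description of any real retailer’s pricing rules.

```python
# Hypothetical price-discrimination sketch (illustrative only).
# The device signal and mark-up factors are invented assumptions,
# not any real retailer's documented pricing policy.

def quote_price(base_price: float, user_agent: str, prior_visits: int) -> float:
    """Adjust a quoted price using crude customer-profiling signals."""
    price = base_price

    # Visitors on devices often associated with higher spending get a mark-up.
    if "Macintosh" in user_agent or "iPhone" in user_agent:
        price *= 1.10

    # Repeated visits to the same product page are read as strong purchase intent.
    if prior_visits >= 3:
        price *= 1.05

    return round(price, 2)

if __name__ == "__main__":
    ua = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)"
    print(quote_price(100.00, ua, prior_visits=4))  # 115.5
```

Two visitors looking at the same product can therefore see different prices, based entirely on data they may not even know is being collected about them.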

Once again, social credit isn’t anything especially new; the main difference in China’s case is that its system of credit is directly overseen by the state. However, even if totalitarian state control makes social credit scoring more draconian and frightening in China’s case, two points are worth highlighting. On the one hand, there are signs that Western governments are beginning to turn to social (media) data in order to police people and behavior, with the US and UK governments vetting the social media accounts of foreign travellers and immigrants from at least as early as 2015/16. Given that such data is so abundant online, it was almost inevitable they’d begin doing this, especially with the omnipresent threat of terrorism (and the recent political lurch to the right). The effect, however, will be to make people increasingly aware that they are, or could be, monitored, and that their access to the US or the UK, among other nations, could be jeopardised if they don’t shape up and fly right.

And on the other hand, even though the West’s decentralised system of social credit isn’t overseen by a government, it’s vital to reiterate that its effects are similar. Namely, people become aware that their access to certain things (jobs, partners, credit, insurance, emigration) is predicated on them making a good impression online (and not just on their own social media profiles, but also on those of friends), so they begin censoring themselves and regulating their behavior. As early as 2013, a Facebook study discovered that 71% of its users were engaging in “last-minute self-censorship” (typing at least five words and then deleting them). Given that the study’s authors concluded that the user’s “perceived audience” lies at the heart of the issue, it’s likely that the more aware people become of the companies, employers, potential lovers and governments checking their accounts, the more this percentage will increase over time.

It almost goes without saying, but such self-censorship is hardly a sign of people who are particularly free to act as they please. Worse still, it’s likely that the evolution of the globe’s economy towards greater digitization is only going to exacerbate this situation, as people come to produce more personal data and as new organisations seek to carve out niches for themselves by exploiting it in new ways (at least 90% of the internet’s data has been created since 2015). We may observe the situation unfolding in China with a certain self-satisfied horror, but it’s clear we already have our own social-credit dystopia to worry about.
