Building a social score system: The key questions are WHO and HOW

Prof. Chengyi Lin

Affiliate Professor of Strategy at INSEAD and a leading expert on digital transformation

“Our team had a brilliant idea!”

It was an afternoon session of my executive education program at INSEAD. After one and a half days of content, examples and exercises on digital transformation and innovation, the participants were divided into groups to design ‘digital products’ that could disrupt their own businesses. Presenting in front of the entire class was the last group, composed of executives from the consumer goods sector.

“We will develop a new glass with facial recognition features. The algorithm will match the face you see to the person’s social media profile and immediately show his or her name, job title, relationship status, personal interests and more. … If you have a crush on this person, the AI can tell you how well the two of you match, just like a dating app, but better. We call this LoveGlass.”

The Q&A session was vibrant, with lots of excitement about connecting people; it was also filled with serious concerns around privacy and safety risks. Ten minutes soon became twenty-five. With a loud round of applause and a standing ovation, the group won the competition.

Wouldn’t it be nice to know if an individual is trustworthy before we interact with them? Even better, what if we could know their preferences and past experiences so that our time with them can become more tailored to what they like?

Human beings are social animals by nature. We have always looked for social cues to determine whether, and how, to interact with each other. Historically, these social cues have included appearances (such as clothes and accessories), words and voices, facial expressions and body language, to name but a few. Despite many efforts, we have not been able to develop an effective numeric system to measure how ‘good’ or ‘bad’ a person is. We mainly rely on our individual judgement to assess another person’s trustworthiness in social interactions. In certain countries and sectors (for example, financial and criminal justice systems), some limited records and metrics are available. But they can only provide “scores” or records on a handful of life’s many aspects.

That was before the digital revolution.

Nowadays, digital interactions and social media have made much more data and information available about the various aspects of an individual’s life. Digital platforms host data ranging from individual preferences (e.g. Facebook and Pinterest accounts) to consumer behaviors (e.g. Amazon, Netflix, and Google activity), and from individual expressions (e.g. Twitter, TikTok and Instagram) to aggregated peer reviews and recommendations that share our views with a wider audience (e.g. LinkedIn, Uber and Reddit).

Each digital platform may own a specific set of data, and can go on to develop a deep understanding of our behaviors in those aspects of our lives. By putting all the available data together, however, a more integrated picture starts to emerge – a new concept called the ‘digital twin’. A digital twin is a digital representation of a physical individual in terms of their data. The integrated historical data may describe how we behave as a person and may even be able to measure our trustworthiness or ‘predict’ our future behaviors.
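To make the ‘digital twin’ idea concrete, the sketch below merges per-platform records about the same person into one integrated profile. The platform categories, field names and `build_digital_twin` helper are all hypothetical, purely for illustration:

```python
# Hypothetical per-platform records about one person (illustrative data only).
shopping = {"user": "alice", "interests": ["running shoes", "protein bars"]}
streaming = {"user": "alice", "interests": ["fitness documentaries"]}
social = {"user": "alice", "interests": ["marathons", "travel"]}

def build_digital_twin(*records):
    """Merge platform-specific records into one integrated 'digital twin' profile."""
    twin = {"user": None, "interests": []}
    for record in records:
        twin["user"] = record["user"]            # same individual across platforms
        twin["interests"].extend(record["interests"])
    return twin

twin = build_digital_twin(shopping, streaming, social)
# The merged profile now aggregates signals no single platform holds on its own.
```

The point of the sketch is the asymmetry it illustrates: each input record is narrow, but the merged profile begins to describe the person as a whole.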

The birth of the “digital twin” also raises a set of interesting questions: Would the increasing availability of data mark the beginning of a new “social scoring” system? Should we pursue it? Could a “social score” be sufficiently accurate in measuring and predicting individual behaviors? What measurements should the system take, and from which aspects of our lives? More importantly, who can be trusted with our data? Who should build, govern and operate this new system? Will we as individuals trust the measurements and outputs?

How did we get here?

The idea of assigning a ‘social score’ to an individual may sound far-fetched and scary. However, the concept of using a ‘score’ as an indicator of individual behavior is not new. In the financial system, for example, ‘credit score’ systems were created to evaluate individuals’ financial behaviors and predict their risk profiles. Such credit score systems have been in place in the UK and US for years, and similar mechanisms or measures can be found in the financial systems of other countries around the world.

In both the US and UK, the credit scores are provided by three independent agencies. Each individual will have multiple credit scores rated by these agencies. Financial institutions such as commercial banks can use these scores as inputs for granting a house mortgage or personal loans.

Similarly, in the criminal justice system, criminal records are kept as ‘negative scores’ from the past. These may be used as part of a judge’s considerations for sentencing. Outside of the judicial system, criminal records can also be requested by companies during the background check process before making hiring decisions.

These systems had been relatively stable in the past decade, until the rapid digital revolution in China recently accelerated their evolution. For example, the Chinese government has collaborated with Alibaba (which owns AliPay, among other services) to develop the ‘Sesame Credit’ system.

Historically, China did not have a nationwide credit score system in the financial sector. Local businesses often relied on bank statements, the status of existing businesses and ‘guanxi’ (personal relationships) to demonstrate trustworthiness. If I have a personal relationship with you, or know you through others in my personal network, I can extend my trust to you in business transactions.

These traditional practices became much less helpful in the era of digital businesses. First, the explosion of consumer-to-consumer (C2C) shops and services cannot rely on physical interactions to establish trust. The birth of C2C shops, restaurants, drivers and other offerings on digital marketplaces shifted a lot of physical business dealings to the digital marketplace, calling for new mechanisms to evaluate trust there. Second, the prevalence of digital payment requires risk assessment prior to digital transactions. Popularized by AliPay and WeChat Pay, online payment systems quickly gained consumer adoption because of their convenience, immediacy, transparency, and network efficiency. A new way to assess risk and establish trust needed to be developed for this digital world. Responding to these two major needs, many digital marketplaces developed their own ‘scoring systems’ for digital trust. Most of these systems tried to establish trust based on individual digital behavior records and peer-to-peer (P2P) ratings and reviews.
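One common way such marketplace ‘scoring systems’ turn P2P ratings into a trust score is a Bayesian average, which pulls sellers with few ratings toward a marketplace-wide prior, so a single five-star review cannot manufacture trust. The function and numbers below are a hypothetical sketch, not any platform’s actual formula:

```python
def trust_score(ratings, prior_mean=3.5, prior_weight=5):
    """Bayesian-average trust score on a 1-5 scale.

    A seller with no ratings starts at the marketplace-wide prior_mean;
    each real rating gradually outweighs the prior.
    """
    return (prior_mean * prior_weight + sum(ratings)) / (prior_weight + len(ratings))

new_seller = trust_score([])       # no history: falls back to the prior, 3.5
one_review = trust_score([5])      # a single 5-star review only nudges the score up
veteran = trust_score([5] * 50)    # a long track record dominates the prior
```

The design choice matters for trust: a plain average would let one fake review produce a perfect score, while the prior forces sellers to earn trust through a sustained record.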

The Chinese government saw new opportunities in the evolution of digital trust and asked whether the system could be expanded more broadly to other aspects of society. ‘Sesame Credit’ was born initially as an experiment. Some concepts of ‘Sesame Credit’ overlap with the credit score system, such as securing home mortgages and personal loans. Others may be similar to the criminal justice system, such as governing access to public services like traveling on airplanes or trains. However, because of the increasing prevalence of digital aspects in people’s everyday lives through connected devices, this credit score system could expand well beyond business transactions, financial services and criminal justice. It could use digital data to evaluate both digital and physical behaviors and assign a single score to an individual’s ‘digital twin’. For example, security cameras in a smart city can record misbehaviors and link potentially criminal activity to individuals through facial recognition.

Through what has been a brief start to this journey, we can see that the evolution of the ‘credit score’ system is mostly driven by emerging needs, such as the need for safety and trust in interactions between individuals. This evolution has been accelerated by digital technologies, such as the Internet of Things (IoT) and Artificial Intelligence (AI), which connect and integrate the physical and digital worlds through data. A new system that can better understand our social behaviors is in the making.

Now, the question is: should we pursue it?

Trade-off vs. ‘Hands off’

One way to answer this question is to weigh up the trade-offs such a new system presents. But that is easier said than done. Many digital benefits, such as convenience, are immediate and attractive, while the risks, such as those to privacy and security, remain hidden. Up until now, many consumers have continuously opted for more convenience when it comes to digital adoption. What is unclear is whether they are always making such trade-offs consciously.

Data shows increasing adoption of digital services and increasing digital activity. According to the Digital 2020 report, the number of unique mobile users worldwide has passed five billion (a penetration rate of 67%), and active social media users have reached 3.8 billion (a penetration rate of 49%). Consumers have not only overcome their initial inertia and skepticism about digital services, but have rapidly adopted digital as a new lifestyle. The rise of e-commerce provides a good illustration. For example, Consumer Intelligence Research Partners (CIRP) estimated that Amazon Prime subscriptions in the US passed 100 million in December 2018, which means nearly 80% of US households now have an Amazon Prime membership and shop online. Amazon has also seen increasing C2C activity: in its annual report, independent third-party sellers represented 58% of the total sales completed on the platform.

If we follow Amazon further, we can see its activities expanding beyond its e-commerce core. Amazon has developed its virtual assistant Alexa and introduced a series of home devices and wearables that host it. These devices allow consumers to interact with Alexa at home or on the go, and collect more consumer data on search and other behaviors. Similarly, Google has started embedding Google Assistant in many smart devices, further expanding its reach in the online search sector. Although privacy remains a significant concern, consumer purchases have actually responded positively to this development: many US households have welcomed Alexa or Google into their ‘smart’ homes.

In Kaspersky’s own survey, we have observed similarly high penetration of social media in most countries. The survey results show that over 80% of participants have Facebook accounts in all countries except Japan. In China, over 98% of respondents have WeChat. This powerful social media platform is used not only for instant messaging and group chats but also for content consumption, online purchases, digital payments and much more. Increasingly, the public has adopted digital services in many parts of their lives and manages a life in both the digital and physical worlds.

With all these developments, a large volume of data has started to flow to companies from various aspects of consumers’ lives. The total amount of data in the world has increased 25-fold since 2010 and was estimated to reach over 50 zettabytes in 2020. It includes structured data from digital platforms and traditional businesses as well as unstructured data from daily life, such as photos and videos. These data are then made available to businesses, governments, not-for-profit organizations and individuals to generate consumer insights and trigger new actions such as social media marketing, personalized advertisements and promotions.

But what does all this mean for individual consumers? They are lured or coerced into making trade-offs between the benefits and costs of these digital interactions. One such trade-off is convenience versus privacy: handing over data in exchange for products and services. For example, to use ride-hailing or food delivery services, consumers have to allow companies such as Uber, Lyft and Didi to track their locations and keep a record of their trips. This data may hold important information about consumers’ private lives. Another such trade-off is personalization versus data privacy, in which an individual allows a company to analyze his or her data and provide a ‘personalized’ service based on the insights. For example, Amazon’s recommender, a machine learning algorithm, may suggest products based on your own purchasing data and the data of other individuals with purchasing behaviors similar to yours. In order to receive personalized products and services, consumers need to allow providers to store and analyze data at an individual level. Unfortunately, consumers are not always well informed or given real choices to make such trade-off decisions.
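The recommendation mechanism described above, suggesting products based on other shoppers whose purchase data resembles yours, is the core of user-based collaborative filtering. The sketch below is a minimal illustration with made-up ratings, not Amazon’s actual algorithm:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two sparse {item: rating} purchase profiles."""
    shared = set(a) & set(b)
    dot = sum(a[i] * b[i] for i in shared)
    norm_a = sqrt(sum(r * r for r in a.values()))
    norm_b = sqrt(sum(r * r for r in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(target, others, top_n=2):
    """Score items the target hasn't bought, weighted by shopper similarity."""
    scores = {}
    for other in others:
        sim = cosine_similarity(target, other)
        for item, rating in other.items():
            if item not in target:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

me = {"trail shoes": 5, "water bottle": 4}
shoppers = [
    {"trail shoes": 5, "water bottle": 5, "headlamp": 4},  # very similar to me
    {"garden hose": 3, "lawn seed": 4},                    # nothing in common
]
picks = recommend(me, shoppers)
```

Notice the trade-off the article describes: the quality of the recommendation depends entirely on the provider being able to store and compare individual-level purchase histories.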

Serious questions around data security and privacy have already been raised whenever the topics of data collection, storage and utilization are discussed. For example, the idea that ‘Google is always listening’ can be pretty concerning. Simply ‘turning it off’ or asking for consumer consent may not actually solve the problem. In order to make an informed decision, consumers need to be briefed and continuously updated on: What data is being collected? How is the data shared and used? What are the benefits and impacts? What are the risks? What are the practices around security and privacy?

If making decisions based on trade-offs is already very challenging in one sector (e.g. retail or advertising), a social score system that hosts data from across sectors can only be more complicated. At the societal level, a social score system could be very beneficial in creating a safe and trustworthy environment, both physically and digitally. For example, a simple majority of respondents in every country surveyed except the Netherlands agree that bad behaviors should limit access to specific public resources such as real estate, transportation, education, and travel.

At the individual level, we could also benefit from the positive outcomes. For example, interactions between individuals may also become smoother, as my participants illustrated through their ‘LoveGlass’ idea. Crime rates may decrease as behaviors become more transparent.

At the same time, how would we behave under constant surveillance (the Hawthorne effect)? How would we be impacted psychologically? What privacy risks would this expose us to? And altogether, do the benefits outweigh the potential erosion of our individual privacy?

Trust in the digital and physical worlds

These questions make it challenging to decide on the many trade-offs of a social scoring system. At the same time, we need to continue making these trade-offs in order to improve our current system for the long term. How should we do that? It may be helpful to go back to fundamentals and ask: why do we need a social score system? What will make the system effective? The answer to both questions is trust.

If trust is both the foundation and the outcome of a social score system, then we need to develop a better understanding of trust in both the physical and digital worlds.

Firstly, trust can vary across countries and cultures. Decades of research have identified three core elements of trust: competency, benevolence, and integrity. In practice, how much people trust each other varies significantly across countries and cultures. The Edelman Trust Barometer 2020 showed an overall increase of one point in the global trust index, but the top 26 markets vary significantly on that same index. We have observed similar results in the Kaspersky survey. For example, except in Japan, individuals in general would trust governments and businesses to store their data; however, countries vary in how much they trust them. This suggests that it may be possible to construct a country-specific social score system, while a globally accepted standard may be very challenging.

Secondly, trust is similar in both the physical and digital worlds, and it is reasonable to assume that the two will be closely correlated. However, differences in the availability of data, the interpretation of information, and the process of assessing trust may lead to divergence between the two worlds. We can observe both effects in the Kaspersky survey results. (See graphs below overlaying the Edelman Trust Index and the Kaspersky survey results.) In many countries, public trust in governments storing their data correlates well with the general trust index related to governments, while in others the two diverge. For example, in China, India and the Netherlands, individuals’ trust in the government storing their data and the overall trust index related to the government follow a similar pattern. In other countries, such as the UK and the US, individuals’ general trust in government is low, yet they trust their government with their data. In business, we observe similar patterns between trusting businesses in general and trusting businesses with data. The encouraging news is that, overall, the public trusts governments and businesses to store their data (over 50% combined ‘totally trust’ and ‘trust’), even in countries where businesses or governments are otherwise distrusted.

Thirdly, trust in data varies based on the area of activity. The survey results show how much trust depends on what type of data is involved and who uses it.

The key questions are WHO and HOW

Once we understand that trust is the foundation of a new social score system, we can start to re-think our focus. Although it is important to ask what and when, we should focus more of our attention on the critical questions of WHO and HOW.

The what and when questions are practical: What system should be built? What data should be incorporated and what mechanisms should the system be based upon? When can we build such a system? When will the system be in place? The answers to these questions are tangible tactics and could help us get closer and closer to an optimal system. The process of asking these questions can also help us experiment and continuously improve.

On the other hand, the WHO and HOW questions can be more philosophical – based on ideologies and beliefs and varied across cultures. Critically, they can be the hidden barriers to putting the ‘optimal’ design into practice.

First, the WHO. In most countries surveyed, responses vary on who people trust to collect and store personal data. They also vary based on the use of that data. At a high level, individuals tend to take a ‘specialization’ view of data. For example, the Kaspersky survey data shows that individuals are in general more comfortable sharing their credit score and financial status data with banks or insurance companies than with government agencies. They are also more comfortable sharing their personal photographs, interests and relationship status with their friends and family than with businesses or governments. When looking for a new job, individuals become more comfortable sharing their personal photographs and interests with prospective employers, but not with their bankers or insurers. This specialization view will make it very challenging for individuals to trust a single agency, be it the government, a business or an independent not-for-profit, to construct and implement a social scoring system that spans all of these areas.

It is worth noting that in China, where digital penetration is higher and the ‘Sesame Credit’ system has been implemented in some major cities, individuals in general trust the government more to store their data. More importantly, individuals are more comfortable sharing their data with government agencies and businesses in many categories. A similar effect can be observed in India, where a national biometrics system, the ‘Aadhaar’ program, has been implemented. One reason for these higher comfort levels could be that individuals perceive the benefits of such a program to outweigh its observable risks.

Second, the HOW. This question has more to do with governance than operations. For a social score system to work, it requires checks and balances. How will the system be designed? How will it be implemented? How will it be monitored and improved? How can it be kept neutral and objective? How will cases of abuse be prosecuted? These mechanisms will be key to obtaining public trust.

Because of the large amount of input data, a social score system will rely on deep learning, or even more advanced general AI technologies, to process data and generate insights. Governing such new technologies will not be an easy task. For example, the FDA recognizes that it is impossible to ‘certify’ a machine learning algorithm, as its results continuously improve with new data and better technologies; its new regulation on health algorithms therefore focuses on the companies that own and operate the algorithms. Similarly, we need to focus on how to govern the social score agency. In this case, output measures such as the number of individual scores provided, penetration of the total population or profit margin will not be sufficient. We need to continuously evaluate the agency and make sure it has integrity, benevolence and competency, the three core components of trust.


Maybe it is time to come back to my participants’ exciting idea from the beginning of this article. As social beings, we value our interactions with each other and enjoy living in a society where trust is shared among its members. The idea of a ‘LoveGlass’ could help people find their soulmates; a ‘trust thermometer’ could improve interactions between one another; and a social score system could improve trust in society. A data-enabled system that can evaluate the trustworthiness of its members could help individuals monitor their own behavior and give insights to others and to society.

However, such a social score system carries many downside risks. The data gathering and sharing processes could invade individual members’ personal privacy. A centralized data system could be vulnerable to security breaches. The system could be exposed to abuse of power by individuals or interest groups. And the list goes on.

Therefore, whether to design and implement such a system requires careful consideration. The decision will depend on the trade-offs society and its members are willing to make, whom they are willing to entrust, and how the system will be governed and operated.

Given the variation in each country’s context, a global system may not be feasible. It will fall to each country, at least in the short term, to make its own decision. For individuals, it is important to make informed decisions about your own data: who can store and use it, who can share it, whom it is shared with, and what the impact is on you. It is also important to be prepared, and to consider whether and how to participate in a social score system should one be implemented in your country.