What is Trust? Does it differ from sharing?
Working as a Blockchain architect, I often find myself in conversations like this:
“We can’t share a Blockchain with them because we don’t trust them. In fact, they don’t trust us either.”
Such arguments often come with anecdotes: how little information could be shared in the past, or how previous attempts to connect the two organizations failed.
It’s interesting to observe that even business people see trust as an on-off switch: I trust him; I don’t trust her. Sharing gets the same treatment: we share information, or we don’t.
The actual work of designing Blockchain solutions requires one to be much more aware of all fifty shades of trust, and of the various protocols (call it “sharing information” if you like) that leverage or utilize that trust. Getting it wrong can lead to false assumptions, to applying the wrong technology, or to over- or under-promising results to stakeholders. Let me start by illustrating the different kinds of trust that exist:
Trust that someone will, in the long term, act in his own interest, i.e. that one will not cut off one’s nose to spite one’s face. This kind of trust can be granted to complete strangers or even adversaries, with only a few exceptions like President Trump. This is the very kind of trust the Bitcoin network leveraged to provide a greater trust.
Trust that someone is honest. This is the kind of trust you have in your friends: that they will not cheat you for petty gains. There are more refined kinds of trust implied by this: consistency, conclusiveness and openness.
Trust that someone will be available. This is different from being honest — I can trust my friend James to never cheat me, but I can’t trust him to always be available; for all I know he could be on a disconnected honeymoon in Greece, or rock climbing in search of a geocache in the mountains. When you find yourself at a wedding ceremony or holding a burning card in a mafia initiation ritual, you have signed up for this kind of trust: a promise that you are available when your wife or mob boss — they are sometimes not very different — needs you.
Trust that someone will not tell others your secret. This is the kind of trust you expect from Gmail or an electronic medical record system. If you are a secret agent, it’s the kind of trust you would expect from your elected president.
Trust that someone will do what they say. This is the kind of trust you need in the people you do business with. Again, President Trump is an exception.
Trust that someone acts with your intention in mind. This is the kind of trust you expect from an attorney.
I’ll show these different kinds of trust at work in the Blockchain world with examples.
Trusting someone to act in their own interest
Bitcoin miners are a great example of this. Miners are trusted only to the extent of acting in their own interest. Satoshi’s design leveraged this trust to build a virtual trusted 3rd party, namely the “Bitcoin Network”, which provides a lot more trust on an otherwise shaky foundation. Bitcoin is honest — if you send money to one person, it won’t say you didn’t and let you spend it again; it is available — there will always be miners ready to serve you; it has integrity — as long as you pay the market price for the service, some miner will process your transactions; it acts with your intention in mind — no one but you can authorise the movement of your funds. How did Satoshi pull that rabbit out of the hat?
There are few places in Satoshi’s design where miners must be trusted beyond their own self-interest, but they do exist. For example, miners are trusted with the ordering of transactions within the Blockchain. In a perfect design, miners would only be trusted to act in their own interest, but here they have a second incentive: to order transactions in someone’s favour. This automatically makes miners a party in time-sensitive scenarios like decentralised high-frequency trading.
Trusting someone to be honest
I’ll provide an example, followed by a few concepts.
Let’s say you have a Blockchain use case: a decentralised car rental service. The user pays on the Blockchain, walks to the car and drives it away. The Blockchain can provide proof of the user’s right to use the car, but it can’t provide proof that the user has the right to drive on the road. A driver’s license issuer can (the DMV, or Roads and Maritime Services in Australia).
The trust we need from the RMS is honesty. The cunning part here is that they can’t technically lie: if the driver’s license issuer says you don’t have the right to drive, then you don’t. The current social structure is set up in such a way that they are the only source of truth on whether one has the right to drive on the road.
If the RMS is absent when such a proof is needed, we need a 3rd-party attestation. In real life, this happens when you drive with a foreign license, where the 3rd party is a professional translator. In our car rental game, this may occur if the RMS cannot provide a cryptographic proof usable with a Blockchain — which, unfortunately, covers the RMS equivalent in every country. So that 3rd party will be someone who can be trusted to check a driver’s license and provide a cryptographic proof. Such a 3rd party is known as an attester, similar to a notary. Attester cheating has happened before, e.g. CNNIC (China Internet Network Information Center) in 2015, so we should be cautious about this role.
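To make the attester’s role concrete, here is a toy sketch of how an attester could sign a statement about a driver’s license so that anyone can verify it later without contacting the attester again. It uses a Schnorr-style signature with deliberately tiny, insecure parameters purely for illustration; the statement format and license number are my own assumptions, and a real attester would use a vetted library over a standard curve such as secp256k1 or Ed25519.

```python
import hashlib
import secrets

# Toy group: INSECURE demonstration parameters only.
P = 2039   # safe prime, P = 2*Q + 1
Q = 1019   # prime order of the subgroup
G = 4      # generator of the order-Q subgroup

def h(*parts: bytes) -> int:
    """Hash arbitrary byte strings to an integer challenge mod Q."""
    return int.from_bytes(hashlib.sha256(b"|".join(parts)).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1   # attester's private key
    return x, pow(G, x, P)             # (private, public)

def sign(x: int, message: bytes):
    k = secrets.randbelow(Q - 1) + 1   # one-time nonce
    r = pow(G, k, P)
    e = h(str(r).encode(), message)    # challenge binds nonce and message
    s = (k + x * e) % Q
    return e, s

def verify(y: int, message: bytes, sig) -> bool:
    e, s = sig
    # Recompute r = g^s * y^(-e); since y has order Q, y^(-e) = y^(Q-e).
    r = (pow(G, s, P) * pow(y, (Q - e) % Q, P)) % P
    return h(str(r).encode(), message) == e

# The attester signs a (hypothetical) statement about a license.
priv, pub = keygen()
statement = b"license 123456 is valid until 2026-01-01"
sig = sign(priv, statement)
print(verify(pub, statement, sig))   # True: anyone holding pub can check this offline
```

The point of the sketch is that verification needs only the attester’s public key, not the attester’s presence, which is what makes attestations usable where the issuer is offline.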
The concept of honesty is actually a broad concept that includes a few smaller ones:
Consistency — not noting down $5 and charging $50 in the end;
Conclusiveness — not telling you that you had good shoes without also telling you that you matched colors terribly.
Openness — telling you the truth if you ask.
The system design requirements for each are different. For example, consistency is usually enforced by always publishing a hash up front, and revealing the underlying value when or if it is needed.
Trusting someone to always be available
Let me pick up the car rental example to explain when not to expect someone to be available. In practice, the driver’s license attester should not be expected to be available. Think of an absolutely honest notary company: they will never make a single mistake on driving license attestations, but when you need the car for the weekend, their server is offline and no one is in the office to fix it. Sometimes it’s not their fault. Suppose you are driving in the central Australian desert and a policeman pulls you over. There is no mobile signal, but you need to prove your right to drive. Any design assuming an attester’s availability at the time of validation will fail. And there is one such design: Facebook.
Currently, the world’s largest identity attester is Facebook. Although it only provides a low-value attestation (whether or not someone is its user), it is highly available. The expectation of honesty is rightly implied, because Facebook is the authority on whether someone is its user. But the expectation of availability is abused here. For example, Facebook users can’t identify themselves to a 3rd party while in China, where Facebook is completely blocked. Facebook login also fails for guests on some airports’ free WiFi, since one needs internet access to log in. If the RMS based its driver’s licence on Facebook’s technology, it would not pass the central Australian desert case.
There are finer levels of availability. For example, an available system can delay communication in a cunning way that breaks the protocol, in order to work around honesty requirements, akin to an honest person who suddenly needs the toilet when asked about certain things. How to solve this with complicated protocols is part of cryptography and distributed computing research.
Trusting someone to keep your secret
This consists of two parts: trusting him not to sell your secrets, and trusting his computer not to be compromised by black hats. For ease of communication, when I talk to a fellow Chinese I use the selling-secrets example, and when I talk to a westerner I use the compromised-computer example, making the case relatable in the respective culture. In both scenarios, the intended leak and the careless leak are considered the same.
In the Blockchain world, there are two general approaches to this.
One is not to give out secrets at all, and to make the network function without knowing them. For example, the Bitcoin network is designed to function without asking for a user’s secret key, unlike the online banking system, which requires a user’s secret in order to function. It is amazing how far this idea can go. Whole schools of cryptography are devoted to it, with products like Zcash (ZEC), which went further than Bitcoin and allows a user to send money not only without revealing a secret key but also without revealing where the money came from.
But this method is not without its limits. For example, although a doctor is capable of accepting payment without knowing the patient’s secret key or where the money came from (and without using a financial service like a bank), she would not be able to make a good diagnosis if she cannot access the patient’s medical report. The “make it work without giving away the secret” path ends here. A doctor will not work with encrypted medical reports, tracelessly-proven-to-exist medical reports, hashes of the medical reports, or cryptographically-proven-correctly-generated reports; she needs the actual, clear-text report, i.e. the very secret.
So here is the second approach: give out secrets in shares, so that the holder of each share of the secret key cannot leak the entire key on their own. Sadly, this approach never saw fruition in the Blockchain world because of its weakness against Sybil attacks, for which digital identity, the usual answer, wasn’t ready at a large scale.
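The “secrets in shares” idea can be sketched in its simplest, n-of-n form: XOR the key with random pads so that every share is needed and any subset short of all of them reveals nothing about the key. This is my own minimal illustration, not a production design; threshold schemes such as Shamir’s secret sharing generalize it to k-of-n.

```python
import secrets

def xor_all(chunks) -> bytes:
    """XOR a list of equal-length byte strings together."""
    out = bytes(len(chunks[0]))
    for c in chunks:
        out = bytes(a ^ b for a, b in zip(out, c))
    return out

def split(secret: bytes, n: int):
    """n-of-n split: n-1 random pads, plus one share that XORs back to the secret."""
    pads = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = xor_all(pads + [secret])
    return pads + [last]

def recombine(shares) -> bytes:
    return xor_all(shares)

key = secrets.token_bytes(32)        # stand-in for a wallet's private key
shares = split(key, 3)
print(recombine(shares) == key)      # True: all three shares together recover the key
```

Each individual share is uniformly random, so a single leaked share (an untrustworthy holder, a hacked computer) discloses nothing; the weakness the paragraph above mentions is deciding who the share-holders are, which is where Sybil attacks bite.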
Trust that someone will do what they say
It is an unpleasant idea that a system would fail to work when someone does not do what they promised. Naturally, any Blockchain designer will want to minimize the dependency on such trust. Bitcoin, for example, removed the need to trust a central bank by designing out artificial causes of inflation. It would be great if technology could serve this goal in other areas too, but very often you have no other choice. Think of a Blockchain boarding pass system. For simple functions such as international money transfer, of which banks provide only primitive functionality, Satoshi could design around the bank completely; but for complicated tasks such as flying with an airline, one can’t design around the airline itself.
After you have paid for your ticket, got the airline to sign your Blockchain boarding pass, presented the proof at the boarding gate and got to the final step, the airline still has to let you onto the plane for the system to function. There is no way to design the system so that the airline’s integrity is not assumed. But it may be reassuring to know that a non-Blockchain system couldn’t do better anyway, as evidenced by the bloody story of United Airlines pulling a perfectly legitimate passenger off the plane.
What a designer can do is minimize such dependency. It’s a bit like finding a man to marry: if you can’t find a man with perfect integrity, put yourself in a position where you do not need to depend on his integrity, e.g. obtain some work skills in case he abandons you; learn some martial arts; obtain proof of his criminal past to hold against him; or all three. A Blockchain system can be similarly skillful, depending less on actors’ integrity. Lots of smart contracts are already written with financial punishment for unwanted behaviours, e.g. requiring a deposit from United; others seek fallbacks, e.g. “United’s deposit goes to the profit of the airline which takes the customer United has abandoned”.
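The deposit-and-fallback idea can be sketched as plain state logic, modelled here in Python rather than an actual smart-contract language; the airline names and the deposit amount are hypothetical, and a real contract would also have to control who may call each function and when.

```python
# Sketch of "financial punishment on unwanted behaviour" with a fallback:
# the airline locks a deposit, which is returned on boarding or forfeited
# to whichever airline rescues an abandoned passenger.

class BoardingEscrow:
    def __init__(self, airline: str, deposit: int):
        self.airline = airline
        self.deposit = deposit       # locked when the ticket is sold
        self.settled = False

    def _settle(self):
        if self.settled:
            raise RuntimeError("escrow already settled")
        self.settled = True

    def confirm_boarding(self) -> dict:
        """Passenger boarded: the airline gets its deposit back."""
        self._settle()
        return {self.airline: self.deposit}

    def report_denial(self, rescuing_airline: str) -> dict:
        """Passenger denied boarding: the deposit goes to the airline
        that takes the abandoned customer instead."""
        self._settle()
        return {rescuing_airline: self.deposit}

escrow = BoardingEscrow("United", deposit=500)
print(escrow.report_denial("Delta"))    # {'Delta': 500}
```

Note that the escrow still cannot *force* United to let you board; it only makes denial expensive and funds the fallback, which is exactly the “minimize the dependency” point.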
The other approach is to draw the circle smaller: like a dog that, unable to defend a wolf-sized turf, marks a smaller turf around its house. For example, call your system a “Blockchain flight booking system” instead of a “Guaranteed Onboarding System”. Hyperledger is the typical small-turf system. For such a small-turf “pet” system, it’s naturally easier to find owners but harder to find players. That’s why Ethereum has a lot of players in it, while Hyperledger has a lot of buyers who want to own it.
Trusting someone to act on your intent
Bitcoin carrying out a transaction once you have signed it cryptographically is one such example. Cryptographically proving your intention, unfortunately, depends on the user’s capacity to safeguard their keys. A lot of bitcoins are lost forever because their owners could not safe-keep their keys. There are two approaches to this problem:
Since users can’t safeguard their own keys, let their bank do it for them. Of course, that also means the bank can choose not to act on the user’s behalf.
Empower users who otherwise can’t safeguard their keys. Work in this area, e.g. using secret sharing protocols or trusted computing, is only beginning.
For many decades the first approach has been the default choice. Almost all the cryptographic keys users hold nowadays, like those embedded in Visa/Mastercard cards, are given to the users by the bank, and surveys show that almost everyone trusts the bank.
Delegation to the bank is not without downsides: users need to pay, and banks gain control. The bank’s service fee is hidden in the price of goods and services, and the bank’s control becomes noticeable when your credit card gets locked over out-of-habit spending (spending in an unusual place or at unusual hours).
The need for better trust models
As I have demonstrated, trust can be gauged by availability, honesty, confidentiality and a few other measurements. I did not wander into the space of protocol design, that is, how to design for trust — that would be a big book on its own. What I want to achieve here is to show how little we understand trust, and thus to enable more discussion around trust in designing Blockchain solutions.
Too many ICOs and commercial PoCs are designed with blind trust in nodes — trusting that the node owner won’t change its code. Too many are designed with the complete logic in a smart contract, disregarding the cost of acquiring that trust. Too few are designed with good protocols. These PoCs will remain PoCs of PoCs: having failed to model trust, they are bound to be abused as soon as they go to production.