Wednesday, January 29, 2014

An actually secure payments protocol, and Coin's broken security

Here is a sketch of what should happen when I pay for a meal at a restaurant:

  1. Waiter brings the bill over, which contains a QR code or some such. QR code contains the name of the merchant, its public key, and an amount. I point my phone at it, up pops a screen that says "Acme Burgers Inc. would like to charge $33.04", I add a tip, bringing the total to $39.04, then press 'Authorize'. The payment app on my phone uses my private key (stored only on my phone in RAM) to sign the tuple ($39.04, timestamp, Acme Burgers Inc public key). My phone renders this signed tuple as a QR code. I get up and walk out of the restaurant, stopping at the door to scan my QR code at a reader. Acme Burgers now has information needed to charge my bank later.
  2. At any point later, or perhaps immediately, Acme Burgers contacts my bank and demonstrates cryptographic proof that I have authorized a charge to them of $39.04, and said bank transfers cash money to Acme and debits my account with them. Because the signed tuple names Acme's public key, no party without Acme's private key can claim this transfer, and Acme is prevented from replaying it later due to the timestamp / nonce. (A code sketch of the signing and verification steps follows.)
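
Here's a minimal sketch of steps 1 and 2 in Scala, using the JDK's built-in Ed25519 signatures (available since JDK 15). The tuple encoding and all the names here are illustrative assumptions, not any real payment API:

import java.security.{KeyPairGenerator, Signature}
import java.nio.charset.StandardCharsets.UTF_8

object PaymentSketch extends App {
  // Hypothetical encoding of the signed tuple: (amount, timestamp, merchant public key).
  def tuple(amountCents: Long, timestamp: Long, merchantKey: Array[Byte]): Array[Byte] =
    s"$amountCents|$timestamp|${merchantKey.map("%02x".format(_)).mkString}".getBytes(UTF_8)

  val customer = KeyPairGenerator.getInstance("Ed25519").generateKeyPair
  val merchant = KeyPairGenerator.getInstance("Ed25519").generateKeyPair

  // Step 1: the phone signs (amount, timestamp, merchant public key) with the
  // customer's private key, which never leaves the phone.
  val msg = tuple(3904, System.currentTimeMillis, merchant.getPublic.getEncoded)
  val s = Signature.getInstance("Ed25519")
  s.initSign(customer.getPrivate)
  s.update(msg)
  val authorization = s.sign // rendered as a QR code at the door

  // Step 2: the bank verifies the authorization against the customer's public key.
  val v = Signature.getInstance("Ed25519")
  v.initVerify(customer.getPublic)
  v.update(msg)
  assert(v.verify(authorization)) // Acme can be paid exactly $39.04, exactly once
}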

Unlike credit cards, I don't give the merchant access to all my credit, for all time, and hope they only use the credit I've authorized. I've given them exactly the capability they need, no more. The security of the merchant's accounting systems is no longer something that concerns me.

A few notes:

  • This protocol does not require internet access at the time of sale, and it's pretty damn convenient too.
  • The protocol could be made anonymous, so the merchant does not even learn my public key.
  • Recurring payments could work the same way, only what is being signed by me is something like "allow a charge of $7/mo, for the next 12 months", rather than a single amount.
  • Online payments could use the same sort of protocol. Obviously we can dispense with using QR codes as a communication channel (or not).
  • This is obvious stuff, and not in any way new, right?

Now, why the heck doesn't something like this exist? Software lets us solve these problems better, and yet the payments industry is still using the virtual equivalent of distributing and transferring account numbers on scraps of paper.

Here is what passes for innovation in the payments industry:

  • Coin was announced several months ago. It's a single physical card, called a 'Coin', which can store all your cards. By pressing a button on the Coin, you reprogram the magnetic strip to a different active card. Now, rather than giving the merchant access to all your credit, for all time, for one of your accounts, you can give them access to ALL of your accounts. When I pointed out this very real and very serious security issue, I got radio silence...
  • LevelUp lets you pay with your phone. Rather than carrying around a credit card, which gives access to all your credit, you carry around your phone with the LevelUp app, which has a QR code that gives access to all your credit... The security model is unchanged! Nothing stops the merchant or anyone who hacks the merchant's systems from repeatedly using the information from your QR code to drain your bank account. Nothing stops a nefarious bystander from grabbing your QR code and draining your bank account (they just need to have or control a merchant account with LevelUp). What's missing is the step where the user actually authorizes the charge.

Bob: Oh, come on. Credit cards aren't so bad. Even if they are technically insecure, I'm not liable for fraudulent or unauthorized charges, so what difference does it make to me?

Alice: Who do you think pays for all those fraudulent charges?

Bob: What do you mean? My credit card company pays for it. Or the merchant pays for it.

Alice: Yes, and how do you think they pay for it?

Bob: Umm...

Alice: It's paid for via credit card processing fees, which get passed on to you, the customer, in the form of higher average prices. The money has to come from somewhere. So actually, it's you that pays for fraudulent charges, you just pay for them as a tax on every transaction, rather than in bursts. And those are just the direct costs of fraud. The indirect costs are also significant--consider the additional time spent having to review your credit card statement more carefully due to higher probability of fraud. Consider the time spent having to update your payment information with dozens of merchants when Target or some other merchant with your information gets hacked. Consider the additional machine learning algorithms the credit card companies have to run, to analyze transactions and detect fraud. Consider the cost of the false positives in these systems, the salaries of the phone operators who have to be on hand when you call to say 'no, really, I am in Montana trying to buy some beef jerky, my card has not been stolen'. And that's just off the top of my head.

This stuff isn't that hard, is it?


Monday, November 18, 2013

The reactive manifesto is 'not even wrong'

I am sure the authors and signers of the reactive manifesto are well-meaning people, but the entire tone and concept of the site is just wrong.

On a technical level, though, the reactive manifesto is too vague to critique. I could try to interpret and respond to some of the vague claims that seem wrong or silly (for instance, I detect some confusion between an asynchronous API and an asynchronous implementation), but then I fully expect defenders to define away the criticism or say I've misinterpreted things. Its arguments hold up only by being not even wrong.

When you cut through the bullshit, it seems the only actual information content consists of inane or tautological statements like "being resilient to failure is good" and "one shouldn't waste OS threads by blocking". Do we really need a manifesto to tell us these things? Of course not.

But the point of the reactive manifesto is not to make precise technical claims. Technical arguments don't need manifestos or rallying cries. Imagine the ridiculousness of creating quicksortmanifesto.org to rally people around O(n*log n) sorting algorithms as opposed to O(n^2) algorithms.

No, the reactive manifesto is a piece of pop culture, which I mean in the sense used by Alan Kay:

Computing spread out much, much faster than educating unsophisticated people can happen. In the last 25 years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were. So I think the lack of a real computer science today, and the lack of real software engineering today, is partly due to this pop culture.

In the reactive manifesto, one is invited to join a movement and rally around a banner of buzzwords and a participatory, communal cloud of vagueness. Well, I don't want to join such a movement, and the pop culture and tribalism of our industry is something I'd like to see go away.

I would welcome some interesting precise claims and arguments (that aren't inane truisms) about how to build robust large systems (there may even be the seeds of some nuggets of truth somewhere in the reactive manifesto). But let's not make it a manifesto, please!

Update: I recently received a note from a recruiter, which contained the following gem:

We checked out your projects on GitHub and we are really impressed with your Scala skills. Interested in solving hard problems? Does designing and building massively scaling, event-driven systems get you excited? Do you believe in the reactive manifesto? Let’s talk.

The Apple TV remote has a terrible UX

The Apple TV remote looks pretty but has an irritating UX:

  • The up/down/left/right control is a single continuous circular ring. Thus, there is no tactile feedback as to when you've moved from e.g. the 'up' region to the 'right' region. If I try to move too quickly between directions, I often end up pressing the wrong button.
  • Compounding this problem, there is a button which occupies the entire center of the up/down/left/right ring whose function is 'select'. There is zero bevel separating the outer ring from this center button, so again, no tactile feedback whatsoever, and I routinely (accidentally) select something when I'm really trying to move up/down/left/right. This is irritating in a First World Problems sort of way, especially since there's a noticeable delay in opening the (accidentally selected) next screen, and in navigating back. (And for some reason, the back button is labeled 'menu', but let's ignore that for now.)

To compensate, one has to either be overly deliberate or look down at the remote, which makes using the device an awkward, tiptoeing experience rather than a natural extension of the hand. And if you've ever used the remote, now that I'm pointing it out, you're probably realizing you've had the same problem.

Unlike a trackpad, a remote like this is fundamentally a discrete input device, like a keyboard, which has bevels on the home keys for good reason. Making the remote feel smooth and continuous is inappropriate and hurts usability.

We can debate how best to fix this particular product. But I have a more fundamental question--how did this get into production? I made the above observations after about 2 minutes of using the device. I am quite sure the geniuses at Apple could have designed an equally pretty remote without these usability problems. (Personally, I'd remove the center button and add a bevel or channel separating the up/down/left/right regions.) Form has to follow function at some point. Was this just a case of brainlessly following some design rules ("prefer smooth to bevels") without actually thinking about the consequences for the user?

In many ways, good design is not hard. All you have to do is pay a modicum of attention.


Wednesday, September 11, 2013

Actors are overly nondeterministic

Actors are useful as a low-level concurrency primitive, but as has been written elsewhere, they aren't the best high level programming model for writing concurrent programs.

Let's look at the problems with actors from another angle. Yes, actors are all about side effects, side effects don't compose, etc, etc, but more specifically, the actor model takes the wrong default of starting with an overly nondeterministic programming model, and then attempting (via some very first-order, out-of-band mechanisms that don't compose) to prune away this nondeterminism in order to obtain predictable performance. In particular, due to its excessive nondeterminism, the actor model lacks predictable tools for controlling queueing and memory usage. Where have we heard this sort of argument before? The exact same problems were noted with threads:

Threads... are wildly nondeterministic. The job of the programmer is to prune away that nondeterminism. We have, of course, developed tools to assist in the pruning. Semaphores, monitors, and more modern overlays on threads... offer the programmer ever more effective pruning. But pruning a wild mass of brambles rarely yields a satisfactory hedge.

To offer another analogy, suppose that we were to ask a mechanical engineer to design an internal combustion engine by starting with a pot of iron, hydrocarbon, and oxygen molecules, moving randomly according to thermal forces. The engineer’s job is to constrain these motions until the result is an internal combustion engine. Thermodynamics and chemistry tells us that this is a theoretically valid way to think about the design problem. But is it practical?

We could almost do a search-and-replace to turn the paper into an argument against using actors! Even though an actor (generally) mutates only local state, these side effects are observable to any piece of code capable of messaging the actor, and observability of side effects propagates transitively. So while actors avoid a class of bugs that arise when using preemptive multithreading with mutable state for communication between threads, they don't really change the overall programming model of having unrestricted mutable state, with the possibility of completely different emergent behavior based on nondeterministic interleaving of actor message processing. We have the same general class of errors to worry about and the same problems with reasoning about our code.

Let's look at an example. Suppose we have two actors, P and C, in a producer/consumer relationship. P produces values and sends them to C. In the actor model, at any point during execution P may issue a C ! msg. This means the memory usage of our program is an emergent runtime property, since every C ! msg results in a message queued in the mailbox of C, and any number of messages may pile up depending on the relative rates of processing of P and C. Likewise, the latency of the overall pipeline is very much an emergent property, and depends on the nondeterminism of actor scheduling. Yes, there are workarounds: we can build an ad-hoc protocol specific to P and C, or we can have the actor scheduler implement some global policy for congestion control (like making message send cost a function of the target's current mailbox size), and so on, but approaches like these are band-aids that don't scale. As with locks and semaphores, the deeper issue is a flawed underlying conceptual model.
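
To make this concrete, here's a minimal sketch of P and C using classic Akka actors (an assumption on my part--any actor library exhibits the same behavior). Nothing in the program text bounds C's mailbox; memory usage emerges from scheduling and relative processing rates:

import akka.actor.{Actor, ActorRef, ActorSystem, Props}

class C extends Actor {
  def receive = { case n: Int => Thread.sleep(10) } // slow consumer
}

class P(c: ActorRef) extends Actor {
  def receive = {
    // fast producer: each ! enqueues into C's unbounded mailbox
    case "go" => (1 to 1000000).foreach { n => c ! n }
  }
}

val system = ActorSystem("demo")
val c = system.actorOf(Props[C], "consumer")
system.actorOf(Props(new P(c)), "producer") ! "go"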

There's a kind of general principle here which the actor model violates. Any decision affecting a program's asymptotic runtime or space complexity should be under explicit programmer control, using a clear and composable programming model. Actors violate this principle by making a key decision affecting memory usage (when to sequence a message send, and what sort of nondeterminism to allow) an emergent runtime property, and forcing us to encode these decisions using ad hoc protocols, which aren't really first class or composable.

Purely functional approaches to concurrency are explicit. There are no side effects, we have to explicitly sequence the effects of our program, and if we want nondeterminism, we ask for it explicitly. So, rather than starting with unrestricted nondeterminism, with the possibility of unbounded memory usage depending on relative processing rates and what interleaving we get, we start with a fully deterministic model and add nondeterminism only as needed and where helpful.

Here's an example from scalaz-stream. For background, the scalaz-stream library represents sources, sinks, effectful channels, and pure stream transducers using a single basic type, Process:

  • Process[F,O]: A process which emits O values, and which may issue requests of type F[R] for any choice of R. Where F forms a Monad, this can be thought of as an effectful source of O values.
  • Sink[F,O]: A sink is represented as a source of effectful functions, or a Process[F, O => F[Unit]]
  • Channel[F,I,O]: More generally, a Channel is a source of effectful functions, or a Process[F, I => F[O]].
  • Wye[I1,I2,O]: A pure stream transducer of two inputs. Emits values of type O, and may request values from the left, from the right, or from both, allowing for nondeterminism.
  • Process1[I,O]: A pure stream transducer of one input. Emits values of type O, and may request values from the input of type I.
  • Tee[I1,I2,O]: A pure transducer of two inputs. Emits values of type O, and may request values from the left or from the right.

To feed a source, src: Process[F,String] to a snk: Sink[F,String], we write src.to(snk). This is fully deterministic--we repeatedly read an element from src, send it to snk, and await a response from snk, halting if either src or snk halts. If we want to allow nondeterminism, we can do so, but we do so explicitly, using a more general combinator like connect: src.connect(snk)(wye.boundedQueue(20))

We're still connecting src to snk, but we're now using a first-class object, a Wye to handle the queueing strategy. We happen to be using the very simple queueing strategy, wye.boundedQueue(20), which will run the src and snk concurrently, blocking on snk only if 20 elements have enqueued unanswered at the snk. But the Wye type is an arbitrary stream transducer and we can build up more complex logic for deciding what sort of nondeterminism is allowed (block if more than 20 elements enqueue unanswered OR if four consecutive values are received from src that satisfy some predicate, and so on). The decision about nondeterminism, which in turn affects the memory usage of our program, is encoded as a first class object, and we can compose and abstract over these objects using the usual powerful tools of FP.

So to summarize, actors are overly nondeterministic. Message sends are actions with side effects, rather than first class values we explicitly sequence and whose nondeterminism we explicitly control. Hence we leave an important property of our program, namely the queueing and memory usage, to various out-of-band mechanisms that aren't visible to the typechecker.

The Scala standard library Future type has a similar problem. The standard library Future type represents a running computation, rather than a description of a computation. This unfortunately means that the actual parallelism and hence runtime complexity one obtains for a Future[A] is an emergent runtime property which even depends on arbitrary details of how various functions that produce futures are factored. This is something we talk about in chapter 7 of the book, when we work through a design exercise of a library for describing parallel computations.
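
For example (a standard illustration; expensiveA and expensiveB are hypothetical stand-ins for real work), the following two snippets have the same types but different parallelism, purely because a standard library Future begins executing the moment it is created:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def expensiveA(): Int = { Thread.sleep(500); 1 } // stand-ins for real work
def expensiveB(): Int = { Thread.sleep(500); 2 }

// Both futures are created (and start running) before being combined: parallel.
val fa = Future(expensiveA())
val fb = Future(expensiveB())
val parallel = for { a <- fa; b <- fb } yield (a, b) // ~500ms

// Refactor ever so slightly, and the second future isn't created until the
// first completes: sequential. Same types, different runtime complexity.
val sequential = for {
  a <- Future(expensiveA())
  b <- Future(expensiveB())
} yield (a, b) // ~1000ms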

In contrast, the scalaz.concurrent.Task type is a purely functional description of a (possibly) parallel computation. Rather than nondeterminism being a runtime property, determined by the order that strict futures are created, Task is completely deterministic, generally instantiated lazily, and allowed points of nondeterminism are supplied explicitly using the functions of the Nondeterminism typeclass. It's a more explicit model, but that's not a problem. With the usual tools of FP, we can abstract over any common patterns we uncover when building Task values.
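
Here's a sketch of the same contrast using Task (same hypothetical stand-ins; API names per the scalaz 7 of this era):

import scalaz.Nondeterminism
import scalaz.concurrent.Task

def expensiveA(): Int = { Thread.sleep(500); 1 }
def expensiveB(): Int = { Thread.sleep(500); 2 }

val ta = Task { expensiveA() } // just a description; nothing is running
val tb = Task { expensiveB() }

val sequential = for { a <- ta; b <- tb } yield (a, b) // deterministic: a, then b
val parallel = Nondeterminism[Task].both(ta, tb)       // nondeterminism, asked for explicitly

// Nothing above has executed; we choose when to run:
// sequential.run; parallel.run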

Tuesday, September 10, 2013

Why type systems matter for UX: an example

I've written previously about the general problems with the application-centric view of software. Here I'm going to discuss a specific example. First, some background, from my earlier post:

Applications are bad enough in that they trap potentially useful building blocks for larger program ideas behind artificial barriers, but they fail at even their stated purpose of providing an 'intuitive' interface to whatever fixed set of actions and functionality its creators have imagined. Here is why: the problem is that for all but the simplest applications, there are multiple contexts within the application and there needs to be a cohesive story for how to present only 'appropriate' actions to the user and prevent nonsensical combinations based on context. This becomes serious business as the total number of actions offered by an application grows and the set of possible actions and contexts grows. As an example, if I just have selected a message in my inbox (this is a 'context'), the 'send' action should not be available, but if I am editing a draft of a message it should be. Likewise, if I have just selected some text, the 'apply Kodachrome style retro filter' action should not be available, since that only makes sense applied to a picture of some sort.

These are just silly examples, but real applications will have many more actions to organize and present to users in a context-sensitive way. Unfortunately, the way 'applications' tend to do this is with various ad hoc approaches that don't scale very well as more functionality is added--generally, they allow only a fixed set of contexts, and they hardcode what actions are allowed in each context. ('Oh, the send function isn't available from the inbox screen? Okay, I won't add that option to this static menu'; 'Oh, only an integer is allowed here? Okay, I'll add some error checking to this text input') Hence the paradox: applications never seem to do everything we want (because by design they can only support a fixed set of contexts and because how to handle each context must be explicitly hardcoded), and yet we also can't seem to easily find the functionality they do support (because the set of contexts and allowed actions is arbitrary and unguessable in a complex application).

Today I was forced to edit a Microsoft Word document, containing comments (made by me, and by others) and tracked changes. I found myself wanting to delete all comments, and accept all tracked changes. It took a few minutes to figure out, and I very quickly gave up trying to actually discover the functionality within Word's actual UI and resorted to using Google. God help me if I wanted to, say, delete only comments made by me within the last ten days.

This problem isn't at all unique to Word, Word just happens to be a large application, and like all large apps it has no cohesive story for organizing all its functionality. There are menus with hundreds of entries, and toolbars. Of course, the designers of Word have tried to create some reasonable taxonomy for grouping these functions, but on some level the location of any one function is arbitrary and unguessable. It's the same thing in any complex application: Eclipse, IntelliJ, Photoshop, Illustrator, and so on, which is part of the reason why there's an entire subdivision of the publishing industry devoted to books explaining how to get things accomplished with these applications. Every single complex app is like a foreign language, with its own unique and arbitrary vocabulary and grammar, which must simply be memorized in order to become productive.

Nowadays, I think people try to write simpler applications than Word, with less functionality. This makes the UX problem more tractable while throwing the baby out with the bathwater. I want to be able to do complex things when I interact with some piece of software, I just also want all this complex functionality to be actually discoverable! Furthermore, it should be as obvious how to assemble functionality that didn't previously exist, using existing functions.

Type systems solve exactly this problem. Here's a sketch of how a type directed version of Word could work.

  • I click on a comment. A status bar indicates that I have selected something of type Comment. Now that I have a handle to this type, I then ask for functions accepting a List Comment. The delete comment function pops up, and I select it.
  • The UI asks that I fill in the argument to the delete comment function. It knows the function expects a List Comment and populates an autocomplete box with several entries, including an all comments choice. I select that, and hit Apply. The comments are all deleted.
  • If I want, I can insert a filter in between the call to all comments and the function to delete those comments. Of course, the UI is type directed--it knows that the input type to the filtering function must accept a Comment, and prepopulates an autocomplete with common choices--by person, by date, etc. (A toy sketch of this type-directed lookup follows this list.)
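
Here's a toy sketch of the type-directed lookup underlying this interaction. Everything here is hypothetical--the point is just that the UI can mechanically filter the universe of actions down to those accepting the type at hand:

// A tiny universe of types and actions, purely illustrative.
sealed trait Ty
case object CommentTy extends Ty
case object DocTy extends Ty
case class ListTy(elem: Ty) extends Ty

case class Action(name: String, input: Ty)

val registry = List(
  Action("delete comments", ListTy(CommentTy)),
  Action("accept tracked changes", DocTy)
)

// Given the type of the current selection, offer exactly the actions that apply.
def actionsFor(selected: Ty): List[Action] =
  registry.filter(_.input == selected)

// Clicking a comment and asking for List Comment functions:
actionsFor(ListTy(CommentTy)) // List(Action("delete comments", ListTy(CommentTy)))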

What I (most likely) don't want is full text search through the documentation. That is a much too sloppy way of querying for the functions I want, and it's not nearly as seamless as the above.

I'm not necessarily advocating any particular presentation of the above interaction, though that is an interesting UX problem to solve. I'm just saying, this is the sort of discoverable, pleasing, logical interaction I want to have with software I use. The goal of software is to provide a kind of programming environment, and we ought to use the powerful tools of programming--namely type systems and type directed editing--to organize and unify the functionality that software allows users to interact with.

Wednesday, September 4, 2013

How to model a doorknob, functionally

Suppose you're a game programmer tasked with coming up with a way of representing doors in the game. You reason that doors have doorknobs which can be turned to open the door, leading to the following typical OO design, overly simplified:

trait Door {
  def open: Unit
  def close: Unit
}

trait Doorknob {
  def turn: Unit
}

This sort of design embodies a certain way of thinking about software. A doorknob may be turned, which triggers some update to the state of the door, and we model these state changes in an entirely first order way, with no separation between the production of an effect (the state update) and its interpretation (the propagation of this state update). The view here is that actions must be produced and executed by the same piece of code.

The actor model works the same way. If we model doors and doorknobs with actors, we might have something like:

object messages {
  case object Turn
  sealed trait DoorMsg
  case object Open extends DoorMsg
  case object Close extends DoorMsg
}
import messages._

val door: Actor[DoorMsg] = ...
def knob(door: Actor[DoorMsg]): Actor[Turn.type] =
  actor { case Turn => door ! Open }

Just what is the essence of "doorknob-hood"? This is a serious question. The OO/actor response is that a doorknob is something that can be turned. But that is unsatisfying, because what it means to turn something is left unspecified--the result of turning is Unit! But what does that mean?

A more functional view of the essence of doorknob-hood is that a doorknob provides a pure function that transitions a door from the state of being closed to a state of being opened, a Door => Door. This is just one choice, but you get the idea. With this model, we're separating the production of new state from the propagation of that state. Actually running the action of turning a doorknob and propagating new states is not the concern of the doorknob.
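
Concretely, the earlier trait might become something like this (a sketch; the Door representation is just an illustrative choice):

case class Door(open: Boolean)

trait Doorknob {
  // Turning produces a new Door state; actually running this transition
  // and propagating the new state is someone else's concern.
  def turn: Door => Door
}

val knob = new Doorknob {
  def turn = door => door.copy(open = true)
}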

And here are some more examples:

  • A key is not something you insert into a lock, it provides a function Door => Option[Door] that returns a Door in the unlocked state, or fails if it's the wrong key. (And we could now state more precisely that a doorknob takes a Door in its unlocked and closed state and returns an Option[Door] in its unlocked and open state, failing if the door is locked. Both are sketched after this list.)
  • A timer is not something you set, it is a function from a duration to a boolean time series that will spike true at some point in the future. Who consumes this time series is not the concern of the timer.
  • A microwave is not something you turn on, it provides a function from a cold object to a warm object.
  • And so on...
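
Here's a minimal sketch of the key and doorknob refinements from the first bullet (the representation is mine):

case class Door(open: Boolean, locked: Boolean)

// A key provides a function that unlocks a Door, failing if it's the wrong key.
case class Key(fits: Door => Boolean) {
  def unlock: Door => Option[Door] = door =>
    if (fits(door)) Some(door.copy(locked = false)) else None
}

// A doorknob takes an unlocked, closed Door to an unlocked, open one,
// failing if the door is locked.
val turnKnob: Door => Option[Door] = door =>
  if (!door.locked && !door.open) Some(door.copy(open = true)) else None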

Once you start seeing objects in this way, the functional view makes a lot of sense--we are modeling objects by the collection of effects they produce. Objects are not responsible for interpreting and sequencing these effects; that can be an entirely separate concern.

The design of a decentralized currency (part 2)

Last time, we learned that currency, in its purest form, is simply an accounting device to enforce a cooperative tit-for-tat system of exchange. If Alice provides something of value to Bob, she obtains a token enabling her to acquire something of comparable value from Bob or any other participant in the currency. In principle, we could execute these same mutually beneficial transactions using explicit contracts, except that would be rather inconvenient. Thus, according to this view, "so long as we all agree", it is in some sense arbitrary what tokens we use for trade--it could be seashells, shiny discs of yellow metal, scraps of paper, or 100 digit numbers. That is, there is no "one true" token type to use as a medium of exchange, we merely prefer one sort of token over another for properties like convenience, stability, and so on.

This view of money as purely an accounting device ("medium of exchange" is the term often used) differs from how people have thought of money for most of human history, during which money has been tied to some widely valued commodity, often gold and/or silver, with each unit of currency corresponding to some amount of this commodity (often literally, via gold coins of some fixed weight, or via more convenient paper notes which banks agree to exchange for actual gold on demand). Commodity-backed currencies tend to arise naturally, without any coordination or central government.

But our goal here is developing a pure currency, a currency which is only the accounting device, not one tied to any commodity that is valued for its own sake (we'll see in a later post that there are problems with commodity-backed currencies). There are difficulties with driving adoption of a pure currency (the usual solution is for a government to mandate its use and generate 'artificial' demand for it by forcing citizens to pay taxes in the currency) and preserving its stability that we'll discuss later. But a pure currency gets at the essence of what money helps people achieve--conduct mutually beneficial transactions with a minimum of transaction costs.

If we view currency as merely an accounting device for this cooperative system, a question arises: why is there so much complexity around our monetary system? Why not simply allocate a fixed number of units of currency (which could all be assigned meaningless 100-digit numbers) and "just get everyone to agree" (or have government force!) use of these tokens as a medium of exchange? To see why this and other related systems like BitCoin are unsuitable, we need to examine the critical role of interest in a monetary system and understand what factors ought to affect the rate of interest.

Interest is broadly defined as compensation for any contract whose payout(s) occur in the future. For instance, when you obtain a loan to purchase a house, the bank charges interest as compensation for an (uncertain) stream of future monthly payments. The same principle, of compensation for future payouts, is at work in all manner of mutually beneficial contracts.

In this post, we are going to try to get to the bottom of why a future payout (as opposed to a present payout) warrants additional compensation. Intuitively, it seems natural this should be so, but we want to be very careful to separate out what factors are at work here. As we'll see, there is one fundamental factor with serious consequences for any monetary system.

What factors affect interest rates?

Let us start by pondering a simple thought experiment. Suppose Alice is a neighbor of yours. She knocks on your door one day and asks to borrow your laptop... for a full year. At the end of the year, she tells you, she'll agree to return your laptop or another laptop of the same model and condition (let's pretend that all your personal data is stored in the cloud, so you don't care if you get back your exact laptop). Alice has a contract all written up and ready for you to sign. She asks you: what do you expect in compensation for agreeing to this contract?

You might request anything as compensation for this (uncertain) future payout: $50 now, at contract signing, or a stream of payments of $5 per month. Or you might demand a 25% 'better' laptop after one year, or a cappuccino delivered to your door every Thursday for six months. What's important here is not what you ultimately decide to request, but what factors affect your request. Before continuing on, you may want to stop and think about this question.

Here are some possible answers. We'll deconstruct them in a minute:

  • As with any contract, a general concern is that you can't really be sure the other party (Alice) will hold up their end of the bargain. Yes, Alice is your neighbor and seems trustworthy, but anything could happen. What if she just skips town? Or sells your laptop to feed a secret cocaine habit, then refuses (or is unable) to provide you with a comparable laptop a year from now? We say there is some risk of default, the chance that the person you are loaning to may be unable to pay back the loan (or more generally, counterparty risk: the risk that the other party to the contract, Alice, may not be willing or able to uphold their obligations). To offset this risk, you require some compensation.
  • As with any market, the buyer must pay enough to bid the good or service away from its possible alternative uses. During the time Alice is using your laptop, you are obviously not using it. The laptop provides you with enjoyment and utility, and this contract Alice is proposing asks you to put off that enjoyment for a year, purchase a replacement in the meantime, or find other, less convenient ways of doing the same things (using the computer at work, using your iPad instead, etc). For this you expect compensation. Alice must also bid use of the laptop away from others interested in it (for instance, if Tom, Dick, and Harry are all lined up, willing and able to pay $100 to rent your laptop for a year).
  • Lastly, you have some expectation that computer technology will continue advancing, making your computer "worth less" in a year. We can make this argument more precise, but to illustrate the idea, suppose you believe that computers will become 30% better (for "the same" cost) a year from now. Then at the very least, you ought to expect Alice to pay you 30% of the current value of your computer now or provide equivalent compensation, in addition to considering the other factors above. After all, if you didn't price this factor into the contract, Alice could immediately sell your laptop now for (let's just say) $100, then buy a brand new, older model equivalent for $70 one year from now, netting her a $30 profit! Related to this, even if you expect your laptop to be worth the same, there is some uncertainty around this expectation because it is with regard to a future event. If you didn't factor this into the price you negotiate, you are effectively taking on additional risk for no additional compensation.

Now each of these factors are interesting in their own right, but in an effort to get at the essence of the problem, let's idealize our scenario a bit. Suppose that, instead of Alice, you are approached by Trent. For purposes of this thought experiment, let us assume that Trent, who also happens to be your friend, is known worldwide as the most trustworthy person in existence and has zero risk of defaulting on his obligations. If the earth still exists in a year, Trent will provide you with a laptop, no doubt about it. This eliminates any interest you might charge due to factor 1, the risk of default.

Furthermore, since Trent is your friend, suppose you aren't really looking to maximize how much you get from the deal. You only want to ensure that you "break even" and are fine with selling Trent the contract "at cost". Thus, the fact that Tom, Dick, and Harry are all lined up willing to pay $100 is irrelevant. The only thing you have to decide is how much the use of your laptop is worth to you over the one year period, because that is what you will be giving up.

What about factor 3, your expectation of (and uncertainty around) economic growth in laptop production? Even if Trent is perfectly trustworthy, and even if you don't mind just breaking even, this does not change your expectation that it will be easier/cheaper to obtain a comparable laptop one year from now. And if it is cheaper to obtain a comparable laptop one year from now, that implies (among other things) that if you wait a year to sell the laptop that Trent wishes to rent, you'll have to sell it for less than you could sell it for now. From this we conclude that the possibility of economic growth must always be priced into any contract whose payout occurs in the future.

Now consider the related question--suppose that instead of asking for use of your laptop for a year, Trent (or Alice) asks for $100, to be paid back in one year. What would you expect as compensation? Notice that the same factors apply to your decision:

  • You expect to be compensated for the possibility that Alice won't pay you back. Perhaps you request $10 now, in addition to the $100 a year from now, to account for this possibility. Or perhaps you request $112 after a year. For Trent, you don't bother with this premium.
  • Even for Trent, you will expect some premium due to the fact that receiving $100 a year from now is not as good as $100 now. This isn't just because you expect the dollar might depreciate by 3% in a year due to inflation, which we'll discuss in the next post; there's also the same opportunity cost of not being able to spend that $100 now on other goods or services you value. After all, with that $100 in your pocket, you could buy those cappuccinos for yourself every Thursday for six months, something you derive enjoyment from. Lending out your $100 means delaying this enjoyment for a year. We say there is a time preference.
  • Just as it may be easier to obtain a comparable laptop a year from now, it may be easier to obtain $100 one year from now. There are various reasons why this could be so. One possibility is an increase in the money supply or availability of credit, due to actions of the central bank. We are going to ignore this factor, since our goal is a central-bank free currency and we don't want any central authority making decisions which affect interest rates. But the other key factor that may make it easier to obtain $100 a year from now is an increase in the rate at which currency circulates, also known as the velocity of money. We will be talking a lot more about velocity in future posts, but we can see that if on average a dollar changes hands more frequently, that implies dollars are easier to come by.

We are going to focus exclusively on this last factor, not because time preferences are uninteresting, but because this preference is a personal decision. A wealthy individual with more cash than he/she could reasonably spend in the next year may not demand as much interest from a loan due to this factor as someone who needs the cash for some short-term expenditures. Thus, it is somewhat orthogonal--individuals will price this factor into interest rates on loans and other contracts, but the monetary system itself does not need to be aware of these preferences.

Conclusion

We've examined what factors determine interest rates, then narrowed down these factors to a single, fundamental one, which could be summarized in the following way:

Any payout that occurs in the future, in whatever units (be it dollars, laptops, or cappuccinos), must account for the possibility that those units have differing relative value in the future as compared to the present.

In the next post, we are going to reason through how failure to account for this factor in a currency can lead to serious distortion of markets and financial instability. This will make it clear why commodity-backed currencies like a gold standard and simplistic 'fixed' currencies like BitCoin are inadequate and motivate design of 'smart' digital currency that explicitly accounts for changes in monetary velocity. We'll then compare the velocity-adjusted currency we develop to a typical monetary system managed by a central bank.

We've been a bit vague about this concept of "value" so far and what it means for an asset to change in value. This is something we'll delve into further in the next post.