A new Linux /dev/random

11 Mar 2014


UPDATE

Forward secrecy may be achievable by zeroing out the rate bits after output and then churning the sponge. Reversing the permutation would then only reveal the capacity bits.

If an attacker recovers the rate bits anyway (e.g. because userland crashed too), they should only be able to recover some of the entropy absorbed since the last output. To recover other users' output they would need to recover the rate bits from that output - which is to say, they would already have the data they're looking for.

Introduction

The current Linux /dev/random and /dev/urandom were a very successful experiment in OS-provided entropy, and they can provide very high quality entropy to userland.

While not broken in any strict sense, the design is unnecessarily complex, and its concept of entropy counting and blocking is a pseudo-security feature: it does not deliver the security it suggests, and it can create security problems and deny availability.

Furthermore, the cold-boot problem remains unresolved: the kernel relies on userland to provide a seed of unknown quality to ensure /dev/urandom behaves properly.

This is my design for a new /dev/random, which I intend to implement and submit for inclusion upstream.

The entropy pool

There is only one entropy pool. It holds a counter of the estimated entropy fed in (which never decreases) and a Keccak (SHA-3) state. The Keccak parameters are both convenient and conservative - a rate of 1024 bits and a capacity of 576 bits. This provides 288 bits of security, which is an uncontroversially high number, and allows 128 bytes at a time to be read or absorbed.

Note that the normal padding will be omitted, which shouldn’t affect the security.
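
For concreteness, here is a minimal C sketch of what such a pool could look like - the names and layout are my own illustration, not actual kernel code:

    /* Illustrative sketch only: one sponge-based pool holding a 1600-bit
     * Keccak state plus the running entropy estimate. */
    #include <stdint.h>

    #define KECCAK_STATE_BITS     1600
    #define KECCAK_RATE_BITS      1024  /* 128 bytes absorbed or squeezed per permutation */
    #define KECCAK_CAPACITY_BITS   576  /* capacity / 2 = 288-bit security level */
    #define KECCAK_RATE_BYTES     (KECCAK_RATE_BITS / 8)

    struct entropy_pool {
        uint64_t state[KECCAK_STATE_BITS / 64]; /* the 25 64-bit Keccak lanes */
        uint64_t entropy_bits;                  /* estimated entropy absorbed so far; never decreases */
    };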

The /dev/urandom device

When data is requested from /dev/urandom, the call will block if insufficient entropy has been collected. The threshold should be a sysctl with a reasonable compiled-in default. The pool only offers 288 bits of security, but users could require a higher number if they are not confident in the entropy estimates.
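
A rough sketch of the blocking check, reusing the entropy_pool sketch above (the threshold name and default are assumptions for illustration):

    /* Illustrative only: the threshold would be a sysctl with a
     * compiled-in default; 288 matches the pool's security level. */
    #define DEFAULT_URANDOM_THRESHOLD_BITS 288

    static unsigned int urandom_threshold_bits = DEFAULT_URANDOM_THRESHOLD_BITS;

    /* Returns 0 once enough entropy has been absorbed; otherwise the
     * caller should sleep (in the kernel, on a wait queue) and retry. */
    static int urandom_ready(const struct entropy_pool *pool)
    {
        return pool->entropy_bits >= urandom_threshold_bits ? 0 : -1;
    }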

The device will return data from the same pool that input is absorbed into, in 128-byte chunks, with the Keccak-f permutation being applied after each 128 bytes are read.

If the requested length isn't a multiple of 128 bytes, the extra bytes are effectively discarded. Before read() returns, the rate bits of the pool are zeroed out and Keccak-f is called one more time.

This provides forward secrecy for the main pool - the data left in kernel memory cannot be used to retrieve data previously returned to userland. This is important because applications cannot control the contents of kernel crash dumps.
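
A sketch of that read path might look like this, assuming a keccakf() permutation over the 25-lane state and that the rate occupies the first 128 bytes of the state in memory (all names here are hypothetical):

    #include <string.h>

    extern void keccakf(uint64_t state[25]); /* Keccak-f[1600] permutation (assumed to exist) */

    /* Squeeze `len` bytes out of the pool, permuting after every 128-byte
     * block; unused bytes of a partial block are simply discarded.  Finish
     * by zeroing the rate and permuting once more, so what remains in
     * kernel memory can no longer be linked to the bytes handed to userland. */
    static void pool_read(struct entropy_pool *pool, uint8_t *out, size_t len)
    {
        while (len > 0) {
            size_t n = len < KECCAK_RATE_BYTES ? len : KECCAK_RATE_BYTES;

            memcpy(out, pool->state, n);
            keccakf(pool->state);
            out += n;
            len -= n;
        }

        /* forward secrecy: erase the rate bits, then churn the sponge */
        memset(pool->state, 0, KECCAK_RATE_BYTES);
        keccakf(pool->state);
    }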

The /dev/random device

/dev/random and /dev/urandom should be the same. There is no meaningful distinction between 288 bits of security and “true random”. The definition of 288-bit security is that it would take 2^288 operations to distinguish the output from a random oracle. That’s an insane number of operations.

Proofs, weaknesses, etc.

Almost all of the security relies on the proofs provided for sponge functions.

Keccak-f has a large security margin, and the number of rounds could be easily adjusted (even as a compile-time tunable) according to known attacks.
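
For example (purely illustrative), the round count could be a simple build-time constant:

    /* Hypothetical compile-time tunable: the standard Keccak-f[1600]
     * permutation uses 24 rounds; a build could raise or lower this
     * in response to published cryptanalysis. */
    #ifndef KECCAKF_ROUNDS
    #define KECCAKF_ROUNDS 24
    #endif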

I have concerns about zeroing out the rate bits after output - feedback from cryptographers is appreciated.