
  • A discovery that could improve encryption also could make polling more accurate and offer greater security for voting machines.

Doing The Math For Better Encryption

The Cyber Edge
May 1, 2017
By George I. Seffers

A breakthrough formula for generating random numbers may be the key to cybersecurity.

They do not necessarily match the hero stereotype, but computer scientists improving methods of generating random numbers just may save the day when it comes to cybersecurity.

Scientists at the University of Texas at Austin have delivered a mathematical revelation that could bring a number of benefits, with improved encryption topping the list. Cybersecurity, of course, depends on encryption, which relies on random data. And although the world is full of randomness—a roll of the dice, a flip of a coin, a lottery drawing—not all randomness is equal. Studied over time, air temperatures and stock market results, for example, produce predictable patterns.

During World War II, American forces repeatedly broke Japanese cipher systems because the passwords used in their encryption algorithms followed predictable patterns. Similarly, in the 1970s, flaws in a widely used algorithm for generating random sequences called into question many scientific results.

The Texas scientists, funded by the National Science Foundation, found a way to generate truly random numbers with less computational effort than other methods. David Zuckerman, a professor at the university’s computer science department, worked with graduate student Eshan Chattopadhyay on the discovery, which the university announced a year ago after the two shocked the theoretical math world with their results.

Generating random numbers is actually difficult for a computer, Zuckerman says, because it executes one instruction after the other in a predetermined way: the opposite of randomness. “For that reason, computers use ad hoc sources of randomness, such as timing intervals between keystrokes. However, such ad hoc random sources are really only a little bit random,” he elaborates. “It is, therefore, essential to improve the quality of the random source. Unfortunately, this is impossible with just one generally low-quality random source.”
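A classic illustration of Zuckerman's point—though not the new method—is the von Neumann trick, which can clean up a single biased source but only under a strong structural assumption: the bits must be independent, with a fixed bias. A general ad hoc source, such as keystroke timings, need not satisfy that assumption, which is why a single low-quality source cannot be fixed in general. A minimal Python sketch, with the biased source simulated:

```python
import random

def von_neumann_extract(bits):
    """Debias a stream of independent, identically biased coin flips.

    Pairs of input bits map: (0,1) -> 0, (1,0) -> 1, (0,0)/(1,1) -> discard.
    This works only because the bits are assumed independent with a
    fixed bias -- an assumption an arbitrary weak source need not meet.
    """
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

# Simulate an "ad hoc" source: bits that are 1 with probability 0.8.
rng = random.Random(42)
biased = [1 if rng.random() < 0.8 else 0 for _ in range(100_000)]
unbiased = von_neumann_extract(biased)

print(sum(biased) / len(biased))      # heavily biased, near 0.8
print(sum(unbiased) / len(unbiased))  # balanced, near 0.5
```

The cost is steep: most input pairs are discarded, and the independence assumption does the real work. Extractors aim to succeed without it.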

The computer scientists’ approach is an improvement upon randomness extractors, which Zuckerman helped pioneer in the 1990s. Simply stated, an extractor generates a truly random sequence of numbers from a weakly random sequence. The breakthrough method uses two weakly random sources and churns out one sequence of numbers that is statistically close to random. 
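To give a flavor of what a two-source extractor does—this is a much older, classical construction by Chor and Goldreich, not the authors' method—one output bit can be computed as the inner product of two independent weak bit-strings modulo 2. The inner-product extractor is only proven to work when both sources have min-entropy above half their length, which is exactly the restriction the new result removes:

```python
import random

def inner_product_extractor(x, y):
    """Classical Chor-Goldreich two-source extractor: one output bit,
    the inner product of the two bit-vectors mod 2. Provably works
    when both independent sources have min-entropy rate above 1/2."""
    assert len(x) == len(y)
    return sum(a & b for a, b in zip(x, y)) % 2

# Two independent weakly random 16-bit sources, simulated here with
# biased bits purely for illustration.
rng = random.Random(7)
def weak_source(n, bias=0.7):
    return [1 if rng.random() < bias else 0 for _ in range(n)]

bits = [inner_product_extractor(weak_source(16), weak_source(16))
        for _ in range(10_000)]
print(sum(bits) / len(bits))  # close to 0.5 despite biased inputs
```

Even though each input bit leans heavily toward 1, the extracted bits come out nearly balanced—the essence of combining two weak sources into one good one.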

The technique could help make voting machines more secure, ensure more accurate scientific experiments and complex simulations, and better protect encryption keys. “You want to keep your secret key perfectly secret. If you choose it randomly, the chance of someone guessing it is going to be small,” Zuckerman says. “If you don’t choose it randomly—if you have some deterministic algorithm [such as] your birthday or your phone number—these are algorithms that a hacker can also try, and it makes it a lot easier for the attacker to learn your secret key or your password. To have any provable results in cryptography, you always need randomness.”
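In everyday practice, Zuckerman's advice translates to drawing keys from a cryptographically strong generator rather than from anything an attacker could enumerate. A minimal sketch using Python's standard-library `secrets` module (the key sizes chosen here are illustrative):

```python
import secrets

# Cryptographically strong: seeded from the operating system's
# entropy pool, suitable for secrets and keys.
key = secrets.token_bytes(32)       # a 256-bit secret key
token = secrets.token_urlsafe(16)   # e.g., a session token

# By contrast, a "key" derived deterministically from a birthday or
# phone number comes from a tiny, guessable space.
weak_key = "1990-07-04".encode()    # trivial for an attacker to enumerate

print(len(key), len(weak_key))      # 32 bytes vs. 10 bytes
```

The point is not the key length but the search space: 32 random bytes give an attacker roughly 2^256 possibilities, while all birthdays in a century number only a few tens of thousands.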

In theory, the method can improve communication using weakly random sequences. “With our result, if you have two parties that have independent weak sources of randomness, you can still do what you want. Usually, if you just use these low-quality random sources, they’ll fail. You need to process these before you use them directly,” Zuckerman explains.

Previous versions of randomness extractors were less practical because they either required that one of the two source sequences be truly random or that both source sequences be close to truly random. By allowing two weakly random sequences, the new method sidesteps both of those restrictions, according to the university’s written announcement about the breakthrough.

“The quality of the two-source extractor is measured by the quality of the randomness that’s required to make it work. It’s easy to make it work if you use high-quality randomness. The challenge is to make it work even if the quality is very low,” Zuckerman says.

The quality of randomness is measured by something called min-entropy. The lower the min-entropy of an input source, the lower the quality of its randomness and the harder it is to extract high-quality randomness from it. A lower min-entropy requirement therefore means a better extractor. For years, no one could push the requirement—expressed as a fraction of the source length—below .499. “It was known how to build a two-source extractor if this number was bigger than one-half, and people were kind of stuck there. Actually, a famous mathematician had improved it to .499. Our results improved it to anything above zero,” Zuckerman adds, citing .000001 as an example.
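Min-entropy has a simple definition: the negative base-2 logarithm of the probability of the source's single most likely outcome. A fair coin flip carries one full bit of min-entropy; a heavily biased coin carries much less. A short worked example:

```python
import math

def min_entropy(probs):
    """Min-entropy of a distribution: -log2 of the probability of the
    most likely outcome. A source over n-bit strings has min-entropy k
    if no single output occurs with probability greater than 2**-k."""
    return -math.log2(max(probs))

# A fair coin: one full bit of min-entropy per flip.
print(min_entropy([0.5, 0.5]))   # 1.0

# A coin that lands heads 90% of the time: far less usable randomness.
print(min_entropy([0.9, 0.1]))   # about 0.152 bits
```

For an extractor, the relevant figure is this quantity divided by the source length; the .499 and one-half thresholds above refer to that fraction.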

He toiled over this problem off and on for more than 20 years. “Even for my dissertation I was interested in using models of randomness,” he states.

The follow-up work to his dissertation involved studying one source of weak randomness that included a very small amount of high-quality randomness. “A small amount of high-quality randomness enables you to extract a lot of high-quality randomness from an otherwise weak source,” Zuckerman reports. “We were able to make progress, but still, we were stuck on this problem of two sources.”

The computer science professor and others continue improving the process. Researchers at Tel Aviv University, for instance, have simplified the method. “These are theoretical results. They’re mathematical theorems. We published it, and now other people are free to improve it. This is the way progress in research happens,” Zuckerman offers. 

While the method has been hailed as a major breakthrough, he indicates that he hopes to see further improvements. “The biggest thing is to get an even simpler construction,” Zuckerman says. He also would like to shrink the error rate, which he describes as polynomially small. “In cryptography, we like errors that are smaller than any polynomial,” he adds. “Even in the theoretical sense right now, our error is too large. It would be interesting to improve it.”

Zuckerman predicts that it may be years before the mathematical method is put to practical use. Its implications could be wide-ranging, he suggests. “It’s still largely basic research, but randomness is extremely useful to secure systems and do cryptography and secure electronic commerce and in a whole variety of other areas. Improvements here, hopefully, will lead to improvements elsewhere,” he offers.
