
  • Senior Airman Daniel M. Davis, USAF, 9th Communications Squadron information system security officer, looks at a computer in the cybersecurity office on Beale Air Force Base. Cybersecurity airmen must manage more than 1,100 controls to maintain the risk management framework. Credit: U.S. Air Force photo by Airman Jason W. Cochran

Zeroizing Network Shrinks Attack Surfaces

The Cyber Edge
October 1, 2020
By Joseph Mitola III

With pure dataflow, hackers can be defeated.

Users need to transition all networked computing from the commercial central processing unit addiction to pure dataflow for architecturally safe voting machines, online banking, websites, electric power grids, tactical radios and nuclear bombs. Systems engineering pure dataflow into communications and electronic systems can protect them. The solutions to this challenge are in the users’ hands but are slipping through their fingers. Instead, they should grab the opportunity to zeroize network attack surfaces.

Vulnerabilities to network attacks include accidentally visiting malicious websites, opening attachments to spear-phishing emails and a range of zero-day attacks promulgated into open and closed networks. Users, who are commercial off-the-shelf product addicts, enable many of these vulnerabilities. While purchasing systems commercially may lower acquisition costs, it exposes an organization’s mission systems to network-based exploitation by adversaries. If customers demand dataflow computing networks, the billion-chips-per-month hardware merchants will adjust their production runs to deliver the chips and software tools needed to defeat networked cyber threats once and for all.

To do this, users first need to understand that network vulnerabilities originate in the mathematical structure of the hardware. Rather than the current John von Neumann central processing unit (CPU)-based commercial architecture, the much used but little understood Jack Dennis dataflow architecture alternative is required. This transition is not difficult technically, but it is culturally challenging.

The vast majority of cyber vulnerabilities originate in the mathematical structure of the CPU chips, which are Turing-equivalent. They can compute anything imaginable, including processes that return errors, crash systems and support malware, and their functions are partial because they are not guaranteed to return a result. This is the “halting problem,” and many users experience it in their cellphones, laptops, smart homes and military communications systems because all of these feature embedded Turing-equivalent computing.

The military communications and electronics systems on which warfighters depend are Turing-equivalent because of the commercial CPUs, operating systems and layers of software required to deliver warfighting capabilities, and their chips employ the von Neumann architecture. Computing is performed in a set of CPU registers that operate on instructions and data stored in random access memory.

At every CPU clock cycle, instructions are loaded from random access memory into the registers, where they operate on data that is then stored back into memory. Encrypted data must be decrypted before the CPU processes it. Malware can modify CPU instructions in the registers. Consequently, data can become instructions, creating more malware.
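The shared-memory hazard described above can be seen in a toy register machine. The following Python sketch is illustrative only: its opcodes and memory layout are invented, but it shows how a single self-modifying write turns a data cell into a live instruction.

```python
# Toy von Neumann machine: instructions and data share one memory list,
# so a write into the instruction region turns data into executable code.
# Opcodes (0 = HALT, 1 = INC) and the memory layout are invented here.

def run(memory, max_steps=10):
    """Interpret memory as (opcode, operand) pairs starting at address 0."""
    pc = 0
    for _ in range(max_steps):
        op, arg = memory[pc], memory[pc + 1]
        if op == 0:                # HALT
            break
        elif op == 1:              # INC: memory[arg] += 1
            memory[arg] += 1
        pc += 2
    return memory

# Benign program: increment the data cell at address 3, then halt.
print(run([1, 3, 0, 99]))         # [1, 3, 0, 100]

# Self-modifying program: the INC at address 0 overwrites the HALT
# opcode at address 2, so the machine executes what was data as code.
print(run([1, 2, 0, 5, 0, 0]))    # [1, 2, 1, 5, 0, 1]
```

The second run is the whole problem in miniature: nothing in the machine distinguishes the overwritten cell from a legitimate instruction.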

Most instruction set architectures (ISAs) have features intended to prevent data from being inappropriately modified. But operating systems, compilers and scripting languages need to change text into binary instructions. As a result, when hackers discover loopholes in the ISA or operating system, they can subvert system behavior.

Relatively few CPU registers are reused trillions of times per second to execute the instructions of the software layers. The operating system controls the reuse of registers among operating system, security, networking and applications processes.

Register reuse is at the core of the network cybersecurity problem. It has a mathematical structure by which zero-day attacks always can explode faster than patches can be developed and applied. The values of the registers change according to the ISA with every tick of the CPU clock to form what mathematicians call a recursively enumerable (RE) sequence. Given the register values {R} of the CPU at time t, including input/output (IO) and direct memory access (DMA) state, the values {R} at time t+1 are determined.

The ISA specifies {R} unambiguously. Are the current values {R} in the CPU registers safe, or have they moved into a potentially unsafe state caused by programming errors, IO values or malicious code? A monitor function, which is software or hardware that observes the values of the CPU registers to determine safety, tells users whether the registers are safe. A monitor function risk ({R}, t) warns if the values of registers {R} are at risk at time t. If risk ({R}, t) = 1, then a truncation monitor function will halt processing.
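A minimal sketch of such a monitor in Python, using the article's risk({R}, t) notation. The single safety predicate here (a stack-pointer floor) and the trace format are invented for illustration; real monitors check many conditions.

```python
# Hypothetical truncation monitor: risk(R, t) returns 1 when the
# register snapshot R is unsafe at time t, and processing is halted.
# The one safety predicate (a stack-pointer floor) is an invented example.

STACK_FLOOR = 0x1000   # assumed lowest safe stack-pointer value

def risk(R, t):
    """Return 1 if register snapshot R is outside its safe region."""
    return 1 if R.get("sp", STACK_FLOOR) < STACK_FLOOR else 0

def run_with_monitor(trace):
    """Scan a recorded register trace; truncate at the first risky state."""
    for t, R in enumerate(trace):
        if risk(R, t) == 1:
            return ("halted", t)   # truncation monitor fires
    return ("ok", len(trace))

print(run_with_monitor([{"sp": 0x8000}, {"sp": 0x0800}]))  # ('halted', 1)
```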

Secure operating systems for multicore CPUs can use one CPU core to monitor another, implementing the monitor function risk ({R}, t). Termination may be too harsh, so monitor functions may look for a series of conditions before taking action. If users had a complete set of monitor functions, they could fully protect the CPU. Many cyber defenders and software architects assume such a set exists, but it does not: no set of monitor functions can completely protect a von Neumann CPU.

For example, a complete set of monitor functions would have to cover the set of all subsets of the monitor function sequences. In 1891, Georg Cantor proved that this power set of the integers (hence of any RE sequence) cannot be enumerated because it is so large it is “uncountable.”

Cantor’s diagonalization proof also establishes that no complete enumeration of monitor functions can exist. Thus, while defense mechanisms can be enumerated, the policies and attacks they must cover cannot. Consequently, unenforceable policies and unpreventable attacks must exist, and it is mathematically impossible for users to completely cyber-protect a commercial CPU with hardware or software.
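Cantor's construction can be restated directly over monitors. In this Python sketch, a small finite list stands in for a countable enumeration: given any list of 0/1-valued monitors, the diagonal monitor differs from the n-th one at time n, so no enumeration can contain it.

```python
# Cantor's diagonal argument over monitor functions: given ANY countable
# enumeration of 0/1-valued monitors, build one the enumeration misses.

def diagonal(monitors):
    """Return a monitor that differs from monitors[n] at time step n."""
    return lambda n: 1 - monitors[n](n)

# A finite stand-in for a countable enumeration of monitors.
enumerated = [lambda n: 0, lambda n: n % 2, lambda n: 1]
d = diagonal(enumerated)

# d disagrees with the n-th monitor at time n, so it appears nowhere
# in the list: the enumeration was necessarily incomplete.
assert all(d(n) != enumerated[n](n) for n in range(len(enumerated)))
```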

However, it appears a pure dataflow computing architecture could be a safe alternative. In 2014, a pure dataflow website was created. It has no CPU, operating system or runtime software. The server is a pure dataflow machine implemented in a field-programmable gate array (FPGA) with no CPU cores.

The website has been under SYN-ACK attacks, protocol stack attacks, malware insertion attempts and distributed denial of service attacks from China, Russia, Iran, TOR, bots and myriad other sources since 2014. No remote entity has been able to take over the server or change the data in any way. It is impervious to network attacks because building a website on a pure dataflow computer zeroizes its network attack surfaces.

Zeroizing network attack surfaces began with Jack Dennis who, with his team in the 1960s, offered the dataflow architecture as an alternative to von Neumann’s shared CPU architecture in Project MAC’s study of computing for the first major aerospace defense environment. In dataflow computing, each element of data has a dedicated register or block of memory. Logic that processes data occurs between these blocks of data. Alternating data-logic-data-logic-data establishes a pipeline where data flows from one block of memory to another, input to output, with results computed in the logic between the data blocks.
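The data-logic-data pipeline can be sketched in Python using generator stages, with each stage's state confined to its own block; the two transforms are invented placeholders, and real dataflow machines wire such stages into hardware rather than software.

```python
# Dataflow pipeline sketch: data flows block to block through the logic
# wired between blocks; there is no shared register file and no stored
# program. The two transforms below are invented placeholders.

def stage(logic, upstream):
    """Yield logic(x) for each item flowing out of the upstream block."""
    for x in upstream:
        yield logic(x)

def pipeline(source, *logics):
    flow = iter(source)
    for logic in logics:       # data -> logic -> data -> logic -> data
        flow = stage(logic, flow)
    return list(flow)

print(pipeline([1, 2, 3], lambda x: x + 1, lambda x: x * 2))  # [4, 6, 8]
```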

With dataflow, all of the system's application logic is in the pipeline, so processing is inherently parallel. A CPU requires two or three clock cycles to move data from memory to the registers, operate on it, store the results and ready itself for the next chunk of data. If there are 100 processing steps in a program, the first element of data is not completely processed until after 100 load-process-store cycles, or notionally 300 clock cycles.

A pipelined dataflow architecture causes all instructions between data blocks to execute simultaneously, moving data through at the clock rate without memory store/retrieve overhead, providing high throughput and low time delay. The data can flow in and out at the clock rate, so the throughput rate can be the clock rate. The time delay is directly proportional to processing complexity: 100 logic operations would result in a time delay of 100 clock cycles, but after that initial delay, subsequent data items are output at the clock rate. Compared to the CPU architecture, the throughput speedup is proportional to the complexity of the app. If complexity is 100, speedup is notionally 100 times; if complexity is 1,000, speedup is notionally 1,000 times.
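The latency and throughput arithmetic above can be checked with a back-of-the-envelope model; the three-cycles-per-step figure comes from the notional load-process-store count in the text, and everything else is idealized.

```python
# Notional cycle counts for the comparison in the text: a CPU spends
# about three cycles (load, process, store) per step per item, while a
# full pipeline emits one result per clock after an initial fill delay.

def cpu_cycles(n_items, complexity, cycles_per_step=3):
    """Total cycles for a CPU to run `complexity` steps on each item."""
    return n_items * complexity * cycles_per_step

def pipeline_cycles(n_items, complexity):
    """First result after `complexity` cycles, then one result per cycle."""
    return complexity + (n_items - 1)

items, complexity = 1000, 100
speedup = cpu_cycles(items, complexity) / pipeline_cycles(items, complexity)
print(round(speedup))   # under this model, speedup grows with complexity
```

For 1,000 items at complexity 100, the model gives 300,000 CPU cycles versus 1,099 pipeline cycles, a speedup of roughly 273 times; as the item count grows, the speedup approaches cycles-per-step times complexity, which is proportional to complexity as the text states.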

Dataflow computing is safe, while CPU-based computing is not. Each memory block of a dataflow pipeline has its own domain-specific non-Turing monitor function, so any risky data can be removed from the pipeline and routed to fault management rather than terminating the processing of the other data.
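A sketch of such a per-block monitor, assuming a simple range check as the domain-specific predicate: items that fail are diverted to fault management while the rest of the flow continues, in contrast to the truncation monitor that halts a CPU.

```python
# Per-block monitor sketch: a domain-specific (non-Turing) range check
# guards one memory block; risky items are routed to fault management
# instead of halting the pipeline. The 0..99 safe range is invented.

def guarded_stage(logic, is_safe, upstream, faults):
    """Apply logic to safe items; divert unsafe items to the fault list."""
    for x in upstream:
        if is_safe(x):
            yield logic(x)
        else:
            faults.append(x)   # divert risky data; keep the flow alive

faults = []
safe_out = list(guarded_stage(lambda x: x * 2,
                              lambda x: 0 <= x < 100,
                              [5, -1, 42, 999],
                              faults))
print(safe_out, faults)        # [10, 84] [-1, 999]
```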

Much of the computing in large clouds such as Amazon Web Services, or AWS, actually is performed in FPGA-based accelerators that are pipelined dataflow machines, because dataflow delivers the highest throughput per watt versus CPUs. The cybersecurity life-cycle cost of the FPGA dataflow machine is basically zero because, in addition to speedup, a pure dataflow machine with built-in monitor functions zeroizes network attack surfaces.

The life-cycle cost of CPU-based machines can be enormous. What did the Stuxnet malware cost Iran as its centrifuges broke? What did the Office of Personnel Management breach cost the United States? The costs of embedded CPUs in future network attacks could be measured in vast amounts of lives and treasure.

Dataflow machines can be systems engineered to zeroize network attack surfaces, avoiding cyber costs while providing better throughput at lower power and with better baked-in cyber protection than any other computing architecture. A pure dataflow web server in an FPGA can respond to a transmission control protocol synchronize (SYN) packet with an acknowledgment packet by executing the application embodied in its hardware.

At software development and information technology operations (DevOps) time, microservices software defines the application using a conventional language such as C or a block-diagram programming language such as Betty Blocks, the National Instruments RF Network on Chip block-diagram language, MATLAB Simulink or other commercial FPGA languages from Synopsys, Mentor Graphics, Annapolis Microsystems or The Trusted Computing Company (TTCC).

The code is cross-compiled into very high-density logic and then into bitmaps for the FPGA. Each variable has its dedicated memory that is in random access memory hardware, but it is not randomly accessed. Instead, each memory block is allocated to a specific variable and can be accessed only by the logic connected to that variable.
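The one-variable-per-block discipline can be modeled in software. This Python sketch is an invented illustration, not any vendor's product: the block rejects any access that does not come from the one logic stage wired to it, so there is no general random-access path.

```python
# Dedicated-memory sketch: each variable lives in its own block, and
# only the single logic stage wired to that block may touch it. The
# class and its ownership check are invented for illustration.

class DedicatedBlock:
    def __init__(self, wired_logic):
        self._wired = wired_logic   # the only stage allowed access
        self._value = None

    def write(self, logic, value):
        if logic is not self._wired:
            raise PermissionError("block is not wired to this logic")
        self._value = value

    def read(self, logic):
        if logic is not self._wired:
            raise PermissionError("block is not wired to this logic")
        return self._value

adder = object()                    # stand-in for a logic stage
block = DedicatedBlock(adder)
block.write(adder, 7)
print(block.read(adder))            # 7; any other caller raises
```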

For example, TTCC’s pure dataflow product encrypts random access memory, decrypting data as it is needed in the applications plane, while its control plane has a separate dataflow machine for uploading the applications plane in a safe, trustworthy way.

In the 1960s, the von Neumann CPU advocates accurately argued that the cost of a pipelined dataflow machine is proportional to process complexity, so a dataflow machine of complexity 100 could cost as much as 100 times that of a von Neumann machine. At that time, a CPU could cost $1 million. A pipelined dataflow machine costing $100 million was not affordable. Today, however, CPUs cost fractions of a cent, while FPGAs implement dataflow machines for a few dollars of acquisition cost. The time has come to move to a more efficient and secure computing environment.


Joseph Mitola III is the Chief Technologist at ENSCO where he supports the U.S. Defense Department as chief cyber architect for a Systems Engineering program.​



Share Your Thoughts:

While billions of dollars are going to bail cyber intrusion inflows, Mr. Mitola points to solutions to plug the leaks. Even better is his ability to give a sweeping overview that Non-Geeks can understand. Bravo!

I do not fully understand this article, but my understanding is that for dataflow architecture to be secure, the data monitors need to be built-in. What happens when more people become familiar with dataflow architecture? Will we find ourselves in a similar security situation that we find ourselves today with John von Neumann CPU computing? Is this architecture actually inherently more secure or is it just benefiting from ignorance of its existence?
