Learning Lessons at the Edge

A technologist studying a confluence of technologies presents an interesting view of the future edge computing environment, if we pay attention.
Eng Lim Goh, a senior vice president and chief technology officer, artificial intelligence, at Hewlett Packard Enterprise, addresses the Rocky Mountain Cyberspace Symposium 2022.

The computing edge, where devices operate, is getting more and more intelligent. Several companies around the world are pushing the boundaries of data generation and processing, offering a view into future possibilities, said Eng Lim Goh. And if U.S. technologists pay attention, we can bridge some of the capability gaps.

Goh, a senior vice president and chief technology officer, artificial intelligence, at Hewlett Packard Enterprise, spoke yesterday at the AFCEA Rocky Mountain Chapter’s annual Cyberspace Symposium held in Colorado Springs February 21-24.

“We have the cloud, then we have the edge where all the devices are, and how I will define the edge, just for today, is the place where the data first lands,” he said. “That's the edge. Let’s keep it simple. The edge could be a drone or it could be a hospital MRI scanner, where a human comes in, no data yet, but when you do the scan, the data first lands in that MRI scanner where the computer is, and that's the edge. And that edge is necessarily getting more and more intelligent.”

When working with the Swiss-Swedish company ABB, Goh found that the firm had created a device that listens to any rotating machine, such as motors, pumps, gears or fans.

Employing the device decades ago, they started collecting data on the behavior of rotating machines. “When an engine or a gear or a fan or a motor is turning, it should sound a certain way if it is working correctly,” Goh shared. “If it's not working correctly, it should sound somehow different. ABB began collecting all of this characteristic data decades ago and now today they are listening to all the rotating machines out there in the factories of the world and they are able to tell if a motor is about to fail or if the motor needs servicing soon before it fails.”

ABB collected the rotating machine data with a combination of Wi-Fi and a data concentrator.

“Through Bluetooth low energy, the data is then sent to a Wi-Fi router,” he explained. “And if you need a Wi-Fi router that can withstand 150 mile per hour winds, you use a more robust one. Then the data is sent to a concentrator that can collect 30,000 of these readings, then through a wide area network and sent to the cloud.”
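The hop-by-hop path Goh describes can be sketched in a few lines. This is a hypothetical illustration, not ABB's actual software: the class and method names are invented, and only the batch size of roughly 30,000 readings per wide-area upload comes from the talk.

```python
from dataclasses import dataclass, field

@dataclass
class Concentrator:
    """Illustrative stand-in for the data concentrator in Goh's pipeline:
    readings arrive from BLE sensors via a Wi-Fi router, get buffered,
    and are flushed to the cloud over a wide area network in batches."""
    batch_size: int = 30_000          # readings per WAN upload, per the talk
    buffer: list = field(default_factory=list)
    uploads: int = 0                  # completed batches sent to the cloud

    def receive(self, reading: float) -> None:
        """Accept one vibration/acoustic reading relayed via Wi-Fi."""
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        """Ship the buffered batch over the WAN (stubbed out here)."""
        self.uploads += 1
        self.buffer.clear()

# Simulate 65,000 sensor readings arriving at the concentrator:
# two full batches go to the cloud, 5,000 readings remain buffered.
hub = Concentrator()
for i in range(65_000):
    hub.receive(float(i))
```

The buffering step is what lets thousands of low-power sensors share one expensive wide-area uplink.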

Goh pointed out the company was successful because it realized early on the value of data.  

An autonomous car company in Sweden has found a way to easily upload massive amounts of data. The company is using autonomous training cars—not the commercial cars that consumers use—that are packed full of computer equipment.

“They drive hundreds of these cars, collecting data on the road and every time they go out, they collect 10 terabytes of data,” Goh said. “Then they drive the car up to the parking lot of the data center, pull out a bunch of disk packs, walk up to the data center and upload it. There’s no wired connection or a wireless connection. And it’s 100 cars and 10 terabytes of data each, so we're talking a lot of data to move. So, one of the lessons learned here is that we are still not quite there yet in the United States when it comes to uploading data.”
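A back-of-the-envelope calculation shows why the disk packs beat the network. The fleet size and per-car volume come from Goh's remarks; the 1 Gbit/s uplink is an assumption chosen purely for illustration.

```python
# Why a Swedish autonomous-car firm drives disk packs to the data center:
# 100 cars x 10 TB per outing vs. an assumed 1 Gbit/s network uplink.
TB = 10**12                        # terabyte (decimal)

cars = 100
data_per_car = 10 * TB             # bytes collected per outing, per the talk
link_bps = 10**9                   # assumed 1 Gbit/s uplink (illustrative)

total_bytes = cars * data_per_car  # 1 petabyte per fleet outing
seconds = total_bytes * 8 / link_bps
days = seconds / 86_400
print(f"{total_bytes / 10**15:.0f} PB per outing; "
      f"~{days:.0f} days to upload at 1 Gbit/s")
```

At roughly three months per fleet outing over such a link, walking the disks into the building is the faster channel.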

The way that the company collects data on the training cars is sophisticated, through the use of advanced sensors.

“On [their] car, there are lots of these sensors, and the sensors up on the roof of the car are not meant to be on commercial cars,” Goh clarified. “They are essentially called ground-truth sensors. These are extra sensors put in to check [operations] at a high resolution, to verify that the sensors that will go into autonomous commercial cars will be accurate. These training cars used to collect data to train machines will typically collect a lot more data because you want high resolution, ground-truth data to check your commercial sensors before you deploy.”

He noted that this training environment requires the handling of a lot more data in that phase to create smarter machines in the future.

Another possibility for effective data movement in the future could come from charging an electric car, Goh continued. “The car is sending electrons to the battery. You can collect electrons, which is data, and while you are charging you could be uploading data.”

Data collection will require trade-offs, he stressed. “If you have a device like a drone flying out there, can you afford to have enough battery power or other kinds of power to put enough compute equipment on that drone so that it can process some of the data that it collects while flying before sending everything back to the cloud,” Goh said. “Sometimes [you] can't afford to send all the raw data back because of communications limitations.”

While working on a project in Australia, Goh saw an “extreme” trade-off issue first hand. About five years ago, he was called to consult there for a large radio telescope project. “They were thinking, ‘If we want to build a dish that is 1 kilometer in diameter, we have to think about a whole kilometer across and that is going to be too much work,’” he observed. “So, they said, ‘OK, we’re going to do something different instead. We are going to use 2,000 smaller 15-meter radio telescopes, putting thousands of them in a square kilometer and treat it as though it was one big telescope.’”

Goh warned the officials that a problem might come with the data integration, when pulling information from 2,000 radio telescopes and trying to build one data picture out of the mass.

“I was naive five years ago,” he laughed. “I asked then what the data rate was going to be from 2,000 radio telescopes when they all come online. They said, ‘It is only going to be about 1 exabyte a day.’ I fell off my chair at the time. So, it is not just about collection, it is how you are going to integrate with the computer equipment fast enough to pull in all that data.”
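The chair-falling moment is easy to reproduce with a little arithmetic: an exabyte a day implies a sustained ingest rate that the integration layer must absorb continuously. Decimal units are assumed below.

```python
# Sustained ingest rate implied by "1 exabyte a day" from the telescope array.
EB = 10**18                        # exabyte (decimal), in bytes

bytes_per_day = 1 * EB
bits_per_second = bytes_per_day * 8 / 86_400
print(f"Sustained rate: {bits_per_second / 10**12:.1f} Tbit/s")
```

That works out to roughly 93 terabits per second, around the clock, which is why the processing had to move out to the telescopes themselves.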

Goh and the Australian research group decided to push some of the intelligence out to the edge, to the radio telescopes, so that when the data first lands, it does some of the processing and the users decide what data to send back to the cloud.

“You are starting to see the edge getting a little bit more intelligent,” the technologist said. “It is able to discern what data you will need and not send back all that raw data.”

He asked the group how much data they would prefer to send back if they were able to put enough processing at the edge, “and they said they can only afford to send back 1%,” Goh noted. “I said, ‘You mean you will throw away 99% of the data.’”

The researchers had already spent a considerable amount of money for the telescope array facility and felt that although they could take only 1% of the data, it was worth it.

“I reminded them that when it comes to data, what is junk today might be gold tomorrow,” Goh stated. “This is another lesson. You will be in a dilemma, a dilemma that you will have to work out. In this case, they said, ‘We will immediately send back the 1%, and whenever we have the bandwidth, we’ll try to slowly trickle in the remaining 99% and eventually we'll send everything back. But we’ll send the most important 1% first.’”
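The researchers' compromise, as Goh recounts it, amounts to a priority queue at the edge: rank what the telescopes capture, transmit the most important 1% immediately, and trickle the rest back as bandwidth allows. The sketch below is a hypothetical illustration; the actual selection criteria of the array's pipeline are not described in the talk.

```python
def prioritize(readings, score, keep_fraction=0.01):
    """Split readings into (send_now, trickle_later) by descending score.

    `score` is a stand-in for whatever importance metric the real
    pipeline would use; `keep_fraction` mirrors the 1% Goh cites.
    """
    ranked = sorted(readings, key=score, reverse=True)
    cut = max(1, int(len(ranked) * keep_fraction))
    return ranked[:cut], ranked[cut:]

# 1,000 dummy observations scored by their own value: the top 10 (1%)
# go back to the cloud immediately, the other 990 wait for spare bandwidth.
readings = list(range(1000))
send_now, trickle_later = prioritize(readings, score=lambda r: r)
```

Nothing is thrown away in this scheme, which matches the group's eventual plan: the 99% is deferred, not deleted.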

He advised that as data collection capabilities at the edge grow considerably, users will have to make this trade-off decision until bandwidth for data can match collection levels.