Making Sense of Intelligent Bots
The Defense Information Systems Agency is initially applying robotic process automation, or RPA, to several of its processes in finance, public affairs, circuit management, security authorization and procurement, with the intent of building a robust RPA platform for greater use across the agency. The automated software robots, or bots, will perform repetitive, rules-based processes and considerably reduce the workload of humans, the director of DISA’s Emerging Technology Directorate, Stephen Wallace, shares.
In developing automated computer solutions, one of the understandable challenges that the directorate’s RPA team is facing is how to integrate the bots into established networks, Wallace says. “Some of the challenge has been adapting some of the existing infrastructure to work with RPA, getting the platform installed onto the endpoints where we are doing the development,” he states. “When you’re bringing new technology into the organization you face some of these fairly typical things.”
A larger issue, however, is the evolution of RPA capabilities into more intelligent bots fitted with decision-making abilities, stemming from artificial intelligence and machine-learning components added to the bots.
“One of the other challenges that the [Defense] Department is facing right now, not just us, is more from the unattended bots,” he explains. “We are deploying—and most of the rest of the department is working with—these attended bots, where the human is actually sitting there more or less in control of the bot. But in order to fully realize the benefits of RPA, we are going to need to move to unattended bots.”
That kind of move brings forth larger challenges, such as how to effectively perform the identity, credentialing and access management, or ICAM, of the unattended bots. “[It] creates an identity problem,” Wallace emphasizes. “And we have to figure out how we actually give identities to these bots, and how do we deal with that. There are some natural concerns that come with allowing these bots to run on their own. I’m confident that we will make it through it, but these are the kind of discussions that need to occur in order to make folks comfortable with these capabilities.”
Wallace sees the application of an enterprise ICAM solution as an initial answer, but points to the mindset shift that has to happen in trusting intelligent robotic processes.
“It is really about credentialing,” he offers. “We all have our common access cards, our CACs, which have our credentials on them. The technology completely exists, so it is actually not really a technology problem that we have. It is more of a policy and a procedure issue that we have to work through.”
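The credentialing Wallace describes, giving a bot a signed identity and set of privileges much as a CAC binds a human to theirs, can be sketched in a few lines. The following Python is purely illustrative (the bot names, scopes, and signing key are hypothetical, not any actual DISA system): a bot's identity and its allowed scopes are signed together into a short-lived credential, so neither can be altered without invalidating it.

```python
import hashlib
import hmac
import json
import time
from dataclasses import dataclass

# Hypothetical signing key; in practice this would come from an HSM or vault.
SECRET_KEY = b"demo-signing-key"

@dataclass(frozen=True)
class BotCredential:
    """A short-lived, scoped credential for an unattended bot."""
    bot_id: str
    scopes: tuple        # the only actions this bot may perform
    expires_at: float    # epoch seconds; forces periodic re-issuance
    signature: str

def _payload(bot_id: str, scopes: tuple, expires_at: float) -> bytes:
    return json.dumps(
        {"bot": bot_id, "scopes": list(scopes), "exp": expires_at},
        sort_keys=True,
    ).encode()

def issue_credential(bot_id: str, scopes: tuple, ttl_seconds: int = 900) -> BotCredential:
    """Sign the bot's identity and scopes together into one credential."""
    expires_at = time.time() + ttl_seconds
    sig = hmac.new(SECRET_KEY, _payload(bot_id, scopes, expires_at),
                   hashlib.sha256).hexdigest()
    return BotCredential(bot_id, scopes, expires_at, sig)

def verify_credential(cred: BotCredential) -> bool:
    """Check the signature and the expiry before letting the bot act."""
    expected = hmac.new(SECRET_KEY,
                        _payload(cred.bot_id, cred.scopes, cred.expires_at),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred.signature) and time.time() < cred.expires_at

cred = issue_credential("invoice-bot-01", ("finance:read", "finance:submit"))
print(verify_credential(cred))  # True while the credential is fresh
```

The short time-to-live is the point: an unattended bot must keep proving who it is, which is one way to address the "identity problem" Wallace raises.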
The employment of a zero-trust architecture would also aid the use of smart bots and bolster humans’ confidence in them, he continues. “A least-privileged or zero-trust approach would make sure that a bot has the absolute fewest rights that it needs to do its job,” Wallace clarifies. “It would have just enough rights to do what it is supposed to do, but nothing further, so that if something bad does happen the bot is contained, and it doesn’t leave the outside boundaries where you allow the bot to operate.”
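The containment Wallace describes amounts to a deny-by-default authorization gate: every action a bot attempts is checked against an explicit allow-list, and anything not granted is refused. A minimal sketch, again with hypothetical bot names and permission scopes:

```python
# Hypothetical least-privilege registry: each bot gets only the scopes
# it needs for its job; every other action is denied by default.
BOT_PERMISSIONS = {
    "invoice-bot-01": {"finance:read", "finance:submit"},
    "pubaffairs-bot-02": {"press:draft"},
}

def is_allowed(bot_id: str, action: str) -> bool:
    """Zero-trust style check: no implicit rights, deny by default."""
    return action in BOT_PERMISSIONS.get(bot_id, set())

def perform(bot_id: str, action: str) -> str:
    if not is_allowed(bot_id, action):
        # The bot stays contained: it cannot step outside its boundary.
        raise PermissionError(f"{bot_id} may not perform {action}")
    return f"{bot_id} performed {action}"

print(perform("invoice-bot-01", "finance:read"))
# perform("invoice-bot-01", "security:approve") would raise PermissionError
```

Because the default is denial rather than permission, a misbehaving bot fails closed, which is the property that makes unattended operation defensible.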
And a clear understanding of those boundaries is necessary, he adds. “It is the same principles that we apply to humans that we would need to be applying to the bots. The general principles of ICAM and the very least amount of privileges that they need to do their job, to make sure that they can still do their job and certainly be effective, but they can’t move into areas where they shouldn’t be.”
Since the agency is in the early stages of employing RPA, DISA has time to consider how to appropriately apply artificial intelligence and machine learning to future bots. “We haven’t yet started to dabble in the AI implementations of RPA,” Wallace reports.