
I am a fan and long-time follower of AI and ML. As a cyber security professional, the line in this article, "The accuracy and reliability of ML is completely dependent on the data that it trains or learns from," gave me pause about the integrity of ML programs. How do we prevent being fooled by an attack that intentionally poisons the training data? This is certainly a factor that requires careful thought to ensure the effectiveness of the generated programs. I understand we are on the cusp regarding AI and ML, and I am interested to understand how the DoD is approaching the risk factors and potential threats at these early stages.
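To make the worry concrete, here is a minimal, hypothetical sketch (synthetic data and a toy nearest-centroid classifier, both my own illustration, not anything from the article) of how an attacker who can inject mislabeled training points can degrade a model that was otherwise accurate:

```python
import random

random.seed(0)

# Synthetic 1-D training data: class 0 clusters near 0.0, class 1 near 5.0.
clean = [(random.gauss(0.0, 0.5), 0) for _ in range(50)] + \
        [(random.gauss(5.0, 0.5), 1) for _ in range(50)]
test  = [(random.gauss(0.0, 0.5), 0) for _ in range(50)] + \
        [(random.gauss(5.0, 0.5), 1) for _ in range(50)]

def centroid_classifier(data):
    """Fit a nearest-centroid classifier; return its predict function."""
    c0 = sum(x for x, y in data if y == 0) / sum(1 for _, y in data if y == 0)
    c1 = sum(x for x, y in data if y == 1) / sum(1 for _, y in data if y == 1)
    return lambda x: 0 if abs(x - c0) < abs(x - c1) else 1

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

clean_model = centroid_classifier(clean)

# Poisoning attack: inject points far from both clusters but mislabeled as
# class 0, dragging the class-0 centroid past the class-1 centroid.
poison = [(random.gauss(12.0, 0.5), 0) for _ in range(60)]
poisoned_model = centroid_classifier(clean + poison)

print("clean accuracy:   ", accuracy(clean_model, test))
print("poisoned accuracy:", accuracy(poisoned_model, test))
```

On this toy data the clean model separates the classes almost perfectly, while the poisoned model misclassifies essentially all of class 0, even though the attacker never touched the learning code, only the data it trained on. Defenses in the literature go by names like data sanitization and robust statistics, but the example shows why data provenance matters as much as the algorithm.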