Lewis makes a good point about the definition of "Big Data". Big is a relative term. Looking at it temporally, what was big even 5-10 years ago is small by today's standards (remember "No one will ever need more than 640K"?). But there are other relative aspects as well, and those deal with the types of problems one is attempting to address with the data. A graph with a few thousand nodes is not all that big size-wise, but performing reasoning on that graph is very big computationally.
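To make that concrete, here is a rough, illustrative Python sketch (the node count and the complete-graph simplification are mine, purely for the arithmetic): a graph that is trivial to store can still blow up when you try to reason over it.

from math import perm

nodes = 5_000  # a "small" graph by storage standards (illustrative size)

# Storing the edge list of even a fully connected graph this size is modest:
edges = nodes * (nodes - 1) // 2
print(f"edges to store: {edges:,}")  # ~12.5 million, fits easily in memory

# But naive transitive-closure / relational inference is roughly O(n^3):
ops = nodes ** 3
print(f"operations for naive n^3 closure: {ops:,}")  # 125 billion steps

# And enumerating simple paths between two nodes (assuming a complete graph
# here for simplicity) grows factorially with the number of intermediate hops:
paths = sum(perm(nodes - 2, k) for k in range(8))  # up to 7 intermediates
print(f"simple paths with <= 7 intermediate nodes: {paths:,}")

The storage numbers stay small while the reasoning numbers become intractable almost immediately, which is exactly the sense in which a "small" graph is computationally big.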

Participating in the NIST Big Data Public Working Group, I know we struggled with and debated the definition endlessly. Most folks start with the 3Vs (Volume, Velocity, Variety), and each of those brings a dimension of complexity to the problem. Many of us had other Vs that contribute to complexity, such as Variability and Veracity.

The key with any data (big or not) is defining techniques and approaches that allow you to efficiently and effectively extract or derive knowledge from that data.

I believe that "Big" Data came about because there are now approaches that permit this type of exploitation for data sets that were not previously exploitable. The Big Data thrust so far has primarily been focused on storage and representation of the data. That is now transitioning to more of an analytic and algorithmic focus (including visualization), which to me is the harder problem.