I can't count the number of times I'll be talking to some sales rep and they'll describe how they scan the data within whatever application they're demoing and "suggest" items using "big data techniques". In almost all cases they're talking about a few thousand or hundred thousand records, tops.
I've found that when non-hardcore techies talk about Big Data, what they really mean is "they have some data" vs before, when they had zero data.
From the article:
"Consultants urge the data-naive to wise up to the potential of big data. A recent report from the McKinsey Global Institute reckoned that the US healthcare system could save $300bn a year – $1,000 per American – through better integration and analysis of the data produced by everything from clinical trials to health insurance transactions to smart running shoes.
What these consultants mean is that by having just some data compared to the siloed data that is the norm in US healthcare, they could save a lot, and they're right. My previous company had a large data set (20+ million patients) and we'd find millions of dollars of savings opportunities for every hospital we implemented in, but that's because we had the data, not because we were running some kind of non-causal correlation analysis like the article references. It was just because we could actually run queries on a data set.
Off Topic - how annoying is it that when you copy & paste from the FT, they preface your copy with the following text?
High quality global journalism requires investment. Please share this article with others using the link below, do not cut & paste the article. See our Ts&Cs and Copyright Policy for more detail. Email email@example.com to buy additional rights. http://www.ft.com/cms/s/2/21a6e7d8-b479-11e3-a09a-00144feabd...
I then asked what tools they used. He responded with a well-known relational database. I then asked the total size of their dataset, with a good idea of what the upper bounds would be. He responded "around 100 million events" since the product started, 6 months ago.
It's really sad because they may end up under fire despite the effectiveness of their work.
Big Data is a lot like teen sex.
Web 2.0 was some sort of shift over Web 1.0: the line between publisher and consumer melted. The cloud is etherealizing computing and data. There was a thread a few days ago about the film Her. "Where is Samantha?" (the AI) is a borderline nonsensical question; it doesn't even occur to a viewer. That's because people are used to the cloud as an idea now. It doesn't really matter that servers, replication, dumb clients, remote data, or whatever were invented a long time ago.
I really don't mean to be snarky, but please try to use paragraphs. It'll make it much more likely people will read your post.
I ask not to be snarky, but it might be the case that it's "big data" to someone else, but not necessarily to you. I figured it was a relative term for your industry/business, but the hacker crowd definitely seems to peg that amount in the millions of data points before calling it big data at all.
Seems fair, but I'd rather clarify.
"Big Data is any thing which is crash Excel."
Many a true word spoken in jest.
It would also be easier to engineer it so the terabyte file was entirely in RAM by distributing it across multiple machines (although single machines with TB RAM capacity are no doubt continuing to become more common).
Sure: store it on a single tape or disk and distributing the computation won't help. You need distributed storage to properly leverage distributed computation for otherwise I/O-bound processes.
I know it can at least get up to several million, didn't have a chance to test it beyond that! :)
On normal data you can iteratively explore and visualise it, hitting return and seeing plots or model results instantaneously, or at most within a few seconds.
When you have time to grab a coffee after hitting return then you have bigger data.
If you carefully think through what you are about to ask the computer to do before pressing return then maybe you have big data.
I actually think this is a better description than just file size or data distributed across many computers, since an algorithm that merely streams over a massive dataset (perhaps in parallel) can be less challenging than one that has to hold a much smaller, e.g. many-GB, dataset fully in memory.
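As a concrete illustration of that point (my own sketch, not from the thread): Welford's online algorithm computes mean and variance over an arbitrarily long stream in constant memory, so "massive but streamable" really can be easier than "smaller but must fit in RAM".

```python
# Welford's online algorithm: mean and (sample) variance of an
# arbitrarily long stream in O(1) memory -- the data never has to
# exist all at once.
def streaming_mean_var(stream):
    n, mean, m2 = 0, 0.0, 0.0
    for x in stream:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    variance = m2 / (n - 1) if n > 1 else 0.0
    return mean, variance

# Works directly on a generator, so a "million-row dataset" here
# occupies no memory beyond the running totals:
mean, var = streaming_mean_var(i % 100 for i in range(1_000_000))
```

A naive two-pass variance would need the whole dataset addressable; this one happily eats a stream of any size.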
It's a measure of the size of the problem to be sure, but it is not a measure of the size of the data, or an indication of what techniques might be required to solve the problem.
The more sophisticated the statistic, the higher-dimensional the data, the more sampling required, or the more of the dataset it has to hold in memory at once, the smaller your big-data threshold will be.
It depends a lot on your point of view too. If I google something now it may bounce across lots of crazy server farms, but to me it doesn't feel like I'm doing big data. The person who built it all probably feels differently.
The problem is, if you define "big data" as something that depends on the algorithm, then it makes no sense to include the word "data" in it. The expression "big data" as it is commonly used refers to flows of data so big that you need specialized approaches even when applying simple transformations to the data.
Since the dawn of computing we've always wanted to solve problems, with small or large amounts of data, that required complex algorithms. The usage of a new expression is justified by the fact that huge flows of data are now available to many companies (mainly because of the web), not because these companies are attempting to perform extremely complicated transformations to the data.
TL;DR: Big data means big volume/flow of data, and not (as you are defining it) using large Big-O complexity algorithms on some set of data. In fact, the size of big data precludes applying large Big-O complexity algorithms to said data.
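A back-of-envelope sketch of that last point, using round, assumed numbers (a billion records, a hypothetical 10^8 operations per second on one core):

```python
# Why sheer volume precludes high Big-O complexity algorithms:
# at big-data scale, even the constant factors of O(n) matter,
# and O(n^2) is simply off the table.
n = 10**9             # a billion records
ops_per_sec = 10**8   # assumed single-core throughput (hypothetical)

linear_secs = n / ops_per_sec                               # one O(n) pass
quadratic_years = n**2 / ops_per_sec / (3600 * 24 * 365)    # an O(n^2) pass

print(f"O(n):   {linear_secs:.0f} seconds")
print(f"O(n^2): {quadratic_years:.0f} years")
```

Ten seconds versus roughly three centuries: the same data, just a different exponent.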
However... Small Data, that which traditional researchers handle, is normally much, much smaller than that: perhaps 10-10000 data points (and most often on the small end of that). An experienced researcher can essentially know everything about this data set, including its outliers and quirky points, and get a good sense of it by drawing out simple graphs.
There is clearly some disconnect between these two ideas: is that "Medium Data"?
I would accept a concept of "Big Data" as data that cannot easily be eyeballed to get a sense of what's going on, so 10000+ points would count (under some circumstances). Maybe the concept of "six sigma" is useful - enough data that you would reasonably expect a six sigma outlier.
Mathematically/statistically, the storage limit is not a particularly important milestone: the ideas and methods don't change once you reach this scale (except for potential parallelisation).
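To put a rough number on the "six sigma" idea above (my arithmetic, assuming a standard normal distribution, which real data rarely is):

```python
import math

# P(|Z| > 6) for a standard normal, via the complementary error function.
p_six_sigma = math.erfc(6 / math.sqrt(2))   # ~2e-9
points_needed = 1 / p_six_sigma             # samples per expected 6-sigma point

print(f"P(|Z|>6) = {p_six_sigma:.3e}")
print(f"expect one six-sigma point per ~{points_needed:.2e} samples")
```

By that yardstick you'd want on the order of half a billion points before a six-sigma outlier is expected, which is a far stricter threshold than "can't eyeball it".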
I think there are also at least two meanings of "Big Data". The more popular one is simply a trendy name for good old and boring "statistics", but with a twist that the data comes by way of the Internet, social media, all that.
The second one (and a little closer to my heart) is what ">1 machine" means from a developer/sysadmin perspective. This is where the hadoops, hives, cassandras, etc. come into play, and it's A LOT to learn, even for seasoned developers.
I think it's also a little intimidating for people who have become very comfortable with the typical rdbms stack. Parallel processing can be hard to understand, it's not something you can tinker with on your laptop over the weekend, and it's not surprising to hear all the "your big data thing is stupid" comments.
What are standard data management approaches? I don't know. Usually they mean single machine relational db's.
But the thing is that once you get to a certain point on these three you need specialized solutions. High volumes of transactional data with real-time reporting might be handled well by something like Postgres-XC, but that won't handle data of sufficient variety. High velocity data may be best handled with something like VoltDB, but it can't handle volume. Etc....
Data pertaining (in whatever abstract sense) to the business, but generated by systems outside of the business, is 'big' (external data).
Nothing to do with rowcounts directly IMHO.
Big Data is machine-generated by systems. Typically it's logs, IoT, etc.
Comment data on this site alone would be a pretty big task to analyze.
~(From a math professor I worked with)
Chomsky critiqued the field of AI for adopting an approach reminiscent of behaviorism, except in more modern, computationally sophisticated form. Chomsky argued that the field's heavy use of statistical techniques to pick regularities in masses of data is unlikely to yield the explanatory insight that science ought to offer. For Chomsky, the "new AI" -- focused on using statistical learning techniques to better mine and predict data -- is unlikely to yield general principles about the nature of intelligent beings or about cognition.
HN thread on it: https://news.ycombinator.com/item?id=4729068
EDIT: I'm getting downvoted, but your statement is incredibly vague and I believe wrong. "Big Data" might be overused as a buzzword, but it's not a "nonsense concept". "Thinking is hard", I assume you are talking about strong AI, and it's not related to this at all. Saying it's "hard" adds nothing of value, and we don't even know if it's true (in the sense that when someone does figure it out, it might seem simple and obvious in retrospect.)
I see no connection between big data and AI. Like everything you can of course apply AI to it but I think step one is getting the analytic side down pat.
And I also agree thinking may not be hard. It is hard to create a thinking machine (other than using DNA), but I don't necessarily think there is anything special to it actually thinking.
I'd be disappointed if Chomsky actually thought this way, would need context.
If it takes decades of hard working geniuses to figure it out, then even if "it seems simple and obvious in retrospect", it IS hard.
“There are a lot of small data problems that occur in big data,” says Spiegelhalter. “They don’t disappear because you’ve got lots of the stuff. They get worse.”
This should be the main learning point. Humans can be astonishingly bad at dealing with stats and biases, which can lead to erroneous decisions being made. If you want an example where such decisions by very smart people can have catastrophic consequences, look up the Challenger disaster.
I rarely see people stating their assumptions upfront, which doesn't help the problem (I guess it's not cool to admit potential weaknesses). The more people/companies that get into 'big data' (without adequate training) the more false positives we're going to see.
 - http://www.theatlantic.com/technology/archive/2012/11/noam-c...
 - http://en.wikipedia.org/wiki/Noam_Chomsky
 - http://en.wikipedia.org/wiki/Peter_Norvig
Norvig's rebuttal: http://norvig.com/chomsky.html
This analogy is particularly illuminating:
"“The quest for ‘artificial flight’ succeeded when the Wright brothers and others stopped imitating birds and started … learning about aerodynamics,” Stuart Russell and Peter Norvig write in their leading textbook, Artificial Intelligence: A Modern Approach. AI started working when it ditched humans as a model, because it ditched them. That’s the thrust of the analogy: Airplanes don’t flap their wings; why should computers think?"
While the Norvig-Chomsky debate is about the philosophy of the science of AI, it has practical implications to practitioners who tend to apply statistical techniques as if they are popping a pill. Engineers applying statistical learning, etc. should understand the limitations of the techniques, as outlined by Chomsky in the debate. The outcome of the Chomsky-Norvig (or Hofstadter vs. everyone else in CS) debate is less important than the arguments put forth by both the groups.
I think that's indicative of the breathless enthusiasm for technology that turned me off buying the print version of Wired many years ago.
Scrape away some of the hyperbole and it is true that data driven management has made many companies more competitive and, if I dare mention the hobgoblin, efficient.
Hunches and ideas can only get you so far. It is important to visit the data gemba and do the genchi genbutsu.
It seems pretty much everything they write about is supposed to change the world in a major paradigm shift.
Many people associate "science" with things: cells, microscopes, the inner workings of the body. But science isn't a set of things; it's a process, a method of thinking, that can be applied to any facet of life.
Big data is similar, in my opinion. It's not so much about the stuff — the size or diversity of a company's datasets. It has more to do with the types of observations you're making and the statistical methods involved.
This distinction is important for two reasons:
1. If Big Data is recognized as a process rather than a circumstance, businesses will be more deliberate in deciding whether to use the methods. They will weigh the benefits of, say, MapReduce against other approaches.
2. The idea that "Big Data" techniques have everything to do with size is somewhat misleading. A comprehensive query of a 50,000 user dataset can be more computationally expensive than a simple operation on a 100,000-record dataset.
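On point 1, the core of the MapReduce model itself is small enough to weigh on its merits; here is a toy word-count sketch in plain Python (an illustration of the model, not Hadoop's actual API):

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # map: emit a (word, 1) pair for every word in one document
    return [(w, 1) for w in doc.split()]

def shuffle(pairs):
    # shuffle: group all emitted values by key
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups):
    # reduce: combine each key's values independently (this is the
    # part that parallelises across machines)
    return {k: sum(vs) for k, vs in groups.items()}

docs = ["big data big hype", "small data"]
counts = reduce_phase(shuffle(chain.from_iterable(map(map_phase, docs))))
# counts == {"big": 2, "data": 2, "hype": 1, "small": 1}
```

Seeing it this small makes the trade-off honest: the framework buys you distribution and fault tolerance, not a fundamentally new kind of computation.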
One of the most obvious examples was this one:
A data set of all known meteorite landings turns into
"Every meteorite fall on earth mapped"  with looks like a world population maps sprinkled with some deserts known for their meteorite hunter tourism.
The actual distribution can be theoretically described as a curve falling towards the poles.
While this example is pretty obvious, one could expect similar observation biases in other data sources. A danger lies where data analysts do not bother to investigate what their data actually represents and then go on to present their conclusions as if they were some kind of universal truth.
previous discussion of this: https://news.ycombinator.com/item?id=5240782
I fear that now that SOAP and enterprise buses have gone their way, vendors are looking for a new buzzword to sell. More solutions looking for problems...
Now, just like with every other technological solution, we only learn about the limits of its use by overuse. There's plenty of people out there storing large amounts of data and getting no valuable conclusions out of it. But the fact that many people will fail doesn't mean the concept is not worth pursuing.
Chasing what is cool is a pretty dangerous impulse. The trick is to be able to tell when it can pay off, and to quickly learn when it will not, and cut your losses. Maybe you don't need big data, just like maybe your shiny cutting edge library might not be ready for production.
"They cared about correlation rather than causation."
Analytics are a tool to help find correlations and patterns so that humans can do the hard work of determining and testing for causation. Computers are doing their jobs; humans aren't.
In one sense, if you can observe real phenomena, you don't have to guess at what is happening. Businesses that collect troves of data may need statistics 'less' because the sample size may approach the population size.
But calculating basic (mean, standard deviation, etc.) statistics is hardly the most interesting part. Inferential statistics is often more useful: how does one variable affect another?
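As a toy illustration of "how does one variable affect another" (hypothetical numbers, ordinary least squares computed from scratch):

```python
# Ordinary least-squares slope: the inferential step beyond mean/sd.
# Hypothetical data: ad spend (x) vs. sales (y).
def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var  # estimated change in y per unit change in x

spend = [1, 2, 3, 4, 5]
sales = [2.1, 3.9, 6.2, 8.0, 9.8]
slope = ols_slope(spend, sales)  # ~2 units of sales per unit of spend
```

The mean and standard deviation of `sales` alone tell you nothing like this; it's the relationship between variables that carries the interesting information.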
As the article points out, the "... the numbers speak for themselves" statement may also be interpreted as "traditional statistical methods (which you might call theory-driven) are less important as you get more data". I don't want to wade into the theory-driven vs. exploratory argument, because I think they both have their places. Both are important, and anyone who says that only one is important is half blind.
Here is my main point: data -- in the senses that many people care about; e.g. prediction, intuition, or causation -- does not speak for itself. The difficult task of thinking and reasoning about data is, by definition, driven by both the data and the reasoning. So I'm a big proponent of (1) making your model clear and (2) sharing your model along with your interpretations. (This is analogous to sharing your logic when you make a conclusion; hardly a controversial claim.)
"Facebook’s mission is to give people the power to share and make the world more open and connected."
What it actually does... (that will be left to the reader.)
"Big Data" is often sold as one thing by Enterprise software folks. But what value the data, or processing of it actually has is usually much more dependent on the user and his context (like FB!) and usually doesn't fit as nicely onto a PPT slide.
Articles like this usually confuse the PR definition and the analyst definition.
It's nebulous. I've seen it applied to machine learning, data management, data transfer, etc. These are all things that existed long before the term, but bloggers just won't STFU about it. Businesses, systems, etc. generate data. If you don't analyze that data to test your hypotheses and theories, at the end of the day, you don't understand your own business and are relying on intuition for decision making.
So you said "if i work for facebook and i want to figure out something about my users", and for whatever you were doing, looking at your existing user base might be the right thing to do. Perhaps, though, you actually want to know something about all your potential users, not just the users you happen to have right now. Whether or not your current user base offers a good model for your potential user base would then be a pretty important question, and one that almost certainly isn't answered by "big data".
I think that, as with most of statistics, the key point is "think about your problem", and that focusing on a set of solutions rather than the problems themselves can get in the way of that.
BigData vs. Theory, Java vs. C++, Capitalism vs. Socialism, Industry vs. Nature, Good vs. Bad, etc.
BigData allows you to store a lot of data and provides a means to run some computation on that data. No more, and no less.
This just kills my vibe, man.
New favorite phrases "data exhaust" and "digital exhaust".