
The big data successor of the spreadsheet - misterdata
https://medium.com/p/rewriting-excel-for-the-era-of-big-ger-data-part-2-c13cc40f248e
======
PaulHoule
I think the problem now is not a better Excel but a better Access;
spreadsheets are pretty good at what they do, but biz people use them when
they should use a database because they find databases less intuitive.

~~~
jewel
I feel the same way. In the 90s you'd find small businesses (and some larger
ones!) built entirely on a single Access database on a network share. At the
current small business where I work, we use google sheets instead, and it
leaves a lot to be desired.

For the past few years an easy-to-use database interface for small businesses
has been my primary idea of what I'd like to build if I founded a startup.

~~~
rpwilcox
QuickBase(.com) may be something close to what you want, though it's mostly
aimed at businesses with more than 10 people. It's not great, but it does have
a reasonable amount of power... especially for apps with, say, fewer than 10
tables.

DabbleDB was a good one for individual users, until Twitter bought it and shut
it down. Google says grovesite.com is a competitor... and I also think
Dabble's founders were debating rewriting it.

There's always the old FileMaker Pro, or databases of that age / time. (4D??)

~~~
jasoncrawford
@jewel, @kikidrew, @rpwilcox: We're building this:
[https://fieldbookapp.com](https://fieldbookapp.com). We're still in private
beta, but message me (jason@fieldbookapp.com) and I can get you an invite.

------
err4nt
Soulver is the best evolution of a calculator I've ever seen. Think of it as a
dynamic paper tape of calculations that you can work with and alter, and it
keeps the results updated.

------
hammerandtongs
"""The interfaces require the end-user to know about the concept of a table,
rows and columns, the concept of logical expressions, as well as aggregation,
even though none of those are directly visible nor have a ‘real life’
analog."""

This sentence expresses the problem I see with designers.

Instead of making that disappear into magic, why not focus on purposefully and
clearly educating people about these CORE concepts?

edit: Think of how you would do this if you were Khan academy...

~~~
ethanbond
The pragmatic reason is that it takes a lot of time. As a product designer at
a company that actually tackles exactly this problem, we generally have
approximately 0ms to stop a user's workflow in order to teach them a cool
concept.

Building intuition DURING their workflow is where things get really exciting
for designers though.

Edit: Should point out that I'm not quite sure whether you're suggesting we
teach this in school, or...? But I read this as a designer's job to teach this
in-product.

~~~
hammerandtongs
0ms is a good goal for calling lyft or uber.

It's probably not a great goal for coherently querying data in a flexible and
powerful way.

Having very little friction when your user has attained some fluency is a
great goal however.

------
JBiserkov
[http://en.wikipedia.org/wiki/Powerpivot](http://en.wikipedia.org/wiki/Powerpivot)

~~~
igrekel
I was about to post a comment on Power Pivot myself.

We were building a lot of tools in R but eventually switched most of it to
Power Pivot. My only complaint would be that it isn't practical to refresh it
with new data; it requires more manipulations than I would like.

------
tomlock
I work in a big, dumb enterprise with too many spreadsheets and Access
databases.

What made me move away from Excel spreadsheets:

1) As in the infamous case, it's too easy to miss extending a range when you
add new data [1]

2) There's no good way to manage concurrent access

3) Doing aggregates of aggregates is really hard
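
To make point 3 concrete: an aggregate of an aggregate (say, the average of
per-customer totals) takes fragile helper columns or array formulas in a
spreadsheet, but it's a two-step chain in a dataframe tool. A minimal sketch
in Python with pandas, using made-up data (none of these names come from the
thread):

```python
import pandas as pd

# Hypothetical order data
orders = pd.DataFrame({
    "customer": ["a", "a", "b", "b", "c"],
    "amount":   [10,  20,  30,  50,  40],
})

# First aggregate: total spend per customer
per_customer = orders.groupby("customer")["amount"].sum()

# Second aggregate: average of those per-customer totals
avg_total = per_customer.mean()
print(avg_total)  # (30 + 80 + 40) / 3 = 50.0
```

In SQL the same thing is a subquery feeding an outer aggregate; in a
spreadsheet you'd be maintaining a second summary range by hand.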

What made me then move away from Access databases:

1) Excel and Access attempt to guess the "type" of cell contents
automatically, and this creates issues. (Is 060E2 a product code, or a number
in scientific notation? Of course, it's 6000. Is 3/6/2015 a US date? Access
will figure it out without you.)

2) There's _still_ no good way to manage concurrent access

3) Storage limits still aren't that great if you want to deal with more than a
few GB of data
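
The type guessing in point 1 is exactly why import tools grew a
"treat everything as text" escape hatch: read every column as a string, then
convert deliberately. A sketch with pandas (hypothetical file contents and
column names; `dtype=str` plus an explicit date format do the deliberate
part):

```python
import io
import pandas as pd

csv = io.StringIO("product_code,shipped\n060E2,3/6/2015\n")

# Let the reader guess: "060E2" is parsed as scientific notation
guessed = pd.read_csv(csv)
print(guessed["product_code"].iloc[0])  # 6000.0

csv.seek(0)
# Force text: the product code survives intact
literal = pd.read_csv(csv, dtype=str)
print(literal["product_code"].iloc[0])  # '060E2'

# Convert dates explicitly, with the day/month order you actually mean
shipped = pd.to_datetime(literal["shipped"], format="%d/%m/%Y")
```

Excel and Access do the "guessed" step silently on every cell, with no way to
opt out per column, which is where the corrupted product codes come from.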

For me the ideal spreadsheet would solve all these problems :) I moved to
Postgres, instead.

[1] [http://www.bloomberg.com/bw/articles/2013-04-18/economists-spreadsheet-error-upends-the-debt-debate](http://www.bloomberg.com/bw/articles/2013-04-18/economists-spreadsheet-error-upends-the-debt-debate)

------
bigger_cheese
At my workplace we use SAS Enterprise Guide for some of this stuff. I like it
because it has a drag-and-drop GUI that non-technical users can utilise to
extract data (it executes SQL queries in the background) and do simple
operations (like aggregations, statistics, graphs, merging, filtering, etc.).
At the same time it has a decent programming language underpinning it and
pretty advanced statistical capabilities.

On the downside it still suffers from some of the problems the article
references. Some non-programmers, even quite technical people, simply don't
grasp the concepts of querying and joins.

It is also horrendously easy for the auto-generated SQL to be horribly
inefficient, and the default appearance of the plots and other graphical
output looks ugly, etc. This leads to people doing things like extracting
subsets of data from SAS and then pasting them into Excel to manipulate them,
which is kind of self-defeating.

------
beamatronic
The "pipeline of blocks" idea goes back a long way (Visual Smalltalk anyone?),
I'm excited to see it applied in this way.

------
sgt101
Datameer - big data spreadsheets

Scratch - best thought out block language

