Tad, a tabular data viewer (tadviewer.com)
322 points by tosh on May 30, 2017 | 111 comments



Nice, some initial remarks:

- Sad to see it's an Electron app, but I don't want to start this flamewar all over again

- Mid-sized CSV files open fairly quickly

- If you want to build this into a full-fledged app, be prepared to handle a lot of CSV edge cases; also think about supporting Avro and Parquet files

- Regarding filtering: I can filter on CONTAINS and '=' but not on NOT-CONTAINS or '<>', I think? That makes it annoying to filter out empty strings (I didn't check in much detail, so I might be wrong here)

- Regarding filtering: I have a lot of columns; it would be nice if the dropdown let me type a few characters to prune the list and quickly find what I'm looking for

- You might think about including some statistics per column, e.g. variance or entropy, to allow for exploration when many columns are present and you want to quickly highlight the more "interesting" ones

Congrats on putting something out! Some competition in this space: https://exploratory.io/, https://github.com/saulpw/visidata and http://www.delimitware.com/ (as others have mentioned).


I believe my http://easymorph.com would also count as competition. It loads CSV files, queries databases, and filters, pivots/unpivots, cleanses, aggregates, and merges data. There's a free edition. Windows only (yeah, Mac owners, I know).


Looks fantastic. I've been complaining for years that there wasn't any program like this, and have even started working on one many times. Thumbs up!


Thank you!


> Sad to see it's an Electron app, but I don't want to start this flamewar all over again

Then stop mentioning it. Just don't use anything Electron-based and don't mention it. I'll keep using my RStudio and VS Code.


I think it is ok to express displeasure at the state of something without just silently doing something else and hoping that the entire ecosystem catches up to you. For example, I take every opportunity (like right now) to shit on node.js and npm for that very reason. There is so much evangelism on one side that it is nice to hear the opposing viewpoints.


> I think it is ok to express displeasure at the state of something without just silently doing something else and hoping that the entire ecosystem catches up to you.

It is, but it's not okay to bring up a common flame war topic and then say "but I don't want to start a flame war". If you don't want to start one, just don't start one. Starting it, and then saying you didn't want to, is disingenuous at best.


It's a legitimate thing to mention as being displeased about. The very fact that it's such a hot topic demonstrates that, since it indicates that it's a subject over which smart people can disagree.

The "I don't want to start a flamewar" bit is also a reasonable addendum. It's a terse way of saying, "I don't like this feature, but, people who disagree with me, please don't get up in arms about this, you're free to like this feature if you want."

Frankly, if there's anyone trying to start a flamewar, it's not the original poster; it's people who are edging toward caping up in response to OP's (quite mild) comment.


Ok, so what are some legit criticisms of node.js and npm? I'm no fanboy, but every time I try to ask this, the answers are along the lines of "if you install packages in a different order, you get different results." Which is fair, but everybody installs using `npm i`, so that's mostly a moot point.

Is there any substance here?

I think it's fair to talk about this, as long as we go out of our way not to reduce it to flamewar territory. The app is built on Electron, after all.

Maybe avoid mentioning memory usage, since that point has been done to death.


What counts as a legit criticism? So much of our field is taste-driven. And so much criticism aimed at old iterations of something gets carried on in flamewars but no longer applies to the latest version (e.g. "Java is slow").

All the JS-the-language criticisms apply; those are probably the biggest at this point. The classic "node.js is cancer" essay is still somewhat relevant, though I never thought it made its point well enough about how forced asynchrony is at least as bad as the forced OOP and checked exceptions we put up with in, e.g., Java. Callback hell was long a common complaint, until people found Promises / the waterfall structure, but then the complaint became debugging hell. The new await stuff in Node 8 sounds nice at least.
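
To make that last bit concrete, here's a minimal sketch (using the modern fs/promises API for brevity; file names are made up) of the flat control flow async/await buys over nested callbacks:

  import { readFile } from "node:fs/promises";

  // Callback style nests; each dependent read adds a level:
  //   fs.readFile("a.txt", "utf8", (err, a) => {
  //     fs.readFile("b.txt", "utf8", (err2, b) => { /* ...and so on */ });
  //   });

  // With async/await, the same flow reads top to bottom, and errors
  // surface through ordinary try/catch instead of per-callback checks:
  async function concatFiles(): Promise<string> {
    const a = await readFile("a.txt", "utf8");
    const b = await readFile("b.txt", "utf8");
    return a + b;
  }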

For a long time npm didn't fetch packages over HTTPS by default; I think it has for about a year now? The only other criticisms I can think of are shoddy engineering that turns into drama (I remember seeing something about npm's colorized output slowing everything down quite a lot) or more taste-level things, like how seemingly trivial concepts like leftpad are core libraries (that go through multiple versions because they couldn't have just been done right the first time), and the drama involved when something everyone depends on has an issue.

Disclaimer: I haven't done production Node since ~2013. It was actually mostly enjoyable, and I wouldn't mind doing it again, even though at the taste level I think the whole Node ecosystem is just worse than many other options (when you have a choice) for many specific problems. But life is a continuous lesson in Worse is Better.


One drawback of Node.js is that it's single-threaded. Moreover, even if it somehow became multithreaded today, JS is absolutely not ready for that. (Btw, yes, async I/O is nice, but threads are useful for more than just I/O.)

NPM is quite basic; it pretty much just downloads tarballs recursively from a webserver according to some version spec. The CommonJS module format is fairly loose (and has kind of outlived its usefulness), so there's no agreed-upon package structure and everyone just does whatever they feel like, which leads to a huge overall mess. The whole ecosystem is fragmented and very few things are really agreed upon.


> NPM is quite basic, it pretty much just downloads tarballs recursively from a webserver according to some version spec

This is about as valuable as "Rails is a webserver that responds to some HTTP verbs". Common.js isn't even a part of npm (the correct capitalization) or node.


> Common.js isn't even a part of npm (the correct capitalization) or node.

I never said it was. It's not part of it per se, but it is the 'blessed' module/package system on Node.js (the correct capitalization). Which is really sad, because CommonJS is braindead: a file has to actually be executed in order to get its require & export characteristics, and they might arbitrarily change at runtime. It's also sad that Node.js, as well as the JS ecosystem in general, is fairly hostile towards ES6 modules -- basically all current support for ES6 modules just translates them / dumbs them down back to CJS. Hopefully this'll change as ES6 gets more adopted and CommonJS is left to rot (a fate it deserves and should've met ages ago).
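
A tiny sketch of that execute-to-discover problem (hypothetical module; assumes Node typings are available):

  // cjs-module.ts -- CommonJS exports are plain runtime assignments, so a
  // module's shape is only knowable by actually executing it:
  if (process.env.LEGACY_MODE) {
    exports.doThing = () => "legacy behavior";
  } else {
    exports.doThing = () => "current behavior";
  }
  // An ES6 `export function doThing() {...}` declaration, by contrast, is
  // static syntax that tools can analyze without running the file.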


A mature framework should not have this kind of problem: https://github.com/nodejs/node/issues/12115


I don't want to start a flamewar either, but I'm glad that the top poster mentioned that it's an Electron app. Having that knowledge ahead of time allows me to manage my expectations.


All great feedback, thanks! Some quick replies:

- The omission of '<>' from the operator list was just an embarrassing mistake on my part. #facepalm #WILLFIX

- Also agree that a drop-down list of checkboxes (ideally a searchable one!) with an 'in' operator for low-cardinality columns would be great. But the UI for that is a bit involved, and I'd probably also need to provide some user control over when to do this, since gathering distinct values can be an expensive operation for large data sets.

- Totally agree that it would be useful to provide more interesting column stats.

Thanks for the suggestions and refs to other tools -- very helpful.


You can check RStudio's data viewer for reference: https://support.rstudio.com/hc/en-us/articles/205175388-Usin...

It's a modified DataTable (so search is built in; I didn't see search in Tad yet), with selectize for categorical variable filters (which is a searchable dropdown list).

Another good-to-have feature is showing the column index. We often need to manipulate the columns in code, so a column index is helpful.

Based on column indices, you could also select a subset of columns faster with numeric input -- over 100 columns is normal, and using checkboxes to select is too cumbersome.


Good job! Curious about the performance when doing multiple AND/OR clauses on 1M+ rows -- would you need compound indices for that?


All the analysis (including filtering) works by generating SQL queries that SQLite evaluates. Tad doesn't generate any table indices, and I'd be reluctant to add that. However, you should be able to use Tad as a SQLite viewer from the command line (sqlite://foo/bar.sqlite/tablename), and it should make use of indices on your existing SQLite table if you have them.
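
To give a rough idea of what the query generation looks like (a simplified sketch, not the actual generator; all names here are made up):

  // Sketch: compiling a filter UI state down to a single SQLite query.
  interface FilterClause {
    column: string;
    op: "=" | "<>" | "CONTAINS";
    value: string;
  }

  function buildQuery(table: string, clauses: FilterClause[]): string {
    const where = clauses
      .map((c) =>
        c.op === "CONTAINS"
          ? `"${c.column}" LIKE '%${c.value}%'`
          : `"${c.column}" ${c.op} '${c.value}'`
      )
      .join(" AND ");
    // (real code would escape/parameterize the values, of course)
    return `SELECT * FROM "${table}"` + (where ? ` WHERE ${where}` : "");
  }

  // buildQuery("tablename", [{ column: "city", op: "CONTAINS", value: "York" }])
  // => SELECT * FROM "tablename" WHERE "city" LIKE '%York%'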


Good summary, thanks for the tl;dr.

+1 for the comment about Electron/JavaScript; I also don't want to start a flame war, heh.

exploratory.io looks really neat. It's a shame about the pricing, and I'm not quite sure whether it's a native app once you purchase or an in-browser, online-only app. If it's online-only, 1) I'd be worried about uploading data to a third party, and 2) the internet in Australia is _terrible_, so uploading/downloading larger data sets might be painful.


From their FAQ (https://exploratory.io/faq)

- I have very sensitive data. Is my data safe?

Any data you import into Exploratory Desktop always stays on your PC and never leaves your PC unless you explicitly publish (share) it to Exploratory Cloud (exploratory.io). If you decide to publish the data to Exploratory Cloud for sharing or scheduling, you can share it in a private way so that only you and others you have invited can view it. The data is also stored encrypted. Please take a look at our Privacy Policy for more details.

- Where exactly my data is stored after importing?

All the data you import into Exploratory Desktop is saved in binary form (R's RData format) inside your repository, which is located under '/.exploratory' on your PC.


Genuinely curious: what can be used instead of Electron for cross-platform desktop apps?


Tk - It can look a bit dated, but it's easy to use, comes bundled with Python or Tcl, and has bindings in most popular languages. Can run on Windows, Linux, macOS.

Qt - The IDE is great. It doesn't look native. Either dynamically linked, or commercially licensed. Runs on Windows, Mac OS X, Linux, Android, iOS, and a range of embedded hardware.

wxWidgets - Native backends, so it always looks like it belongs. Bindings in a ton of languages. Can be both simple or complex, depending on what you need it to do. Runs on Windows, Mac OS X, Linux, and in-progress for Android and iOS.

JavaFX - Java's replacement for Swing. Runs anywhere Java does. Fairly flexible, and easy to use.

Kivy - A Python framework. Mainly aimed at touch-compatibility. Runs on Windows, Linux, macOS, iOS, and Android.

LCL - A Lazarus framework. (Think Pascal). Really easy to use, with great drag'n'drop and the like in the IDE. Runs on Windows, macOS and Linux.

nuklear - Fairly easy to use, with bindings in a lot of languages. Sometimes requires a bit more work getting it to run on some platforms.

These are not all of them, just the ones I find easy to use (as easy as or easier than Electron), and easy to set up and deploy.


Thanks for such a detailed answer. Can you elaborate on why those would be better than Electron?


It depends on what you want from the framework.

Electron doesn't look native, which a lot of consumers don't like. Others don't care. So, depending on your audience, it can be an extra hurdle that something like wxWidgets doesn't have.

Electron is difficult to get performant. You need to really think and test your perf. Qt, Tk, and a few of the others are much faster, much easier. And if you put in the same effort you need to put in for a fast Electron program, you can end up with blisteringly fast speeds.

Electron bundles are large to download. Some places this doesn't matter; others it does. (Think Australia, Africa, and the like, where tiny download caps exist.) IIRC Qt, the biggest dependency here, is about half the size of Electron. Tk and wxWidgets are small, and nuklear is just a header. It's tiny.

Electron is not good with touchscreens (or "harder to get right"), but more and more people have them. Kivy is great for that usecase.

Electron is "just code". Qt Creator and Lazarus' IDE are phenomenal for putting GUIs together. You'll be surprised how little code you need.

Electron is primarily JavaScript. Some people like that, others don't. If you want an easy language, you can use Python or Nim. Want more control? C or C++. Value both time and control? Go with Java.

Electron has its place.

If you have a really tight time-to-market window, or this is just a tiny personal project, and JavaScript is your "goto" language, then awesome. Use it.

But, the cross-platform GUI world is big. You have a dozen or so mature, stable, proven frameworks.

So if your project takes off, or you have the time to do things right, evaluate if any of these libraries, including Electron, fit your needs.


How is Java giving the user more control than Nim? The latter allows for hardware access like C/C++.


I said C/C++ was more control; Java is a middle ground between the two. Nim is easier, but its memory usage and performance are harder to be sure about, because it hasn't been around as long.


With Java you can get it through JNI.

But if you're going that route, you'll be making your life a lot harder in the cross-platform department.


Other people have mentioned Qt, which is great, but I'd like to point out that it also supports QML, which is probably better for you.

* It is backed by Qt and rendered by OpenGL, so it performs very well.

* It has a very easy syntax, especially automatic variable binding, so there is very little boilerplate for event handling.

* It supports JavaScript. You can code the logic in either JS or C++, depending on whether you need the performance or not, and it can be easily split between the two.

* They actually support two sets of pre-made controls, one that looks like desktop UI (http://doc.qt.io/qt-5/qtquickcontrols-index.html) and one that looks like Android or iOS (https://doc.qt.io/qt-5/qtquickcontrols2-index.html). Or you can design your own UI from rectangles and other shapes. Or combine all three.


Qt is an option. Supported on Windows, Mac, and Linux.


Tk is crossplatform, free and open source: https://en.wikipedia.org/wiki/Tk_(software)

It might look a bit dated on some platforms, but it will be lean and fast, unlike Electron.


I can understand the appeal of writing a non-internet app that uses HTML/CSS/JavaScript. But if you want to do that, why not just run a server locally and use a normal web page? Seems better than running a custom Chrome instance.
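
For what it's worth, the smallest form of that alternative is only a few lines of Node (a sketch; the placeholder HTML would be the app's actual UI):

  import { createServer } from "node:http";

  // Serve the same HTML/JS UI from a local port and let the user's
  // already-running browser render it, instead of bundling Chromium:
  createServer((_req, res) => {
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end("<h1>local app UI goes here</h1>");
  }).listen(8080, "127.0.0.1", () => {
    console.log("Open http://127.0.0.1:8080 in your browser");
  });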


A bit old-school, but many use JavaFX, especially for enterprise apps, even if people on HN bash Java for whatever reason.


"Many" is untrue. I'm a huge Java fan but FX never picked: https://www.codenameone.com/blog/should-oracle-spring-clean-...

Enterprises still use Swing, and some have touched FX since Swing is in maintenance mode and they don't have a choice.


Why is it sad that it's an Electron app? Can you articulate a reason why that's an automatic negative? Would you rather software, such as this, not exist than be Electron-based?


Not that we'd want to start a flame war or anything.


exploratory.io looks promising. Thanks for the links -- and since it's an Electron app, there's hope for a web version :)


If you prefer working from the terminal, you should check out VisiData (https://github.com/saulpw/visidata). It's a curses-based tabular data tool that can start browsing terabyte .csv files immediately, with few dependencies and no clicking.


Been using this daily. Great tool. I've almost forgotten the awk commands :) BUT I still love awk!


I was looking through the comments hoping to find a tool like this. You did not let me down, fellow HNer! I try to use the command line whenever I can. Somehow there is a command-line tool for just about everything, except for the obvious cases where it's not useful, like visual media and graphs.


I don't want to dissuade you if curses-based UIs are your thing. But one thing that's been crucial to me from Day 1 is that you can type "tad foo.csv" at the command line and immediately get a usable view of your data with no extra config or tweaking needed. I mostly launch tad from the command line.


I just launched "tad 311_Service_Requests_from_2010_to_Present.csv" (a 10GB dataset from data.cityofnewyork.us) and it ate up all my memory without ever showing any data. "vd 311_Service_Requests_from_2010_to_Present.csv" shows the first rows instantly, and I can start rearranging columns while it is still loading, and I can press Ctrl-C to stop the load (and the rows already loaded are still available). No config or tweaking needed.


Looks fantastic! I love the combination of SQLite fixed schema with the hierarchical view.

Some advice on the website: Do some browser sniffing (I know) to display a screenshot of the software on the user's operating system. This immediately answers the question "Does the software work on my computer?" Also, the source code link should be near the Download section. Not everyone is trained that the triangle GitHub icon in the top right means "source code".
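
The sniffing itself is only a few lines; a sketch (the element id and image paths are made up):

  // Rough user-agent sniff to pick which OS screenshot to show:
  function detectOS(): "mac" | "windows" | "linux" {
    const ua = navigator.userAgent;
    if (/Mac/i.test(ua)) return "mac";
    if (/Win/i.test(ua)) return "windows";
    return "linux";
  }

  const img = document.querySelector<HTMLImageElement>("#screenshot");
  if (img) img.src = `/img/screenshot-${detectOS()}.png`;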


The screenshot suggestion is a good one; whenever I open a GUI software page and see a screenshot of Windows or macOS, I often just close the tab and move on.


To me, when I see screenshots of macOS I usually assume that it's Mac-only software, and most of the time that's true.


Great suggestions for the landing page, thank you!


Just noticed the installer changes the file association for CSVs so they all open with Tad, without asking me first. Please do not do this.

Otherwise, nice app. How does it scale to larger CSV files? I have been working on something similar (https://warp.one) for Mac, which streams CSV files (apparently you use a SQLite database behind the scenes as cache?)


Upvote for "don't change default file handler without asking".


Really sorry about this! I do specify Tad as a handler for CSV files, but becoming the default handler was not my intent at all; I will look into what's causing this.


Just gave it a quick test run on a 6-million-row by 8-column CSV my software produces. Fairly slow to open on an iMac, but it got there in the end. Looks nice. Sorting is fast. I've got an additional column called 'Rec' for some reason, though.


Related to this, if you're looking for a CLI tool for handling CSVs, xsv [0] is looking promising.

Based on prior experiences with CSV, one of the big problems I've seen has been figuring out what text encoding is being used. This problem appears to be more prominent with people outside of the US. It looks like Tad is using fast-csv, which I don't think will properly handle different file encodings. Life would be so much simpler if everyone just used UTF-8.

[0] https://github.com/BurntSushi/xsv


You basically have two approaches:

1. Write your CSV parser with the assumption that the data is ASCII-compatible (this means it works with either UTF-8 or Latin-1 out of the box, possibly modulo non-ASCII metacharacters). To support additional encodings -- such as UTF-16 -- either the CSV library or the caller must transcode first.

2. Write your CSV parser such that it can work on multiple different encodings. For example, this means looking for `\x2C\x00` when parsing UTF-16LE data instead of just `,`. This introduces implementation complexity, and you'll be unlikely to support the full gamut of encodings that other tools support whose job it is to do that sort of thing.

(2) is kind of weird, but probably quite a bit faster than (1), and I can imagine it being useful in very niche circumstances. E.g., "I have a boatload of UTF-16 encoded CSV data and transcoding it to UTF-8 to use this CSV parser isn't worth my time because ______." I can't actually fill in that blank, so solutions in the style of (1) tend to be the way to go.

Now... if you're building a full-on CSV tabular viewer, then I might understand why it should handle encoding for you automatically, but when it comes down to it, the viewer is still going to need to choose between (1) and (2). Unless they want to hand-roll their own CSV library, I imagine they're just going to pick (1) and, when possible, transcode the data first. In that case, it shouldn't really matter whether the underlying CSV parser supports alternative encodings or not.
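
In Node terms, a minimal sketch of (1) -- handling just the BOM-marked cases before handing off to an ASCII-compatible parser -- might look like:

  import { readFileSync } from "node:fs";

  // Transcode to UTF-8 up front so the CSV parser never has to know about
  // other encodings. This only covers BOM-marked files; real encoding
  // detection is much hairier.
  function readCsvAsUtf8(path: string): string {
    const buf = readFileSync(path);
    if (buf[0] === 0xff && buf[1] === 0xfe) {
      return buf.subarray(2).toString("utf16le"); // UTF-16LE BOM
    }
    if (buf[0] === 0xef && buf[1] === 0xbb && buf[2] === 0xbf) {
      return buf.subarray(3).toString("utf8"); // UTF-8 BOM
    }
    return buf.toString("utf8"); // assume UTF-8 / ASCII-compatible
  }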


The use case in which I got to see a lot of CSVs was a data importing tool. Personally, I think that everyone should just use UTF-8. But since a lot of people don't, we were forced to try guessing the encoding while providing a preview, along with a way to manually try other encodings.

I think the only other time I've seen something that hacky has been with a date parser that would try to guess the format for you. That pushed me to the firm belief that ISO 8601 is the only sensible way to store dates.


Csvkit [0] has been my go-to for a while now. I'd never heard of xsv before, but I will try just about anything with BurntSushi's name on it. Ripgrep is an amazing tool.

[0]: https://github.com/wireservice/csvkit


This is absolutely terrific! I often work with CSV or TSV and large data, and I just want to look at it quickly without loading Excel or exporting to Google Sheets. Tablr for Atom (https://atom.io/packages/tablr) is good too. One feature request: I very often get data in JSON format (then convert to CSV); it would be amazing to also support JSON. Thanks for sharing.


OpenRefine (http://openrefine.org/) has for years been my go-to tool when looking at CSV or TSV. I really love this project.

> OpenRefine (formerly Google Refine) is a powerful tool for working with messy data: cleaning it; transforming it from one format into another; and extending it with web services and external data.


Oh, thanks for reminding me. I did look at that a while ago; it's absolutely great. The transformation steps are amazing. Time to have another look at it. Exploratory.io is similar (nice transformation steps using dplyr), but for me it was just a bit too close to actually writing the R code.


I LOVE writing R code for scraping. I am actually serious. The Hadleyverse and the tidyverse package have changed my programming life. (Well, learning Racket also helped a ton, since there is Lisp in the foundations of R.)


+1. Instead of "can I import X", please allow an import preprocessor so that any kind of data can be imported. People could write custom preprocessor scripts (sometimes just for sanity checks).


Wow, this seems simple, but I got excited immediately when seeing this. Since I've been writing code to export data for clients, I have grown so tired of Kingsoft Spreadsheets and Google Sheets lagging like crazy with any sizable amount of data. This will be a cool new tool to show my coworkers tomorrow, and I'll be using it. Performance seems very snappy so far!


I invested some effort in keeping it performant even with fairly large CSV files, including a custom port of some C++ code for fast CSV import. My current favorite example is the Met Museum's 228MB, 450k-row collection data set; it takes about 12 sec. to open in Tad on my 2013 MacBook Pro. Definitely not lag-free (and that's hard to achieve without going to a serious column-store data warehouse like Amazon Redshift), but still reasonable. https://twitter.com/antonycourtney/status/869252722624561152


Thanks for putting this out there!

There are some projects out there using memory-mapped files to do fast CSV parsing. That could be a nice way to speed up loading and allow scrolling in real time. I can't find the link to the library I saw it used in, but it might be an interesting avenue to consider. Another library that does this is astropy's fast ASCII I/O module [1].

[1]: http://docs.astropy.org/en/stable/io/ascii/fast_ascii_io.htm...


Try benchmarking OS read() calls vs. either sequential or random reads using memory mapping; whenever I do this, OS read() calls end up being quite a bit faster.


Are you familiar with the R package data.table? Its CSV parser is blazing fast. Pandas (the Python tabular data library) also implements a speedy CSV parser. Both are written in C under the hood.


I really don't understand how anyone can use Google office apps; the UI is painfully slow. I was using Gnumeric before, but I'll try this out. Thanks, OP!


The only two selling points for me with Gnumeric are a rough function equivalence with MS Excel 2003 and the output to LaTeX tables (which is _absolutely awesome_). Other than that, it's always been pretty buggy for me.

Not that LibreOffice gets a pass, but it doesn't crash nearly as often for me.


They have one and only one "killer feature" over various less-sucky apps: they're in the browser, so it's much easier to collaborate on a document with random people on random operating systems (including mobile).

But honestly, if you care about collaborating on text and not its formatting, then I'd suggest hosting an Etherpad instance somewhere :).


You should definitely try Delimit (http://www.delimitware.com); I was impressed by how easily I could handle >1 mil rows with it.


I couldn't see any way to export data, once I had filtered or pivoted it.

Is this supported at all, or do I need to use the copy mechanism? I have a ~4GB file to filter down and analyse, and this almost looks like a nice tool for business users to explore the data themselves, but they need to be able to export to Excel at some point.


Very nice!

Mainly for the cascaded pivot option.

First remarks:

- Requires \n or \r\n line endings; doesn't work with \r

- Requires a comma as the separator (not possible to change it, or I can't find how)

- Does not support (for example) ANSI encoding


Just out of curiosity, which systems use \r as the newline these days? As far as I understand it, Mac OS did prior to version 10 (the Unix version), but modern macOS does not.


FWIW, a Save As CSV from "Microsoft® Excel for Mac" 15.33 got me this:

test.csv: UTF-8 Unicode (with BOM) text, with CR line terminators

Apparently, even recent software is not up to date on what the line separator should be.
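
Handling all three terminators is cheap if the parser splits with a regex -- a sketch (though a real CSV parser also has to respect newlines inside quoted fields):

  // Splits on \r\n, bare \r, and \n, so Unix, Windows, and old-Mac-style
  // output (like the Excel for Mac file above) all parse:
  function splitLines(text: string): string[] {
    return text.split(/\r\n|\r|\n/);
  }

  console.log(splitLines("a,b\rc,d")); // [ 'a,b', 'c,d' ]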


Not sure about that; the files are coming from an old system about which we don't have any information.


Interesting. I would describe it as a SQL client with an easy interface for loading column-based files. It's built on top of SQLite, which gives an indication of the kind of performance one can expect.


SQLite is pretty fast, actually. More problematic is that this uses Electron. Loading a tiny 200-line CSV file took 120MB for me, lol.


Thanks for the heads up on Electron - I was going to look into onboarding this for our team until that. We have more than enough in the way of crufty crapplications already.


Can it run a proper local server, so that one could connect using an already-open browser? (Maybe even remotely.)


A little UI comment, if the creator is reading this: it actually took me a while to find the search/filter button, only to see that it's at the bottom. Perhaps make it more visible (at the top, maybe? And show the form right away instead of having to click the filter link).


They lost me right at the front page. Left justification of numbers is just wrong...


The brief screenshot and short description look cool – does anyone have a screencast demo?

EDIT: once you install it, a rich README pops up which includes more screenshots and some example datasets. Fun to play around with


Dies immediately when launched on Debian Jessie:

Error: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by /tmp/.org.chromium.Chromium.ibqKoR)

  $ strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX_
  ...
  GLIBCXX_3.4.19
  GLIBCXX_3.4.20
  GLIBCXX_DEBUG_MESSAGE_LENGTH


I think you should be able to rebuild the electron npm module from source to have a compatible runtime.


Feature Suggestions

(top level comment to hold various suggestions, so upvoting can be used per-suggestion to bubble the best to the top)


Quick Filter

i.e. currently, to add a filter you go to the bottom, click Filter, select a column name, select equals, and enter a value.

Instead, right clicking on a cell and selecting "filter > equals" would apply a filter to that column for values matching the selected cell's value. Likewise "filter > contains", "filter > does not equal", "filter > greater than or equal", etc.

These filters would then be appended to the filter at the bottom, so they could still be managed there; this just saves some effort when first populating them.


Export filtered/pivoted result to CSV/TSV:

- A very important use case for my team in support of business users who are comfortable in Excel, which cannot manage large (>1M-row) files. Tad is useful even with 10M+ rows: a user can import and review a data file, then filter/pivot to the desired subset. The next step for them would be exporting and creating formatted tables or charts for analysis/reporting documents.


Aggregate Function: Count Distinct

Like count, but only counts each distinct value once. Useful for judging data quality (e.g. if you have 600 items with 599 distinct values, chances are there's an invalid duplicate; if you have 1 distinct value, chances are that column's not of interest; if you have a few distinct values, you have a potential pivot candidate, etc.).
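
Since Tad compiles its views to SQLite queries, this should map directly onto COUNT(DISTINCT ...) -- a sketch with made-up table/column names:

  // Hypothetical shape of the SQL a Count Distinct aggregate would generate:
  const countDistinctSql = `
    SELECT category,
           COUNT(*) AS total,
           COUNT(DISTINCT item_id) AS distinct_items
    FROM data
    GROUP BY category`;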


DATE support.

The help mentions INT, REAL, and TEXT. Having support for dates would be useful, especially if it enables us to treat dates as multi-part values, e.g. pivot by year & month instead of by the complete value.
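
SQLite itself has no native DATE column type, but its strftime() can split an ISO-8601 text column into parts, so a year/month pivot could compile to something like this (made-up names):

  // Hypothetical sketch: deriving year/month pivot columns from a text date.
  const datePivotSql = `
    SELECT strftime('%Y', order_date) AS year,
           strftime('%m', order_date) AS month,
           SUM(amount) AS total
    FROM orders
    GROUP BY year, month`;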


Aggregate Function: Count

I've seen that COUNT is implemented for numeric values, but not for text. There's no reason to limit COUNT to numeric columns (unlike AVG and SUM).


Good points and great suggestions, thanks!


Very cool! I'm interested to see how easy it is to create pivots on the fly. We use react-pivot [1] all over the place, but it needs JS to set them up.

[1] https://github.com/davidguttman/react-pivot


I downloaded this when this post came up and have been using it steady. It's good. Reliable, does a job that needs doing, has a few bells and whistles.

Definitely still early stage, but that will pass.

I've shown it to a couple of people I work with, and they've started using it too.


Hey, congrats on putting this out! One small thing: Homebrew is showing v0.8.3 as the latest version, but v0.8.4 has been released (probably an easy fix). Also, I was wondering if there are plans to make this a native / non-JavaScript app?


Nice project page.

If only there were a tool/service to create such project pages automatically.


There are several, sort of. I think these kinds of pages are becoming easier to design and publish with static site generators.


Another Electron app, yay...


Oh joy, more Slack pings from the data team (check my username).

Nice work!


One area to consider growing this: existing SQL clients/viewers are pretty awful. It would be awesome if I could connect directly to a DB.


If you use MySQL, you should definitely check out Sequel Pro. It's very polished and very usable.


What kind of features are you looking for that are missing in apps like Postico?


Maybe I missed it but how is this better than Excel?


I don't think I ever claimed it was better than Excel, which is targeted at a different set of users and use cases. That said, here are some reasons why some users might prefer Tad to Excel:

- It's free and MIT licensed, with the sources on GitHub.

- It's available for Linux, Windows, and Mac.

- It works on any tabular data source.

- The UI for creating a pivot table is clear, intuitive, and efficient (just a few clicks); many users find Excel's pivot table interface difficult and confusing.

- The underlying typed data model is a better fit for many data sets.

- Finally, it is many times faster than Excel for loading and working with large CSV files: my large test data set takes 12 sec. to load in Tad vs. 40 sec. in Excel.


"It works on any tabular data source."

Just on that point: Excel allows you to define the field delimiter when importing text files and handles a ton of different formats out of the box, so I doubt this is a point where Tad is better than Excel (though Tad does look promising as a tool for quickly checking data files).


Perhaps the better question is: how is this better than Google Sheets? Most of your reasons still apply, but I believe Google Docs handles large spreadsheets pretty well (after an initial loading phase).


Google Sheets isn't open source and MIT licensed -- that's one big difference.


I think these are all very legitimate reasons, thanks for the info!


It doesn't appear to cost anything at all.


Is this using JUCE?



If I have a table with an email column, can I pivot by email domain?


Ah of course it's a Node/Electron thing.


How on earth did you not call it "Tada"?



