Computer vision basics in Excel, using just formulas (github.com)
687 points by alok-g 49 days ago | 92 comments

This is an amazing idea! It's also a testament to the extreme power and efficiency of the core Excel code that everything works so smoothly despite this being not at all what Excel was designed for. There is something about everything-- data and "code"-- being so instantly and interactively available for inspection that makes everything seem simpler and easier to grasp.

Indeed. I have found Excel to be surprisingly fast at running this.

I can also draw parallels to literate programming in this: The code and visualization of what the code is doing are very tightly integrated.

That's true. It's also important to point out that a lot of what runs neural networks is just multiplying a bunch of numbers by each other and getting the "best fit." It's all matrix math. I was surprised how straightforward it was after taking Andrew Ng's coursera course, since ML is considered to be so advanced and cutting edge.

Well, matrices just encode linear functions, so it's not the most surprising thing that you'd use them for calculating a lot of linear functions.
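
It's true that a dense layer is nothing more exotic than what a block of spreadsheet cells computes. A minimal sketch in plain Python (the weights and inputs below are made-up numbers, not from any real model):

```python
# A dense neural-network layer is a matrix-vector product plus a bias,
# followed by a nonlinearity -- the same arithmetic a spreadsheet does
# cell by cell.

def relu(x):
    return [max(0.0, v) for v in x]

def dense_layer(weights, bias, inputs):
    # each output is one dot product (a row of SUMPRODUCTs, in Excel terms)
    pre = [sum(w * x for w, x in zip(row, inputs)) + b
           for row, b in zip(weights, bias)]
    return relu(pre)

W = [[0.5, -1.0], [2.0, 1.0]]
b = [0.1, -0.2]
print(dense_layer(W, b, [1.0, 2.0]))  # -> [0.0, 3.8]
```

Training is then just adjusting W and b to get the "best fit", but the forward pass really is only multiplies, adds, and a max.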

90% of the world’s software developers don’t realize that they are software developers and their language is Excel formulas.

While that's true, a significant fraction of the remaining 10% is software devs cleaning up after the damage done by people who have over-estimated their ability and created a bunch of business solutions in VBA...

Those same business solutions have generated more value than costs.

You only say that because the value you mention is usually visible whereas the costs are usually hidden or not obvious.

For example, in my experience Excel is really good at providing job security to those who develop complex spreadsheets. But the cost to the organization tends to be enormous as well, because spreadsheets are impossible to scale, which in turn makes the processes that rely on them impossible to scale.

Any business that runs into that sort of scaling problem is already doing quite well for itself. How many of them would have gotten to that point if they needed to hire a software engineer to make every prototype that eventually led to the final working system?

I currently work for a business that has done quite well replacing Excel solutions with proper databases. We've literally worked through the collapse of one company (and its rebirth after it was merged with something else) partially due to the sheer impossibility of managing Excel solutions that had grown out of control. Yes, they mainly grew out of control due to management problems, but the way they did so was remarkable in its rapidity and absurdity. Users were being bought machines with 16 GB of RAM so they could actually track their sales.

16GB of RAM costs about as much as employing a programmer for a day.

You’re missing the point. If your sales tracking alone takes up 16GB of RAM on the client, that isn’t scalable - and potentially a horrible experience to work with. Your point is correct in the single case, but I think we’re discussing beyond that.

Excel is a lot more scalable than doing it by hand. I think the point that Excel--even with its poorly scalable performance--has made programming ridiculously accessible in extremely valuable ways is spot on. In fact, it has created unending market demand for software engineers--customers invent their own apps (in Excel), prove them out in the real world gaining revenue all the way until they slow due to scaling issues, and software engineers just need to rewrite them in a scalable environment. It seems pretty optimal to me.


Do you have a workflow or methodology that you use or do you analyse and develop from scratch?

What are you replacing the spreadsheets with, desktop to web apps?

The people developing complex spreadsheets have job security because they are domain experts. They can't be replaced with a generic developer, even if the spreadsheet is converted into custom code.

Yep. I've seen this pattern of "We need to convert this Excel-based process to an application". Then it takes the domain expert + business analyst + developer to make any changes to the process. The final product is more polished, but way more expensive and slower to iterate.

Plus, when a generic developer writes the custom code, it eventually evolves into an endless cycle of implementing the Excel features the domain expert is used to having a click or two away.

Plus, in my observation, there is often a dismissive attitude towards Excel among the generic developers which blinds them to the risk of this “re-implementing Excel’s features” cycle.

I worked on a team that helped business users who had "outgrown excel", i.e., they had hung themselves with the rope Excel provides. Almost always their scaling problems were solved simply by better Excel practices: better management of the calculation mode; setting the RTD throttle interval to 1-2 seconds; replacing Bloomberg's streaming data function (BDP), with the native alternative {=RTD("BLOOMBERG.RTD...)}; optimizing the division of labor between what is done on the sheet with formulas versus with vba\xll code; meta programming, i.e., creating all or part of your calc sheets and formulas with code so that you get the understandability and observability of Excel formulas while avoiding the things that are hard to do with formulas, such as grouping, joining, filtering, looping; making 3rd-party add-ins workbook-specific such that they're not always on (many of these add-ins listen to application-level events like selection_change and on_calculation which can diminish the performance of all open Excel models, even the ones that don't use that add-in).
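
As a flavor of the meta-programming point: generating repetitive formulas with code, so each cell remains an ordinary, inspectable Excel formula. This is plain string generation, no real Excel API involved, and the column/range layout is purely illustrative:

```python
# Build a column of running-total formulas programmatically instead of
# hand-editing them: the sheet stays observable (every cell is a normal
# formula), but the tedious, error-prone part is done in code.

def running_total_formulas(column, n_rows, start_row=2):
    # e.g. C2 = SUM($B$2:B2), C3 = SUM($B$2:B3), ...
    src = "B"  # source data column (illustrative)
    return {
        f"{column}{r}": f"=SUM(${src}${start_row}:{src}{r})"
        for r in range(start_row, start_row + n_rows)
    }

for cell, formula in sorted(running_total_formulas("C", 3).items()):
    print(cell, formula)
# C2 =SUM($B$2:B2)
# C3 =SUM($B$2:B3)
# C4 =SUM($B$2:B4)
```

The generated strings could then be written to a sheet with any Excel automation layer you already use.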

I am continually telling my engineering manager that just because he can do something in Excel, it in no way means that he should.

He is a smart guy, but refuses to learn even basic Python or R, despite doing some very significant statistical work in an area related to preventing machines harming humans.

I just wonder, even if initially perfect, how many spreadsheets have been unknowingly perverted by someone sitting on the mouse, or a pet cat treading the keyboard.

Indeed, Excel is usually not the right tool. It is very powerful, but organization of the code/logic, correctness of the calculations, and readability/maintainability are all left to the developer. Mistakes are hard to spot. The levels of discipline and meticulousness needed to use the tool well are high.

I have at times remarked that typical Microsoft Office installations should exclude Excel! :-)

And I am saying all this about Excel even after being a power user myself and the primary author of the OP. :-)

Mathcad, Mathematica, etc. could be good alternatives if not Python or R.

> because spreadsheets are impossible to scale, which in turn makes the processes that rely on them impossible to scale.

Spreadsheets are entirely possible to scale, that just tends to be a skillset in and of itself, and domain experts organically creating a complex spreadsheet likely don't have the background in process engineering and software design principles to do so themselves.

Generally speaking, you can usually refactor an unmaintainable and complex spreadsheet to mimic software engineering best practices, all without dropping down to VBA, by leveraging named ranges[1], locked cells[2] and formulas[3], data validation[4], and error handling[5]. Combined with some defensive validation checks, this generally sorts out the complexity issue nicely.

Additional enhancements (based on Excel 2010+) can be made using tables and structured references[6], factoring out "data" worksheets into their own workbooks[7] and linking to them from the "user" workbook, adding relational integrity via a data model[8], or scaling data size via PowerQuery[9] (which stores data within the Excel file in a highly compressed, columnar format that transparently gets processed by a local instance of the same VertiPaq engine[10] that powers SQL Server Analysis Services)

You can also drop down to VBA or Javascript[11] if you truly want/need to jump out of the rails of the built in options above. Or in more common cases (which leads to the hell-to-maintain spreadsheets that are more common), if you want to bypass all of the nifty built-in functionality above and do something quick-n-dirty. But if you leverage the above capabilities, you can mature a spreadsheet-based solution quite well and have a battle-tested, stable PoC that can be handed off to a software developer for migrating into a more permanent application.

[1] https://trumpexcel.com/named-ranges-in-excel/

[2] https://support.office.com/en-us/article/lock-cells-to-prote...

[3] https://support.office.com/en-us/article/display-or-hide-for...

[4] https://support.office.com/en-us/article/more-on-data-valida...

[5] https://www.exceltactics.com/definitive-guide-excel-error-ty...

[6] https://support.office.com/en-us/article/overview-of-excel-t...

[7] https://www.microsoftpressstore.com/articles/article.aspx?p=...

[8] https://support.office.com/en-us/article/create-a-data-model...

[9] https://en.wikipedia.org/wiki/Power_Pivot

[10] https://www.microsoftpressstore.com/articles/article.aspx?p=...

[11] https://docs.microsoft.com/en-us/office/dev/add-ins/excel/ex...

Good point. I've seen miracles occur simply by refactoring an Excel workbook such that the inputs, outputs, and calculations to map the inputs to the outputs were all delineated into their own clearly-formatted bounded context. And also, when you do drop into vba code, simply organizing the code so that interactions with the sheet are easier to understand, and isolated -- and focusing on "seriality" (no feedback loops, minimize code triggered by events) can transmogrify vba code from the Bogeyman into a friendly neighbor.

For sure with the transmogrification! If the rest of your workbook is properly designed, dropping into VBA for some array operations and direct references[1] can drastically improve performance as opposed to per-cell calculations and looping. And can cleanly compartmentalize functionality in a way that can be refactored into other languages during post-spreadsheet migration as needed.

Although if it fits your usage, PowerQuery and M[2] can be even more performant, if for no other reason than the data being in a more efficient/compressed format. With the nice side effect of creating logic that's transferable as-is from Excel to PowerBI or SQL Server Analysis Services (making for a clean migration path as your solution matures).

[1] https://www.microsoft.com/en-us/microsoft-365/blog/2009/03/1...

[2] https://docs.microsoft.com/en-us/powerquery-m/

You haven't seen anything until you have seen large process plant importing and exporting data from Excel via OPC to allow the non programming process engineers to tweak performance.

This is really not the intended use of Excel as a tool, nor appropriate criticality management for it, let alone the OS it runs on, yet it is done.

If you want to see the power of "basic" operations. Watch this video of Dan Ingalls, co-inventor of Smalltalk, demo his software to do OCR on Devanagari text in 1980! https://vimeo.com/4714623

This is such a cool demo

I had a friend doing this in 2003. He had a spreadsheet that could read road signs with a lot of white noise applied. He called it foveola vision. It was super impressive. He later converted it to a C library but the concept was essentially the same.


See http://www.deepexcel.net/ - an educational April Fools' joke from 2016.

I used to show these spreadsheets to make it explicit that all operations are simple, as in addition, multiplication, max and ReLU.
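
For illustration, here is that same handful of operations in plain Python: a 3x3 convolution (nine multiply-adds per output cell) followed by ReLU. The image and kernel values are invented for the example:

```python
# Every output cell is a sum of nine products, then max(0, x) for ReLU --
# exactly the arithmetic a spreadsheet cell formula performs.

def conv2d_relu(image, kernel):
    k = len(kernel)
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - k + 1):
        row = []
        for j in range(w - k + 1):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(k) for b in range(k))
            row.append(max(0, s))  # ReLU
        out.append(row)
    return out

image = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]
edge = [[-1, -1, -1],
        [-1,  8, -1],
        [-1, -1, -1]]  # a standard edge-detection kernel
print(conv2d_relu(image, edge))  # -> [[45, 45], [45, 45]]
```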

Very nice. Spreadsheets are also great for doing quick Monte Carlo simulations. Things like finding the solid angle of a cylinder from an arbitrary perspective quickly become algebraically intractable. Raytracing with Gnumeric is comparatively easy.
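
To illustrate the Monte Carlo approach (the spreadsheet version is just columns of RAND() plus a count): estimate a solid angle by sampling random directions and counting hits. A sphere is used here instead of a cylinder purely because it has a closed form to check against; the radius, distance, and sample count are arbitrary:

```python
import math
import random

# Estimate the solid angle subtended by a sphere of radius r whose
# center sits a distance d along the +z axis, by sampling uniform
# random directions from the origin and counting which ones hit it.
# Exact answer for checking: Omega = 2*pi*(1 - sqrt(1 - (r/d)^2)).

def solid_angle_mc(r, d, n=200_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # z uniform in [-1, 1] gives directions uniform on the sphere
        z = rng.uniform(-1.0, 1.0)
        # the ray hits iff it points forward and its perpendicular
        # miss distance d*sqrt(1 - z^2) is at most r
        if z > 0 and d * d * (1 - z * z) <= r * r:
            hits += 1
    return 4 * math.pi * hits / n

est = solid_angle_mc(1.0, 3.0)
exact = 2 * math.pi * (1 - math.sqrt(1 - (1.0 / 3.0) ** 2))
print(est, exact)  # agree to roughly 1% at this sample count
```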

Reminds me of this video by Matt Parker (standupmaths): https://www.youtube.com/watch?v=UBX2QQHlQ_I

OP co-author here. :-)

Yes, someone told us about this video when we first showcased this work. This video and a few more such works that we have discovered since then are linked in Q&A #7 in the readme. :-)

Congrats on this work! I hadn't read the page far enough before commenting, it seems (: Anyway, I hope this offered 13 minutes of fun to some HN readers

The text mentions a talk you gave. Was that recorded anywhere?

We don't have a good-quality recording of this that we could release. I have been looking forward to making one in the future. Thanks for the interest. :-)

Addendum: The text notes inserted within the Excel files partially cover for it, as that's roughly what the talk had, other than the possibility of live Q&A. The Q&A present in the readme is based on questions we have actually been asked. :-)

Another video about what you can achieve with Excel: https://www.youtube.com/watch?v=bkQJdaGGVM8 (HDR Image processing)

This reminds me of a fast.ai video where they use Excel for a CNN.

Found the video: https://youtu.be/gbceqO8PpBg

Yeah, I thought of this as well.

Anyone remember the line of PlayStation (PSX) bowling games? I was director of the studio that wrote "Ten Pin Alley", "Brunswick Circuit Pro Bowling", "Flintstone Bowling" and others. The bowling physics engine was originally written by the founder of the studio in Excel. This is the same guy that made the Vectrex game console (https://en.wikipedia.org/wiki/Vectrex), and he found it easier to work in Excel than the fixed-point math & C compiler for the original PSX.

Spreadsheets are highly hackable sandboxed self-contained runtimes. They’re a really great way to deliver self-contained client-side software that can quickly evolve.

What is the most widely used database? Microsoft Excel. I was blown away when I first learned this fact.

That is sort of like calling a garbage tip a warehouse - yes, there is stuff there, and yes, you can find things if you try hard enough, but you don't ever really know exactly what is there until you look for it, and it could be way better organised.

I would probably lean towards describing Excel as a data store rather than a database, because it doesn't preserve many of the properties that make a database a database, such as ACID compliance. Would anyone disagree?

Yes: ACID compliance is not a prerequisite for what most people, laypeople or not, call a database.

by users or in machines? If it's the latter, it surely is Sqlite?

This makes me wonder... has anyone bothered with hardware-accelerated Excel? Not just graphics acceleration. Seems like something you could do with an FPGA.

I bet Microsoft has an FPGA-accelerated version of Excel in a lab somewhere.

I think the nicest part of this is for people to be able to poke at and inspect every part of the code - very cool. Normally these things are hidden in large loops! Here you can tug on a single thread and follow it through.

Chart-to-data would be awesome. It takes a lot of time to adjust a PNG into an image possible to redistribute with custom branding. It's only surface detection, so it must be possible.

Insightful and commendable effort to explain the complex topic of computer vision and CNNs with a lucid, simple, hands-on, step-wise example in Excel.

The idea of transposing the two-dimensional structure of an image onto the two-dimensional grid of MS Excel is very intuitive.

Interesting how they published a paper on GitHub. I wonder who else will adopt this format.

Anyone who wants their study reproduced!

I once heard a girl say that her dad can prove anything using Excel (she was talking about how her dad raised her and her brother). Every day I'm more convinced she was right.

I'm beginning to wonder who/what does/did more harm to technological progress - Microsoft or aversion to Microsoft?

Some of the stuff they do is incredible, just not talked about.

Which of the two probably depends on the particular situation, as it often is in technology.

Supposedly there was a time period where the Android team was considering .NET/C# instead of Java. In the short term their choice of Java was best for them but in the long run it led to things like the Oracle lawsuit and its threat of changing copyright law forever in a bad way - so it's interesting to consider what an alternate timeline would look like.

IE6 was amazing at the time compared to its competitors, but then it quickly became a hindrance - a good example of how the situation can flip the moment the ecosystem changes.

Don't do stats with excel. It's all wrong, Microsoft won't fix the bugs.

They really kind of earned their reputation in an honest and direct fashion. Aversion to Microsoft works great, doesn't it? Need a spreadsheet? Use gnumeric. Calculation errors are bugs and those bugs get fixed.

> Don't do stats with excel. It's all wrong, Microsoft won't fix the bugs.


See, for example, these slides:


Or more formally papers like: https://doi.org/10.1007/s00180-014-0482-5

Both of which are a bit out of date so some things may have been fixed.

It's generally not a great platform for numeric work, but some things can be improved if you know the issues. For example, the last time I checked (a while ago), things like sum/std/mean would not do anything intelligent with large columns/rows, leading to accumulation errors if you used them naively. You can work around stuff like that if you know it is there... but you will end up re-implementing it, which makes it painful.
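
The accumulation problem is easy to demonstrate, and compensated summation is roughly what you end up re-implementing. A sketch in Python, naive summation versus Kahan summation:

```python
# Naive left-to-right accumulation loses small addends next to a large
# running total; Kahan summation carries the rounding error forward and
# adds it back.

def naive_sum(xs):
    s = 0.0
    for x in xs:
        s += x
    return s

def kahan_sum(xs):
    s = c = 0.0
    for x in xs:
        y = x - c          # subtract the error carried from the last step
        t = s + y
        c = (t - s) - y    # recover the low-order bits lost in t
        s = t
    return s

# one big value followed by many tiny ones: the tiny ones vanish naively
xs = [1e16] + [1.0] * 10_000
print(naive_sum(xs) - 1e16)  # 0.0  (all ten thousand 1.0s were lost)
print(kahan_sum(xs) - 1e16)  # 10000.0
```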

I know of some numerical instability and a faulty or inefficient implementation of the normal distribution formulas (NORM.INV, I believe). Was a while back when I ran into this, so can't give more details.

That being said, basic financial modelling in Excel is killer.

No doubt they have some of the best software out there (not all of it, mind you) - Excel is a marvel. I think where a lot of us get wary is _how_ Microsoft sells their software. They're more than happy to charge you money every year whether the software gets major updates that year or not - and whether you, as a user, need any new features or not.

No human atrocity has yet surpassed the killing of Netscape. /s

People make (made?) same arguments about Google.

Helping Humanity™ doesn't change the fact that too much power/marketshare centralized in any industry is bad for competition/furthering tech in that industry.

Cloudflare and Gmail have become great products - but at what cost?

This is an awesome approach to demonstrate something very complex in an extremely easy way.

This is just awesome

Could someone report how feasible it is to run this in LibreOffice Calc?

It's in the linked readme:

"While the files open in LibreOffice (tested in version (x64)), it is slow to the level of being unusable. We have not tested in Apache OpenOffice."

Woah. What about WPS Spreadsheets?

I perceive it to be much faster than LibreOffice, and just a bit slower than MS Office.

I suppose it could be used as a cool benchmark for trying to optimize LibreOffice

In my opinion this might be because of xlsx compatibility overhead. Using the ods format might speed it up a bit.

Microsoft has had really sharp people working on spreadsheet performance for many years. I remember reading a blog post from I believe Joel Spolsky or someone talking about what excel is doing behind the scenes to achieve high performance and I was pretty impressed.

One example that comes to mind was that spreadsheets are just memory-mapped files, and the layout of the file on disk is identical to the data structures in memory. This allows them to eschew translation to a data interchange format. So they got performance at the cost of interoperability, which is probably what's hampering OpenOffice & friends.

That’s certainly history, if you use a modern file format such as .xlsx, and, likely, also if you use the old format.

Microsoft likely changed several in memory structures when Excel went 64-bit, if not earlier.

One thing that Excel does is multi-threaded recalculation (https://docs.microsoft.com/en-us/office/client-developer/exc...)

> Microsoft Office Excel 2007 was the first version of Excel to use multithreaded recalculation (MTR) of worksheets. You can configure Excel to use up to 1024 concurrent threads when recalculating, regardless of the number of processors or processor cores on the computer.
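
As a toy model of what multithreaded recalculation buys you (this is not Excel's actual engine, just an illustration of the idea): cells whose inputs are all ready can be recalculated concurrently, level by level of the dependency graph. Cell names and formulas below are invented:

```python
import concurrent.futures

# Tiny dependency-driven recalculation: each pass, every cell whose
# dependencies are already computed is submitted to a thread pool, so
# independent cells recalc in parallel.

def recalc(cells, values):
    # cells: name -> (list of dependency names, function of their values)
    remaining = dict(cells)
    with concurrent.futures.ThreadPoolExecutor() as pool:
        while remaining:
            ready = {name: (deps, fn) for name, (deps, fn) in remaining.items()
                     if all(d in values for d in deps)}
            if not ready:
                raise ValueError("circular reference")  # Excel's #REF! moment
            futures = {name: pool.submit(fn, *[values[d] for d in deps])
                       for name, (deps, fn) in ready.items()}
            for name, fut in futures.items():
                values[name] = fut.result()
                del remaining[name]
    return values

cells = {
    "B1": (["A1"], lambda a: a * 2),
    "B2": (["A1"], lambda a: a + 5),       # independent of B1: same level
    "C1": (["B1", "B2"], lambda x, y: x + y),
}
print(recalc(cells, {"A1": 10}))  # {'A1': 10, 'B1': 20, 'B2': 15, 'C1': 35}
```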

Somewhere, there is probably someone running hundreds of threads for excel (likely in a beefy VM/VDI). It is probably wired so deep into their business that they are afraid to move to other methods (that are more scalable). But such is the power of excel. What you see is what you get is not to be underestimated.

IMHO there's no reason to memory-map the interchange format itself.

I would predict/expect that both LibreOffice and MS Office (with their modern XML-based formats) are actually mmap(2)ing some temp file and treating it as an "on-disk working-state heap", and then importing from interchange formats by allocating from that heap / exporting to interchange formats by chasing pointers that end up inside that heap. (This is, after all, what every RDBMS does for its working state. It's pretty optimal.)

Even if you have a memory-mapped interchange format, I'd still expect them to have a separate disk-backed working heap for all the stuff that doesn't belong in the file but is nevertheless very large (e.g. cached intermediary computation results of spreadsheet cells); and, if they have it, they may as well just use it for most things by default. Thus, I would expect that even in old versions of MS Office, the in-memory data structures were actually an interchange format of sorts—not the ones being updated with each keystroke, but rather ones that'd be memcpy(2)ed into on export. (This also prevents you from either having to add a page-table structure to your file, or else constantly "defragment" it as data structures change.)
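
To make the speculation concrete, here is a minimal sketch of the disk-backed working-heap idea in Python's stdlib mmap; the layout and offsets are invented and bear no relation to anything MS Office actually does:

```python
import mmap
import os
import struct
import tempfile

# Working state lives in a memory-mapped temp file: updates are in-place
# writes to mapped pages, and "persisting" is a flush rather than a
# serialization pass through an interchange format.

fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)                      # reserve one page of "heap"
with mmap.mmap(fd, 4096) as heap:
    struct.pack_into("<d", heap, 0, 3.14)   # write a cell value in place
    struct.pack_into("<d", heap, 8, 2.72)   # another, at a fixed offset
    heap.flush()                            # persist with no export step
    a, b = struct.unpack_from("<dd", heap, 0)
    print(a, b)  # 3.14 2.72
os.close(fd)
os.remove(path)
```

Exporting to an interchange format would then be a separate walk over this heap, which matches the "memcpy on export" picture above.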

Not sure why you were getting downvoted; these are pretty reasonable comments. The database example and use of B-trees you mentioned is a good one.

It’s definitely possible to create a performant and portable document specification and others have.

I just strongly suspect that the performance issue that libre office and others have is more of a manpower issue and not having equivalent resources and knowledge of the excel formats rather than some shortcoming in ms’ own file formats.

> One example that comes to mind was that spreadsheets are just memory mapped files and the layout of the file on disk is identical to the data structures in memory. This allows them eschew translation to a data interchange format. So they got performance at the cost of interoperability, which is probably what's hampering open office & friends.

That improves save/restore performance, but in and of itself doesn't do much about execution performance of macros.

It would help program runtime as well, because the macro could otherwise be operating on data not yet loaded in memory. Not having to marshal and unmarshal data saves a lot of execution time, especially in the face of non-contiguous reads/writes. Databases store information using B-trees, for example, so that they can calculate the offset of the data and jump directly to it. It would probably take a lot of gymnastics to get that from an XML or JSON interchange format.

I'm sure they could probably come up with something that is both portable and performant but it's probably not a big priority at Microsoft.

OP co-author here. It could be worth a shot to try (I'll do when time permits).

However, I doubt it would help, given how often LibreOffice was freezing with it (without crashing). My suspicion is that the formula recalculation is triggering more often than it should, and I could not find any way to prevent it. Excel provides options for manual triggers; if Calc does too, that might just solve the problem.

PS: I am not deeply familiar with LibreOffice. :-)

Also Microsoft makes tens of billions of dollars per year from Excel and even if only 1% of that goes back into performance, that’s a lot of optimization.

Yeah! Shame it's actually 0% :-(

Is that a just a knee-jerk joke?

We are in a thread about rendering an image by zooming out enough on an excel spreadsheet and then manipulating it in near real-time by applying a formula on the cells.

It's pretty damn impressive! I get it's 'cool' to hate MS but seriously...

How much better, faster, more efficient is it than gnumeric?

Having been paid to do that task in a prior life, I can assure you it's greater than 0%.

A prior life, I would believe. "Was greater than" might be more accurate, right?

It's not like I stopped all contact with friends there. They still have people working on performance.

I'm not trying to be rude to you. I'll believe that you worked there, worked on Excel, maintain contact with people still there, and that people are "working on performance" - and you can believe me that we just can't see the results of that work.

But yes, I'm a bit jaded. I think you'll understand why that is too. The vast fortune in revenue that doesn't fix bugs put me off excel in a big, big way. That was 2008 or so.

Here's Andrew Gelman in 2013 on the topic:

"Microsoft has lots of top researchers so it’s hard for me to understand how Excel can remain so crappy. I mean, sure, I understand in some general way that they have a large user base, it’s hard to maintain backward compatibility, there’s feature creep, and, besides all that, lots of people have different preferences in data analysis than I do. But still, it’s such a joke. Word has problems too, but I can see how these problems arise from its desirable features. The disaster that is Excel seems like more of a mystery."


We've heard from Microsoft so many times that they have people working on all those bugs too. I remember the sheer disappointment in testing the newly fixed rand() function, after all the fanfare, by filling a page with =rand(), conditionally formatting cells red when negative, and seeing the page turn largely red after a couple of F9 recalculates.
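
That sheet-full-of-=rand() check translates directly into a property test. The "broken" generator below is a hypothetical stand-in for the buggy behavior described, not Excel's actual algorithm:

```python
import random

# Fill a "sheet" with samples from a generator and count range
# violations -- the programmatic equivalent of conditionally formatting
# negative cells red and hitting F9.

def broken_rand(rng):
    # invented faulty generator that can stray below 0
    return rng.random() - 0.001

def count_out_of_range(gen, n=100_000, seed=42):
    rng = random.Random(seed)
    return sum(1 for _ in range(n) if not (0.0 <= gen(rng) < 1.0))

print(count_out_of_range(broken_rand))          # > 0: fails the check
print(count_out_of_range(lambda r: r.random())) # 0: stays in [0, 1)
```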

I simply don't believe that excel programmers are idiots and have been for 15 years.

That leaves "they say they're working on it, but they are not, not really" at the top of my list of alternatives. Is there a better one you can suggest?

Enlighten me please, how does this epic cluster of fudge happen? You were probably there while it was ongoing.

Parts of Excel work well; /all/ the stats should have been removed 15 years ago as unfit for purpose, given zero will, ability, or effectiveness in getting a fix and making good.

Excel, just don't. That is a pretty reasonable response, don't you think? Yeah it's sad. I don't relish it but let's not pretend it's all ok, yeah? But I guess the masses of revenue keep coming in so I guess it is all ok from microsoft's point of view. Are you ok with that yourself?

Um, there isn't an easy way to say this, so I'll just say it: calm down.

I'm an Excel and VBA guy. The IDE hasn't been updated since Office 97. It's not great.

But the thing that Microsoft understands is that people buy your software if it doesn't break their workflow. Backwards compatibility is the most compelling feature when you've got an install base in the millions.

Now, they have fucked up. A lot. There is a bug that counts 1900 as a leap year. The statistical functions don't work. They can't dump VBA no matter how much they want to. They tried with VSTO and officejs, but nobody is buying. I get it. They are stuck, and the only real way out is to break compatibility. But haven't we seen what happens when you go down that road? Python 3. Perl 6. Acrimony. Discord. And for what? Your spreadsheet breaks and you have to debug it? What if you have a spreadsheet that's never been documented, with a million formulas? You probably have a day job; you need your tools to work, and Microsoft understands that.

The 1900 leap year bug is there for Lotus 1-2-3 compatibility. Nobody cares about that now, but it was critical in the beginning - imagine moving to Excel and all your dates being off by a day.

These days it sticks around because of compatibility with older versions.
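
For the curious, the quirk is easy to make concrete: Excel's date serials pretend Feb 29, 1900 existed (serial 60), so a correct serial-to-date conversion needs a branch. A sketch, assuming the standard 1900 date system:

```python
from datetime import date, timedelta

# Excel serial 1 = Jan 1, 1900; serial 60 = the fictitious Feb 29, 1900
# (the Lotus 1-2-3 compatibility bug), so serials from 61 onward are
# shifted by one day relative to a naive count.

def serial_to_date(serial):
    if serial == 60:
        raise ValueError("serial 60 is the fictitious 1900-02-29")
    if serial < 60:
        return date(1899, 12, 31) + timedelta(days=serial)
    # after the phantom day, the epoch is effectively a day earlier
    return date(1899, 12, 30) + timedelta(days=serial)

print(serial_to_date(1))   # 1900-01-01
print(serial_to_date(59))  # 1900-02-28
print(serial_to_date(61))  # 1900-03-01
```

Fixing the bug would shift every serial from 60 up, which is exactly why every date "rolls back a day" if you correct it.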

Saying "calm down" like that is spectacularly rude and derogatory; did you mean to, or was that an error? If you want someone to get angry, that's exactly how you do it, fwiw. Anyway, maybe you're not just trying to turn it into a flame war? Sure, ok. Let's address those points as coldly as I can manage.

1. "The Ribbon": the most useless, workflow-breaking, unwanted garbage change I've ever encountered. No exaggeration. A change Microsoft dropped resources into instead of fixing the bugs, sucking time from users and upping their stress and frustration levels. It's the poster child for workflow breaking, and a higher Microsoft priority than fixing bugs. Contrary to your claim, they clearly understood that they had enough market power to force it on users, breaking their workflow, and make the users pay for it.

2. Microsoft loves breaking compatibility. Every damn upgrade of Office somewhere else, by some other customer, meant you couldn't open a spreadsheet containing a single column of numbers because it was "incompatible". You had to request they convert and re-send it, if they knew how, and probably you or your employer would be forced to upgrade to avoid that hassle, while getting slugged with the Ribbon you didn't want.

There's no upgrade treadmill anymore; now they can just charge you yearly without having to play that awful game. They don't have to fix bugs either - as we agree, they don't. They choose not to. So that needs to be pointed out every time it comes up, to counter a little bit of the horrible stealth of it.

They did /try/ to fix rand(), with much fanfare, by breaking it differently - with no version issues. It was broken, it is broken, never use it. And that's the only stats function I'm aware of that they even tried to fix. Really? Is "maybe some people are dependent on getting wrong results and allocating resources based on error" really the excuse for not putting a tiny part of that mountain of money into not kicking customers? It's objectively awful. Compatibility with the utterly wrong means anybody relying on that wrongness has a massive issue.

1900 isn't a leap year, and dates are stored as the number of days since Jan 1, 1900, so fixing it would roll every single date back a day; hilarious, and everyone forgives that, including me. The stats functions are not like that at all and do not require version changes, just the will and resources to fix them. What is needed is to actually care.

So that leaves your quote:

"They have fucked up. A lot."

That one stands up. But I don't think you've really embraced the depths of the disaster that is Excel and why we should encourage everyone to avoid it. Because (to paraphrase) the error is quite deliberate. It's not worth it to them to fix. They know the bugs are there. They know the bugs are material. They know the bugs stop the software from being fit for purpose, but they'd rather not spend the resources, which they could quite easily. They give software a bad name with that attitude. Are you really happy with it? Really? How much harm do those bugs do every day, in your opinion?

Excel, just say no. Really. I'm sorry if you hate hearing that and you were proud of work you did there or whatever. I was pretty bummed when I came to that realisation myself.

Microsoft earned a reputation with regard to Excel, and they maintain it, even if it seems they don't maintain the actual software.

This sounds a bit like Comic Book Guy. For every person who has the opinion that Excel is “so crappy” there are probably 100 like myself who place it in the highest category of application software quality, with applications like Gmail and Adobe After Effects. You can prepare a list of weaknesses of any of these, the same way you could write about all of the mistakes that Michael Jordan made in basketball games.

Accessibility has a quality all of its own. Literally tens if not hundreds of millions of people can do computational tasks they otherwise would not be able to do.

Considering the "every(wo)man" approach, its ability to be useful is near genius; just because your Volkswagen won't get you to Mars doesn't make it crap.
