But there are still massive pain points and obvious improvements that could be made. For instance:
- linking between Excel and PowerPoint. Almost anyone who presents an analysis, whether at a bank, an accounting firm, or a consultancy, will prepare tables and charts in Excel and want to use them in PowerPoint. The link between the two is unstable, breaks the formatting, or requires lots of manual steps.
- Microsoft should really look at Apple’s take on spreadsheets in Numbers, where a sheet is a canvas onto which you can place tables and charts, and tables get scroll bars if they overflow. Compare that with Excel’s rigid single grid per tab, where showing two tables one above the other forces you to align their columns even when they hold different data types.
Another thing I'd like to see is the ability to treat cells as objects rather than single scalar values, for example to represent data that has more than two dimensions.
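For illustration only, here's a tiny Python sketch of what "cells as objects" could mean. The `Cell` class and sheet layout are hypothetical, not anything Excel actually exposes:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Cell:
    # Hypothetical cell whose value may be a scalar, a record, or an array,
    # rather than being restricted to a single scalar.
    value: Any

# A made-up sheet keyed by (column, row); one cell holds a whole 2-D array,
# which is one way to get past the two dimensions of the grid itself.
sheet = {
    ("A", 1): Cell(42),                               # plain scalar
    ("B", 1): Cell({"ticker": "MSFT", "px": 430.1}),  # record in a cell
    ("C", 1): Cell([[1, 2], [3, 4], [5, 6]]),         # array in a single cell
}

# A formula over an object cell could then address fields, not just the value.
px = sheet[("B", 1)].value["px"]
print(px)  # 430.1
```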
Add a reference to Microsoft.Office.Interop.Excel, and get a handle on the Excel Application object using ExcelDna.Integration.ExcelDnaUtil.Application.
Alternatively you can use the excellent NetOffice.Excel (instead of Office.Interop.Excel), which has less functionality but better version support.
What I've found with interop is there's a little more debuggability/edit-and-continue friction that can accumulate into a large loss of time during development. Maybe that loss is offset, or more than offset, by doing the interop with a vastly superior language. I'm not sure. Any thoughts about this trade-off?
It sounds crazy, but the CTO is a cofounder and he found it is much cheaper to just do everything that way. They have been running like that for over 20 years.
In my own time, I've visited manufacturing plants where I've seen customers send the factory a spreadsheet containing nothing but a VBA project to exercise a (hardware) test jig for quality control.
This just reminded me that when I was young, I couldn't afford VB5/6, so I would program in VBA in the office products.
It had once been company policy that engineers didn't have compilers or other programming tools on their computers (because "we have programmers for that"). But engineers are a resourceful lot, and they did have Excel with VBA.
On the “we have programmers for that” front, I see another culprit appearing: IT security, who decided unilaterally that everyone else in the 100k-person organisation only needs Office, and who try to impose security solutions based on that assumption. That includes application whitelisting, automatically encrypting all Office documents (so non-Office programs can’t consume spreadsheets anymore), etc.
I hope that was a long time ago. That's the most idiotic policy I've ever heard. :-)
Contrast that with a twenty-something CS grad who, while competent, isn't as smart as that non-tech guy, and has been indoctrinated in the CS way of doing things w.r.t. test coverage and unit tests, but often lacks insight into what the program is actually trying to do. Frankly, I too would choose the former over the latter.
They would turn into a huge mess unless the coder is very diligent and understands every part.
CS 'indoctrination' is not a failure of education. Surgeons have been 'indoctrinated' to know pre-op procedures.
EDIT: Let me explain my thinking a bit more - we can try to categorize software development into two categories:
1. People write messy, unorganized code that is often in the head of just one or a few folks, who are smart but unorganized and often without formal CS training. The code often has almost no tests and doesn't follow any of the standard best practices of software engineering. The code is often impossible to pass down to new people, often ending up forcing the new people to redo a large fraction of the work.
2. People write clean, modular, testable code with good unit and integration tests, a robust build framework, etc. The code is written in a manner that's super easily transferable; most devs don't even have to understand the entire codebase to start meaningfully contributing.
Obviously, the second category is the preferred category. A good SDE with a CS bg should follow (and often do follow) the second method. However, category 2 could, at least from my experience, be split into two sub-categories:
2A. The framework for both the code and the dev ecosystem was laid down by really good engineers (often just one) who think through what the problem really is and make sure that the fundamental structure and architecture of the codebase work toward solving that real-world problem. This kind of code is an absolute pleasure to work with and extend.
2B. The framework and the majority of the code are written by average engineers; often the first few engineering hires in the company lay the groundwork and make poor design choices, and the engineering team that follows never wants to change anything fundamental because that's "tech debt" the company can never afford to step back and look at. The average engineers have a good heart, but often their test cases never test real-world edge cases, they often don't even remember the architecture of the code they themselves wrote a few months back, and the code breaks all the time. Furthermore, the engineers will generally balk at adding any new feature because the codebase has fundamentally evolved into something that just cannot be extended without significant rework, and often they can't even see how they could rework it to add the required feature. In the end, good SDE practices and testing don't do shit if the person who wrote it didn't think hard enough.
Here again, 2A is the preferred method of doing dev, but unfortunately, the thoughtful smart SDEs aren't that many, and would often be found in a well-paid job in a big company. Most regular devs can't step up to that level and the end result is 2B happens.
Now the question is, which is the lesser of the two evils if they are the only choices: 1 or 2B? I'd choose 1. My experience has been that while the scrappy code is unmaintainable by anyone new, at least the guys who wrote it (assuming you can keep them long enough) will own up to it and make sure it keeps running, and they can at least try (and, practically, generally succeed) to ship a new feature, as opposed to the 2B case where the categorical answer would often be no.
It's fair to say "a program not covered by unit tests can still be good enough to get the job done" (and to suggest that this is under-appreciated), but not having sufficient test coverage significantly raises the barrier for those not familiar with the code ("noobs") to contribute.
(EDIT: I wrote my comment before parent's edit)
At the end of the day, unit tests or not - you have to understand the code you are working in. This means new programmers will not deliver features as fast as those who have been working on the project for a while. They are going to need to learn the code base regardless of whether it has tests or not.
I have many similar anecdotes. Having a good understanding of the underlying domain and what numbers to expect is critical to writing correct models, and too often I see dev teams in banks that have no understanding of, or curiosity about, what they work on.
This is a very predictable outcome when a requirements doc is tossed over the wall followed by patient waiting for the output.
Again, it happens all the time, so maybe this advice isn't enough or isn't practical, but there is a well-recognized solution, and I would wonder whether it was tried on this project.
The people saying devs don't make enough effort to understand the domain are the same people who try to keep the "domain" overly complicated to justify their jobs/positions.
I think the fault is actually the split between IT (a support function) and the product group, which often doesn't have/isn't allowed embedded devs.
IT managers are often toxic for delivering, never take ownership, SLA bullshit, too much PRINCE2 and ITIL.
If you can get developers embedded, huge wins are possible. But they have to bring the client with them, not code away in isolation.
I also figure the company you refer to gets some real benefit from focusing on one technology everyone knows.
But how in the heck do they manage a gate in a spreadsheet?
They work but they are not pretty and not easy to maintain. VBA is technically a "complete" language (I want to say Turing-complete but that is not a meaningful trait), so it is possible to do a lot with it, but one ends up having to re-implement (sometimes badly) stuff found in other languages in order to write the main parts of the code.
Part of what makes VBA deceptively easy is the control over the interactive elements of Excel (a lot of stuff is done with the Range object), but unfortunately that also introduces state you can't always control, which entails writing extra checks to make sure the state is correct before proceeding. This is especially true if your users are on different versions of Excel (I once wrote something in 2010 that doesn't work in 2016).
There are now other options like QueryStorm, which lets you write C# in Excel and connect to SQL databases.
There are also a bunch of Python-Excel solutions that are based on manipulating COM objects, but I've learned that when dealing with a Microsoft stack, there are advantages to using Microsoft-native languages like C#.
Coming back to the article, it mentions adding arrays, vectors, and records to Excel itself; this will make Excel much more powerful, because Excel has traditionally been a cell-based computation system, with disadvantages that higher-level abstractions like vectors and tables overcome.
It also mentions writing Excel functions in a first-class manner instead of relying on a separate procedural language like VBA. Operations on arbitrary-sized arrays will also help Excel transcend its issue of operating on fixed-size arrays; this will clean up a lot of very messy formulas.
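As a rough Python analogue (not Excel syntax), here is what a dynamic-array formula in the spirit of `=SORT(FILTER(...))` computes: the output size follows the data rather than a pre-sized output range.

```python
# Input column of arbitrary length (in Excel, think of a spilled range).
sales = [120, 85, 300, 42, 210]

# FILTER then SORT: keep values meeting a condition, then order the result.
# The output length is data-dependent, so no fixed-size output range is needed.
result = sorted(v for v in sales if v >= 100)
print(result)  # [120, 210, 300]
```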
These developments will be interesting to watch, because they bring Excel much closer to a true functional computing system, and closer to Quantrix territory.
Quantrix is a multidimensional spreadsheet and a commercial successor to Lotus Improv.
Curious. Can you elaborate?
One of my collaborators (Ben Zorn) is one of the people featured in the linked article. We’ve been working on visual debuggers (and other things) for non programmers in Excel for a few years now.
There are also odd little things that Python database libraries don't implement, like SqlClient.SqlBulkCopy. Native bulk inserting is generally unsupported in Python, so I either have to spit out a CSV and use bcp.exe to bulk insert, or use TurbODBC, a Python library with limitations of its own (like not supporting datatypes such as VARCHAR(MAX) or GUID).
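A minimal sketch of that CSV-plus-bcp fallback. The server, database, and table names below are placeholders, and the bcp command is only built here, not executed:

```python
import csv

rows = [(1, "alpha"), (2, "beta")]

# 1. Spit out a delimited file shaped the way bcp expects.
with open("staging.csv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="|")
    writer.writerows(rows)

# 2. Build the bcp invocation (placeholder names throughout).
cmd = [
    "bcp", "MyDb.dbo.MyTable", "in", "staging.csv",
    "-S", "myserver", "-T",   # server name + trusted connection
    "-c", "-t", "|",          # character mode with a pipe field terminator
]
# import subprocess; subprocess.run(cmd, check=True)  # on a box with bcp
```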
But, less hardcore, also my own projects; picturepush.com and flexlists.com, for instance, have been running for more than 10 years, making me money. Flexlists was hacked/down once because of a stupid fix gone wrong (I just winged it before taking off on a plane). I have many of those; downtime etc. literally costs me money and clients. But running it multi-region on AWS etc. would cut massively into my margins, and, as I see in the many companies I do C-level interim/consultancy for, it actually results in more downtime at most companies because of more moving parts.
I carefully study OS updates and only apply them if they fix an exploitable issue, which is almost never the case, as most exploits actually require server access. The hack I mentioned was my own fault and showed me that I must apply the same demands I make of the systems I write for clients to my own. I am currently redoing (some of) my applications in C#, F#, TLA+ and Coq.
My goal as an engineer is to only need to update the frontend to current standards and let the rest be untouched. I have managed so far.
There are clever ways to answer these questions but if you condescend to Excel as a development platform, and believe you can't build anything great with it, you may never brainstorm enough to uncover them.
And I've got to say, they made a great choice. He's one of the most effective communicators I've ever seen when it comes to teaching new ideas.
- arrays and objects living in a cell
- mix of spreadsheet and textual views
- functions in spreadsheet cells, and running spreadsheets as a messaging service (as a lambda function / API)
- dynamic array sizes
To be honest, mobile is not a focus as I think it's hard to be productive without a proper keyboard. I'd like to make the iPad + kb experience better in the long-term though.
Thankfully IE11 has Office interop via COM, so Mesh (or a site embedding Mesh) can talk to existing spreadsheet investments.
And on the locked-down front, Mesh doesn't require IT permissions to run (unless JS is disabled in your browser).
 "Spreadsheets are Code" - https://www.youtube.com/watch?v=bdfNvYPxkOY
If you’ve got any feedback, please let me know! I’m david@. Thanks!
However, with great power comes great responsibility ...
Unfortunately for Excel, it’s essential to Microsoft’s product strategy for keeping people on Windows, which means that Excel on other platforms feels almost as if it were sabotaged compared to the “native” experience in Windows. I feel like Excel could really thrive if only it could separate itself from the mothership and focus on its own interests.
That way my people can still use Excel but I have a way of validating and controlling it.
Hint: as far as I remember, the compression ratio is 0 (stored); it's best to keep it that way on re-save, for compatibility.
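If it helps, here's a small Python sketch using the stdlib zipfile module to check for stored (compress_type 0) entries; the file names are made up:

```python
import zipfile

# Write a sample archive with one entry kept uncompressed ("stored").
with zipfile.ZipFile("sample.zip", "w", compression=zipfile.ZIP_STORED) as z:
    z.writestr("payload.bin", b"raw bytes, not deflated")

# Inspect: compress_type 0 (ZIP_STORED) means no compression was applied,
# so a re-save should pass ZIP_STORED again to keep the entry that way.
with zipfile.ZipFile("sample.zip") as z:
    kinds = {info.filename: info.compress_type for info in z.infolist()}
print(kinds)  # {'payload.bin': 0}
```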
I knew the starting and ending units, but Google calculator and Wolfram alpha have made me sloppy.
Microsoft: Let's make better spreadsheets
Has anyone integrated Excel with a db like PostgreSQL?
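Not PostgreSQL specifically, but the usual pattern is to dump a query into something Excel opens natively. Here's a self-contained sketch using sqlite3 as a stand-in; for PostgreSQL you'd swap the connection for e.g. psycopg2 and keep the rest (table and column names are invented for the example):

```python
import csv
import sqlite3

# Stand-in database; with PostgreSQL this would be psycopg2.connect(dsn).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER, ticker TEXT, qty INTEGER)")
conn.executemany("INSERT INTO trades VALUES (?, ?, ?)",
                 [(1, "MSFT", 100), (2, "AAPL", 50)])

# Dump a query straight to a CSV that Excel opens with a double-click.
cur = conn.execute("SELECT id, ticker, qty FROM trades ORDER BY id")
with open("trades.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([d[0] for d in cur.description])  # header row
    writer.writerows(cur)
```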
There is a company called DataNitro that sells a Python add-on for Excel which lets you use Python instead of VBA. It works pretty well, but I no longer use it.