To build this properly was going to take our small team at least a couple of months. The sales people didn't like that. I had an idea, however: extend a SpreadsheetML parser I'd written a few years prior with a formula evaluator. I had a prototype working within a couple of days. I reimplemented a large number of Excel functions, warts and all, and since Excel saves the last computed values alongside the formulas, it was easy to test automatically. It was the first project that really taught me the value of unit testing.
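In case anyone wonders what that testing trick looks like in practice, here's a minimal F# sketch of the idea. The `Cell` type and the `evaluate` function are invented stand-ins, not the actual tool:

    // Each SpreadsheetML cell stores its formula next to the value Excel
    // last computed for it, so every saved workbook is a test fixture.
    type Cell = { Formula: string option; CachedValue: string }

    // Compare our evaluator's answer against Excel's own cached answer.
    let checkCell (evaluate: string -> string) (cell: Cell) =
        match cell.Formula with
        | Some formula -> evaluate formula = cell.CachedValue
        | None -> true   // literal cells have nothing to verify

    // Usage sketch: every formula cell in a workbook becomes a test case.
    // workbook |> Seq.forall (checkCell myEvaluator)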
It was a fun project and took around a month in the end, which was still over the time sales had given us. The client actually sent us a fair number of revisions to the spreadsheet, such that had we built it out any other way it would have been a major project, but it was as easy as swapping out the spreadsheet the tool read. I left the company a few months later, and I was sad to hear from coworkers that they never did anything else with the tool. I thought it was a really neat hack.
The main downside is that the software development culture in these companies is at least a decade behind the state of the art, and you will have to deal with weird things like TFS 2012 and no one having heard the phrase "Unit Test". On the upside, since there is no software development culture in place, there is less argument when you tell them you're moving all your code to GitLab and want to use F#.
Being limited to only Office for a few months has not made me think "how horrible it is not to have real tools", but rather "how amazing it is how much money and effort people waste on 'enterprise solutions' that are both orders of magnitude more expensive and inferior".
And this was something that the business had to do daily - you can imagine how effective it was to start the calc in the morning, have it hog all your CPU for hours, and then get the results after lunch, if you were lucky and did not have to restart the whole shebang.
Yes, Excel is brilliant at communicating with business users and getting stuff done quickly. But it does not scale and has some problems with the normal software development flow - just try to put your Excel files in version control and you'll see what I mean...
I was once handed a T-SQL script that produced a report on a few thousand items in a document management system. I was asked to use it to get statistics on a few hundred databases with millions of documents each, which would have taken a few centuries at the speed it ran. So I looked at it and realized the core was a procedural loop that ran a separate SQL query for every row of output. Once the procedural part was replaced with one SQL query, it ran thousands of times faster, and then all that remained was to replicate it across a bunch of databases and build the report in parts.
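For those who haven't met this pattern, here's an in-memory F# sketch of the same rewrite (data and names invented; the real fix was in T-SQL). The slow shape does one scan per output row; the fast shape is a single set-based pass, like collapsing the loop into one GROUP BY:

    let documents = [ (1, "inbox"); (2, "inbox"); (3, "archive"); (4, "inbox") ]

    // Slow: one full scan of the data per distinct folder (the N+1 pattern).
    let slowCounts =
        documents
        |> List.map snd
        |> List.distinct
        |> List.map (fun folder ->
            folder, documents |> List.filter (fun (_, f) -> f = folder) |> List.length)

    // Fast: the same answer in one grouped pass.
    let fastCounts = documents |> List.countBy snd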
My rule of thumb is that if anything is too slow these days, it's because something is terribly misconfigured or something is severely wrong with the algorithm.
Let me give you another perspective: one of the reasons the Enron scandal happened was that traders at a major Canadian bank, my former employer, kept their trades in desktop Excel, and audit and risk had no way of knowing the actual risk to the bank.
When I was hired to build an expensive enterprise app that allowed only a limited set of financial models, every single user hated it at first: it was both "orders of magnitude more expensive and inferior". But the bank now has full up-to-the-minute knowledge of its risk exposure, and the system has proven itself.
The kind of alternatives I was questioning are BI reporting solutions - I have especially painful memories of QlikView, and I just googled the licensing fees; they are insane.
I think the solution is rarely to automate with VBA - however, using Excel with a little bit of VBA can be a useful first step in prototyping what a solution could look like.
But people do waste a ton of money on enterprise solutions that are just overkill for a given problem. This usually occurs in organizations where management has no real technical background and those who do don't have any real authority.
They're still here. When we had to generate reports for municipalities and ingest new data from non-technical sorts of people, what did we choose? Excel 2003 XML. Because it's what we could import easily.
I feel ashamed at how much we put some of the smartest and hardest-working people to work on yet another CRUD REST API.
Why? Excel is inherently poorly specified and the file format support matrix is a bit… unwieldy. Interoperability is terrible, any sort of multi-user use case becomes an exercise in archaic procedures, and the various scripting languages, while powerful, are janky as fuck.
Cannabis tech is where these problems are...
And I have a big network and some amazing work being developed...
I have a beautiful problem space for you: the newly legalized cannabis market.
(BTW, I'm doing a cannabis tech consultancy, and for anyone who is interested in working in this space, the problem scope is very interesting)
Please contact me.
> "The final spreadsheet application is quite simple"
Is it? I easily understand the logic of the F# code, but I couldn't make another "similar" application so easily unless I was just copying the dependencies. I wouldn't really know what I'm doing; maybe it's not so simple!
(this isn't meant as a criticism of the article itself, I just wanted to share this uneasiness I get when checking some code I don't know much about that's presented as "simple"/"short"/...)
"Look how easy it is to make a text editor!" - Drag the "text editor component" over a blank form. Look how easy it is to make a web browser! - Drag the "web browser component".
Or, maybe it was in PHP: "As an example, here's code to format an integer as Roman numerals." Oh, that sounds interesting... I was expecting a cool algorithm, then discovered it's (for some reason) a standard library function.
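(For contrast, the "cool algorithm" isn't long either; here's an F# sketch of the usual greedy-table version, not the PHP in question:)

    // Greedy subtraction over the standard numeral table.
    let toRoman n =
        let table =
            [ 1000, "M"; 900, "CM"; 500, "D"; 400, "CD"; 100, "C"; 90, "XC"
              50, "L"; 40, "XL"; 10, "X"; 9, "IX"; 5, "V"; 4, "IV"; 1, "I" ]
        let rec go n acc =
            match table |> List.tryFind (fun (v, _) -> v <= n) with
            | Some (v, s) -> go (n - v) (acc + s)   // take the largest numeral that fits
            | None -> acc                           // n has reached 0
        go n ""

    // toRoman 1987 = "MCMLXXXVII"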
But it is easy to make those things using the tools you mentioned, and for the problems they were intended to solve, these systems work perfectly fine without you needing to know how they work underneath.
However, sometimes you need more than that, and you do need to open the box, see what is inside, and learn how to work with it. Both VB and Delphi allowed that, and at least Delphi had extensive documentation (and full framework source code) covering all aspects of it. VB was a bit different in that it didn't have documentation for itself, but as it was built on full Win32/COM tech, the knowledge was still part of the Microsoft tech stack - e.g. the MSDN CDs in VS6 had everything you needed to know - though Delphi was probably better on that front as a self-contained tool. And escape hatches were placed all the way down: getting a window handle was just a property in both systems, you could call Win32 API functions directly from both, interfacing with DLLs was trivial, and making custom controls was also trivial, at least in later versions of both systems.
The last time I used PHP was many, many years ago, so I cannot judge that, but from what I remember it wasn't like VB6/Delphi, which built on top of an existing system (for all their nice facades, they were inherently tied to Win32 - an issue that still exists for LCL/Lazarus, which reimplements the VCL as a cross-platform library using native widgets); instead it was a system of its own that provided a raw(ish) interface to existing libraries through its module system.
This is a situation where building a new application with handmade layers that produce the outcome you want is more efficient than gluing together existing code.
You're coming at a complex problem by initially filtering the solution space. This discards much of the potential solution space; the average case in the full space may be more expensive than the average case in the space after culling, but in many situations the discarded space contains gems which are more efficient. Companies' tolerance for selecting those rather than using existing technology varies.
More often than not you would just use an existing product if you are not going to add any value.
But I get the example about roman numerals, editors and browsers.
I would consider it a browser that used an engine. Nowadays everyone but Firefox is using WebKit... not much of a difference.
Curious how you would do that today without VB.NET.
If it's a complicated problem that can be boiled down to a simple API, I'll reach for it for sure. But I never have good luck with complicated problems wrapped in a complicated API. Usually I only need to solve 5% of the problem, so I'll either find a library that focuses on that 5%, or I'll code it myself.
You might argue that if I tried to become an F# developer I would get to know most of those on days one and two, and I wouldn't have to worry about many of them as they are just auto-generated or whatever. And I would agree. But those are the files that exist on the repository and that the developer needed to make this project work, despite making a "toy project". And they are there for a reason, and are related to other, way more complex pieces of software. Not like there's anything inherently wrong with it, but that's the "hidden" knowledge and complexity I was commenting about. Not something that's particular to F#, though.
So what's the point of the comment? You could make this exact same comment almost any time someone posts any cool project they made. 'Look at all the dependencies! Project tooling, IDE, runtime platform, version control, libraries.' It doesn't add anything useful to the discussion about a specific project.
I never criticised tooling, libraries, etc, themselves, I even made that explicit a couple times, and it's true I've said that it might happen in all languages. That still doesn't mean it's equally bad in all languages, or that it can't be improved, etc. I wasn't trying to start a discussion about this, it just happened. It was just a thought that I transcribed as a comment, in case it gave something to think about to someone.
Software complexity exists, yes. But in your original comment, you did say: 'the code is actually simple and short, the logic is easy to follow...' I think the fact that modern software is built on a complex base shouldn't take away from when it accomplishes something in a simple, elegant way. Abstractions are after all the key enabler of software systems. Criticizing them and the complexity they sit on top of, seems like missing the point.
For example, the Excel blog post is using virtual-dom directly (without higher level libraries like React) and getting that to work was a few lines of simple code with one mutable variable, but that's all you need to get a nice declarative Elm-style architecture.
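Roughly this shape, as a self-contained sketch (console output stands in for the virtual-dom diff/patch step; this is not the post's actual code):

    type Model = { Count: int }
    type Msg = Increment | Decrement

    // A pure update function: old model in, new model out.
    let update msg model =
        match msg with
        | Increment -> { model with Count = model.Count + 1 }
        | Decrement -> { model with Count = model.Count - 1 }

    let view model = sprintf "Count = %d" model.Count

    let mutable state = { Count = 0 }   // the one mutable variable

    let dispatch msg =
        state <- update msg state       // compute the new model...
        printfn "%s" (view state)       // ...and re-render it

    dispatch Increment
    dispatch Increment
    dispatch Decrement                  // last line printed: Count = 1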
Biggest clear plus is maybe full-stack development support. Check out the SAFE stack (Saturn + Azure + Fable + Elmish).
There is also Bolero, an alternative to Fable for running F# client-side. Where Fable compiles F# to JS, Bolero relies on Microsoft's Blazor project, which ships a whole .NET runtime on WebAssembly, allowing .NET binaries to run directly in the browser. I think this is still quite experimental, but if you need/want some dependencies from the .NET ecosystem it might be easier than you think.
The key high-level features that F# inherits from OCaml are a succinct syntax, thanks to type inference, and "if it compiles, it's bug free" development.
It's easy to dismiss this as "type theoretic stuff"; however, these features are very natural to use, and when dealing with static typing and FP you bump into them quite a lot. And given that OCaml has some of the best type inference in the business, this happens without you even knowing about it.
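A quick made-up F# illustration of what that inference feels like, with no annotations anywhere:

    let add a b = a + b                                // inferred: int -> int -> int
    let pairAll xs = xs |> List.map (fun x -> (x, x))  // inferred: 'a list -> ('a * 'a) list
    let count = [ "if"; "it"; "compiles" ] |> pairAll |> List.length   // 3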
F# is also missing a means of doing ad-hoc polymorphism like in Haskell: no type classes, and no higher-kinded types either. This greatly diminishes what you can express in it, and the gymnastics you have to pull off to work around it are not fun.
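To make the gap concrete, here's a small invented sketch: without type classes or higher-kinded types there's no way to write one generic liftA2 over any container, so each type gets its own copy:

    // One copy for options...
    let lift2Option f x y =
        match x, y with
        | Some a, Some b -> Some (f a b)
        | _ -> None

    // ...and a separate copy for lists; the two bodies can't be unified in F#.
    let lift2List f xs ys =
        [ for x in xs do for y in ys -> f x y ]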
Here's, for example, what you need to do in the absence of GADTs: http://fssnip.net/mq (n.b. this pattern is called "tagless final" and resembles "visitor" from OOP land).
I love this sample because, if it looks complex, it's in fact even more complex than it seems: the type system is not actually fine with this usage either, so you end up with forced type casts anyway.
F# is a fine language, all things considered, and not having certain features might be considered an advantage in certain circles.
However, do learn OCaml and Haskell, because they have plenty to offer, and you'll learn useful abstractions that are not very accessible in F#.
For an ADT that doesn't need the power of GADTs: pattern-matching on a List<'a> gives you either an Empty or a Cons(x, xs), where you know up front before you do the pattern-match that x and xs are of type 'a and List<'a>.
For a GADT that is not an ADT: pattern-matching on an Expression might give you "Const(a) where a is an int", or "Equal(a, b) where a, b are bools", etc. Without doing the pattern-match, you can't necessarily tell what types you've ended up with.
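In F# terms (my own sketch, not from the linked snippet): the ADT case is directly expressible, while the GADT case loses its per-case result types:

    // ADT: the element types are fixed up front by the list type itself.
    let describe (xs: int list) =
        match xs with
        | [] -> "Empty"
        | x :: rest -> sprintf "Cons(%d, plus %d more)" x rest.Length

    // A plain discriminated union can't record that Const "returns" int
    // while Equal "returns" bool; that's exactly what GADTs add.
    type Expr =
        | Const of int
        | Equal of Expr * Expr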
let int3 = int1 + int2
let float3 = float1 +. float2
let int3 = Int.Ops.(int1 + int2)
let float3 = Float.Ops.(float1 + float2)
There's a substantial difference between OCaml and F# at this point, once you get past "core" programming with functions in both languages. Some of the more prominent things F# has that OCaml lacks:
* Computation Expressions (see the toy builder sketch after this list)
* Perf abstractions and compiler analysis for them, namely Span<'T>
* Type Providers
* The .NET generics system
* Anonymous Records
* Slices and ranges for slicing or generating list-like data
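For readers who haven't met the first item: here's a minimal computation-expression sketch, a toy option builder written just for illustration:

    type OptionBuilder() =
        member _.Bind(x, f) = Option.bind f x
        member _.Return(x) = Some x

    let option = OptionBuilder()

    let tryDiv a b = if b = 0 then None else Some (a / b)

    // Reads like straight-line code but short-circuits on the first None.
    let result =
        option {
            let! x = tryDiv 10 2
            let! y = tryDiv x 0   // this step fails...
            return x + y
        }
    // ...so result = None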
And as the parent user said, there's plenty OCaml offers that is missing from F#. I think it's a good idea to consider them quite differently, despite their having the same functional core.
* OCaml has now landed monadic and applicative binding syntax ( http://jobjo.github.io/2019/04/24/ocaml-has-some-new-shiny-s... ), which I think makes it a superset of F# computation expressions.
* Regarding Span<'T> (or my preference 'T Span :-), I believe F# and OCaml have developed different optimizations and constraints. For example, in OCaml allocation is really cheap. But there are still well-known techniques for minimizing it.
* OCaml also has the PPX system, which lets you programmatically transform a program's syntax tree into a new tree. I believe this gives somewhere around the same power as type providers and quasiquotations combined.
* Anonymous records are cool; OCaml objects are pretty similar in that they are structurally typed. Also, interestingly, neither of them supports pattern matching, if I recall the F# limitation correctly.
* Slices and ranges are cool, in practice there are pretty powerful and well-known OCaml libraries that provide those, e.g. https://ocaml.janestreet.com/ocaml-core/latest/doc/base/Base...
You can do: let ( + ) a b = ...
let (+) a b = a#add b
So if you know how to formulate the problem so it leans heavily on the type system, you can implement something like a binary tree container, and presume it is correct when it compiles.
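For instance, a toy version of such a type-driven container in F# (my own sketch):

    // The type parameter and the shape of the match do much of the checking.
    type Tree<'a when 'a : comparison> =
        | Leaf
        | Node of Tree<'a> * 'a * Tree<'a>

    let rec insert x tree =
        match tree with
        | Leaf -> Node (Leaf, x, Leaf)
        | Node (l, v, r) when x < v -> Node (insert x l, v, r)
        | Node (l, v, r) when x > v -> Node (l, v, insert x r)
        | node -> node   // already present

    let t = [ 5; 2; 8; 1 ] |> List.fold (fun acc x -> insert x acc) Leaf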
Following the same approach bestows the same robustness on the rest of the codebase.
See, for example, the ACM article "OCaml for the Masses" by Yaron Minsky of Jane Street for very lucid practical examples: https://queue.acm.org/detail.cfm?id=2038036
"Better bugs" in this context means bugs at a higher level, e.g. application logic issues.
But I believe this trend started with Go and was later picked up by the Rust community.
It's nice to see some F# code. I am happy there is a lot of new programming language innovation happening. I did not pick up Rust, because its programming paradigm is similar to other languages', just with a focus on memory safety. I may try F# later; these days I'm busier learning Lisp, as many people recommend it for becoming a better programmer, even though it's not as popular as Go and Rust.
Personally, I observed it only when I myself started programming in Go, back in the early 1.0 days.