I hear you on the opportunity side and I can't see that changing. The good news is that in recent releases there's a lot less boilerplate - "dotnet new console -lang F#" results in just two files: a short .fsproj file and a one-line Hello World program.
This is an impressive achievement, given there’s a whole language plus IDE. Kudos to the author. I couldn’t see any indication of what the author plans to use it for - I hope he can share more below?
One of the most delightful places I’ve used Lua recently is in TurboLua, which gives the Lua VM access to all the necessary signals/pipes for writing high speed network applications, among other things. (https://turbo.readthedocs.io/en/latest/)
Do you see there being a way to make a TurboLuon with current specs?
What do you mean by "natively"? Ahead of time compiled, or just working? If the latter, the present VM is already available on the mentioned systems. In my version, Gingko, I additionally removed a lot of outdated code to increase platform independence.
Nice! Do you feel like Oberon has something that gives it an edge over more currently popular languages, or is it just a matter of personal preference?
Actually, I only use original Oberon when I'm migrating the old Oberon systems. My interest lies in finding out how I would have to modify original Oberon to be as productive as I'm used to being with e.g. C++, while still adhering to the goal of simplicity. My version, which I call Oberon+ (and to which Luon is quite similar, though Luon is even simpler), goes in this direction.
Actually, from my humble point of view, the "edge over more currently popular languages" is the pursuit and maintenance of simplicity. The term is subjective, but if you look at many of today's completely overloaded languages, the idea is intuitive to grasp.
I have to ask "why?", in this spirit: if you're smart enough to write an Oberon on top of Lua and then use that to write a Smalltalk VM, then you're certainly smart enough to get around a complicated language and tolerate some lack of simplicity.
Probably a similar reason why equations are simplified in mathematics. If something is not represented or implemented as simply as possible, there is obviously overhead or redundancy. In the sense of the lean philosophy, that would be waste.
Simple solutions are also less error-prone and easier to modify and extend, which makes the system easier to maintain and update.
Simplicity makes systems (and programming languages) easier for users to understand and use. This leads to greater user-friendliness and reduces the learning curve, resulting in a more positive user experience and, in turn, a lower error rate.
I'm sure there are many more reasons (apart from the obvious proof by authority, which is based on the statements of, for example, Einstein or Wirth).
I get the appeal of writing an IDE from scratch, especially if you are already an expert in writing GUIs with your framework of choice! I wonder if it would make more sense to spend that time writing a Language Server Protocol (LSP) daemon. That way, you could make your language available in any IDE your users like that supports LSP.
I'm usually working on older machines, on which the IDEs supporting language servers would be much too slow or wouldn't work at all because of incompatibilities. I like lean tools with few dependencies. But there is a parser in moderate C++, so maybe someone else will implement such a daemon.
It's excellent hardware, I have many redundant copies and thus high availability, the system just works, developing on it automatically keeps my implementations efficient, and the generated executables have a high probability of working on all newer Linux systems. And I'm too old to always chase the newest version.
Optimizing your code for slow machines is a really great thing. I wish more people would do it. If your code runs well on a slow computer then it'll run well on anything better too. A lot of code these days only runs "well" on very fast computers...
I mainly develop TXR on a refurbished Intel box with a CPU that's like from around 2009 or something. It takes at least three times longer to rebuild everything than on modern machines. (It does have 32 gigs of RAM which is good for multiple virtual machines.) In fact I have one Dell box that was a server in a startup company that shut down in 2010. And that box builds the code significantly faster.
I am actually glad that this person developed the IDE. Please download the IDE and try it. It is exceptionally fast. When I compare its speed with that of IDEs like Visual Studio Code, the difference is night and day.
I would start with a business case - what are the benefits, is it going to generate revenue or reduce costs, when do the benefits start to be realised.
Then look at the cost in starting to develop what you need, and how you’re going to get started.
Are there existing COTS systems (e.g. SAP, Dynamics, Salesforce) that are extensible but can do 60% of the base functionality out of the box? Can you start by integrating two systems or using your low code platform to prove the concept? A couple of engineers/freelancers/external shop for a few months is a lot cheaper than hiring a whole development team… others in the thread have given you a reasonable estimate of that.
Think about what’s the MVP needed to start showing ROI. Maybe do a smaller business case for that.
Look at the payback period internally and ask your CFO what hurdle rate is needed to justify the investment.
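To make the payback/hurdle-rate idea concrete, here's a toy calculation. All the numbers are made up for illustration; plug in your own estimates.

```python
# Toy payback-period and NPV sketch with made-up numbers (assumptions,
# not real figures). The hurdle rate is the minimum acceptable return
# your CFO sets; a positive NPV at that rate means the project clears it.

build_cost = 250_000       # one-off development cost (assumption)
annual_benefit = 100_000   # yearly savings/revenue once live (assumption)
hurdle_rate = 0.12         # CFO's minimum acceptable return (assumption)
years = 5                  # evaluation horizon (assumption)

# Simple payback period: years until cumulative benefit covers the cost.
payback_years = build_cost / annual_benefit

# Net present value of the benefit stream, discounted at the hurdle rate.
npv = -build_cost + sum(
    annual_benefit / (1 + hurdle_rate) ** t for t in range(1, years + 1)
)

print(f"payback: {payback_years:.1f} years, NPV @ {hurdle_rate:.0%}: {npv:,.0f}")
```

With these particular numbers the project pays back in 2.5 years and still shows a positive NPV at a 12% hurdle rate; the same skeleton works for the smaller MVP business case too.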
FWIW, I started programming F# during your tenure at MS and in a couple of years it became a really capable cross platform option - partly MS, partly community, but definitely helped out by you. Thanks!
This was me - my first .NET language was F# (although I'd dabbled a tiny bit in C#); it was hard to learn the .NET standard library while also trying to learn functional idioms...
As a thought experiment, this got me thinking: what would a solution just using .NET Core look like? Obviously I don't have all the details, but I think you can do all of those things in .NET (with the probable exception of the embedded stuff).
* Bluetooth and maps libraries are available for Xamarin, [1] [2]
* Control server could be written in C# using the .NET core worker template [3] and deployed as a SystemD service and then the SDK deployed as a package to enable local development.
* Cloud bridge deployed as a service or website using raw sockets, SignalR or Azure IOT hub (depending on requirements).
You'd end up with 2 languages instead of 5 and I suspect you'd be able to factor some code out into libraries as well...
> * Bluetooth and maps libraries are available for Xamarin, [1] [2]
I don’t have lots of direct experience with these in particular. I have played with the BLE shim libraries provided for Dart, React Native, and Cordova back in the day. All were maybe OK for a single characteristic and rare interaction. But as you go up in connection frequency, characteristic count, or update frequency, robustness degrades quickly. I spent a solid couple of weeks getting our Kotlin/Android version to the point of handling 1000 reconnects without hanging the chip. BLE is hard.
Given that even with the native map stuff we had to jump through some “clever” hoops to get the kind of map overlay feedback we were looking for at a tolerable update speed, I remain skeptical; yet another abstraction layer seemed likely to be an additional hindrance.
> * Control server could be written in C# using the .NET core worker template [3] and deployed as a SystemD service and then the SDK deployed as a package to enable local development.
We have limited flash space; it’s too small to accommodate the gcc toolchain. If/when we want to do hosted development, we mount an SD card to provide the needed space. It’s pretty cumbersome compared to having Python right there. I’d guess that if gcc wasn’t much of an option, C# would run into similar issues.
> * Cloud bridge deployed as a service or website using raw sockets, SignalR or Azure IOT hub (depending on requirements).
My choice to use Elixir here would be the most arguable. The way we secure MQTT communications (uniquely secured per connection path) dictates either a single-threaded many-socket MQTT client (we’d have to write this from scratch) or just lots of threads, one per client connection. It is not uncommon to have 20,000 threads active in our current solution. Based on some peers’ comments, that would be a lot. But at that point, I guess we’d pursue the many-clients-on-fewer-threads approach, which could have been done in whatever language. I really just wanted an excuse to take Elixir for a spin, and this was an area where it could, and did, shine.
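The two models above (thread-per-connection vs. many sockets on one thread) can be sketched with nothing but the Python standard library. This is not the actual Elixir system: plain TCP echo stands in for MQTT, and the client count is a placeholder. The toy server spawns a thread per accepted connection, while the client side multiplexes all its sockets through a single selector on one thread.

```python
import selectors
import socket
import threading
import time

NUM_CLIENTS = 50  # placeholder; the real system sees tens of thousands

def echo_server(listener: socket.socket) -> None:
    """Toy stand-in for the broker: one thread per accepted connection."""
    while True:
        try:
            conn, _ = listener.accept()
        except OSError:
            return  # listener closed, shut down

        def handle(c: socket.socket) -> None:
            with c:
                while data := c.recv(1024):
                    c.sendall(data)  # echo back whatever arrives

        threading.Thread(target=handle, args=(conn,), daemon=True).start()

# Start the toy server on an ephemeral localhost port.
listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

# Client side: one selector, one thread, many sockets.
sel = selectors.DefaultSelector()
for i in range(NUM_CLIENTS):
    s = socket.create_connection(("127.0.0.1", port))
    s.sendall(f"client-{i}".encode())  # send while the socket is still blocking
    s.setblocking(False)
    sel.register(s, selectors.EVENT_READ, data=i)

# A single event loop services every connection; no per-connection thread.
replies = {}
deadline = time.monotonic() + 10
while len(replies) < NUM_CLIENTS and time.monotonic() < deadline:
    for key, _ in sel.select(timeout=1):
        sock = key.fileobj
        replies[key.data] = sock.recv(1024).decode()
        sel.unregister(sock)
        sock.close()

listener.close()
print(f"{len(replies)} connections serviced by one client thread")
```

The thread-per-connection side is simpler to write (each handler is straight-line blocking code), which is exactly the trade-off: the selector loop scales in threads but pushes per-connection state into your own bookkeeping, which is what the BEAM's lightweight processes paper over in Elixir.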
Very interesting - thanks! It's an interesting exercise thinking through some of the constraints and some of the engineering tradeoffs. Out of interest, how much development is actually done on device vs being done on another system and then loaded onto the SD card?
It's a diaeresis symbol rather than an umlaut, used (infrequently) to show that it's pronounced co-or rather than coor. Quite archaic, unless you're the New York Times, who use it as part of their house style, but not wrong!