The early prototype of HotBot aka Inktomi was done in AOLserver (then called NaviServer). I was a grad student at UC Berkeley, Tcl was the new Perl and embedded scripting was the new CGI.
For production scaling, we rewrote it in multi-threaded C++ running on Sun hardware. This was not only pre-AWS but you had to build your own machine room, which meant severe constraints on budget and physical space: getting a world-scale service into a single rack was the difference between success and failure.
Anyway, AOLserver and Tcl rocked; other than performance, no complaints at all for this use.
HotBot was my go-to search engine before Google got popular, so thanks :)
The article seems to imply AOLserver ran pretty well (28,000 hits per second in the late 90s), and AOL used it with their users. How many hits were you getting that it didn't work? Or was it the nature of the workload (presumably db queries for search results) that was the bottleneck?
You had to write C code for anything compute-intensive. TCL was precompiled to VM and was as fast as TCL can be. aol.com ran it for a while. Eventually other properties like digital cities used it with a bunch more optimizations that I wasn't directly involved in.
Great to see some discussion about this ecosystem. I literally started playing around with NaviServer just 9 days ago. It's definitely worth a glance. There are a lot of great utilities built right in: database connection pooling, a key-value cache, and more [0]
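As a rough illustration, here is what a couple of those built-ins look like from Tcl (a minimal sketch: the pool name "main", the cache name "mycache", and build_front_page are made up, and the cache API shown is NaviServer's ns_cache_create/ns_cache_eval flavor; AOLserver's nscache module differs slightly):

set db [ns_db gethandle main]                 ;# check a connection out of the configured pool
set row [ns_db 1row $db "select count(*) from users"]
ns_db releasehandle $db                       ;# hand the connection back to the pool

ns_cache_create mycache 1024000               ;# a 1 MB key-value cache
set page [ns_cache_eval mycache front_page {
    build_front_page                          ;# hypothetical expensive proc: computed once, then served from cache
}]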
Tangentially related, I've been making my way through Greenspun's Software Engineering for Internet Applications [1] and have especially found the metadata section to be enlightening [2]. I like to think of myself as a competent engineer, but this quote hit too close to home
If you're programming one Web page at a time, you can switch to the language du jour in search of higher productivity. But you won't achieve significant gains unless you quit writing code for one page at a time. Think about ways to write down a machine-readable description of the application and user experience, then let the computer generate the application automatically.
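To make the idea concrete, here's a toy Tcl sketch of that approach (entirely hypothetical names; real systems like ACS drove this from the SQL data dictionary): the form is described once as data, and the page is generated from that description instead of being hand-written.

set user_form {
    {first_name text     "First name"}
    {email      text     "Email address"}
    {bio        textarea "Short bio"}
}

proc render_form {spec} {
    set html "<form method=\"post\">\n"
    foreach field $spec {
        lassign $field name type label
        if {$type eq "textarea"} {
            append html "<label>$label</label> <textarea name=\"$name\"></textarea><br>\n"
        } else {
            append html "<label>$label</label> <input type=\"$type\" name=\"$name\"><br>\n"
        }
    }
    append html "<input type=\"submit\">\n</form>"
    return $html
}

puts [render_form $user_form]                 ;# the same description could also drive validation, SQL, etc.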
Wow. Blast from the past. Greenspun's company ArsDigita used to run free web development boot camps. I did one of the boot camps back in 2000. It was the first time I'd ever heard of such a thing.
Apparently, there was also an ArsDigita University, which ran for a year:
Wow, good memories. I was into Greenspun's writings, ArsDigita, AOLServer, OpenACS. Even managed OpenACS releases for a while. Lots of good people there at that time, with concepts that only later became mainstream: application servers, database connection pooling, templating, user-registered tags, decent databases (MySQL wasn't included in that list).
We ported the ArsDigita Community System to PostgreSQL, added a lot of features into it with the subsequent addition of .LRN, a learning-management system the Berklee School of Music funded IIRC, and many other things.
Aaron Swartz, Ben Adida, Don Baccus, Michael Cleverly, and many, many others besides myself (Roberto Mello) were contributors.
I remember AOLserver was evaluated at a company that I worked for. They were using Roxen at the time and decided to stay with that. I remember both AOLServer and Roxen had such amazing features that Apache felt like a huge step backward. But of course, Apache worked well on smaller servers.
Me too! But more than 20 years ago. And it mainly helped me get beyond simple CGIs into database connections.
I love that the whole book was online but also in a very slick coffee table format. With a bracing sense of humor and not afraid to try and distill very technical topics for less technical people. There is something of the DIY, can-do, participatory side of 90s computer culture in there. Deeply.
Aww, AOLserver and OpenACS. Some of the best sites I worked on were OpenACS sites. In fact, when they shut down Uptime, I ported his AOLserver/Oracle code to AOLserver/Postgres and ran Uptime, from my home, on a Sparc pizza box for years.
AOLserver and TCL are still on my mind for new projects. Just haven't pulled the trigger .. yet ..
Sounds like one could describe OpenResty as a spiritual successor to AOLServer/NaviServer. “Take a well-engineered, low-level-coded web server, and plop a powerful, lightweight VM into it” seems to be fruitful ground.
AOLServer is still valid but outdated, and NaviServer is now the actively maintained version of AOLServer, taking it back to its roots.
TCL is a fun language and integrates well with most others, including Perl and Python. Being able to call a procedure based on a URL adds none of the frustration that I've found MVC brings.
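For example, mapping a URL to a proc in AOLserver/NaviServer is about this much code (a minimal sketch; the /hello path and proc name are invented, and the registered-proc calling convention varies slightly across AOLserver versions):

ns_register_proc GET /hello hello_page        ;# GET requests for /hello are handled by hello_page
proc hello_page {} {
    ns_return 200 text/plain "Hello from Tcl" ;# status code, mime type, body
}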
If you're looking to pick up a new language to toy with, I would recommend TCL as a language to take a poke at. It does have an old unixy feel to it, but I've found it's mature and very solid. OO is also possible.
Nothing could be simpler when everything is a string; it leads you down a path of voodoo and creativity.
proc displayHackerNews {} {
    set hn "Hacker News"       ;# set the variable hn to "Hacker News"
    puts $hn                   ;# print the value of $hn to the terminal
}                              ;# end procedure

displayHackerNews              ;# execute the procedure
# output: Hacker News
If statements are easy
set hn ""
if { $hn eq "Hacker News" } { puts $hn } else {
set hn "sad face"
puts $hn
} ;#end if
Loops are fun
set hn "Hacker News" ;#Set a variable
while { $hn eq "Hacker News" } {
puts "$hn" ;# print "Hacker News" to terminal in an infinite loop
} ;#end loop
Multi-threading is a breeze.
package require Thread                        ;# the thread::* commands come from the Thread extension
set MyThreadID [thread::create {
    puts "Hello from my new thread"
    thread::wait                              ;# keep the thread alive to receive later thread::send work
}]                                            ;# returns the id of the new thread
Thanks for posting your comment, it led me down a rabbit-hole that might help fill some current downtime. I only took a brief flick through https://www.tutorialspoint.com/tcl-tk/index.htm but I found it more approachable as a total noob to TCL. As always, ymmv, and I can't vouch for anything on there except that it works/looks OK with JS disabled. Thanks, just might add TCL to the skill-set :-)
The syntax is quite simple, with only a few rules of evaluation -- too simple for those used to the Algol family, too complex for smug lisp weenies.
Then you had the RMS flame war, back when he was still considered influential for programmers.
Then they kinda missed the boat when it came to packages. Never got something like CPAN going, whereas Python and Ruby did.
I can't remember exactly, but I think there were some corporate shenanigans, too, with the whole Sun/Scriptics situation and Java stealing its milkshake, weirdly enough.
Speaking of siphoning off milkshakes, Lua did this for the scripting part. Syntax that was a bit more regular, implementation that was even easier to embed (and don't get me wrong, Tcl is/was pretty great there).
And nowadays, you either get lucky with some specific niche, or it's hard for a dynamic scripting language.
I'm a cloud engineer today with about a decade of experience. I mainly build and maintain K8s clusters for those who've made the unusual choice of going with EKS.
That being said, I did not understand until I was 17 that you could...just open a browser...and the internet was on.
As AOL rolled out broadband to its previously dial-up customer base, they did not want to confuse their very technically unskilled customers. So just like dialup, where a PPP connection has to be made initially, they preserved the concept in broadband. You still had to click all of the buttons as if you were signing into AOL. This also had the advantage of keeping people on the AOL browser.
That reminds me of seeing a video featuring a bloke from the millennial generation (perhaps a little younger) who today does lots of mathy stuff in Lisp, but when he was a kid/teenager computers came in two varieties: "old computers" (e.g. the Apple II) which could be programmed, and "real computers" (modern (late 90s) PCs) which could not.
Kind of a sad reflection on how we went from booting to BASIC and encouraging programming by the end user, to booting into Windows (or worse, a smartphone OS) and almost suppressing programming.
> they did not want to confuse their very technically unskilled customers
That's not really a good explanation for the design.
AOL kept the "AOL client" paradigm because your browsing was actually done via an L2TP tunnel back to an AOL datacenter. AOL provided some proprietary content, caching, and had things like parental controls (a very big deal for many users) as value adds over your broadband service. The idea was that you could get the same AOL experience no matter the provider or service type.
There were other benefits, such as masking your home IP address from advertisers - all they saw was an AOL tunnel endpoint address. AOL invented the "ad id" pseudonymous identifier to satisfy some of the tracking desires of advertisers w/o letting them re-identify you (given then current techniques). Apple uses the same concept today.
Our startup in the late 90s had a similar product, uBusiness.
Also using a mix of TCL, C and Apache.
Later on, we created an IDE using VB 6.
Although using TCL was a great experience, it also taught me that not having a JIT/AOT compiler just doesn't scale, as every performance bottleneck just got rewritten in C.
Rewriting bottlenecks in C is pretty much the design philosophy of Tcl. Ousterhout argued that even with dropping down to that level, overall productivity was high enough (and not every developer would need to know the nitty gritty).
In that respect, I think it wasn't other languages that ruined that party, but IDEs, which made working in the more performant B&D languages less painful.
The main problem I see with this philosophy is that Tcl and C are two completely different languages.
So switching from Tcl to C, I think, might require two teams in most organizations, one writing in Tcl and another writing in C.
For a large organization this is probably not a big problem, but it is for small teams.
This is why I had high hopes for gradually typed or optionally typed languages, but it seems this doesn't really work in practice: Raku is not fast, Dart (as far as I understand) dropped gradual typing, and it remains to be seen whether Red will make it work.
Right now one of the slow parts is the regex/grammar feature. So code that uses that feature is a bit slower than it needs to be. The compiler uses the regex/grammar feature, so your code compiles slower than necessary. Since this feature has mostly been left alone for years, it makes sense that it is a bit slow. Hopefully someone will volunteer to improve this soonish.
Much of the slowness is actually caused by the dynamic design of the language. (Which is perhaps its most awesome set of features.)
As for the slowness caused from dynamic typing, it mostly goes away after the runtime type specializer built into MoarVM gets a chance to optimize it. This is the step before the JIT, and it is actually more important than the JIT for performance. (It also optimizes the dynamic features noted above.)
The type specializer could in the future make it so that bog-standard Raku code is eventually as fast as, or faster than, bog-standard C. (The runtime type specializer has more information available to work with than a C compiler could ever hope for.) Of course, hand-optimized C would probably still outperform Raku at that point, but you can also hand-optimize your Raku code.
There has already been one report of Raku (Perl6 at the time) being faster than the C/C++ equivalent for one workload. There have also been several important optimizations since then. (I'm fairly sure the slowness of the C/C++ version had to do with copying strings and looking for null terminators of strings, neither of which Rakudo on MoarVM does.)
---
For more information watch the videos of, or read the slides from:
“Escape analysis and related optimizations for Perl 6” -- Jonathan Worthington (jnthn)
and
“Perl 6 performance update” -- Jonathan Worthington (jnthn)
(If you are only going to watch one, make it the latter one.)
https://wiki.tcl-lang.org/page/NaviServer
Tcl has SafeTcl, which can be used to run untrusted code:
https://www.tcl.tk/software/plugin/safetcl.html
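The core safe-interp mechanism underneath it looks like this (a minimal sketch; a -safe interp hides file, exec, and socket access by default):

set sandbox [interp create -safe]             ;# child interpreter with dangerous commands removed
puts [$sandbox eval {expr {6 * 7}}]           ;# plain computation works: prints 42
if {[catch {$sandbox eval {open /etc/passwd r}} err]} {
    puts "blocked: $err"                      ;# open is hidden in a safe interp, so this fails
}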
For smaller usage, there is the Wapp web framework, which is TCL+SQLite:
https://wapp.tcl.tk/home/doc/trunk/README.md
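The hello-world from the Wapp docs is only a few lines (sketched from memory; assumes wapp.tcl is installed where package require can find it):

package require wapp
proc wapp-default {} {
    wapp-subst {<h1>Hello, World!</h1>\n}     ;# wapp-subst builds the reply text
}
wapp-start $argv                              ;# runs standalone, as CGI, or under SCGI depending on argv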
Importing a Git repo into Fossil (https://fossil-scm.org) uses much less disk space, since Fossil stores everything in an SQLite database. Fossil can also have Wapp and other CGI server extensions integrated:
https://fossil-scm.org/home/doc/trunk/www/serverext.wiki
For syncing between Git and Fossil:
https://fossil-scm.org/home/doc/trunk/www/inout.wiki
https://fossil-scm.org/home/doc/trunk/www/mirrortogithub.md
For packaging Tcl apps into a single executable, there is Tclkit:
https://blog.tcl.tk/52