Wapp – a single-file web framework by the creator of SQLite (tcl.tk)
411 points by networked on Mar 30, 2018 | 135 comments



One of the things I like about Mojolicious (a pretty full-featured async web framework in Perl in an incredibly small amount of code) is that it has a Lite variant, which allows building your whole app (routes, app functions, helper functions, models, data, etc.) in a single file. Once you've gotten it off the ground and the one small file starts to become one big file and it starts feeling hairy, you switch to the full version, and break out all the pieces into the usual directory layout (which can be done mostly automatically).

With regard to Wapp and some of the comments here disparaging Tcl...I've often said Tcl gets much more hate than it deserves. For a lot of tasks, it's fine. I wouldn't pick it for anything, but I understand why some folks still do. And, I did build my current company's first website with Tcl (OpenACS+AOLServer) more than a decade ago. It was more enjoyable to work with than all of the PHP CMS-based sites I've built for the company since then. I wouldn't rule it out, if there were some project I wanted to use that happened to be written in Tcl.


I used to love Perl until the world told me to stop because the syntax was too noisy. Now I'm embedding JavaScript expressions in CSS in JSX in ES6 and being told this is an evolved state.


Agreed. Of all the things one could complain about in Perl, "noisy" syntax is among the dumbest and most superficial (but, among the most common).

An informed rant about Perl might include these two function calls behaving differently and one almost certainly containing a bug, because function arguments are always a list:

    some_function(@an_array, $a_scalar);

    some_function2($a_scalar, @an_array);
Assume some_function unpacks its arguments as (@new_array, $new_scalar), and vice versa for some_function2. Because Perl flattens all the arguments into a single list, the array in some_function's unpacking slurps up the trailing scalar too, leaving $new_scalar undefined; some_function2's scalar-first unpacking works as expected. This trips up literally every new Perl developer (our UI guy, who is mostly a PHP and JS dev, ran into it just a few weeks ago).

Lack of function signatures is another good complaint about Perl. It's still not realistic to use them in code that ships for old Perl versions (like for deployment on leading server distributions).

There are others, but, yeah. I also hate how simplistic criticism of Perl is these days. It's knee-jerk and poorly informed in most cases. That doesn't mean I'm recommending Perl for everyone, or that I'd pick it for every new project, just that I wish the dialog around it weren't so stupid so much of the time.


FWIW, in Perl 6, gradual type checking and function signatures do exist, and parameter flattening is only applied if the signature calls for it.

    sub foo(@array,$scalar) { }
    foo( 42, [1,2,3] );    # compile time error
Signatures are also first class citizens in Perl 6: check out https://docs.perl6.org/type/Signature for more information.


I was excited about Perl6 until I ran a prime number crunching benchmark which ran slower than everything else. I'll stick with Perl5 or move to Ruby. We too use Mojolicious and are very happy with it.


How long ago did you try that number crunching benchmark?


Recently, using Rakudo Star 2017.10. The algorithm adds numbers of the form i+j+2ij into a hash of non-primes, then takes each k not in that hash and pushes 2k+1 into an array of primes. These are basic operations, but Perl 6 simply takes minutes to find primes up to 10k. Even Tcl, which lags behind everything else except Perl 6, does it an order of magnitude faster. If you increase to 100k, you could probably go have a pizza in town and it would still be crunching when you got back. So it's definitely not the JIT compilation stage that makes it so slow.


As the author of this recent post https://perl6advent.wordpress.com/2017/12/16/ I'd be really interested in seeing your code for primes. You might enjoy seeing the output of `perl6 --profile` to see if there is any glaringly obvious place it's being slow. You get a nice interactive HTML report.

I find it kind of funny that primes are constantly used as a first filter on Perl 6, despite it being one of the only languages with an efficient built-in .is-prime method on integer types >;P I have a more contemporary version of Rakudo built locally too, so I can see if this is something that's already gone away, if you don't mind throwing your code in a gist/pastebin somewhere?


perl5 0m0.156s https://p.thorsen.pm/2d70bb2da612

perl6 0m6.615s https://p.thorsen.pm/ab1769fe2778

This is Rakudo version 2017.10 built on MoarVM version 2017.10 implementing Perl 6.c. It's built with rakudobrew. Results are consistent across several platforms; I've also built on an iMac with OS X 10.1. This one is built on Ubuntu 16.04 x86-64.


I'm not exactly sure I understand the algorithm, but if it is only supposed to generate prime numbers up to 1000, then it appears to erroneously include `999` as a prime number.

FWIW, if I were just interested in the prime numbers up to 1000, I would write that like this:

    (1..1000).grep( *.is-prime )
Which for me executes within noise of your Perl 5 algorithm. For larger values on multi-CPU machines, I would write this as:

    (1..2500).hyper.grep( *.is-prime )
Around 2500 it becomes faster to `hyper` it, so the work gets automatically distributed over multiple CPUs.

I'm currently researching why your Perl 6 algorithm is so slow.


The algorithm is an implementation of the Sieve of Sundaram. 999 might have slipped in through an off-by-one error. Thanks for the suggestion of using is-prime, but in order to benchmark multiple languages, I need to run the same thing everywhere.


Turns out that even though primes are integers, in your Perl 6 version, every calculation was done by using floating point. And this was caused by the call to the subroutine. If you would do:

    sieve_sundaram(1000)
instead of:

    sieve_sundaram(1e3)
then it all of a sudden becomes 4x as fast. In Perl 5 you never know what you're dealing with with regard to values. In Perl 6, if you tell it to use a floating point value, it will infect all calculations afterwards to be in floating point. `1e3` is a floating point value. `1000` is an integer in Perl 6.

Also, you seem to have a sub-optimal algorithm: the second `foreach` doesn't need to go from `1..$n`, but can go from `$i..$n` instead. This brings down the runtime of the Perl 5 version of the code to 89 msecs for me.

Since your program is not using BigInt in the Perl 5 version, it is basically using native integers. In Perl 6, all integer calculations are always BigInts, unless you mark them as native. If I adjust your Perl 6 version for this, the runtime goes down from 4671 msecs to 414 msecs for this version:

    sub sieve_sundaram(int $n) {
        my %a;
        my int @s = 2;
        my int $m = $n div 2 - 1;
        for 1..$n -> int $i {
            for $i..$n -> int $j {
                my int $p = $i + $j + 2 * $i * $j;
                if $p < $m {
                    %a{$p} = True;
                }
            }
        }
        for 1..$m -> int $k {
            if ! %a{$k} {
                my int $q = 2 * $k + 1;
                @s.push($q);
            }
        }

        return @s;
    }

    sieve_sundaram(1000);
So, about 11x faster than before. And just under 5x as slow as the Perl 5 version.

I could further make this idiomatic Perl 6, but the most idiomatic version I've already mentioned: `(1..1000).grep( *.is-prime)`


Can we please continue this discussion outside this thread, on a proper Perl forum? I haven't found any contact pointers for you or Ultimatt.


https://stackoverflow.com/questions/tagged/perl6 perhaps?

Or on IRC: #perl6 on irc.freenode.org ?

Or the perl6-users mailing list?



Perl 5.20 got function signatures. The community was pretty stoked about that.


Yeah, but as I mentioned, it is not realistic to use them for installable software, yet. At least not for my needs; we're building software that needs to be easily installable on the leading server distros; we can't ask a million or two people to upgrade their Perl before installing our software.

So, I'm excited about it, too, but I can't use them until CentOS/RHEL 7 reaches end of life (and that's assuming CentOS/RHEL 8 gets a 5.20+ version of Perl, which isn't an entirely safe assumption). It's easy to blame CentOS/RHEL for this, because they ship an old-as-heck Perl version, but it's also reasonable to question why it took 20+ years for Perl to get function signatures in the core language.


Perl evolved from bash so it has a lot of cruft but you can learn good practices pretty easily. JavaScript has no excuse.


JavaScript was written in 10 days in 1995 by Brendan Eich with the purpose of being a «glue language that was easy to use by Web designers and part-time programmers to assemble components such as images and plugins, where the code could be written directly in the Web page markup.» His first choice would have been inspired by Scheme, but unfortunately Netscape forced him into a Java-like syntax for marketing purposes. https://en.wikipedia.org/wiki/JavaScript

The fact that it has managed to evolve into a decent, if poorly typed language, with some outstanding implementations, while maintaining 99% compatibility with "onmouseover" scripts from the 90s is a testament to its sound design.


Perl has had sub signatures since 5.20.


CentOS 6 has Perl 5.10.1. CentOS 7 has Perl 5.16. I work on installable software that needs to Just Work on the majority of servers without hassle. So, I don't have sub signatures, yet.


RHEL and CentOS are notorious for running old Perl. 5.20 was released in 2015, that's 3 years ago. Perl 5.16 has been EOL since 2013/11.

https://www.cpan.org/src/

Do you still run anything on Node.js 0.10? 0.10.36 was released in 2015, the same year as Perl 5.20. I doubt the majority of modules still support it.


The world is what it is. I just live in it. We, realistically, cannot insist on a new Perl on every system we support (we're talking about a million or two installations). We build tools that are meant to be very easy to install, require minimal CPAN dependencies, etc.

We target 5.10.1. When CentOS 6 is EOL, we will target 5.16, and so on. I don't like it, but I can't make Red Hat ship newer Perl versions. Software Collections has Perl 5.20, which is great for some folks, but it's not a good option for our projects, either.

It's worth noting that Python has the same problem on RHEL/CentOS. Maybe even more pronounced, because some system tools rely on Python, and it can even break them if you change your personal Python to something else (I use pyenv, and I have to remember to change my Python back to the system one when running Gnome Tweak Tool, and the like).


Here’s the python entrant. A pleasure to work with.

https://bottlepy.org


I miss my aolserver days. It ns_put itself into my heart perfectly.


So do I, and I'm not afraid of upvar anymore.


Mojolicious is fantastic! A plus is that it's complete and has zero dependencies on other modules. So it's very quick to install.

It is good to see Mojolicious mentioned here on HN.

Mojolicious::Lite docs http://mojolicious.org/perldoc/Mojolicious/Lite#SYNOPSIS


Mojolicious is beyond fantastic! I have a hard time touching anything else, as every step of the way I think "damn this could have been so much easier with mojo..."


I've not tried Perl since, but when I did, it was the only web framework I ever tried, and it was pretty nice.


Note that Hipp, before working on SQLite, was a member of the TCL core board and wrote something similar back in 2001: a database-driven web server in a self-contained executable named TWS, where dynamic content is served from TCL.

http://www.hwaci.com/sw/tws/index.html


I love Tcl. The community is great. I watch the TCT (Tcl Core Team) and they are very meticulous about what changes are being made in the language. Tcl has added some things in the last few years as well. It has an object oriented framework built in (TclOO) now, which most people don't know about. I think I love it because it IS so different.


This. It's like watching master craftsmen work on some parts for the sheer joy and challenge. (Pity the parts that don't have the craftsmen working on them, mind you.)


I really enjoyed working with it when I was writing Expect scripts (a long, long time ago) and found myself writing some Tcl scripts alongside the Expect-specific bits. What sorts of things are you using it for?


I use it for sysadmin scripts and quick Tk gui utility stuff.

I have some ideas that I want to express in Tcl as well.


I like the idea but I’m not quite ready to learn a new language at the moment. Can anyone recommend something similar in Python? The single-file part is less important to me than the simple API. I’m a simple guy and my app will be simple as well :-)


Bottle [0] is a single-file, simple web framework for Python. It's pretty easy to use and well documented. It has no dependencies outside the standard library but can optionally use a number of more robust web servers.
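
For a flavor of how small a Bottle app stays, here's a minimal sketch using Bottle's standard route/run API (the route and greeting are just examples):

    # hello.py -- a complete Bottle application in one file.
    from bottle import route, run

    @route('/hello/<name>')
    def hello(name):
        # Bottle passes URL wildcards straight in as function arguments.
        return "Hello, {}!".format(name)

    # Built-in development server; swap in a more robust server for production.
    run(host='localhost', port=8080)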

Flask [1] has similarities, and IIRC the author of Flask, Armin Ronacher, has in the past proposed merging it with Bottle. However, the single-file and dependencies question is where the authors of the two frameworks disagree.

Flask has tools that help with structuring larger apps and seems to be more widely used than Bottle.

Personally, I prefer bottle for things I know will remain small, just for the lack of dependencies and that I personally find the documentation easier to read.

[0] http://bottlepy.org/docs/stable/

[1] http://flask.pocoo.org/


Thank you, I haven't looked at Bottle. The Hello World is actually comprehensible to me and I was able to mentally crunch the entire tutorial page no problem. I checked out Flask but was overwhelmed, went through the Django tutorial and really couldn't get my head around what I needed to transfer it to my own code. LOL I told you I'm simple... SlowBro may be slow, but he's a bro...


Flask has _really good_ documentation. Sitting down and just reading the whole manual cover to cover was one of the most productive things I did when I was getting my head around MVC.

It's not in Python, but I also really like the tiny Camping framework in Ruby. It's tiny enough and well documented enough that you can read and (more or less) understand the whole thing, even if you're not great at Ruby.

My path when learning web development kind of went from Camping to Sinatra to Rails, with periodic detours to Flask. The high level concepts are very similar.


I'd say Flask will literally let you do that, though CherryPy will as well if you really want to. I usually don't do CherryPy that way unless I'm building something small and contained enough. If you need templating, I've used Mako with CherryPy as well. Just look at the "Hello World" for CherryPy: you only need that one file to run a web server, and it also supports running under WSGI and hosting other applications that need WSGI hosting.
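
To give a sense of what that one-file "Hello World" looks like, here's a rough sketch using CherryPy's expose/quickstart API (class and method names are just illustrative):

    import cherrypy

    class HelloWorld:
        # Only methods marked with cherrypy.expose are reachable over HTTP.
        @cherrypy.expose
        def index(self):
            return "Hello world!"

    # Mounts the app at / and starts CherryPy's built-in HTTP server.
    cherrypy.quickstart(HelloWorld())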

CherryPy:

http://cherrypy.org/

Mako:

http://www.makotemplates.org/


I've used cherrypy + mako in a few projects (https://github.com/kakwa/dnscherry and https://github.com/kakwa/ldapcherry).

It's quite easy to pick up and does the job very well. It supports running directly over HTTP, but you can also run it in WSGI or FastCGI modes without issues.

And it's also quite stable in terms of API: my projects are now 4 years old, and I haven't had any breaking changes. It makes it quite easy to support various distributions and versions without kludges in the code to handle different versions or having to bundle CherryPy with the application. Even my monkey patches of the framework (a slight change in the configuration parser) haven't broken ^^.


My project is 1 year and 6 months old or so, and yeah, I've had a similar experience. CherryPy is a great framework, and Mako (not sure why I chose it) was an equally great choice.


CherryPy is pretty cool. I wish it got more notice. Bottle is another single file Python framework as well.


Got to use it for production at my job when I was given the freedom to pick any framework. Don't really regret it, though I wouldn't mind trying out Falcon for comparison.


The equivalent framework in Python is probably bottle.py: https://bottlepy.org/docs/dev/


Bottle looks like the answer. I read through the entire Tutorial in about 30 minutes and understood almost everything without even needing to try it out first. Got to the end and, "That's it? Wow that is simple." One can learn all they need in a short time and get cracking on some apps.

First app is a web config interface on a Raspberry Pi-based IoT device. Web config app will be almost identical server-side so as to allow config from anywhere. I was going to do it all in CGI but this is much improved.


As others have said, Bottle is great, and I use it in almost every side project that uses Python.

However, it should be noted that TCL is famous for being a language that can be described in 12 rules:

http://wiki.tcl.tk/10259

And the tutorial is really good:

https://www.tcl.tk/man/tcl8.5/tutorial/tcltutorial.html

Obviously, there's a lot more online reference material and a bigger community for Python, but if you are interested in learning TCL, don't be afraid to go for it.


Oh I actually wrote a fairly complex expect script once, many years ago, and forgot most of it :-) It wasn't bad as I recall, but I'm on a timeline so not eager to pick it up again right now. Thanks all the same.


I would recommend giving Pyramid a try: https://trypyramid.com/

It's a framework that allows you to start with a single file, but also lets you then split your application up more logically and grow, with a huge amount of pluggability, while providing many sane defaults to start off with.

It is not a batteries included framework, so it gives you the flexibility to figure out what is best for your application vs being forced into a certain convention.

Full disclosure: I am a maintainer for the Pylons Project with a focus on: Pyramid, WebOb and waitress.


Cool, I've bookmarked it; maybe I'll give it a shot if Bottle isn't for me.


Falcon has good performance characteristics, nice API:

https://falconframework.org


IIRC Falcon only does REST, it has no template rendering.

(To me, that's a plus)


Perhaps you may find Hug of interest… :)

http://www.hug.rest/


Intriguing. That koala sure looks welcoming...


Bottle is truly single-file, no dependencies and faster than Flask.


I've used bottle for little tasks, and it is fantastic.


I like aiohttp for simple Python HTTP apps.


I hate to be the young whippersnapper parroting buzzwords, but CGI is theoretically supposed to lend itself really easily to deployment onto serverless runtimes, right?


CGI is one of those "simplest thing that could possibly work" tools: for every request to a specific URL path, the webserver runs an executable. Headers are passed in the environment, stdin and stdout are plumbed to the network socket.
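
To make that concrete, a CGI "application" can be as small as the following sketch (the environment variables shown are the standard CGI ones; everything after the blank line on stdout is the response body):

    #!/usr/bin/env python3
    # Minimal CGI script: the server puts request metadata in the environment
    # and connects stdin/stdout to the client.
    import os
    import sys

    sys.stdout.write("Content-Type: text/plain\r\n\r\n")
    sys.stdout.write("Method: {}\n".format(os.environ.get("REQUEST_METHOD", "")))
    sys.stdout.write("Path:   {}\n".format(os.environ.get("PATH_INFO", "")))
    sys.stdout.write("Query:  {}\n".format(os.environ.get("QUERY_STRING", "")))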

Back before serverless, we called it shared hosting.

Of course, fork() is not especially fast, so people came up with FastCGI: persist the process and let it handle multiple requests. Then people started writing Java, where the startup time was prohibitive, and "application servers" like Tomcat came into being.


> Of course, fork() is not especially fast

fork itself is pretty fast, relatively speaking (unless you have a large virtual address space and 4K pages or similar). fork+execve+process-runtime-initialization is much slower.

On my X201 (anno 2010), with 10000 iterations, sequential fork+exit(0)-in-child+wait takes:

    $ /usr/bin/time ./forkfest
            2,49 real         0,38 user         2,19 sys
while sequential fork+execve("/usr/bin/true",...)-in-child+wait takes:

    $ /usr/bin/time ./forkfest2
           10,99 real         3,10 user         7,94 sys
EDIT: also, my X201 is clocked at 1199 MHz (50%) for power saving reasons


On paged systems fork() is usually reasonably fast, but for CGI you do fork()+exec(), and while exec() is also reasonably fast, the stuff that happens in the child process between its start and entry into main() (i.e. dynamic linking, libc initialization) is somewhat on the slower side. (Not to mention interpreter startup for interpreted languages.)


Yes, it's slower compared to a single multithreaded evented server, but that fork gives you process separation, which is a huge security feature and resilient against one bad request taking down your whole server. You can also write your code blocking + single-threaded, which is so simple and easy to debug.


This seems to reinforce the brilliance of the Erlang VM. Basically, make "fork" super super cheap. Full process isolation, "let it crash" error handling, and lots of other benefits.


> Of course, fork() is not especially fast, so people came up with Fastcgi: persist the process and let it handle multiple requests.

FastCGI is a classic example of second-system syndrome. I once looked at the spec. It was ridiculously overdesigned. SCGI (Simple CGI) solves the same problem (reusing a server process for multiple requests), but its spec literally fits on two pages: http://www.python.ca/scgi/protocol.txt - It's also supported by nginx by default. (Don't know about Apache etc.)
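
For the curious, the client side of that two-page spec boils down to roughly the following sketch (the header set is illustrative; if I'm reading protocol.txt right, only CONTENT_LENGTH coming first and SCGI=1 are mandated):

    # Sketch of encoding a single SCGI request: NUL-separated header
    # name/value pairs wrapped in a netstring, followed by the raw body.
    def scgi_request(headers, body=b""):
        pairs = [("CONTENT_LENGTH", str(len(body))), ("SCGI", "1")] + list(headers)
        payload = b"".join(k.encode() + b"\0" + v.encode() + b"\0" for k, v in pairs)
        # Netstring framing: "<length>:<payload>," then the body.
        return str(len(payload)).encode() + b":" + payload + b"," + body

    req = scgi_request([("REQUEST_METHOD", "GET"), ("REQUEST_URI", "/")])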


And linking back to the topic, FastCGI was developed by Open Market, who sold a big ecommerce hosting system in the 1990s. It was written partly in Tcl and used Tcl format for writing structured logs.


> Back before serverless, we called it shared hosting.

You're right, but Windows was different. In Windows/Asp.Net land (which was a good chunk of shared hosting in the early 2000s), Shared Hosting providers were using AppDomains to isolate sites. IIRC, AppDomains were isolated by the .Net runtime rather than more robust process isolation.


I'm actually working on this now: [1][2]

CGI was abandoned because of the slowness of process forking, but with modern kernels, new compiled languages, and better reverse proxy servers, I believe the speed difference is trivial compared to the advantages:

- Deployment can be done by simple file upload.

- Process isolation adds security and reliability.

- CGI scripts/binaries are vendor agnostic and can be run/tested locally

- CGI scripts can use any language, are stateless, promote hyper modular design (aka functions) and are generally minimalistic in nature.

[1] https://bigcgi.com/ [2] https://github.com/bmsauer/bigcgi


https://bigcgi.com/development - "Fast-CGIish Integration"

Are you thinking something like "FastCGI, except over stdin/out instead of a listening socket"? Since that sounds like it could work pretty efficiently if (unlike nginx's fcgi implementation) it supported multiplexing.


I think what I meant by that was basically building a generic FCGI wrapper that could automatically scoop up generic CGI scripts and run them in a shared process. Like this: https://github.com/gnosek/fcgiwrap

I like the idea of communication via stdin/stdout though, I'll add that to my notes.

I actually was going to remove that from the roadmap entirely, because lately I really like the process isolation and statelessness of regular CGI.

Thanks for your comment!


Those first two points apply (almost) to PHP. Probably the nicest parts of it are ease of deployment, and the isolated “throw the world away” process model per-request. I’ve played with SCGI myself in Nim, it’s quite fun to explore.


This was the best thing about PHP. In comparison, I spent most of Thursday digging through a problem with log4net which causes it to block all other web server threads if someone throws an exception and it has to do heavy reflection to generate the log message. This cascaded up into a complete process failure.

Shared state and locks are evil.


After working all day deploying webservers in applications in containers in container management environments in virtual networks in virtual servers, it's really nice just to drop files in a folder and update a webapp. I've always loved PHP for this, and that's basically what bigCGI is supposed to be about.


Ha! You know what they say, if we just add one more level of abstraction, all of our problems will be solved! Nobody will even have to know what's going on inside. Famous last words.


CGI predates serverless by quite a bit. It’s basically a standard interface for a web server to let any arbitrary executable handle a HTTP request.

https://en.m.wikipedia.org/wiki/Common_Gateway_Interface



Is there a non-censored link? :-(


You can use the wiki zero mirror (www.0wikipedia.org; it's sufficient to put a zero before any wiki link) until the illiterate government of the superstitious majority hopefully disappears.


+1 for the "0-prefix" tip; cool!


Thanks, it also works when the subdomain is different.


What's censored on that page?


Based on their profile, I gather they're referring to the fact that Wikipedia is blocked in Turkey.

https://en.wikipedia.org/wiki/2017_block_of_Wikipedia_in_Tur...



How about this one? http://archive.is/1WmHz


I know that.


Yep, I've been thinking a lot about "serverless is pretty much CGI".

But actually, Lambda can keep your server open to serve multiple requests without spawning new processes all the time.

I realized that the key feature is not having the server open all the time, only when necessary. So I took systemd's socket activation idea and added socket deactivation, i.e. sending SIGTERM to the server after a period of inactivity: https://github.com/myfreeweb/soad

Plus, one of the things that really deserves more hype and popularity is CloudABI (https://nuxi.nl), an ABI that lets you have one binary that runs on multiple operating systems as-is and always runs sandboxed.

The serverless runtime of my dreams just takes CloudABI binaries (packed with files in a tarball) that listen for HTTP on a given socket, and does the activation-deactivation thing to only run on demand :)


Easily, yes, but I don't think it would be efficient. AWS Lambda, from what I can tell, doesn't launch a new process for each request; instead, it just "freezes" the process between requests and then calls a function to handle each one.

In that regard, it's closer to FastCGI than CGI itself.


There are two versions, one with SQLite built in. The choice of TCL is super interesting given its pedigree and could easily bring TCL back from its quiescent state.


I worked for the company that powered most of the world's (legitimate) online gambling companies.

We had a C application server and used TCL as a business logic language. It was a great fit for writing code fast and ease of maintenance.

You can hear me get a laugh when I say we use TCL in a talk I did here:

https://www.youtube.com/watch?v=zVUPmmUU3yY


Curious to know if this C application server is an open project or home-crafted closed code?


If the company in question is Openbet, it's closed source.


I wish it were opened up. It was a thing of beauty.


Seems unlikely, IMO. I tried Tcl/Tk and wanted to like it, but it's all over the place. Half the docs you find are outdated, and figuring out how certain pieces connect together is a bit like walking through a maze.

Worse yet, a lot of the tools aren't open source or freely available, even for tinkering. I'll happily pay for a license if I end up using it for business, but I'm not going to pay hundreds of dollars up front just to play around with a niche language.

There's lots of great languages out there which include completely free and open source tools. Maybe I'm spoiled.


What you're saying is quite the opposite of what it actually is. All docs are in one place (well, two places, both hosted at tcl.tk); one of them is the actual docs (also installed as manpages on your system), and the other one is the wiki, a huge source of useful information written by both developers and other users.

I can't think of how Tcl/Tk docs can be massively outdated given that Tcl/Tk maintains almost 100% backwards compatibility, so most of the old advice works just as is.

Regarding open source, nearly all tools are not only open source, but usually under very permissive licenses (e.g. Expat or BSD-2), and there are in fact very few commercial closed-source tools (basically Komodo, what else?).

Are you sure you actually tried Tcl/Tk and not something else?


This (or anything else) isn't going to bring about a resurgence in Tcl use. A simple web app framework like this is something many languages have, and I think in this case it's best seen not so much as the web framework being the centrepiece. Instead, for the existing (and generally aging) body of people who have a good-sized chunk of their work in Tcl, this becomes another utility for building an interface to that other code.

As an aside, Tcl was influential as a source of inspiration in the early days, ranging from event-driven programming, to web servers big (AOLserver) and small (Brent Welch and Stephen Uhler's <200 line server/HTML parser) back when "normal" was forked CGI scripts.


It's likely Tcl because SQLite originated as an extension of Tcl.


The "everything is a string" mantra Will be back!?! The Millenial programmer will try to comment out a line of code while the TCL interpreter Will insists that is a validade code...

Come on guys, let some technologies die peacefully, OK? We don't want those days back :)



I skimmed it very quickly, but quite a bit of that 'defense' of Tcl seems to be assuming that the problem with "everything is a string" is performance. It isn't.

There are very good reasons that stringly typed programming is frowned upon and 'performance' is very seldom one of them.

There also seems to be a weird assumption that everything-is-a-string means that you don't have to think about types. It's actually the exact opposite, since now you don't have types for the compiler/runtime to help you combine values in well-defined ways.

(I realize that the thing was written ages ago and we/the author may have learned a thing or two in the mean time.)


"What is Tcl"

http://wiki.tcl.tk/299


If you take issue with my post, then please be specific rather than throwing a link to a big page at me.

Seriously, am I somehow supposed to divine your criticism of my post from that link?


I'm sorry you interpreted my attempt to provide you with additional information about why some people might see Tcl in a positive light as criticism.


The problem with stringly typed languages is that they don't fail fast or clearly. You end up having to spend more time debugging and diagnosing weird behavior when things go wrong and less time actually productively writing code.

This is a problem with weakly typed languages in general, but stringly typed is the nadir of weakly typed.

The problem increases geometrically with the size of the software system - you can't make building blocks to build other software on because the foundations are too unstable, so the only thing that really works is short programs that don't do very much - or, as some people put it, "toys".

He mentions strict format checks as a way to offset that, but I don't think that's nearly enough.


Not to disagree with you, but - what "modern" alternative would you recommend as a replacement for "wapptclsh", which is a compact statically-linked binary with wapp and sqlite libraries baked in?

Binary "wapptclsh" + your .tcl script + optionally .sqlite file, just 2-3 files are sufficient to make an app chroot/jail/container, if I understood correctly section 2.0 here: https://wapp.tcl.tk/index.html/doc/trunk/docs/compiling.md


For those concerns, I'd say Go. You can statically link in any pure-Go library trivially. SQLite specifically may require a C file because of its nature, but for accessing databases that take network connections there's a pure-Go driver for most things you've heard of, and there are other static database options for Go if you want them (though they tend more towards NoSQL; if that's really a problem, personally I'd just bite the bullet and include the SQLite library).


I think that both languages for the large and for the small have their own places. Something optimized for the large-scale doesn't automatically work well for the small.

For example, I always feel that Tcl is better than Python for GDB scripting. At least, I really don't like typing parentheses in an interactive console. Also, there is no 'real' (read: general-purpose) programming language that is able to beat BASH in terms of ergonomics, even though we all know BASH sucks once the program gets large.


At the risk of sounding like a broken record, I'll say that Perl is better than bash for bash-like stuff in every possible way.


If by the "ergonomics" of bash, e3b0c meant the interactive usage, I'd have to agree that bash is a much better interactive shell. Once you start programming it, though, it gets bad fast.

One of my heterodox opinions is that "shell" is actually two languages, an interactive language for moving around the system and executing commands, and a non-interactive language for programming system interactions, and they really shouldn't be the same thing because there's a list of things as long as my arm that should be one way for one case and the other way for the other. "Error handling", for instance, is an entire category on its own; what a human sitting at a shell wants and what a program wants are just night and day different, and at least one side is going to lose if you try to straddle the gap with one language.


In html, everything is a string.


There's a standard serialization of standardized DOM objects in which everything is a string. But the fact that the serialization is a string isn't very interesting, because pretty much everything has a serialization, or could easily have one, where everything is a string; the underlying capabilities of what is being described are the interesting thing, and are where all the differences between data types and formats arise.


That's a ... terrifying way to think about HTML.

HTML is not a serialization of code objects. It is a document authoring format. The DOM is just a way of modeling the documents so that code can perform various transforms on them. But the document is the important part, not the model.


But not every string is HTML.


I really liked programming in Tcl way back when. The ease of producing utility UI programs with Tk was a benefit even if the main part of the program was purely server-based (I used it for data management, reporting etc.).

And while the language had its warts, the syntax was relatively easy to comprehend, so even if things went southwards, it didn't take long to see where you made your mistakes. As long as you didn't upvar and uplevel the heck out of it.

Also very easy to create DSLs with it and drop down to C if you needed a function to be a bit faster.

Tcl would be a lot more popular if it weren't for Sun's divided interests, and if they had managed to get something CPAN/GEM/NPM-ish up waaaaaay earlier.


> Gitk was the first graphical repository browser. It’s written in tcl/tk.

Source: https://git-scm.com/docs/gitk

Actual tcl source: https://github.com/git/git/blob/master/gitk-git/gitk


I was quite happy with those days!


Me too, it was the foundation ground of our .com wave startup.


Well, at least you know for sure it is a string, unlike with a couple of languages I've had the misfortune to use in the past year or so where you really do need to check what the hell you ended up with ;)


YAML can be pretty confusing at times.


I have the great fortune of dealing with YAML every day. The sheer number of ways in which string literals can exist is eye-opening, as is the interpretation differences across libraries. That the YAML spec has not been updated in a decade is icing on top.


I wrote this because while I like the basics of YAML, I think 95% of the spec is idiotic and needs to be tossed away:

http://hitchdev.com/strictyaml

Ideally there should only be one scalar type (strings), two ways to represent strings (simple | multiline), just one way to represent mappings and lists. No nodes/anchors, no "YAML is a superset of JSON" silliness and absolutely no implicitly typed scalar values.
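
A quick sketch of what that looks like in practice, using strictyaml's load/Map/Str/Int schema API (the keys here are just an example):

    from strictyaml import load, Map, Str, Int

    yaml_text = "name: demo\nport: 8080"

    # Without a schema every scalar comes back as a string: no implicit typing.
    untyped = load(yaml_text).data        # port is the string "8080"

    # Types appear only where a schema explicitly asks for them.
    schema = Map({"name": Str(), "port": Int()})
    typed = load(yaml_text, schema).data  # port is the integer 8080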


TCL 8 changed that to bytes, if I remember correctly (too lazy to search now).


Tcl 8 brought "dual-ported values" (among other things) to the table. Values were stored in "objects" (not to be confused with object-oriented programming objects) whereby the "string representation" was always available ("everything is a string" (EIAS) is a core logical tenet of Tcl, and has not gone away), and a "native representation" is available for (context-dependent) performance reasons. What does this mean? If you have "set a 0.8657309" and do math ops with it, the backing object will have a native float type available to do its work with. Prior to Tcl 8, the logical language promise of EIAS held, but the implementation within the Tcl interp was a char* array of bytes, too. That was a long time ago, though.


Our savior, save us from bloated package management, API inconsistencies and server-side JavaScript.

Thank you!


I'm not a web developer, but the times I tried to do some web development I found all the things the cool kids talked about overly complicated, and fell back to simple things like simple scripts on the server side and simple JavaScript on the client side, or even semi-dynamic/static webpages.

I always found it interesting how the Tcl community solves things, using simple and powerful Tcl metaprogramming features (see the beautiful examples on wiki.tcl.tk).

I've done a couple of small business solutions in Tcl/Tk as desktop applications. Now I feel more encouraged to try to develop new ones for the web :-)


Aolserver’s legacy lives on?


naviserver! It's also being actively developed.

https://bitbucket.org/naviserver/naviserver/commits/all


As an ex-AOL employee that makes me grin :-)


There's a bug on the site: if you click "files", then scroll down and click one of the links under "Further Information", you get a 404.


Thanks for reporting this. It is an interesting problem that I'm not sure how to solve (it being early yet, and me being without coffee).

The README.md file is the homepage. It is intended to be displayed using the URL https://wapp.tcl.tk/index.html/doc/trunk/README.md and the hyperlinks are relative to that URL. When you click on the File menu, it shows the README.md file using a different URL - http://wapp.tcl.tk/index.html/dir?ci=tip - and the hyperlinks don't work for that URL.

Perhaps the right solution is to rename the README.md file to something different so that it is not displayed by default from the Files menu...


Perhaps a ``<base>`` tag (set to the directory of the included file, i.e. that of README.md) would suffice, since all other URLs on the page seem to be anchored to the site's root?


FIX: Get Coffee.


I get it that TCL is often the tool of choice for programmers who want to build a simple UI that gets things done. It's easy and cross-platform.

But I have yet to see a beautiful TCL application. Web apps today can't just work, they have to be user-friendly and familiar to a wide audience. Would love to be proven wrong, but TCL does not seem to fit the bill.


This is more or less a TCL version of Node's Express.

I'm not necessarily advocating for that... but what on earth does anyone's choice of server-side framework have to do with the "user-friendliness" or "familiarity" of the client-side UX?

You can serve up a bleeding edge Vue.js app from a PHP backend, or you could serve up a bowl of jQuery spaghetti from Rust or whatever new language comes out next week. These matters can sometimes be tangentially related, but are largely orthogonal.


I'm probably misreading this article. I've always known TCL as the quick-and-dirty user interface of choice. I guess that's not the focus here.


I honestly doubt you clicked the link at all before commenting, and likewise doubt that you've actually had any first-hand exposure to Tcl in the past.

You SEEM to be referencing the "Tk" portion of Tcl/Tk. Tk is the desktop GUI framework that comes with most distributions of Tcl. But it has no apparent relevance to this or any other Tcl-based web framework.

This is kinda like dismissing a Java server-side web framework because Swing is ugly, or dismissing C# on the web because you don't like WPF.


Okay, happy to admit when I'm not fully informed, and this is one of those times. I have interacted with TCL/TK and have made small programs with it, but I have not used TCL directly in the past. In fact, I did not even realize TCL was a full language. When I read this article, given my background with TCL/TK, I understood it to be a web framework using a weird desktop GUI... which would be quite odd.


Yes, when I did something like that nearly 20 years ago it was quite odd! But turned out to be terribly useful for us at the time. (Google 'proxytk' if you're curious)





