On the Road to Pure Go X11 GUIs (janouch.name)
198 points by wolfgke 72 days ago | 118 comments



A journey of a thousand miles begins with a single step. Indeed.

So you are at the first mile of that road...

On the second mile you will discover that X11 is just a protocol for presenting bitmaps on monitor surfaces, plus a generator of events coming from input devices.

So on the third mile you will decide that you need a system of widgets, and you will end up with a tree of HWNDs or GtkWidget*s: a UI DOM element tree as the ultimate abstraction.

Somewhere further along the way you will come up with the bright idea that the look and feel of those UI bricks in the tree needs to be customizable. So you will try to invent a style definition system/language. And CSS will be born again.

As a result, at the N-th mile, you will end up with something close to Sciter/Go: https://github.com/sciter-sdk/go-sciter , which already works not only on X11 but on Windows and Mac too.


Or you can stop halfway and be content with it being functional even if it is ugly. I'm pretty fond of Tk.


Or Tk, yes, if you prefer buttons à la turrets of the Panzerkampfwagen VI Tiger and the music of Sabaton.


Ttk enables modern widget rendering in Tk for those who can't appreciate the benefits of drawing without pixmaps everywhere. It's been available for 10+ years.


10+ years ago Tk was already outdated in terms of UI design.

Motif widgets on System 7 and Windows NT/2000? No thanks.


You missed the part about Ttk, which makes Tk use modern widgets, hence is nowhere near "classic" Tk and very much native in terms of rendering.

See https://tkdocs.com/tutorial/widgets.html#button


Any GUI developer knows that UI/UX is much more than just rendering the same widgets.

Back when Tk still mattered, even the attempt to emulate the Win32 L&F was only partially successful.

I very much doubt that it has improved.


It's not emulation, it's a wrapper on top of the native UI toolkit: Win32 for Windows, Cocoa for macOS.


I am speaking about the time when Tk mattered, around the 2000s.


I’m going to borrow this comparison when someone recommends Tk :)))

Solid tech, but c'est la vie, software still follows fashion, as you so amusingly showed :)


Who wouldn't!?


> So you will try to invent style definition system/language. And CSS will be born again.

If he arrives at QML then he will be doing something awesome.

https://en.m.wikipedia.org/wiki/QML


Yes! I never understood why QML didn't take off more...


Because many devs want to get paid without paying for the tools they use to sell their work.

Naturally with such culture one gets Electron instead.


Licensing issues? The situation around Qt licensing is still not clear, AFAIK.


It is pretty clear: want to give your work away to others for free, as you enjoy using Qt for free? Then give it away for free.

Want to earn money with the work from Qt devs? Give them something back to pay their bills.


> Want to earn money with the work from Qt devs? Give them something back to pay their bills.

that's absolutely not mandatory with Qt's license (though a nice thing to do). Besides, a non-negligible part of contributions to Qt aren't made by The Qt Company but by others such as KDAB, etc... which won't see a cent.


KDAB already earns their money selling Qt expertise and advocacy, which they contribute back to Qt community.

My point was about those that want to use Qt as free beer, don't contribute anything back and even feel entitled to complain on Qt Creator and Qt bug reports.


> Want to earn money with the work from Qt devs? Give them something back to pay their bills.

I found that the model is good but the problem is that the Qt commercial licenses are quite expensive. You would have to reach really far to find an example of a product with a greater per-developer cost.


Qt commercial licenses are pretty cheap when compared with traditional enterprise prices, which has become their target market, given that they are the only ones willing to pay for tools.


They are cheap compared to what? They are more expensive than the highest tier of Labview licensing (!), way more expensive than the highest tier of Visual Studio (MSDN) subscription, Adobe CS, AutoCAD... I'm not cherry picking, they're literally the most expensive subscription based commercial software I'm aware of.

What are they "pretty cheap" when compared to?


Oracle, SQL Server, SAP, WebSphere, Rational...

Qt more expensive than Visual Studio Ultimate?!?


Hopefully it's apparent because of the extremity of the comparisons being made here (a production SQL Server deployment is more expensive than a Qt seat, yes...) how high Qt's per-developer cost is today.

In my naivete I would have said a decade ago that a single author of traditional shareware-type stuff should just include their source code. After seeing what happened to the Paint.Net guy, people taking his work and just renaming it (as well as removing credit and license info) and reselling it, I'm afraid closed source is still the way to go if you want to not get ripped off while distributing Windows consumer software. So the high cost of the Qt license becomes relevant there.

It's fine, it's their right to price their software however they like, etc etc. It's just a scary commitment if you want to experiment with putting a small product out there.

> Qt more expensive than Visual Studio Ultimate?!?

They're just calling the top VS tier "Enterprise" now. The subscription version of VS Enterprise (which includes office and Windows all the way back to XP) is $3k a year now.


> Qt more expensive than Visual Studio Ultimate?!?

Yup.


Seems pretty clear to me: https://www.qt.io/download

Most of it is licensed under LGPLv3, some under GPLv3, and there are commercial licenses available for almost all of it.


IIRC, sciter isn't a free (libre) software, right? That might be a problem for some applications.


It takes quite a lot of money to convert the project to open source. Social responsibility and maintenance, all that. I am accepting offers, if that is of interest.


Looking at the example screenshots of Sciter applications, it is now apparent to me why the GUIs of Anti-Virus applications all look so similar.


Sciter seems cool. Too bad it’s proprietary software.


Too bad my supermarket lady doesn't accept my push requests.


People want free software exactly because their supermarket lady doesn't accept their push requests.

They need to save money somewhere, and until now supermarkets have never given stuff away for free, whereas programmers do.


Yeah it is such a big offence to expect to be paid for our work.


Who is this response addressed to? Where did I say something to that effect?


To everyone that feels proprietary software is the work of evil and that we should just give our work away for free.

Want open source stuff? Fine, but then pay the developers that created the software for their efforts.

Note I say the ones that created the software, not the ones that came after, forked the repository and get the money instead of the real authors.

That is why FOSS tools for RAD desktop development lag so far behind enterprise tooling.


a hint: it's very customary to inject a proper disclaimer into this kind of post


So... many... frameworks ... What's the C version of Sciter?


https://sciter.com/download/

> include – C and C++ include files that define the public Sciter engine API: window level functions, DOM access methods and utilities.


Though I do not dare enter the realm you describe, I applaud your quest for a go-based X11 ecosystem and hope you find considerable help from the go community to at least accelerate your vision/model to a proof-of-concept.

You may get the usual off-handed opinions when golang is involved - "Rust ...", "Yeah but Go doesn't ...", etc.

I do wish you had a timestamp of your article for reference in the future.

Cheers


> I do wish you had a timestamp of your article for reference in the future.

Well, there are third party HN reply notifiers, sounds like someone should make Remind-Me-HN. Then again, now that I've said that, I suspect there are probably plenty of services that allow you to supply a note and email you at a later date (other than just a calendar app...)


> I keep thinking about Wayland. It's really a double-edged sword. Aside from my pet peeve that it by-design blocks, for inane security reasons, my favourite feature of sdtui, which is X11 selection stealing, it also offers no mode that would work over a network—for example, to transfer picture data you can only use EGL or shared memory.

"it by-design blocks, for inane security reasons" or "for inane security reasons, my favourite feature of sdtui, which is X11 selection stealing". Either way, I'm curious what the inane security reasons are. Also, why is blocking bad in Wayland?

It seems like one of the biggest complaints about Wayland is that it doesn't work over a network. Does anyone actually work that way? I've tried it a few times, and every time I tried it it turned out to be unusably slow.


Wayland does not support network transparency because you're supposed to use a VNC server. Making the X protocol network-transparent worked because draw commands don't make for much traffic, but today you'd be hard-pressed to find any X application that does not just send entire bitmaps (or rather, OpenGL textures) to the X server. It's similar to how Mac OS moved away from a PostScript-based display architecture, if my knowledge of Mac OS history is to be trusted.
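To put rough numbers on that shift, here's a back-of-the-envelope Go calculation comparing a core-protocol draw request against shipping the full window as a bitmap. The request sizes follow the X11 core protocol encoding, but the scenario (50 rectangles, a 1920x1080 window) is invented purely for illustration:

```go
package main

import "fmt"

// Compare what crosses the wire per frame: a core-protocol
// PolyFillRectangle request versus pushing the whole window via
// PutImage. Header sizes follow the X11 core protocol encoding;
// the counts are illustrative, not measurements.
func main() {
	const (
		rects    = 50           // widgets redrawn as filled rectangles
		polyFill = 12 + 8*rects // 12-byte request header + 8 bytes per RECTANGLE
		w, h     = 1920, 1080
		putImage = 24 + w*h*4 // 24-byte header + 32-bit pixels for the full window
	)
	fmt.Printf("draw commands: %7d bytes\n", polyFill)
	fmt.Printf("full bitmap:   %7d bytes (%.1f MiB)\n",
		putImage, float64(putImage)/(1<<20))
}
```

A few hundred bytes of draw commands versus megabytes of pixels per update is the gap that made the old model network-friendly and the new one not.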

Having said that, I don't know if there are working VNC implementations for Wayland. Due to Wayland's security architecture, a VNC server (or any other thing that grabs screen contents) requires permission from (and thus cooperation with) the compositor.


GNOME and KDE have working support for remote desktop. Weston supports RDP. Sway (and other wlroots-based compositors) have been working on protocols to support it.


Mac OS never had a PostScript-based display architecture; NeXTSTEP and Sun's NeWS did.

OS X replaced Display PostScript with Quartz, which uses a PDF-like architecture.


GTK2 apps used to work fine over a network. On a LAN connection there was almost no difference from local. If your "modern" toolkit does things the stupid way around and just sends entire bitmaps of changed windows per update, rather than using the core protocol or XRender to redraw the window, that's not X's fault.

Wayland is pretty much a project whose unstated goal is not only to raze X to the ground, but also to salt the earth where it once stood so that none of its core principles survived the transition. That's why the security reasons are "inane".

It would be possible, and desirable, to design a secure X server with zero alterations to the X protocol itself, and most clients wouldn't know the difference. But again, security isn't really the goal. The goal is to destroy X so thoroughly that no one builds anything on top of X, even from a conceptual standpoint. Otherwise they wouldn't have gone to the trouble of deliberately eliminating network transparency, server side rendering primitives, window managers, etc.


I've been looking at Wayland for a little while now, and have been slowly coming to a similar conclusion based on the limited exposure I've had to it. I hadn't quite reached the view you describe here, solely because I was holding out a little optimism that things weren't really that bad.

What I'm stuck on is the whyyyy. X has its faults, most definitely, but... X works for quite a lot of people, too.

I've found that one of the most helpful additions to my worldview has been a sense of implementational balance - and particularly the understanding that reacting to perceived extremes by counterbalancing all the way in the direction of the opposing extreme does not produce balance, but rather drowns out the old extreme by all the noise made by the new one, unless the counterbalancing effort incorporates an embrace-and-extend model.

I get the impression Wayland is only being worked on by people who've either been heavily bitten by X, influenced by those who have, or influenced by a rumor amplification-chain of toxicity about X.

The 3rd is both the saddest and likeliest part, but understandably so - there was a nontrivial amount of X11 vitriol back in the 90s and 00s.

I think this is primarily because the hardware ecosystem and the various vendor implementations were all indeed atrociously broken, so X had to be a bit broken too in order to make everything work. Graphics drivers (PCI access!) did their work in userspace, so X had to run as root. The kernel had no idea about the video mode; it was just told "if you try to print anything to the screen right now you'll confuse the GPU so badly, so please just don't" and X handled the rest. GPUs were going through formative times (circa 2006) with the introduction of fundamental OpenGL functionality (now used for compositing). X wasn't modular, _and_ there were two vendors. And of course 256-color paletting was still something that got talked about and needed to be maintained through the whole stack.

These were simply growing pains.

Today, X.Org is modular, which is a small mercy for everyone's sanity; KMS simplifies driver development (as in, X doesn't have to be involved at all anymore); GPUs are somewhat (!) more stable and offer a more consistent GL API; nobody really complains when the display doesn't support <16bpp (ie requires paletted display) anymore. The list goes on.

Things are absolutely not perfect... who knows, there may be an XInput3 :)... but things are on fire significantly less.

So, it is IMO utterly unreasonable to lump the good parts of X in with the bits that maybe made a few core devs gnash their teeth and kill the whole thing for guilt by association!

Sadly it looks like this is what we're stuck with.

So, X11 will live on - but as a kind of negative space that future software historians will puzzle over and go "why did this Wayland thing state that they avoided such an oddly specific set of functionality?" This kind of negates the devs' apparent desire to make everyone forget it, haha. If they'd just taken the good parts and made something 11X better than X11...


The people who designed Wayland have decades of X11/XOrg implementation maintenance experience between them.

For more details I recommend Daniel Stone's LCA talk that was linked by emersion elsewhere in this thread.


The challenge for anyone looking to incrementally improve X11 is that “modern desktops” (i.e. Mac & Windows) leverage totally different design approaches vis-a-vis centralized authoritarian IPC and graphical hardware access.

I’m glad people decided to put their heads together to try to collaborate on a new standard, and horrified they decided to make it Linux-only.


I don't think that X11/Wayland should work over the network at all. They don't have enough information to do this efficiently. On the other hand, it works REALLY well at the toolkit (GTK/Qt) level. Try GTK Broadway:

https://developer.gnome.org/gtk3/stable/gtk-broadway.html


If only there were some kind of cross-platform way to send a tree of UI elements, and some kind of client to render it...


I know you are being facetious (HTML!), but this did remind me of the Fresco project:

http://wiki.c2.com/?FrescoFramework

http://berlin.sourceforge.net/

Hard to believe that 15 years have already passed since I first heard about it.


I just ran QEMU in my web browser and I feel really weird.

On the one hand, VNC suddenly just became obsolete. On the other hand... Ubuntu Core is playing with getting Webkit-GTK running directly on top of DRI (via Wayland), and I hope like mad there's never an alternate reality where Chromium takes the place of X11... eep

(I mean it already kind of has, with Electron :S)


X2Go (the OSS fork of NoMachine's NX) works really well over a network. Though I really think it might be best to do the rendering client-side and push video frames. I do concede that the two models are really two separate ways to do computing, based on where your actual compute resources are located. With X11 forwarding, the rendering happens server-side, at the user's terminal [1], while with e.g. VNC the rendering happens client-side on the remote machine and the video frame is presented to the user, who is basically sitting at a dumb terminal. HTML delivered over HTTP would be the most widespread technology that uses user-side rendering today.

[1] The terminology might seem weird to you with the server-side being the user's side, or what you'd usually term the "client", but it makes sense - you run the X11 server on your PC, then connect over the network to a remote host using e.g. SSH, so now you have an X11 server and an SSH client running on your PC, connected to an SSH server on the remote. Then you open a socket on the remote side connected to the X11 socket on the local side via the SSH tunnel and remote programs initiate connections to your X11 server. You type at your terminal, that gets picked up by your SSH client and sent to the SSH server, the SSH server relays your typing to the remote shell, which execs the remote X11 client, which connects to your X11 server. Simple!
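The addressing scheme described above is what a client recovers from $DISPLAY before it ever opens a socket. A minimal sketch of that parsing in Go; `parseDisplay` is a hypothetical helper written for this comment, not taken from any real toolkit:

```go
package main

import (
	"fmt"
	"strings"
)

// parseDisplay splits a DISPLAY value such as ":0", "remote:1.0" or
// "localhost:10.0" into host and display number. An empty host means
// the local Unix socket /tmp/.X11-unix/X<display>; a non-empty host
// means TCP port 6000+<display>, the convention SSH X forwarding
// piggybacks on (forwarded displays usually start at :10).
func parseDisplay(display string) (host, dpy string, err error) {
	i := strings.LastIndex(display, ":")
	if i < 0 {
		return "", "", fmt.Errorf("malformed DISPLAY %q", display)
	}
	host = display[:i]
	dpy = display[i+1:]
	// Strip an optional ".screen" suffix.
	if j := strings.Index(dpy, "."); j >= 0 {
		dpy = dpy[:j]
	}
	return host, dpy, nil
}

func main() {
	for _, d := range []string{":0", "localhost:10.0"} {
		host, dpy, _ := parseDisplay(d)
		fmt.Printf("%-14s -> host=%q display=%q\n", d, host, dpy)
	}
}
```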


> Does anyone actually work that way? I've tried it a few times, and every time I tried it it turned out to be unusably slow.

I use it pretty much daily at work to run interactive programs on beefy data-reduction machines (just ssh in with X11 forwarding turned on and it just works). Over the local network it works pretty much as well as locally - lightyears ahead of VNC or RDP - at least when using programs whose UI toolkits aren't built around the Wayland model.


> It seems like one of the biggest complaints about Wayland is that it doesn't work over a network. Does anyone actually work that way? I've tried it a few times, and every time I tried it it turned out to be unusably slow.

I spend most of my time in Emacs on Linux. At an old job I had to use a Windows desktop, so I used a full screen Cygwin X server to display an Emacs window running from one of our Linux test servers. This also had the advantage that Emacs would keep running in Screen, even after my dev machine was turned off.

Note that you can't do this using GTK, due to a long standing bug https://gitlab.gnome.org/GNOME/gtk/issues/221 . I avoid this by compiling Emacs to use the "lucid" toolkit.

Note that Emacs also works in a terminal, so I could just use SSH or MOSH rather than X11, but I much prefer the GUI version.


https://pipewire.org/

If I understand correctly that is the next gen protocol for this type of stuff, correct me if I'm wrong.


Since no one seems to have answered your other question:

The intended meaning is almost certainly "It blocks X11 selection stealing for inane security reasons."


It's so interesting to see a sudden surge in desktop GUI frameworks (or at least a sudden surge in interest, if not actual new tools). I have been exploring this area for the last few weeks for a new project and have stumbled across the obvious ones like Qt, GTK, wxWidgets, JavaFX (which IMO has so much potential, but not much development is being done now) and interesting ones like JUCE, Sciter, NanoGUI.

Of course, the elephant in the room is Electron, which in a way showed how much a new take was required in this area but also has a lot of limitations wrt app size, processing power etc.

The one I have been following (or rather waiting eagerly) is Scenic by Boyd Multerer which was incidentally announced / released yesterday after being shown in a very early state last year at ElixirConf. I find the approach of leveraging Elixir and OTP for GUI very interesting and the video should give you a good idea [1]

[1]: https://www.youtube.com/watch?v=1QNxLNMq3Uw


> With that out of the way, I've been able to find

Don't forget the quite popular https://github.com/andlabs/ui, albeit missing several needed widgets (but work is active there, and the base C lib is at https://github.com/andlabs/libui). Granted, it uses Cgo, but that should not be a deal breaker for interfacing with the native OS libs.


he mentions that in the previous paragraph:

> First, people have naturally written bindings for Qt, GTK+, ui, ...


Ah, I must've missed it initially. Regardless, it seems with enough constraints you can justify NIH. Granted the project has merit on its own, but I'm not sure why they are so afraid of Cgo when doing native OS interaction.


What happened to using native platform controls?

All these toolkits look out of place on any platform.


"What happened to using native platform controls?"

This appears to be X11-specific, and X11 essentially does not have native platform controls. Also if you're in X11 you've quite likely already given up any real hope of all your windows looking the same. You also seem to be casually assuming this would be cross-platform, but this would be the equivalent of implementing something in raw Win32 API calls. There's zero cross-platform concerns here, on the grounds it'll never be cross-platform. In fact, given that it's pretty likely the move to Wayland would be completed by the Linux community before this project could reach a point where it would be something you might consider using seriously for a non-trivial project, it's debatable whether it's even good for one platform.


X11 doesn't have native controls. The closest would be Motif, or Xaw, but nobody sane uses those. GTK and Qt are both equally "native". Chrome uses its own GTK-derived UI lib, IIRC. As long as OP's UI implements accessibility and follows the desktop's settings as closely as feasible, I see no problem.


This has always been my feeling. Unless you are a designer and are trying to design a consistent look and feel for your application, why wouldn't you just design for the native look and feel?


As more and more apps are web based, this seems like an antiquated notion. Consistency lost, and I doubt it's important any more. Why should "desktop apps" follow those rules at all? Even Apple and Microsoft violate their own rules with their flagship applications.


> Even Apple and Microsoft violate their own rules with their flagship applications.

Sometimes there's violating your own rules to push the State of the Art forward and this will eventually become the new standard. Then there's violating your own rules because you don't value consistency.

Most organizations with enough software are pretty guilty of that latter one, but destroying any semblance of consistency because it is economically convenient for the development team doesn't seem like a step forward to me.


I disagree, simply because the push toward Electron and the like is rooted more in developer convenience and the corporate drive to turn client developers into a one-size-fits-all dirt cheap bottomless resource than in any kind of user preference.

Also, a big part of an individual’s platform preference boils down to things like conventions, look + feel, workflow, etc. By using electron one is tossing these choices made by the user in the garbage bin in favor of whatever the developer feels like supporting or what the designer thinks would be fun to design.


Efficiency in the market and development process drove client developers to be a dirt cheap bottomless resource. Before iPhone apps (and later Android), you could create a new application and sell it in a box for a good living. The only "box apps" which most people will still pay for are grandfathered in things like Photoshop, Matlab, and Office. Even new video games are mostly in-app purchases or ad-based. I don't like electron based apps, but that's not what devalued developers.

Besides, I wasn't talking specifically about electron-based apps, I was talking about apps in general which don't use the native widget toolkits.

Also, I can't speak for everyone, but my preference for a platform boils down to whether it provides a Unix shell and how pushy it is about forcing me to download updates or upload personal information. Consistent widget look and feel just doesn't matter when I spend most of my time in a browser or shell anyways.


> Also, a big part of an individual’s platform preference boils down to things like conventions, look + feel, workflow, etc. By using electron one is tossing these choices made by the user in the garbage bin in favor of whatever the developer feels like supporting or what the designer thinks would be fun to design.

Other multi-platform toolkits do not do better. They all violate the platform's conventions. Qt, Swing, GTK and co. have their own look and feel and conventions that do not match the platforms they run on, so I don't see how the web is worse in that aspect. Don't single out web techs on that specific point.


Qt at least tries to feel native (unless you’re using Qt Quick) and goes further than the others listed in feeling native. If one cares, it’s pretty easy to make a Qt app feel “almost native”.

That said, I favor projects that have per-platform true native clients with a platform agnostic core over those that chase the “dream” of write once run everywhere. Transmission is a great example of how well this can work.


Yeah, I greatly respect transmission for that design


Some of us still care.


Even in Win32 land this idea is going away.

More and more Windows applications are theming/drawing their own look and feel onto their UIs to give more consistency across the organization's applications rather than consistency with Windows.


I thought X was on its way out?


It has been on the way out for the last 10 years. I suspect that X11 outlives Wayland simply because the latter will be replaced by yet another technology but X11 applications will still be supported.


Someone mentioned GTK3's Broadway backend in another comment.

Try this (the two mentioned commands come with the gtk3 package):

- In one terminal, run 'broadwayd'

- In another terminal, run 'GDK_BACKEND=broadway BROADWAY_DISPLAY=:0 gtk3-demo'

- In _your Web browser_, go to http://localhost:8080/

- Reattach jaw


Oh boy

What could be a real use case for that?


X has been dying for possibly longer than Apple. Which is quite impressive.


X, along with FreeBSD and Lisp, has been dying for a few decades now at least. I think the quote goes "It doesn't look any deader than usual."


I think the eventual goal is to get the major DEs and toolkits to stop supporting their X backends in the near future. Then X would be effectively dead but for ancient legacy applications.

This has already happened for ALSA with some applications. If you want to use BlueZ or Firefox, your options are either PulseAudio or no sound at all under Linux.


They will still support X as an independent server application no different than running Xorg in Windows or MacOS. That can never be obsoleted.


What about FreeBSD is dying?


I believe it's referring to the old 'It is official; Netcraft confirms: *BSD is dying' statement.


TIL the PS4 ran a dead OS


For the past 30 years or so...


>Ultimately, since I can't go wrong with X and I can go wrong with Wayland, the choice was easy.

Keep in mind that Wayland has been designed by people who've been maintaining X11 for years.

>it also offers no mode that would work over a network

X11 has a huge number of issues when it comes to network transparency. tl;dr is: it's not designed for this, you should use a proper remote desktop protocol instead. More info:

https://www.youtube.com/watch?v=GWQh_DmDLKQ

>Of that, EGL even requires Mesa and thus Cgo.

That's not true anymore. You can use the DMA-BUF protocol instead of wl_drm. I'm not sure how to export a DMA-BUF without Cgo though… But that's a different issue. If you're going to use the GPU you'll probably need Cgo anyway.


> X11 has a huge number of issues when it comes to network transparency. tl;dr is: it's not designed for this

X is very much designed for network transparency. The modern extensions that sling around pixmaps for every little graphical operation are what isn't. I can still remotely run X11R5 clients thanks to X and its network support.


You can, if by "network" you mean low latency LAN. Modern network remoting protocols work over high latency WANs as well, which X11 fundamentally can't do because too much of the protocol requires synchronous round-trips.
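A toy Go model of that synchronous cost; the request count is invented for illustration (toolkits do issue many blocking InternAtom-style requests at startup, but 200 is a guess, not a measurement):

```go
package main

import "fmt"

// Each synchronous request blocks the client until the reply comes
// back, so total waiting time grows linearly with round-trip time.
// Numbers are illustrative, not measurements.
func main() {
	const roundTrips = 200 // hypothetical startup: atoms, extensions, properties
	for _, rttMs := range []int{1, 30, 150} { // LAN, regional WAN, intercontinental
		fmt.Printf("RTT %3d ms -> %5.1f s spent waiting on replies\n",
			rttMs, float64(roundTrips*rttMs)/1000)
	}
}
```

At LAN latencies the round-trips disappear into the noise, which is why X forwarding feels fine locally; at WAN latencies the same traffic pattern turns into seconds of stalls.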


You can run X over a WAN with the eyecandy turned off. It could be run over a slow modem. The problem lies in everything that was tacked on assuming it would only be used locally.


Synchronous round-trips for atoms were there in 1980s' X11.


I think it would be much easier just to use Qt or GTK. Otherwise you'll have to write your own widget toolkit, make it cross-platform, and solve hundreds of issues (like hotkeys not working in alternative keyboard layouts); this is going to be titanic work.


The goal of the project, apparently, is to make a GUI toolkit in Go with no C/C++ dependencies whatsoever [1]. Of course, going that route means coming to grips with the fact that using a raw display environment requires implementing a lot of stuff, like mapping keyboard events to actual text strings.

[1] I guess we're supposed to ignore the fact that the X11 server is a massive C application.


But you don't link against the X11 server; you communicate with it via sockets. So the client application has no direct dependency on the X server implementation. Like a web server, the X server could be written in any language.
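To make that concrete, here is a sketch (mine, not from the article) of the first thing a client writes to that socket: the fixed 12-byte connection setup prefix from the core protocol. A real client would also send an MIT-MAGIC-COOKIE-1 in the auth fields, which are left empty here:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// buildSetupRequest constructs the fixed-length X11 connection setup
// message a client sends right after opening the socket, assuming no
// authorization data (the two trailing string fields are empty).
func buildSetupRequest() []byte {
	var buf bytes.Buffer
	buf.WriteByte(0x6C) // 'l': client speaks little-endian
	buf.WriteByte(0)    // unused padding
	binary.Write(&buf, binary.LittleEndian, uint16(11)) // protocol major version
	binary.Write(&buf, binary.LittleEndian, uint16(0))  // protocol minor version
	binary.Write(&buf, binary.LittleEndian, uint16(0))  // auth protocol name length
	binary.Write(&buf, binary.LittleEndian, uint16(0))  // auth protocol data length
	buf.WriteByte(0) // padding
	buf.WriteByte(0) // padding
	return buf.Bytes()
}

func main() {
	req := buildSetupRequest()
	fmt.Printf("setup request: % x (len %d)\n", req, len(req))
}
```

Everything after this handshake is likewise just length-prefixed binary messages over the socket, which is what makes a pure-Go client possible in the first place.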


If you're looking to program a GUI, you'd do well to take a look at Go's ancestor AOS/Active Oberon, and the accompanying GUI (Bluebottle: A Thread-safe Multimedia and GUI Framework for Active Oberon): http://e-collection.ethbib.ethz.ch/ecol-pool/diss/fulltext/e...

Might be best to port this to Go?


Why not leverage Skia?


> It's an overlooked revolution!

wat?

Go is maybe past peak hype, but it dominates in the cloud/containers space and is massively popular.

I don't like it, but this statement still seems odd.


This is one of my frustrations about the Go community - it seems much more enthusiastic than usual about reinventing the wheel in "pure Go" for what feel like abstract ideological reasons, where other languages just wrap existing libraries and call it a day.

I wonder if it's the little things about the experience that add up to that - e.g. the usual approach of statically linking all Go code into a single binary leads to no deployment dependencies, and the Go community considers this a good thing; but the moment you start invoking C code, you need to drag the corresponding .so along. Or maybe it's the part where the weird stack structure necessary to enable goroutines also impedes FFI performance, and it's a language that's naturally fast enough that the penalty is noticeable.

Either way, the Go ecosystem feels... deliberately insular, for lack of a better term?


"weird stack structure necessary to enable goroutines also impedes FFI performance" is probably the greater part of that equation.

Keeping parameters straight between Cgo and Go is another fine kick in the nuts. That's true with almost every FFI of course, but with FFI from Python to C, there's usually a little performance boost to make up for the pain.

Go however, is fast enough to begin with. FFI is all pain there.


Regarding Go:

>Yeah, but your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.

Sorry couldn't resist. I'm a Systems Admin. An old-fart at that. Every single encounter I have had with Go has been a burning pile of unpleasantness. Most recently, the Go-Ethereum client.

So, referring back to the quote; could we just not shoehorn this into everything? It may be fast but it's fragile FWIW.


As an old-time sysadmin, I'd expect you of all people to know better than to judge a programming language based solely on a small handful of applications written in it.

The ironic thing is what you described ("fast but fragile") better describes the traditional systems languages that are used to build X11 components: C and C++. If I had to describe Go as a systems language I'd say it was the exact opposite: slow but stable. Sadly it's also pretty inflexible for any real low level stuff (in my experience).


> Sadly it's also pretty inflexible for any real low level stuff (in my experience).

Which is what the poster above you meant by fragile. C and C++ rarely break in most spaces, these days. I can take a C library and expect it to build and just work 99% of the time, and I can build something in those languages and expect it to be flexible enough to work most of the time as well.


> I can take a C library and expect it to build and just work 99% of the time

You cannot imagine how many times I've heard that GCC upgrades broke builds. I've never heard of Go upgrades breaking builds.


If only that were actually true. I've done a lot of work porting C++ code to various different flavours of Linux and Unix, as well as compiling stuff for Linux from working source, and it's rare that things will "just work". Granted, a lot of the time the real pain point is manually having to deal with dependencies (even Go's old $GOPATH solution is better than the standard C++ approach!). Then you have compiler flags, compile-time options, distribution-specific patch files, dependencies for the build scripts themselves, etc. And that's before you've even touched on compiler-specific variations, language versions and undefined behaviour. There is a reason why source-oriented repositories like FreeBSD ports, Gentoo's portage and Arch's AUR have wrappers around build scripts, and it isn't because things "just work 99% of the time" ;)

Go does have its shortcomings, I'd be the first to admit that, but the actual reason I started writing Go was that it was far less brittle for writing portable code than C++. At the time I was doing a fair amount of cross compilation for other CPU architectures as well as different host OSs (Rust wasn't mature at that point, .NET wasn't open source and I've never enjoyed working with Java, so I thought I'd give Go a try).


Really? I find that automake systems, despite how disgusting they are to deal with from a developer standpoint, almost always tend to 'just work' from a user standpoint.

I can't really speak much for C++ systems, as I understand the dominant build system there is CMake, which I have personally found completely abhorrent from a user perspective. (I once tried to build a 32-bit program on a 64-bit system, and CMake completely and totally refused to do so! There was no override flag that actually affected the build, and it didn't look as if there was any standard place to inject flags to the compiler. I still have no idea how one goes about doing it, to be quite honest.)


> Every single encounter I have had with Go has been a burning pile of unpleasantness.

Can you elaborate?

> It may be fast but it's fragile FWIW.

Go explicitly focuses on language stability over fancy new features.


> Go explicitly focuses on language stability over fancy new features.

I think that might be the problem... C is probably the most stable language around in terms of features, but of the languages in common use it is probably the most prone to bugs and security flaws.


C also has raw pointers, manual memory management and a bunch of other unsafe practices that Go doesn't. In comparison to C, Go is practically a scripting language.


Totally agree there. However, I think the metaphor might still hold: Go applications can be brittle because of the ugly hacks you have to use to get around missing features. For instance, because of the heavy reliance on manual error checking, it can be very easy to call a function and forget that it returns an error, thereby ignoring that error. Bringing down your whole application due to a single error is bad, but silent failure is almost always much worse.


Go will complain loudly if you don't use a captured error value. It will also complain loudly if you don't capture the error value. This makes it harder in Go to ignore an error than most other languages. In fact a common complaint of Go is that it forces you to do something with the error when you would rather not.

I'm left to wonder what you are imagining would result in not handling errors on accident in Go.


Yes, but it is very difficult to introduce the same kind of bugs in Go code considering you are no longer doing intricate memory management, etc.


Considering that Go by default compiles to a static, native executable with close-to-0 system dependencies, I can't imagine why you'd hate it.

I deploy Go code all day long to various clusters consisting of hundreds of machines, and I'll take whatever Go deployment issues you can throw at me all day long, any day, over "simple" python/ruby issues.

It's all been moot since I went with Docker anyway.


I wrote my first Go application to run on a very small Linux instance (1 GB of RAM), serving HTML results from an in-memory map of some 150 MB. The memory growth was really out of control, panicking with "out of memory" under even low load until I added debug.FreeOSMemory after serving each search result. Even with that, I had to upgrade the instance to 2 GB. It was a one-off experiment, but I missed being able to release memory myself to keep a manageable working set.


Your program certainly had a bug. Feel free to paste a link to it here, I may be able to debug it.


Yeah definitely a bug.

I've worked on systems that allocated and freed hundreds of GB every 10 minutes, written in Go with no such issues.


Is the code online somewhere?

It really sounds like there was something not-quite-right in there. Might not be too hard for someone with time + clue to spot it. :)


That's called a memory leak: https://en.wikipedia.org/wiki/Memory_leak

I've seen it happen in Go multiple times when people forget to close the response body after making an HTTP request.


One would think that sysadmins love native executables with no system dependencies.


Errr why?

They come with no associated bits to hook them into the rest of the system - e.g. no launch/process monitoring things (start-at-boot and similar), no metadata or "package info" for pulling down upgrades along with other system packages.

Go stuff is fine for a lot of deployment cases, but in production situations the rest of the er... crap that normal packages come with does serve some important purposes. At least on *nix systems. :)



