IgorPartola's comments

And even with all these warts, K&R C is a good start for learning or re-learning C IMO. To grok C you don't need to know the exact syntax. You only need to realize that everything is either a byte sequence or a pointer to a byte sequence... represented by a byte sequence. This includes obvious things like arrays or "strings" or structs or unions, but also functions themselves. Once you internalize this you know C. Of course you'd also have to learn another language: the C preprocessor, but it's relatively small and only a few of its directives are needed to make productive use of it. So don't get hung up on the outdated syntax, and instead just read the book. You can probably buy a used copy for a few bucks or borrow it from your local library, then spend an evening reading it.
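
As a minimal sketch of that mental model (the struct and the values below are made up purely for illustration), any object can be inspected as the raw bytes it is made of through an unsigned char pointer:

    #include <stdio.h>
    #include <string.h>

    /* Print any object as the sequence of bytes that represents it. */
    static void dump_bytes(const void *p, size_t n) {
        const unsigned char *b = p;
        for (size_t i = 0; i < n; i++)
            printf("%02x ", b[i]);
        putchar('\n');
    }

    struct point { int x, y; };            /* hypothetical struct */

    int main(void) {
        struct point pt = { 1, 2 };
        const char *msg = "hi";            /* a pointer to a byte sequence */

        dump_bytes(&pt, sizeof pt);        /* the struct is just bytes */
        dump_bytes(msg, strlen(msg) + 1);  /* 68 69 00: "hi" plus the NUL */
        dump_bytes(&msg, sizeof msg);      /* even the pointer itself is bytes */
        return 0;
    }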

Then learn to use make (Makefile) and how to write a ./configure script, or better yet learn to use autotools. Oh, you don't want to write all your code yourself, so you'd better use some external libraries. For that you'd better understand your preprocessor, compiler, assembler, and linker. Code not working?... get yourself familiar with a debugger. Static or runtime code analysis?... yes please.

Find an open source project to support. Get familiar with the code base and the community. Write a patch, send it, get it refused, work with the community, and finally get your patch in.

See, it's not that hard learning "C" and becoming a "C developer"... 21 days at most!


I've used autotools and I am definitely not a fan. A 10 line Makefile is probably sufficient for most projects and it'll take you 20 minutes to get one working if you take a 15 minute break.
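
Something along these lines is usually enough (a sketch only; the program and file names are invented, and recipe lines must start with a tab):

    CC      = cc
    CFLAGS  = -Wall -Wextra -O2
    OBJS    = main.o util.o

    myprog: $(OBJS)
    	$(CC) -o $@ $(OBJS)

    %.o: %.c
    	$(CC) $(CFLAGS) -c $<

    clean:
    	rm -f myprog $(OBJS)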

External libraries are a reality in any programming language. The popular ones that come with your OS of choice are likely very well documented. You don't need to understand the preprocessor, compiler, assembler, and linker to use them. In fact, I'd argue that for most programming you only need to know how to invoke the compiler as the rest of the steps are already automated. K&R C will teach you enough about how to use header files to be able to use external libraries.
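
For instance (using the standard math library purely as an illustration), pulling in a library usually comes down to including its header and naming it when you invoke the compiler:

    #include <stdio.h>
    #include <math.h>                /* the library's header */

    int main(void) {
        printf("%f\n", sqrt(2.0));   /* call into the library */
        return 0;
    }

    /* Built with a single compiler invocation, e.g.: cc demo.c -lm -o demo */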

A debugger is useful, but something like gdb is not necessary to write C. In fact, I'd say Valgrind is far more useful, and once again, it'll take 20 minutes to learn.
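
As a rough illustration (file and binary names invented), this is the kind of bug Valgrind flags immediately:

    #include <stdlib.h>

    int main(void) {
        int *a = malloc(10 * sizeof *a);
        a[10] = 42;      /* off-by-one write past the end of the block */
        return 0;        /* the block is also never freed (a leak) */
    }

    /* cc -g leaky.c -o leaky && valgrind --leak-check=full ./leaky
     * reports the invalid write and the leaked allocation with stack traces. */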

As for supporting open source projects, why start there? Do you learn Python and immediately jump to submitting patches to Django? No, you write your own code and learn from that. Same here, as you are presumably learning C/JavaScript/Ruby/Erlang/CL/etc. to use it, not to add a line to your resume that you are a contributor to some open source project.

No, you won't become an expert developer in 21 days. But since C is such a small language with so few features I'd argue that learning how to use it is more about getting the memory structure right in your head than about learning syntax. By comparison, Python (my primary language lately) has many more types and constructs to worry about.


The complexity of a language is not proportional to the number of builtin types it has. If there is a builtin to do one particular thing, and you just use it for that, that's simple. You don't have to build it up yourself by tricky combinations of tricky components (easily exposing you to severe mistakes like buffer overflows unless you already have a good vocabulary of idioms down cold). You don't have to worry about each available type unless you are using it, so it is comparable to a separate library in C.

Yay... my first grey comment. Why U no undarstend sarcazm?

All I'm saying really: Learning C is easy, becoming a halfway decent C programmer is hard.


Well, HN is not reddit; people here aren't used to that.

Is the storage attached directly to the box or is it a SAN? I'd like to use one of these for a personal email server, but don't want to bother if a fried disk or disk controller would result in significant downtime for my email.

It's a SAN, mounted as an NBD.

This looks cool and I am going to give this a try. The problem for me is, as is usually the case with such projects, the packaging. If this thing is production-ready, then why must I check for installed dependencies by running random commands [1]? If it's a Python project, why isn't it distributed on PyPI? I don't want to download stuff from BitBucket manually and install it by executing setup.py. I understand that the project supports multiple OS's. That's great. But there are simple steps that can be taken to make installing this thing via automated tools (Puppet, Chef, Ansible, etc.) easier than how it's set up now. A Debian package would be so nice for Ubuntu/Debian.

[1] http://www.rath.org/s3ql-docs/installation.html#dependencies


There is actually fairly extensive packaging: https://bitbucket.org/nikratio/s3ql/wiki/Installation

The documentation is just somewhat messed up...


Nevermind the things I said above. Thank you for pointing this out. This is exactly what I was looking for.

    #define i++ i--
Or even better

    #define i++ ++i

-----


I don't think '+' is valid in a C preprocessor macro name.
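
Right: a macro name has to be a plain identifier, so the closest legal (and equally evil) version is a function-like macro (the name below is made up):

    /* #define i++ i--           rejected: macro name must be an identifier */
    #define increment(x) ((x)--) /* legal, and just as mean */

    int main(void) {
        int i = 5;
        increment(i);            /* expands to ((i)--), so i ends up as 4 */
        return i;                /* returns 4 */
    }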

-----


I suppose. But also can we agree to raise the ridiculous speed limits and actually enforce them? 55 and 65 mph are too low and nobody drives that slow. See https://www.reddit.com/r/todayilearned/comments/1npuah/til_t...

In my mind if you are doing 110 in a 65, sure, you get a ticket. But I distinctly remember being in a car driving through Virginia (speed limit 70 mph) in a large number of cars all following each other all doing 72-75. It was a straight bit of road, everyone was mostly in the right lane (sign said to keep right unless to pass). Out of nowhere a state trooper pulls up on the left, passes, gets in front of our car and pulls over the guy in front of us. I should mention that the car in front of us was from out of state. Now, it is possible that that guy was getting pulled over for something other than speeding, but as anyone that's driven through VA can tell you, the state troopers there are very eager to give out speeding tickets, so chances are it was for speeding. Why did that guy arbitrarily get pulled over instead of our car, or any of the two dozen around us, all doing the same speed? Probably because it was a nice car with an out of state license plate. 75 mph is not an unreasonable speed on a straight road during daytime with good visibility and weather. It's basically left to the police to determine what the real speed limit is, which seems wrong.

</rant>

-----


I live in Virginia and I find it difficult to believe that a driver was pulled over for traveling 75 mph in a 70 mph zone. You usually have a 10% or 10 mph buffer between the speed limit and the cop being bothered enough to think that you are worth the trouble. They have plenty of other things to issue tickets for, and they are also engaging in 'profiling' in the hope of finding drugs or money, which means more cash or a nice car for their department.

Instead of looking at the law as it limits your freedom to make a judgment call about how you operate the car, look at it instead as supporting your freedom to not have yourself and your family killed on the highway.

You're spending 2-3 hours driving across the state, so why is the 15 minutes you'd save going 75 mph so important? Just set your cruise at 70: I can almost guarantee that you won't get a ticket, and if you do, you will be able to get a speedometer calibration certificate for court.

-----


Virginia is different than most states.

http://jalopnik.com/what-every-driver-should-know-about-spee...

-----


Near me in VA there is a highway with two signs quite close together. One says "Speed Limit 55" and the other says (I won't have the words exactly right), "Trucks slower than 55mph must stay in right lane." The combination of signs seems to acknowledge that most traffic is exceeding the speed limit.

-----


I've seen this. You don't want some log truck lumbering along at 53 mph in the center or left lane. Also it would be very surprising to be hassled for going 60 mph here.

They can, and do, issue citations for not obeying road signs like this, and I am glad for it.

-----


That's exactly the issue. You should not be surprised when you are hassled. Currently, you can be pulled over for doing 59 in a 55. As far as I can tell nobody does 55, unless it is in a 45. You shouldn't have to guess what is acceptable in each county or town. You should know what the real speed limit is.

-----


Is there a 4 mph "forgiveness" in the law somewhere? I thought that technically it was an infraction to travel 56 in a 55, which is just as ridiculous as being ticketed for going 59 in the same spot.

I own a car that encourages the driver to go fast (think Autobahn speeds) with its power and ride comfort. I am still going slower than 60 in the 55 zones.

-----


Right. The real speed limit in VA is close to 80 mph (though you can get pulled over for doing less, presumably if "driving while black" [1]). So why aren't there signs that read "SPEED LIMIT 80"? Why is it up to the officer's discretion to decide if today it's 80, 78, 75, etc.? (I know why: federal government, funding, etc.)

I am in no way against speed limits being enforced, I just don't like the idea of them being set by the officer's mood and zeal on any given day. I'd rather know what the speed limit really is rather than trying to guess it.

[1] https://en.wikipedia.org/wiki/Driving_While_Black

-----


Federal funding is no longer tied to speed enforcement. Highway speed limits are set artificially low because it's convenient and profitable for the police. (Given that statistics do not reliably support the usual contention that higher speed limits are more dangerous, the burden of proof falls on those who would argue otherwise.)

Maybe you're not speeding, but rest assured, there's some other attribute of your driving or your car's condition that will allow law enforcement to pull you over if they want. This isn't a coincidence. For better or worse, the police will tell you that they catch a lot of genuine bad actors this way.

It's not that they want to be able to stop everyone... it's that they want to be able to stop anyone.

-----


It's not unreasonable, no. So why not stick to the speed limit and actually lobby your local government to change the limits?

</rant>

-----


This is such an unreasonable standard. There is literally no action I could take that would change the law beyond spending the vast majority of my time on it. I can't do this. Expecting people to drive 55 on an 8-lane fucking highway is insane. That you support this is even more insane. Stop.

-----


I obey all speed limits in part because if speed limits can be written on signs without meaning anything, then they will become more and more absurd. If I'm in a neighborhood where the posted limit is 25 but the residents drive 35, they're all going to have to face the contradiction as I pass through. And I do pull over sometimes because I'm pretty sure slow traffic ought to yield.

-----


This is exciting. Alternatives currently are things like Dropbox (proprietary and somewhat pricey) and TorrentSync (proprietary). I look forward to firing this thing up on my own server and having private remote file storage. I do currently run a NAS, but without a VPN connection home it's not as useful.

-----


Actually, an excellent FOSS alternative is Syncthing. https://github.com/syncthing

-----


There's also Seafile, which I've been using for quite a while now and am really impressed by.

-----


I tried to package and run Seafile, but it was kind of complex, and under-documented. Also, the first time I got it to run on my server, it ran my Linux server into the ground (IIRC it kept the CPU so busy that I couldn't do anything through SSH -- had to reboot the server to get back in). I also got the impression the open source community isn't that active -- i.e., it's mostly maintained by the commercial backers (some Chinese company, I think).

On the other hand, Syncthing is a clear demonstration that deploying Go code is dead easy, although I really don't like Go's packaging all that much.

The one thing I still want from Syncthing is a way to have client-side encryption, e.g. have remote storage with zero knowledge of the actual data I'm storing.

-----


What specific trouble do you have with it? I have it set up on a server with a semi-reasonable deployment/update mechanism managed through fabric and ansible.

I understand the setup was crappy (you have to leverage their script), but I believe Debian is looking at packaging it so maybe in the future that flow will be better.

-----


Or ownCloud, which has clients for iOS and Android.

-----


Meta comment: I often find great articles on Gizmodo. While other Gawker sites like Lifehacker are more mediocre, Gizmodo often features interesting and more in-depth posts like this one.

-----


"Deloitte started using these four simple questions to do performance reviews. You won't believe what happened next"

-----


HR departments hate it!

-----


They got all their employees happy with this one weird trick!

-----


So here's what hopefully won't be considered a trolling question: I have seen a lot of "100% Go" projects over the past several years and that's usually presented as a big feature. Some pretty trivial things have been redone as brand new in Go, and then suddenly gain lots of attention. What is so magical about a project written in Go vs C, Python, Ruby, Rust, JS, etc.? As a user of the software I won't care what it's written in, if it's done well. If it's done poorly, I am much more likely to look for better alternatives than to fix it (if only I had about 240 hours in a day...), so what's the advantage?

To me, an advantage in usability is having a PPA with properly built .deb packages. If I have to use a language-specific package manager that I don't already use regularly, you've likely lost me, unless I really need this functionality. If it doesn't come with a proper daemon mode (correct forking, PID file support, proper file or syslog logging), sample config file, man page, or an init file, that's even worse. I am much less likely to use this in any type of "production" environment if I have to maintain those pieces myself. Running things in a screen session is so "I'm running a Minecraft server".

That is not to criticize your work. You've done a great job! serve2d looks very interesting and I might actually have to give it a try sometime.

-----


I think the "100% Go" stuff is appealing in that you just have one file that works across OSes, with minimal screwing around.

For things that become part of the OS, yes, I'd rather they come via some install approach that includes the necessary integration. But for anything else, I think a lot of our packaging approaches are dedicated to saving disk space and RAM, which is something that matters way less to me now than it did 15-20 years ago when CPAN and APT were designed. In 2000, disk prices were circa $10/GB [1]; now we're looking at $0.50/GB of zippy SSD [2] or $0.03/GB of spinning rust [3]. RAM is similarly about 2 orders of magnitude cheaper. [4] Given that, it makes a lot more sense to burn space to minimize the chance of a library version conflict or other packaging issue.

Another thing that has changed greatly is the pace of updates. 15-20 years ago, weekly releases sounded impossible to most. Now it's common, and some places are releasing hourly or faster. [5] Thanks to things like GitHub, the whole notion of a release is getting hazy: I see plenty of things where you just install from the latest; every merge to master is in effect a new release.

Given that, I think both Go and Docker are pioneering approaches that are much more in sync with the current computing environment. I'm excited to see where they get to.

[1] http://www.mkomo.com/cost-per-gigabyte-update

[2] http://techreport.com/review/27824/crucial-bx100-and-mx200-s...

[3] http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&Is...

[4] http://www.jcmit.com/mem2015.htm

[5] www.slideshare.net/InfoQ/managing-experimentation-in-a-continuously-deployed-environment slide 27

-----


The number 1 selling point of something written in Go is that it's much easier to package. The result of a compilation is a standalone binary that can be copy-pasted everywhere, as long as the architecture matches the one targeted at compilation time. This means:

- no more having to deal with dependencies at packaging time, which makes packagers' jobs simpler because all they have to care about is the one and only standard way to retrieve dependencies and build the binary. (Much like the standard way of doing things in C would be ./configure && make && make install, with the added bonus that the dependencies are also taken into account.) This also means that there's a higher chance that the software will be packaged in the distribution of your choice, because the bar is lower

- no more having to deal with dependencies at runtime, because each binary has everything it needs inside of itself. In practice this means "scp as a deploying method". It's an even lower common denominator than packages.

> If it doesn't come with a proper daemon mode (correct forking, PID file support, proper file or syslog logging), sample config file, man page, or an init file, that's even worse.

This is orthogonal to the choice of programming language, though. On top of that, I believe the application shouldn't deal with forking, it's the job of your supervision system to deal with daemons. All an application has to do is log whatever happens on STDERR and let the system handle that.

-----


How exactly do you not have to worry about dependencies? Does the Go standard library include every routine you could possibly need, and is it always 100% correct and bug-free? If you build your static binary today and tomorrow there is a vulnerability in your libssl dependency of choice, don't you now have to recompile and redistribute a new binary? Seems like a terrible and insecure way to do things. Instead of a distro developer worrying about security updates, you have signed up to do that yourself.

As for logging, there are loads of logging libraries that support both stdout and file logging. My policy is to support both for my project and it has been almost no burden so far (in Python and in C). Not everything is containers, and having a feature like logging does not mean it cannot be used in a container.

-----


> If you build your static binary today and tomorrow there is a vulnerability in your libssl dependency of choice, don't you now have to recompile and redistribute a new binary

Technically there's only one SSL library you should use: the standard one. This doesn't change your overall point that when a part of the program must be upgraded, the whole binary must be upgraded as well and re-deployed, which I totally agree with. If your software is a server that you host yourself and you have full control over the deployment chain, as is the mindset behind Go, then re-deploying a dependency or re-deploying a binary is more or less the same.

Regarding logging, I'm really partial to the approach advocated by the twelve-factor methodology (http://12factor.net/logs): let your software handle the business, let the supervisor handle the software's lifecycle, and handle the logfiles outside of the software, because there are factors specific to the hosting machine that in my opinion shouldn't be the concern of the software.

-----


Not sure what you mean by standard SSL library. There's OpenSSL, GnuTLS, LibreSSL, and a number of other implementations. If you mean the one that comes with the OS, then wouldn't that be dynamic linking?

Re-deploying the binary, and getting the updated binary are two different things. When OpenSSL has a vulnerability, they announce it after the fix is out, distro maintainers release updated packages, I run `apt-get update && apt-get upgrade` or equivalent. It is on OpenSSL and the distro maintainers to release the update, and on me to apply the update.

When we are talking about static linking, it's now suddenly on the software developer to release a new binary, or on me to rebuild the binary I have from source. Now I have to keep track of (a) which dependencies each project uses, and (b) which vulnerabilities come out. Not being familiar with Go, does it have such a dependency tracking framework where I can update all packages whose dependencies have been updated? Of course once I know that I need to perform an update, it doesn't matter if I run `apt-get upgrade` or `go build foo`.

As for logging, I advocate doing both. I really should make a separate blog post about it, but here's what I expect a well-behaved daemon to do:

- Always start in foreground and log to stdout (otherwise it seems like it exited without any output)

- Use the -d and --daemon flags to go into background

- Use the -p and --pid options for specifying the PID file

- Use the -l and --log options for specifying the log file location (if not specified or is - use stdout)

- If it uses a config file, use -c or --config for the location of the configuration file. Default to the standard OS location.

This way all possible modes are supported (running under a supervisor process, in a container, as a stand-alone daemon, or in the foreground while in development/testing/experimentation), and it is very easy, even in C, to write software that meets these requirements.
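
A minimal sketch of that flag handling with getopt_long (the option letters come from the list above; everything else, including the defaults, is invented):

    #include <stdio.h>
    #include <getopt.h>

    int main(int argc, char **argv) {
        int daemonize = 0;
        const char *pid_file = NULL;
        const char *log_file = "-";                 /* "-" means stdout  */
        const char *config   = "/etc/example.conf"; /* made-up default   */

        static const struct option opts[] = {
            { "daemon", no_argument,       0, 'd' },
            { "pid",    required_argument, 0, 'p' },
            { "log",    required_argument, 0, 'l' },
            { "config", required_argument, 0, 'c' },
            { 0, 0, 0, 0 }
        };

        int c;
        while ((c = getopt_long(argc, argv, "dp:l:c:", opts, NULL)) != -1) {
            switch (c) {
            case 'd': daemonize = 1;      break;
            case 'p': pid_file  = optarg; break;
            case 'l': log_file  = optarg; break;
            case 'c': config    = optarg; break;
            default:  return 2;
            }
        }

        /* A real daemon would now fork() if daemonize is set, write pid_file,
         * and send its logs to log_file ("-" meaning stdout). */
        printf("daemon=%d pid=%s log=%s config=%s\n",
               daemonize, pid_file ? pid_file : "(none)", log_file, config);
        return 0;
    }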

-----


A bit of dynamic linking reading: http://harmful.cat-v.org/software/dynamic-linking/ http://harmful.cat-v.org/software/dynamic-linking/versioned-...

Dynamic linking is not synonymous with security or ease of use. It's known to significantly reduce performance, both at load time and at runtime whenever an external symbol is used, and the memory savings aren't that great. One also has to remember that static linking means unused symbols are left out, as the linker knows exactly what is needed.

Ability to update components of an application is one of the ideas behind dynamic linking, but in practice it doesn't work that well, and often requires that the application is updated to link against a newer version, which of course can only happen when the distro updates. This also includes LTS distros and backports, which you either have to wait for or kill support for.

There's also a difference in how Go vs. things like C and C++ handles linking, due to Go actually knowing about multiple files, and who uses what. This is quite a bit different than the copy-pasta that C preprocessors generate. We just ported a major product from C++ to Go, which gave us significant performance boost with considerably less code and complexity (This is not a "Go is better than C++", Go just provided a lot of things we needed in the std library that were hard to get in C++). The "necessities" were dynamically linked (libstdc++, libgcc_s, pthread, libc, ...), but our own libs were statically linked in. The binary was 51MB. The equivalent, completely statically linked Go binary is 8MB. It also does a clean build in <500ms, rather than the 3-5 minutes it took for the C++ project.

... But do remember, that Go 1.5 brings dynamic linking for the ones that want it. While the Go creators don't seem super fond of dynamic linking, they are providing it.

Regarding updating, "go get -u packagename" will update all dependencies, assuming the dependencies are go-gettable. Vendoring changes the picture a bit, in the sense that applications will bundle their own version of things to simplify things a bit, but that doesn't really change the go get -u command. Follow with a go build, and your application is up to date.

-----


Just to clarify, I am not saying anything bad against Go. It seems like a strict improvement over C++ in a lot of ways. I am simply arguing that in the world where distros are made up of third party software and are maintained by a small team of distro developers/maintainers, dynamic linking is better than static linking. It's not synonymous with security, nor is it the panacea of performance or memory saving (both of which I personally care less about than flexibility). It is simply more convenient from the point of view of a distro maintainer, and because of that the end user.

-----


I understand, I just don't see where it is an improvement.

My rant is mainly triggered by dynamic linking being the standard without many people questioning the usability. It rarely works as intended, especially with versioned symbols. If I depend on OpenSSL and an update comes in, then one of three things has happened: 1) they updated the old symbols; 2) they implemented new symbols but left the old ones behind; 3) they implemented new symbols, killing the old ones.

1) means that the behaviour of the library under my application changes, which can lead to unexpected bugs. 2) means that my application is not experiencing the fix at all. 3) means that my application won't start anymore, due to dyld errors. 3) is what happens when you update something to a new version in the normal manner.

This "multi-version" mess also makes the libs more complicated than they're supposed to be. My ubuntu libc, for example, includes symbols from 2.2.5, 2.3, 2.3.2, 2.4, 2.5, 2.7, 2.8, 2.11, and 2.15, just to check the very last symbols. It's a mess.

For users, it's mainly a headache when trying to get newer versions of packages that depend on newer libs. This isn't much of an issue if you're, say, a gentoo or arch linux user, but if you're maintaining Debian systems, and need a package from a newer repo but can't/don't want to dist-upgrade to testing/experimental, then you're practically screwed short of compiling things yourself.

For distro maintainers, it's a mess, as all packages depending on the lib for that distro release need to be recompiled when releasing a new version of the lib, which is a lot of work.

Dynamic linking and versioned symbols are also the very reason that sites with binary releases have a binary for Windows, a binary for OS X, a binary for Ubuntu 14.04, a binary for Red Hat 6.3, a binary for Arch Linux, a binary for..., further increasing the inconvenience for the user.

The only time you have benefits from dynamic linking is in the very rare scenario 1 of updated libraries, where everything is done exactly right when modifying old symbols so nothing breaks, which is a bit unlikely unless the bug fix was very simple. It also has to be serious enough that the library maintainers see the need to backport the bugfix to the old library versions, rather than release a new one. Otherwise, it's only dragging things down, both in performance, resource consumption and maintenance overhead.

-----


> This also means that there's a higher chance that the software will be packaged in the distribution of your choice, because the bar is lower

Static linking and bundling of dependencies is a no-no in most distributions. If anything, the Go model is a headache for package maintainers to deal with.

-----


> no more having to deal with dependencies at runtime

So, is it the same as linking the libraries statically? C and C++ have done that since, like, forever.

-----


Have you ever actually tried producing a statically linked C/C++ binary? I've been programming in C/C++ for 10+ years. Static linking is a huge pain. My latest efforts have led me to create holy build boxes inside carefully controlled Docker-based environments just to be able to produce binaries that work on every Linux. With Go you can just run a single command to cross-compile binaries that work everywhere. Minimal setup required, no expert knowledge required.

-----


> Have you ever actually tried producing a statically linked C/C++ binary?

Many times actually, I prefer to deploy just one file whenever I can.

> just to be able to produce binaries that work on every Linux

That's a Linux design/decision thing. Linux binary compatibility is ... well ... challenging. On Windows it's not hard at all (not sure how it is on MacOS, since I have worked mostly on iOS in Apple's world).

-----


That sounds like pretty strong agreement, unless you are just targeting Windows?

-----


Very true, I had Python and Ruby in mind. What still sets Go apart is that static linking is mandatory.

-----


Hey, if you said "written in C", I would be similarly excited about its ease of deployment.

-----


Go produces a single static binary. Consequently deploying go apps is as simple as copying a single file around, that just works.

-----


That's a very weird way to look at it. Static linking was there before everything else. gcc/ld and, well, any other C/C++ toolchain can do that as well. There is a reason this isn't usually done. It's like you are trying to spin a bad thing into something good.

-----


The reason this isn't usually done is that executable size was significant relative to storage capacity up until the early 2000s or so, and people tried to economize by deduping common parts of their executables via shared libraries / DLLs. This worked well enough to catch on, but came with an extremely high cost in added complexity, and over the years a whole layer of additional infrastructure was created in order to manage it. The industry progressed, storage capacity grew dramatically, and executable sizes stopped mattering, but the use of shared libraries / DLLs continued out of inertia. As time passed, people started asking - why are we doing all this? And some of them invented a reason, which was the idea that one could swap out pieces of existing executables after installation, and thereby fix security problems in an application without needing to involve the application's developer in the process. This works about as well as you'd expect if you had spent years trying to fit all the rough edges of various third-party libraries together with varying degrees of success, but the idea caught on as a popular post-hoc justification for the huge layer of complexity we're all continuing to maintain long after its original justification became obsolete.

As is no doubt obvious from my tone, I'm not buying it and am very happy to see signs of a pendulum-swing back toward static linking and monolithic executables.

-----


It's hard to be sure why exactly shared libraries and dynamic linking appeared. Your explanation about reducing file size for smaller HDD and RAM footprint is probably one of the reasons, but I don't believe it's the only one - I don't remember many shared libraries from MSDOS days (where with 2MB of RAM and 40MB of HDD, storage was really scarce). In fact - I don't remember any libraries! To run Doom you just borrowed the floppies from a friend and it would 100% work. The same for Warcraft 1.

I believe it's more similar to Database Normalization from RDBMS world than to anything else. And the most important objective of Database Normalization is considered "Freeing the database of modification anomalies".

My own experience with shared libraries is pretty positive. I have fixed OpenSSL vulnerabilities many times by just updating the OpenSSL library and restarting all services. Compared to my own experience with docker where after waiting for a few weeks I had to change my base images (as nobody was updating them) updating just a single shared library and having the vulnerability fixed is way easier!

This, of course, is true if those who maintain the software you use do care about backwards compatibility (which tends to be true for the "boring" stuff and false for the stuff considered "cool" - looking at you nodejs library developers who break my build at least once a month).

-----


There are other important downsides to static linking. Namely, critical security updates to shared components. It's better to have to update your tls library once per system, than to update every app that came with it. And that's the best case scenario where the developer notices and the app is actually repackaged.

-----


Yes, that's the argument I was referring to, and as I said in the comment you're replying to, I don't believe that the cost is worth the benefit.

-----


Well, you wouldn't want every executable to copy some unknown version of OpenSSL, and you'd get into all kinds of problems if you had several different versions of glibc around.

But for most libraries it may really be overkill.

-----


I see OpenSSL and libc as being effectively the "system version" as you would know it in Mac OS or Windows. It's OK to link dynamically against the operating system platform, but generally a program knows what version of the OS it is built for and expecting to be compatible with. Users know that upgrading the OS is generally worth doing but may break things.

What we lack in the unix world is a coherent division between what ought to be a very small, stable, well-understood set of fundamental system libraries suitable for dynamic linking and the vast array of utilities a developer might choose in order to get an app built without having to reinvent everything from scratch.

Upgrading libraries out from under a built, tested executable is not something we should be doing lightly, because there is no possible way to know in advance whether the apps depending on the library have succeeded in programming to the interface rather than the implementation.

-----


Just because you don't like the single binary that works everywhere, doesn't mean that others find it a problem. One approach doesn't fit every possible situation.

-----


Even if you think the wasted RAM and the security issues aren't a problem, why is that an argument especially for Go, when almost every other language can be built into a single static file as well?

-----


Because it is the normal, and only, way for Go. Static linking isn't as easy for other language platforms, as some issues crop up (a Google/SO search shows many questions). Often it is as simple as not having the static libraries available, or having difficulty linking with them because they still want to dynamically load other libraries.

I.e., in other languages it may not work, is less tested, and is not normally done.

-----


The original authors of dynamic linking concluded that the cost was way higher than the benefits, both in memory usage and general performance, but the client demanded it. Dynamic linking is the number one binary compatibility issue on Linux.

Go 1.5 has mechanisms for dynamic linking, though.

-----


Believe it or not, you can freeze binaries for python apps too, if you like: https://wiki.python.org/moin/Freeze

-----


I've done it for Windows, Linux and Mac before. Note that these solutions freeze the Python side of things but do not freeze the platform side. For example they do not include the system libraries. Consequently running the frozen python app on a system that is a different distro, older or newer OS version, or has different system packages installed often leads to the frozen python app not being able to start.

Slide 19 of this if you are interested: http://www.bitpim.org/papers/baypiggies/siframes.html

-----


@boomzilla: Go ahead and try it IRL. It's a huge PITA and there are plenty of gotchas.

-----


As I alluded to below, the power of this feature should probably not be underestimated.

-----


A staid, solid, conservative outlook, and a good perspective for others to realize is out there. I would say that concerns like daemon mode and logging are a lot less in vogue these days: a program ought to concern itself with running and outputting to stdout, and if you have needs past these it's expected you have tooling you can deploy that makes that happen.

Daemonization is at least a fairly standard feature, but with logging there are so many people with such varied concerns that getting fancy, trying to meet people's many needs, can lead to a lot of program bloat very quickly. Instead of going at these on a case-by-case basis, and now that we are more container-centric, it makes sense to run in the foreground and put your output on stdout, and let the rest of the system support that utterly uncomplex pattern.

-----


I think we're learning pretty quickly that 100% Go (or Rust, or Python, or Perl, or OCaml, or ...) is a good idea for security. Especially if you're dispatching between SSH and SSL services.

Go has advantage over scripting in speed, over C/C++ in memory management, over strict FP languages in popularity, and over Rust in being stable and known for longer.

-----


I suspect there will always be a way to hide text that a search engine won't be able to detect. You can do this without applying any properties directly to the element containing the text (raise other elements above it), hide it with JavaScript, or use any number of such techniques.

-----


I believe Google is moving towards a graphical understanding of the text as well; it should be able to render the pages and see what is going on on the page... those days will soon be over and technical excellence in SEO will be what matters.

-----


What if you use custom blank fonts? Or simply detect the User Agent and serve a different CSS file?

-----


I think it's known that Google can masquerade as a "normal" visitor if it suspects you of cloaking (a non-identifying UA string, coming from a non-disclosed Google crawl IP).

https://support.google.com/webmasters/answer/66355?hl=en

I think they also use this to notify you in Webmaster Tools when/if your site is hacked and the hack is trying to avoid detection by normal users.

-----


Or detect the user agent and simply serve different content? I'm pretty sure Google penalizes this, though.

-----


Yep. Here you go: https://support.google.com/webmasters/answer/66355

Here's the full "Don't do this sh!t" article from Google: https://support.google.com/webmasters/answer/35769#quality_g...

-----
