
See also "Git from the Bottom Up": https://jwiegley.github.io/git-from-the-bottom-up/

(originally a PDF in 2008)


Much better article IMO. Introducing the low-level commands that the higher-level ones wrap around is, to me, a much more fun and interactive way to understand the .git schema.


Sigh, yet another nail in the XMPP coffin, at least as far as the general public is concerned.

Remember when we had ICQ, and AIM, and MSN Messenger, and Gadu-Gadu, and we were dreaming of a unified messaging system?


So, I don't do web or software development; let me tell you^W^Wrant about how chat in 2015 feels to use. I'm 27, I grew up on IRC, and I know my ICQ UIN by heart. Chat has always replaced SMS for me, and most groupware as well. I was happy when Facebook came along and suddenly even the non-nerdy friends were compatible with my preferred way of having an endless conversation about random things. Chatting feels natural to me, letting me keep in touch with good friends without capturing my attention the way a phone call does. It also interleaves very nicely with menial work.

I don't care much about cloud or private cloud or local app. I've used irssi on a server that everybody connected to via ssh. But I switched over to cloud services as soon as I had more than one device that could send and receive messages. Everything else was way too tedious, as I never knew which device was connected, where messages went, where unread notifications went or whatever. Nothing to do with closed vs open, json vs xml or whatever social pattern. XMPP was lacking features, simple as that! From a user perspective! XMPP didn't work!

Give me persistent group chat, shared chat history with a search function, and synchronized unread/read statuses, and I'll leave the cloud with flying colors. My current best bet for a text messenger is Skype, but the app is way too clumsy on Android and Windows. Close second is WhatsApp with a few groups, running on a large phone with a well-trained SwiftKey2. Both feel about as great as ICQ6 with its banner ad did, and neither feels as great as Adium or Trillian did back in the day.


> ... persistent group chat, shared chat history with a search function, synchronized unread/read statuses....

Across all my devices - Web, Linux, Android and FirefoxOS. No advertising.

Telegram - http://telegram.org

I've been using it for several months now and am very happy. Would love integrated voice calling, especially to landlines/mobiles, and hope someone develops a plugin for this soon (perhaps the guys at Jaconda[0] will do it).

I have just one regular contact still using Facebook messenger, and our conversations are now very disjointed, as sometimes I don't login to that service for several days at a time. I don't use Skype as a messenger service, but do still have about four or five contacts who are wedded to it for free voice calls. I use WebCallDirect[1] for cheap calls to landlines/mobiles.

[0] https://jaconda.im [1] http://webcalldirect.com


My biggest concern with things like this is that AFAICT their business model is kind of foggy. Quoting from their FAQ, "Pavel Durov, who shares our vision, supplied Telegram with a generous donation through his Digital Fortress fund, so we have quite enough money for the time being. If Telegram runs out, we'll invite our users to donate and add nonessential paid options to break even. But making profits will never be a goal for Telegram." https://telegram.org/faq#q-how-are-you-going-to-make-money-o...

That's a very nice sentiment, but it's not a sentiment that fills me with confidence that I can rely on the continued existence of the free thing they're giving me.


Telegram looks really interesting. It has an open protocol and API so it should be easy to integrate with other systems; there is a libpurple port for it[1]. Maybe Telegram could be a replacement for XMPP?

[1] https://github.com/majn/telegram-purple


Wow, Telegram looks great! Now if I only could convince all of my friends to switch...


You are so right. I really wanted XMPP to succeed and I used it happily back in 2002 or so. But it just doesn't seem to have evolved at all. You pretty much nailed it with persistent group chat and shared chat history. Skype can really get on my nerves, but it has had this covered since forever, and trying to get people onto XMPP without equivalent features is hopeless.


I used to use Skype for persistent, cross-platform group chat with my close group of friends. But we recently switched to using Slack, and it's far better.

The only thing that's missing is end-to-end encryption.


Can you go into detail about what features were lacking?

For what it's worth, a lot comes down to what features the server admin has activated. I've heard many complaints about XMPP but this is the first I've heard someone mention lack of features as a problem with it.


In addition to logging as mentioned, I find that file transfers (encrypted!) that Just Work between users regardless of the network/client/server used seems to be missing.

I really like XMPP, and the extensibility it has is brilliant, but at the same time leads to a crazy level of fragmentation for the instant messaging use case. You're basically guaranteed that you can chat to others and maintain a buddy list, but anything much beyond that is a toss up, depending on the server used, how it's configured, and what client each user has.


Server-side chat logfiles that are distributed to all connecting clients? I've never looked into the protocol, but I'm pretty sure that one is lacking.


There was some controversy about which protocol to use for sharing history between clients, for both private chats and MUC, but XEP-0313 now seems to be the future of it.


There are server implementations, e.g. https://code.google.com/p/prosody-modules/wiki/mod_mam_muc, but I have yet to see a client support this reliably.


I haven't heard of an extension that does that either, but I'm not sure that would scale well for busy long-running Multi User Chats.

But the benefit of XMPP is that a new extension can be proposed that does this if enough people want such a feature. Personally I would argue that a better approach would be to make Public MUC logs accessible and searchable via a web interface and to have the client log chats going forward from when they join (with offline messaging from the server so you don't have to be online the whole time).


The problem with XMPP MUCs is that they're modelled after IRC, as that's what the protocol hackers know and love. But (please forgive the hyperbole) normal people don't use IRC, they use Skype. And the Skype model is so foreign to the IRC/XMPP techies that they'll always dismiss any attempts to add these features, often with spurious technical doubts as you just did. This ignores that Skype has had these features for a decade and has taken over (some parts of) the world because of them. (Skype alone today has about ten times more daily active users than all IRC networks together!)


Not to forget WhatsApp. 1000 times more daily active users than IRC. Basically the same model as Skype. (And likewise quite unlike IRC.)


1339782 add me


I've been saying for years that an update to the IRC protocol to allow for push notifications without requiring users to have a shell and set up a bnc would be pretty much ideal.

The constant ping-pongs really dissuade me from using IRC on my mobile as it kills my battery life. Being able to set some kind of "mobile" mode and receive push notifications if I get a hilight would be ideal.


I've ended up using the www.ircCloud.com service as my bouncer. It gives lovely native mobile apps with proper push and has a much nicer/more usable interface than any other solution I was capable of rigging up myself.


Some of the cloud-based IRC clients do a lot of this (I'm a big fan of IRCCloud). But it would be nice if these kinds of features were baked into the protocol. When something like a majority of users want a feature, it should probably be part of the protocol.


Now we have twice as many incompatible services. And still dreaming. :/


This is what we get when we embrace closed platforms. Free Software gets you nice, simple, easy to use services and closed platforms will always manipulate you for their needs.

This is why I want to make a Free Hardware cell phone. I have made one wireless product already and wrote the frequency hopping stack myself, but something like a phone needs better data rates and a more advanced protocol.

Still, I dream of a simple cell phone that runs vanilla linux and has apps for IRC, XMPP, and the other functions you would want. The key component would be that the protocol would be designed from the ground up to respect user privacy, including a "broadcast" mode for towers where certain low data rate data channels are streamed without requiring any transmission from the handset. Then you could follow IRC or twitter during a protest without any risk of being probed for your location.

The protocol would also be built with anonymity in mind as much as possible, so phones would route data using rolling anonymous ID numbers and spoofing location could perhaps be trivial for plausible deniability (unless that opens up a vector for DoS).

I met the "Game of Drones" guys last night and they seemed interested in my wireless. I can only do 1km at 20kbps with current boards and 10km at 20kbps with amplified boards, but I think they might be interested in a solution that would work for FPV and that could be a good excuse to develop something that would also work for a cell phone.

I am 100% Free Hardware all the way, so maybe your dreams can come true eventually. :)


> Free Software gets you nice, simple, easy to use services

While there are many upsides to free software, usability and user friendliness were never one of them.

Hackability, sure. But most people don't care, let's be frank. I will use Hacker News because it works despite being proprietary software.


But free software can do things proprietary software can't, like maintain trust with the user and provide maximum good. Think about all the good things people could do with Facebook but have been blocked from, because Facebook wants to protect its one narrow use case. Social networks won't reach their full potential until they're totally free software, because Facebook has such narrow interests.

So sure, you're happy for now, but billions of people are getting spied on and getting sick of it, and out-of-work programmers are increasing in number every year. As users get more fed up with proprietary companies jerking them around, I feel like people will start to demand Free Software. However, this may be what all engineers who fall for Free Software think... I make Free Hardware, so my plan is to prove to people why Free is better. We'll see if I succeed! I just recently realized that this was the issue, so I am working on it now.


Hacker News is not proprietary software. It's open source under the Perl Foundation Artistic License 2.0 (FSF compatible). It's even written in Arc Lisp, which is nifty.

Some of us do care.


Nice, I did not know that. Really!

Where is the source code?


The language: http://www.paulgraham.com/arc.html

Source code: https://github.com/arclanguage/anarki/


It seems to be the source for an Arc discussion forum that is similar to, but slightly different from, Hacker News.

But it's better than nothing.


It's worse now.

Back then, we had Pidgin, Trillian, Kopete, etc. that would let us connect to all our messaging services with one client.

But nowadays nobody even tries to reverse-engineer protocols anymore. Where are all the people reverse-engineering Hangouts or Facebook Messenger? I wish we could teleport the people who reverse-engineered AIM, Yahoo!, and MSN to the present day, because they don't seem to exist anymore.


It's worth noting that XMPP was based on XML. New hotness is JSON, or maybe even compressed binary protocols.


I don't know if you're being ironic about JSON. Note that jkarneges, who comments elsewhere in this thread, is the creator of Psi [1], arguably the best XMPP-focused messaging client.

The value/burden of XML has always been a topic of debate for XMPP. In retrospect, I think it contributed to its lack of appeal, though the extensibility and readability (ehm, arguably) it provided were unique back then.

I've long wondered about which alternative base protocols could be used in its place. JSON is OK, but may be as much a fad as XML. I've wondered if ASN.1 could be used, but ProtoBufs sound like a better fit [2] in that they're simpler, more space-efficient, and backwards-and-forwards compatible (and thus extensible, XMPP's main feature). In fact, it's what Google already uses themselves.

[1] http://psi-im.org/ [2] https://groups.google.com/forum/#!topic/protobuf/eNAZlnPKVW4


What is the purpose of layering your chat protocol over another protocol at all?

SMTP has no "base protocol" in this sense. HTTP, nothing (unless you count RFC 822).

It's hard to imagine these protocols would have had the same lifetime if they were based on XML, JSON, or protobufs. (Yeah, HTTP over XML, that should be enough to give you nightmares. But welcome to DAV and XMPP.)


If you're looking for a happy medium between the readability of JSON and XML and the efficiency of ASN.1 and protobufs, take a look at canonical S-expressions[1].

There's an advanced representation, which looks like this: (message (header (sender "Billy Joe Bob") (sent "2015-03-26T12:02:00Z")) (body "Hey guys! Let's meet up for lunch!")). It's possible to encode any byte string using Base64 or hex. It's also possible to encode types with data: (message (header (sender "Billy Joe Bob") (sent "2015-03-26T12:02:00Z")) (body [text/html]"<p>Hey guys! Let's meet up for lunch!</p>"))

While there are multiple advanced encodings for the same data (e.g. foo or "foo" or |Zm9v| or #666f6f#), there is a _single_ canonical encoding for any datum: the messages above would be (7:message(6:header(6:sender13:Billy Joe Bob)(4:sent20:2015-03-26T12:02:00Z))(4:body34:Hey guys! Let's meet up for lunch!)) and (7:message(6:header(6:sender13:Billy Joe Bob)(4:sent20:2015-03-26T12:02:00Z))(4:body[9:text/html]41:<p>Hey guys! Let's meet up for lunch!</p>)).

A huge advantage of this canonical encoding is that it's amenable to cryptographic hashing and signing; a weakness of JSON is that one has to layer requirements atop JSON itself (e.g. alphabetising object properties) in order for two parties to be able to hash the same datum and get the same value.

Another advantage of canonical S-expressions is that it's straightforward to define a mapping between them and HTML: "<p class='foo'>This is a <em>nifty</em> paragraph.<br /></p>" could be represented as ((p (class foo)) "This is a " (em nifty) paragraph. (br)). There are other possible mappings between S-expressions and HTML, of course, but I like that one. Another might be (p (/ (class foo)) "This is a " (em nifty) paragraph. (br)).

[1] http://people.csail.mit.edu/rivest/Sexp.txt
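The canonical form is simple enough that an encoder fits in a few lines. Below is a minimal Python sketch, assuming only nested lists and strings; it skips the display hints (like [text/html]) and the Base64/hex transport forms from Rivest's spec:

```python
import hashlib

def canonical(expr):
    """Encode nested lists/strings as a canonical S-expression:
    length-prefixed byte strings, lists as parenthesised sequences."""
    if isinstance(expr, str):
        expr = expr.encode('utf-8')
    if isinstance(expr, bytes):
        return str(len(expr)).encode() + b':' + expr
    return b'(' + b''.join(canonical(e) for e in expr) + b')'

msg = ['message',
       ['header', ['sender', 'Billy Joe Bob'],
                  ['sent', '2015-03-26T12:02:00Z']],
       ['body', "Hey guys! Let's meet up for lunch!"]]

encoded = canonical(msg)
# Because the encoding is canonical, any two implementations hash
# the same datum to the same value -- no extra conventions needed.
digest = hashlib.sha256(encoded).hexdigest()
```

Decoding is equally mechanical: read digits up to `:`, then that many bytes, recursing on `(`.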


> there is a _single_ canonical encoding for any datum: the messages above would be (7:message(6:header(6:sender13:Billy Joe Bob)(4:sent20:2015-03-26T12:02:00Z))(4:body34:Hey guys! Let's meet up for lunch!))

This reminds me a lot of bencode, with the advantage for bencode that it doesn't need any fiddling for non-printable characters: no more base64, no more hex.


The base64 & hex stuff is only used for the advanced, human-readable bits; on the wire it's just straight length-encoding and byte strings.

I'd say that bencode's advantage is a built-in standard for integer encoding (with canonical S-expressions one must decide between ASCII decimals or little/big-endian bit strings), and a clearer standard for a dictionary/map/hash (a canonical S-expression would probably use an alist-like structure like (map (foo bar) (baz quux)), but one could also go with (map foo bar baz quux), (map (foo bar baz quux)) or some other encoding).
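Those two built-in standards are visible in even a toy bencode encoder: integers get the `i...e` framing, and dictionaries are made canonical simply by sorting their keys. A minimal sketch (error handling mostly omitted):

```python
def bencode(x):
    """Minimal bencode encoder: ints, byte/text strings, lists, dicts."""
    if isinstance(x, bool):
        raise TypeError(x)          # bencode has no boolean type
    if isinstance(x, int):
        return b'i' + str(x).encode() + b'e'
    if isinstance(x, str):
        x = x.encode('utf-8')
    if isinstance(x, bytes):
        return str(len(x)).encode() + b':' + x
    if isinstance(x, list):
        return b'l' + b''.join(bencode(e) for e in x) + b'e'
    if isinstance(x, dict):
        # Keys must be strings and are emitted in sorted order --
        # this is what makes bencode canonical with no extra convention.
        items = sorted((k.encode() if isinstance(k, str) else k, v)
                       for k, v in x.items())
        return (b'd' + b''.join(bencode(k) + bencode(v) for k, v in items)
                + b'e')
    raise TypeError(x)

bencode({'foo': 'bar', 'baz': 42})  # → b'd3:bazi42e3:foo3:bare'
```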


XML is horrendous, especially to parse/scrape. JSON on the other hand is a breeze.


Only if you don't understand XML.

* XML has a formal, class-based description language (XML Schema) with strong typing, polymorphism, and - best of all - self-descriptiveness.

* Languages like Java have a seamless, bidirectional mapping to XML Schema.

* XML has a ridiculously powerful and elegant transformation language (XSLT) which makes scraping, selective data extraction and processing trivial.

The problem with XML is that people who require instant satisfaction are not willing to invest the time to understand it, and the mature tooling ecosystem around it.

The XML ecosystem solves problems, and contains solutions to problems, that the JSON / JavaScript ecosystem can only dream of, and is hell-bent on partially re-inventing.

If you need strong typing and self-descriptiveness, you're out of luck with JSON. Binding JSON to a strongly typed language like Java or Haskell is a total ball-drag compared to XML + Schema.
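XSLT engines aren't part of Python's standard library, but even ElementTree's limited XPath subset illustrates the kind of selective extraction described above. The roster document here is made up purely for illustration:

```python
import xml.etree.ElementTree as ET

# A hypothetical contact roster, just for illustration.
doc = ET.fromstring(
    "<roster>"
    "<contact jid='alice@example.com' name='Alice'/>"
    "<contact jid='bob@example.com' name='Bob'/>"
    "</roster>")

# Pull out just the attribute you want from every matching element...
names = [c.get('name') for c in doc.findall('.//contact')]

# ...or select a single element by attribute value, XPath-style.
alice = doc.find(".//contact[@jid='alice@example.com']")
```

Full XSLT (templates, recursion, output documents) needs a third-party library such as lxml, but the select-and-extract workflow is the same idea.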


I don't see why I can't use XML, JSON, MsgPack or YAML.

Couldn't the parsing be a pluggable component? Just set a standard on how data is structured and let third-parties figure out how data is parsed.


And that would improve the XMPP adoption and experience by ... ?

Are you saying mom and dad aren't using XMPP because the message is sent using XML based stanzas? Facebook is ditching XMPP because of the X?

It doesn't matter?


>And that would improve the XMPP adoption and experience by ... ?

Saving battery on mobile devices for one.


Wait, wait. We're talking underlying data format here.

Are you saying that serializing something to JSON (or whatever you fancy here) vs. to XML .. saves battery?

I mean, XMPP is certainly not the battery friendliest tech right now (elsewhere people discuss push extensions for example), but .. that's not related to the use of XML.


Having to support a panoply of marshaling standards would suck battery more, not less. I doubt XML vs. JSON vs. YAML per se makes any substantial difference in CPU usage. You might get a little mileage in going to a compact binary format and reducing data transmission, but is that worth it?


The funny thing is that XMPP was created when XML was the current hotness.

SMTP survived despite changing fads. If we're ever going to standardize IM (or anything), we have to accept that protocols may use older technology. Someday JSON will be old too. Let's not make these mistakes again.


At the time when XMPP was getting standardized and was still mostly known as Jabber, I started implementing a Jabber chat client with a friend.

The problem with XMPP isn't that it was based on XML. No, what made it annoying was that the dudes who made it decided that instead of basing it on exchanging individual messages, i.e. XML documents like everyone else does, everything must instead be put inside a so-called XML stream.

IIRC it basically meant that the exchange started with a start tag that wasn't terminated until the connection was closed. Since nothing at the time was designed to work with unfinished XML documents (remember the end tag doesn't come until you're done), all the convenient standard XML tools/libraries wouldn't work.

So I don't think XMPP is a stellar piece of work. But it's of course much better than some proprietary crap, and it's sad to see it lose support, although I imagine that for Google and Facebook, who both probably couldn't care less about interoperability, having an open XMPP interface is more of a liability (spammers, letting people skip their ads) than something they get much perceived value out of.
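The never-terminated-document point is easy to demonstrate with a push parser. The stanza below is a simplified, made-up fragment: expat happily delivers events from the still-open stream, whereas a tree-building parser would reject the input as an incomplete document.

```python
import xml.parsers.expat

events = []
p = xml.parsers.expat.ParserCreate()
p.StartElementHandler = lambda name, attrs: events.append(('start', name))
p.EndElementHandler = lambda name: events.append(('end', name))

# The stream header opens a document that stays open for the life of
# the connection; each stanza then arrives as a child of that element.
p.Parse("<stream:stream to='example.com'>", False)   # isfinal=False
p.Parse("<message><body>hi</body></message>", False)

# 'events' now holds the complete stanza even though the enclosing
# document never ended -- exactly the situation tree parsers choke on.
```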


Even though it seems like a strange decision, using XML for the stream framing made the protocol nicely pure. It theoretically meant you didn't need to write a parser (this was a rare thing for a network protocol). In practice, though, you're right, most parsers at the time didn't work well with network streams.

Of course, lack of adoption by the big providers was almost certainly political rather than technical.


Maybe we need a new standard JSON protocol, I bet part of it falling out of favor was the XML.

Messaging is one of those areas that is actually pretty simple, but corporations that want to own channels have munged it up into a complex mess. Companies even run a couple or more messaging systems internally.

Side note: AOL/Timewarner actually owns the IM patent from ICQ (http://edition.cnn.com/2002/TECH/biztech/12/19/internet.aol....)


Yeah, I don't think the issue is with the implementation; it's that companies didn't like the fact that it can't be controlled. We either build systems to use ourselves or let private companies control our communications.


> Remember when we had ICQ, and AIM, and MSN Messenger, and Gadu-Gadu, and we were dreaming of a unified messaging system?

We'll always have SMS.


SMS? Or SMS, iChat, Hangouts, Whatsapp, and the thousands of others that use your SMS number as username? Some (iChat, Hangouts) even steal your SMSs and place them on proprietary networks; If you switch devices from iPhone to Android or vice-versa, messages get lost.


SMS, the network.

> If you switch devices from iPhone to Android or vice-versa, messages get lost.

I moved from an Android phone to an iPhone, back to an Android, and then back to an iPhone again. I never had my messages in limbo, and it's easy to fix (at least, on the Apple side) if it occurs: https://support.apple.com/en-us/HT204270

Note that that link also comes up as a Google search result just by googling "disable imessages". That's stupid simple.


First of all, I'll wager a supermajority of iPhone users do not understand any distinction between iMessage and SMS. Apple designs away the distinction. Secondly, when an ex-iPhone user ports away their number and is able to send SMS's without issue and receive SMS's from most of the world except for iMessage senders, how are they supposed to 1) discover they have lost messages before they lose something critical, and 2) discern that the cause of their inexplicably lost messages, is that they need to "disable iMessage", when they are not even an Apple user anymore?

I'm not an Apple hater, I use Apple products, but this is a huge fuckup and the OTT fragmentation is a real issue vis-a-vis SMS.

And SMS is not IM, so your premise is a non sequitur from the start. It's telephony; it's wildly and widely overpriced and non-open.


I think you're right about the lack of widespread distinction between SMS and an iMessage. It's sad.

Even sadder are the news articles (like ones you find in the technology section of the BBC website) where they refer to instant messaging as wonderful and then, "if you're a bit old fashioned", to still using email... At least emails can be easily retrieved and archived.

Perhaps people don't care about the transport medium (phone network or Internet) but it is an important distinction. It's even more important for Google users because Hangouts is very flaky for me, unlike SMS.


It doesn't matter if it is stupid simple, it shouldn't be necessary in the first place.


So start your own communications network. Market it. Make it open. May the best solution win.


It's not about the best solution, and everyone needs to stop pretending it is. This sort of thing comes down to which solution has the most corporate backing. The only exceptions are when an open solution happens to hold on long enough (on the order of years) for a major player to realize that it's good and maybe they should give it a chance (examples: OpenStack's support by PayPal right now, or when Linux finally started going someplace in the early 2000s with Canonical and Red Hat). Give me an example of 'the best solution' winning despite major corporate backing of the proprietary competitor.


I don't think it's even as simple as corporate backing, though that certainly plays a part. It's significantly the arbitrary whims of the population; Whatsapp was not corporate backed, and still captured huge marketshare.


> We'll always have SMS.

Not much good for talking to people on things that aren't phones.


I was thinking about this recently. After having some trouble with my iPhone 5 sending SMS's last year and my Nexus 6 sending and receiving MMS's this year I feel that SMS/MMS reliability has gotten really bad. I know it's due to a cocktail of buggy OS's and the cell phone towers themselves but I never thought my old Motorola flip-phone from 10 years ago would be more reliable than this stuff.


List last updated in 2010.

I believe Fabrice Bellard deserves to be added.


In short, he created LZEXE at 17, and went on to create QEMU (a general CPU emulator, also used as a frontend for KVM) and FFmpeg (which pretty much every open-source multimedia app, and many closed ones, uses).

Less famous, but still quite the accomplishment, are jsLinux, the first in-browser full Linux emulator, at one point holding the record for calculating the most digits of Pi (using a desktop computer!), creating the first software 4G LTE base station (that runs on a standard PC), and more.


I want to nominate John Carmack.



I would nominate Jacob Appelbaum and Moxie Marlinspike.




Yes, I was also missing his name. The hacks he pulled off for side-scrolling in Commander Keen on PC, the '3D' of Wolfenstein 3D and Doom, and of course the famous fast inverse square root (often attributed to him): http://www.codemaestro.com/reviews/9. He definitely deserves to be on this list.
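For reference, the fast inverse square root from Quake III (whose authorship is debated and may well predate Carmack) translates to Python via struct bit-punning. A sketch for illustration, not production code:

```python
import struct

def fast_inv_sqrt(x):
    """Quake III's bit-level approximation of 1/sqrt(x),
    transliterated from C via struct packing."""
    # Reinterpret the float's bits as a 32-bit integer.
    i = struct.unpack('>l', struct.pack('>f', x))[0]
    # The famous magic constant gives a good first guess.
    i = 0x5f3759df - (i >> 1)
    # Reinterpret the bits back as a float.
    y = struct.unpack('>f', struct.pack('>l', i))[0]
    # One Newton-Raphson step refines the estimate.
    return y * (1.5 - 0.5 * x * y * y)
```

On 1990s hardware this beat calling a library sqrt and dividing; in Python it's purely a curiosity, but the bit trick is the same.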


Me too :) I very much expected him to be on the list.


I'm looking for a good collection of Linus' own emails discussing some of Git's more advanced topics, as I remember he discussed some subtleties that may be missed by other documentation such as the Pro Git book and the manpages. (maybe said documentation has caught up, but I'd like to check to be sure).

So far, all I've found is a few threads collected at yarchive.net [1], which is promising, but how can I find more? Google fails me :(

[1] http://yarchive.net/comp/index.html search for "git"


I don't think you need to go much further than the LKML archives. He wrote git because of the BitKeeper incident and then handed over maintainership very shortly after. There was also some more recent mail ranting about when to rebase and when to merge, but that's on LKML too. I don't think he regularly writes to public lists other than the kernel one anyway :)


I don't know OpenBSD's history vis-a-vis webservers, and these slides aren't clear about why they needed to build their own.

As a random guess, is it because Apache focuses on features, nginx on speed, and OpenBSD wanted a focus on security?



OpenBSD was stuck on Apache 1.3 for ages because the license change for Apache 2.0+ was incompatible. They also ended up maintaining their own fork for quite a while because patches to improve security were not being accepted upstream (I think).


The license is fine, but Apache was getting feature bloat and the local patchset was getting unwieldy.

Conclusion: roll your own httpd that way you don't have to deal with this anymore.


Not really following OpenBSD, I didn't realize they considered the Apache 2 license unacceptable until reading the above. For anyone else curious about the initial discussion, it took me some browsing to find http://marc.info/?l=openbsd-misc&m=107714762916291&w=2 (2004)


I tried to use Opam on Debian Unstable a few months ago. It was 1.1.0.

When requested to install a second package with dependencies overlapping those of a previously installed package, it would try reinstalling all dependencies, including previously installed ones, and would fail.

That is, in my book, an epic fail for a package manager, let alone a supposedly >1.0 version (if that even means anything today). I'm still annoyed about that. Searching online found recommendations to get the latest version from Github.

1.2.0 has since been released and included in Debian. I keep meaning to give it another go, but I must admit being wary after that first experience.


> When requested to install a second package with overlapping dependencies as a previous package, it would try reinstalling all dependencies

This was unfortunately a mistake made by upstream Debian. Rather than reimplement the constraint solver badly, we use the Aspcud external solver. An upgrade of Aspcud changed its command-line interface, and OPAM 1.1.0 wasn't upgraded to OPAM 1.1.1 at the same time (which detected the new version of Aspcud and worked with it).

A similar issue (to do with the libdose interface) has been in the Ubuntu package ages and not addressed by upstream despite a detailed bug report: https://bugs.launchpad.net/ubuntu/+source/opam/+bug/1401346

> Searching online found recommendations to get the latest version from Github

What else could we recommend?

It's really frustrating to know that a source-compiled OPAM works fine, but that the packaged binary versions are broken and our only recourse is to wait six months for the next OS release of Ubuntu. On the bright side, the OPAM 1.2.x series has been very stable and is now making its way upstream, so this is hopefully a temporary issue.


That explains a lot. I thought opam was behaving better now than I recall it doing at some random point in the past.

I'm really grateful to everyone doing all this work. This is a typical example of the kind of "excitement" mixing distro package managers and language package managers can lead to. The only recourse is to file a bug, maybe provide a backport if feasible, and move on -- and have everyone (package maintainers, distros) be a little wiser, and (hopefully) everything working on the next release.

Anyway, I see I have opam in my ~/bin -- so clearly I've compiled it myself at some point. No wonder, as I'm still running stable on this laptop:


Maybe there should be a note in the wiki about it, although it should be resolved when jessie is out?:


(Yeah, I know, it's a wiki. I could just add it. I don't feel quite confident enough to go and clutter such a short little page, though. Different if there's a page where "someone is wrong on the internet".)


Thanks for your work!

In that particular timeframe (Sept/Oct 2014), the Debian maintainer just hadn't uploaded a newer version of the package: https://tracker.debian.org/pkg/opam

It's a pity, because I think Debian's Experimental section would've been ideal for this.

I've been using Debian long enough that I'm a bit ashamed not to be able to package things on my own yet...


I agree, opam was rough a while ago; not sure about now. FWIW, both Erlang and Haskell have also been rather painful on Debian. As far as I know, Ruby still is, to a certain extent. Go is sort of in between by virtue of building mostly stand-alone binaries, if you can get the program to compile.

Haskell's cabal has gotten better; it's still easier to run xmonad/xmobar from Debian packages, but at least cabal with packages/cache in ~/.cabal kinda-sorta works.

It seems the path towards package managers that both work well and work well with stable/distro packages is long, arduous and filled with repeating the mistakes of others. npm seems to have learned a bit from Python: if you stay away from -g(lobal), it tends to mostly just work (not sure about C/C++ dependencies; for better or worse a lot of npm packages are mostly pure JS).

Python with virtualenv works very well, except for a few hairy C-heavy packages (pygame, SciPy), but running "sudo pip install ..." is a recipe for disaster (or a mongrel, unpatchable mess). Keep to virtualenvs and you can stay mostly sane (or use the isolation provided by buildout, preferably in concert with virtualenv).


I'm not sure we're talking about the same thing. Are you lamenting the difficult interface between distro package managers and language package managers? Or lamenting that language package managers haven't learned the lessons of previous package managers, distro and language alike?

I was lamenting that Opam 1.1 wasn't dealing with a trivial package installation situation. I mentioned Debian to indicate that was the source of the version I was using, implying that there may have already been a newer version out there that I could've been using, but then explicitly saying that 1.1 should've dealt with the situation in the first place.

FWIW, I definitely agree about the terrible situation of language package managers in general. I use Python sporadically, and I'm always scared of running pip. I'm very unhappy about virtualenv because, while it definitely fulfills its task marvelously, I think it shouldn't even have to exist in the first place! Why doesn't core Python deal with the problem itself, searching for dependencies in the main script's subdirs, the user's libdir, and lastly the system libdir? Admittedly, I don't know the intricacies of the Python ecosystem that prevent that.


> I was lamenting that Opam 1.1 wasn't dealing with a trivial package installation situation.

Well, it's all connected. A package will need a certain version of OCaml itself, its standard library (or an alternative), and the other libs it depends on. These will all depend on various C libraries to a certain degree. So if opam started re-downloading a lot of dependencies that should be available, that sounds like either a) some part needed a newer version, or needed to recompile with different flags, or b) opam didn't find the dependencies installed by dpkg.

Python and Debian go pretty well together, probably partly because a few core utilities use Python. So with some creative use of "apt-get build-dep" you can usually get the C libraries that things depend on, and so have a better time installing things that need C code in virtualenvs.

I'm not sure how you think linking to (possibly rapidly moving) c-libraries from a language that runs across a wide range of operating systems and versions should work, if not with something like virtualenv (or buildout). There will, I think, always be a need for packages that are moving faster than stable distributions can/should keep up with -- and there needs to be a sane way to deal with that.

With C code, you can stuff things in /usr/local -- but keeping that in sync as you patch the main OS (and do dist-upgrades), and as you need to patch the various libraries/dependencies... is challenging. And that's just for C. When you put another layer on top (Ruby, Python etc) -- it doesn't really get any easier.

One option is to go the way of BSD ports/Gentoo -- but there are at least two issues/trade-offs with recompiling everything from source: 1) if you're mixing and matching versions and compile options, you quickly end up with unique binaries. Which is to say, binaries that no one else has tested before you. Granted, any bugs you encounter will be in the program and/or the compiler -- but you'll have the joy of finding them. In production, most likely. 2) It really doesn't make much sense to recompile "all the things" if the goal is to end up with the same binaries anyway.


FWIW, with opam 1.2.0 installed on Debian jessie (no system ocaml), after doing "opam update;opam upgrade" -- cmdline installed fine.

cohttp wouldn't install, due to a dependency, conduit, not installing properly[1] -- possibly due to the old libssl version (with backported fixes) in Debian stable. I do have a sid and a jessie chroot -- but I've yet to try running it there -- part of the point of chroot is automounting of the home folder in the chroot(s) -- and that'd give me a .opam that wouldn't work under stable; or I'd have to jockey around with alternative .opam folders...

[1] An error along the lines of:

    File "lib/conduit_async_ssl.ml", line 28, characters 18-108:
    Error: This expression has type
             Ssl.Connection.t Deferred.Or_error.t =
               Ssl.Connection.t Core_kernel.Std.Or_error.t Deferred.t
           but an expression was expected of type unit Deferred.t
           Type Ssl.Connection.t Core_kernel.Std.Or_error.t =
                  (Ssl.Connection.t, Error.t) Result.t
           is not compatible with type unit
    Command exited with code 2.
    ### stderr ###


Convincing arguments have been made that it's usually a better idea that co-founders get equal equity than not, to prevent arguments about said split which are much more likely to doom your startup. It's like that point about arguments in a relationship: would you rather be right or be together?


Edit: I see you added that second paragraph after I wrote this out. I'm sorry for your situation, that must've been horrible. However, it sounds like in that situation the toxicity of the co-founder is more to blame than any initial attribution or equity split.


Thanks for the link.

> The founders should end up with about 50% of the company, total. Each of the next five layers should end up with about 10% of the company, split equally among everyone in the layer.

This is the typical, and exploitative, arrangement in Silicon Valley! In today's climate, the founders often get money very early and start hiring right away. They have no real personal risk in the venture, and even if it fails completely their "founder" status will serve them well at the next go-round.

The founders had an idea and some rough prototype, but the product is built and the company direction is executed by the next 10 people, and the next 10, and so on. But while the first 10 employees get to share 10 percent of the company, they sit side-by-side with the 3 founders who have 10-20 times as much as any one of them.
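For concreteness, here's the arithmetic behind that gap (a sketch; the 3-founders/10-employees headcounts are just the example from this comment):

```python
# Quoted avc.com split: founders share 50%, each subsequent "layer" shares 10%.
founders, first_layer = 3, 10

per_founder = 0.50 / founders      # each founder's stake (~16.7%)
per_employee = 0.10 / first_layer  # each first-layer employee's stake (1%)

print(round(per_founder / per_employee, 1))  # 16.7
```

So under the quoted split, each founder holds roughly 16-17 times what a first-layer employee holds, which is where the "10-20 times" figure comes from.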

We all take it for granted that the founders' contribution should be worth so much more than mere employees. But who writes these blog posts on how to distribute equity, with 50% to founders and 10% to each "layer" after? Well, it's not the employees. It's the investors and founders themselves, who need to solidly stand behind the idea that at a company that faced failure every day and with every competitor launch and had to get every aspect right, in the end the people at the top should enjoy mega-riches and early retirement, while the lowly workers enjoy a nice bonus equivalent to a year or two salary.


If you believe in avc.com's guide of 50% to founders and 10% to each subsequent "layer", I would counter that the founding team is itself a layer, and each layer should be compensated equally. There is no justification for the first layer (founders) owning as much as the other layers combined.


If you believe it's exploitative, why not take the other side of the trade? Go become a startup founder yourself.

Lots and lots of people in Silicon Valley do that, and ultimately, that's what causes market correction. If there are way more startups out there than talented engineers capable of building products, then the engineers can negotiate a much better deal for themselves. Or they don't and go out of business, but if that's the case, then your initial assumption that they have no real personal risk in the venture doesn't hold.

I know a senior engineer (Boston area, not Silicon Valley) that's made multiple millions multiple times as an early employee. She comes in to startups after they've fucked up their v1 so badly that they can't bring it to market, negotiates a very sweet equity package, fixes the product, and then cashes out when they IPO or get bought.


> Go become a startup founder yourself.

The issue isn't me. And, I might already be a founder. That's beside the point.

I'm happy for your friend. That's excellent, to be able to negotiate well. Most people don't. And most people are TOLD, repeatedly, that 1% is an "excellent" percentage even for the earliest people joining a company.

The word "exploitative" is as tricky today as in centuries past. If the employee doesn't want to work for peanuts, why not go somewhere else? The market will eventually correct, compensate everyone fairly (by some definition of "fair"), etc. Well, my argument is not that the market itself is broken, because employees enter it of their own free will. My point is that engineers (early and otherwise) should not accept that their contribution is worth so much less than the 2-3 people on top. This is especially true for the first engineer, who joins at 1% next to the founder at 40%, but it applies to everyone after as well.


I've been both a founder and an early employee. As an early employee, I always received a salary, and knew I could leave any time I felt like it. My level of risk was low, and I was perfectly happy with my equity knowing I had a nice upside without much downside at all.

As a founder, I haven't paid myself in months, and have commitments to my customers such that I 100% can't just shut things down and leave to do something else without killing a lot of relationships and getting a terrible reputation.

Of course, I can only speak to my own experience, but I'm satisfied with my amount of equity in both situations.


I've been a founder and an early employee as well. As a founder, you sign on for the bad times. As an early employee, I always got duped.

As an early employee, I've had to go without a paycheck on multiple occasions. I had to go without healthcare for several months even though I was told the company already had it in place.

Sure, I could leave anytime I wanted. But I would forfeit all my stock if I left. Even if I left because they stopped paying their engineers. Besides, the money was coming. Why leave now? They promise they will make it up.

I've been told that everyone in the company had to take a 20% salary reduction to keep things alive. I could have left then too. Again, forfeiting my shares. But again, I bore the downside of the business without anywhere near the potential upside.

The important part is that none of this was malicious. The founder just had no idea what he was doing, and thought they had to lie for the good of the company.

As an early employee, I hired people into both the companies I'm speaking about. I'll never do that again. I haven't ever done that again. I urge everyone to not be the first engineer.


As an employee you're limited by the quality of the C-suite.

If you swap equity for salary, it's important to understand that you're not gambling on the quality of the product or the idea, but on the quality and integrity of the people you're working for.

You probably won't have enough information to make a good decision about their quality and integrity until you've been working somewhere for a while.

But generally, if one promise doesn't work out, you have good reason to suspect others won't either.

Equity is really just a promise. So you should have a lot of evidence of reliability and integrity before you count on it.


I think you're quite right. I tend to feel that people want to do good, and make judgments based off that.

Since my early mistakes, I've started telling myself "You're not negotiating with the person across the table, you're negotiating with unknown parties and circumstances in the future".

That kind of removes the human element.


In all situations, you really have to look after yourself. If someone ever asks you to work without pay, they're asking you to up your risk. Demand more equity. The greater the risk, the greater a return you should get, otherwise you're making a bad investment.

Founders can be assholes. So can investors. Always look after yourself (and your team, if applicable). Too many assholes and horror stories not to be wary.


Every time Elon Musk makes this kind of announcement, his engineering departments groan.

As I understand, they have folks actively managing Musk trying to prevent him from promising the moon :)


Too late, he promised Mars.


... I opened myself up for that one didn't I :)


Does anybody know if any of the needed equipment is actually installed in his cars? From what I've understood of the Google cars, they need quite a lot of things, like high-tech radar.


They've been including the hardware in the cars since about 3 weeks before they first announced these features earlier this year.


I hate this sentiment. Yes, Elon makes huge promises. And you know what? He frequently gets there. Everyone quoted in the article says "No". Elon says "Why not?" And you're worried about the engineers.


"He" doesn't frequently get there. The engineers get there. Or they don't. If they don't, whose fault do you imagine it is?


You do realize that he co-founded and is running both a successful electric car company and a successful rocket company? And before that co-founded an insanely successful payments company?


He didn't co-found Tesla, much less PayPal.

I like him; he has done well, but his PR has done even better.


Yes, but he also frequently makes false (or at best incredibly misleading) public statements about how far along they are on such projects. (Source: friend-of-a-friend is one of those groaning Tesla engineers.)


I worked for him directly. That's how he gets amazing stuff done. And clearly it works. If employees aren't up for it, probably best to find something else.


Were you an engineer or a salesperson?


PM. So I was on the hook for getting the engineers to deliver the extraordinary requests.



Doxing is not allowed on Hacker News.


WTF? Posting public information is now considered doxxing?

You don't get much more public than a LinkedIn page.


I have committed a faux pas among the cognoscenti. Message received!

The point, though, is that posting someone else's personal details in order to malign them is a breach of the civility HN calls for. I don't see how that could be more obvious. Fortunately, users flagged that comment. Let's have no more of this.


To be clear about this, for future reference: is it doxxing to post the contact information for someone's public office, when speaking to people about their concerns is part of that person's job? (Indeed, even posting this information in anger, per se to encourage "attacking them"—but when the targeted person has encouraged such "attacks", created a separate channel for them, and thinks of receiving them as "just what they do" rather than something scary.)

For example, I don't think I've ever heard it called doxxing when someone puts up the address+phone number of a congressman and encourages people to write in. Nor when someone posts the "personal corporate" email of the CEO of a company to explain how to "go over the heads" of the CSRs of that company or to tell them about how you're boycotting the company.

I don't think doxxing is about contact information, per se; it's more about the line between someone's public and private personas. If someone actually has the equivalent of a public-persona "complaints hotline", then I would think it would be just fine to post that, no?


It's not the definition of the term "doxxing" that makes a post be unclassy.


This information is very easily found by searching based off their username. Is HN actively preventing people from registering with usernames that are used on other sites and services? If not, where is the line between one's username here and one's username on other websites? Further, where is the line between one's username and one's full name and/or projects that are indexed on every major search engine? Should we not title this post "Tesla CEO", lest we dox that individual?


It doesn't matter how easily the information is found.

By "doxing" I meant posting someone's personal details as a way of attacking them. None of us would want that done to ourselves, and we owe the same consideration to others.

Nor is it needed for substantive discussion, which is the purpose of HN threads.


Am I doxxing myself by using my last name as my username, or by putting my real name, current position at my company, and my email address in my profile?

The problem with your golden rule there is that you make assumptions about what other people want.


> Am I doxxing myself by using my last name as my username, or by putting my real name, current position at my company, and my email address in my profile?


But it's clear that the term "doxxing" (which I've never used before and apparently can't even spell) is a giant distraction. How about we just stick to the point about no personal attacks.


I just didn't see what happened above as a personal attack. He referenced his professional experience and someone provided evidence of that career.

But you're right, this is hardly an important issue to me and we could go on about "doxxing" for days, so I'll consider my peace made.


For what it's worth I read the tone of the comment quite negatively. So did the person that made the comment, given that they used a throwaway.


This is exactly what I was getting at; glad someone else saw that. I didn't see posting relevant credentials that are easily found through a username containing PII as really being doxxing, because it's "self-doxxing", and because those credentials are relevant to the discussion.

This is the problem I have with HN moderation: they have rules that they selectively enforce, but it's all built on heuristics that they never flesh out. Neither of us can go to a page on HN and run through a checklist to determine whether content in a post will or will not be considered flaggable/bannable, because it's a "closed source heuristic", if you will. There are guidelines, sure, but the enforcement seems very wishy-washy and selective.


Afaict it's like, be a decent person, or at least try. Not sure why that's a problem.


He said "as a way of attacking them." Are you attacking yourself by posting your personal information? No? Then the answer to your question should be obvious.


You're also making the assumption that if you're fine with it then so must everyone else. If we're to do all this assuming, then let's assume on the side of people's privacy.


How? I'm just saying it's not cut/dry. I'm not making any assumptions.


Your statement reads, to me admittedly, as saying that since you put your public info out there, no one should care if someone else puts their info out to the public. In the context of the discussion.

But you can't dox yourself, in the usual meaning of the term.


The point I'm trying to make is that 'ease of discovery' is indeed a relevant factor here, and furthermore in this specific case the user was citing his professional experience with a username that corresponds to his own.

I'd be willing to bet he's perfectly alright with someone posting his professional credentials on this site. I know I would be (and have, in fact).

So is it really doxxing at all, then?

This isn't a topic I'm all that passionate about, I just wanted to point out the differing opinion.


Doxing public information?

If I say that Obama is POTUS, am I doxing him?


If you listen to his interviews, he qualifies most of his statements - certainly more than 90% of the professional business BSers I know. He definitely qualifies and hesitates much more than most CEOs; that's probably why people emotionally invest in his statements - they trust he at least performed some critical analysis before spitting out the conclusion or tweet.


What are you trying to accomplish with this line of questioning? Yes, people fail. What's your point?


His original point was that people are crediting Musk with what is actually the success of the engineers working there. The public loves to give credit to figureheads. Naturally, this upsets the people actually doing the innovation who get ignored.


This comes up a lot on various forums.

The more technical the crowd, the more you hear this kind of thing, but it really does bear mentioning that if these engineers could be doing it without these figureheads then, really, they should do that.

The fact that it's plausible has no bearing on the fact that it is often simply not the case.

Why did Apple nearly fail before Jobs' return?

Why are Tesla and SpaceX a direct result of one man's vision?

I'm not saying that no one else is involved in these businesses - I don't think anyone is stupid enough to assert that these are not examples of fantastically great organisations made up of brilliant engineers and probably project managers and lawyers and all of the other parts that make up a great company - but the fact remains that the 'figurehead' is there.

Why is that if these 'figureheads' serve no purpose other than to court the media?


> The more technical the crowd the more you hear this kind of thing but it really does bear mentioning that if these engineers could be doing it without these figureheads then really, they should do that.

This is an oldish thread, but I want to point out that the main reason engineers need figureheads is to attract capital. The second reason is to unify the engineers toward a common goal.

I do believe engineers deserve a lot more credit from society, and also that engineers underestimate the contributions of non-engineers. I feel that the subtly snide way you worded your statement ("really, they should do that") is needlessly derisive to both groups.


You have my apologies for any perceived or actual snark.

I hadn't intended to insult anyone and in terms of derision, I am certainly in no position to be doling it out.

Going back to the comment in question though, it is kind of the crux of the matter as I read it.

There are some posts further up that are very sarcastic about the contributions of these 'figureheads' and my statement to those people stands for itself: if these 'figureheads' are not necessary then do it without them.

I go on to provide very popular examples of where this simply isn't how things work.

Note that I have never even so much as intimated that it couldn't work like that, just that it doesn't often seem to.

I will admit that there is perhaps some of my own insecurity slipping in here. As someone with a technical background who now works in a less technical capacity I think I may have taken some of these comments personally and allowed for them to pile onto an already toppling pile of impostor syndrome type thinking.

The fact remains though that to build something you need engineers, you also need architects, and you also need someone to bankroll the project. Asserting that one or more of these players is more important than the others does seem illogical to me, and it would probably take a lot to change that outlook on my part.


That's exactly what I mean. Why state the incredibly obvious? Is there anyone who actually thinks Musk sat there and built the Tesla in his basement? Am I supposed to reach out and personally shake the hands of each engineer?

The engineers' "thank you" comes in many forms. The fulfillment of the job. The privilege to work on cutting edge tech. The satisfaction of their customers. Their salary and so on and so forth.


> The fulfillment of the job


> The privilege to work on cutting edge tech


> The satisfaction of their customers

> Their salary and so on and so forth

Irrelevant, that is/was part of their contract regardless of outcome (otherwise R&D departments wouldn't exist, failure tolerances wouldn't exist, etc).

Musk merely gave direction; he didn't implement or do this R&D on his own. Calling this Musk's success is like saying Einstein designed and built the atom bomb and is thus solely responsible for the deaths of many Japanese. But society doesn't take that point of view - only that he contributed to it, not that he owned it through and through. A good leader leads their subordinates, but they are not the sole factor in their subordinates' success. If a leader does not recognize their subordinates, they will soon find they have no subordinates to lead.



All hail King Musk


And a solar energy company. Well technically he helped his cousins start it and serves as the company's Chairman[0].

[0] https://en.wikipedia.org/wiki/SolarCity


And Zip2. He's at least 4 for 4 and maybe 5 for 5. And these aren't little web sites or mobile apps. Payments ($1.5b), automobiles ($25b), rockets ($10b), solar power ($5b).


Yeah, his record is pretty amazing. I mean, even his least successful startup, Zip2, 'only' sold for $300m, earning him a 'measly' $20m, and that was his first one after college.


You can't reach goals you never set. That's how we went to the moon.


It's the engineers' job to resist with good reasoning.

It's the visionary's job to convince them that we people only think the impossible things are impossible.

That's basically how I do my internal dialogue. I shoot down an idea of mine because it's too brittle, vague, and difficult. But I still want to build that something. So, I keep thinking and end up saying to myself, "Well, maybe I could do something that's like an ugly partial implementation, just leaving out the hardest things: it won't be what I want but I can write something that resembles it." And then I write the first prototype and end up having something here to play with. However, I still keep wanting more and maybe I get an insight that allows me, having first played with the first build, to make a better approach with a new set of tradeoffs but such that will get me closer to what I want. Gradually I approach what I want, possibly never quite reaching that point, but still getting closer and closer.


> "It's the engineers' job to resist with good reasoning. It's the visionary's job to convince them that we people only think the impossible things are impossible."

While I don't disagree with you, context matters greatly here. This is the same logic of every middle managing pawn or upper management narcissist who self-styles him or herself the "visionary" you describe. Someone like Musk has the technical aptitude and experience to accurately assess what is technologically possible or impossible and estimate how much it will cost and how long it will take. He has also surrounded himself with highly talented technical people who, from what I can tell, he listens to.

Unless someone has previously envisioned and brought to fruition some visionary product or service, we should remember that the most likely explanation for their insistence that the seemingly impossible is possible is some combination of their ignorance, incompetence, and narcissistic delusion.


Yep, this is the point I was trying to make way upthread. If "the engineers just couldn't do it" is treated as a physical reality due to the objective hardness of the problem, and Musk recalibrates his future time-estimates based on that, great! If "the engineers just couldn't do it" is treated as a failure of the engineers, and people are fired for failing to conform to Musk's rosy estimate, not great!

And the latter is what you'd presume by default of a manager. It might not be true of Musk in particular, but in absence of explicit evidence to the contrary, it's likely, which is why this kind of overeager optimism can be downright scary-sounding coming from a high-level corporate executive.

I have a feeling that, like you said, Musk listens to these people. Maybe he actually knows their potential better than they know it themselves; knows what they can pull off when driven, that they wouldn't think (or especially claim) themselves capable of otherwise. Maybe, in other words, he's like the protagonist of some military ensemble drama series. (Jack O'Neill in Stargate SG-1, say.)

And given how successful he is, maybe he is that guy! Someone's gotta be. But that guy is really rare. Most corporations, sadly, don't have that guy anywhere in them. And without that guy, you've just got unrealistic promises, followed by flops, followed by finger-pointing.


Browning understood Musk's impulse:

“Ah, but a man's reach should exceed his grasp, Or what's a heaven for?”

Robert Browning, from Men and Women and Other Poems


I don't see why they would groan in this particular instance (although I can believe it in general). He's not promising anything in the near-term that doesn't already exist in other shipping cars, or that his own people haven't already demoed.


> Every time Elon Musk makes this kind of announcement

... he does it not because it is easy, but because it is hard!

For his engineers.

Not so much for him.


You're discounting the value of his leadership and company stock.


As I understand it, this is exactly what manipulating the federal interest rate targets: changing the incentives on different investment types. When federal interest rates are low, this pushes people to search for other (riskier) investments.

Also, I wonder if 401ks and IRAs have that much weight in the grander view of "invested" money saved away, considering their tax-deductibility, and thus appeal as a savings plan, is limited and constrained (this year: $18k for a 401k, $5.5k ($6.5k if you're over 50) for IRAs).
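A quick sketch of those caps (the dollar figures are just the ones cited in this comment for 2015; employer matches, Roth variants and income phase-outs are ignored):

```python
# Contribution limits as quoted above, not authoritative tax figures.
LIMIT_401K = 18000
LIMIT_IRA = 5500
LIMIT_IRA_CATCHUP = 6500  # age 50 and over

def annual_tax_advantaged_cap(age):
    """Max combined 401k + IRA contribution for one person, per the quoted limits."""
    return LIMIT_401K + (LIMIT_IRA_CATCHUP if age >= 50 else LIMIT_IRA)

print(annual_tax_advantaged_cap(35))  # 23500
print(annual_tax_advantaged_cap(55))  # 24500
```

So even maxed out, that's roughly $23-25k per person per year of tax-advantaged saving, which is the "limited and constrained" part.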


> Moreover they are even more likely to pay proper attention to it if you are not rude.

It's funny how some people don't learn this.

Years of internet commenting has taught me the uselessness of posting in anger, and the value of a well-argued, well-written point. Strong emotions one way or another diminish the authority and impact of what you say.

It's easier said than done, of course: strong emotions are often specifically what lead you to make the effort of commenting.


I disagree, obviously.

If "strong emotions" were enough to diminish impact and authority, the ad business would simply be placards of bullet points. Even on HN, well-written and cool posts routinely are ignored or actively downvoted.

In the grand scheme of things, politeness is a good idea--but there are a few places where being nice simply won't get you as far as calling somebody out passionately on their wrongness. Low-level systems programming on large codebases with smart and stubborn and harried people is one of those cases.


