Hacker News | socratic's comments

Was the intent ever not to get talent acquired?

I'm very confused by all of the anger at Google and Facebook for acquiring these companies. The companies look like they were designed to get talent acquired in the first place! They both have very small teams (2--5), Sparrow at least appears to have taken very small amounts of funding (just seed a year or two ago), and I can't name a consumer desktop application (especially a really generic one like mail with tons of free competitors) that has become a major $1bn+ (or even $100m+) business in the last several years. There probably aren't enough eyeballs for ads, and App Stores have made consumer software cost expectations too cheap. What's more, Sparrow is pretty much designed to make Google's offering better.

Is this interpretation wrong? Could these small teams have built independent companies (rather than attractive teams for talent acquisition) on Mac desktop software in 2012?


> App Stores have made consumer software cost expectations too cheap.

Not just that. Previously, when you released a paid desktop app, you weren't expected to give free upgrades to the user. With the App Store model, if a user purchases your app at V1 and five years down the line you're making a V4, the user will probably still feel entitled to getting the "update" (note: I said update, not upgrade) for free.


What do you mean by update vs. upgrade? Update = bug fixes only vs. upgrade = more features? Or something else?


Oh, sorry, I should clarify. But yes, you are correct.

Update basically means bug fixes, resolving incompatibilities, and things like that. [e.g., Windows XP -> SP1 -> SP2 -> SP3]

Upgrade would mean adding new features: moving along with the times, new technologies, etc.


Got it, thanks.


... and I can't name a consumer desktop application (especially a really generic one like mail with tons of free competitors) that has become a major $1bn+ (or even $100m+) business in the last several years.

Exactly. An "exit" can mean a lot of different things depending on the company, situation, financials etc.


Why does a 2-5 employee company have to get to $100M in the first place? It's pretty sad if this is indeed true.


reinhardt: I agree. I think the reason for this kind of thinking ("have to get to $100MM") is that developers are brainwashed by all the TechCrunch-style hype about the few startups that make it very big, and they forget common sense: they can live very well even while pulling in much less than $100MM in revenue. $1-5MM, or even $0.5MM, is enough for a pretty luxurious life for most people anywhere in the world, unless you want to buy yachts or something like that.


Are there any good books (or other resources) on modern PHP?

I last used PHP back with PHP3 (and then went C++ => Java => Python => Python/JavaScript => Ruby => Python/R), but a bunch of code I want to read at work uses PHP (with Zend). I no longer remember most of what I learned about PHP3, though obviously the PHP syntax seems to be at least somewhat readable as a sort of amalgam of Perl and C++ syntax and idioms. What does, e.g., Facebook use to get engineers who don't know PHP (but might know C++ or Python) up and running?


As far as I know, at Facebook we don't usually give people a book to learn PHP preemptively. Those who don't know PHP (or are rusty) can generally get the syntax/keywords right in an hour, and then there's a longer time spent getting familiar with both the standard library and our own frameworks/libraries.

There is a lot of good code to look at and learn from, and the Bootcamp program teaches a bunch of our own library and some guidance on what good and bad standard library functions are for certain cases. We also have a "newbie" group where people can ask questions about the codebase. And if you do something without knowledge of a built-in (or Facebook) function/class and make things more complex than they need to be, your code reviewer will let you know how to do it easier/more idiomatically. Every once in a while there are language tech talks, and sometimes even whole-day training sessions (we did one on exciting new C++11 features).

Besides that, I definitely spent a lot of my time with php.net/function_name open and exploring when I couldn't remember all the functions available or the parameters and order.


"Real-World Solutions for Developing High-Quality PHP Frameworks and Applications" by Sebastian Bergmann (referenced in the original article) and Stefan Priebsch. It won't go in to things like "how do I use usort" but it touches on important things like testing.


Does anyone know if namespace packages actually work?

I've been wanting to put up a few utility libraries on Github. I don't think they're good enough for PyPI, and I mostly just want to be able to pip install them and have access to a few minor functions that I use periodically. However, I don't think they deserve their own top-level library names; ideally, they'd all live under "socratic.*" or similar. But there seems to be some sort of mess involving __init__ in the top level of the package, a pkgutil fix, an incomplete PEP (maybe 402?), etc. Should I just give up, name my internal libraries "socratic_{name}", and be done with it? Or does having multiple packages share the same namespace actually work?


> Does anyone know if namespace packages actually work?

They used to work somewhat. I had flaskext.* registered as a namespace package but unfortunately it conflicts with pip which is why new Flask extensions name their package `flask_bar` instead of `flaskext.bar`. The exact problem is that setuptools uses pth magic to put libraries into a namespace package which conflicts with pip's idea of installing packages flat into site-packages.


It gets bad if you use pip and easy_install with namespace packages. If you stick to one or the other you should generally be okay.

But yes, just use socratic_{name} – it makes everything easier, and "." and "_" are just string differences. The only good reason IMHO for using namespace packages is because you are breaking up an existing package into multiple packages and you want to keep the dotted names. And even then I might just prefer a compatibility package that does the mapping and move things to entirely new package names.
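For what it's worth, the pkgutil mechanism being debated here can be sketched end to end. This simulates two separately installed distributions sharing one "socratic.*" namespace via pkgutil-style __init__.py files; all the distribution and module names below are hypothetical:

```python
import os
import sys
import tempfile

# Build two fake "distributions", each contributing one submodule to the
# shared "socratic" namespace package.
root = tempfile.mkdtemp()
for dist, mod in [("dist_a", "alpha"), ("dist_b", "beta")]:
    pkg = os.path.join(root, dist, "socratic")
    os.makedirs(pkg)
    # Each copy of socratic/__init__.py contains only the pkgutil incantation.
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write("__path__ = __import__('pkgutil').extend_path(__path__, __name__)\n")
    with open(os.path.join(pkg, mod + ".py"), "w") as f:
        f.write("NAME = %r\n" % mod)
    sys.path.insert(0, os.path.join(root, dist))

# Both halves of the namespace resolve, despite living in separate trees.
from socratic import alpha, beta
print(alpha.NAME, beta.NAME)  # alpha beta
```

extend_path scans every sys.path entry for a subdirectory matching the package name and appends each to `__path__`, which is exactly the "pth magic" that trips up tools that expect packages to be installed flat.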


It has some rough edges--sometimes when I try to build the docs for one, it has a hard time importing both packages until I kick it a few times--but they work. I'm doing it for these two modules:

http://pypi.python.org/pypi/dogpile.core/ http://pypi.python.org/pypi/dogpile.cache/

edit: also they work fine with pip so not sure what armin's issue here is.


> edit: also they work fine with pip so not sure what armin's issue here is.

See this issue which ultimately made me ditch them: https://github.com/pypa/pip/issues/3


Does Python have a BPEL/BPM-style workflow engine like Ruby's ruote?

This doesn't seem to be it, but I would love to have a workflow engine which is designed for long running tasks, with periodic human interruption, conditional flows, and so on.


I'm not sure how they compare to Ruby's "ruote", but there are quite a few Python workflow managers and libraries out there.

GC3Pie (http://gc3pie.googlecode.com/) is a Python library for running many-task workflows featuring interruptible execution, interfaces to SGE, PBS and LSF clusters, and composition operators to build dynamic task dependencies (so, not just DAGs). Disclaimer: I'm one of the developers.

Weaver (http://bitbucket.org/pbui/weaver) is a Python front-end for building workflows that can run on top of the Makeflow engine, supporting SGE, Condor and WorkQueue as execution back-ends. (See the comment by its author "pbui" on another HN thread: http://news.ycombinator.com/item?id=4047100)

NiPyPe (http://nipy.sourceforge.net/nipype/) is a Python workflow engine especially targeted at Neuro-Imaging processing (but the core framework is generic, as far as I understand).

I'm pretty sure this list is not exhaustive: many people seem to be re-writing the same core functionality, coming from different fields and/or with slightly different requirements.
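None of these engines is needed to see the core idea, though. The "conditional flow" part is just steps that name their successors; long-running engines add persistence and human wait-states on top. A toy sketch (all step names made up):

```python
# Each step mutates a shared context dict and returns the name of the
# next step, or None to stop the workflow.
def approve(ctx):
    ctx["approved"] = ctx["amount"] < 100
    return "pay" if ctx["approved"] else "reject"

def pay(ctx):
    ctx["status"] = "paid"
    return None

def reject(ctx):
    ctx["status"] = "rejected"
    return None

STEPS = {"approve": approve, "pay": pay, "reject": reject}

def run(start, ctx):
    step = start
    while step is not None:
        step = STEPS[step](ctx)
    return ctx

print(run("approve", {"amount": 42})["status"])   # paid
print(run("approve", {"amount": 500})["status"])  # rejected
```

A real BPM engine would checkpoint `ctx` and the current step name to storage so the flow can pause for days awaiting human input, but the control-flow skeleton is the same.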


If this post wasn't on Hacker News, how would someone reach this page without knowing what R is? Would you expect a JavaScript tutorial page to explain what JavaScript is? Or a statistics tutorial page to explain what statistics is?

More broadly, your suggestion makes a lot of sense for websites about software projects. For example, the R software project page (the first hit if you Google for "r") says that R is "a free software environment for statistical computing and graphics." However, when would anyone seek out a set of tutorials about something without knowing (at least at the one sentence summary level) what that thing is in the first place?


Is that what Blank is arguing though?

I remember listening to your office hours at TC Disrupt, when you were talking to the guy from Omniplaces. You said: "This is commercialized research? Ouch. It’s often a solution in search of a problem..." (Of course, that was part of a broader commentary about how the startup shouldn't be competing with Google in search, which I don't necessarily disagree with.)

But I think that's what he's talking about. Sun, Google, Cisco, Akamai, VMWare, and a variety of other technology companies came fairly directly out of commercialized university research in systems, databases, networking, and virtualization. Are there YC companies that are commercialized university research? Such companies at the very least seem very different from companies like Airbnb, which evolved from a consumer problem rather than innovative technology. What's more, basing investing decisions on current consumer behavior and problems seems much more sensible than doing so based on technology, which is what Blank seems to be arguing and what you seemed to be arguing at Disrupt.



The problem with commercialized research is more the attitudes of the founders than what they're building. They usually approach starting a startup as a solution in search of a problem. Sun (and Google) were exceptions in that even while working on the project within a university, they were building a product. We'd love to fund that kind of project, but they're rare.


Isn't the fact that they're rare his point?


No, Steve is saying

    ... material science, sensors, robotics, 
    medical devices, life sciences, etc... 
    VCs whose firms would have looked at these 
    deals or invested in these sectors, are now 
    only interested in whether it runs on a 
    smart phone or tablet.
Paul is saying that founders aren't coming to him with the former sort of startups.


But couldn't that be said about Facebook too?


Facebook was born in a university dorm room, not a university lab (or department lounge). Facebook is a great product, and has spawned some interesting technology created to solve a massive scaling problem, but what caused the scaling problem in the first place was a product built with a good intuition for what people love, using existing, not academic, technologies.


I was more thinking about the problem looking for a solution part.


Do they know you're there? Do they realise that they can do interesting research, and build a product, and get a chance at funding from you?

Has anyone tried to solve that publicity problem?


Is there a better signal than high pay for choosing a technical internship? Looking at Alexey's list, with the exception of the hedge funds, Google, Facebook, Twitter, and Dropbox would be my top choices for a technical internship, and they're also the highest paying. I suspect this is because a good engineering culture leads to engineers who cost more and get better equipment, which eventually filters down to interns. (Obviously, there are exceptions.)

More broadly, is interning at a startup a good place to learn technical skills (rather than marketing, networking, funding, etc.)? Most of the information on running common web stacks, for example (nginx, rails, django, node.js, varnish, postgres, mysql, redis, etc.), is either freely available on the web or even can be bought as a service (heroku, app engine, aws). By contrast, the only way to find out how engineers have solved problems (web serving, analytics, etc.) at truly huge scale (e.g., 10+% of the human population) is to actually see the solutions at a big company (though sometimes companies will publish details of 3+ year old infrastructure). Big companies also tend to have stricter code review cultures, while small companies tend to just need code written now. All of this seems to point to learning significantly more at a (technically excellent) big company, even though the "output" of the intern might appear to be less.

In fact, doing a startup seems like it might be more like resume stuffing than working at Google or Facebook these days. Working at a small startup, you get to talk about your impact, how you built a mission critical piece of infrastructure, whereas at a big company you're probably only trusted to write some small features that are not in the critical path of full time engineers. What's more, you get to say that you worked at a startup, which everyone pretty much respects around California, even if the startup dies or wasn't very technically interesting. But the potential for learning quality engineering through code review and understanding the existing infrastructure seems a lot higher at the big company.


Startups vary. Just because the majority of startups on HN are making web apps at medium scales does not mean everybody is.

I've looked around at a bunch of internships, and the most technically interesting ones have all been at startups. Now, I have to admit that the majority of startup internships I've seen have been in the bland web-based software category. And, perhaps, larger companies have more technically challenging internships on average.

However, average is not what you should be interested in. And the exceptional startups I've seen are more exceptional than what you would do at a bigger company. This makes sense--there are more startups, they are more varied and less conservative than bigger companies and they cover more niches, so the variance in technical difficulty is going to be greater. Bigger companies also have more friction: existing processes, gigantic code bases, very specific requirements, large investments in existing tools...

Also, it's much easier to find startups in your particular field of interest. I've talked to companies doing interesting work in machine learning, bioinformatics, robotics and even type systems (I haven't seen any interesting work with type systems at big companies at all). And these are just things that happen to interest me in particular: there are probably interesting startups in whatever field happens to interest you as well.

So I think startups are actually rather good for doing something cool and novel, especially if it's something off the beaten path. You just have to find the particular awesome startup that interests you rather than joining another web/mobile-based company.

Now, there are some advantages to seeing how a bigger company operates as well. Understanding how to organize hundreds of programmers, maintain gigantic code-bases, use significant resources efficiently and survive in a larger corporate setting are all very important.

Spending at least one summer at a bigger company is useful if only for these, just like spending time at even a technically boring startup is great for the non-technical reasons you listed. But for learning technical skills, especially more specialized and advanced ones, I think a startup (but not just any startup) is a great choice.


I've talked to companies doing interesting work in machine learning, bioinformatics, robotics and even type systems (I haven't seen any interesting work with type systems at big companies at all).

What start-up is doing work in type theory?


I talked to somebody from the Ashima group[1] about their gloc project, which they just released a version of. Having thought about it, I suspect there are some other companies working on it as well, like maybe Typesafe.

[1]: http://blog.ashimagroup.net/category/ashimaarts/


Note that Google Summer of Code is probably "better" than the amount it pays would indicate.


Is this true? It's good resume filler, but you won't "meet" many people through it; your mentor, though probably a member of a big corporation (most open-source projects have corporate sponsors), is just one person.

So the networking opportunities are limited, and a lot of the work is the "easier" (less critical) stuff for projects (although, on the other hand, maybe that means the more interesting projects).

It will definitely look good on your CV, but any better than getting a good internship elsewhere?


Nitpick: GSoC is not just OS-related stuff. One of the areas they are funding (exciting to me personally) is development of CGAL, the most popular and powerful open-source library for computational geometry (http://www.cgal.org/gsoc/2012.html)


If I'm reading you correctly, you definitely made money on the App Store. But it sounds like you are still tens of thousands of dollars underwater in opportunity cost. Is it wrong to think about it that way?

You get "more than half" $3400/month, let's call that $1700/month over the last two years, for $40,800. The two of you spent two man-months developing the app originally, and maybe a month or two since then. Let's call that 3.5 man-months. In some sense, your only cost was about $4,100 (cost of living + failed designer + $500). However, you could also have just worked for someone else. Two years ago (when you developed the app), it looks like iOS development was getting billed at about $150/hr (in Austin, SF, and elsewhere, though one guy in Ann Arbor quoted $75/hr) [1]. 3.5 work months is about 595 work hours which is about $89,250. (Put another way, if your contracting hourly rate would have been $70/hour, you would have just broken even now.) If that math is correct (and I've made all sorts of assumptions), you would have done financially better to work for someone else (especially someone at the top of the charts), no? In a sense, did you "spend $100,000 on an app" in opportunity cost? How long do you estimate your app will continue paying out a "dividend"?

It sounds like you are saying: apps don't make that much money, so don't spend lots of money developing them. But apps seem to take at least a few months to develop, so it seems inevitable that apps will cost a lot of money if the developer/designer time is market price. (A <$10,000 app in 3.5 months would be an hourly rate of <$17/hr, for example.) Is that true, and if so, do you think aspects of that will change over time?

[1] http://news.ycombinator.com/item?id=1251155
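The back-of-the-envelope math above can be checked in a few lines; all figures come from the comment, and the ~170 billable hours per work month is an assumption:

```python
months = 24
revenue = 1700 * months            # "more than half" of $3,400/mo, over 2 years
work_months = 3.5
hours = work_months * 170          # ~170 billable hours per work month (assumed)
contract_value = hours * 150       # at the $150/hr contracting rate cited
breakeven_rate = revenue / hours   # hourly rate at which the app breaks even

print(revenue, hours, contract_value, round(breakeven_rate, 2))
# 40800 595.0 89250.0 68.57
```

So the app's revenue works out to roughly $69/hour of development time, versus the $150/hour contracting benchmark, which is where the ~$70/hour break-even figure comes from.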


You can't just sit down and start coding for your $150 or even $70/hr though. Here are some factors I can think of that change the calculation substantially:

- You have to take time and put in effort to find good contracts. Not all work is billable.

- You will probably want to be able to show clients an app you've already created. If you can make that app a $40k earner then you are in great shape.

- You will most likely have to be willing to put up with less fun and less flexible work for clients.

I think the $40k from 3.5 months of work is amazing and would be a no-brainer for a huge number of people. The big downside is the level of risk involved and the difficulty in duplicating that level of success.


That's also time they spent working for themselves instead of for someone else. Which is valuable.

Plus the possibility of this app immediately enriching their life if it's something they developed to scratch their own itch instead of the theoretical itch other people have, or the itch that the person who hired them has that they don't share. (Like the time my not-driving self worked on a promo site for an app designed to crowd source the problem of finding a parking spot in a busy city. Fucks given beyond the minimum to satisfy the client enough to get paid: 0.)


Most valuable piece of info here for App developers: Make your app universal. It doubled our sales.

If your thesis is that you'd be better off as an iOS contractor than making your own apps, then you might be right. But this was our first app, and my partner went from someone interested in design to being an actual designer as part of this experience. I consider that alone to be worth more than $89,250, because it is a multiplier on what we have been able to do subsequently.

Also, for the first few months this app was doing around $200 a month. It took effort on our part to change that. We have done multiple things that each caused the sales of our app to double; in fact, every major release of our app has caused sales to double. This leads to one of the big secrets of the App Store: there are 600,000 iPhone apps but only 200,000 iPad apps. We get equal sales from the iPad and the iPhone. We started out as iPad-only, then added iPhone--boom, sales doubled. (I was hoping for a triple or more because there are so many more iPhones, but alas, no.) Make a real universal app and you'll do better.

I believe there are some things we could do that would double our sales a couple more times--for instance, adding some social aspect to the app or adding iCloud support might each result in a doubling, which would throw your numbers off for the comparison.

I didn't put this app out as an example of what one could potentially make--I put it out there as an example of what could happen if you do an app that never makes it onto any of the top lists or gets any marketing. It was meant as a minimum example, not a maximum example.

This app was also an experiment and has been used as such. It would have been easy to put out a revision in the last year (and there's a feature we really need to add), but by waiting a year we've observed how not updating the app has affected sales. Now when we put out the revision, we'll be able to confirm our hypothesis about past revisions.

I expect this app to continue paying out this dividend for the foreseeable future, which I'd say is about 2 years.

This "dividend" is completely dependent on the way Apple markets apps to people. We don't show up on any lists. When Apple changes their algorithm, we feel it, much like google changing their algorithm impacts websites. So far, Apple has been changing the algorithm to highlight quality apps, and that's a good thing so this has benefited us, but at some point we may run afoul of one of the rules in the algorithm. (the one year without an update was a test to see if there was a rule like that-- and I think there is, I think we've been punished for going so long, but the slack has been taken up by growth in the total addressable market.)

With any business effort, the question really isn't whether "apps" make "money", but how much leverage you get. What makes money is work and smarts, right? You can spend X amount of time and make Y amount of money a year building a SaaS website, building an app, or working a job. Apps give you more leverage than SaaS websites, which give you more leverage than employers--but that's just a generalization. It really depends in part on how well you understand the market, but also a great deal on how much effort you put into it.

I was pointing out here that we have an app we haven't put much effort into that has made rather good money given the level of effort.


Is there a standard set of tools that are being advocated here, in addition to the repository structure?

I've been mostly working with Rails lately, and I'd like to continue using tests, mocks/stubs, and sensible build rules, but I'm not sure what the preferred Python tools are. What's the best test::unit equivalent? What's the best way to mock an object? Do people really use Makefiles rather than something else (like SCons)? Is there some way to use virtualenvwrapper without bash (e.g., M-x eshell)?


I use Make because it's fast, simple, always available, and bash has autocompletion for it, so I can type "make t<TAB>" to run my tests, for instance (or "make <TAB>" to see what commands I have available). I use make in my Python projects for things like running tests, coverage, building the documentation (sphinx), and removing .pyc files.

For deployment I use fabric, but I have Make targets for the most used commands (again, it's nice to have completion). For example, these are two targets to deploy to my server and to my test machine:

            fab -f deployment/fabfile.py prod deploy

            fab -f deployment/fabfile.py test deploy
I use either pytest or nosetests to run my tests, mainly to have better and colored output.
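As a concrete illustration of why pytest is pleasant, a test module needs no boilerplate beyond plain functions and bare asserts. The file and function names below are hypothetical; pytest discovers `test_*` functions automatically when you run `pytest`:

```python
# Contents of a hypothetical test_math.py.
def add(a, b):
    return a + b

def test_add():
    # pytest rewrites these asserts to show both operands on failure,
    # which is a big part of the "better output" mentioned above.
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
```

The same file also runs under nose, since both runners share the `test_*` discovery convention.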

I don't think you can use virtualenv(wrapper) without bash, but you can use it with M-x ansi-term. But I got tired of trying to configure emacs to run Python the way I wanted, and now I edit my code in emacs and run it in IPython in the terminal. IPython's autoreload [0] is a huge help.

[0] http://ipython.org/ipython-doc/dev/config/extensions/autorel...


Why do you remove .pyc files?


When doing a large refactoring or removing modules all-together, leftover .pyc or .pyo files can cause some very hard-to-identify import errors.
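The "make clean" target mentioned upthread amounts to a recursive delete of stale bytecode. A minimal sketch, assuming CPython's .pyc/.pyo and __pycache__ layout (the demo tree below is throwaway):

```python
import os
import shutil
import tempfile

def clean_bytecode(root):
    """Delete .pyc/.pyo files and __pycache__ dirs under root; return count."""
    removed = 0
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        for name in filenames:
            if name.endswith((".pyc", ".pyo")):
                os.remove(os.path.join(dirpath, name))
                removed += 1
        # Bottom-up walk, so the now-empty cache dir can be removed safely.
        if os.path.basename(dirpath) == "__pycache__":
            shutil.rmtree(dirpath, ignore_errors=True)
    return removed

# Demo on a scratch tree containing one stale bytecode file.
root = tempfile.mkdtemp()
cache = os.path.join(root, "pkg", "__pycache__")
os.makedirs(cache)
open(os.path.join(cache, "mod.cpython-311.pyc"), "w").close()
print(clean_bytecode(root))  # 1
```

After a refactor that deletes a module, running this prevents the orphaned .pyc from shadowing the removal and producing those hard-to-identify import errors.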




Make is better than SCons if you are not compiling C. Lots of people would use fabric instead. The stock unittest in the stdlib is quite good, especially in python 2.7+.

For mocks, wait until mock (http://www.voidspace.org.uk/python/mock/) is in the stdlib or download it yourself.
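That wait did eventually pay off: mock landed in the stdlib as unittest.mock in Python 3.3. A minimal stub-and-verify sketch; the client object and `fetch_greeting` function here are hypothetical:

```python
from unittest import mock  # the standalone "mock" library, now in the stdlib

# A hypothetical function under test that depends on some client object.
def fetch_greeting(client):
    return client.get("/greeting").upper()

# Stub the dependency, exercise the code, then verify the interaction.
fake = mock.Mock()
fake.get.return_value = "hello"
result = fetch_greeting(fake)
print(result)  # HELLO
fake.get.assert_called_once_with("/greeting")
```

Mock objects record every call, so the final assertion fails loudly if the code under test ever hits the dependency with the wrong arguments.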


If you are on Python pre-2.7, the unittest2 (http://pypi.python.org/pypi/unittest2) library backports most of the unittest 2.7 features.


All activating a virtualenv does is set up a bunch of environment variables (PATH, PYTHONPATH, etc.). They provide activators for bash and csh; it wouldn't be hard to set up the same environment for M-x eshell, even if it's a slightly more manual process.
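To make that concrete, here is a sketch of the environment changes an activator performs, written as a pure function (the paths are hypothetical); replicating this inside eshell amounts to setting the same variables:

```python
import os

def activate(env_root, environ):
    """Return a copy of environ with virtualenv-style activation applied."""
    environ = dict(environ)  # work on a copy, like a subshell would
    environ["VIRTUAL_ENV"] = env_root
    # Prepend the env's bin dir so its python/pip win PATH lookup.
    environ["PATH"] = os.path.join(env_root, "bin") + os.pathsep + environ.get("PATH", "")
    # Activators also clear PYTHONHOME so it can't override the env.
    environ.pop("PYTHONHOME", None)
    return environ

env = activate("/home/me/venvs/demo", {"PATH": "/usr/bin"})
print(env["VIRTUAL_ENV"])  # /home/me/venvs/demo
```

Deactivation is just the inverse: restore the saved PATH and unset VIRTUAL_ENV.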


Has anyone made a comprehensive comparison of the top web frameworks and how mature the libraries/plug-ins/services around them are?

Frankly, Rails and Node.js seem to have much more in common than apart. Both are based around relatively inconsistent languages (though conforming to some BDD or Good Parts style can help), both have deeply flawed runtimes in terms of performance and especially parallelism, and they seem to have extremely high overlap in the people who program in both frameworks.

What I care about is libraries, plug-ins, and services. Specifically, if I have some new web task, what I care about is that there is some library/plug-in/service that exists to perform that task (e.g., user auth, Facebook API, exception logging, Heroku, resque, state machines), that the library/plug-in/service is mature enough that I can pretend it works, and that it has enough of a community around it that it won't break in the future.

Is there some comprehensive list of use cases, plug-ins/libraries/services, and relative maturity of those plug-ins/libraries/services across web frameworks? For example, the maturity of the various ORMs across frameworks, or the various testing tools, or the various Facebook libraries. It would be great to see it change over time, and to know when to jump ship to the next flawed language/runtime that has good plug-ins/libraries/services.


