c3's comments

Step 1: Be Stewart Butterfield.

-----


Being Stewart Butterfield didn't do much for him when he lost a bunch of money making an online game.

-----


This is the second failed game that he's been a part of that spawned a successful startup.

-----


Step 2: make a failed game, pivot, succeed wildly :)

It's worked twice already!

-----


Step 2: Make a consumer web app.

-----


I'll modify that slightly:

Step 2: Take your internal messaging app and make it a commercial product.

-----


Isn't there a Step 1.5, then: make your own internal messaging app?

-----


Step 3: Charge a couple of bucks per month for it and lose money for the first year. But that's okay; as long as we break even, we'll make money eventually, right? I mean, that's what investors want us to do, right?

-----


FYI, they don't need to power-cycle your machine to remove it from the data center (if there is physical access); there's a battery backup that clips onto the power prongs while the plug is still 3/4 of the way in the socket.

-----


There's an app called Habit List (iOS) that pretty much does that.

-----


It seems to be a fairly simple patch for the 1.8 series, too:

    https://bugs.ruby-lang.org/projects/ruby-193/repository/revisions/43776
It's just a few lines truncating input in util.c.

Break your Ruby here:

    JSON.parse("[1."+"1"*300000+"]")

-----


Update: I just tested this; if you're on 1.8.7, you can manually apply the patch I linked in the parent comment and recompile. There is nothing preventing a backport.

-----


There's also a commit right after the merge commit to change

  if (nf > DBL_DIG * 2) continue;
to

  if (nf > DBL_DIG * 4) continue;

-----


This article looks fairly accurate (for what I understand of the SF market). Things are a fair bit cheaper/easier in Portland. I do actually own a food truck in Portland (as well as a software company).

We made a profit on about three different days last year. The other 300 were not profitable :( It would have helped a lot if we had family members working for chips and hugs; paying staff is a huge cost, particularly if you want quality gourmet-style food and hard-working employees (we have mostly 5-star Yelp reviews).

Before our pod landlord decided to shut down abruptly (combined with our chef/my girlfriend getting diagnosed with brain tumors), it was costing about $3k/month in net losses; however, that's largely the awful wet Northwest winters driving people away from outdoor pods. Once the warmer weather starts back up again and we find a new pod, we should be in the black, but the business model I'm working on involves a lot of carts with a central kitchen, in clusters in a few cities. If anyone wants to chat about investing, let me know :)

We got started for about $20k total, including the 'truck', which is actually a stationary trailer.

Obligatory link: http://theheartcart.com

-----


but the business model I'm working on involves a lot of carts with a central kitchen, in clusters in a few cities.

As someone who knows an awful lot about the food truck industry in multiple cities across the US: please, please do not proceed with this idea. Food trucks do not benefit from economies of scale. They are boutique mom & pop businesses.

I am sorry to hear about the troubles with your truck, but I am sure that you can optimize operations to become profitable on most days.

-----


Hmmm do I know you?

-----


Nope... I got into the LA scene a bit late (closer to your food truck's retirement date).

-----


I commented on the article to explain that Portland is pretty different due to the differences between 'trucks' and 'carts.' Obviously a cheaper trailer without high fuel costs makes getting started a little easier, and I suspect our permits are a bit cheaper as well. All that equals more carts out there competing for business.

The interesting thing that I've heard about here in Portland is that at the bigger pods, landlord-tenant contracts include agreements not to allow potential competitors to lease, including restrictions on style of food, i.e. the landlord is not allowed to lease to a second Thai cart on the same lot.

-----


The food in the article and your food seem quite involved and niche (Thai, vegan, Nordic). Does a cart like Potato Champion do better because the food is easier to produce and everybody likes French fries?

-----


Even Potato Champion serves some niche foods, with poutine and their fancy ketchups/aiolis. I think you're probably right, though, in that Potato Champion seems to have hit the sweet spot between niche and broad appeal in their market.

It also probably helps that potatoes are cheap.

-----


I don't know much about the business, but it seems like all the food trucks in my area in Colorado are niche foods. Occasionally there are events when food trucks show up at local parks, and there is a wide variety. There are usually several types of ethnic food, a truck or two with food sourced from a local organic farm, and a truck from a local brewery. It seems like food trucks appeal to the type of person who shops at Whole Foods or Trader Joe's. In fact, I think I remember seeing a truck serving food from Peru or Brazil, when Mexican food would be less niche, since Colorado has a bunch of immigrants from Central America and a number of Mexican restaurants.

-----


Thai is pretty mainstream in Portland, from what I remember.

-----


We just switched a third of our infrastructure off our existing host (EngineYard, which uses AWS) onto raw AWS and saved about $2,500/month. You can do it too!

-----


I've been around a few companies migrating from EY/Heroku to AWS, and the cost-effectiveness is always astonishing. In addition, you gain full control over your architecture, which is a major plus.

-----


I'm interested in how you achieved this. Did you change your server setup significantly from the default EY stack?

-----


I'm curious about this too, especially the last part (similar/identical stack)?

-----


And how much more engineering time do you waste on it now?

-----


It's not just scheduling; there are a bunch of legacy systems you have to integrate with. And of course it's health data, so there are more requirements.

Anyone want to burn three months on an unpaid spec? :)

-----


Man, contests like this seem like a bad idea for everyone involved. Solicit bids, take the $2 million, and get a prototype/demonstration of capability from your top 4-8 options. Review the submitted work and the communication each contractor provided, select the one you feel most comfortable with, and just pay them to make the damn thing.

-----


My understanding is that they've done that already, and that the delivered solutions fail to meet their needs.

-----


A couple of points from the OP:

* The replacement product will, as a part of the overall VistA EHR, deliver privacy, security, data integrity, patient accessibility, interoperability and other services required by federal law, regulations and VA policy. Many of these services are delivered by other components of VistA.

* VA intends to replace the current MSP with a scheduling product[1] which is a standards-based, modular, extensible and scalable, certified as compliant and fully interoperable with the production version of VistA now held by the Open Source Electronic Health Record Agent (OSEHRA), http://www.osehra.org/.

So the app won't have to replace everything...and it may get away with interfacing with some of the more easily interoperable aspects of the system. The main goal behind the contest is "To encourage development of systems that help Veterans schedule appointments to receive care from the Veterans Health Administration and to reduce risks in the future procurement and deployment of those systems"...which is vague enough to include a range of ancillary software services. And I doubt the federal contractors who likely receive much bigger amounts to maintain legacy systems would want the VA to install a system that makes them obsolete.

And on a more subjective note, large sums of money have been awarded to other government ChallengePost winners that were essentially proofs of concept and are barely functional today, if they were ever heavily used.

-----


Elasticsearch is great and magical, but there are a bunch of settings you MUST change from the defaults for it to be useful. I'm surprised GitHub wasn't using these, actually (like allocating the min and max memory to be the same size).

Generally it takes a catastrophic failure under load for you to discover that 'everyone' (everyone else) uses these!

-----


It's only a security problem if you're using the Model#where form. If you're doing Model#all or #each or whatever, you're fine.
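
A rough sketch of the distinction, assuming the issue is user-supplied conditions reaching the query (the model and parameter names here are hypothetical):

    # Hypothetical example: params[:filter] comes straight from the request.
    # Feeding it into #where lets the client control the conditions hash:
    User.where(params[:filter])

    # Calls that take no user-supplied conditions are unaffected:
    User.all.each { |user| puts user.email }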

-----


Are you sure? That's what I thought at first, but was the .where form even available in versions before 3.0?

-----


This is old news, but we successfully use https://github.com/freels/table_migrator in production (not on Heroku). It creates a copy of the table, performs the schema changes, copies the data over, then renames the tables for almost no downtime (i.e. 1-2 seconds).
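
Roughly, the copy-and-swap idea looks like this (a minimal sketch of the technique, not table_migrator's actual API; the table/column names are hypothetical and it assumes a MySQL-style adapter):

    conn = ActiveRecord::Base.connection

    # 1. Create an empty copy of the table and apply the schema change to it.
    conn.execute("CREATE TABLE users_new LIKE users")
    conn.execute("ALTER TABLE users_new ADD COLUMN nickname VARCHAR(255)")

    # 2. Copy the existing rows over (the gem also replays rows that change
    #    during the copy; that part is omitted here).
    conn.execute("INSERT INTO users_new (id, email) SELECT id, email FROM users")

    # 3. Swap the tables with one quick rename; this is the only brief downtime.
    conn.execute("RENAME TABLE users TO users_old, users_new TO users")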

-----


How does this deal with foreign key references? I thought that if you rename a table, the FK references from other tables will still point at the renamed table. Discussion here: http://dev.mysql.com/doc/refman/5.0/en/rename-table.html

The Percona pt-online-schema-change tool goes to great lengths to avoid this kind of problem.

-----


Basically, it doesn't, because it doesn't need to: Rails/ActiveRecord doesn't use foreign key references either. Problem solved :)

-----


Interesting. As you have probably guessed by now, I'm not a Rails developer and therefore did not know this. I was surprised to read about this after your comment and find that ActiveRecord went the lowest-common-denominator route here and therefore gave up any native foreign key integrity support.

-----


There is nothing to stop you from using foreign key constraints with Rails; the ActiveRecord migration API includes methods to create them for the major DB adapters, and there are plugins to automate the process to some extent. It's not the Rails Way™ because it sacrifices some database agnosticism, and therefore almost no one does it. People achieve the same behaviour with application-level validations in the model.

I'd wager that a lot of the big professional Rails deployments are doing FK constraints, though.
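
For example, the application-level approach looks roughly like this (model names are hypothetical):

    class Comment < ActiveRecord::Base
      belongs_to :post

      # Enforced in Ruby at save time, rather than by a database-level
      # foreign key constraint:
      validates_presence_of :post
    end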

-----


The gem 'foreigner' makes them painless, and supports Postgres, Oracle, and I think MSSQL.
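
If I remember its API correctly, a migration using foreigner looks something like this (table names are hypothetical):

    class AddCommentForeignKey < ActiveRecord::Migration
      def up
        add_foreign_key :comments, :posts, :dependent => :delete
      end

      def down
        remove_foreign_key :comments, :posts
      end
    end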

-----


I have never missed foreign keys on the projects I've worked on that don't use them.

This is because those projects typically sit on databases that are only expected to be accessed via web API and not directly from some other source.

That is basically the Rails philosophy: only let the app talk to the DB, and let the only outside interface to the DB be the ORM, which has to do most of the data validations anyway.

-----
