What are some of the hacks you have heard of or personally experimented with? Thank you in advance!
Here was our basic workflow:
1. Marketing site set up using Squarespace
2. Developers apply using Typeform. We add them to a Google Sheet with Zapier.
3. New jobs submitted through Typeform, which triggers an email to us
4. We manually set up a new Google Form to collect proposals. We send the resulting Google Sheet to the client.
5. We manually search for developers whose skills match the project and email each of them to ask them to submit a proposal using the Google Form.
6. Project owner selects a developer. We send a DocuSign contract to both parties.
7. We send the developer a Google Sheet to log hours
8. Every week we go through the developer's timesheet and manually issue an invoice using PaidLabs.com (with Stripe on the backend)
9. When the payment gets deposited, we pay the developer (wire transfer outside USA, Payable.com inside the USA)
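Even the first automation pass on a step like 5 can be a script run by hand against a CSV export of the applicant sheet. A minimal sketch - the column names ('name', 'email', 'skills') are assumptions for illustration, not the real sheet's layout:

```python
import csv

def match_developers(sheet_csv_path, required_skills):
    """Return developers whose comma-separated 'skills' column covers
    every skill the project requires. Column names are illustrative."""
    matches = []
    with open(sheet_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            skills = {s.strip().lower() for s in row["skills"].split(",")}
            if set(required_skills) <= skills:
                matches.append({"name": row["name"], "email": row["email"]})
    return matches
```

The outreach email in step 5 can stay manual while the searching stops being.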
We slowly automated each step with a web app, which was published part-by-part as we finished automating a particular step. We did about $100K GMV with this no-code stack before we completed the end-to-end web application.
Today, Moonlight is profitable and bootstrapped. We still manually prototype things. For example, we came up with the idea of a subscription product on Tuesday last week. We had a client agree to it, so we issued an invoice through Stripe Invoices on Friday, collected the money, and are now starting to build subscriptions into the app.
I find that playing the game requires a healthy balance between #AutomateAllTheThings and "don't waste time building a huge factory for something you need very little of, or don't yet know how much of you'll need". The same experience/way of thinking applies heavily to development/programming in the form of "premature optimization". In my first Factorio game I spent WAY too much time building massive factories to pump out every single item I needed. This led to a boring grind and wasn't efficient at all. I was so preoccupied with making sure I always had a chest full of item X ready that I wasted a bunch of time (which, to be fair, in Factorio wasn't exactly wasted, since I was having fun).
For me this resonates heavily, as it led to a sort of "analysis paralysis" that I immediately recognized from my attempts at a side business. The next thing I attempt, I am going to make a conscious effort to focus on an MVP above all else and ignore scale completely. My last attempts at building a side business spiraled out of control quickly as I tried to right all the wrongs, foresee all the potential issues, sidestep all the mistakes I saw happen at work, etc.
In some ways I really miss my days in high school when I would open up a blank PHP file and start coding instead of wasting HOURS looking into various tech stacks/frameworks/libraries/etc. in an attempt to "future proof" my setup. I realize now it's a fool's errand. That's not to say you should never think about how you would scale, but don't let the idea of future success keep you from creating the very things you are trying to future proof.
> In some ways I really miss my days in high school when I would open up a blank php file and start coding instead of wasting HOURS
* I think I'll create a little website for XYZ
* Better setup a GitHub repo for this
* Hmm, I really like using Angular/Typescript since I use that so I'll use that
* Better make sure Webpack is all setup and working
* Should I do my development in Docker since that's how I'll deploy it?
* Maybe I'll try this new NodeJS backend framework that looks interesting
* I need to make sure all my config lives outside of my app so that I don't hardcode values
* Oh crap, I want to share code between server and client but they both have their own Webpack configs and merging the two without screwing something up doesn't sound fun
* Wait, what was I going to create a website for again?
My most enjoyable non-work programming over the last year or so has been in my ~/git/temp-scripts folder, where I can just create a new folder for something, run npm init, and be coding in less than a minute. It's mainly used for, as the name suggests, temporary scripts, or scripts that I'm not sure will have legs or that I'll only use once. It's pretty much my little playground where I don't have to worry about scaling, reusability, etc.
I get the sense that there’s a gem in your scripting, and one of those might become useful and need more work, and then you’d have a reason to scale it.
There’s [a recent interview with PG](https://www.startupschool.org/videos/36) where he talks about the joys of hacking, you might find it interesting.
I've found, getting back into it, that Project Euler really helps me 'just start coding'. Last time I got into it (around 2014) I would focus on trying to understand the mathematics and solving things by hand. Now I look at a problem or two every few days, and have fairly streamlined solving them.
Breaking the problem into chunks, making sure those chunks work with tests, gluing the chunks together to get the fact needed by the question, running with the specific input needed for the answer, realising the naive way I implemented the solution is way too slow, adding timing information, analysing the algorithmic complexity, reading pages and pages of mathematical theory...
Sometimes it goes off the deep end, but getting some result almost immediately is so useful.
It's the reason Factorio is fun (you can just build it by hand, most of the time) and Excel is used everywhere (you can see the algorithm and its results together, as you build it). It takes a bit of discipline to do yourself, when the gratification loop isn't built in to the tools you're using, and that's why I think Project Euler is so useful. You get short, achievable programming problems that you can solve quickly and build a habit of solving efficiently.
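For a concrete picture of that loop, here is Euler's first problem (sum of the multiples of 3 or 5 below 1000) done chunk-by-chunk: a naive version for an immediate result, a closed-form chunk once the naive one feels slow, and a glue step checked against the naive answer.

```python
def naive_sum(limit):
    # Chunk 1: the obvious O(n) scan -- gives a result immediately.
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

def multiples_sum(k, limit):
    # Chunk 2: sum of the multiples of k below limit, via the
    # arithmetic-series formula k * (1 + 2 + ... + m) -- O(1).
    m = (limit - 1) // k
    return k * m * (m + 1) // 2

def fast_sum(limit):
    # Glue: inclusion-exclusion so multiples of 15 aren't counted twice.
    return (multiples_sum(3, limit) + multiples_sum(5, limit)
            - multiples_sum(15, limit))

# The cheap test that keeps the refactor honest:
assert naive_sum(1000) == fast_sum(1000) == 233168
```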
If I sign up as a developer, how easy/hard would it be to find a gig? I assume the supply of good developers is not an issue, so finding work would be hard.
Yes, we applied to YC. We didn't get an interview. I assume that the market size isn't big enough for VC.
> If I sign up as a developer, how easy/hard would it be to find a gig?
Contracting kind of depends on how much energy you put in. We've seen smart people not try that hard, and get zero gigs after 20 applications. On the opposite end, professional contractors who give thoughtful applications get many jobs at high hourly rates.
Moonlight can only net increase your job flow. If you are in need of contract work, you can always email us (team at moonlightwork.com) and we'll manually coach you on improving applications and make warm intros to clients. (We have had success doing this manually over the past month, and are contemplating turning it into an automated "I'm available right now" flow in the product.)
Worse ideas with smaller markets have been funded by YC. I'm guessing you could have gotten in if you pitched a larger vision.
It's been a long journey to understand what customers want. I think that a subscription product is our future, and we didn't figure that out until last week! But, we even tested that manually - and, in doing so, have done more net revenue in the previous seven days than we did in the first year of our business.
We've fortunately been ramen profitable this year, but I hope that in 2019 we can start to focus on growth!
A marathon is a race ;-)
Is your idea different?
(Paid is a YC company, btw!)
EDIT: My apologies for the misunderstanding! I didn't realize that you've already built that functionality from your top comment.
One of our technical needs was auto-charging clients for overdue invoices that were neither approved nor disputed. That's application logic, not something we can use an external system for.
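The selection half of that logic is just a filter over invoice records. A hedged sketch - the field names, statuses, and seven-day grace period here are hypothetical, not Moonlight's actual schema, and the charge itself would happen downstream (e.g. via Stripe):

```python
from datetime import datetime, timedelta, timezone

GRACE_PERIOD = timedelta(days=7)  # assumed policy, not the real number

def invoices_to_autocharge(invoices, now=None):
    """Select invoices that are overdue and were neither approved nor disputed.

    Each invoice is assumed to be a dict with 'status' and 'sent_at' keys.
    """
    now = now or datetime.now(timezone.utc)
    return [
        inv for inv in invoices
        if inv["status"] not in ("approved", "disputed")
        and now - inv["sent_at"] > GRACE_PERIOD
    ]
```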
1. The user was filling out a form that was sent to me.
2. I would email hotels asking if they had suites available on the user's dates.
3. I would enter their responses manually and the app would send a push notification to the user to see what I had entered.
4. Payment and everything else was done over email.
It looked legit, and during its peak I was doing it up to a hundred times a day. We are directly connected to hotels now so this doesn't happen anymore thankfully.
> We are directly connected to hotels now
Once you implemented the backend connections, was the quote of "tens of thousands of dollars" accurate? Was that your development cost or some access fee required by the hotel (or their booking system provider)?
In terms of scalability, my takeaway has been this: in every company I've worked since, I've fought tooth and nail to make sure that the engineers are the ones handling the level 2 support and up.
Support is invariably happy because they only need to triage and process the really dumb questions. They offload the time consuming stuff and end up cutting their budget needs.
The engineers, on the other hand, invariably get extremely annoyed by the deluge of UX and/or recurring questions that swamp them. Oftentimes it's to the point where they spend so much time on them that they've no time to build new features anymore. And then, magically, they aggressively prioritize and fix the more time consuming/support intensive issues to get them off of their hands.
Everyone is better off. Clients are happier; support is happier; marketing and sales are happier; finance is happier; and so are the engineers. The latter are less happy initially, but they rapidly get a better feel of (and end up appreciating) how their work impacts end-users.
It's structuring the entire company around the selection bias of people who complain about your product, and not focusing on the hypothetical majority of users who never have issues with it, who may outnumber the group of complainers by 10x or 100x.
At that point the right thing to do is to pay way less attention to things that represent a support load, and iterate on already working features used by the 10x or 100x.
Sometimes it's the best thing to do, sometimes it's a horrible idea. It's not some no-brainer that software development in general should be structured like this.
I've yet to join a company that had a marketing department on top of things enough to pinpoint what the hypothetical majority of users who never had issues actually want.
(I'm sure there are some out there, but I've yet to find one that will hire me. And for clarity, I usually join companies as part of the marketing team. After a few weeks of getting a feel for what end-users actually want, fixing the existing product niggles turns out, for one reason or another - maybe it's just the type of companies I work with - to be one of the lowest-hanging fruits from an ROI standpoint.)
This is easiest with hosted SaaS. It's why companies like Google care so little about user feedback. They can e.g. do an A/B test on their UI and find from their data that moving some button makes it easier to find (fewer false clicks), no matter how much the minority of users who contact them swear up and down that the new UI sucks.
With self-hosted or desktop software this is harder. You might have some phone-home feature that sends aggregate statistics, or simply pro-actively contact a random segment of your userbase. "Hey, you use our product. Does it mostly work for you, or are you being hindered by issues? If so which?".
That's still a huge selection bias (people who care enough to respond to a survey), but still beats the even bigger bias of people filing bug reports.
And I'm saying this as exactly the sort of person who'd be annoyed enough to file a very detailed bug report against some piece of software I use.
Direct user feedback shakes you out of that dead-end, and towards a newer, much better dead-end on the path to success. Maybe the button is solving the wrong problem, but users latch onto it as the first thing that comes to mind. You can't find that out unless you interrogate your customers, and most engineers wouldn't do it unless their back is against the wall. Happened to me last week - spent an hour on the phone+hangouts with a customer, and it turned out they needed something other than what they asked (and we didn't have). Happens all the time.
[BIG-CO] employees are just talking to each other, and then A/B testing their way to success along the nearest promotion-worthy metric. The courage, or the necessity, to overcome the aversion of the users is a competitive advantage for a small company.
EDIT: want to add that I am enjoying our conversation here, thanks for joining.
Then they kept working on it. With every version, they bolted on some extraneous garbage that made the rest of the application worse and worse. MP3s were popular, so they added searching FTP sites for MP3s... something that I find hard to imagine any human being ever did or wanted to do at any point. I eventually just had to stop using it, they made it far too aggravating. And they never even fixed the bug (after a disconnect, it would use the random port previously used for a data connection to try to establish a command connection and if the reconnect had been quick enough, the server would still be spewing binary data into what the client expected to be a plaintext connection, resulting in random garbage).
"Thank you for your suggestion. This is a common request" and then "but we're not planning to build it for [some reason]" or "we're considering it for the future". Depending on if you want articles about how dumb your reasons are, or articles about how you promised you were working on it three years ago, and it never happened.
I agree some people are born complainers, and you should not implement things mentioned by just one user or one especially noisy user. I also agree that support requests shouldn't be the only way one understands users. But I suspect there are very few improvements valuable to people who don't contact support that would never be mentioned by someone who does.
To me the people who contact support with suggestions aren't just the most irritating fraction of your user base. They're also the most dedicated users. The ones who love your product the most. The ones who are doing something you didn't expect because they are an early adopter in a different market. The new users that you'd like to have more of but won't until your product is easier to learn.
There are other ways to accomplish it, but the basic idea is to never let product development get too far removed from the actual customers. Large companies can get away with not listening to the customer (for a while at least), smaller ones not so much.
I once spent a couple of weeks working out how to give helpful error messages in the yacc parser of an open source project, for issues that arose repetitively, annoying the maintainers... but it was ignored. They are still getting annoyed by those issues to this day. And I'm still annoyed about it.
Also it assumes that the engineers don't find a better way to deal with the support questions, such as canned "answers".
It doesn't. If the product manager is prioritizing things, he or she invariably gets backlash from the engineers when they put forward that they're swamped and can't deliver unless they deal with X, Y, Z first.
> and can't just get the list of frequent issues from support
Which are then, all too often in my experience, ignored or downplayed by whoever prioritizes tasks over at engineering.
When the engineers have skin in the support game, they pay extra attention to what may or may not lighten their "not very fun" workload.
If you have engineers doing level-2 support and you have product managers saying "Do this other thing", then the engineer now repeatedly has to resolve a conflict among the possibilities of:
A) Check email and slack to see if something new has come in.
B) Check your error reporting/monitoring to see if Sentry has reported any new exceptions/errors from existing services.
C) Issue #1 from a customer
D) Issue #2 from a different sort of customer who has more revenue impact
E) Issue #3 which was actually bubbled up from someone in sales.
F) Non-urgent but important Project X from the product manager which you are actually 30% of the way finished with and consequently, it is the thing that is sitting in your brain.
G) Check hackernews as a salve to the stress of a bunch of ambiguously-prioritized tasks.
Accomplishing engineering tasks requires allocating your brain's working memory and directing your focus on ONE thing at a time. The harder that is, the harder it is to do anything but option G.
If you put engineers in this situation but don't give them the training and structure to arbitrate decisions about prioritization, then you're setting them up for needless conflict and failure. If you just designate all of your engineers as responsible for level-2 support without any additional structure, then each one needs to tell the product manager: "I'm not going to work on this project, which will bring the company $X in Monthly Recurring Revenue, for another week; I'm going to deal with these support requests instead. I am in charge of prioritizing what gets worked on. Your job as a product manager is not to decide priorities."
Which is a silly thing to tell a product manager. There should be an existing consensus around how this sort of thing is handled and prioritized. If your product managers aren't allocating time for fixing frequently-encountered bugs or UX problems, then the solution isn't to set them up for a conflict with engineers. The solution is to have someone exercise leadership over the product managers, so that there is alignment on the importance of getting bugs fixed within quarterly goals, and so that when someone (PM or engineer, whoever is triaging level-2 support queries) exercises judgment about the priority of a bug relative to other work, others broadly agree with that decision.
You need to have some structure that accomplishes this.
It falls apart when support doesn't train newcomers in what engineering needs, or when support just "throws things over the wall" to engineering.
I once worked with a very bad "director" of support who couldn't figure out that a customer was sharing something to the wrong email address. She did absolutely no diagnostics. When I called her on it, and explained that she couldn't throw everything over the wall to me, she quit the next day. Her replacement was much, much better.
> It falls apart when support doesn't train newcomers in what engineering needs
It requires as low a skill level from support as you can get. Some rudimentary diagnostic skills and an understanding of the product itself are needed; from there, if it's not in the FAQ, let engineers handle it in full.
One of three things then happens - in my experience, anyways:
* The engineers prioritize fixing the stuff they don't want to repeatedly deal with.
* The engineers populate a FAQ for the stuff when it's low impact enough that they're OK with support copy/pasting the FAQ item until they address it - if ever. Which can lead to further questions that get fixed or expand the FAQ, but that's fine too.
* (In rarer cases) some of the engineers quit because they think support is beneath their pay grade.
Management has to enforce that tickets aren't thrown over the wall. I can't count the number of times that your approach is misinterpreted as, "I don't need to know much about the product or try to understand the customer's problem and communicate it correctly."
Anyway, without support that takes pride in, well, support, it leaves customers very unsatisfied.
I wanted to grow a service business, and everyone will tell you that service businesses don't scale for any number of reasons. I grew my consulting business by identifying stuff my customers really needed, that was teachable, and that had great margins. Once my customers saw the value, I switched from hand-to-mouth survival waiting for invoices to get paid to working for upfront payments. With cash in hand, I could hire people to do the work as fast as I got the business. Maybe my business wasn't infinitely scalable, but it grew way bigger in a short period of time than anyone expected :)
The owner might elect to keep money in the business account instead of transferring it to a personal one for tax reasons. If you are fully in control of the business there isn't much difference. For example you can buy a house with company money and lease it to yourself for $100/month. This would not register as profit for you or the business.
- Pick something you really wanna do
- Sell something with big fat margins
- Don't run out of cash
- Go where the customers are
- Don't outsource your critical path to anyone ever
- Figure out what customers want to spend money on but can't
- Figure out how to get customers to pay you in advance for work you haven't done yet
I'm not saying this to be negative, as this will actually work to validate product-market-fit before investing in the AI development.
Most of these companies will fail though, as their pricing will be based on a 100% AI solution, which is rarely achievable.
(Has anyone done this particular sci-fi plot?)
Didn't pursue the idea further because I didn't really believe in it, but it just proved to me that investors often chase buzzwords instead of ideas
However, I haven't read any good journalism actually showing that this is the case.
(yes, we're a long way from there today but fixing that is what I'm working on)
Heck, even after we learn to talk and read, we still don't learn to write by having it explained, and only vaguely learn by demonstration. We mostly just learn by generating outputs and having them evaluated by the other NI (sort of like how AlphaGo Zero works.)
AI can get you 80%, 90%, 95%, and 99% of the way to solving a problem. When you first start out, a human element will be doing most of the heavy lifting and creating training data for you. As you iterate on the model and improve it, the human intervention becomes less and less time consuming. As you near 100% accuracy, you will probably still want a human element acting as QA, because all models have some rate of error.
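One common shape for that arrangement is a confidence gate: auto-accept predictions above a threshold, route the rest to a person, and keep the human answers as fresh training data. A minimal sketch (the threshold and interfaces are illustrative, not any particular product's design):

```python
class HumanInTheLoop:
    """Gate model predictions by confidence; low-confidence items go to people.

    Reviewed answers are collected as fresh training data, so the share of
    human work shrinks as the model improves.
    """
    def __init__(self, model, threshold=0.95):
        self.model = model            # callable: x -> (label, confidence)
        self.threshold = threshold
        self.training_data = []       # (input, human_label) pairs for retraining

    def predict(self, x):
        label, confidence = self.model(x)
        if confidence >= self.threshold:
            return label
        human_label = self.ask_human(x)  # blocking review step in this sketch
        self.training_data.append((x, human_label))
        return human_label

    def ask_human(self, x):
        raise NotImplementedError("wire up your review queue here")
```

In practice `ask_human` would enqueue the item for an internal review tool rather than block, but the flow is the same.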
"Let's send emails, teach [users] professional photography, and test them." "We said, 'Screw that.'" Instead, they rented a $5,000 camera and went door to door, taking professional pictures of as many New York listings as possible.
I don't even see why it's considered a hack.
People are visual - we make decisions based on how things look. Their product is online, so the pictures are basically everything. The snaps are 90% of what's being presented, so making sure those are good would be worth half of their time overall. The technology part of Airbnb at smaller scale is just not rocket science; it's almost a commodity.
Google is a good counterexample. Google in its early days was famously not visual - the visual design was incredibly sparse, there were no pictures on the results, there were no pictures on the ads - and that was a huge contrast to most other sites, where you had animated punch-the-monkey graphics scrolling across the screen. The reason for this is that the primary utility for Google (in the early days) was to get you off the site and to your destination as quickly as possible. Anything that drew your attention - other than the result you wanted - was a distraction. Google's made attempts to put pictures on the results pages since - we had plenty of data showing that peoples' attention is instantly drawn to images, and when your department's name is "Search Features" it's awfully tempting to draw attention to the features you're developing - but every such attempt seems to cost Google in brand equity in the long run, and ends up getting reverted eventually.
The key insight for AirBnB was that they are selling an experience, not information. When you're deciding where to book a hotel, you want a visceral sense of how it would feel to be on vacation at that place, and a picture is the best way to achieve that.
It seems straightforward, but the alternative would have been to establish a network of local photographers with contracts and agreed rates, scheduling software, performance management, QA, etc...
That would be more scalable, so if you just do it yourself instead of going for scale, it's a "hack".
Here's an example answer: https://stackoverflow.com/questions/409999/getting-the-locat...
Long story short, instead of making documentation, I fixed everything that my clients would ask me how it worked. This has limitations, but it saved me tons of time on support, training and documentation that would later change anyway. And my product got consistently easier to use, and dropped training requirements to near zero.
It's amazing how often I can re-train someone on the exact same thing multiple times and have it not stick. All it takes is something you do rarely (couple times a year or less) and retraining/support is needed.
The mental frustration and training time/support costs is a big motivator to fix the underlying problem.
I think this fundamental perspective may be harder to implement the larger your team/business gets, though.
The short version is that we were looking at building a complex social experience that had Facebook newsfeed items as a key component. We used GreaseMonkey to fake it entirely: user testers would log in to Facebook on our tricked-out computer, a GreaseMonkey script would steal faces and names from below the fold, and then insert fake news items into their feed that were apparently from their real friends. (At the end of the user test we'd come clean so they weren't confused.)
What we learned is that people hated the idea. So that meant I never had to write a line of production-grade code for this. Some of our competitors had very similar ideas and ended up building full products to learn the same lesson. What they built was way more scalable, of course. But that's not a virtue in a code base that just gets thrown out.
Fortunately for us, it was DAP (planting fertilizer) and CAN (topdress fertilizer). The nice thing about Kenyan agriculture is that most farmers, if they have a cow, already are making sure to use their manure. The bad news is there's not nearly enough cows making nearly enough poop in Kenya to provide the nitrogen the crops need. We're literally shit out of luck... :)
We built a network of agrodealers that we use to check people out. It's scaling pretty effectively now.
It's really just a demo of platform which creates a knowledge graph in any network: https://metacortex.me/
I technically can't say a lot of things, since I'm not a registered financial advisor. However, what I can do is post my stories and predictions (which I used to do regularly on Reddit).
I managed to get around 100 highly engaged users (many lesser engaged) to help me develop the system. I did this by offering 1 month free every time they provide feedback. I then manually wrote a response and implemented a fix for each suggestion (plus provided one month free).
The users that didn't provide feedback ended up paying for the service. Everyone else helped find bugs in the platform, which helped a ton.
And of course, the most well known hack was Steve and Alexis submitting most of the content for the first few months until there were enough people to keep the front page fresh.
We did other stuff too, like office tours for anyone who asked (and showed up in SF), giving free reddit gold to anyone who sent a postcard (which we had to process by hand), sending merch to people who did good things.
To wit, they submitted most of the content -and comments- under a variety of pseudonyms to give the illusion of mass-adoption
That's not correct. There were no comments on reddit for a long time. By the time we launched comments, there were plenty of users to make them.
I still do all of the customer/technical support. To be honest, it is a bit overwhelming and starting to take away from other tasks, so I likely won't do 100% of it for much longer. However, it has really helped us home in on our customer needs and sell them on new features. I don't regret it one bit!
p.s. we are searching for a CTO to help us build out a team (large equity stake + modest salary). Email me hank (at) ordermetrics (dot) io if interested.
"About Order Metrics
OrderMetrics.io is built and maintained by ex-ecommerce professionals who were dissastisfied by the tools ..."
I read this as you are "ex-professionals" which probably isn't true ;) My 2 cents, change it to just be:
"OrderMetrics.io is built and maintained by ecommerce professionals who were dissatisfied by the tools ..."
Eventually we automated out a whole suite of fulfillment systems that have scaled into 6 figures worth of orders.
It's amazing what you can do with clear inputs & outputs and the illusion of a black box until you can actually build your black box.
Most users didn't use the product at as large a scale as I did, so they were happy. The users who tried to use the product at a higher scale were so few that we just accepted we couldn't both meet their needs and grow our business.
So, to generalize:
1: Growing your business is more important than scaling for 100% of your users. Scale for 98% of your users and invest in use cases that will grow your business.
2: An early stage startup doesn't need to scale like Google. If your business takes off like that, you'll be able to hire enough smart people to fix the problem. Instead, spend time growing your business.
If you build things and still have to put in manual effort because it saves time - fine. But don't just get lazy and stop thinking about the consequences of your decisions; it will bite you sooner or later.
But one needs to know/learn this first; otherwise there wouldn't be any innovation.
"Recruit new users by manually installing your s/w on their computers" - @patrickc. Link: http://paulgraham.com/ds.html
TBH, even if I had been somewhat interested in the app, I would have been inclined to uninstall, just because I don't approve of this tactic.
That's the total opposite of what happened to you. In fairness, GP could have been more explicit when describing the approach.
That one deal tripled our YTD revenue, and opened my eyes to the fact that my sales process was far more complicated than it actually needed to be.
What could possibly be unethical about that?!
I'd heard that one technique another startup used was to go to all the local Apple Stores and set the home page of Safari on each computer to their website. That seems borderline unethical, but that's not what Stripe was doing, nor would it really work for their market.
1. We used Google Sheets as pseudo database.
2. We stored Services, Channels, Locals, & Plans.
3. We wrote a basic Python script that exported the data in structured JSON files to be used by our CMS.
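That export step is small enough to sketch in full; assuming each sheet tab is downloaded as CSV with a header row, something like this turns it into JSON the CMS can read (the paths and layout are illustrative):

```python
import csv
import json

def sheet_csv_to_json(csv_path, json_path):
    """Convert one exported sheet tab (CSV, first row = headers)
    into a JSON array of row objects for the CMS to consume."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    with open(json_path, "w") as f:
        json.dump(rows, f, indent=2)
    return len(rows)
```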
Did your MVP using Google Sheet impact or influence the design of future versions of your backend?
This was fine even as we approached >100K monthly uniques.
We moved more of the data to our CMS as traffic grew, for ease of use, but we could have used this method basically indefinitely.
2. Channel many normally automated tasks through high touch customer service - encourage users and their guests to email us about anything and everything. We've learned tremendously from user feedback and by responding often within an hour, we've earned a lot user loyalty. One example was a grandmother who wanted to buy a gift for her grandchild, but the linked item was out of stock. She called the DreamList customer number straight to the founder. We found a different store for her to buy from. Now we have a product graph at different major stores and more leverage as a business.
3. Watch for user culture as well as team culture. A small set of users may find your site tremendously useful, and another small set may find it useful for malicious purposes (such as spamming people to sign up for some scam using your invitation forms, for example). We track all errors, log email bounces, etc, and wait a while before opening features that can be abused (including payments). Max Levchin has a story about how abuse and the 90 day payment refund process almost sank PayPal and they had millions in the bank.
4. Don't assume anything is a "best" approach. Try out counterintuitive measures and you will find ways to beat the competition far more cheaply. Examples: using Go when competitors use Rails, so you need far fewer servers to handle the same number of users with a faster experience. We also invented several new words to test how pages do on SEO with static content, vs. JS-rendered with inline JSON, vs. JS-rendered with external JSON. We then leave the test going and re-check our assumptions periodically. I'd underline the last sentence for companies using JS frameworks - some slow you down more than they help.
5. Any new feature is launched and customer-serviced manually until we know we are serving those users in the best way possible. We went through a bunch of different wordings for every single email in our usual workflow, and little things, like an extra sentence that shows your humanity, made a huge difference.
There is so much more you can do, but the gist is there are dozens of approaches to every feature and problem, and it's worth considering more than one or two used by competitors. It doesn't take much to delight users and build a better product these days.
It was never going to scale. We charged our customers less than we were paying for the phone line, let alone the ISP service. If the ISPs discovered we were consuming their lines and not paying enough for them, we would have been thrown off.
Still, it was enough to get the company to a sale. All they needed to do was hang on until the tech caught up with them. Today, you could build the same product in a weekend...
OTOH, if this is true, maybe it did scale, in terms of performance at a certain architectural cost, and maybe more importantly in terms of their ability to hire as a very young (in every sense) startup.
2004: PHP is quick, let's use that.
2006/7: PHP is slow, let's build a compiler for it (HipHop)
2010/11: Still slow, let's build a VM and compile to assembly (HHVM)
2012: types are cool, let's build an OCaml parser to pretend they exist in PHP!
I kinda view it as Zuckerberg profoundly agreed with the never rewrite code theory.
The main sheet contained information about each customer and their subscription, e.g. pets, chosen recipes and subscription frequencies.
One sheet would dynamically populate what and how much we needed to cook for a given week by looking at active customers and their subscriptions. This was broken down on a per-ingredient basis. We'd then place POs, cook and pack the food.
Another sheet would generate orders for the given week based on similar logic. We then used Google Spreadsheets scripting support to generate packing slips, customised feeding guides and food labels. Each asset type had its own Google Docs template. The script filled in the blanks and created a new folder. The resulting files were then exported as PDFs and passed to our fulfillment.
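The template-filling step above can be sketched as follows. This is in Python rather than Apps Script, and the slip template and order fields are invented for illustration; the real flow copied a Google Docs template per asset type and exported the result as a PDF:

```python
# One template per asset type (packing slip, feeding guide, food label);
# "blanks" in the template are filled from each week's order data.
SLIP_TEMPLATE = (
    "Packing slip for {customer}\n"
    "Pet: {pet}\n"
    "Recipe: {recipe} x {meals} meals\n"
)

def render_slips(orders):
    # One rendered document per order; in the real workflow each one
    # was exported as a PDF into a new folder and handed to fulfillment.
    return [SLIP_TEMPLATE.format(**order) for order in orders]

orders = [
    {"customer": "A. Smith", "pet": "Rex", "recipe": "Chicken", "meals": 14},
    {"customer": "B. Jones", "pet": "Milo", "recipe": "Beef", "meals": 7},
]
slips = render_slips(orders)
```

The point is that the "app" is just a template plus a loop over a spreadsheet; that's often enough to run real fulfillment for months.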
Oh, and we did our own deliveries in NYC too in the early days.
We've since built out our own ecommerce platform from the ground up and we continue to automate fulfillment, customisation and operational processes. Sounds exciting? We're hiring.
Does the position require dog-fooding the product?
Given the year, it used tapes to play the movies. The user would dial in with their phone, enter a code for the movie they'd want to watch, a motorized catalog system would fetch the tape, insert it into a player and then the user could control the tape player with their phone.
Well, it was a couple days before launch and the catalog system was still not working. They ended up launching with the system printing a receipt, then a human would fetch the tape and insert it into a machine. This went on for several weeks before the automated catalog system was working.
In hindsight, they realized it was actually easier to scale the humans fetching tapes than the machines, and the humans were even faster at busy times. System subscribers didn't even know humans were used in the beginning, and it launched without a hitch as far as users were concerned. So, like many stories in this thread, sometimes it's easier to bootstrap with manual labor, and it can scale later with machines doing the right parts of the process.
The solutions by themselves don't illuminate unless you also illustrate what they were trying to achieve and how they got around the limitation.
After the first few hires, their engineers all took part in interviewing candidates. That has deferred the scaling problem quite well.
Furthermore, my current sales process is having people enter their email address on my outdated landing page and then reaching out to them personally to see if they are a good fit. This worked great early on since too many users signing up at the same time can take a massive toll on my servers due to the initial import load (some users have TONS of listings in their stores).
I'm now working on a proper marketing page and introducing a scalable way to have users sign up themselves.
The Mayo story seems like something else. While you might learn something by buying the product in retail once or twice, buying in bulk to simulate demand isn't an acceleration to learning something. It's this thought that if only people could find my product, or have an easy way to buy it, it will sell. But once you stop buying, the effect goes away. It reminds me more of a dilemma that larger companies talk about: juicing this quarter's earnings at the expense of next year's.
Also, the mayo-jar-eating-itself gif in this article is brilliant.
As a teen at my first job, we would prime our tip-jar with a dollar bill from the register in order to jump start tips.
You'd be amazed at how well that worked.
We've got good chops in both biotech/life sciences and software development ..and the default urge is to build a fully automated system and tools.
However, what we're doing instead is focusing on customer development at the front end - the "customer onboarding" process. This involves a bunch of human interaction to understand exactly how customers are approaching their data and experimental processes - it feels a lot like consulting on the front end and definitely doesn't "scale".
Over time, we'll start creating very specific tools that help a) make onboarding quicker and easier for the customer and b) reduce the onboarding burden on us - the key is to only do this when we have clear understanding of the required features and the ROI for developing them.
Current customer feedback is very interesting; it ranges from "analytics still requires a lot of high-touch expert consulting" to "feature _______ is a simple feature that customers need right now ..make it, and that will lead to other feature insights and customer use."
...would love to hear any advice / thoughts ..we're in the struggle :)
10 years ago I worked for a solar monitoring startup called Energy Recommerce. This is one of the most satisfying jobs I’ve had in my 20 year career. I did all of the embedded programming to implement our data logger: the provisioning UI, the drivers to collect data from devices, the database, and the mechanism to deliver data to our backend. We collected performance data from residential and commercial solar systems, and reported production data to state governments so the system owners could collect production rebates. We helped commercial solar companies manage their systems, including some big-name companies. The engineering team was literally me and one of the founders.
There were a lot of hacks and short cuts, but I will tell the story of one particular design decision that carried us for years, but was ultimately replaced.
We made the decision to define classes of devices (power meters, inverters, string monitors, weather stations) with a fixed set of datapoints, regardless of what each individual device could do. We then collected the data provided by the device and filled in the object. Sometimes a datapoint in our object was not available and we stored a NULL value. We scaled values to the units we defined in our objects (e.g., kW to W). More often the devices provided more data than would fit in our object, and some customers started asking for these datapoints. This meant we had to define extended versions of some of our objects: meter_ext, inverter_ext, etc. Now we had two sets of objects, but the basic model still worked. We also sometimes had data collection bugs: collecting the wrong datapoint, or scaling a value incorrectly. This meant the data was invalid and we had to update the software on the data logger to correct the issue. Still, our team was small and we moved fast.
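A minimal sketch of that fixed-datapoint model, with invented field names and an illustrative schema (not the actual one):

```python
# Every meter, whatever the hardware reports, is normalized into this
# fixed set of datapoints; anything the device doesn't provide is None
# (stored as NULL), and units are scaled to a single convention.
METER_FIELDS = ("energy_wh", "power_w", "voltage_v")

def normalize_meter(raw):
    obj = {field: None for field in METER_FIELDS}
    if "power_kw" in raw:
        obj["power_w"] = raw["power_kw"] * 1000  # scale kW -> W
    if "voltage" in raw:
        obj["voltage_v"] = raw["voltage"]
    # Note: anything else the device reports is simply dropped here -
    # which is exactly the limitation that forced the *_ext objects.
    return obj

meter = normalize_meter({"power_kw": 2.5, "irradiance": 850})
```

The trade-off is visible in the last comment: normalizing at collection time keeps the backend simple, but discards raw data you can never get back.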
Our biggest competitor at the time was a company called Fat Spaniel. We were always a bit baffled by the size of Fat Spaniel - our team was 5 people, including business development and software dev. Fat Spaniel had, in comparison, lots of funding and a much bigger team, but their customer base was not much larger. What were they doing?
Eventually an inverter company called Power One acquired both us and Fat Spaniel, and combined us into a single organization in San Jose. This turned out to be great. We combined into a single, really effective team, and I had a chance to peek under the hood of what Fat Spaniel had built.
They had made a fundamentally different design decision: they collected all the data provided by every device and delivered it to the backend. Once on the backend there was a complex xml-based system that defined what the data meant and put it into the same sorts of objects we had. If they found a bug in the way a datapoint was assigned, or the way it was scaled, they could fix the definition and then re-play all the data they had collected. Because they had the raw data from the device they could always go back and add a datapoint or fix other issues in the historical data.
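The "store raw, interpret later" approach can be sketched like this. The mapping format here is invented (theirs was XML-based), and the field names are illustrative:

```python
# Append-only log of exactly what each device sent, uninterpreted.
raw_log = []

def collect(device_id, payload):
    raw_log.append({"device": device_id, "payload": dict(payload)})

def replay(mapping):
    """Re-interpret every raw record using the current definitions.

    mapping: raw_key -> (canonical_field, scale_factor).
    Because the raw data is kept, fixing a bad definition and
    replaying repairs the historical data too.
    """
    out = []
    for rec in raw_log:
        obj = {}
        for raw_key, (field, scale) in mapping.items():
            if raw_key in rec["payload"]:
                obj[field] = rec["payload"][raw_key] * scale
        out.append(obj)
    return out

collect("inv-1", {"pwr": 2.5})
# Suppose the first mapping forgot the kW -> W scaling; just fix
# the definition and replay - no firmware update, no lost history.
fixed = replay({"pwr": ("power_w", 1000)})
```

Contrast with the fixed-object approach: there, a scaling bug meant invalid stored data and a data-logger software update; here it's a one-line definition change plus a replay.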
This system was fundamentally more complex to develop, and required a bigger team, but it was grounded in good principles. It became the system we based our combined operation on. In fact, what we did was adapt the hardware and firmware that we had developed at Energy Recommerce to use the backend that Fat Spaniel had developed.
The system Fat Spaniel developed was technically better, and resulted in less technical debt. However, it was considerably more complex and took more money to develop. This meant their burn rate was much higher and they moved more slowly, so we could compete head-on with them using a smaller team and less money.
[edited to fix cut&paste error]
The feedback covers everything from logo, font, color, size, call to action, and images to every other component on the website. Alongside the feedback, we also provide constructive steps that founders can take to make their product better. All of this is completely free.
The non-scalability part:
The time taken to evaluate a website is proportional to the quality of the feedback we deliver. If we invest less time, the feedback loses value. Keeping it up to the mark requires proper evaluation and time investment, which can amount to 5-7% of a designer's day for each review. When the number of feedback requests shoots up, only a limited number can be handled each day.
How we benefit from it:
In this complete process, we learn a lot about common errors, optimal UI/UX, trends, ideas and much more. We learn about the little innovative things founders do that stand out on their websites. Other things we gain are network, prospects & karma.
Although the product is still in beta, we have already provided 100+ pieces of feedback to founders on their landing page UI/UX. Considering the amount of insight we gain, we could write a complete ebook on it. That may turn out to be the next not-scalable thing we do for scaling.
To give you the most recent example, I collaborated with a small store owner and helped him increase his conversion rates. This put him on track to hitting a very important milestone of selling $1000 worth of product each month. Doesn't sound like a lot, but his margins are insanely good.
This is not scalable at all, and costs me a good amount of money to do. But the insights and connections are more than worth it. To give you an idea, I used to have a business doing these very same things and charged good money for it.
If you sell anything online, get in touch. I'd love to collaborate with you.
With https://pdfshift.io, I'm in the same situation. As a tool to convert HTML to PDF, I get many requests about documents not rendering well as PDFs. I take the time to help my users tweak the CSS to get the perfect result.
This takes a lot of time and doesn't always convert them, but those who do convert are less likely to churn, because they know they can count on me.
I felt like it was the best way to get continuous feedback from customers and dog food our product.
It has really helped us improve our product but is now going from something that "doesn't scale" to something that "prevents scaling". I hope to hire my replacement pretty soon!
so yep, prototypes are things that people do that don’t scale.
That includes things that can't scale.
That's the whole point.
Why discriminate against prototypes when we're doing things that don't scale and they're... well... things that don't scale? Making prototypes and testing assumptions with them is 100% in the spirit of the methodology.
Somehow in hardware this seems crazy and there's a million reasons not to actually sell the product (certification, hard to change in the field, it's expensive!), so I think it's done less often.