colmanhumphrey's comments

Haha, not sure I agree with this one at all. I think a name is pretty far down the list of priorities for a business. I would be extremely surprised if "customers who started trials but then didn't use the product at all, but then later wanted to, but couldn't because they couldn't find the name" is a segment that moves the needle.

From this brief note, you sound like you're potentially trialling an order of magnitude more services than the average person. I bet that gives you a ton of interesting perspective on all these products, sign-up flows, etc., but it probably puts you in a very unusual position with respect to product names!


This rings true, and I guess further backs up how important it is to carefully choose your initial customers, so the product is actually good enough by the time it gets to the broader swathe of companies. At least, that's my guess — I'm the one that failed after all!

I am surprised to hear you say this given your previous experience founding a product that from a distance might look to cover some similar space to retool, but in practice is quite different. My perspective: only investors ever asked us about retool, not customers. Some of them even used retool too. Hah, even just typing this out brings me back to investor meetings!

A very generous comment! Very interesting that you say that: I believe we did at some point have some thoughts about targeting B2B2C, but sadly that was one of the areas we failed to fully learn about, experiment with, and iterate on.

Oh, that's an interesting perspective. I would say you're right, and maybe I didn't phrase it well. I probably should have said something more like: if you don't know how, or if you can't learn how (through iteration etc.), then how can your customers? Or something to that effect.

That sounds like a very viable and valuable type of business, but it's kind of a different category of business model.

Here is a chart going back to 2014: https://slight.run/graphs/colman/ratio_of_seekers_to_hirers_...

Not quite as drastic as yours, but still not ideal.

The underlying query and data source: https://slight.run/apps/colman/job_seekers_vs_job_hiring_hac...

I'm restricting to top-level comments here, because those seem like a better proxy than all comments (excluding some discussion on a given post for example).


If anybody is interested, maybe this should be considered alongside the core inflation rate, which is high in the US (it peaked at 7% within the last 12 months and is currently at 5.5%) and possibly still rising in the EU (where the 12-month peak is the current reading of 5.7%).

If the central banks are going to stick to their declared purpose of keeping this around 2% we're still in for further rate hikes, I would imagine.

* https://tradingeconomics.com/united-states/core-inflation-ra...

* https://tradingeconomics.com/euro-area/core-inflation-rate

I know it's not within the desired behaviour on HN to simply write "thank you!", and I hope I did provide a bit more than just that. That being said: Thank you! Also to ed_balls and madcaptenor for collecting data in the other thread.


Hopefully they are not quite that stupid. Inflation is a lagging indicator, so expect it to start dropping fairly quickly as the trailing 12-month price delta normalises against the M2 injection of two years ago.

M2 continues to drop (this is a bad thing for stability, but will feed into prices dropping over time):

https://fred.stlouisfed.org/series/M2SL


To answer what you were probably going to look for in the link: the peak was March 2020 (1.19), when Covid got real; the bottom was April 2018 (0.067), and I don't know any particular significance of that month.


April 2018 sounds about right for the tech bubble peak - iirc SoftBank was still in full swing, everyone was "blitzscaling", and it was peak exuberance. Not that the music stopped right after that, but by 2019 things were starting to decline and the VC industry was getting slightly more conservative.


The Fed started raising interest rates in 2017[1]. The market didn't really start reacting until 2018, and by 2019 the Fed got cold feet and was lowering rates again.

[1] https://fred.stlouisfed.org/series/DFEDTARU


Is cold-feet a euphemism for "harassed by an orange toad"?


Looks like your chart only goes up to 2022 October? Whereas OP's goes up to today.


It includes the Nov 1 post; you can see the data here: https://slight.run/apps/colman/job_seekers_vs_job_hiring_hac... (Vega renders 2022-11-01 in your local timezone, which is why you're possibly seeing 2022-10-31).

They claim they update it daily (https://console.cloud.google.com/marketplace/details/y-combi...), but it would appear they only update it every few months.

Of course, you can manually get this data from the HN API and create a query built on https://slight.run/apps/colman/job_seekers_vs_job_hiring_hac... by adding the extra few rows manually (but unlike OP, I recommend you exclude non-top-level comments).
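If it helps, here's a rough sketch of pulling a thread's top-level comment count straight from the official HN Firebase API (the function names here are my own, and note a story's "kids" list may also include dead or flagged comments, so treat it as an upper bound):

```python
import json
import urllib.request

HN_ITEM_URL = "https://hacker-news.firebaseio.com/v0/item/{}.json"

def fetch_item(item_id: int) -> dict:
    """Fetch one item (story or comment) from the official HN Firebase API."""
    with urllib.request.urlopen(HN_ITEM_URL.format(item_id)) as resp:
        return json.load(resp)

def top_level_comment_count(story: dict) -> int:
    """Top-level comments are the story's direct children ('kids')."""
    return len(story.get("kids", []))
```

Call it as `top_level_comment_count(fetch_item(some_story_id))` for each month's "Who is hiring?" and "Who wants to be hired?" stories, and you can reproduce the ratio without waiting for the BigQuery dataset to refresh.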


OK, so either way it's not really comparable to OP's, because it doesn't include 2023 data (i.e. all of the current recession).


If instead I had supplied a time series for eight years ending just prior to all of OP's data, you wouldn't discuss "comparing" the charts (I assume!), so it's a little unclear what the concern is with comparison.

If you just need the last few data points, of course you can either just look at OP's chart, or if you don't trust that data, then quickly get the data yourself. Again you can even create an account on slight.run and recreate the full graph if you prefer.

We are also not currently in a recession. If instead you're referring to market conditions that affected tech heavily, that was in place by Q3 2022 (e.g. https://techcrunch.com/2022/12/20/remembering-the-startups-w...), and this also lines up more closely with how the Fed has been raising rates (https://fred.stlouisfed.org/series/DFEDTARU).


I got 71.13647% on a first pass. We want P(all 4 in 9) = 1 - P(not all 4), and we can split that out a few ways. To not get all four, we can restrict ourselves to a particular set of three, which has probability (3/4)^9; there are four such sets, so that's 4 * (3/4)^9. But that counts the singles and pairs too many times. Specifically, each version of "at most three" covers exactly three balls, the three ways of getting exactly one ball, and the three ways of getting exactly two balls ("1 or 2 or 3" = "1&2&3" or "just 1" or "just 2" or "just 3" or "1&2" or "1&3" or "2&3").

- We can then subtract 6 * P(two balls), so 6 * (2/4)^9. Now this counts singles a few times too, in fact it cancels all of them out.

- We then need to add back four singles, so 4 * (1/4)^9

Putting this together gives:

1 - (4 * (3/4)^9 - 6 * (2/4)^9 + 4 * (1/4)^9) = 0.7113647
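The inclusion-exclusion above generalises to any number of prize types. A small sketch of my own (not from the thread) that reproduces the 0.7113647 figure:

```python
from math import comb

def prob_all_collected(n_types: int, n_draws: int) -> float:
    """P(every one of n_types equally likely prizes appears at least once
    in n_draws), via inclusion-exclusion over which prizes are missing."""
    p_missing_some = sum(
        (-1) ** (k + 1) * comb(n_types, k) * ((n_types - k) / n_types) ** n_draws
        for k in range(1, n_types)
    )
    return 1 - p_missing_some

print(prob_all_collected(4, 9))  # ≈ 0.7113647
```

The alternating sign handles exactly the over-counting described above: subtracting the pairs cancels the singles, which then get added back.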


This took me way too long to understand. :)

But I get it now. If the four prizes are ABCD, then if you calculate your chance of only getting two balls in five purchases, you can do it by calculating your chances to get "A or B" five times in a row. But those include the AAAAA and BBBBB scenarios, which aren't two balls.

Repeat that for AC, AD, BC, BD, and CD.



Don't know if you're still reading this thread, but one thing I'm stumped on: if I plug in 8.333 instead of 9, I get a probability of around 65%.

If I plug in 7, I still get a probability of above 50%, which seems to conflict with the 8.333 answer calculated upstream.

Wouldn't a positive EV imply that a 50% probability is the break-even?
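An aside of my own (not from the thread): there's no contradiction here. 8.333 (= 25/3) is the expected number of draws to collect all four, but the distribution of draws-to-complete is right-skewed, so its median sits below its mean - more than half of runs finish within 7 draws even though the average run takes over 8. A quick Monte Carlo sketch illustrates:

```python
import random

def simulate(trials: int = 100_000, seed: int = 1) -> tuple[float, float]:
    """Simulate collecting all 4 equally likely prizes; return the mean
    number of draws needed and the fraction of runs done within 7 draws."""
    rng = random.Random(seed)
    total_draws = 0
    done_by_7 = 0
    for _ in range(trials):
        seen: set[int] = set()
        draws = 0
        while len(seen) < 4:
            seen.add(rng.randrange(4))  # draw one of the 4 prizes uniformly
            draws += 1
        total_draws += draws
        if draws <= 7:
            done_by_7 += 1
    return total_draws / trials, done_by_7 / trials
```

With 100k trials this lands near a mean of 8.33 draws and roughly 51% of runs done within 7, so a break-even EV and a >50% success probability at 7 draws can coexist.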


The reply button is back so I can thank you publicly!


I'm surprised to read this — while the examples could be more cleanly presented, I felt like the title for this post and the initial gif on landing show what this does reasonably clearly. I do see what you mean about the PRO part.


Can’t begrudge anyone involved, but this feels kind of lame. I thought Figma really could compete long term with Adobe.


This is why I'm so bummed by it.

