
I love this post. This sort of thing happens to everyone; most people just aren't willing to be so open about it.

I was once sshed into the production server, cleaning up some old files that had been created by an errant script, one of which was a file named '~'. So, to clean it up, I typed `rm -rf ~`.


Somewhat similar story from many years ago. Was in ~/somedirectory, wanted to clear the contents, ran `rm -rf *`. Turns out somewhere in between I had done a `cd ..`, but I thought I was still in the child directory. Fastest Ctrl+C ever once I saw some permission errors, but most of the home directory was wiped in that second or two.

Didn't have a backup of it unfortunately, though thankfully there wasn't anything too critical in there. Mostly just lost a bunch of utility scripts and dotfiles. I feel like it's beneficial in the long run for everyone to make a mistake like this once early on in their career.


It would be beneficial for `rm` to have a way to make `trash` mode the default mode of operation. The Unix shell is prone to errors like this beyond all reason. And nobody would die typing:

  fs.delete(shell.expand('*'), recursive:yes)
  fs.undo()
or something like that, with completion helpers. Our instruments usually lack any safety in general, and making them safe is hard, especially when you're just a worker and not a safety expert. Everyone today benefits from mistake-friendly software, except for developers, who constantly walk through UI minefields.
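
A minimal sketch of what "trash by default" could look like, in Python (the ~/.trash location and safe_delete name are just made up for illustration; tools like trash-cli already do this properly):

  import shutil, time
  from pathlib import Path

  TRASH = Path.home() / ".trash"

  def safe_delete(*paths):
      """Move paths into a timestamped trash folder instead of unlinking them."""
      dest = TRASH / time.strftime("%Y%m%d-%H%M%S")
      dest.mkdir(parents=True, exist_ok=True)
      for p in paths:
          shutil.move(str(p), str(dest / Path(p).name))
      return dest  # an "undo" just moves things back from here

Aliasing the destructive command to something like this turns most of these disasters into a quick restore instead of a data-recovery exercise.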


I think the link should be here: https://www.ezequiel.tech/2019/01/75k-google-cloud-platform-...

And the date should be (2019)


It was first posted at this URL in 2018-02:

https://web.archive.org/web/20180215070105/https://sites.goo...


I wonder why the body needs to move to stay warm. Seems like a waste of energy. If I were to "design" the body, I'd make automatic heat generation one of my MVP features.


My favorite hidden feature in Firefox is having multiple profiles[0],[1]. Unlike containers, profiles have separate bookmarks, extensions, etc. I have found profiles even more useful than containers for development.

[0] about:profiles (paste into URL bar in Firefox)

[1] https://support.mozilla.org/en-US/kb/profile-manager-create-...


This is a feature of Chrome as well, although it isn't very hidden. https://support.google.com/chrome/answer/2364824


Somewhat perpendicular, but intermittent fasting has been a great recent lifehack for me. I skip breakfast every day, and sometimes skip lunch. Gives me back at least an hour every day. And gorging for dinner is a lot of fun.


Skipping lunches is also a good method of spending less time at work and more time outside enjoying life. I only take 15-minute noon breaks now.


I eat lunch and it still only takes 15 minutes :D


Either way, that's fast ;)


I skipped breakfast for years, until I realised it was the reason I was always in a bad mood for the entire morning. Skipping breakfast and lunch might work for you, but it's definitely not a solution for everybody.


The crazy thing about diets is how differently everyone responds to different plans.

For me, if I eat anything significant for breakfast (let's say > 400 calories), I am STARVING by the time lunch rolls around at noon. Like, stomach in pain, I need food now, I'll have 2 pounds of wings and a side of macaroni, thanks. But that overindulging doesn't carry over to dinner, which is pretty typical.

So, the synthesis of all of that is eating breakfast significantly increases my caloric intake for the day, despite having no measurable difference on energy, mood, etc.

If there are any rules that have worked for me, it's: Eat less, Eat less often, and Avoid carbs.


I find this true of a high-carb breakfast (which most breakfasts are). If I eat 4 eggs for breakfast I easily go until dinner without eating, but will usually eat around 3pm. Then some days I skip dinner.


Try having lots of beans at dinner. The resistant starch gives you a "second meal effect" the following morning.


That is not a healthy diet. You should be eating smaller meals more often. If you are going to fast, do it for at least a 24-hour period.


I think it’s worth considering what a healthy diet really is. Is it really healthy or are we just used to thinking it’s healthy?

As an example, a diet high in fat is generally considered by most people to be unhealthy.

You see people doing Keto and you start wondering if it’s really unhealthy.

Eating red meat is considered to be unhealthy. You see people doing ZeroCarb diets and they (at least) look extremely good/healthy.

Fasting is generally considered unhealthy. You have people doing intermittent fasting, ADF or extended fasting and it works exceptionally well for them.

Vegan? It works really well for some people.

IMHO there isn't one thing that works for everyone as far as diet goes. There are also the theoretical and the practical aspects of what we eat and when we eat it.

I think everyone should try different things and figure out something that works for them, long term. It should not be a “diet”. It should be just something that blends into your lifestyle and gives you results for the investment you’re making.


Though, nearly all of those diets seem to integrate more veggies and massively reduce sugar.


Reducing sugar is probably a good idea.


I mean intermittent fasting where you fast for ~16 hours and then eat larger meals during the remaining 8 is pretty popular.


Eating more meals in a day (whether small or large) = more insulin spikes throughout the day = less of a chance to heal your insulin resistance.


There are functional benefits to having UUIDs as a primary key, but yes, there are performance impacts on writes and ORDER BY. The best way to find out how it will impact your application is to have performance tests in place, and test out the primary key change in a development environment. I do not think you'll be able to determine the impact on performance/scalability based on "pure thought".


Why would you want to ORDER BY on a UUID field? Not trolling, I honestly can't think of a reason why you would want to do this. Secondly, aren't UUIDs treated by the database engine as a 128-bit integer? If they are being treated as varchar fields then I can see how this would affect performance negatively but again, I question if this is really the case.


What if you just want a stable row ordering, and don't care what that ordering is?


For random (and not timestamp-prefixed) UUIDs, you can end up hitting more blocks because of poor locality of reference, if you are using B+trees. If you are using an LSM index, you get blocks of data written at the same time in the index anyway, so your "slow" disk isn't so bad, because that data is in your cache already. For B+trees and random UUIDs, data in blocks is basically scattered everywhere. So your index lookup of 1 billion items could hit 1 billion leaf blocks, instead of 1 billion / entries-per-leaf.
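
A quick way to see the locality difference (rough sketch; the time_prefixed_id helper below is just an illustration of the idea that UUIDv7/ULID standardize):

  import os, time, uuid

  def time_prefixed_id():
      """Millisecond timestamp prefix + random suffix, so new ids sort next to each other."""
      ts = int(time.time() * 1000).to_bytes(6, "big")
      return (ts + os.urandom(10)).hex()

  random_ids = [uuid.uuid4().hex for _ in range(5)]
  ordered_ids = []
  for _ in range(5):
      ordered_ids.append(time_prefixed_id())
      time.sleep(0.002)  # ensure each id gets a distinct millisecond prefix for the demo
  print(sorted(random_ids) == random_ids)    # almost always False: inserts scatter across the index
  print(sorted(ordered_ids) == ordered_ids)  # True: inserts land in the rightmost leaf blocks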


It's pretty unlikely to get a collision[1], and the client should be able to handle that gracefully (regenerate the UUID and retry).

[1] https://www.quora.com/Has-there-ever-been-a-UUID-collision
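
A minimal sketch of that regenerate-and-retry idea, using an in-memory SQLite table as a stand-in for the real database (table and function names are made up):

  import sqlite3, uuid

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE items (id TEXT PRIMARY KEY, payload TEXT)")

  def insert_item(payload, max_attempts=3):
      """Insert with a fresh UUID; on the vanishingly rare collision, retry with a new one."""
      for _ in range(max_attempts):
          new_id = str(uuid.uuid4())
          try:
              conn.execute("INSERT INTO items VALUES (?, ?)", (new_id, payload))
              return new_id
          except sqlite3.IntegrityError:
              continue  # primary key collision: regenerate and try again
      raise RuntimeError(f"no unique id after {max_attempts} attempts")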


The rarity of UUID collisions depends on the quality of implementation. The Quora answers assume UUIDv4 with a high-quality RNG.

I was able to reproduce UUIDv1 collisions at will when the timestamp had microsecond resolution and the clock sequence had to be generated randomly each time. That is, I simply had to get two processes to generate the same fourteen bits within a microsecond.


That only seems possible if both processes were running on the same physical machine (and hence share the same input MAC address).

It's a nasty corner case, but I'm sure the UUID designers considered it a valid tradeoff, since I believe the algorithm was designed for a distributed scenario. In the single-machine case an atomic counter would be a much easier solution with very reasonable efficiency anyway. Still, it might have been clever to also include the local process id in the UUID; I wonder why they didn't do that.

At any rate the problem is easily worked around by running the two processes on different machines, i.e. ensuring you have at most one UUID generating process per host (with respect to the database table in question).


> It's pretty unlikely to get a collision

unlikely is not zero, which was why I commented. I'd hate to rely on uniqueness of something that has a chance of not being unique.


If you generate 1 billion UUIDv4s a second, it would take, on average, 85 years for you to produce a duplicate, and the resulting list of UUIDs would take up ~45 exabytes. And keep in mind that even if inserting a row fails because you've somehow managed to generate a duplicate UUID, it is trivial to make a new UUID and retry. Since the database enforces the uniqueness constraints of primary keys, I'm hard pressed to come up with a scenario in which generating a duplicate UUID would actually do anything serious.
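
A rough back-of-the-envelope check of those numbers, using the standard birthday-bound approximation (a v4 UUID has 122 random bits):

  import math

  random_bits = 122                          # 128 bits minus the fixed version/variant bits
  n_50pct = math.sqrt(2 * 2 ** random_bits * math.log(2))   # UUIDs needed for a ~50% collision chance
  years = n_50pct / 1e9 / (3600 * 24 * 365)  # at one billion UUIDs per second
  exabytes = n_50pct * 16 / 1e18             # 16 bytes per UUID
  print(f"{years:.0f} years, {exabytes:.0f} EB")   # roughly 86 years and ~43 EB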


GP pointed out the special version of the algorithm that handles collisions gracefully. Note that this algorithm ("open addressing") is so frequently used that you will probably find a variation of it in pretty much any piece of software you have ever used. It's a well-understood method, not only from a practical perspective but also in terms of theory; check out e.g.: https://en.wikipedia.org/wiki/Linear_probing
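
For reference, a bare-bones sketch of open addressing with linear probing (fixed-size table, no resizing or deletion handling):

  def probe_insert(table, key, value):
      """Insert into an open-addressed table; on collision, scan forward to the next free slot."""
      n = len(table)
      i = hash(key) % n
      for _ in range(n):
          if table[i] is None or table[i][0] == key:
              table[i] = (key, value)
              return i
          i = (i + 1) % n   # linear probing: try the following slot
      raise RuntimeError("table is full")

  table = [None] * 8
  probe_insert(table, "foo", 1)
  probe_insert(table, "bar", 2)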


I wonder whether the premise of your question is faulty. If you ask enough people: "In your last 100 flips of a coin, did you get more than 60 heads?" some will truthfully say "Yes." Unfortunately, that does not mean there's anything special about their coin-flipping strategy, or that you will be able to generate a successful coin-flipping strategy.

My guess is what you really want to know is "What is my expected gain if I try to employ an algorithmic trading strategy?" I have my suspicion of the answer to that question, but I don't believe your current question will shed any light on the answer.


Mostly I believe this too, but I am familiar with some people who can consistently make money year after year. From talking to them it becomes clear that they understand things very, very deeply.

See /u/Fletch71011 on reddit-- he's always happy to discuss things. He's made millions trading options, mostly algorithmically as I've understood it. The methods he uses are sufficiently complex that you need to be very well acquainted with the intricacies of derivatives to follow along, but basically he trades volatility instead of price movement. Regardless of whether the price of the asset goes up or down, he makes money. In his opinion, it's foolish to try to trade price direction, and you're basically flipping coins and likely to lose money.

I tried understanding what he was doing and abandoned the attempt. I had to conclude I was not quite so clever as he.


Be careful with volatility. I don't know what he's trading on exactly. But a big part of volatility trading is selling insurance, i.e. selling insurance against the direction of S&P. You can make a lot of money collecting insurance premium, but on the event of a payout, like a sudden big drop in S&P, the loss can be very substantial. See XIV and SVXY in February of this year.


Trading volatility might imply that he's buying options in both directions.

Big moves either up or down would be profitable. The only unprofitable move here would be no substantial moves in either direction. (In which case you lose your entire bet, but no more.)


Could you expand on how that would work? If X is priced at 10 units of currency, and I promise to buy 1 X for 11, and to sell 1 X for 9. And X stays available for 10, I end up paying 11, receiving 9 - netting a loss of 2. If I manage to promise a sell/buy at 10, I even out. What do I lose with low volatility? And how do I make money "both ways"?

Clearly I lack a basic understanding of the concepts involved.


"Volatility" in the term "Volatility Trading" does not mean the stock's movements, it is a way of measuring the excess value in an option beyond what the parameters of the option would imply. That excess value is usually referred to as the market's assumption about the future volatility of the stock, but really its just an error term influenced by market participants based on supply and demand. Low volatility means "pretty close to its theoretical value assuming no volatility" or to put it another way: "cheap" i.e. good for buyers and bad for sellers.

Sort of like how different companies with the same cash flows can trade at different multiples, otherwise identical options in two companies (or different expirations/strikes in the same company) can trade at different prices because of the opinions of market participants. Volatility traders act when the different prices/error terms are too far apart, counting on the prices/error terms to converge, à la pairs trading.

Since they are trading the error term directly, they attempt to construct positions that remain relatively flat in value as the stock moves around, but are designed to only change in value when the error term changes. That is how they can make money "both ways", because they can profit if the stock goes up, down, or stays the same, as long as the error term moves in the correct direction.

The reason you only see sophisticated people doing this kind of trading is because you need a large and complex position with many hundreds of options to be in a truly market-neutral environment. You can't take advantage of mispricing without such a large position because buying/selling single options involves a tremendous amount of risk, so you need to do that as a part of a larger portfolio to spread that risk. Retail traders tend to spread the risk by doing 2 transactions (the mispriced option and a well-priced but mirrored hedge option), but that is a) much more expensive from a commissions standpoint and b) really limits the range of market-neutrality forcing you to adjust more frequently to stay market-neutral, again, with commission costs.
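
To make the "error term" concrete: the only free input in a textbook options-pricing model is the volatility number, so two otherwise identical options trading at different prices are really trading at different implied vols. A small sketch using the Black-Scholes formula (made-up parameters, purely illustrative):

  from math import log, sqrt, exp, erf

  def norm_cdf(x):
      return 0.5 * (1 + erf(x / sqrt(2)))

  def bs_call(spot, strike, years, rate, vol):
      """Black-Scholes price of a European call option."""
      d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * years) / (vol * sqrt(years))
      d2 = d1 - vol * sqrt(years)
      return spot * norm_cdf(d1) - strike * exp(-rate * years) * norm_cdf(d2)

  # same stock, same strike and expiry; the only difference is the vol input
  print(bs_call(100, 100, 0.25, 0.01, 0.20))   # ~4.1
  print(bs_call(100, 100, 0.25, 0.01, 0.30))   # ~6.1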


So it's "buy low, sell high" - but for options, not stocks?

[ed: that's to say, you need a way to get more/accurate pricing information than reflected in the market - but for options, not assets]


Yes, this is how the guy I was referring to explained it to me: he created a pricing model to predict errors/inaccuracies in the market's model, and was thus able to know when an option was likely to move towards a different price to correct itself.


Yes, but it's sort of changing the definition of high and low from price (with stocks) to volatility (with options) and having the expertise to trade volatility like that without _accidentally_ trading on price.


If I recall correctly, your structure describes a future, not an option.

If you buy put options for X at 10, and call options for X at 10, then if the price moves down you exercise the put option, and if the price goes up you exercise the call. Unless the price is totally fixed, you make some profit. The real question is whether this profit outweighs the price of both your options.

You need the price to move sufficiently for this plan to be worth it. Especially because in any case, either your put option or your call option is worthless. Thus, you need twice as large a price move as when buying only puts or calls. The upside is that you don't need to care about the direction of the movement.
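
A toy payoff calculation for that long straddle (made-up strike and premiums, ignoring commissions and time value):

  def straddle_pnl(spot_at_expiry, strike=10.0, call_premium=0.5, put_premium=0.5):
      """P&L at expiry from buying one call and one put at the same strike."""
      call_payoff = max(spot_at_expiry - strike, 0.0)
      put_payoff = max(strike - spot_at_expiry, 0.0)
      return call_payoff + put_payoff - (call_premium + put_premium)

  for s in (8.0, 9.5, 10.0, 10.5, 12.0):
      print(s, straddle_pnl(s))   # loses the premiums near 10, profits on a big move in either direction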


In volatility trading you don't carry naked options (you usually hedge them dynamically, readjusting the hedge every now and then) and usually close positions before the options expire. So your analysis does not apply. If you want to get an understanding of how to trade volatility, "Volatility Trading" by Euan Sinclair is excellent.


Khan describes the long straddle (3:53): https://youtu.be/2HIRaOQDRho


I think the terms you're looking for are "straddle" and "strangle" options strategies.


> Big moves either up or down would be profitable.

The problem is that much more often than not the “only unprofitable move” is the one that happens. Options in general are too expensive and that’s why selling “insurance” is profitable most of the time. Maybe he can identify consistently mispriced vol, though.


> Maybe he can identify consistently mispriced vol, though.

This was the method I used, as described in another comment.


Very interesting, thanks for the pointer.


The common strategies are delta hedging, gamma hedging and gamma scalping for market-neutral trades.


If you delta hedge (i.e. take opposing positions in the underlying and the derivative, e.g. selling a call and buying the stock and then constantly readjusting your position), you can protect yourself against small/normal changes in the price of the underlying, but if there are big jumps in the underlying and acceleration of delta (i.e. gamma), you can lose a lot.


On a per equity basis there are reasonably consistent ways to predict near term volatility using sentiment analysis and revenue forecasting ("alternative" data). I would not attempt this with something like the VIX, but for selling options on individual equities it can work.


As a former vol trader, I think this is possible. I have friends who were former pit locals and are now sitting at home trading options.

It is all moving to algos, though.

With vol it's one of those specialized areas where it's probably quite hard to learn without having sat on an options desk. Even the way it's presented in the books does not give a good picture of what you're supposed to do.


A 50% chance to lose money per year still allows for very long strings of success. Some strategies trade an ~80% chance of a small win for a ~20% chance of a large loss, which means lucky streaks could last for decades.
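
Rough numbers on how long such streaks can run, assuming independent years:

  p_win = 0.8                        # strategy that wins small in ~80% of years
  print(0.5 ** 10)                   # ~0.001: about 1 in 1000 pure coin-flippers win 10 years straight
  print(p_win ** 10, p_win ** 20)    # ~11% chance of a 10-year streak, ~1% chance of a 20-year streak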


So, basically you are saying that it's all gambling and nobody has a better strategy / better information than everybody else. Some are just lucky and it is all because of the survival bias.

I agree with you in that it is a possible explanation, but I disagree in that it is the only one possible.


I feel that what he's saying is that it's hard to tell if somebody actually has a working strategy or it's just gambling, they can be nearly indistinguishable, and (given the number of people) someone showing a streak of successes is really not much evidence that it's something beyond luck.

They may have something, but since you can't determine if it's so and there is a lot of just gambling, then most likely it's that.


That's part of it, but the reverse is also true. Someone could lose money and still have better odds than normal. Something like: skill adds 5%, then luck adds or removes 30%.


I didn’t gather that from what he said at all, personally. I interpreted his comment at face value; it is possible to pull a profit over an arbitrary period of time even with 50/50 odds.


I am not sure I understand this.

It takes just as much skill to guess if volatility will go up or go down as it does to guess if prices will.

The best models we know of (say, GARCH) will still only keep pace with the information in the option prices.


A key part of how options premiums are priced is the expected, or implied, volatility (IV) of the underlying (the stock/future/whatever). Therefore you can be an options seller (selling calls and puts) to collect high premium, expecting that before the options expire, the IV of the underlying will decrease, making it more likely you can keep the credit received from selling those high-IV priced options. It can also be shown historically that the IV for any underlying is overestimated about 75%-80% of the time, meaning the price of the underlying turns out not to be as volatile as the IV suggested.


But 20-25% of the time it is more than 3x as volatile. Taleb built his career on this.


Markets have been going up for a while now. So anyone with half a brain is making money. But long term, there are essentially 0 investors making money on day or algorithmic trading.


That's extremely untrue, especially if we are counting non-retail investors, i.e. prop market-maker trading.


It's not that complicated: he mentioned using off-the-shelf software; there just aren't a lot of retail traders who can open an office in the CBOE and hook directly into the exchange computers while running enough contract volume to essentially make markets.


I think your argument is logically correct, but you are using numerical assumptions that are off by one or two orders of magnitude. Your typical successful algorithmic trader is probably flipping their metaphorical coin 1,000,000 times, and getting 520,000 heads. Each individual trade may only be slightly profitable, but there is often no statistical ambiguity about the effectiveness of the strategy.

Individual trading strategies often become less effective over time, though. Whether this kind of success can be sustained at the level of a trading firm over many years is an entirely different question. Whether they can beat the market after fees is a third, also entirely different question.
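
For a sense of scale, under a fair-coin null hypothesis (numbers from the comment above, nothing else assumed):

  import math

  n, heads = 1_000_000, 520_000
  sigma = math.sqrt(n * 0.5 * 0.5)     # std dev of the head count for a fair coin: 500
  z = (heads - n * 0.5) / sigma        # 20,000 excess heads is a 40-sigma deviation
  print(sigma, z)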


> "What is my expected gain if I try to employ an algorithmic trading strategy?"

The market is negative-sum, so any abstract strategy has an expected excess return which is negative. That should be everyone's assumption absent competing evidence.

Algorithmic strategies include such gems as "buy on mondays and sell on thursdays", and there is no inherent magic to them making them better than my "buying stocks with names I like".


Apparently some mice do menstruate[1]. Do mice have another mechanism for handling the issue the author raises: "What to do when the embryo died or was stuck half-alive in the uterus?"

[1]: https://www.nature.com/news/first-rodent-found-with-a-human-...


I would speculate that it works similarly to most other mammals. Spikes of prostaglandin in the blood cause abortion via smooth muscle action and fluid production (or, if the embryo has developed to term, parturition rather than abortion). Prostaglandin is produced in the body cyclically. The corpus luteum releases progesterone that counteracts prostaglandin, but toward the end of the estrous cycle the CL degrades into an inert corpus albicans. Healthy implanted embryos release gonadotropin, which "stops the clock" for the CL so that it continues to produce progesterone. Eventually the placenta starts producing its own progesterone, and at that point the CL is no longer important. A dead or half-alive embryo is likely to produce neither gonadotropin nor progesterone, so it will be aborted and flushed on the next prostaglandin spike. The later this occurs in the pregnancy, the more stress is placed on the mother.

It's my impression that after implantation, this process is the same in humans as in other placental mammals. Thus menstruation might be more properly understood to be about flushing unimplanted zygotes rather than aborting unhealthy embryos.


I believe OP's point was that aggregating crowd-sourced reporting of outages is surprisingly accurate and prompt, not that there aren't any other tools for reporting outages.

