
> But if the startup gods aren’t smiling, and you can’t either figure out the cause and/or figure out how to correct it, it’s time to start working on a Plan B for the business. Plan B often includes kicking off a strategic process that ends up in the sale of the company before it becomes as obvious to others as it is to you that you’ve got a dying shark on your hands.

That doesn't sound entirely ethical or honest...

> That doesn't sound entirely ethical or honest...

It isn't; that's the simple answer. In any deal there are two main obligations, one for the seller and one for the buyer: the seller must inform the buyer about any material things they are aware of that affect the item being sold ('duty to inform'), and the buyer must look at the item being sold long and hard to see if there is anything they can find that will affect their judgment ('duty to research').

So from an ethics and honesty perspective you're 100% right, and from a legal perspective you are also right, but that's where it gets hairy. Because the 'duty to inform' is not specified in enough detail to make it an explicit obligation without wiggle room, these cases (where a seller should have been aware of a defect but did not disclose it) usually end up in front of a judge, and it is definitely not a given that every such case is ruled in favor of the party filing the complaint. It's one thing to state that a seller should have been aware of something, but quite another to prove that they actually were aware of it.

I've seen a deal or two where the seller was 'glad to be rid of x', where you'd expect them to be perfectly aware of the situation, and where the buyer did their DD and was totally happy with what they bought. So sometimes it is simply a matter of perspective, and the supposed defects are simply not as relevant to the buyer as they were to the seller. But as a seller I'd rather err on the side of caution and inform the buyer, even if that sometimes means a deal doesn't go through.

You can bet that after this remark any exit from a16z will be looked at just a little bit more carefully, if that is even possible. Most deals at this level go through a very thorough process, making it fairly rare that something like 'growth is stopping or about to stop' would slip through the cracks, and I'm kind of surprised that a16z is of the opinion that a seller could get away with this kind of trick without the buyer figuring it out in time. That would have to be a fairly inexperienced buyer.

Don't keep claiming it still has hockey-stick growth. Done. All of a sudden you're not committing fraud anymore. Expecting infinite exponential growth is absurd on many levels.

Most businesses live in the margins. This doesn't mean the business should be disbanded.

This is how most people try to make money in any public financial market though. Sell at the top. Private companies are a bit different, but it's always buyer beware and do due diligence for a reason.

Dead sharks are worth a lot of money in the art world: http://www.amazon.com/The-Million-Stuffed-Shark-Contemporary...

Perhaps a dead unicorn would have been even more noteworthy.

The original shark fell apart and had to be replaced, which added an amusing patina of involuntary performance art to that famously iconic work.

It's not known if the piece now includes an ongoing shark-replacement maintenance contract.

Yeah, he made one of those, too[0]

[0] http://www.damienhirst.com/the-dream

If the buyer is going to just shut down the product (and perhaps effectively mitigate their own competition, even temporarily), then there's a bit less potential for fraud to hurt the buyer.

It's not necessarily about misrepresenting your business to potential acquirers. In many cases, an acquirer may be interested in a company's IP portfolio or product. Such an acquisition might resemble a liquidation, but it's better than nothing.

Is there a free/open-source download for RiakTS?

Or can anyone recommend something lighter-weight than full-blown Kafka etc.?

Maybe, just maybe, someday Basho will open the source for RiakTS, like they did for RiakCS (Riak S2). Well, I certainly hope so :)

It isn't nearly as scalable as Riak, since the so-called clustering tops out at 3 nodes, but Influx is very, very nice if you don't throw gobs of data at it.


Riak is one of those systems where it really does scale out linearly.

Things are actually about to get dramatically better for InfluxDB: https://influxdb.com/blog/2015/10/07/the_new_influxdb_storag...

I'm not sure how that helps fix the HA/FT clustering setup?

I can hurl more data at the system and have it go unavailable during a partition or lose updates when it heals?

Thanks for the heads up. As a current influx (and cassandra) user, I'm painfully aware of the limitations at "scale". Influx seems to be the best long term play, but still has some growing up to do.

I've had a good experience with Cassandra for time series data.

Likewise, but it doesn't solve one of the things that Riak TS and other time-series DBs appear to try to solve: querying your data across ranges without knowing the range ahead of time. Essentially you end up performing rollups yourself (which requires internal knowledge of how to aggregate the values) to avoid scanning gazillions of records just to get a month's worth of data; rough sketch of what I mean below. Among other solutions, I believe this is something KairosDB[1] specifically tackles.

1 - https://github.com/kairosdb/kairosdb
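To make the "rollups yourself" point concrete, here's a minimal sketch. It assumes your samples are plain (unix timestamp, value) pairs and that summing is the right aggregation, and the hourly bucketing is just an example; none of this comes from Riak TS or KairosDB, it's only the kind of pre-aggregation you otherwise have to hand-roll:

    from collections import defaultdict
    from datetime import datetime, timezone

    def rollup_hourly(points):
        """Collapse raw (unix_ts, value) samples into hourly sums.

        Doing this by hand means you must already know that 'sum' is the
        right way to combine values (it isn't for e.g. gauges or percentiles),
        which is exactly the internal knowledge the DB should own.
        """
        buckets = defaultdict(float)
        for ts, value in points:
            hour = datetime.fromtimestamp(ts, tz=timezone.utc).replace(
                minute=0, second=0, microsecond=0)
            buckets[hour] += value
        return dict(buckets)

    # Querying a month then touches ~720 hourly rows instead of every raw sample.
    raw = [(1444003200 + i * 10, 1.0) for i in range(6 * 360)]  # 10s samples over 6h
    print(sorted(rollup_hourly(raw).items()))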

> But something tells me that our society and economy derives zero benefit from nanosecond resolution trading vs. trading with say 1 minute resolution.

Not only does nanosecond resolution provide zero benefit, models indicate that it actually harms liquidity. To be specific, serial order processing in continuous-time is more efficient in "time-space", but less efficient in "volume-space". The better mechanism is batch order processing in discrete-time (i.e. process all arriving orders simultaneously in batch every 100 milliseconds, rather than one-by-one every nanosecond): http://faculty.chicagobooth.edu/eric.budish/research/HFT-Fre...
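For intuition, here's a toy sketch of the batch-auction idea, my own illustration rather than the paper's code: collect everything that arrives during the interval, then clear all of it at one uniform price, so being a few nanoseconds earlier within the interval buys you nothing. The midpoint clearing-price rule and the numbers are made up for the example:

    def clear_batch(bids, asks):
        """Toy uniform-price call auction for one batch interval.

        bids/asks: lists of (price, qty) that arrived during the interval.
        Everything is crossed at a single clearing price between the marginal
        matched bid and ask.
        """
        bids = sorted(bids, key=lambda o: -o[0])   # highest bid first
        asks = sorted(asks, key=lambda o: o[0])    # lowest ask first
        total, bi, ai = 0, 0, 0
        bid_left = ask_left = 0
        bid_price = ask_price = None
        marginal_bid = marginal_ask = None
        while True:
            if bid_left == 0:
                if bi == len(bids):
                    break
                (bid_price, bid_left), bi = bids[bi], bi + 1
            if ask_left == 0:
                if ai == len(asks):
                    break
                (ask_price, ask_left), ai = asks[ai], ai + 1
            if bid_price < ask_price:
                break                              # books no longer cross
            qty = min(bid_left, ask_left)
            total += qty
            marginal_bid, marginal_ask = bid_price, ask_price
            bid_left -= qty
            ask_left -= qty
        if total == 0:
            return None
        price = (marginal_bid + marginal_ask) / 2  # one simple clearing-price rule
        return price, total

    # One 100 ms batch: all crossing volume clears together at a single price.
    print(clear_batch(bids=[(10.02, 300), (10.00, 200)],
                      asks=[(9.99, 250), (10.01, 400)]))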

What about derived liquidity? A lot of liquidity in ETFs, futures, foreign exchange, dual-listed stocks and interest rate products works this way. Traders will do things like buy futures and sell correlated ETFs to hedge, buy ETFs and sell the basket of stocks it holds, and so on, as a way of making markets.

Because the markets are fast, there is less risk that their hedging product will "run away" from them before they can execute. Since the risk is low, they can make a very competitive and tight market. If they had a 100ms delay to hedge, they would need to make a bigger spread to compensate for the risk.

Otherwise a broker would just route mom n' pop's orders to the exchange that pays the biggest bribe to the broker (and screw mom n' pop).

Yes but for proprietary traders using their own capital, it doesn't make a lot of sense to force them to trade in a particular way since executing sub-optimally can only impact their own performance. There are legitimate reasons not to trade the lowest price market (maybe you believe the quote is slow, want to trade bigger size all at once, don't want to risk signaling to the market): http://www.hudson-trading.com/static/files/RegNMShudsonriver...

And retail brokers already do what you suggest. All their marketable flow gets sold to off-exchange market makers, while their limit order flow gets routed to exchanges that pay the highest liquidity rebates, Reg NMS just mandates what price it can trade at: http://news.indiana.edu/releases/iu/2014/02/study-of-potenti...

A point of controversy is whether or not the 43 students were incinerated. The independent experts' (GIEI) report insists that it was impossible:

> Dr. José Luis Torero, an internationally recognized fire-investigation expert, was hired by the GIEI to conduct an independent examination into the incineration scenario. Torero, a Peruvian who participated in the forensic investigations of the World Trade Center attacks, has a Ph.D. from the University of California, Berkeley, and was previously a professor of fire security at the University of Edinburgh. He currently heads the School of Civil Engineering at the University of Queensland, Australia. The incineration of forty-three bodies in an open-air terrain like that of the Cocula dump, Torero concluded, would have required some thirty-three tons of wood or fourteen tons of pneumatic tires, along with the same amount of diesel fuel; the fire would have had to burn for sixty hours, not the twelve that the P.G.R. claimed it had, based on the confessions of captured Guerreros Unidos sicarios. The smoke from such a fire would have risen nearly a thousand feet into the sky and would have been visible for miles around; no such pillar of smoke was spotted, or even captured by satellite imagery.

The pro-government media is pushing back that it's just one scientist's opinion, which is at odds with the opinion of scientists at UNAM (the national university of Mexico) who participated in the government investigation and concluded that the missing students _were_ incinerated.

That sounds eerily reminiscent of the Climate Change Denialist camp: bribe some unscrupulous scientist to push whatever narrative you want people to believe, then use the media to frame the conflict as a matter of opinion.

If the facts say that burning 43 bodies would require more fuel than could have been moved to the site in secret, and would leave unmistakable traces that are simply missing, then that's what the facts say. Period. The UNAM scientists can claim that the Earth is flat... it is not as if their paychecks don't ultimately come from the same people who are trying very hard to cover up the whole affair.

Specifying a solution is easy; that's a purely technical exercise. The hard problem is figuring out how we get from where we are now (A) to where we want to go (B); that's a political problem.


The screenshot in the article is one I posted 9 months ago, and is being used without attribution: https://news.ycombinator.com/item?id=8593050


The other day, I ran across this in a Yudkowsky article[1]:

> Suppose that earthquakes and burglars can both set off burglar alarms. If the burglar alarm in your house goes off, it might be because of an actual burglar, but it might also be because a minor earthquake rocked your house and triggered a few sensors. Early investigators in Artificial Intelligence, who were trying to represent all high-level events using primitive tokens in a first-order logic (for reasons of historical stupidity we won't go into) were stymied by the following apparent paradox: [.. snip ..] Which represents a logical contradiction, and for a while there were attempts to develop "non-monotonic logics" so that you could retract conclusions given additional data.

1. http://lesswrong.com/lw/ev3/causal_diagrams_and_causal_model...


Thanks for this, though I may not be fully comprehending. If earthquakes can set off burglar alarms, and if burglars can set off burglar alarms, why would there be no ALARM -> EARTHQUAKE theorem? Understandably the ALARM is for detecting burglars, not earthquakes, but it does function as a detector for both.


It doesn't function as a detector for both, it functions as a detector for either. I'm not being pedantic, there is a subtle difference. ALARM -> EARTHQUAKE is false: the correct theorem is ALARM -> (EARTHQUAKE | BURGLAR). If the alarm goes off then you have no certain knowledge about which one caused it, only probabilities.
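A quick numerical way to see this, as a toy sketch: the priors and detection probabilities below are made-up numbers purely for illustration, and the noisy-OR assumption (each cause independently gets a chance to trip the alarm) is my own simplification of the example.

    from itertools import product

    P_BURGLAR = 0.001        # P(burglar tonight) -- made up
    P_QUAKE = 0.0001         # P(minor quake tonight) -- made up
    HIT_BURGLAR = 0.95       # P(alarm | burglar alone)
    HIT_QUAKE = 0.20         # P(alarm | quake alone)
    FALSE_ALARM = 0.0001     # P(alarm | neither)

    def p_alarm(burglar, quake):
        """Noisy-OR: each present cause gets an independent chance to trip the alarm."""
        p_quiet = 1 - FALSE_ALARM
        if burglar:
            p_quiet *= 1 - HIT_BURGLAR
        if quake:
            p_quiet *= 1 - HIT_QUAKE
        return 1 - p_quiet

    # P(cause | alarm) by enumerating the four worlds and applying Bayes' rule.
    p_alarm_total = p_burglar_and_alarm = p_quake_and_alarm = 0.0
    for burglar, quake in product([True, False], repeat=2):
        p_world = ((P_BURGLAR if burglar else 1 - P_BURGLAR) *
                   (P_QUAKE if quake else 1 - P_QUAKE) *
                   p_alarm(burglar, quake))
        p_alarm_total += p_world
        p_burglar_and_alarm += p_world if burglar else 0.0
        p_quake_and_alarm += p_world if quake else 0.0

    print("P(burglar | alarm) =", p_burglar_and_alarm / p_alarm_total)
    print("P(quake   | alarm) =", p_quake_and_alarm / p_alarm_total)
    # Neither is 1: the alarm only makes 'burglar OR quake (or a glitch)' more likely.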


It would seem that an additional source of information (e.g. a nearby seismograph bolted to bedrock) would be the solution to this problem. This is analogous to a blind person trying to identify whether a sound is a recording or the Real McCoy. The solution is to either somehow make him able to see or to allow him to use some other sense (e.g. touch) by which to measure the origin of the sound. If he hears a dog barking, then if he sees/feels a dog, he is almost completely assured [0] that it is a real dog that was barking and not a recording playing from a machine.

Thus, adding another sense (a seismograph and sightedness/touch, respectively) would seem to eliminate the problem; see the numbers after the footnote. If this is correct, then such problems are more ones of a lack of relevant, heterogeneous information and less of a lack of logical expressiveness of probabilities.

[0] = I say "almost", because knowledge is a fundamental philosophical problem. The usual means to "almost" assurance is, as I am arguing, to employ more heterogeneous sources of information until the only logical alternatives to what you believe are absurdities (e.g. impossible).
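Putting made-up numbers on that (same toy priors as the sketch above, plus an equally made-up seismograph with a 99% hit rate and 0.1% false-positive rate): conditioning on the second, independent sensor collapses most of the ambiguity.

    from itertools import product

    P_BURGLAR, P_QUAKE = 0.001, 0.0001
    HIT_BURGLAR, HIT_QUAKE, FALSE_ALARM = 0.95, 0.20, 0.0001
    P_SEIS_GIVEN_QUAKE, P_SEIS_GIVEN_NO_QUAKE = 0.99, 0.001

    def p_alarm(burglar, quake):
        p_quiet = 1 - FALSE_ALARM
        if burglar:
            p_quiet *= 1 - HIT_BURGLAR
        if quake:
            p_quiet *= 1 - HIT_QUAKE
        return 1 - p_quiet

    # P(quake | alarm AND seismograph), sensors treated as independent given the causes.
    total = quake_mass = 0.0
    for burglar, quake in product([True, False], repeat=2):
        p_world = ((P_BURGLAR if burglar else 1 - P_BURGLAR) *
                   (P_QUAKE if quake else 1 - P_QUAKE) *
                   p_alarm(burglar, quake) *
                   (P_SEIS_GIVEN_QUAKE if quake else P_SEIS_GIVEN_NO_QUAKE))
        total += p_world
        quake_mass += p_world if quake else 0.0

    print("P(quake | alarm, seismograph) =", quake_mass / total)
    # With the alarm alone this was ~2%; the extra heterogeneous evidence pushes it past 90%.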


Ah, I wasn't picking up on the subtlety; thanks for adding clarity.


> In a hypothetical cyber-utopia or something, this could all just be a simple protocol (say, um, combining aspects of Direct Connect, BitTorrent, NNTP, Gopher, Bitcoin, GPG)

The article does mention p2p and decentralization, but the browser isn't whats holding that back. The client-server development paradigm is.

Currently, every dev making a p2p app has to roll their own platform because there's no standard (i.e. no LAMP, rails, or Heroku for deploying p2p apps). Some projects are working to change this, namely BitTorrent Maelstrom (which is a fork of Chromium with native support for magnet URLs, so it auto-renders an html/js package distributed via torrent); and http://ipfs.io (a sort of content-addressable p2p filesystem, or bittorrent on steroids).
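The "content-addressable" part is the key idea: the address of a piece of content is derived from the content itself, so any peer can serve it and any peer can verify it. A minimal sketch of the flavor, using plain SHA-256; IPFS's real addressing uses multihashes and Merkle DAGs, so this is not its actual format:

    import hashlib

    def content_address(data: bytes) -> str:
        """Address = hash of the bytes, so the name is independent of who hosts them."""
        return "sha256-" + hashlib.sha256(data).hexdigest()

    def verify(address: str, data: bytes) -> bool:
        """Whoever fetched the bytes, from whatever peer, can check they got the real thing."""
        return content_address(data) == address

    page = b"<html><body>hello, p2p web</body></html>"
    addr = content_address(page)
    print(addr)
    print(verify(addr, page))                          # True
    print(verify(addr, page + b"<!-- tampered -->"))   # False: tampering changes the address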


Yeah, that's super interesting stuff... Also Ethereum, which as I understand it is a way to do distributed algorithms in a very general way.


JavaScript package managers like npm tend to work much more seamlessly than the old Unix ones. Wouldn't mind scrapping them all for https://node-os.com/

Also, the reason package managers suck is... entropy. See Joe Armstrong, "The Mess We're In": http://www.youtube.com/watch?v=lKXe3HUG2l4


I find that the Nix package manager is much more principled, general, and promising as a model for robust and easy package management. And with NixOS, it's a huge step towards a declarative and reversible way of managing a whole computer system.


