
If you write in a manner that gets you dismissed as a chatbot, then you've still failed to communicate, even if you physically typed the characters on the keyboard. The essence of communication isn't how nice the handwriting is; it's how usefully you've conveyed the information.

That would only be meaningful if ChatGPT made as much money as Google does per query (as it is, I believe they lose money on each, so they can hardly make it up in volume).


They could throw an ad platform on it in 5 minutes if they wanted to.


LOL

It took Google 15 years to build an ad empire.

And they still have trouble filling ad slots on YouTube.

You think ChatGPT is just going to "turn on" $100B in online ad spending overnight?

That's not how it works.


Why do you think it would take so long to throw an ad system on ChatGPT?

Personally, if I could buy OpenAI stock at 100B market cap right now, I would back up the truck.


Adding ads is easy; adding ads that work is not.

Google can easily monetize a search for "dentist in $city", where your intent is to spend money; OpenAI would have to monetize "sum up this email", where money is not involved at all.


If you have enough eyeballs on your pixels you can make money on ads. They are just deliberately choosing not to do it (Altman has made statements to this effect).


> If you have enough eyeballs on your pixels you can make money on ads.

Sure, but Google makes _way_ more money than X or Reddit or Pornhub; not all eyeballs are worth the same.


Which would slow down user uptake, I imagine.


Because we had an implicit understanding that it would continue to be humans reading the code.


The same thing Amazon would have to do to deter fake listings: add friction.

Of course, even mentioning the f-word is forbidden, so...


I know what the word means to me; I don't know what it means to Matt, and his opinion seems to be the relevant one.


It's not obvious to me that employees are motivated to be more productive for the sake of productivity.


Is it not obvious that there must be some who will be? My experience has been that new grads are willing to try out new things. It may be that this particular AI won't get them anywhere and maybe they'll become disillusioned, but the next crop of graduates will embrace the next big thing, and if that turns out to make a huge difference, they will reap the rewards.


It's not obvious that in the general case, employees can be motivated by getting work done if the reward is more work to do.


No, the reward is a CV that will bag them a major pay rise at the next gig. Not everybody wants that, so it'll never be the general case. But they might still take your job - along with several others'.


It's not clear to me that anything novel done at one enterprisey job has a predictable impact on how much more you could ask for next time, beyond learning the skills to do the work and proving you can take on responsibility or make decisions. As an IC, there are so many other bottlenecks that at best it'll just be another tool; in many cases, I don't think the value derived from a faster coding tool would surpass the value of reducing the number of meetings, which are tremendously burdensome and always creep into what would otherwise be the time you'd spend thinking about a sufficiently important problem.


Is the claim that there's some special property that makes it impossible to convey hate, as opposed to any other type of idea, through text?

That seems extremely wrong, especially in this context, given that LLMs make no attempt to formalize "ideas"; they're only interested in syntax.


Maybe the name "hate speech" is poorly chosen, since it's not necessarily about "hate".


I mean, what's the claim then, that there's no such thing as an illegal idea? You can't assign a semantic value to a legal system.


What makes you confident a methodology that consistently beats the market exists?


If such a methodology doesn't exist, then how are quantitative trading firms in business?

Genuinely wondering. Is this because they have so much money to play with that they can move markets in their favour?


As a quant myself, we don't try to predict the market, at least not the way that people normally talk about predicting, and we certainly don't move the market in our favor.

At least at my firm, what we do is look at the current state of the market at any given time point and test whether it satisfies our model of an efficient market. If it does, then there's no action to take; if it doesn't, then we determine what kind of violation is present and jump in to close the gap.

So a very trivial example would be to take two ETFs, like QQQ and TQQQ. As a simplification, a model of an efficient market would have, at any moment in the day, the change in price of TQQQ = 3x the change in price of QQQ.

We then observe the actual state of the market, and if the actual change in price of TQQQ matches our model, then there's nothing to do. If it doesn't, then either TQQQ is underpriced or overpriced, or QQQ is underpriced or overpriced (or our model is just wrong, or it's some outlier). Depending on what the condition is, we buy x dollars worth of TQQQ and sell 3x worth of QQQ, or do the opposite.

There's no real prediction here: we simply have a model of what an efficient market looks like, we scan the market for violations of that model, and then we perform an action to bring the market back to an efficient state.

The model I presented above is incredibly simple and just for illustrative purposes, but in a nutshell, that's our job. We have literally hundreds of models of an efficient market, and for every model we have algos that test whether the market satisfies it; when the market deviates from the model, the algo produces a signal that other algos act on.
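
To make that concrete, here is a toy sketch of such a check in JS (illustrative only: checkPair and the threshold are invented here, and a real model would also account for fees and the daily reset of leveraged ETFs):

    // Toy efficient-market check for a 3x leveraged ETF pair.
    const LEVERAGE = 3;
    const THRESHOLD = 0.0005; // ignore deviations under 5 bps (made-up figure)

    // Inputs are intraday fractional price changes, e.g. 0.01 = +1%.
    function checkPair(qqqChange, tqqqChange) {
      // Under the idealized model, tqqqChange should equal 3 * qqqChange.
      const deviation = tqqqChange - LEVERAGE * qqqChange;
      if (Math.abs(deviation) < THRESHOLD) return null; // market looks efficient
      // TQQQ rich relative to QQQ: sell TQQQ, buy QQQ; otherwise the opposite.
      return deviation > 0
        ? { sell: "TQQQ", buy: "QQQ" }
        : { buy: "TQQQ", sell: "QQQ" };
    }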


So basically, you’re seeking super low-risk arbitrage opportunities of low-moderate complexity, but like, really high throughput and with really low latency trading?


Exactly. Over the past 15 years of doing this, I can basically recall every single day that my firm had a net loss, with Brexit being the biggest one. Most of the losses were due to technical failures/bugs/networking issues; very few were due to issues with the model.

And yes, high throughput and low latency are critical aspects of our trading, and they are factored into the model as well: for every deviation we observe from our model, we need to measure how long such a deviation is likely to last, and we only trade on those which are likely to last long enough for the trading algo to complete.
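
As a rough sketch of that persistence filter (the names and the round-trip figure here are invented for illustration):

    // Only act on deviations expected to outlive our order's round trip.
    const ROUND_TRIP_MS = 2; // hypothetical exchange round-trip time

    function shouldTrade(signal, expectedLifetimeMs) {
      // Skip signals likely to vanish before the trading algo completes.
      return signal !== null && expectedLifetimeMs > ROUND_TRIP_MS;
    }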


The one consistent method I know of would be high-frequency trading to front-run orders, which involves maintaining state-of-the-art infrastructure (both hardware and software, a perpetually moving target), including a relationship with the markets so you can get an ultra-high-speed connection (I'm certain there are rules around this to make it fairer, but I would assume not everyone can be provided a connection, simply due to physical limits).

But I also assume that's not the type of thing parent comment is asking about - Any rational actor with an opportunity to do this would already be doing this after all.


How many quantitative trading firms have gone out of business?


There are a number of wealthy investors who don't tell other people about their methodology. They're either very lucky, criminals, or have beaten the market.


There's wealthy, and there's Wealthy. As the market itself tends to trend upwards, you can absolutely ride that into wealthy, no crime or extreme luck required.


If we're being pedantic, this doesn't actually do what's advertised: it waits X timeouts' worth of event cycles rather than just the one a true big timeout would take, assuming the precision matters when you're stalling a function for 40 days.


I haven’t looked at the code, but it’s fairly likely the author considered this? E.g. the new timeout is set based on the delta of Date.now() instead of just subtracting the time from the previous timeout.


No, it pretty much just does exactly that.

    // Each hop just subtracts the maximum real delay from the remaining
    // time; no clock is consulted, so any event-loop lag accumulates.
    const subtractNextDelay = () => {
      if (typeof remainingDelay === "number") {
        remainingDelay -= MAX_REAL_DELAY;
      } else {
        remainingDelay -= BigInt(MAX_REAL_DELAY);
      }
    };


Oh yikes. Yeah; not ideal.


To be fair, this is what I expect of any delay function. If it needs to be precise to the millisecond, especially when scheduled hours or days ahead, I'd default to sleeping until shortly before the deadline (ballpark: 98% of the full time span) and then doing a smaller sleep for the remaining time, or even a busy wait for the last bit if it needs to be sub-millisecond accurate.

I've had too many sleep functions not work as they should to still rely on them, especially on mobile devices and webpages where background power consumption is a concern. It doesn't excuse new bad implementations, but it's also not exactly surprising.
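
A sketch of that two-phase pattern (the names, the 98% split, and the 50 ms cutoff are arbitrary; a true busy-wait for sub-millisecond accuracy is omitted since it would block the JS event loop):

    const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

    // Sleep most of the way, re-checking a monotonic clock each pass,
    // then finish with short sleeps so a late wakeup can't overshoot much.
    async function preciseDelay(totalMs) {
      const deadline = performance.now() + totalMs;
      let remaining = deadline - performance.now();
      while (remaining > 50) {
        await sleep(remaining * 0.98); // coarse phase
        remaining = deadline - performance.now();
      }
      while (performance.now() < deadline) {
        await sleep(1); // fine phase
      }
    }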


I guess the dream of programming the next heliopause probe in JavaScript is still a ways off hahaha! :)


But it appears that it is consistent with setTimeout’s behavior and therefore likely correct in the context in which it will be used.

At least if your definition of “correct” is “does the thing most similar to the thing I’m extending/replicating”. In fact you might believe it’s a bug to do otherwise, and JS (I’m no expert) doesn’t give a way to run off the event loop anyway (in all implementations). Although I’d be amused to see someone running even a 90 day timer in the browser. :)

I think a very precise timeout would want a different name, to distinguish it from setTimeout’s behavior.


That wouldn't work very well, because Date.now() isn't monotonic.


There is a monotonic time source available in JavaScript, though: https://developer.mozilla.org/en-US/docs/Web/API/Performance...

As I understand it, the precision of such timers has been limited a bit in browsers to mitigate some Spectre attacks (and maybe others), but I imagine it would still be fine for this purpose.
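
For illustration, a drift-correcting variant along the lines suggested upthread (a sketch, not the library's actual code; setBigTimeout is a hypothetical name):

    // Chain setTimeouts, but recompute the remainder from a monotonic
    // clock on every hop so event-loop lag doesn't accumulate.
    const MAX_REAL_DELAY = 2147483647; // 2^31 - 1 ms, setTimeout's cap

    function setBigTimeout(callback, delayMs) {
      const deadline = performance.now() + delayMs;
      (function hop() {
        const remaining = deadline - performance.now();
        if (remaining <= 0) return callback();
        setTimeout(hop, Math.min(remaining, MAX_REAL_DELAY));
      })();
    }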


I don't understand how an implementation detail means it isn't doing what is advertised?


Each subtracted timeout is a 25-day timer, so any accumulated error would be minuscule. In your example there would be a total of 2 setTimeouts called: one 25-day timer and one 15-day. I think the room for error with this approach is smaller, and it's much simpler than calculating the date delta and trying to account for daylight savings, leap days, etc. (but I don't know what setTimeout does with those either).

Or maybe I'm missing your point.
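
For what it's worth, the actual numbers, using setTimeout's real cap rather than a round 25 days:

    const MAX = 2147483647; // setTimeout's cap: 2^31 - 1 ms, ~24.86 days
    const fortyDays = 40 * 24 * 60 * 60 * 1000; // 3,456,000,000 ms
    console.log(Math.ceil(fortyDays / MAX)); // 2 chained timers
    console.log(fortyDays - MAX); // 1308516353 ms, ~15.14 days for the second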


You don’t need to take into account daylight savings or leap days when dealing with unixtime.


I mean, Brass Birmingham and many other high-ranking games would be rather poor choices for pick-up-and-play game nights with most groups (number 7 is Twilight Imperium, which takes 6 hours on the short end!). Indeed, a lot of them can be played as deeply as Chess or Go.

There's been some study of what "biases" the site has, which I personally think is rather uninteresting (what's the use of a global ranking without bias, after all?), but there's a lot more to it than what's easy to learn.


They also have a complexity score, so you can certainly search for 'find me the highest-rated game as simple as Monopoly'.


Yeah, the other category (I mentioned in my footnote-edit) is giant games that you dedicate a large part of a day to. Diplomacy, Twilight Imperium, that stuff. The two ideal gaming-situations for BGG-type gamers are multi-game game nights, and gatherings to play a single round of gigantic games that they can never get their more-normal casual game night enjoyer friends to play with them :-)

Further, you see a lot of "This game has seen tons of play at our table! Maybe 100 times!", not like chess where 100 matches is something someone who's barely even interested in chess may achieve by accident (I bet I've played 200+ matches in my life, and I'm not really that into chess, don't find it as fun as probably most other board games I've played, and remain entirely terrible at it—and I mean it, even chess programs set to stupid-mode so they only look one move ahead get me about half the time, because I reliably blunder badly at least once per match and they catch it every single time). It's just a very different crowd than the dive-very-deep-into-one-game sorts that might rate whichever game they've chosen to do that with as #1 and aren't even really looking around for other games.

There are exceptions in the rankings (it's not absolute), but mid-weight game night games that play something in the 4-8 range, good lighter filler games for game night, and enormous this-is-your-whole-day games tend to be the ones that do well, assuming they're also, like, actually good for what they are. That's why super-famous games like chess aren't higher than they are (if chess were just invented today I bet it'd struggle to break the top 5,000—"Two stars, some of the variant rules are OK but ultimately if you want an abstract two-player game on a grid, you're better off with GIPF, and the knife-fight tension and wonderful portability of something like Hive just isn't present here; if you want a game with theme but don't really care about it connecting well with play—which this game clearly doesn't—just get Hive. Also they should print the piece layout and move sets on the board; it's hard to remember all that stuff, and it's not like that space is used for attractive artwork or anything mechanically relevant except the grid anyway.")

