The cafeteria manager likely had no idea they were running out of forks. A simple email or phone call to the cafeteria department would have probably solved the problem, since managers are likely looking for ideas of things to buy, especially if they've been recommended by customers (students).
I disagree. I assume the manager was quite aware, but whenever he had asked his own boss for resources in the past, they refused and implied that he should be trying to use even fewer resources.
After your manager makes it clear multiple times that they do not want to spend any money, even on things that are important, many sane people will stop caring.
He may have told the boss multiple times that they were running out of forks, and the boss said "well, we have over 200 forks, that should be enough".
But in the end, I blame the higher-level manager. Stupidity, poor communication, stinginess: these are all very common. The workers, even first-line managers, can only fight so much against it.
The first manager may have already bought a ton of spoons on his own.
Yeah, the idea that a college dining hall administrator is just sitting around desperately wishing he could spend more money but not knowing what on earth to spend it on unless he receives an email from a random student to whom he has zero accountability strikes me as far less likely than your theory.
Especially so for something that is certainly a cost centre, not something marketable. I would expect something like a cafeteria to be run on the bare essentials after the initial build phase. Spending money there is not marketable, nor does it raise the profile of the people involved.
I think people are amazed and impressed that bugs more than 6 months old aren't just auto-closed with "There have been several new versions since this bug was filed; please check if this still exists on the latest version, and if so file a new bug."
In spoken word, sarcasm in even the cleanest context always has the benefit of delivery paving the way for virtually guaranteed reception. A written indication does look silly when reception succeeds without its help, but guaranteed parity with speech seems like the winning maneuver.
The whole point of sarcasm is to say something you don't mean, and people who know you (or the relevant facts) well enough get a dopamine hit for being able to infer that. It's just an ingroup/bonding thing and carries no information or value outside of that. At any rate, adding a /s is like saying "just being sarcastic"; might as well not be sarcastic in the first place.
I think what you're describing is a subset of sarcasm, or potentially a separate thing that often utilizes sarcasm (significant overlap). A friend of mine will often suffix such statements in speech with "if you know you know" in the event that the delivery alone isn't sufficient to trigger everyone's memory, and "IYKYK" in text.
Pure sarcasm has no such prerequisite, which means that delivery alone (tone, cadence, and smirk) is almost always sufficient. These give away the fact that sarcasm is happening to roughly the same extent that "/s" does, just that we have more experience with the former so the latter tends to feel too spoon-fed.
Poe’s Law is vastly over-invoked. In any social setting where some amount of context can be assumed, it doesn’t apply.
And even if someone wanted to challenge me on that, I’d argue that the tiny number of people who would think “HN is a tech support site” is being said seriously aren’t worth the two seconds it takes to type “/s”.
The time it takes to type `/s` is a decent bit lower than two seconds, and yet the amount of time you've spent trying to argue that it's not worth it is vastly more.
I invoked Poe’s law sarcastically. I’m surprised you didn’t realize that, given how over-invoked it is here. I guess there’s always someone who doesn’t get the joke. /s. <—- shouldn’t be necessary, but is, obviously. /s <—- not sure if necessary here.
But this is exactly what I’m saying: if you’re sarcastic and the audience gets it, it’s good sarcasm. If you’re sarcastic and the audience doesn’t get it, it’s bad sarcasm. If you don’t have a clear enough context, just don’t be sarcastic.
There’s always someone who doesn’t get the joke. Even today, there are people who, at least at first, are fooled into thinking that Jonathan Swift’s “A Modest Proposal” was a serious proposal.
It's a footgun that results from JS's loose typing of functions, allowing them both to be called with discarded arguments and to be flexible in the number of arguments they accept.
While each of those flexibilities can be useful, they interact in annoying ways. The fact that map passes three arguments but is often used as if it passed one, and that parseInt accepts two arguments but is often used as if it accepted one, makes it very easy to make this mistake.
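For concreteness, here is the classic shape of the mistake (a minimal sketch; map invokes the callback with value, index, and array, so the index lands in parseInt's radix parameter):

    ["10", "10", "10"].map(parseInt);
    // -> [10, NaN, 2]
    // parseInt("10", 0) treats radix 0 as 10, radix 1 is invalid, and "10" in base 2 is 2

    // Passing the radix explicitly (or wrapping the call) avoids the footgun:
    ["10", "10", "10"].map(s => parseInt(s, 10)); // -> [10, 10, 10]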
> It's a footgun that results from JS's loose typing of functions, allowing them both to be called with discarded arguments and to be flexible in the number of arguments they accept.
This just goes to show how one can use a language without actually understanding its core semantics. Additional arguments aren't "discarded" at all: they're still available to the called function via `arguments`; they're just not required to be assigned a name in the function signature.
Basically, a JavaScript function fn() declaration is equivalent to Python's def fn(*args): declaration, with the option to assign a name to positional arguments. Any named positional argument that is not provided by the caller is simply left uninitialized, i.e. undefined.
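A quick illustration (a minimal sketch, plain JS):

    function fn(a, b) {
      // extra arguments aren't discarded; they stay reachable via `arguments`
      console.log(a, b, arguments.length, arguments[2]);
    }
    fn(1, 2, 3); // 1 2 3 3
    fn(1);       // 1 undefined 1 undefined  (b is simply left undefined)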
It's a core concept of the language and I'm always puzzled when non-beginners struggle with this. That's also a very good reason to not use vanilla JavaScript at all and skip straight to TypeScript and its brethren instead.
> It helps avoiding number of arguments problems, though
This issue is all about the number-of-arguments problem, and TypeScript (in cases like this) won't flag it, which I think it should.
If you manually unroll the "map" and call parseInt() explicitly with the same arguments that map() calls parseInt() with, TypeScript will flag that. But not when map() is in the picture.
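Roughly like this (a sketch; the exact error wording may vary by TypeScript version):

    const arr = ["10", "11", "12"];
    // Unrolled, the extra argument is rejected:
    parseInt(arr[0], 0, arr); // error TS2554: Expected 1-2 arguments, but got 3.
    // But routed through map(), the arity mismatch is accepted silently:
    arr.map(parseInt); // no error in stock TypeScript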
I understand why they chose to do that, but I still disagree with it.
he he
I tried to fix this in TypeScript, but it looks like they do not care.
So I made and use (for my projects) my own version of TypeScript ;-)
https://www.npmjs.com/package/@topce/typescript/v/5.1.6
There you would indeed get a compile-time error:
    error TS2345: Argument of type '(string: string, radix?: number) => number' is not assignable to parameter of type '(value: string, index: number, array: string[]) => number'.
      Target signature provides too few arguments. Expected 3, but got 2.
I'm curious - have you been using this for long? Have you noticed any code which this flagged where you thought it was being too strict? I feel like I would want this flag on all of the time, but I'm curious if there are edge cases that I have not considered but maybe you have experienced.
Not for long, just a few days.

It could be too strict; that's probably why they rejected the PR. But it depends on the callback definition. I proposed it as a flag, turned off by default, because it is a breaking change.

But since they rejected the PR, in my fork I removed the flag, and in the latest version it is always on. No flag.

I changed one of my repos to use it and needed to patch some library definitions of callbacks to be more strict, and also to build with TypeScript's "skipLibCheck": true.

If I do not want to change existing code to add parameters for each callback, I use the trick below:
    type JavaScriptCallback<
      T extends (...args: any) => any,
      P = Parameters<T>
    > = P extends [...infer Rest, infer _Last]
      ? ((...args: Rest) => ReturnType<T>) | JavaScriptCallback<T, Rest>
      : T;
and then it is more like standard TypeScript: it would not complain about parseInt, because I redefined the typedef of map to accept 0, 1, 2, or 3 parameters. But I am in control.

The only edge cases: in some callbacks, with the strict option turned on, tsc complains that the type is any; then I add a type.

It is experimental; I would prefer if they added it as an option. The change is just in the checker; the emitted JavaScript is still the same. As always, there are some trade-offs. But for me it works, so far so good ;-)
No problem
so basically you can fix the errors in two ways: add parameters that are not used, for example _index and _array, or override the callback type definition by wrapping it in JavaScriptCallback.
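To make the second option concrete, here is a hypothetical sketch of the trick in use (the MapCallback alias is my own name for illustration, not something from the fork):

    // Accept the full 3-parameter callback or any shorter prefix of it:
    type MapCallback<T, U> = JavaScriptCallback<(value: T, index: number, array: T[]) => U>;

    // parseInt matches the (value, index) member of the union,
    // so it type-checks again without touching any call sites:
    const cb: MapCallback<string, number> = parseInt;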
Every language that supports default and variable arguments does this. This includes Python, C, and C++, and it has absolutely zero to do with "loose typing" in this case.
You can say "well feature X exists elsewhere" or "library function Y exists elsewhere", but only JS makes the collective design choices that cause this phenomenon.
Good, bad? IDK, that's subjective. But unique? Certainly.
Oh, I definitely could post an equivalent example in Python and C. There's simply not enough space on the sidebar to do it ;)
If you ever, like at all, had the pleasure of using any Python library without type hints - past iterations of numpy and in particular matplotlib come to mind - you'd be blown away by the amount of
def some_function(*args, **kwargs)
Have fun trying to figure those out even with an API reference at hand. Fun times.
Also C does have equivalent syntax, namely variadic functions in combination with function pointer arguments. You can design all kinds of crazy interfaces with that, too, which will exhibit strange and unexpected behaviours if used incorrectly. Heck, name a single C noob who didn't cause a segfault while trying to read a number using sscanf().
When it comes to stdlib interface design, C is actually much more offensive than JS :) strtok() comes to mind. More footguns than stars in the sky in that one innocent-seeming function alone. And don't even get me started with the C++ STL...
So no, it's not just JS that makes design choices that seem odd and unintuitive - you'll find them in every language that is actually in widespread use and "battle tested". It's funny to me how some people try to single out JS in that regard, even though it's in no way special when it comes to this.
JS discarding extra positional arguments (and that being leveraged in map and similar methods) seems to be the unique bit. I don’t think any other popular language does that.
The combination with the design of common-across-languages iterator methods like map, filter, and reduce is uncommon. (Nor do other languages do it implicitly for all functions.)
You're discussing this as if the critical "gotcha" was with `parseInt`. It's not the critical gotcha here. The critical one is actually with the semantics of `map` which behaves differently in JS compared to just about every other language with a map (or equivalent) which passes only the elements of the sequence (or sequences in some cases) to the function. See Python, Ruby, C++ (transform is the nearest direct match), Rust, Common Lisp, Scheme, etc. Using the example with `parseInt` just demonstrates this strange, and unique, design decision.
JS has this semantic and the interfaces are designed with that in mind.
Take C# for example. There is a List<T>.ForEach() function that passes the value to the delegate. Fine. But what if I require the index as well? I can't use List<T>.ForEach() in that case, because ignoring extra arguments is not part of C# function call semantics.
The interface of map() has been designed to be as flexible as possible to cover use cases outside of just passing the current value. This matches perfectly with JS function call semantics that allow any number of arguments by default.
Why doesn't parseInt() *always* require two parameters? Why is one design decision "strange and unique" (e.g. map() passing 3 arguments) while the other (parseInt() accepting one or two arguments) is perfectly fine? I also would be careful with citing the STL as being sound when it comes to its interface design :)
JS is different from other languages - big whoop. The same complaints about seemingly strange design decisions can be made for any sufficiently old and widespread language - JS isn't unique in this regard.
There is actually annoying strangeness in other things, e.g. confusing type conversions in places (("" == false) === true; [1, 2, 3] + [4, 5, 6] === "1,2,34,5,6"), which is why "==" should never be used if you want to retain your sanity.
The interfaces and standard functions, however, are as offensive or inoffensive as those in most other languages.
No, it doesn't. The S&P 500 basically is the market. Any difference between it and the rest of US investable equities is marginal at best.
It's literally asking "How do these roughly 500, plus or minus, largest stocks correlate with largely the same basket of securities?"
If you took one company and compared it against the total US market, then you have something at least. If you take the market itself and compare it against itself, it makes no sense.
> If you took one company and compared it against the total US market, then you have something at least.
This is a common metric referred to as market beta.
> If you take the market itself and compare it against itself, it makes no sense.
S&P 500 and Russell 2000 are both broad market indices, but they perform differently. Even using the same index constituents with different weightings (e.g. equal vs cap weighted) can produce meaningfully different results. It makes plenty of sense to compare them.
I don't think you understand why multiple broad market indices exist in the first place. It has nothing to do with "performance" and everything to do with licensing fees when institutional organizations want to track against the market.
No one. No one is seriously looking at the Russell 3000 and comparing it to the Wilshire 5000.
To the child comment:
I'm talking about two total market indices, apples to apples, and you're talking about apples to oranges. NASDAQ is a stock exchange, not an index.
Further still, even if you were talking about the NASDAQ composite, they're two entirely different indices. They do not have the same goals. There might be some relative meaning there.
There isn't much meaning between comparing two or more indices that have the same goals and constituents. You're only tracking the differences between constituents at that point, and you wouldn't need beta to do that. You could just calculate the difference between the constituents that are not a part of the total set.
> No one. No one is seriously looking at the Russell 3000 and comparing it to the Wilshire 5000.
People compare indices all the time to assess relative performance (e.g. Russell to NASDAQ, or HY credit to IG).
Response to edits:
> I'm talking about two total market indices, apples to apples, and you're talking about apples to oranges. NASDAQ is a stock exchange, not an index.
NASDAQ composite is one of the most commonly referenced indices. There's also NASDAQ 100, on which one of the largest ETFs in the world (QQQ) is based.
> Further still, even if you were talking about the NASDAQ composite, they're two entirely different indices. They do not have the same goals.
That's why people compare indices -- they're interested in evaluating different aspects of the market.
> There isn't much meaning between comparing two or more indices that have the same goals and constituents. You're only tracking the differences between constituents at that point, and you wouldn't need beta to do that. You could just calculate the difference between the constituents that are not a part of the total set.
Weightings? An equal weighted index will reflect increased contribution from smaller companies versus a market cap weighting (compare ETFs SPY/RSP). Same constituents, different emphasis.
Index funds have tracking error, so it absolutely makes sense to ask for the ten index funds most correlated to the S&P 500. I'd expect a Vanguard S&P 500 ETF or similar to show up in the list, along with some competitive products.
Also, roboadvisors like Wealthfront trade strategies based on negative correlations within and between market indexes.
Reread my comment. I didn't say it was the market. I said it _basically is_ the market. See my comments about FT Wilshire and FTSE Russell.
Regrettably, I shouldn't have bothered replying at all; some HN readers think beta is a fundamental measure. It is not. Many sources get this wrong for some reason.
If you are looking at price movements, you are doing technical, not fundamental analysis.
Well, I guess 80% of the market isn't basically the market by your definition, is it, and every major institution that uses the S&P 500 as a proxy for the US market is wrong, right? OK. Sure.
That still makes sense. Something can be more or less correlated with the broader market. It's very useful, actually: you can use securities that tend to not be correlated with the broader market to build a portfolio with less average variance.
Yeah, I understand the concept; it's just that the fact that people can graph something doesn't mean it has any tangible meaning beyond playing with the numbers that come out of it.
It's a concept that fits nicely in the category of technical analysis, but not one that allows you to find any particular market insights.
If you took the whole tradable US securities market and said "what doesn't correlate with these price movements?" it doesn't mean anything.
If it did, I would ask you, "What explicitly do you _think_ this means?" Just because you get numbers in and out doesn't mean you're working with anything meaningful.
The only broadly meaningful concept you can get out of anti-correlation with the market would be bonds, because rising interest rates push down the present value of publicly traded companies' estimated future cash flows, affecting their valuations.
The correlation of an asset to the market is the definition of 'beta' and it is not technical analysis (from investopedia):
"Beta is a measure used in fundamental analysis to determine the volatility of an asset or portfolio in relation to the overall market."
"Technical analysis is a trading discipline employed to evaluate investments and identify trading opportunities in price trends and patterns seen on charts."
I know what Investopedia says, but there's nothing fundamental about beta. Beta is the "measure of how an individual asset moves (on average) when the overall stock market increases or decreases."
That's just looking at pricing. It's the same category as technical analysis.
> If you took the whole tradable US securities market and said "what doesn't correlate with these price movements?" it doesn't mean anything.
> If it did, I would ask you, "What explicitly do you _think_ this means?"
Caveat: I'm an idiot and just trying to understand from a layman's perspective.
Wouldn't you be able to use this to invest when you think the market, as a whole, is overvalued?
For example, if the tool showed that a 30-year treasury bond (or similar) was negatively correlated with the S&P 500, wouldn't it suggest it was a good idea to buy bonds (or similar) when I thought the overall equity market was overheated? The idea being there are certain industry/stocks like maybe precious metals/mining that do well when the rest of the market is tanking?
> It's a concept that fits nicely in the category of technical analysis, but not one that allows you to find any particular market insights.
It's useful for hedging risk.
> The only broadly meaningful concept you can get out of anti correlation with the market would be bonds because interest rates increasing push down the estimated future cash flows of publicly traded companies, affecting their valuations.
This is wrong. Consider pair trading or long/short strategies, both of which rely on estimating correlations.