> The National Bureau of Standards (now NIST) based the time standard of the US on quartz clocks between the 1930s and the 1960s, after which it transitioned to atomic clocks. In 1953, Longines deployed the first quartz movement. The wider use of quartz clock technology had to await the development of cheap semiconductor digital logic in the 1960s. ... In 1966, prototypes of the world's first quartz pocket watch were unveiled by Seiko and Longines in the Neuchâtel Observatory's 1966 competition.
But certainly it is common for today's technological miracle to become tomorrow's pointless novelty.
I don't know how often this happens in Western mega corporations (which is probably why you're surprised).
And you obviously haven't been to Japan. Yes, long hours, but your employer takes care of most aspects of your life: healthcare, your apartment, marriage, your kids' education, everything. See this article by patio11: https://www.kalzumeus.com/2014/11/07/doing-business-in-japan...
In the non-crappy businesses in India, other people — coworkers, bosses, etc. — taking care of you is routine. And — okay, this is a terrible example — in the unfortunate situation that somebody dies, trust me, random people just help the family with the funeral and everything that goes with it.
What I'm trying to say is this: the rest of the world isn't filled with morons and cattle.
Oh and by the way, across the world people want quality products. Even people from — gasp — the global South. We too want our fridges and washing machines and TVs and cars and everything else to last forever.
>>> I think it would help if you link "NeSy computation engine". I'm actually not familiar with this (not in the symbolic world, but interested. Just never had time, so if you got links here I'd personally appreciate it). I can find the workshop but not the engine.
I linked to it in my previous comment. I'm referring to the "NeSy computation engine" described here. I didn't know there was a "NeSy" workshop, and this paper was my first encounter with the term.
I think it's interesting that you mention the symbolic world like it is separate from some other world. There's the AI that was and the AI that is today. There's the AI over there and the AI over here. Whenever you hear someone mention "symbolic" in the context of AI, go ahead and grab a chair, because immediately after this they are going to talk about Cyc and John McCarthy for at least 20 minutes. If you're lucky they might throw some Prolog in there.
I don't think this is productive and I don't think there is another symbolic world. There is just the world. There are certain things in the world for which a numeric, directional representation makes sense. There are other things for which it makes no sense at all. It's my view that primitives in language are one of these things. Additionally, there are certain places where it makes sense to consider these representational approaches and other places where it only makes political sense. Lastly, there are symbols - atomic primitives - and there are "symbols," objects with vectors in them and who knows what else.
What's striking to me about this paper is the coverage of formal grammars and semantic parsing entirely within the context of domain adaptation. Definitely the best part is the coverage of compositionality (https://ncatlab.org/nlab/show/compositionality) in the context of composing computational graphs. This is striking to me because all of these things (except domain adaptation) are essential to any reasonable theory of meaning but they are covered as if they've been repurposed for the practical application of populating Google Knowledge Panels, which I believe is exactly what happened. Check out the definitions of semantic parser and symbol.
>> Domain adaptation across verticals
>>> This is also a bit vague and so I'm not sure what you _specifically_ mean.
Crude semantic attributes pulled from character sequences and mapped onto the latent space of images have utility in business contexts if the mapping for some term sufficiently distinguishes it from the mapping of another term with the same surface form. It ends there. GloVe was a half-baked representation of meaning in language when it was adapted from word2vec in 2014. GPT-2 grabbed the torch in 2019. It still doesn't work. Well, it sometimes works for adapting a general model to a specific domain such as a business vertical, but only in a crude and superficial way. Note that almost no ML research today discusses this representational issue at all, and that almost all ML research takes this representation as a starting point. Even when papers do publish hyperparameters, say in an appendix, the vocab size and the dimensionality of the embedding space often aren't considered worth mentioning. That's fine, I guess, because they don't mean anything anyway, but not talking about this, in my view, is not fine.
Check out the Mamba paper for example. Like most of ML research today the focus is on optimization. The representation problem has been solved so there's no need to talk about it: we map everything onto the latent space of images because short-form video content rules the day and that's how dude is gonna hit his 7T: advertising (https://blog.seanbethard.net/five-epistemes/).
>>> I think the problem is that math is taught by a game of telephone.
I think that, for language, the ML research community is, by and large, not even using the right maths.
>>> But that said, I still think vectors can do a lot. Especially since vectors and functions are interchangeable representations. Though I think we need to do a lot more to ensure that networks are capable of learning things like equivariance and importantly abstract concepts.
Thank you so much for highlighting the importance of equivariance. I think it's a crucial concept for work at cross-modal interfaces, especially in the context of the Curry-Howard correspondence, or, more recently, the Curry-Howard-Lambek correspondence. Right now the ML (CV) research community is labeling nouns with bounding boxes... lol. If that doesn't illustrate that multimodal work is a vision-first enterprise, I don't know what will.
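For anyone reading along, equivariance just means that transforming the input and then applying a map gives the same result as applying the map and then transforming the output. A minimal sketch (my own toy example, not from any paper in this thread): circular 1-D convolution is equivariant to cyclic shifts.

```python
import numpy as np

def conv1d(x, k):
    # Circular ('wrap-around') convolution so input and output shapes match.
    n = len(x)
    return np.array([sum(x[(i - j) % n] * k[j] for j in range(len(k)))
                     for i in range(n)])

def shift(x, s):
    # Cyclic shift by s positions.
    return np.roll(x, s)

x = np.array([0., 1., 2., 3., 0., 0.])
k = np.array([1., -1., 0.5])

lhs = conv1d(shift(x, 2), k)   # transform input, then apply the map
rhs = shift(conv1d(x, k), 2)   # apply the map, then transform output

# Equivariance: f(g(x)) == g(f(x)) for every cyclic shift g.
assert np.allclose(lhs, rhs)
```

This is the property convolutional layers get for free with translation; the open question is how to get analogous guarantees for the transformations that matter in language and at cross-modal interfaces.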
>>> I think a bit too exaggerated but hey, I've been known to say that ML research is captured by industry and we're railroading everything. And that it is silly we publish papers on GPT when we don't have the models in hand as it just becomes free work for OpenAI and we can't verify the works because OAI will change things.
Check out the evaluation criteria in that "NeSy" paper, especially the metric that's supposed to tell you something about what the system was designed to do. I'm sure OpenAI is happy to have this info about their system.
>>> But I also don't know what you mean by "IR".
Ten years ago I considered NLP adjacent to information retrieval. Today I consider it part of information retrieval. There's very little work published today that suggests otherwise.
>>> Honestly I don't know what you mean by this. But if you are saying that the divide we create like NLP vs CV is dumb, then I'm all with you.
It is not my intention at all to create or highlight any divide. If there is indeed a known divide between CV and NLP I don't know anything about it, I don't want to know anything about it and it's not surprising.
>>> I also think it's silly how we call generative models. Aren't all models generative?
Generative refers to a situation where you begin with a finite set of things and productively form any number of well-formed expressions from them.
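To make that concrete, here's a toy sketch (my own illustration, with made-up rules): a finite set of rewrite rules that productively yields well-formed sentences, which is "generative" in the formal-grammar sense.

```python
# A tiny context-free grammar: finitely many symbols and rules,
# unboundedly many well-formed expressions (bounded here only by depth).
RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["NP", "PP"]],   # NP -> NP PP makes it recursive
    "VP": [["V", "NP"]],
    "PP": [["with", "NP"]],
    "N":  [["cat"], ["dog"]],
    "V":  [["sees"]],
}

def expand(symbols, depth=0, max_depth=8):
    """Yield terminal strings derivable from a symbol sequence (leftmost derivations)."""
    if depth > max_depth:
        return
    for i, sym in enumerate(symbols):
        if sym in RULES:
            for rhs in RULES[sym]:
                yield from expand(symbols[:i] + rhs + symbols[i + 1:],
                                  depth + 1, max_depth)
            return  # only rewrite the leftmost nonterminal
    yield " ".join(symbols)  # no nonterminals left: a well-formed sentence

sentences = set(expand(["S"]))
# e.g. "the cat sees the dog"; raise max_depth and the recursive NP rule
# produces longer sentences like "the cat with the dog sees the cat".
```

The point is just that the generative capacity comes from the rules, not from having seen every output in advance.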
>>> That includes me, and even I have a hard time parsing what you're saying and it doesn't help with the side snipes like URB-E scooters.
I'll take potshots at the Paul Grahams and Steve Jobs of the world every day and not lose any sleep over it. If they take their AirPods out of their ears maybe they'll hear me coming.
>>> But if I'm right and you need more than scale, then we better keep working because I'd rather not have another AI winter.
All I have to say about scaling is that, for language, I hope it's clear by now that more data and more params are not going to improve the situation. I can see how this is almost never the case for vision.
Damn it, somebody said AI winter again. You aren't going to start talking about Cyc and McCarthy for 20 minutes now, are you?
>>> I also think it is quite odd for these companies to not be hedging their bets a little and more strongly funding other avenues.
The formula works.
>>> At this point, all I'm trying to get people around me in ML to understand is how nuance matters. That alone is a difficult battle. I'm just told to throw compute at a problem and data with no concern to the quality of that data. It does not matter how much proof I generate to show that a model is overfit, as long as the validation loss doesn't diverge, they don't believe me.
I'm interested in learning more about what you mean by nuance.
Probably just needs more compute and data. Just throw some synthetic data in there and call it.
> Rust’s rich type system and ownership model guarantee memory-safety and thread-safety — enabling you to eliminate many classes of bugs at compile-time.
Note that it explicitly mentions "memory" safety and "thread" safety, which have specific but reasonable definitions in Rust (for example, memory safety doesn't cover memory leaks). It also explicitly names the parts of Rust responsible for those guarantees, namely the "type system" and the "ownership model", and it is reasonable to claim that an ideal implementation of both would indeed completely achieve such guarantees. To be clear, the current implementation is close enough to that ideal to make the claim meaningful, but there is always a difference between the ideal and the practice.
It was great, then it focused on monetization at the cost of other things.
I still kept it because I could simply share a handle and talk with strangers over the internet on my phone/laptop. There won't be that need anymore.
I hope this feature puts pressure on WhatsApp to implement the same.
There's a good chance not enough people are willing to pay for search and this never becomes a problem.
Disclaimer: happy paying customer.
On the upside, startups don't have equity you can trade, so you're unlikely to demonstrate disloyalty this way.
And to be fair, if you go work for a startup you know what you're getting into. (And hopefully you know that the equity they give you has about the same value as a lottery ticket.)
I hate god for making LLMs work…
In the future if the Turing test is fully passed I think morality will boil down to preferring humans if and only if it’s a situation of competing scarcities.
In the case of "showing pictures to computer people", showing pictures to a computer doesn't prevent you from showing them to a human, so this framework implies no moral justification for a human preference.
Are you seriously going to argue that, say, Bach or Shakespeare were not all that creative?
The government was always going to both demand and fund local companies creating a competing product, and it was already doing so long before sanctions arrived. Nothing about the sanctions sped up the process in the least. The Western sanctions had almost zero bearing on accelerating local chip production, but they did do a GREAT job of limiting China's ability to buy the hardware required for advanced fabrication.
You also seem to believe that a "monopoly" in China is anything remotely like one in the West. It's not a free market: if a company in China tries to gouge the market, or take it in a direction that's not approved at the expense of what the party wants, its executive leadership will quickly find themselves on the outside looking in (if they're lucky) or sitting in a jail cell (if they aren't). See: Jack Ma.
Semiconductors were core to the "Made in China" strategy the party was pushing in 2014...
Also, you need some way to log in to your account, so you need an identifier and some way to validate that you are the owner of that identity. On top of that, you want to prevent spam. So I think the choice to use a phone number as the identifier for a text-messaging app that is meant to be a secure replacement for SMS is not that weird.
But let's say they are data hoarding our phone numbers, and they can get other details about us through the black market because we use other more insecure services where we suddenly don't seem to care about privacy. Then what do you think Signal does with this data? They can't resell it because they don't have anything unique, they actually need to invest money to link their database of just phone numbers to something else. And then? What malicious things will they be able to do?
Calling it now, within 5 years American police will start experimenting with EEG helmets for detecting impairment.
EDIT: also, implants are ridiculously unnecessary and dangerous. And that's just for sensing - please please please never let a Silicon Valley company modulate your brain. That's sci-fi lesson #1, above "don't do clones" and "time travel is bad"
I _love_ those. When I'm trying to configure my modem, nothing gets my nether regions tingly faster than
(1) Visiting the documented configuration site on the back of the modem
(2) It 404s in a new and excitingly different way from every other modem you've owned
(3) After visiting a nearby business with working internet, the helpful interblags populace suggests you need an app, for ... reasons
(3a) At least in the old days, that was just a blatant money grab (harvest your location history and contact list, spam notifications, and so on). Now with halfway decent permissions it's less clear why they're such a-holes about it.
(4) You have the foresight to install the app while internet is available.
(5) Your phone is 2 years old, and the manufacturer views bare-minimum due-diligence updates as a liability and hasn't packaged the latest OS updates. Google simultaneously doesn't trust you to update your phone without somebody-other-than-them taking the liability and also highly recommends all their app developers to only target the very latest Android version. None of the modem-makeitbetter-thingamawhutts work on your phone because 2yo devices are practically fossil fuels in the making at this point.
(6) You surf the interblags for an old version of the modem-makeitbetter-thingamawhutts. After searching long enough, you have a >50% chance of having found the real archive and not a virus, since Google refuses to host old APKs.
(7) By sheer luck, that suffices, and your modem manufacturer doesn't have one of those "your thingamawhutt version is too old" screens in their app.
(8) You go back home, where it's safe and all is well, and when you try to configure your modem you find that the app only works if it can phone home to the mothership. That's a fine thing to want to do, except the only reason you're using the blasted app is because you don't have internet at home and would like to rectify the deficiency.
(9) After a bit of contemplation on the life choices that led you to this moment, you recall that your wife's phone can produce a hotspot that might remedy the immediate clusterfuck.
(10) That doesn't work. Her phone has internet and working hardware, but Google has decided to ask her cell provider if she's allowed to share the hardware she purchased with nearby devices for a single crisis moment. Her cell provider says that she can do anything she would like with the device she paid for and owns, so long as they also get paid repeatedly.
And so on. I'm having fun, reply if you want me to continue the saga, but IME an app is a very poor choice for configuring a modem, and that sort of bullshit has bitten me in more ways than I can count. Even when you can get the damned thing to work, like OP mentions, the settings are nowhere near as nice as what the firmware natively supports.
> Local SSD disks are physically attached to the server that hosts your VM.
Disclosure: I work on GCE.