It's usually either another CRM, like Nutshell, Sugar, Zoho, or a dozen other CRMs, or pointing out that what they built out on Salesforce is another type of app altogether, and directing them to a proper app for that market. The latter type tends to be ticket systems, document management systems, and, in the case of one company, an EMR. The EMR they used was open source but did very little. They liked the "free" aspect of it, but they were paying $400k/yr on Salesforce to do all the things their EMR didn't, as well as on another app platform they were building out. I pointed out they were spending $600k/yr on an EMR, not zero dollars. The COO pitched a HUGE fit, but they did move to a $200k/yr EMR that did everything, and they saved tremendous amounts of time in clinic and money in administrator salaries.
My hot take is that the root of this issue is that the destructor side of RAII in general is a bad idea. That is, registering custom code in destructors and running them invisibly, implicitly, maybe sometimes but only if you're polite, is not and never was a good pattern.
This pattern causes issues all over the place: in C++ with headaches around destruction failure and exceptions; in C++ with confusing semantics re: destruction of incompletely-initialized things; in Rust with "async drop"; in Rust (and all equivalent APIs) in situations like the one in this article, wherein failure to remember to clean up resources on IO multiplexer cancellation causes trouble; in Java and other GC-ful languages where custom destructors create confusion and bugs around when (if ever) destruction code actually runs.
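To make the Rust case concrete, here's a minimal sketch of the pattern I'm criticizing (the TempFile type and path are made up for illustration): the cleanup code runs invisibly at scope exit, and there's no channel for reporting failure back to the caller.

    use std::path::PathBuf;

    // Hypothetical wrapper type, just to illustrate the pattern.
    struct TempFile {
        path: PathBuf,
    }

    impl Drop for TempFile {
        fn drop(&mut self) {
            // Runs implicitly whenever a TempFile goes out of scope (or a
            // future holding one is cancelled). There is no way to return an
            // error from here; failures can only be ignored, logged, or
            // panicked on.
            let _ = std::fs::remove_file(&self.path);
        }
    }

    fn main() {
        let _tmp = TempFile { path: PathBuf::from("/tmp/scratch.dat") };
        // ... work with the file ...
    } // <- cleanup code runs invisibly here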
Ironically, two of my least favorite programming languages are examples of ways to mitigate this issue: Golang and JavaScript runtimes:
Golang provides "defer", which, when promoted widely enough as an idiom, makes destructor semantics explicit, with simple and consistent error handling. "defer" doesn't actually solve the problem of leaks/partial state being left around, but it gives people an obvious way to solve it themselves by hand.
JavaScript runtimes go to a similar extreme: no custom destructors, and a stdlib/runtime so restrictive and thick (vis-a-vis IO primitives like sockets and weird in-memory states) that it's hard for users to even get into sticky situations related to auto-destruction.
Zig also does a decent job here, but only with memory allocators (which are ironically one of the few resource types that can be handled automatically in most cases).
I feel like Rust could have been the definitive solution to RAII-destruction-related issues, but chose instead to double down on the C++ approach to its detriment. Specifically, because Rust has so much compile-time metadata attached to values in the program (mutability-or-not, unsafety-or-not, movability/copyability/etc.), I often imagine a path-not-taken in which automatic destruction (and custom automatic destructor code) was only allowed for types and destructors that provably interacted only with in-user-memory state. Things referencing other state could be detected at compile time and required to deal with that state specifically in non-automatic-destructor code (think Python context managers or drop handles).
I don't think that world would honestly be too different from the one we live in. The Rust runtime wouldn't have to get much thicker--we'd have to tag data returned from syscalls that don't imply the existence of cleanup-required state (e.g. select(2), and allocator calls), since we could still automatically run destructors that only interact with cleanup-safe, user-memory-only values; untagged data (whether from e.g. fopen(3), an unsafe/opaque FFI call, or an asm! block) would require explicit manual destruction.
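A rough sketch, in today's Rust, of what that explicit non-automatic teardown might look like in practice--closer to a Python context manager than to Drop. The ManagedFile name and API are made up for illustration, and std::fs::File still closes itself on drop regardless; this only sketches the calling convention.

    use std::fs::File;
    use std::io::{self, Write};

    // Hypothetical wrapper around an external resource. The idea: no Drop
    // impl doing real work; teardown is an explicit, fallible call.
    struct ManagedFile {
        inner: File,
    }

    impl ManagedFile {
        fn create(path: &str) -> io::Result<Self> {
            Ok(ManagedFile { inner: File::create(path)? })
        }

        fn write_all(&mut self, buf: &[u8]) -> io::Result<()> {
            self.inner.write_all(buf)
        }

        // Explicit teardown: consumes the value and hands any error back to
        // the caller instead of swallowing it inside an implicit destructor.
        fn close(mut self) -> io::Result<()> {
            self.inner.flush()?;
            self.inner.sync_all()
        }
    }

    fn write_report(path: &str) -> io::Result<()> {
        let mut f = ManagedFile::create(path)?;
        f.write_all(b"hello\n")?;
        f.close() // teardown is a normal call site with normal error handling
    }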
This wouldn't solve all problems. Memory leaks would still be possible. Automatic memory-only destructors would still risk lockups due to e.g. pagefaults/CoW dirtying or infinite loops, and could still crash. But it would "head off at the pass" tons of issues--not just the one in the article! Side-effectful functions would become much more explicit (and not as easily concealable with if-error-panic-internally); library authors would be encouraged to separate out external-state-containing structs from user-memory-state-containing ones; destructor errors would become synonymous with specific programmer errors related to in-memory twiddling (e.g. out of bounds accesses) rather than failures to account for every possible state of an external resource; the surface area for challenges like "async drop" would be massively reduced or sidestepped entirely by removing the need for asynchronous destructors; destructor-related crash information would be easier to obtain even in non-unwinding environments...
Depends. If those employees truly are excess and they haven't been doing much for the past couple years, they might just be producing tech debt. Cancelled projects and migrations have negative value. I doubt this entire 20% was made redundant overnight, which means they haven't been valuable for some time.
So what have you experienced specifically? It should come as no surprise that telling people that "the powers that be are reading your thoughts and erasing your memories" is met with incredulity. That's a pretty big claim that doesn't mesh with the experiences and worldview of almost everyone.
I'm going to be frank, it sounds like paranoid schizophrenia. I hope you're doing okay stranger.
It's clear they are preparing for an acquisition. They are profitable with great gross margins and cash in the bank. With a 20% workforce reduction they are going to be very attractive to the Salesforces of the world.
I think there is only one thing we should focus on: measurable capability on tasks. Understanding, memorization, reasoning, etc. are all just shorthands we use to quickly convey an idea of capability on a kind of task. One can also attempt to describe mechanistically how the model works, but that is very difficult; that is where you would try to describe your sense of "understanding" rigorously. To keep it simple, for example: I think when you say that the LLM does not understand, what you must really mean is that you reckon its performance will quickly decay as the task gets more difficult along various dimensions (depth/complexity, verifiability of the result, length/duration/context size), to the degree that it is still far from being able to act as a labor-delivering agent.
So far I’m only reading comments here from people wowed by a lot of things the M3 pretty much already had. Not seeing anything new besides “a little bit better specs”.
If you're willing to play, there are plenty of lenders who will finance this purchase.
If it affects your earning power to that extent, you should probably pony up and save in the long run; it would likely take just a few years until you see returns.
Caste system usually can't be bypassed by paying a monthly subscription fee.
I will note that making it a subscription will tend to increase the overall costs, not decrease them. In an environment with ready access to credit, I think offering it on a subscription basis is worse for consumers?
Can you elaborate? Are those workflows queued, or can they serve multiple users in parallel?
I think it’s super interesting to know real-life workflows and performance of different LLMs and hardware, so please direct me to other resources if you can.
Thanks!
This is definitely tempting me to upgrade my M1 MacBook Pro. I think I have 400GB/s of memory bandwidth. I am wondering what specific number "over half a terabyte" refers to.
Everything you list is basically part of any display that is not bottom of the barrel.
Yes, it's a very good monitor, but it would be crazy otherwise considering the price. And it has one fatal flaw: you can only connect to it with USB-C/Thunderbolt, making it an almost Apple-only monitor, which is extremely annoying in the long run...
Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback - https://arxiv.org/abs/2305.14975
Huh? It's obvious to almost everyone that Dropbox is in a tough position in that it's largely failed to expand its offering beyond the very original product and value prop, which is also being eroded by the other major players.
So they attempted to use DB's position to leapfrog into new product categories, which require big spend on R&D and related teams, as well as new heads for supporting teams (sales, marketing, support, etc.).
It's not working, so they are pulling back and retrenching.
That all seems pretty transparently NOT a "having too much money" problem.
How is it that, despite the vast population, we no longer have geniuses like Newton, Leibniz, Gauss, Maxwell, Einstein, etc.? If they existed today, they would just be selling ads and stocks.