
It's usually either another CRM, like Nutshell, Sugar, Zoho, or a dozen others, or pointing out that what they built on Salesforce is another type of app altogether, and directing them to a proper app for that market. The latter type tends to be ticket systems, document management systems, and, in the case of one company, an EMR. The EMR they used was open source but did very little. They liked the "free" aspect of it, but they were paying $400k/yr on Salesforce to do all the things their EMR didn't, as well as for another app platform they were building out. I pointed out they were spending $600k/yr on an EMR, not zero dollars. The COO pitched a HUGE fit, but they did move to a $200k/yr EMR that did everything, and they saved tremendous amounts of time in clinic and money in administrator salaries.

The max memory depends on which tier of M4 chip you get. The M4 Max chip will let you configure up to 128 GB of RAM.

That's if you pick the M4 Pro chip SKU (Apple's naming conventions are admittedly unhelpful here). The M4 Max SKU supports up to 128 GB.

Right, the Nvidia card maxes out at 24 GB.

My hot take is that the root of this issue is that the destructor side of RAII in general is a bad idea. That is, registering custom code in destructors and running them invisibly, implicitly, maybe sometimes but only if you're polite, is not and never was a good pattern.

This pattern causes issues all over the place: in C++, with headaches around destruction failure and exceptions; in C++, with confusing semantics re: destruction of incompletely-initialized things; in Rust, with "async drop"; in Rust (and all equivalent APIs), in situations like the one in this article, wherein failure to remember to clean up resources on IO multiplexer cancellation causes trouble; and in Java and other GC-ful languages, where custom destructors create confusion and bugs around when (if ever) destruction code actually runs.
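To make that concrete, here's a minimal Rust sketch (the JournaledFile type is hypothetical, not from any real library) of the pattern being criticized: fallible cleanup runs invisibly at scope exit and has nowhere to report its error.

    use std::fs::File;
    use std::io::Write;

    // Hypothetical wrapper around an external resource (here just a file)
    // whose cleanup step is fallible.
    struct JournaledFile {
        inner: File,
    }

    impl Drop for JournaledFile {
        // Runs implicitly when the value goes out of scope: the caller
        // never sees this call, and there is nowhere to return an error.
        fn drop(&mut self) {
            // flush() can fail, but Drop::drop returns (), so the error
            // is silently discarded (or becomes a panic if unwrapped).
            let _ = self.inner.flush();
        }
    }

    fn write_entry(path: &str, entry: &str) -> std::io::Result<()> {
        let mut f = JournaledFile { inner: File::create(path)? };
        f.inner.write_all(entry.as_bytes())?;
        Ok(())
        // `f` is dropped here; any flush failure vanishes.
    }

    fn main() -> std::io::Result<()> {
        write_entry("journal.log", "hello\n")
    }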

Ironically, two of my least favorite programming languages are examples of ways to mitigate this issue: Golang and JavaScript runtimes:

Golang provides "defer", which, when promoted widely enough as an idiom, makes cleanup explicit, with simple and consistent error semantics. "defer" doesn't actually solve the problem of leaks/partial state being left around, but it gives people an obvious way to solve it themselves by hand.

JavaScript runtimes go to a similar extreme: no custom destructors, and a stdlib/runtime so restrictive and thick (vis-a-vis IO primitives like sockets and weird in-memory states) that it's hard for users to even get into sticky situations related to auto-destruction.

Zig also does a decent job here, but only with memory allocators (which are ironically one of the few resource types that can be handled automatically in most cases).

I feel like Rust could have been the definitive solution to RAII-destruction-related issues, but chose instead to double down on the C++ approach, to its detriment. Specifically, because Rust has so much compile-time metadata attached to values in the program (mutability-or-not, unsafety-or-not, movability/copyability/etc.), I often imagine a path-not-taken in which automatic destruction (and custom automatic destructor code) was only allowed for types and destructors that provably interacted only with in-user-memory state. Things referencing other state could be detected at compile time and required to deal with that state explicitly in non-automatic-destructor code (think Python context-managers or drop handles).

I don't think that world would honestly be too different from the one we live in. The Rust runtime wouldn't have to get much thicker--we'd have to tag data returned from syscalls that don't imply the existence of cleanup-required state (e.g. select(2), and allocator calls--since we could still automatically run destructors that only interact with cleanup-safe, user-memory-only values), and untagged data (whether from e.g. fopen(3) or an unsafe/opaque FFI call or asm! block) would require explicit manual destruction.
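As a rough sketch of that path-not-taken, expressed in today's Rust (ExternalHandle and its methods are made up for illustration, and nothing here is compiler-enforced): the wrapper has no custom destructor touching external state, and cleanup only happens through an explicit, consuming close() that returns a Result, much like a context manager's exit.

    use std::fs::File;
    use std::io::{self, Write};

    // Hypothetical "drop handle" style: no custom Drop that touches
    // external state; cleanup goes through an explicit, consuming close()
    // that can report failure. (std's File still closes its fd on drop,
    // so this only models the convention, not real enforcement.)
    #[must_use = "call close() to flush and release the underlying file"]
    struct ExternalHandle {
        inner: File,
    }

    impl ExternalHandle {
        fn create(path: &str) -> io::Result<Self> {
            Ok(Self { inner: File::create(path)? })
        }

        fn write_entry(&mut self, entry: &str) -> io::Result<()> {
            self.inner.write_all(entry.as_bytes())
        }

        // Consumes the handle, so cleanup can't run twice and the caller
        // is forced to see (and handle) any error from it.
        fn close(mut self) -> io::Result<()> {
            self.inner.flush()
        }
    }

    fn main() -> io::Result<()> {
        let mut h = ExternalHandle::create("journal.log")?;
        h.write_entry("hello\n")?;
        h.close() // forgetting this is at least visible at the call site
    }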

This wouldn't solve all problems. Memory leaks would still be possible. Automatic memory-only destructors would still risk lockups due to e.g. pagefaults/CoW dirtying or infinite loops, and could still crash. But it would "head off at the pass" tons of issues--not just the one in the article! Side-effectful functions would become much more explicit (and not as easily concealable with if-error-panic-internally); library authors would be encouraged to separate out external-state-containing structs from user-memory-state-containing ones; destructor errors would become synonymous with specific programmer errors related to in-memory twiddling (e.g. out of bounds accesses) rather than failures to account for every possible state of an external resource; the surface area for challenges like "async drop" would be massively reduced or sidestepped entirely by removing the need for asynchronous destructors; destructor-related crash information would be easier to obtain even in non-unwinding environments...

Ah well. I can dream, can't I?


Depends. If those employees truly are excess and they haven't been doing much for the past couple years, they might be just producing tech debt. Cancelled projects and migrations have negative value. I doubt this entire 20% was made redundant overnight, which means they haven't been valuable for some time.

> I'm pleased that the Pro's base memory starts at 16 GB, but surprised they top out at 32 GB:

That's an architectural limitation of the base M4 chip; if you go up to the M4 Pro version you can get up to 48 GB, and the M4 Max goes up to 128 GB.


I got a refurbed M1 iPad Pro 12.9” for $900 a couple of years ago and have been quite pleased. I estimate it still has a couple of years of life left in it.

A lot of sitcom tropes involve behaviors that are repulsive in real life.

So what have you experienced specifically? It should come as no surprise that telling people that "the powers that be are reading your thoughts and erasing your memories" is met with incredulity. That's a pretty big claim that doesn't mesh with the experiences and worldview of almost everyone.

I'm going to be frank: it sounds like paranoid schizophrenia. I hope you're doing okay, stranger.


> Jokes aside, how do you end up having more than 500 excess people than what you need?

One of my dad's anecdotes, back when he was alive, was about interviewing someone for a job.

"Why did you leave your last position?"

"After six months, management noticed my entire floor was doing the same thing as the next floor."


Note that ASP.NET Core is significantly faster than Spring. The closer alternative in both UX and performance is going to be Vert.X instead.

It's clear they are preparing for an acquisition. They are profitable, with great gross margins and cash in the bank. With a 20% workforce reduction they are going to be very attractive to the Salesforces of the world.

I think there is only one thing we should focus on: measurable capability on tasks. Understanding, memorization, reasoning, etc. are all just shorthands we use to quickly convey an idea of capability on a kind of task. You could also attempt to describe mechanistically how the model works, but that is very difficult; that is where you would try to describe your sense of "understanding" rigorously. To keep it simple: I think when you say that the LLM does not understand, what you must really mean is that you reckon its performance will quickly decay as the task gets more difficult in various dimensions (depth/complexity, verifiability of the result, length/duration/context size), to the point where it is still far from being able to act as a labor-delivering agent.

So far I'm only reading comments here from people wowed by a lot of things that the M3 pretty much also had. I'm not seeing anything new besides "a little bit better specs".

If you're willing to pay, there are plenty of lenders who will finance this purchase.

If it affects your earning power to that extent, you should probably pony up and save in the long run; it will probably take just a few years until you see returns.

A caste system usually can't be bypassed by paying a monthly subscription fee.

I will note that making it a subscription will tend to increase the overall cost, not decrease it. In an environment with ready access to credit, I think offering it on a subscription basis is worse for consumers?


> COBRA is the single largest expense for departing employees. Industry standard is to offer 18 months, not 6.

Are we in the same industry? Where are you based? I got 2 months when I was laid off last year.

I also know many tech people who got just 1 month.


Can you elaborate: are those workflows queued, or can they serve multiple users in parallel?

I think it's super interesting to learn about real-life workflows and the performance of different LLMs and hardware, in case you can direct me to other resources. Thanks!


> IIRC it's the first time since the 2012 15" MBP that a matte option has been offered?

The so-called "antiglare" option wasn't true matte. You'd really have to go back to 2008.


Profanity and wingdings are whatever, but nixing allergen info is f'd up.

This is definitely tempting me to upgrade my M1 MacBook Pro. I think I have 400 GB/s of memory bandwidth. I am wondering what the specific number "over half a terabyte" means.

You have another one with a network gateway to provide hot failover?

Right?


>As CEO, I take full responsibility for this decision and the circumstances that led to it, and I’m truly sorry to those impacted by this change.

But I will be glad to see that my salary and bonus increased a bit this year :)


Everything you list is basically part of any display that is not bottom-of-the-barrel. Yes, it's a very good monitor, but it would be crazy otherwise considering the price. And it has one fatal flaw: you can only connect to it with USB-C/Thunderbolt, making it an almost Apple-only monitor, which is extremely annoying in the long run...

Related:

GPT-4 logits calibration pre RLHF - https://imgur.com/a/3gYel9r

Language Models (Mostly) Know What They Know - https://arxiv.org/abs/2207.05221

The Geometry of Truth: Emergent Linear Structure in Large Language Model Representations of True/False Datasets - https://arxiv.org/abs/2310.06824

The Internal State of an LLM Knows When It's Lying - https://arxiv.org/abs/2304.13734

LLMs Know More Than What They Say - https://arjunbansal.substack.com/p/llms-know-more-than-what-...

Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback - https://arxiv.org/abs/2305.14975

Teaching Models to Express Their Uncertainty in Words - https://arxiv.org/abs/2205.14334


I didn't realize that surrogateescape (UTF-8B) had become the default! If so, that's a great improvement. Thank you for the correction.

I can't tell who you're saying is self-promoting? Apple/the team that made the Apple ads, or Ubuntu/the team that made the Ubuntu ad?

Huh? It's obvious to almost everyone that Dropbox is in a tough position in that it's largely failed to expand its offering beyond the very original product and value prop, which is also being eroded by the other major players.

So they attempted to use DB's position to leap-frog into new product categories, which require big spend on R&D and related teams, as well as new heads for supportive teams (sales, marketing, support, etc).

It's not working, so they are pulling back and re-trenching.

That all seems pretty transparently NOT a "having too much money" problem.


What kind of argument is that? Everyone does their research, that doesn’t mean there aren’t bad (and even stupid) choices made.

How is it that despite the vast population, we no longer have geniuses like Newton, Leibniz, Gauss, Maxwell, Einstein, etc.? If they existed today, they would just be selling ads and stocks.
