A startup made for the purpose of acquisition was never a competitor. If you are willing to sell to the big player in your industry you are not competing, you are an opportunist. A startup that wants to compete will run very differently from a startup that wants to be purchased.
A market where a big whale company gobbles up some of the fifty startups that together hold only about 2% of the total is not a competitive market at all.
>A startup made for the purpose of acquisition was never a competitor
You cannot get acquired unless you represent a percentage of market share, have IP which will lead to greater market share, and/or have employees who can expand market share for a product.
>A big whale company that gobbles up some of the fifty startups that only have like 2% of the market total is not a competitive market at all.
A big whale company performing that many M&As on little startups is essentially fueling future competitors. If I were an investor I would see that market as valuable for the unicorn breakthrough possibility, or at least an eventual acquisition exit event.
> A big whale company performing that many M&As to little startups is essentially fueling future competitors.
What an absolute load. They are stamping out competition, concentrating market power, and making red oceans even redder. If you were an investor I wouldn’t give you my money to gamble with.
It's not a simple "acquisitions are good" or "acquisitions are bad" discussion. There is an ideal amount of M&A activity in an economy which is more than "no startup ever gets acquired" and less than "every startup is bought out by big tech". In recent years we have been closer to the first extreme, and very few startups have had exits of any kind.
My biggest complaint with gRPC is proto3 making all message-typed fields optional while making primitives always present with default values. gRPC is contract based, so it makes no sense to me that you can't require a field. This is especially painful from an ergonomics viewpoint: you have to null check every field with a non-primitive type.
If I remember correctly, the initial version allowed required fields, but it caused all sorts of problems when trying to migrate protos, because a new required field breaks all consumers almost by definition. So updating protos in isolation becomes tricky.
The problem went away with all optional fields so it was decided the headache wasn't worth it.
I used to work at a company that used proto2 as part of a homegrown RPC system that predated gRPC. Our coding standards strongly discouraged making anything but key fields required, for exactly this reason. They were just too much of a maintenance burden in the long run.
I suspect that not having nullable fields, though, is just a case of letting an implementation detail, keeping the message representation compatible with C structs in the core implementation, bleed into the high-level interface. That design decision is just dripping with "C++ programmers getting twitchy about performance concerns" vibes.
According to the blog post of one of the guys who worked on proto3: the complexity around versioning and required fields was exacerbated because Google also has “middle boxes” that will read the protos and forward them on. Having a contract change between two services is fine, and required fields are probably fine, but having random middle boxes really makes everything worse for no discernible benefit.
proto2 allowed both required fields and optional fields, and there were pros and cons to using both, but both were workable options.
Then proto3 went and implemented a hybrid that was the worst of both worlds. They made all fields optional, but eliminated the introspection that let a receiver know if a field had been populated by the sender. Instead they silently populated missing fields with a hardcoded default that could be a perfectly meaningful value for that field. This effectively made all fields required for the sender, but without the framework support to catch when fields were accidentally not populated. And it made it necessary to add custom signaling between the sender and receiver to indicate message versions or other mechanisms so the receiver could determine which fields the sender was expected to have actually populated.
proto2 required fields were basically unusable for some technical reason I forget, to the point where they're banned where I work, so you had to make everything optional, when in many cases it was unnecessary. Leading to a lot of null vs 0 mistakes.
Proto3 made primitives all non-optional, default 0. But messages were all still optional, so you could always wrap primitives that really needed to be optional. Then, sorta recently, they added optionals for primitives too (implemented under the hood as synthetic oneofs), which imo was long overdue.
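For reference, the three ways to get (or not get) presence on a primitive look roughly like this; field names are made up for illustration:

```proto
syntax = "proto3";

import "google/protobuf/wrappers.proto";

message Example {
  // Plain proto3 primitive: no presence tracking; unset reads as 0.0.
  double plain = 1;

  // Well-known wrapper message: presence is checkable, because
  // message-typed fields are always optional.
  google.protobuf.DoubleValue wrapped = 2;

  // proto3 "optional" (protoc 3.15+): generates a has_explicit()
  // accessor, implemented as a synthetic oneof under the hood.
  optional double explicit = 3;
}
```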
The technical reason is that they break the end-to-end principle. In computer networking in general, it is a desirable property for an opaque message to remain opaque, and agnostic to every server and system that it passes through. This is part of what makes TCP/IP so powerful: you can send an IP packet through Ethernet or Token Ring or Wi-Fi or carrier pigeon, and it doesn't matter, it's just a bunch of bytes that get passed through by the network topology.
Protobufs generally respect this property, but required fields break it. If you have a field that is marked 'required' and then a message omitting it passes through a server with that schema, the whole message will fail to parse. Even if the schema definition on both the sender and the recipient omits the field entirely.
Consider what happens when you add a new required field to a protobuf message, which might be embedded deep in a hierarchy, and then send it through a network of heterogeneous binaries. You make the change in your source repository and expect that everything in your distributed system will instantly reflect that reality.
However, binaries get pushed at different times. The sender may not have picked up the new schema, and so doesn't know to populate it. The recipient may not have picked up the new schema, and so doesn't expect to read it. Some message-bus middleware did pick up the new schema, and so the containing message which embeds your new field fails to parse, and the middleware binary crashes with an assertion failure, bringing down a lot of unrelated services too.
If you want to precisely capture when a field must be present (or will always be set on the server), the field_behavior annotation captures more of the nuance than proto2's required fields: https://github.com/googleapis/googleapis/blob/master/google/...
You could (e.g.) annotate all key fields as IDENTIFIERs. Client code can assume those will always be set in server responses, but are optional when making an RPC request to create that resource.
(This may just work in theory, though – I’m not sure which code generators have good support for field_behavior.)
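A sketch of how that might look, assuming the `field_behavior` extension from the googleapis repo is available to import:

```proto
syntax = "proto3";

import "google/api/field_behavior.proto";

message Ticket {
  // IDENTIFIER: always set by the server in responses, but optional
  // (and usually ignored) when the client creates the resource.
  string name = 1 [(google.api.field_behavior) = IDENTIFIER];

  // REQUIRED: the client must set this on every request.
  string title = 2 [(google.api.field_behavior) = REQUIRED];
}
```

Note these annotations are documentation plus codegen hints, not wire-level enforcement, so they avoid the parse-failure problems of proto2's required.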
The choice to make 'unset' indistinguishable from 'default value' is such an absurdly boneheaded decision, and it boggles my mind that real software engineers allowed proto3 to go out that way.
I don't get what part of your link I'm supposed to be looking at as a solution to that issue? I wasn't aware of a good solution except to have careful application logic looking for sentinel values? (which is garbage)
Yes, proto3 as released was garbage, but they later made it possible to get most proto2 behaviors via configuration.
Re: your question, for proto3 a field that's declared as "optional" will allow distinguishing between set-to-default vs. not set, while non-"optional" fields don't.
If that's true, why have types in proto at all? Shouldn't everything be an "any" type at the transport layer, and the application can validate the types are correct?
It was a little weird at first, but if you just read the 1.5 pages on why protobuf decided to work like this, it made perfect sense to me. It will seem overcomplicated, though, to web developers who are used to being able to update all their clients on a whim at any moment.
proto3 added support for optional primitives sorta recently. I've always been happy without them personally, but it was enough of a shock for people used to proto2 that it made sense to add them back.
Just out of curiosity, what domain were you working in where "0.0" and "no opinion" were _always_ the same thing? The lack of optionals has infuriated me for years and I just can't form a mental model of how anybody ever found this acceptable to work with.
Nearly every time, an empty string or 0 for an integer can be treated the same as "no value" if you think about it. Are you sending data or sending opinions? Usually, to force a choice, you would make an enum or a oneof where the zero value means the client forgot to set it, and it can be modelled as an API error. Whether the value was actually on the wire or not is not really that important.
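A sketch of that zero-value convention (hypothetical names):

```proto
syntax = "proto3";

// The zero value is reserved for "not set", so the server can reject
// requests where the client forgot to make a choice.
enum Color {
  COLOR_UNSPECIFIED = 0;  // default; treat as an API error if received
  COLOR_RED = 1;
  COLOR_BLUE = 2;
}

message PaintRequest {
  Color color = 1;
}
```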
0 as a default for "no int" is tolerable, 0.0 as a default for "no float" is an absolute nightmare in any domain remotely related to math, machine learning, or data science.
We dealt with a bug that for weeks was silently corrupting the results of trials pitting the performance of various algos against each other. Because a valid response was "no reply/opt out", combined with a bug in processing the "opt out" enum, also combined with a bug in score aggregation, algos were treated as if they replied "0.0" instead of "confidence = None".
It really should have defaulted NaN for missing floats.
I think your anecdote is rather weak with regards to the way protobuf works, but to entertain, why would a confidence of 0.0 be so different from None? 0.0 sounds very close to None for most numerical purposes if you ask me.
"Hmm, lat = 0, I guess they didn't fill out the message. I'll throw an exception and handle it as an API error"
[Later, somewhere near the equator]
"?!?!??!"
------------------
"Ok, we learned our lesson from what happened to our customers in Sao Tome and Principe: 0.0 is a perfectly valid latitude to have. No more testing for 0, we'll just trust the value we parse."
[Later, in Norway, when a bug causes a client to skip filling out the LatLon message]
"Why is it flying to Africa?!?!"
------------------
Ok, after the backlash from our equatorial customers and the disaster in Norway, we've learned our lesson. We will now use a new message that lets us handle 0's, but checks that they really meant it:
message LatLonEnforced {
  optional double lat = 1;
  optional double lon = 2;
}
[At some third party dev's desk]
"Oh, latitude is optional - I'll just supply longitude"
[...]
"It's throwing exceptions? But your schema says it's optional!"
------------------
Ok, it took some blood, sweat, and tears, but we finally got this message definition licked:
If both lat and lon are required, you don't need to throw an exception for lat=0. If you want lat=null lon=0.0 to mean something like "latitude is unknown but longitude is known to be 0.0," yeah you need optional or wrapped primitives.
Edit: If a client doesn't fill out the LatLng message, that's different from lat and/or lon being null or 0. The whole LatLng message itself will be null. Proto3 always supported that too. But it's usually overkill to check the message being null, unless you added it later and need to specially deal with clients predating that change. If the client just has a bug preventing it from filling out LatLng, that's the client's problem.
The confusing part here is that even if the LatLng message is null, LatLng.lat will return 0 in some proto libraries, depending on the language. You have to specifically check if the message is null if you care. But there are enough cases where you have tons of nested protos and the 0-default behavior is actually way more convenient.
Yeah - I think what I'm getting at though is that you want to guard against situations where somebody accidentally doesn't set one of the fields, and yet 0 is a valid value to set on that field. You could accidentally forget to fill in latitude, and that would be bad news.
Def +1 the confusingness of how you have to explicitly check for has_latlon() to make sure you're not just getting default values of a non-existent LatLon message. The asymmetry between primitive and message-type fields in having explicit presence checking is also part of my beef. It's weird to have to teach junior devs that you can do has_* checks, but only for messages.
It's safer in a way to guard against these situations, but it seems like they don't intend you to do that often because there are more downsides outweighing it. Proto2 had the "required" feature that had its own issues. Our team trusts to some degree that the user actually read the API spec, and so far it's been ok.
I can imagine message nullness being clearer in languages with optional unwrapping like JS. Like foo.bar.baz gives an error if bar is null, and if you want default-0 you use foo.bar?.baz. Idk if that's what happens though.
In the case of Lat/Lon, I guess that 0.0 could have a meaning, though it is very unlikely someone is at exactly lat/lon 0.0. An alternative is to translate to the XY coordinate system, though that is not a perfect solution either.
If you really feel like expressing that LatLon as possibly null, it should rather be:
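Presumably something along these lines (the original snippet didn't survive, so this is a guess at what was meant, with made-up names), relying on the fact that message-typed fields track presence even in proto3:

```proto
syntax = "proto3";

message Position {
  // An absent LatLon is distinguishable from lat = 0, lon = 0,
  // because message fields have explicit presence.
  LatLon lat_lon = 1;
}

message LatLon {
  double lat = 1;
  double lon = 2;
}
```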
Yes it was python but that has nothing to do with it. Same would happen in go, rust, R, or matlab.
Correct answers: 1.0, 0.0, 1.0
Confidence from algo: 1.0, 0.0, n/a
Confidence on the wire: 1.0, 0.0, 0.0
Score after bug: 66%
Score as it ought to be scored: 100%
It was enough to make several algorithms which were very selective in the data they would attempt to analyze (think jpg vs png images) go from "kinda crap" in the rankings to "really good".
Well, only in Python is that N/A value also a float. In protobuf, Go, or Java for that matter, the data model must somehow be changed to communicate the difference.
If you had used 3 float values in Go or Java you would have had the same problem.
Yeah, I think it's best to first rethink of null as just, not 0 and not any other number. What that means depends on the context.
Tangent: I've seen an antipattern of using enums to describe the "type" of some polymorphic object instead of just using a oneof with type-specific fields. Which gets even worse if you later decide something fits multiple types.
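Roughly the contrast, with a hypothetical shapes example:

```proto
syntax = "proto3";

// Antipattern: a "type" enum plus a grab bag of fields, only some of
// which are meaningful for any given kind.
message ShapeBad {
  enum Kind { KIND_UNSPECIFIED = 0; CIRCLE = 1; RECT = 2; }
  Kind kind = 1;
  double radius = 2;  // only meaningful for CIRCLE
  double width = 3;   // only meaningful for RECT
  double height = 4;  // only meaningful for RECT
}

// Better: a oneof with type-specific messages, so each variant carries
// only its own fields and at most one variant is set on the wire.
message ShapeGood {
  message Circle { double radius = 1; }
  message Rect { double width = 1; double height = 2; }
  oneof shape {
    Circle circle = 1;
    Rect rect = 2;
  }
}
```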
Not really one domain in particular, just internal services in general. In many cases, the field is intended to be required in the first place. If not, surprisingly 0 isn't a real answer a lot of the time, so it means none or default: any kind of numeric ID that starts at 1 (like ticket numbers), TTLs, error codes, enums (0 is always UNKNOWN). Similarly with empty strings.
I have a hard time thinking of places I really need both 0 and none. The only example that comes to mind is building room numbers, in some room search message where we wanted null to mean wildcard. In those cases, it's not hard to wrap it in a message. Probably the best argument for optionals is when you have a lot of boolean request options where null means default, but even then I prefer instead naming them such that the defaults are all false, since that's clearer anyway.
It did take some getting used to and caused some downstream changes to how we design APIs. I think we're better for it because the whole null vs 0 thing can be tedious and error-prone, but it's very opinionated.
Any time you have a scalar/measurement number, basically any value with physical units, counts, percentages, anything which could be in the denominator of a ratio, those are all strong indicators of a "semantic zero" and you really want to tell the difference between None and 0. They are usually floats, but could be ints (maybe you have number_of_widgets_online, 0 means 0 units, None means "idk".)
What's the difference between none inches and 0 inches? Might need a concrete example. We deal with space a fair amount and haven't needed many optionals there.
I had the same experience. It was a bit awkward for 6 months, but down the line we learned to design better APIs, and dealing with nullable values is tedious at best. It's just easier knowing that a string or integer will _never_ cause a null pointer.
Huh, yeah I see. I guess I work more on the robotics side where messages often contain physical or geometric quantities, and that colors my thinking a bit. So "distance to that thing = 0" is a very possible situation, and yet you also want to allow it to say "I didn't measure distance to that thing". And those are very distinct concepts you never want to conflate.
I can see that. Or the rare situations where I needed 0 vs null, if for some reason that situation was multiplied times 100, I'd start wanting an optional keyword.
I just built a rig with new, top of the line, hardware and installed fedora on it without an issue. What components are you talking about specifically?
I had the opposite experience actually. Windows didn't include network drivers for my custom PC mobo+chipset, while linux had them included. Actually had to download the windows drivers on linux then boot into windows and install them...
Usually your main function can't be used by any other part of your program. You should move all component implementations to modules so they can be re-used elsewhere.
I don't think Apple will back down in order to appease developers. Their strategy is pretty clear: make their platform so popular people pay to play by Apple's rules. Apple could charge 50% or 70% on their app store and developers would still line up. The bigger issue in the relationship is when they started building competing apps with the developers. Why would anyone want to help Apple build a platform that they will then use to undercut you? This is also after they already took a huge cut of your revenue.
This strategy works because investors will demand that companies support a popular platform.
I don't know if this will work for Vision Pro because there's still no evidence of mass market appeal for VR or AR technology. Even Facebook's money and marketing muscle hasn't been able to make Meta VR mainstream. Nor is PlayStation VR going to get widespread adoption or significant software support until Sony starts bundling it with PlayStations, which would be very risky because it would make the Xbox dramatically cheaper by comparison.
I think companies are making VR devices because they're appealing to their engineering department not because they actually have reason to believe there's consumer demand for such tech. But the popular alternatives to invest in are things like crypto and generative AI that also have open questions about whether the product is actually useful to the mass market.
Apple's $5k headset is not a popular platform yet, nor will it be as popular as smartphones anytime soon.
Companies want to push their apps to Apple/Google because everyone has a smartphone in their pocket at all times and has their eyes glued to it most of their spare time meaning shopping, product sales and advertising $$$.
Compared to that, even if the headset were half the price, very few people will be as absorbed into it as they are into phones, so companies are in no rush to port their apps to Apple's headset.
And I'm saying this as someone who has a VR headset and is bullish on VR. It's difficult to beat the convenience and portability of a smartphone with a clunky headset you wouldn't wear on the street.
Maybe when headsets are as slick and as functional as Iron Man's E.D.I.T.H. glasses they'll be able to fully replace smartphones for eyeball time, but we're very far from that future. The current iteration is more of a tethered virtual TV/monitor than a fully immersive standalone AR experience device, and silicon and battery technology isn't advancing fast enough anymore to bring Iron Man tech to reality anytime soon, let alone make it affordable.
So I think devs are right in giving the headset the cold shoulder for now until it becomes a mass market product with mass appeal that people wouldn't mind wearing on the street.
My first reflective thought is that if you make a solid piece of software, Apple should be hiring you to integrate your product into their main software stack.
Secondly, particularly to your last point, I think this encourages short development cycles and continually pushing something new.
Since if you build a good innovative product you run the risk of Apple undercutting it like you said. Unfortunately this means that there's less incentive to maintain and update a software product in exchange for developing and selling something new.
Why is this line used everywhere in the news? What is the consequence of getting someone on the record for speculating the value of an IPO? It's the most frustrating and lazy part of journalism.
Sometimes people don't want to go on the record talking about their employer, for fear of repercussions. Employees have minimal protections in the US, and if you get fired, there goes your health insurance, all your prescription medications, dental and vision coverage, your ability to pay rent, pay for auto insurance, etc. We have very poor social safety nets and if you lose your job you mostly lose your life. If you're lucky you might have a couple months of savings, but even that is uncommon outside tech circles.
Why not interview someone willing to go on the record that will speculate the IPO value? There are plenty of articles where they interview professional investors with knowledge about such matters who go on the record.
I think journalists typically prefer nameable sources whenever they can find one. They can't always, or those sources have had bad experiences with other journalists and don't want to bother, etc.
Keep in mind that journalism in the US is itself a very underpaid profession that mostly fails to pay a living wage. Journalists work on tight deadlines with minimal support and don't always have the luxury of doing deep investigative reporting. They might only have a few hours to get a story ready, and if no nameable sources are willing to come forward, well, maybe that anonymous one is better than nothing.
Source: anecdotal only, former journalism major, knew writers in the industry.
---
Edit: not defending the practice, just saying it happens. Sometimes there are just lazy or plain bad journalists too. I don't know this writer and won't speculate, but in general it's not unusual to cite anonymous sources.
If we want to discourage that practice, it's not something an individual journalist can fix (short of cultivating long term sources). Most sources have no protection, and even real whistleblowers (like Snowden or Reality Winner) who are supposed to have protections are just left hung out to dry. For a source, talking to the media has a lot of risks and almost no upsides.
It looks like one core issue was designing the system around pre-determined bounds (monthly, yearly, etc). This happens in lots of systems and isn't specific to billing. A job scheduling system will have the same problem (we built it to track seconds and now they want us to track calendar dates!).
Part of the challenge is the ever-shifting boundaries of billing. You build a monthly billing system, then find your team needs a yearly one, which seems straightforward enough. Complications arise when you introduce usage-based billing with weighted values (think storage) layered on top of a quarterly plan, and it gets worse still when negotiated contracts demand custom billing structures.
I think the promise of threads is exactly what many people want. Hopefully the product improves to meet the expectations. Some things I'd like to see:
1. A native Notes-like feature for long-form posts I can embed and share. I cannot stand the tweet threads of (n/27) about a topic.
2. Topic-based feeds. Whether it's hashtags or some other type of categorization process. I think I read on HN the other day that a Twitter engineer had come up with something like that, but it never got buy-in from the decision makers. This would also help filter out things I don't want to see, more broadly than manually muting users.
3. Enabling the option to only see verified accounts.
4. Private threads. Threads where you can tag an audience where only they can see the posts and replies. This would be a form of DM that isn't just another chat app.