
At what point do you bring in Bazel? I've heard some people say stuff like Python should always use Bazel, or that full stack webapps should use it, but it often just seems like extra cruft. Why not just have separate repos and use each language's tools?

For me, it's usually when I have more than three or four languages in a single codebase and there's codegen (e.g., protobuf or GraphQL) involved. It's nice to be able to `bazel test //...` a whole repo, and building minimal, reproducible Docker images becomes comparatively straightforward.

That said, I've never made the leap to using Bazel for frontend stuff; I always use npm/pnpm/yarn without any Bazel trimmings.


We had an integration test that involved many toolchains and languages to prepare: Rust compilation, WASM compilation, Docker builds, shell scripts that produced a VM image, Python was in there somehow too, and then a test harness, written in Rust, that took all of that to actually run the test.

With Bazel set up (and it was a beast to set up), developers could run all of that, with remotely cached incremental builds and efficient parallel execution, all reproducible (and the same as in CI), from a local developer environment with just one command.


I recently left Google and I do miss Blaze. I'm working on some stuff that's Rust and WASM and TypeScript, and there are containers, and I'm very tempted to use Bazel, basically for correct caching and such.

But even as a xoogler, I think the common wisdom of "don't use Bazel until you can afford a team" or "use Bazel if you are the only developer" is probably right.

My somewhat janky series of Dockerfiles and bash scripts is at least debuggable by my business partner.

I'm just not ready to commit our future frontend developers to having to use Bazel yet.

I'm sort of sticking to a lowest-common-denominator technology stack for most things, except where we're spending our innovation tokens in WASM land.

But someday, probably. In the meantime I may set up sccache.


> Rust compilation, WASM compilation, Docker builds, shell scripts that produced a VM image, Python was in there somehow too

Was it through Bazel rules? The worry is that if some of those rules have a bug or are missing a necessary feature, it will be a pain to deal with.


This is one reason why I’m quite interested in Buck2; there are no built-in rules at all, so if you need any changes, you can just make them.

Unfortunately the open source set of rules is a bit rough right now, with a decent happy path but a cliff if you step off of it. But given some time…


The rules are open source. We did run into some road bumps in rules_rust that required patches or clever workarounds, but I believe the community has made great strides to improve Bazel’s Rust support in the last five years.

To be frank, I would avoid introduction of Bazel unless you are experiencing major pains with your current system and have at least a small centralized team willing to build expertise and provide support.


Bazel is definitely like Kubernetes: you don't need it, until you need it.

If you've got a big polyglot application, it's perfect for you. If your app is largely in one language, then you don't need that complexity in your life.


We leverage Bazel heavily across multiple very large projects where I work. It greatly simplifies dependencies across languages with different build steps. We also run Bazel in Docker so everyone has the same build environment. Finally, we leverage Bazel on cloud build farms, so very large builds don't happen locally and can leverage shared caches. This is really where the hermetic nature pays big dividends.

Our codebase uses a single build system for C++, Python, Go, and more.


I'm spoiled, as I worked for 2-3 years at Google. Coming from gamedev, at first I was "wtf is this thing"; then when I quit, I started missing it.

I've used it with Java, Python, some C++, some R, bash, Sawzall, and a few other languages. The beauty of it is that it can express dependencies across things from different languages (where this makes sense). Another cool thing, for Java with FFI C/C++ code, is that it can build a custom Java runner that embeds everything in a single binary. So your FFI C/C++ code and your Java code become one binary.

But basically you can say: in order to build this, you also need to build that, even if it's not in the same language.

On top of that, you can express tests too. And then it knows how to take unit tests of the same "size", combine them, and shard their execution across machines.

And because you have to size your tests (e.g. RAM/CPU-wise), it knows which to pick, and if your test no longer falls into that category, it'll tell you to fix it. It might even offer a command line (nowadays a buildozer command line) showing how to fix it. (That was ~2014-2016.)

Then for C++ (and other languages) it'll track whether you've specified your header deps. Say you've declared "here is a.cpp", and a.cpp includes a.h, but you forgot to add a.h explicitly. Bazel (Blaze) can catch that. One of the ways is simply to "symlink" only explicitly listed files into a random temp folder and compile from there, so a.cpp won't find a.h, and it'll tell you: "Here, run this command to add a.h to your target."
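
To make that concrete, here's a minimal BUILD-file sketch (target and file names are made up; `size` is the test sizing mentioned above):

    cc_library(
        name = "a",
        srcs = ["a.cpp"],
        hdrs = ["a.h"],  # omit this line and the sandboxed compile of a.cpp can't find a.h
    )

    cc_test(
        name = "a_test",
        size = "small",  # declared resources let Bazel schedule and shard tests
        srcs = ["a_test.cpp"],
        deps = [":a"],
    )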

So, actually, quite the opposite: you want a build system that unites all these different languages/runtimes/environments.

Not like python/pip, rust/cargo, dotnet/nuget, javascript/npm, java/maven, because with each such language-specific build layer you lose the ability to express dependencies between them.

And not only that: in Bazel, deps are static, e.g. with a.h missing explicitly from your BUILD file, Bazel will tell you that. In the tools above there's a healthy mix of everything, from dynamic discovery to who knows what.

Then again, I get it: it's not the easiest to get through, and the .bzl files are dark magic sometimes, but the BUILD files should be easy to use for anyone: engineers, linguists, statisticians, producers, tech artists, etc.


> Not like python/pip, rust/cargo, dotnet/nuget, javascript/npm, java/maven, because with each such language-specific build layer you lose the ability to express dependencies between them.

It's funny you highlight this as a feature, because my #1 piece of feedback for the Bazel ecosystem is that it really sucks to have to give up language-native tools. Whether it's LSP servers, or documentation that says "add this to your Gradle file" or "add this to your build.sbt", it's always an uphill battle.

And then it's an uphill battle convincing product engineers to forget everything they know and do it the Bazel way instead.

I get why Bazel is cool, but I think it's worth highlighting that for a lot of folks it can very well make their inner development loop worse than without it, particularly if they don't have someone (or several people) dedicated solely to developer experience.


Can you explain further? Yes, it takes more effort to express, or rather to create .bzl files to express these dependencies (e.g., let's say we have Python using C/C++), but once that's done it gives you a higher-level (BUILD) language where this is easy to express. There are still many rough corners (especially when it comes to dealing with non-monorepo deps), but I'd rather use this than a hodge-podge of shell scripts/makefiles/CMake/etc., where it's not even clear which steps to run first, or whether any of it will be reproducible.

I am not really talking about cross-language dependencies. I mean simple things like Java product engineers who know Gradle and want to use Gradle and get grumpy when they don't know how to add dependencies to WORKSPACE. These people need to be trained to use different tools rather than using the skills they already know. That's fine, but it's definitely a cost of using Bazel.

Or if you are trying to maintain Bazel for an org. Invariably, you will receive requests of the form "I would like to achieve [well-documented thing in a language-specific build tool]; please help me do that in Bazel". The net result for me, who was once in that role, is that understanding the original build system and then bending Bazel to match a certain behavior (usually with no or inadequate docs) is a huge amount of effort. Enough that I am now skeptical of the value-add of Bazel.


For what it’s worth, Gradle is probably the only other build tool that has good inter-language capabilities (and it is heavily used by the Android toolchain). Most other build tools are specialists.

Large codebase in which you need a hermetic, repeatable, incremental build and test support. Note that in a hermetic build tests can be incremental too: if input and code do not change, there’s no need to rerun the test, and Bazel will skip it.

You'll get wildly different answers depending on who you ask. Most ex-Googlers appreciate Bazel. Folks used to ergonomic single-language builds dislike it.

In my experience, Bazel works best when:

- You have a monorepo.

- You have one engineer who knows it in and out and can stay current.

- You use several languages well-supported by Bazel (Protobuf, Go, Java, Kotlin).

The benefits are a single way to build and run all tests and binaries across all languages. Bazel CI providers (BuildBuddy, EngFlow) are much nicer than YAML engineering in other CI systems.

I have an old comment that's still representative of my experience: https://news.ycombinator.com/item?id=32831555


It seems like OP is describing a shared interface, not necessarily inheritance.

I'm envious of your success. I'm convinced that reading dense technical books is crucial to developing yourself over time, but I have a hard time reading technical books alone. The few times I've been in a book club they started out great, but fizzled out in a couple of months. Maybe I'll look into it again.

This is unrelated to the book club topic, but have you considered reading dense technical books as audio books? I often do this and I find that I absorb them maybe 50% as well as if I sit down and read them, but that I read 500% more. As a result, I can re-read them a few times and still come out ahead.

(Rather than actually getting an audiobook version, I'll typically get the epub and use a third-party reader with a read-aloud feature. I mention this because audiobooks are often not available, and because first-party readers typically won't allow you to read books aloud, specifically because they want to upsell you on the audiobook.)


I do listen to audio books, but I don't like them for technical books. Screen readers usually do a poor job of handling code and diagrams that are crucial for understanding something. I like talks and lectures though, but with those I suffer from a similar issue to reading alone.

> Screen readers usually do a poor job of handling code and diagrams that are crucial for understanding something.

That's fair. This is definitely the main reason why my comprehension is lower and I have no solution for it.


What reader do you use?

I currently use this one for iOS which handles both epub and pdf pretty well: https://apps.apple.com/us/app/voice-aloud-reader/id144687636...

I've switched a few times, and anything on an Apple platform will typically just use the speech synthesis APIs and have the same set of voices available: https://developer.apple.com/documentation/avfoundation/speec...


thanks!

Sounds fun. How do you get your list of articles?

It’s crowdsourced. People add to the list things they find interesting and would like discussed more widely. We also have round-robin turns set up to decide who picks the next article to read from the pool.

Sites spam low-quality product reviews with affiliate links to Amazon. This is done by "reputable" sites as well. I don't blame Google for down-ranking this meta.

You should migrate to somewhere that does value affordable housing.

Where would that be that's nice to live, safe, good infrastructure, good jobs AND by some miracle still allows you to buy affordable housing?

Plus, if you move to a low-CoL area with a high-CoL job then you're not really solving housing, you're just moving your problem elsewhere onto other people making housing unaffordable for them. How many former cheap places to live have been gentrified into oblivion in a short amount of time?

The goal is to make housing affordable AND help preserve local communities and culture (even though that might be contradictory to a degree), not to keep gentrifying and uprooting them in a musical-chairs-style game and then wonder why communities are dying, family units are dying, birth rates plummet, family support networks are dying, mental illnesses are up, loneliness is up, etc.


You seem to assume a free market for housing; that would be at least partially incorrect. The free market is a cause of long-term (on a timeframe of decades) housing unaffordability, because the haves treat housing as an investment while the have-nots live under bridges or in cars, which puts them in an unemployment/debt spiral.

The free market could work if housing was a depreciating asset like cars. Too many people in the west would be pissed off if that happened, though. It will come to some countries with collapsing demographics and that won’t be a pretty sight.


Given the numerous government regulations, as well as regulatory capture and NIMBYism, housing is not a particularly free market. Zoning and building codes (only some of which are important for not making death traps), as well as low-income unit mandates, lead to construction being prohibited or expensive.

>It will come to some countries with collapsing demographics and that won’t be a pretty sight.

Except that's not how it happens. Demographics have been collapsing in many EU countries for a while now and prices have been going nowhere but up at rates beyond wage growth.

Internal migration from rural to urban, and external migration from war torn and impoverished nations also to urban centers, keep pushing demand up regardless of local reproduction numbers.

Sure, you can now buy cheap properties in some empty towns in the south of Italy for example, but to what end if there's no jobs there?


At some point we run out of people in rural areas to migrate to urban, and 30-50 years later we get Detroit, maybe, except everywhere at once.

Watching South Korea very closely... for the next 20 years.


>At some point we run out of people in rural areas to migrate to urban

But you won't run out of external migrants due to wars, climate change and poverty who want to leave their countries and move to the western developed ones.


Gentrification is based. Unless you want to stop people immigrating you'll never "preserve the character of your neighborhood" while keeping prices down. The only solution is to build more housing.

It's really hard to afford an SF apartment AND a crippling drug addiction, for several reasons.

I’m not sure if getting rid of the addiction would automatically make SF houses affordable. Certainly doesn’t help. But it’s also not SF’s job to house the entire nation’s homeless population.

It seems like it is if they're former SF residents priced out of their homes.

Homeless services can’t discriminate based on previous residency; they aren’t even allowed to ask. HUD has some residency requirements, but they are only loosely enforced. A lifelong resident of SF is often competing for the same resources with an ex-con who just got off the bus after being released from prison in Texas with only an open bus ticket.

That being said, a resident of SF has many more ways of avoiding the streets (while still being considered unhoused) than that ex-con, so the numbers are going to be lopsided if we are just counting the visible homeless problem.


That's weird. I learned just last week that Palo Alto requires proof, via a piece of mail, that you ever lived in Palo Alto before they'll let you into their shelter. Also, their shelter has bunk cots (not bunk beds, bunk cots), so the bottom person is inches away from the top person.

According to https://www.asaints.org/outreach/hotel-de-zink/, that is Palo Alto’s only homeless shelter and it doesn’t mention a residency requirement. It also doesn’t have the bunks you are referring to.

They have a 6 week waiting list.

The place with the bunk cots is WeHOPE.


East Palo Alto is not the same as Palo Alto at all. I can definitely believe that is at least feasible then, although they do not list a residency requirement on their website. The only requirement is:

Must have a referral from a San Mateo County, or Santa Clara County Partner Organization to receive shelter.


They may be legally distinct, but we can agree that they're physically adjacent, and thus for someone who's unhoused in the Bay Area, they're both options for where to live. I don't think an invisible line that anybody can cross without any sort of border control is really that important a distinction here.

I would never call them the same. East Palo Alto is much, much poorer than Palo Alto; I think I would have been laughed at when I was living in San Mateo if I had ever tried to pass off a location in East Palo Alto as being in Palo Alto. One is full of Stanford kids and rich people; the other is full of poor people. Highway 101 isn’t a very invisible line.

I wouldn't call them the same either, but the context here is being unhoused in the Bay Area. I'm not, thankfully, but when I was living in PA I definitely met a few people who dropped East when referring to where they live. Anyway, it's too late to edit my original comment.

It's very unlikely that former residents (in the sense of paying for housing for extended periods of time with their own wage income) comprise a significant portion of San Francisco's street homeless.

This is another one of your misconceptions.

> Nearly 8 out of every 10 unhoused people in Oakland were living in Alameda County when they lost their housing.

https://www.sfchronicle.com/projects/2021/homeless-project-o...

> Primary Cause of Homelessness (Top five responses, Fig. 19)

> Family or friends couldn't let me stay or argument with family/friend/roommate: 27%

> Eviction/Foreclosure/Rent increase: 25%

> Job loss: 22%

> Other money issues including medical bills, etc.: 13%

> Substance abuse: 13%

https://homelessness.acgov.org/homelessness-assets/docs/repo...

It's possible that many frequent flyers to emergency rooms have been dumped from other communities. But most homeless people are just that, people who have lost their homes in their community. And anyway, how could it really be any different?

> with their own wage income

Of course poverty is the number one reason they are becoming homeless. Why are you talking about wage income? They have too little income. Who the hell wants to live on the street!


First of all, you cite data about Oakland when I was talking about San Francisco. The cities are different enough that it's worth noting. I will also state that I am well-informed about the matter and have few "misconceptions" in the obvious sense.

> Nearly 8 out of every 10 unhoused people in Oakland were living in Alameda County when they lost their housing.

That's not what the Point in Time count asks or tries to measure. The statistic reported is the location of last known shelter. So, as an example, someone who moves to San Leandro from Fresno to crash on a friend's couch for two weeks and then is asked to find a different place to stay would count as "living in Alameda County" for the purposes of the statistic. Another example: a longtime homeless person who has cycled in and out of shelters in the region for decades counts as "living in Alameda County" even if they first lost their home in Kern County or out of state.

> Primary Cause of Homelessness (Top five responses, Fig. 19)

This is silly data to cite. Drug use is correlated with money issues, interpersonal relationship problems, eviction, foreclosure, inability to keep a job, and more. Maybe if the survey had a multiple response design, the distribution would be relevant.

> And anyway, how could it really be any different?

I can think of dozens.


Greyhound bus stations. If you’ve ever taken a bus across the country, they pick up a lot of people at prisons, and those people stay on until some West Coast city (LA, SF, Portland, or Seattle). That accumulates, and once they stick around for at least a year, they are considered residents (and for some definitions of "housing lost", that holds even if they were housed in a hotel once or couch-surfed at the start). Surveys that rely on self-reporting are incredibly inaccurate. One was done in Seattle and found that 80% of King County’s homeless population was previously housed in Pioneer Square, an absurdity that put the entire survey in doubt.

It really is in the self-interest of homeless-oriented agencies and NGOs to present the problem as being as local as possible. If it isn’t local, then giving out housing will only make the local problem worse (people will start arriving for their free housing from other areas of the country), and you can judge your success by how much worse the problem gets, which isn’t popular with local voters.

Without good information, at any rate, it isn’t weird that we are seeing the problem get worse for every billion we throw at it. Eventually the popular cities will just give up trying very hard because they never had the power to fix it in the first place.


Is this some kind of joke? Do you really think people who get evicted for not paying bills automatically get sent to a different city? How do you propose your magic mechanism to send people from SF away the moment they become homeless? Someone who ends up homeless is going to stay in a place that is familiar to them.

> Do you really think people who get evicted for not paying bills automatically get sent to a different city?

When did I say that?

> How do you propose your magic mechanism to send people from SF away the moment they become homeless?

What does this have to do with anything? The vast majority of tenants evicted in San Francisco receive both legal representation and relocation fees starting at ~$7k per person and more if you claim disability, which most do.

There are ~80 nonpayment evictions in the city annually, and there were ~0 from 2020-2023.

> Someone who ends up homeless is going to stay in a place that is familiar to them.

Probably true of the average homeless person, but there are many more homeless people outside San Francisco than in it, so you only need to believe a small percentage of, say, California's homeless population ends up in the city for local homelessness to be dominated by folks who lost their last stable shelter outside the city.


I've always kinda hated exceptions, as they make the contract between a caller and a callee hard to determine, and make your code highly coupled. I prefer the Go or Rust style of handling errors through return values. Briefly skimming the language, I'm not sure if there is anything that fixes that?

I think this kind of model could be cool if your IDE could dynamically determine all uncaught exceptions for a function and let you jump to each possible place where the exception could be thrown. Not sure how you handle coupling though. This seems like it would result in an insanely volatile control-flow graph.
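
For reference, a minimal Go sketch of the return-value style I mean (the function and error message are made up):

    package main

    import (
        "errors"
        "fmt"
    )

    // divide reports failure as an ordinary return value instead of throwing.
    func divide(a, b int) (int, error) {
        if b == 0 {
            return 0, errors.New("division by zero")
        }
        return a / b, nil
    }

    func main() {
        q, err := divide(10, 0)
        if err != nil {
            fmt.Println("error:", err) // the caller decides: handle, wrap, or return it
            return
        }
        fmt.Println(q)
    }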


This is what IntelliJ does for Java. A problem is reported whenever a function throws an exception that isn’t caught by any caller in the project, and you can jump to the implementation or to calls easily.

However, exceptions that a function can throw are part of the function signature in Java unless they extend RuntimeException (and your program won’t compile if you throw a checked exception without adding it to the signature). While the circumstances in Java make it much easier for IDEs to report uncaught exceptions, it’s a solvable problem for non-runtime exceptions using static analysis.

On the other hand, returning standardised Ok/Err-wrapped values seems like a simpler approach, both in terms of tooling support and developer convenience.


I think once Java has finished up exception switch-case, it will be a model followed by other languages. Being able to catch exceptions at both method and transaction boundaries will be a boon for readable control flow.

> and transaction boundaries

What are transaction boundaries? Is Java getting transactions?


Algebraic effects is going in completely the opposite direction.

> ...as they make the contract between a caller and a callee hard to determine, and make your code highly coupled. I prefer the Go or Rust style of handling errors through return values.

There is literally (literally!) no difference at all between throwing an exception and returning it as a value. Except for the fact that in the error-passing style you have to write the boilerplate by hand, instead of letting the compiler do it for you.

Why anybody with a sane and functioning brain would want to do that by hand in 2024 I will never understand.
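
For what it's worth, here's the hand-written propagation in question, sketched in Go (the file name and wrapping message are made up):

    package main

    import (
        "fmt"
        "os"
    )

    // load must manually "unwind": check, wrap, and return at every level.
    func load() ([]byte, error) {
        data, err := os.ReadFile("config.json")
        if err != nil {
            return nil, fmt.Errorf("load: %w", err)
        }
        return data, nil
    }

    func main() {
        if _, err := load(); err != nil {
            fmt.Println(err) // with exceptions, the compiler writes this plumbing for you
        }
    }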


I think people like it because the control flow of a given program is more obvious when you write it that way. No one can "throw Foo" three libraries down from their caller as an "API". See https://go.dev/blog/errors-are-values

You're just trading one type of control flow visibility for another. With even the most basic amount of error-return handling the actual control flow of your program is quite obscured.

I dislike Go, but I understand why people like most of its decisions.

I cannot fathom why people think it does error handling well, though. The codebase I work in has _so_ many errors that are completely ignored, and errors are a lot harder to track down.

These problems can be solved by writing better code, but the problem is that it's, of course, hard to write good code.

Java's exception handling has problems, but at least it gives you nice stack traces and you can't forget to propagate an exception.


> I cannot fathom why people think it does error handling well, though.

It doesn't. But it lets me do error handling well if I'm up to it. Which is the worst of all possible worlds except for the one where error handling is done badly.

> Java's exception handling has problems, but at least it gives you nice stack traces and you can't forget to propagate an exception.

Y'know, I actually had the chance to use raw Java (and even a modern version of Java, to boot) for something semi-recent-ish, and it was pretty great. I was surprised how well it worked out. Unfortunately, this wasn't the experience I got on any sort of enterprise Java project: the stack traces usually truncated before they got out of the framework being used.

I'm not really working in Go because I'm rejecting Java itself. I have certain disagreements with its design philosophy, especially in its early days, but it's reasonably possible to write decent code in it. I refuse to work in Java because of its ecosystem. I know what writing Java is like in an enterprise setting, and I'll take 1000 `if err != nil`s over that experience.


Enterprise Java should really be called something else because it ruins the name of vanilla Java.

Vanilla Java is excellent. I miss some language features, but recent updates have really made the language competitive.

On the other hand, enterprise Java continues to be what everyone thinks of when "Java" is mentioned. It's also terrible.


As an implementation detail, exceptions are usually much more expensive than just popping the stack, as computation is needed for each frame you traverse.

Having both throw and return is like regex: now you have two problems.


Exceptions are less expensive in the unexceptional case. Consuming a Result value requires a branch to see whether it holds Error. That is not necessary if the function returns a value directly (and throws an exception on error).

However, exceptions require a larger runtime and can make optimizations harder to reason about.

I have heard this before but never really understood it.

Take this:

```
int bar(int arg); // may throw

int foo(int arg) {
    int b = bar(arg);
    return b * 5;
}
```

VS:

```
Error<int> bar(int arg); // uses error type

Error<int> foo(int arg) {
    auto b = bar(arg);
    if (b.ok()) { // happy path
        return b.unwrap() * 5;
    }
    return b; // exceptional path: propagate the error
}
```

In the happy case (no exception) how can the first version be harder to optimize than the second one? In the exceptional case, exceptions will probably be slower. You trade a fast common case for a slow exceptional case, so that makes sense to me. Is the slow path the one that is hard to optimize? Is this what we are talking about when we say optimizations are harder to reason about? Or are there other things that become harder because of exceptions: things like RAII, defer, goto stuff?


> In the happy case (no exception) how can the first version be harder to optimize than the second one?

If your runtime has to do extra work to support stack unwinding, it could easily be slower: you're doing work in the happy path just in case the sad path happens. My guess is it's adding function metadata to some sort of data structure (a linked list?) so that it can figure out what's happening when it long-jumps to the handler. I'd bet that allocation has at least one conditional in it.

The alternative would be putting the metadata on the stack, letting the function complete normally, and then checking a flag to see which path you're on (same overhead).

To be clear, I have no idea how various runtimes implement this, just that the extra magic could easily have the same, or more, overhead than a conditional check.


Admittedly, I'm not an expert. I believe it's due to the extra control-flow points (assume all functions can throw, not just specially marked ones) and to the error not being just a local value, which precludes analyzing the operations done on the error path.

Unless you're using exclusively Java-style checked exceptions, there literally (literally!) is a huge difference. That is:

    def main():
        try:
            my_a = a()
        except (ExA, ExB):
            my_a = None
        ...

    def a() -> A:
        ...
        my_b = b()
        ...

    def b() -> B:
        ...
If b changes its signature to return type C, then it is a type error in a. main doesn't need to worry about what b returns; only a does.

BUT if b begins raising ExC instead of ExB, then that will break main at runtime. That is, main needs to be aware of what b could raise, even though it doesn't directly call it.


This is an unrealistic example, and it has error handling completely backwards. The exception handler in main() only knows how to handle specific types of errors; it doesn't know or care where they are thrown, that's not its problem. Let's say, for example, it knows it can handle network errors by retrying the operation.

If b() changes to throw a different type of exception that main() doesn't know how to handle, then main will break. And breaking is entirely the correct behaviour.

Maybe b() is an interface, and the actual implementation might not even have existed when main() was written. Maybe yesterday it was implemented with a file, tomorrow with an HTTP call, and maybe next week something else. But tomorrow, at least, it can retry the operation when it fails.


That's all great in theory, but in practice, I see except clauses mostly used to handle particular exception classes that callees are known to throw.

My applications have very few exception handlers and most don't do any "handling" at all except logging.

My most robust application is a desktop application with a single exception handler at the event loop that merely prints the exception message in a message box. If you try to save a file to a bad network location or something, just click OK and try again.


Literally the exact same thing happens if you use exception passing style and encapsulate all errors as a generic `Error` type. (Which is what everyone does in practice.)

I prefer to build result-passing, no-exception-throwing systems out of an exception-throwing core where the language itself may throw exceptions but the thing I build from the language always returns result.

Elm is an extreme case where indexing into an array returns a result type instead of an exception when the index is out of bounds, unlike most languages including Haskell.

Maybe my program logic is intended to always access only valid indices in an array, but here I'm given a result type where I have nothing to do in the error case since my code is never intended to reach that case. I would rather let the language throw an out-of-bounds exception here to tell me that my implementation is incorrect while I'm testing.

Same with libraries in a language. It really depends on the use cases of the library you're writing whether results or exceptions are better. The most convenient thing for the user of a library would be to provide both exception-throwing and result-passing alternatives. This is what the OCaml standard library does as well.
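
As a rough sketch of that layering, here's the idea in Go, treating panic/recover as the "exception-throwing core" (the names are made up):

    package main

    import "fmt"

    // safeIndex wraps Go's panicking slice indexing in a result-style API,
    // so callers get an error value instead of a crash.
    func safeIndex(xs []int, i int) (v int, err error) {
        defer func() {
            if r := recover(); r != nil {
                err = fmt.Errorf("index %d out of range", i)
            }
        }()
        return xs[i], nil
    }

    func main() {
        if _, err := safeIndex([]int{1, 2, 3}, 5); err != nil {
            fmt.Println(err) // prints: index 5 out of range
        }
    }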


If Go didn't naturally eat error context, I'd like it more. But in the time I've used it, it has made errors much, much more painful to root-cause without a debugger.

I don't think that's Nate Silver's point? He said that some specific journalists are generalizing their social circles' opinions to the wider public, leading them to make unfounded claims about the election. It seems clear that, for all the attention pro-Palestine groups are given, the issue is basically a rounding error for the election, yet many left-wing journalists are adamant that Biden is throwing the election over this issue. Are they bad journalists, or cynical propagandists? Dunno.


Plenty of companies have compute, but everyone is barely catching up to GPT-4 over a year after release. I'm sure places like Meta would love to unlock the details of what makes it so good.


There's no mystery: they just have a lot of actual user data, so they've been able to refine the question-answering behavior. They've also baked some common problem-solving strategies, like chain of thought, into the model via training.

