Hacker News: gwbas1c's comments

> So the main motivation for HTTP/2 is multiplexing, and over the Internet ... it can have a massive impact.

> But in the data center, not so much.

That's a very bold claim.

I'd like to see some data that shows little difference with and without HTTP/2 in the datacenter before I believe that claim.


Datacenters don't typically have high latency, low bandwidth, or varying availability issues. If you have a saturated http/1.1 network (or high CPU use) within a DC you can usually just add capacity.

Well, when you assume you make an ass out of u and me! (I get that this probably isn't a high-priority area to study.)

> If you have a saturated http/1.1 network (or high CPU use) within a DC you can usually just add capacity.

Then I'd be curious how much a lack of HTTP/2 support in the application layer costs. It might be small change for an early stage startup, but when companies get large, these corner cases end up paying peoples' salaries.


> Imagine: the invoice isn’t just a record in your ERP anymore. It’s a machine that wants to get approved. It wants to get paid.

Stopped reading at that point. Realized this is just fluff from someone letting their imagination run away.


You know what I really want?

Rosie, from the Jetsons.

I want a physical robot to do domestic tasks. All the things that Alexa+ automates are things that don't take much time, nor are they things I want to hand over to AI.


It’s been pretty shocking that genius-level, Turing-test-passing sentience was easier technology to make than a consumer-level robot with 1950s-era design expectations that stupidly picks up and moves objects.

It’s humbling to humanity. We differ from animals in our “spirits” but that part of us was less difficult to invent than “legs” which every macroscopic creature has mastered.


> Is there room for a more altruistic form of investing?

One common trend that I see is that the bad guys often think they are the good guys; and the good guys often screw things up so badly that they create nasty messes and end up turning into the bad guys accidentally.

For example, in Massachusetts we want to meet climate goals. To do this, we subsidize homeowners in getting heat pumps and updating insulation. To fund the subsidies, electric and gas bills went up and many poor people can't afford them. And oil bills stayed the same, incentivizing people to continue to heat with dirty fuel.


There are plenty of attempts at "safe C-like" languages that you can learn from:

C++ has smart pointers. I personally haven't worked with them, but you can probably get very close to "safe C" by mostly working in C++ with smart pointers. Perhaps there is a way to annotate the code (with a .editorconfig) to warn/error when using a straight pointer, except within a #pragma?

> Just talk to the platform, almost all the platforms speak C. Nothing like Rust's PAL (platform-agnostic layer) is needed. 2) Just talk to other languages, C is the lingua franca

C# / .Net tried to do that. Unfortunately, the memory model needed to enable garbage collection makes it far too opinionated to work in cases where straight C shines. (IE, it's not practical to write a kernel in C# / .Net.) The memory model is also so opinionated about how garbage collection should work that C# in WASM can't use the proposed generalized garbage collector for WASM.

Vala is a language that's inspired by C#, but transpiles to C. It uses the gobject system under the hood. (I guess gobjects are used in some Linux GUIs, but I have little experience with them.) Gobjects, and thus Vala, are also opinionated about how automatic memory management should work (in this case, reference counting), but from what I remember it might be easier to drop into C in a Vala project.

Objective C is a decent object-oriented language, and IMO, nicer than C++. It allows you to call C directly without needing to write bindings; and you can even write straight C functions mixed in with Objective C. But, like C# and Vala, Objective C's memory model is also opinionated about how memory management should work. You might even be able to mix Swift and Objective C, and merely use Objective C as a way to turn C code into objects.

---

The thing is, if you were to try to retrofit a "safe C" inside of C, you have to be opinionated about how memory management should work. The value of C is that it has no opinions about how your memory management should work; this allows C to interoperate with other languages that allow access to pointers.


> C# / .Net tried to do that. Unfortunately, the memory model needed to enable garbage collection makes it far too opinionated to work in cases where straight C shines. (IE, it's not practical to write a kernel in C# / .Net.

It was practical enough for Singularity and Midori.

Those projects failed due to lack of leadership support, not technical issues.

Additionally, Android and ChromeOS are what the Longhorn userspace could have looked like if leadership support had been there, instead of rebooting the whole approach with C++ and COM. That approach persists to this day in Windows desktop land, with WinRT doubling down on it, and failing as well, again due to leadership.


Gobjects are a nightmare. A poor reimplementation of C++ on top of C. You have to know which "unref" function to call and what type to cast to. For all the drawbacks of C++, it would have been less bad than Gobject.

It's less that it's opinionated and more that the WASM GC spec is just bad and too rudimentary to be anywhere near enough for the more sophisticated GC implementations found in the JVM and .NET.

It's been a while since I skimmed the proposal. What I remember is that it was "just enough" to be compatible with Javascript, but didn't have the hooks that C# needs. (I don't remember any mentions of the JVM.)

I remember that the C# WASM team wanted callbacks for destructors and type metadata.

Personally, having spent > 20 years working in C#, destructors are a smell of a bigger problem, and really only useful for debugging resource leaks. I'd rather turn them off in the WASM apps that I'm working on.

Type metadata is another thing that I think could be handled within the C# runtime: Much like IntPtr is used to encapsulate native pointers, and it can be encapsulated in a struct for type safety when working with native code, there can be a struct type used for interacting with non-C# WASM managed objects that doesn't contain type metadata.


Here's the issue which gives an overview of the problems: https://github.com/WebAssembly/gc/issues/77

Further discussion can be found here: https://github.com/dotnet/runtime/issues/94420

Turning off destructors will not help even a little because the biggest pain points are support for byref pointers and insufficient degree of control over object memory layout.


Just remember: There are a lot of chemicals that produce LSD-like effects. The farther away you are from the chemist, the less likely you know the actual drug that you're getting. This is especially the case at concerts / festivals, where the "game of telephone" might mean that you don't really know what you're taking.

After reading many trip reports on Erowid for LSD, I suspect that the authors often unknowingly took something else. A classic case is STP/DOM, which often comes in paper / tabs and is visually indistinguishable from LSD. If you ever hear the familiar, "I took some crazy acid. At first it didn't work, so I took another, and then I finally came up after an hour and had an intense trip," it was probably STP/DOM instead of LSD.


Same thing for ecstasy: It can be anything from a filler scam, to an MDMA dosage that will probably send you to the hospital, to drugs that work very similarly to MDMA and can also be called ecstasy, to weird-ass hard drugs that are something completely different.

In Vienna we have an organization that checks pills or powder from the batch you bought and tells you what exactly it is.


even LSD doesn't usually work immediately. it usually takes something like 30 minutes to an hour to start having effects.

i think one of my records is something like 12 hours after dosage to start feeling the effects. which i think happened because i also ate a bunch of food before taking

i use LSD recreationally every 1-2 weeks or so


I frequently eat before taking LSD and it never takes 12 hours to hit; after 1 hour it's always live, and the peak will occur within 3-4 hours. With a light meal or empty stomach it comes on a bit faster to a noticeable state (but the hidden hunger can interfere with well-being during the trip).

Checking out your profile, did you already have dissociative identity disorder before finding your way to LSD? How do you think the two interact? No judgement or implications, curious.

> Checking out your profile, did you already have dissociative identity disorder before finding your way to LSD?

yes. I believe the reason I first tried LSD was because another friend with DID said that LSD permanently gave them the ability to "think in parallel". unfortunately it did no such thing for me but it does do something else interesting that makes it still worthwhile. also I experienced an accidental ego death once and it was probably one of the most interesting things that's ever happened to me.

(I think ego death can only ever happen by accident because the moment you realize it's happened, that realization means it's over.)

> How do you think the two interact?

In my experience, LSD allows for a clearer identification of parts, especially in memories. That means while normally I can recall memories and feel like I was always the only one there, on LSD others can recall the same memories and realize that they were there too and had some influence on what happened at the time. I think there are also some memories where I wasn't there at all and that can only be recalled while on LSD.

LSD I think also helps with switching and like "identity" of parts. It's easier for them to be completely themselves, I think. Less blurry/fusion/unknown/etc. stuff


> i think one of my records is something like 12 hours after dosage to start feeling the effects.

ugh that would suuuuck. Can't imagine waking up for work on a Monday after a seemingly failed trip the Sunday before lol


Yep. Pretty sure I got DOx in 1990 as a teenager. A 24-hour trip that left me with panic attacks and HPPD for years (well, I still have HPPD decades later).

It was tie-dye blotter my friend got at a dead show. 2 hits. My friend took 5 and started throwing up within an hour which is not something I've ever heard of LSD causing.


I willingly took DOI once.

Also had a pretty awful experience.


At Bonnaroo, one of our party got STP instead of acid and we had to babysit all frickin weekend. Of course, I had already taken my mescaline so it was not fun at all.

Pathogens evolve:

FWIW: Mutations that cause the pathogen to kill the host quickly often "hurt" the pathogen because it doesn't have a chance to spread to other hosts. Hopefully this illness spreads "slowly enough" that this mutation has a poor chance of surviving.


This effect is unfortunately balanced out by ways of killing the host that spray body fluids everywhere in large amounts and help transmission.

I mean it's still better for the pathogen if it can accomplish that and keep the host alive, but that's real finesse.


I don't "pathogens evolve" and malaria is a viable reasoning here at all. There's a practically endless list of better hypotheses than Plasmodium suddenly completely breaking it's normal patterns.

For what it's worth though, vector-borne diseases in general (such as one spread by insects) are also largely exempt from the (erratic) tendency for the pathogen to evolve some moderation, as the host doesn't need to move itself and only needs to be fresh enough to attract attention from the vector species.


Makes me wonder how easy / hard it is to turn this kind of feature into a standalone product?

IE, send email, IP, browser agent, and perhaps a few other datapoints to a service, and then get a "fraudulent" rating?


This is basically what Google's reCAPTCHA v3 does: https://developers.google.com/recaptcha/docs/v3

The other versions of recaptcha show the annoying captchas, but v3 just monitors various signals and gives a score indicating the likelihood that it's a bot.

We use this to reduce spam in some parts of our app, and I think there's an opportunity to make a better version, but it'd be tough for it to be better enough that people would pay for it since Google's solution is decent and free.


Also called DaaS, "discrimination as a service"

Not sure if this was a slight, but yes, payment providers and other services need to discriminate valid uses of their service from fraudulent ones.

I'm thinking along the lines of "let's ban all the Chinese" and "let's ban all the Russians", because that's where the abuse comes from. That's often what those models, both simple and advanced, boil down to.

American stores could prevent most shoplifting by banning people of a certain skin color from entering. The US doesn't let them do this, even though it would most definitely work. They're not allowed to do it for a very good reason, but those reasons seem to be lost to internet companies, who seemingly push so hard for diversity, equity and inclusion.


Except stores aren’t banning the customers from browsing or building a cart. It’s only when someone goes to pay does the fraud detection run and block the transaction. What the US does allow companies to do, like Walmart, is run “background checks” on you before allowing you to cash a check. Over the years I’ve known many with problematic banking history or bad credit who would get denied from this.

I agree blanket bans like you bring up would be problematic and wrong, but I see nuance in using, say, the country of origin as one of the factors in their risk assessment.


There's nothing wrong with trying to discriminate against bots.

If your setup makes you look like a bot, that's YOUR problem. Stop doing things that make you look like a bot.

I get that you want privacy, but so do bots.


I once did a machine learning project at Intel. The end result was that it was no better than simple statistics; but the statistics were easier to understand and explain.

I realized the machine learning project was a "solution in search of a problem," and left.


Career hack: skip the machine learning and implement the simple statistics, then call it machine learning and refuse to explain it.

statistical regression is also machine learning.

hack v2: call it AI

> I searched everywhere, but there wasn’t a single library that had all the functionality I needed. I ended up going with python-pptx, since it seemed to be the best of imperfect options. As you can guess from the name, it’s a Python library. It was a bit inconvenient to integrate with the existing typescript stack, but it was worth the effort. At Listen, we believe in using the best tool for a task instead of sticking with one language just for the sake of it.

I find that calling a library in a different language can often be more effort than it's worth; in many cases the effort needed for a cross-language integration can take a lot more time than just accepting the limitations of the tools available in your language. Other times, it's easier to choose the language based on tool availability.

IE, once I used IKVM (a C# tool that converts .jars to .Net dlls) to call Rhino, a Javascript interpreter, because there weren't good Javascript interpreters in .Net. That turned out to be flaky, so then I tried running Javascript in a 2nd process, where the 2nd process was Java. That also turned out to be flaky, because there were some corner cases in my interprocess communication library.

