
We can learn a lot from Terry Davis

One positive side of immediate mode is that the code becomes a lot more straightforward. You only see that loop function, so it's very nice if you need a POC, or if it's going to run on embedded.

Here's some pseudocode for argument's sake: http://paste.debian.net/1317369

I don't think the immediate-mode code is really more straightforward. Retained mode looks just as simple and only has that one same function with the same 5 basic SLoC, only

- (A) the ui library has to do less work at runtime, resulting in a more responsive app (as per parent comment);

- (B) i was forced to reorder the component instantiation. In immediate mode, the counter's label depends on the button's state, whereas in retained mode, the button update callback depends on the counter instead; and

- (C) immediate mode only needed one sprintf instead of two since the initialization and update logic is unified. You can combine them in retained mode with a helper function or closure (and in real-world apps you would); in this trivial case this increases the SLoC but in real-world cases it reduces it.
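The counter example being debated above can be sketched as follows. This is a toy sketch with no real GUI library; the names `immediate_frame`, `build_retained`, and the dict-based state are all made up for illustration, but they show the structural difference: immediate mode re-declares the label text every frame, while retained mode creates the widget once and mutates it from a callback.

```python
# --- Immediate mode: the whole UI is re-declared every frame ---
def immediate_frame(state, clicked):
    """One frame: handle input and produce the label text in the same pass."""
    if clicked:                                # the button reported a click this frame
        state["count"] += 1
    return "Count: %d" % state["count"]        # label text is rebuilt every frame


# --- Retained mode: widgets are created once, callbacks mutate them ---
class Label:
    def __init__(self, text):
        self.text = text

def build_retained(state):
    """Build the widget tree once; the click callback updates the label."""
    label = Label("Count: %d" % state["count"])
    def on_click():                            # callback depends on the counter (point B above)
        state["count"] += 1
        label.text = "Count: %d" % state["count"]
    return label, on_click


# Both styles end up displaying the same thing after one click.
state = {"count": 0}
label, on_click = build_retained(state)
on_click()
print(label.text)                                        # "Count: 1"
print(immediate_frame({"count": 0}, clicked=True))       # "Count: 1"
```

Note how the shared `"Count: %d"` formatting is the single `sprintf` from point (C): immediate mode gets it for free, while retained mode has to factor it into the callback (or a helper) to avoid duplicating it between initialization and update.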


And it also isn't 100% best practice, more like all-practice. It tries to show any and every way you could approach a problem while somehow keeping it minimal. I really like it, but if you follow the tutorial 1:1 you'll most likely get frustrated.

> Really like it but if you follow the tutorial 1:1 you'll most likely get frustrated

I went through it and hated it. It's very tedious and I felt like I learned nothing from following along.


So what are they supposed to do?

While I want to agree, your comment just sounds salty

No law says it's illegal, but an interview should be unbiased with respect to someone's ethnicity, sex, etc.

It rolls off of your tongue nicely

Alright, but now picture this: it's now open to the masses, meaning an individual could probably even do it.


The problem of producing bioweapons is not computational, it's physical in nature. Even if predictions from these tools become 100% accurate and encompass 100% of the chemistry, you still need to actually do the manual steps to breed the bioagents. And, very importantly, you need to do so without getting you and your co-workers deathly ill long before finishing the thing. Which requires extremely sophisticated machinery.

Alternatively, you can go today to some of the poorer corners of the world, find some people with drug-resistant tuberculosis, pay them a pittance to give you bodily fluids, and disperse those in a large crowd, say at a concert or similar. You'll get a good chunk of the effects of the worst possible bioterrorism.


To be honest, apart from the containment systems you mentioned, much of basic biology research doesn't actually need very sophisticated kit for basic cloning and genetic manipulation.

A lot of the key reagents can just be bought - and by the way, that's why code like Screepy exists ( https://edinburgh-genome-foundry.github.io/ ).

I think the real thing that stops it is not that you can't make stuff that kills people, but the problem of specificity - i.e. how do you stop it from killing yourself?


> kit for basic cloning and genetic manipulation. A lot of the key reagents can just be bought

Along with the cryo-fridges required to keep said reagents.

Add about 10,000 USD to your purchase request.


Most reagents and kits for molecular cloning will be fine at -20°C.


These are very specific ideas you have..


It’s very frustrating when people consider the possession of information equivalent to malice. It suggests that the right way to run society is to keep people stupid and harmless.


Harmless is enough. The smart ones will figure out that knowing shortens lives.


Okay, I'll live in the harmless society, you go live in the harmful society.


We all live in the harmful society already right? I’m not aware of where to find this harmless society.

Suffering of finite beings is inevitable. While a very worthwhile goal, creating a harmless civilization isn’t possible. There are some common sense things we should do to prevent harm like negative consequences (prison etc) for needlessly harming each other. However, locking up knowledge doesn’t make much sense to me.

I’d rather explore the bounds of this world than mindlessly collect my drip of Soma and live comatose. To me that sounds more harmful.


I don't care about people harming people incidentally. I also don't want to shut down knowledge. But there is "knowledge" and there are "materials" that everyone agrees must be controlled and limited, like high explosives and bioweapons. Then the question is if large AI weights are a kind of "knowledge" or a kind of "material", and IMO they're much closer to material despite being data.

> I’d rather explore the bounds of this world than mindlessly collect my drip of Soma and live comatose. To me that sounds more harmful.

This only once again demonstrates that winning a debate is entirely about getting to define the choice under consideration. To me, it's not about Soma, it's about "humanity survives" and "humanity goes extinct due to out-of-control superintelligence." I don't want to die, so I'm for AI regulation.


Remember, I also said stupid. Ima go live in the non-dummy society and you can do whatever.


These are ideas you should read and think about because intelligence agencies all over the world have been thinking about them for the past hundred years.


I hear you, but I don't think an individual can. If I gave you $20,000 and an open source 70% accurate protein folding model and told you to develop, mass produce, and develop a deployment mechanism for a highly infectious and deadly pathogen, I don't think you could make that happen. Nor do I think you could do it if you had a PhD in microbiology.


right now


The risks don't exceed what is already out there. If someone wants to do damage, especially in America, there are more than enough ways they can do it already. The technology should be made free. I also wonder how much the claims are being exaggerated and are marketing-speak vs. real results. Is there any benchmark for this that they have published?


No. I love to be egalitarian as well but this AI thing really feels different. We didn't just invent a better plow or a more durable sword. We're working on making a better brain. I think social media shows us a pretty good slice of the average person and it's not great. Now imagine they can manipulate the smartest person in the world to do dangerous, dumb shit.


>we didnt just invent a better plow

But we invented metalworking. And if metal were for kings' chairs only, we would still have no plows.


Honestly, I wouldn't worry about bioterrorism as much as handling mishaps. Stick new proteins into a bacterium the wrong way, don't wash hands thoroughly enough, and suddenly something is eating all the trees in the region, or whatnot.

Designing an effective, lethal pathogen - fast enough to do damage, but slow enough to not burn itself out - is hard. Accidentally making something ecologically damaging is probably much simpler, and I imagine the future holds plenty of such localized minor ecophagy[0] events.

--

[0] - Yes, I totally just learned that term from https://en.wikipedia.org/wiki/Gray_goo a minute ago.


You raise an extremely important point. It appears to me that most people do not understand the implications of your point.

Organized terrorism by groups is actually extremely rare. What is much less rare are mass shootings in the USA, by deranged individuals.

What would a psychopathic mass shooter type choose as a weapon if he not only had access to semi-automatic weapons, but now we added bio-weapons to the menu?

It seems very clear to me that when creating custom viruses becomes high school level knowledge, and the tools can be charged on a credit card, nuclear weapons will be relegated to the second most likely way that our human civilization will end.

I believe the two concepts being brought together here are the Law of Large Numbers, and the sudden ability for one single human to kill at least millions.


> It seems very clear to me that when creating custom viruses becomes high school level knowledge

That would be very bad indeed, but there is no path from AI to that. Making custom viruses is never going to be an easy task even if you had a magic machine that could explain the effects of adding any chemical to the mix. You still need to procure the chemicals and work with them in very careful ways, often for a long time, in a highly controlled environment. It's still biology lab work, even if you know exactly what you have to do.

Also, bioweapons already exist and have been used in a few conflicts, even as recently as WWII. They're terrifying in many ways, but are not really comparable to the horror of nuclear weapons hitting major cities.


> You still need to procure the chemicals and work with them in very careful ways, often for a long time, in a highly controlled environment. It's still biology lab work, even if you know exactly what you have to do.

You can get that as-a-Service, and I imagine that successes in computational biology will make mail-order protein synthesis broadly available. At that point, making a bioweapon or creating a grey goo (green goo) scenario will be a divide-and-conquer issue: how many pieces you need to procure independently from different facilities, so that no one suspects what you're doing until you mix them together and the world goes poof.


We know the principles of how to make very powerful and dangerous inorganic compounds today, with extreme precision. Do you see any chemistry-as-a-service products that sell to the general public? Is it easy to obtain the components and expertise to make sarin gas, a clearly existing and much simpler to synthesize substance than some hypothetical green goo bioweapon?


> Do you see any chemistry-as-a-service products that sell to the general public?

Sort of? Depends on how general you insist the general public to be. I've never used one myself, but I used to lurk on nootropic and cognitive enhancement groups, and I recall some people claiming they managed to get experimental nootropics synthesized and sent from abroad, without any special license or access. And then there are all the lab supply companies - again, I never tried, but talking with people I never got the impression it's in any way restricted, other than being niche; I never heard of them e.g. requiring a verified association with a university lab or something. Hell, back in high school, a classmate of mine managed to get his hands on some uranium salts (half for chemistry nerdom, half for pure bragging rights), with zero problems.

> Is it easy to obtain the components and expertise to make sarin gas, a clearly existing and much simpler to synthesize substance than some hypothetical green goo bioweapon?

Given that I know for a fact that making several kinds of explosives and propellants is a bored-middle-schooler-level problem, I imagine sarin is also synthesizable by a smart amateur. Fortunately, the intersection of being able to make it and having a malicious reason for it is vanishingly small. But I don't doubt that, should a terrorist group decide to use either, there's approximately nothing that can stop them from cooking some up.

What makes me more nervous about potential biosafety issues in the future is that, well, sarin is only effective as far as the air circulation will carry it; pathogens have indefinite range.


> Making custom viruses is never going to be an easy task even if you had a magic machine that could explain the effects of adding any chemical to the mix. You still need to procure the chemicals and work with them in very careful ways, often for a long time, in a highly controlled environment. It's still biology lab work, even if you know exactly what you have to do.

You appear to be talking about today. I am referring to some point in the future.

If you extrapolate our technological progress out to the future, it certainly seems possible, at some point.


Not based on AI biochemistry simulators, at the very least.


Time for WWWLLM communication!


Now we can take this LLM and paste it right into Windows' Write!


It shouldn't be a very large one. Lots of empty space and dead synapses leading nowhere in the source material.

