Hacker News | tylerl's comments

I spent a fair amount of time at work five or six years ago trying to figure out how to make supply chain security actually possible in the general case with standard open-source tools. And I can tell you that the fact that Docker builds are fundamentally non-deterministic caused me no end of frustration and difficulty.

This was about the time that Bazel was being open-sourced, and Matt's rules_docker extension was already in there. A solution existed, so to speak, but it would have been nutty to assume that the average project would switch from the straightforward-looking Dockerfile format to using Bazel and BUILD files to construct docker containers. And Docker Inc wasn't going to play along; they were riding a high valuation that depended on them being the final word about containerization, so vocally pretending the problem didn't exist was their safest way forward.

At one point I put together a process and POC for porting the concept of reproducible builds to docker in a user-friendly format -- essentially you'd define a spec that listed your dependencies with no more specificity than you needed. Then tooling would dep-solve that spec and freeze it into a fully-reproducible manifest that encoded all the timestamps, package versions, and other bits that would otherwise have been determined at build time. Then the _actual_ build process left nothing to chance: grab the identified sources and build and assemble in a hermetic environment. You'd attach the manifest to the container, and it gave you a precise bill of materials in a format that you could confidently use for identifying vulnerabilities. Since the builds were fully hermetic, a given manifest would only ever produce one set of bits, which could be reproduced in an automated fashion, allowing you to spot supply chain inconsistencies.
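To sketch the shape of it (purely illustrative -- the real tooling was internal and never shipped, so every name and value here is made up):

    # Illustrative only: hypothetical names, not the actual internal tooling.
    import hashlib
    import json

    # 1. Loose spec: only as much specificity as the user actually needs.
    spec = {"base": "debian:bullseye", "packages": ["python3", "ca-certificates"]}

    # 2. Freeze step (online, run once): dep-solve the spec and pin everything
    #    that would otherwise be decided at build time.
    manifest = {
        "base": {"name": "debian", "snapshot": "20210101T000000Z"},
        "packages": [
            {"name": "python3", "version": "3.9.2-3", "sha256": "..."},
            {"name": "ca-certificates", "version": "20210119", "sha256": "..."},
        ],
        "build_timestamp": "1970-01-01T00:00:00Z",  # fixed, never "now"
    }

    # 3. Hermetic build (offline, repeatable): the manifest is the only input,
    #    so a given manifest can only ever produce one set of bits.
    manifest_id = hashlib.sha256(
        json.dumps(manifest, sort_keys=True).encode()
    ).hexdigest()
    print("manifest", manifest_id[:12], "-> exactly one image")

The manifest doubles as the bill of materials: it names every package and version in the image, so vulnerability identification becomes a lookup rather than guesswork.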

In my tooling, I leaned heavily on package providers like Debian as "owning" the upstream software dependency graph, since this was a problem they'd already solved, and Debian in particular was already serious about reproducibility in their packages.

In the end, it didn't go anywhere. There were a LOT of hacks to make it work since the existing software wasn't designed to allow this kind of integration. For example, the dependency resolution step required splicing in a lot of internal code from package managers, and the docker container format was (and probably still is) a mess that didn't allow the end products to be properly identified as reproducible without breaking other things.

Plus, this is a problem that only people trying to do security at scale even care about. We needed a sea change in industry thinking around verifiability before my solution would seem at all valuable to anyone outside a few huge tech companies.


Hey Tyler!

Funny to see you here. Matt and I haven't given up on this; we're giving a lot of it another try at Chainguard.


Sweet. Glad to hear someone's working on it who knows what they're doing. :-P


Does anyone know of a more generalized framework for doing this kind of thing? I'd been meaning to write a framework kind of like this for some time, but never got around to it, and was hoping someone else would. This one unfortunately doesn't really check the important boxes, but it's a good start. I was hoping more for:

* Target language agnostic (this one seems to get mostly there) -- the nodes communicate data/logic flow, and you serialize/deserialize accordingly.
* Focus on data flow, not just execution -- IO from nodes, visual indicators of data types (colors or something).
* Capable of visually encapsulating complexity -- define flows within flows.
* Ideally embeddable in web apps (e.g. a browser/Electron frontend or something).
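To make the first two bullets concrete, here's a rough Python sketch of the kind of structure I mean (the names are mine, not from any existing library):

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        inputs: dict   # port name -> data type; drives the visual indicators
        outputs: dict
        body: "Graph | None" = None  # a node can encapsulate a whole subgraph

    @dataclass
    class Graph:
        nodes: list = field(default_factory=list)
        edges: list = field(default_factory=list)  # (src, out_port, dst, in_port)

    blur = Node("blur", inputs={"image": "Image", "radius": "float"},
                outputs={"image": "Image"})
    pipeline = Graph(nodes=[blur])

The graph is pure data, so any target language can deserialize and execute it, and a browser frontend only has to draw nodes and typed edges.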

These are pretty popular to embed in complex "design"-oriented applications, especially ones that involve crafting procedures by non-programmers (e.g. artists, data scientists, etc.). Examples that come to mind include Blender, Unity, and Unreal.

A core part of the design of every successful implementation is that it allows efficient code to be crafted by people who don't think they understand code. Making it visual helps engage the brains of certain kinds of people. The "code as text" paradigm is spatially efficient, but it's like a brick wall for some people.


Having had to do that quite a few times, for instance for the audio / control / visuals graphs in https://ossia.io as well as some proprietary stuff, I'm pretty much convinced now that it's easier to just whip up the graph data structure that fits the problem than to try to make a generic framework for it. Every time I tried to use a library of dataflow nodes, it ended up not doing what I wanted the way I wanted, and rewriting something tailored to the use case was just much faster, especially considering that you likely want user-interface features specific to what you're doing with your dataflow graph.


I've been working on a more general framework. It's not public, but I'm happy to chat with anyone who has a specific interest. At present it consists of the following elements:

* Language specification -- code is stored as JSON.
* Compiler with an interpreter, plus JavaScript, TypeScript, and Rust backends.
* Editor -- not quite the classic node editor. Our new design fixes a lot of the problems and complaints with old-style node editors, particularly information density and spaghetti layout.
* Language server -- provides type hints, etc. to the editor.
* VSCode extension -- integrates the editor and compiler into VS Code.

Also of note, the language is statically typed, with generics. It handles loops and branches, and functions are first-class. Recursion is not supported at present.

In time we also plan to build an LLVM backend, so an intermediate language won't be required. The compiler is currently written in TypeScript, but as it matures we intend to make the language self-hosting.
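To give a flavor of the "code as JSON" part (this is an illustrative mock-up, not our actual schema):

    import json

    # Mock-up only: a tiny statically typed program as JSON, plus a toy
    # interpreter over it. The real schema differs.
    program = json.loads("""
    {
      "nodes": [
        {"id": "n1", "op": "const", "type": "Int", "value": 2},
        {"id": "n2", "op": "const", "type": "Int", "value": 3},
        {"id": "n3", "op": "add",   "type": "Int"}
      ],
      "edges": [["n1", "n3", "lhs"], ["n2", "n3", "rhs"]]
    }
    """)

    def evaluate(prog):
        # Seed const values, route them along edges, then apply ops in order.
        values = {n["id"]: n.get("value") for n in prog["nodes"]}
        args = {}
        for src, dst, port in prog["edges"]:
            args.setdefault(dst, {})[port] = values[src]
        for n in prog["nodes"]:
            if n["op"] == "add":
                values[n["id"]] = args[n["id"]]["lhs"] + args[n["id"]]["rhs"]
        return values

    print(evaluate(program)["n3"])  # 5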

If you want to talk, seek me out (I work for Northrop Grumman Australia).


Hey, same about looking for a generalized framework, but I'd approach it from the other side.

I'm pretty okay with the general shape of code (a nested tree-type structure), but I think the possible interactions are made unnecessarily awkward by being forced into a text editor.

You ought to be able to effortlessly fold, move around and disable/enable blocks of code. There's not much of a point in allowing indentation or whitespace mistakes and it doesn't usually make much sense to select a part of a token (or often even just a token without the block it's attached to).

These issues can mostly be fixed by representing tokens and lines in a tree-like structure of nodes, for which useful editors already exist.
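A sketch of the kind of node I mean (hypothetical, nothing from a real editor) -- fold state and disabled state live on the tree node itself, so one can't clobber the other:

    from dataclasses import dataclass, field

    @dataclass
    class CodeNode:
        text: str
        children: list = field(default_factory=list)
        folded: bool = False    # "I don't care about it"
        disabled: bool = False  # commented out: "I double-don't care"

    def render(node, depth=0):
        prefix = "# " if node.disabled else ""
        print("  " * depth + prefix + node.text)
        if node.folded:
            print("  " * (depth + 1) + "...")  # stays collapsed either way
            return
        for child in node.children:
            render(child, depth + 1)

    block = CodeNode("def legacy_path():", folded=True,
                     children=[CodeNode("do_old_thing()")])
    block.disabled = True  # disabling does not force the block open
    render(block)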

IDEs try to retrofit these things onto their text model (the most advanced one I'm aware of being IntelliJ), with automatic indentation fixing and folding, but even the best attempts aren't using 20% of the potential.

My most jarring example of how bad IDEs still are at this is folding plus disabling (commenting out): when I comment out a block of code that is folded (folded because I don't care about it), the IDE unfolds and shows it to me, even though now I double-don't care about it!

(Side note: I'm aware Xcode doesn't do this, but it's far from general.)

There is so much effort put into concepts, parsing and whatever, and everything uses trees, because that's the natural way of reasoning about code. Why are we still stuck interfacing with it through an awkwardly serialized version?


There's Google's Blockly: https://developers.google.com/blockly


Not sure if it fits the criteria, but I have built CanvasGraphEngine [0] with the intent for it to be a visual programming tool for devs: you still write code, but it can be made into a node/graph so it's easier to encapsulate code in a more visual manner. The main image [1] demonstrates how to train and predict an MNIST classifier, with image generation and model building. Under the hood it connects to Jupyter, and each node wraps only the Python code required for that node. The Jupyter version of it is not public, but it can be made public if there's enough interest.

[0] https://github.com/AIFanatic/CanvasGraphEngine [1] https://raw.githubusercontent.com/AIFanatic/CanvasGraphEngin...


https://github.com/msgflo/msgflo + https://app.flowhub.io/

Time is a flat circle. I'm sure this has been attempted countless times before, but it never seems to stick around.


We are working on https://hal9.com, which is language agnostic and allows you to compose different programming languages; however, at the moment we are focused on 1D graphs, with plans to support 2D graphs in the coming weeks.

If you want a demo or just time to chat, I'm available at javier at hal9.ai.


Maybe have a look at pyqtgraph: https://www.pyqtgraph.org/


The communication doesn't give that impression; instead it says that the paper makes claims that ignore significant and credible challenges to those claims. Dean said that these factors would need to be addressed, not agreed with.

Publishing a transparently one-sided paper in Google's name would be a problem, not because of the side it picks, but because it suggests the researchers are too ideologically motivated to see the problem clearly.

Ironically, it indicates systemic bias on the part of the researchers who are explicitly trying to eliminate systemic bias. That's just a bit too relevant to ignore.


If that is indeed the reason for the retraction demand, why didn't they state it up front in the meeting where they told Timnit she needed to retract the paper or remove her name? Instead, they initially refused to tell her the reasons for the demand.

They didn't give her a chance to address those factors at first.

Later they had a manager read her the confidential feedback on the paper in question, but still didn't let her read it herself.

If that feedback was only saying that the paper lacked relevant new context and advancements, why were they being so cagey about it? Something doesn't smell right about that.


Apple's new chip "shreds" a 2016-era GPU?

Wow.


Apple’s new integrated graphics chip shreds a discrete GPU.

The models announced last week were all the low end models that currently only have integrated Intel graphics. This is a huge upgrade for those systems. The expectation is that when Apple announces replacements for their higher end products, some of which use discrete GPUs, there will be a similar upgrade in performance.


It's not that impressive until you consider the difference in power draw between the laptop version of the discrete GPU and the entire M1 SoC.

>The power consumption of the GeForce GTX 1050 is roughly on par with the old GTX 960M, which would mean around 40-50 Watts

https://www.notebookcheck.net/NVIDIA-GeForce-GTX-1050-Notebo...


IIRC, all of GCP's IPv6 support is complicated by the fact that they adopted IPv6 from the get-go for internal routing: they layer the user-visible virtual address space on top of it, embedding the user-visible addresses inside the invisible "actual" VM addresses, and that layering strategy allows for something super amazing or fast or something. Something like that.

So then you ask the engineers, "when are you going to adopt IPv6?" And they're like: "What do you mean? We've never NOT used IPv6 for everything important."

On the one hand, my GCP server's "native" IP address that the OS sees is always an IPv4 address. On the other hand, it's always in the 10.0.0.0/8 range. Everything else is NAT and LB.
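GCP's actual internal layout isn't public, but the general trick of embedding a v4 address inside a v6 one looks like this (RFC 6052 / NAT64 style; illustrative only, not GCP's real scheme):

    import ipaddress

    prefix = ipaddress.IPv6Address("64:ff9b::")     # a /96 embedding prefix
    guest_v4 = ipaddress.IPv4Address("10.128.0.2")  # what the VM's OS sees

    # The v4 address occupies the low 32 bits of the larger v6 address.
    fabric_v6 = ipaddress.IPv6Address(int(prefix) | int(guest_v4))
    print(fabric_v6)  # 64:ff9b::a80:2 -- the "actual" address the v6 fabric routes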


My brother is an ER doc in a well-known facility, and he says this covid thing is freaking the everliving shit out of the front-line medical profession. This virus is just not behaving like a normal disease should.

The doctors who have been around long enough say that the feeling in the hospitals is just like the early days of AIDS. All you knew is that patients were dying from a disease that doesn't follow any of the normal rules, and nobody's sure why, and all the healthcare workers are nervous AF that they're going to get it too, but everyone is trying to be brave because the patients and family are scared out of their minds, and calm needs to start somewhere, right?


My close acquaintances on the front lines have expressed similar sentiments. This “vibe” you’re describing is dead-on.


The interesting thing is to ignore influenza and compare it with other pandemic viral diseases. A lot of them are similar in that they can (but don't always) involve multiple organs and result in all sorts of complicated disease courses in a high percentage of patients.


Thing is, Google requires an absolutely stupid amount of computing resources for running their core business. YouTube transcoding is a great example and a big one for sure, but I bet they have even bigger ones in there somewhere. I have no real data to base this on (and I'm sure nobody does), but I'd bet 5:1 odds that if Google were an AWS customer, they'd be bigger than all the others combined.

So in that case, optimizing for a single customer makes perfect sense if it's the right customer.


Eehm... Arbitrage is buying on one market to immediately sell on another market at a profit. Which is exactly what these folks were doing.

Except that the other market administratively blocked them from selling. Then the first market wouldn't reverse the purchase, leaving the would-be arbitrageurs holding the bag.


The ones who were able to unload at the moment they purchased were arbing. If you could sell the toilet paper before you even got it out of the store at Costco, you have executed a successful arb. There was no risk in this trading profit.

The others who carried inventory for a nontrivial period of time were market makers, and carried market maker risk. Market maker risk is the bid-ask moving against you when you are carrying inventory. These guys were basically carrying unhedged physical long positions in toilet paper thinking they had a free option to unwind at par, but Costco changed the game on them.

Not all of these guys were arbing.
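To put numbers on it (made up purely for illustration):

    cost = 20.00             # paid at Costco for a pack
    ask_at_purchase = 35.00  # resale price available at that same moment

    # Arb: the sale is locked in the moment you buy, so the profit is riskless.
    arb_profit = ask_at_purchase - cost  # +15.00, known up front

    # Market making: you carry inventory while the bid moves.
    bid_later = 12.00                    # after Costco changed the game
    mm_pnl = bid_later - cost            # -8.00; the par unwind was never free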


The study is fundamentally flawed and the article (headline claim in particular) is nonsensical.

The study is flawed in that it lacks a control group and is missing information about relevance. You say people don't see the road when you tell them to look at a screen. Wow, color me shocked. But if you're going to demand change, you need to be able to put that fact into context. Like, what are the measurements when drivers are asked to perform the same action on the built-in head unit? And is this a contrived example, or does it represent typical use?

The headline has a similar problem. We already know that driver reaction time is effectively infinite for distracted driving. If the driver doesn't see the obstacle, they aren't just slow to react; they don't react to it at all... because why would they react to something they don't realize exists? So you're comparing that fact to delays caused by chemical impairment? How? A driver with phone integration will have no impairment at all if he's not looking at the display at precisely the time of the incident, but intoxication has a persistent effect.


Really? That's certainly not something I generally hear. You can levy a lot of complaints against police systems, especially a few of the more infamous departments. But that one seems like it's pushing a bit to make a specific connection.


The United States "abolished" slavery in the 13th amendment of its Constitution; however, the amendment explicitly makes an exception for punishment for crime.

Text of the 13th amendment of the United States Constitution, Section 1 (emphasis mine):

> Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction.

It's really not pushing it when it says so right there.

