When TypeScript came out, you were seen as weird for wanting such a thing. Around 2015 I had a VP of engineering DM me to tell me to stop discussing TypeScript in the company dev channel (if you're reading this, that was a dick move). Nowadays you're kinda the odd one out if you don't want types. So back then, the idea of adding types, even optional ones, probably wouldn't have gone down well. The closest we ever came was ES4, which of course never landed: https://evertpot.com/ecmascript-4-the-missing-version/
That's a good point. Has anyone hardened a database by locking out users who select columns that don't exist, or who run other dubious queries? This would obviously interrupt production, but if someone is running arbitrary queries against your DB it's probably worth it?
I once did a security assessment of a product like the one you describe. Among its other problems, the product itself had SQL injection vulnerabilities.
If you're mature enough to do that, you're mature enough to catch SQL injections in the first place. There shouldn't be that many handwritten queries to review anyway, since most mundane DB access goes through a framework that handles injection properly...
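For anyone unfamiliar with what "handles injection properly" means in practice, here's a minimal sketch using node-postgres; the table, columns, and function name are made up for illustration:

```typescript
// Minimal sketch with node-postgres ("pg"): the SQL text and the values
// travel to the server separately, so user input is never spliced into
// the query string.
import { Pool } from "pg";

const pool = new Pool(); // connection details come from PG* env vars

// "users" and its columns are hypothetical; substitute your own schema.
async function findUserByEmail(email: string) {
  const { rows } = await pool.query(
    "SELECT id, email FROM users WHERE email = $1", // $1 is a bind parameter
    [email],
  );
  return rows[0] ?? null;
}
```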
Zane Lackey (with Dan Kaminsky) gave a talk that discussed doing literally that sort of thing, back in 2013. Zane went on to found Signal Sciences (acquired by Fastly), doing this sort of stuff in the 'WAF' space.
I guess the main difference is that a WAF attempts to spot things like injection (unbalanced delimiters, SQL keywords in HTTP payloads where SQL shouldn't exist, etc.), typically without knowledge of the schema, whereas GP is talking about the DBMS spotting queries that disagree with the schema. Might as well do both, I suppose.
That’s not what the talk is about - it’s about using DBMS query error logs to spot attackers. Errors like “table doesn’t exist” or “invalid syntax” on your production database can be extremely high-signal indications that something is wrong, potentially maliciously so.
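As a rough illustration (not from the talk), a watcher over Postgres logs might look like this; the log path and the alerting are stand-ins for whatever pipeline you actually run:

```typescript
// Hypothetical log-watcher sketch: flag SQLSTATEs that rarely appear in
// healthy production traffic but show up constantly during injection probing.
import * as fs from "node:fs";
import * as readline from "node:readline";

// 42601 = syntax_error, 42P01 = undefined_table, 42703 = undefined_column
const SUSPICIOUS = ["42601", "42P01", "42703"];

async function watch(logPath: string) {
  const rl = readline.createInterface({ input: fs.createReadStream(logPath) });
  for await (const line of rl) {
    if (SUSPICIOUS.some((code) => line.includes(code))) {
      // In real life: page someone / feed a SIEM instead of console.error
      console.error("possible SQL probing:", line);
    }
  }
}

watch("/var/log/postgresql/postgresql.csv");
```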
In the very early 2000s I worked at a company building something along those lines. We could analyze SQL and SMB traffic on the fly and spot anomalous access to tables/columns/files, etc. Dynamic firewalling would have been the next step if the company hadn't had other issues.
So if you deploy code before you run the associated DB migration, or misspell a column name, you magnify the impact from whichever code paths (and application-tier nodes) are running the broken SQL to your entire production environment.
A simple variation on a hard shutoff: immediately page "significant risk a successful SQL exploit was found", and then slow the attackers down:

If a SQL query requests an unknown table, log the error, but have that query time out instead of responding with an error. Or, even better, have the offending query appear to succeed but return fake table data, turning it into a honeypot built into the DB. This could be done at the application layer or in the DB.

The goal is to buy an hour for defenders to determine how to respond, or whether it's a red herring. There are a variety of ways of doing this without significant user impact; a sketch of the application-layer variant follows.
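Here's a minimal sketch of that, assuming node-postgres (pg 8+, which exports DatabaseError); the SQLSTATE list, the alerting, and the fake rows are all made up:

```typescript
// Application-layer tarpit sketch: on errors that smell like probing,
// alert, stall, and return plausible-looking fake rows instead of a
// revealing error message.
import { Pool, DatabaseError } from "pg";

const pool = new Pool();
// 42601 = syntax_error, 42P01 = undefined_table, 42703 = undefined_column
const PROBE_CODES = new Set(["42601", "42P01", "42703"]);

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

async function guardedQuery(text: string, values: unknown[] = []) {
  try {
    return (await pool.query(text, values)).rows;
  } catch (err) {
    if (err instanceof DatabaseError && PROBE_CODES.has(err.code ?? "")) {
      console.error("paging on-call: possible injection attempt", { text });
      await sleep(30_000);               // buy defenders some time
      return [{ id: 1, name: "test" }];  // honeypot data, not an error
    }
    throw err; // ordinary failures still surface normally
  }
}
```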
Yeah, it's definitely something that could do more harm than good to a company long term. But I'm sure there are instances where the tradeoff is worth it. Those companies would invest more heavily in runbooks, or maybe even CI that runs migrations on deploy. Deleting columns would need to happen one deploy after the code stops referencing them. Probably no rollback at all.
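To spell out that "deploy + 1" ordering, a sketch against a hypothetical migration runner:

```typescript
// Deploy N: ship application code that no longer reads or writes
// users.legacy_flag. No schema change yet, so old and new code coexist.

// Deploy N+1: only now is it safe to drop the column. The runner
// interface here is hypothetical.
export async function up(db: { query: (sql: string) => Promise<unknown> }) {
  await db.query("ALTER TABLE users DROP COLUMN legacy_flag");
}

// No down() on purpose: probably no rollback at all, since re-adding the
// column wouldn't bring its data back.
```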
Yes, it's some physical connection being broken. It's not unusual to adjust the ways in which the electrical grid is connected. See https://www.youtube.com/watch?v=7Q-aVBv7PWM from Practical Engineering, which talks about some of the nuances of building switches that can safely break connections between different parts of the grid.
Genuine question: is this comparison really apples to apples? Microsoft wanted to compete with Sun, right? Does Google want to compete with programming languages like this? My gut tells me this is NIH rather than a desire to compete.
Microsoft didn't want to compete with Sun so much as have an application development language with a garbage collector that wasn't owned by Sun.
You don't make much money off programming languages inherently.
This also elides an obvious riposte (so you mean they should have just used Mono? How did that work out?) and a metric ton of differences between what C# targets and what Dart targets.
MS wanted to fracture the Java ecosystem. The Microsoft Java VM was an attempt to lock developers into MS Java rather than Sun Java. They created J# and C# because of the Sun lawsuit they lost.

They still wanted a Java-like ecosystem, but one they could be sure only ran on Windows servers.
MS spent years being hostile to open source software. It's only in about the past decade that they've turned a corner.
Here's a famous email from Bill Gates about Java and how to stop it.
Exactly, if they're used enough that someone has declared the types in a @types package. Sometimes these are excellent. However, I sometimes work with code in fairly niche domains written in pure JS that can pretty much return anything depending on the input (not necessarily even on the input's type), rendering even these bindings very hard to write and not ergonomic at all.

And this sometimes holds even for fairly popular libraries, like d3.js, which I sometimes use for visualization. The idiosyncratic API design (object manipulation, selecting DOM nodes by string id and doing stuff based on their associated data) just doesn't really work in a strongly-typed context without 50% of the code being unreadable casts. And d3 is at least still trying to be somewhat type-safe, unlike other libraries.
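For a feel of what that looks like with @types/d3 (the datum shape and element ids are invented for the example):

```typescript
import * as d3 from "d3";

interface Point { x: number; y: number }

// Selections carry two generic params (element type, datum type); you
// assert both up front or cast your way out later.
const circles = d3
  .select<SVGSVGElement, unknown>("#chart")
  .selectAll<SVGCircleElement, unknown>("circle")
  .data([{ x: 1, y: 2 }, { x: 3, y: 4 }] as Point[]);

circles
  .join("circle")
  .attr("cx", (d) => d.x) // d is Point only because .data() was typed
  .attr("cy", (d) => d.y);

// Anything pulled off an untyped selection comes back needing a cast:
const first = d3.select("circle").datum() as Point;
```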
It seems odd to put crypto and LLMs in the same boat in this regard. I might be wrong, but are there any crypto projects that actually provide value? I'm sure there are ones that do protein folding or something, but among the big ones?