What I don't get is that the government now says it wants a 3rd runway (this has been debated for 30 years). Why add a 3rd runway, costing billions and taking decades, to an airport that can't use it 24/7 due to noise restrictions and doesn't even have resilient power from the grid? Heathrow should have been bulldozed years ago and replaced with housing, with the estuary airport built instead. Or the Maplin Sands project 50 years before that.
Adding a runway to an existing airport is relatively low risk and comparatively cheaper than building a new major airport altogether. Anyone considering the latter will surely look at the Berlin Brandenburg Airport [0], which ran roughly €4 billion over budget and opened nine years behind schedule. Given the dire financial situation of the United Kingdom right now, I would wager this is an incredibly hard sell.
I feel like if you build that extra capacity it will immediately get used, and you will still have no spare capacity in these situations. An airport holding capacity in reserve feels like it's just burning money, given the demand.
According to a BBC News report in 1970,[12] it was determined that if the wreck of the SS Richard Montgomery exploded, it would throw a 300 metres (980 feet)-wide column of water and debris nearly 3,000 metres (9,800 feet) into the air and generate a wave 5 metres (16 feet) high. Almost every window in Sheerness (population circa 20,000) would be broken, and buildings would be damaged by the blast.
It would damage buildings and shatter every window in town. Look up videos of the Beirut explosion to get a sense of the amount of energy involved. Even with water as a shield, the force and shockwave would still inflict harm.
(1) The UK doesn't have tsunami warnings, because it doesn't have tsunamis. This also means there is no institutional knowledge of how to deal with them.
(2) The wreck sits right by a river leading directly into the capital. I don't know how far a 2m tsunami would actually travel; is it close enough to the river mouth to focus it? Try https://www.floodmap.net to play with what "2m" would mean for the local area.
There is an effort in OpenTelemetry to create a standard query language for observability. There have been a lot of discussions with a lot of opinions; there were even several talks at KubeCon EU about it:
Why not just use SQL? With LLMs evolving to do sophisticated text-to-SQL, the case for a custom language for the sake of simplicity is diminishing.
I think that expressiveness, performance, and the level of fluency of base language models (i.e. the number of examples in the training set) are the key differentiators for query languages going forward. SQL ticks all those boxes.
You are right. SQL is the best language, but it likely needs some extensions. See SQL with pipe syntax: read the Google paper or try it out in BigQuery.
There are a lot of fundamental operations in observability that are very verbose in SQL:
- the rate operator, which turns an absolute counter value into a rate; possible in SQL with window functions, but it takes many lines of code
- pivot, e.g. when you want to see, over time, the top 5 error counts for the microservices hit hardest by errors, plus an "others" bucket
- sampling, which is frequent in observability and will be useful for LLMs; it is a one-liner in SQL with pipe syntax, even when customizing specifics
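To illustrate the first point, here is a sketch of the rate computation in plain SQL with window functions, assuming a hypothetical counter table `metrics(ts, counter)` (Postgres-flavored; names are illustrative):

```sql
-- Turn a cumulative counter into a per-second rate using window functions.
SELECT
  ts,
  (counter - LAG(counter) OVER (ORDER BY ts))
    / EXTRACT(EPOCH FROM ts - LAG(ts) OVER (ORDER BY ts)) AS rate_per_sec
FROM metrics
ORDER BY ts;
```

Compare that with a purpose-built language like PromQL, where the same idea is just `rate(counter_metric[5m])`.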
I actually believe generative AI plays extremely well with pipe syntax. It allows us to feed partial results to the LLM, to sample, and to show how the LLM evolves queries over time. SQL troubleshooting is not a single query but a series of them.
Still, SQL with pipe syntax is just syntactic sugar on top of SQL. It lets you use all SQL features and compiles down to SQL.
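A minimal sketch of what that desugaring looks like (table and column names are illustrative):

```sql
-- Pipe form:
FROM orders
|> WHERE order_date >= '2024-01-01'
|> AGGREGATE COUNT(*) AS n GROUP BY customer_id

-- What it compiles down to in standard SQL:
SELECT customer_id, COUNT(*) AS n
FROM orders
WHERE order_date >= '2024-01-01'
GROUP BY customer_id
```

Same semantics; the pipe form just lists the operations in the order they execute.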
Still, the best is yet to come. Previously, SQL extensions were a pain: there was no good place to put them, and table-valued functions were a mess.
Now it would be possible to have higher-order functions such as enrichment, prediction, grouping, or other data contracts. Example:
FROM orders
|> WHERE order_date >= '2024-01-01'
|> AGGREGATE SUM(order_amount) AS total_spent GROUP BY customer_id
|> WHERE total_spent > 1000
|> INNER JOIN customers USING(customer_id)
|> CALL ENRICH.APOLLO(EMAIL => customers.email)
|> AGGREGATE COUNT(*) high_value_customer GROUP BY company.country
This might be compiled to one SQL query that determines the distinct e-mail domains, then prepares an enrichment dataset, and later executes the final SQL with the JOIN.
Iterative SQL with pipes may also work better with GenAI.
Thanks for sharing. I wasn't aware they pushed that out. The ordering of this makes so much more sense. My only real concern is that CTEs are so commonplace and part of ANSI SQL that you'll see people trade a standard for this. But then, I also gripe that Snowflake uses // for comments and people get confused when their code doesn't work on another DB. Oracle's join syntax is another example.
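For comparison, here is roughly the same staged query as the pipe example above written with a standard CTE (dropping the hypothetical ENRICH step; names are illustrative):

```sql
WITH spend AS (
  SELECT customer_id, SUM(order_amount) AS total_spent
  FROM orders
  WHERE order_date >= '2024-01-01'
  GROUP BY customer_id
)
SELECT c.country, COUNT(*) AS high_value_customers
FROM spend s
JOIN customers c USING (customer_id)
WHERE s.total_spent > 1000
GROUP BY c.country;
```

It's perfectly readable and portable, but the SELECT/FROM/WHERE/GROUP BY clause order no longer matches execution order, which is exactly what the pipe syntax is trying to fix.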
Pipe-based query syntax has proven to be a better alternative to SQL in many databases, because it is easier to read, write, and maintain. The only issue is that this syntax isn't standardized across databases yet.
BTW, we at VictoriaLogs have also used pipe-based syntax for the LogsQL query language [1] since the first release in 2023. Recently I wrote an SQL -> LogsQL conversion guide [2], which turned out to be quite clear and easy to follow. The guide was written while converting the SQL queries of the ClickBench benchmark from ClickHouse to LogsQL [3].
Yeah, but that's also true for a mediocre (human) SWE...
edit: My point was that, since mediocre SWEs make up a large proportion of the workforce, an LLM that performs at the level of a "mediocre human" will still have massive implications for the labor force.
People keep saying that, but would you work in a team full of "mediocre" professionals or have a social circle full of "mediocre" friends if you had a choice?