
To reply with my WPT Core Team hat mostly off, mainly summarising the history of how we've ended up here:

A build script used by significant swaths of the test suite is almost certainly out; it turns out people like being able to edit the tests they're actually running. (We _do_ have some build scripts, but they're mostly just mechanically generating lots of similar tests.)
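To illustrate, the kind of build script we do allow looks something like this (a hypothetical sketch, not any actual WPT tooling; the file and property names are made up):

    // generate-tests.js: a hypothetical sketch of a WPT-style build script,
    // stamping out many near-identical test files from a table of cases.
    const fs = require('fs');
    const cases = [
      { value: '50px', expected: 50 },
      { value: 'calc(25px + 25px)', expected: 50 },
      { value: 'max(10px, 75px)', expected: 75 },
    ];
    for (const [i, { value, expected }] of cases.entries()) {
      fs.writeFileSync(`margin-left-${i}.html`, `<!DOCTYPE html>
    <title>margin-left: ${value}</title>
    <script src="/resources/testharness.js"></script>
    <script src="/resources/testharnessreport.js"></script>
    <body style="margin: 0">
    <div id="target" style="margin-left: ${value}"></div>
    <script>
    test(() => {
      assert_equals(document.getElementById('target').offsetLeft, ${expected});
    }, 'margin-left: ${value}');
    </script>`);
    }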

A lot of the goal of WPT (and of the HTML Test Suite, which it effectively grew out of) has been to have a test suite that browsers actually run in CI. Historically, most standards test suites haven't been particularly amenable to automation (often many, or exclusively, manual tests; little concern for flakiness; etc.), and have made policy choices that effectively pushed browser vendors to write tests for themselves rather than add new tests to the shared suite: if you make it notably harder to write tests for the shared test suite, most engineers at a given vendor simply won't bother.

As such, there's a lot of hesitancy towards anything that regresses the developer experience for browser engineers (and realistically, browser engineers, by virtue of sheer number, are the ones who are writing the most tests for web technologies).

That said, there are probably ways we could make things better: a decent number of tests for things like Grid use check-layout-th.js (e.g., https://github.com/web-platform-tests/wpt/blob/f763dd7d7b7ed...).

One could definitely imagine a world in which these become a test type of their own, with the test logic (currently in check-layout-th.js) reimplemented in a custom harness, so the same comparisons could be done in an implementation without any JS support.
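For those unfamiliar, such a test looks roughly like this (a trimmed sketch: the expected geometry is declared as data-* attributes, which checkLayout() compares against the computed layout):

    <!DOCTYPE html>
    <script src="/resources/testharness.js"></script>
    <script src="/resources/testharnessreport.js"></script>
    <script src="/resources/check-layout-th.js"></script>
    <div style="display: grid; grid-template-columns: 100px 200px;">
      <div data-expected-width="100" data-expected-height="50" style="height: 50px;"></div>
      <div data-expected-width="200" data-expected-height="50"></div>
    </div>
    <script>checkLayout('div');</script>

Because the expectations live in the markup rather than in script, a harness that can query computed layout directly could check them without running any page JS at all.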

The other challenge for things like Taffy, which target only flexbox and grid, is that we're unlikely to add any easy way to distinguish tests which are testing interactions with other layout features (`position: absolute` comes to mind!).

My suggestion would probably be to start with an issue at https://github.com/web-platform-tests/rfcs/issues, describing the rough constraints, and potentially with one or two possible solutions.


And it's worth noting that VLIW survived longer in GPUs, where there isn't any expectation of ISA stability, which avoids that problem.


And, perhaps also relevantly, your compute kernel probably fits in iCache, even with VLIW instructions.


Note that Eurostar is perhaps an especially bad example in terms of affordability versus most HSR in Europe:

* the Channel Tunnel has very high access charges (because construction was very expensive, and as a private entity it took on a lot of debt and wants to pay that off!), and

* the terminals in both London and Paris are too small for the time taken by border control now that the UK is no longer in the EU (which has led to Eurostar running fewer trains, though demand has hardly decreased).


For comparison, here's a journey I do:

22-24 March, Copenhagen to Berlin and back, one adult:

€210 by train.

€170 by plane (no luggage added).

The plane is much faster, but I can get work done on the train. Both cities have cheap public transport connections between the centre and the airport.


What would you consider part of a PWA focus area?

You link to https://web.dev/learn/pwa/capabilities and https://whatpwacando.today elsewhere, but I think many would consider most of those features not to be inherently PWA features. Many, _many_, of them see wide usage outside of PWAs, which makes putting them into a "PWA" bucket somewhat meaningless. Buckets like "Storage" or "Media" would probably be more meaningful to web developers.

If you look at the basic definition of PWAs, it's primarily about manifests and service workers, and about using the web platform in combination with these.
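At its most minimal, that looks something like this (an illustrative sketch; the file names are arbitrary):

    <!-- index.html: link a manifest and register a service worker -->
    <link rel="manifest" href="/app.webmanifest">
    <script>
      if ('serviceWorker' in navigator) {
        navigator.serviceWorker.register('/sw.js');
      }
    </script>

    // sw.js: a fetch handler is what lets the app load offline
    self.addEventListener('fetch', event => {
      event.respondWith(
        caches.match(event.request).then(hit => hit || fetch(event.request))
      );
    });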

Manifests are hard to test automatically — their content is largely shown in differing parts of browser UI, and UI is by-and-large unspecified in web standards (because vendors want freedom to make their own UI decisions) — which makes including them difficult. It's also unclear how much there is in the way of interoperability problems.

Service workers are definitely something that could be proposed (though they haven't been in recent years); again, I don't believe there's been that much in the way of interoperability problems there recently.


This has been done to some degree at Green Park station (for ~£9M, about a decade ago), which is still quite warm in places.

But when TfL has had its budget slashed, and such projects are competing for financing with other projects with more direct operational benefit… it's hard to justify. And it's still not necessarily easy to drill at all!


Agreed that drilling isn't easy, but this does seem more like a budgetary and political issue than an engineering challenge.

Not much (if any) drilling may even be necessary, as there's also the option of running coolant lines directly into the train tunnels. Existing water infrastructure could also be used as coolant — especially if it's otherwise sitting idle for a fire that hopefully never happens!


> seem more like a budgetary and political issue than an engineering challenge.

Science is about proving that something can be done at all.

Engineering is about taking that result and productionising it - doing it reliably, safely, effectively, and on a budget, on the right site, even if that's in a busy city. "Time and budget issues" are part of any engineering challenge; very few engineering projects are "money no object".

And also, there currently isn't the political will in the UK to invest in expensive infrastructure and engineering projects. London actually gets more investment than the hinterlands. See https://www.reuters.com/world/uk/whats-happening-hs2-britain...


It seems like we're in agreement here? Geothermal heat pumps aren't particularly novel, so it doesn't seem like engineering a solution is the blocker here. It's mainly politics.


Exactly, budget is an engineering concern, and in this case it's nigh impossible to engineer a solution, given the space and budgetary constraints.


The budgetary constraints are purely political, though. Stuff like this has been engineered before at a reasonable cost (e.g. Green Park station for ~£9M, according to a poster above). So while, yes, things have to be engineered to a cost, it's clearly not some impossibly high number.


AFAIK they were contractually obliged to produce a new generation.

However:

> [The 2017 CPU] has no microarchitecture improvements over [the 2012 CPU]; despite nominally having a different stepping, it is functionally identical with the 9500 series, even having exactly the same bugs, the only difference being the 133 MHz higher frequency of 9760 and 9750 over 9560 and 9550 respectively.

So they practically _didn't_.


I wonder how many got sold or if it was just a run to fulfill that contract and nothing else.


We got 4 from that last batch that Intel ran. Our cloud migration project is way behind schedule and we needed the insurance. Support for the VMS version we’re running on them goes until 2028 I think.

We’re decommissioning a couple of first-gen Itanium servers in the next few months. I think the assumption is that we’d have to pay for disposal, so if someone wants them for Linux kernel support and can pick them up in Chicago, reply from an account with contact info and I’ll send an email if it’s possible to give them away for free (I can ask).


Great attitude!

You may have to destroy the hard drives for legal reasons.

Or secure-erase them, but then you may as well pull them (still, that would save someone another parts hunt). If you go this route, check to see if there are still any old installation media and hardcopy docs that can go with them; those might be worth almost as much as the machines to the recipient.


Notice that nobody replied ;)

Anyway, VMS servers don’t generally have hard drives; they are designed to be run in a cluster, with each server using Fibre Channel to connect to a SAN. Also, I found that the ones we’re getting rid of aren’t first-gen - they’re 9300 series (3rd or 4th gen).


Yes, it's an expensive paperweight by the time you have it.

But we can sex it up a bit: probably the people who would love to have this system to run Linux on haven't figured out yet that there is GOLD on those circuit boards. Lots of it!!

Free gold, just imagine...

That should do it ;)


I wonder if they used a newer process. Presumably not, since that would involve making new masks, raising the NRE costs massively.


Per the table at https://en.wikipedia.org/wiki/Itanium#Processors: nope, both Poulson (2012) and Kittson (2017) used the same 32nm process.


> It really amused me that people think "RDF never caught on" when it is the basis of so many major standards such as RSS, Adobe's XMP metadata, and, I just found out, ActivityPub.

What does it mean to have caught on or not, though? How many RSS, XMP, ActivityPub implementations are actually considering things as triples and not just considering parent-child relationships within the XML tree?
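To make that concrete: here's an illustrative RSS 1.0 fragment read both ways (the triple rendering is informal, not the exact output of any RDF tool):

    <!-- As a tree: a <channel> element with a <title> child. -->
    <channel rdf:about="https://example.com/feed">
      <title>Example Feed</title>
    </channel>

    <!-- As RDF, the same markup encodes triples along the lines of:
           <https://example.com/feed>  rdf:type   rss:channel .
           <https://example.com/feed>  rss:title  "Example Feed" .
         Most feed readers only ever do the tree reading. -->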


Modern HTML, per the HTML spec, has implicit namespaces: if you look at the DOM on any site, you'll find the root element of any HTML page is an element in the "http://www.w3.org/1999/xhtml" XML namespace (cf. `document.documentElement.namespaceURI`).

It was _absolutely deliberate_ that the DOM produced by parsing HTML and XHTML is now the same in overall structure, with the same XML namespaces, as it means browsers don't need to have things like `if localName == "html" and (namespaceURI == "http://www.w3.org/1999/xhtml" or isHTMLDocument)` all over the place.

Modern WHATWG specs put a lot of effort into reducing the number of places where the behaviour of a given DOM depends on whether the document is an HTML document—and most of those are places where non-namespace aware APIs default to the "http://www.w3.org/1999/xhtml" namespace in HTML documents.
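Both halves of that are visible from the console of any plain HTML page:

    // The parser puts elements in the XHTML namespace even though the
    // markup never mentions it:
    document.documentElement.namespaceURI;
    //=> "http://www.w3.org/1999/xhtml"

    // ...and non-namespace-aware APIs default to that namespace in HTML
    // documents:
    document.createElement('div').namespaceURI;
    //=> "http://www.w3.org/1999/xhtml"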


> I guess unless we want to assume Opera were digging their own grave by actively engaging in HTML5 and WHATWG, we have to conclude HTML parsing wasn't the hard part

As someone who was around the WHATWG from relatively early on, and who worked at Opera over the period when it moved to Chromium, I'd strongly suggest that HTML5 was a _massive_ win for Opera: instead of spending countless person-hours reverse-engineering other browsers to be compatible with existing web content, things like HTML parsing became "implement what the spec says" (which also implicitly made Opera match other browsers). Many of the new features in the HTML5 spec of that period were things that either were already basically supported (in SVG especially) or were relatively minor additions; I don't think they played a significant role either.
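That difference is easy to demonstrate today: because the spec defines error recovery exactly, every engine builds the same tree from malformed markup.

    // "<table><div>" is invalid markup: the spec's tree-construction rules
    // "foster-parent" the <div> out of the table, identically everywhere.
    const doc = new DOMParser()
      .parseFromString('<table><div>oops</div></table>', 'text/html');
    console.log(doc.body.innerHTML);
    //=> "<div>oops</div><table></table>"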

There's a good reason why Opera was heavily invested in the WHATWG, and it's that having a spec implemented cross-browser defining how to parse HTML and how cross-page navigation works eliminates one of the biggest competitive advantages of the more major browsers: web compatibility, both through legacy content targeting those browsers specifically and through web developers being more likely to test in them. (And to be clear, this would've been true for any form of error handling, including XML-style draconian error handling; but the long tail of existing content needed to be supported, so even if major sites had migrated to XML you'd still have needed to define how to handle the legacy content.)

The downfall of Presto is arguably mostly one of mismanagement and wrong priority decisions; I don't think Presto was unsaveable, and I don't think the Presto team was so significantly under-resourced that it couldn't compete.


This isn't how it works at all; a server sends a message to a push service, which then delivers it to the user agent. It is only once the user agent receives the message that it needs to start an installed service worker (if it isn't already running) to deliver the message in the form of the notification. (There's some interest in making it possible to send a notification directly via the push service without having the cost of starting a service worker, but that's future work.)
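Concretely, the service-worker side of that flow looks like this (a minimal sketch; the payload shape is just whatever your server sends):

    // sw.js: the push service wakes this worker (if it isn't running) and
    // hands it the message; only then is the notification built and shown.
    self.addEventListener('push', event => {
      const data = event.data ? event.data.json() : {};
      event.waitUntil(
        self.registration.showNotification(data.title || 'Update', {
          body: data.body,
        })
      );
    });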

In the Safari case, this push service is essentially the same as is used for native apps, and it will do the same aggregation as you see for other notification types.

