Hacker News | bArray's comments

I disagree. I think we need to make it clear that following hyperlinks should always be a cognitive choice.

> To download W3C's editor/browser Amaya, [click here].

This gives you an option, where multiple options may be available.

> To download Amaya, go to the [Amaya Website] and get the necessary software.

This is even better, as 'click here' assumes the input device.

> Get [Amaya]!

Whilst being simpler, it does not make clear that the action is optional.

Whether I click something may require some additional information around the link, for example:

> To download W3C's free editor/browser Amaya, go to the [Amaya Website] and select the latest version on the home page.

Now I know that it's free, and I have instructions to carry out to find what I'm looking for.


This is very much from a sighted person's point of view. When you use screen readers, you can switch to a 'links' navigation mode and only go through links, in which case all you'd hear would be "click here", "Amaya website" and "Amaya".

See also https://www.w3.org/WAI/WCAG22/Understanding/link-purpose-in-..., also keeping in mind that since June, the underlying WCAG guideline is a EU-wide legal requirement for company websites.


I don't think this is right. The WCAG allows for "programmatically determined link context" which includes text surrounding the actual link. "click here" is bad but "Amaya website" or "Amaya" are fine.

e.g. from the WCAG examples you link to:

> An account registration page requires successful completion of a [Turing test](https://www.w3.org/TR/turingtest/) before the registration form can be accessed.


> This is very much from a sighted person's point of view. When you use screen readers, you can switch to a 'links' navigation mode and only go through links, in which case all you'd hear would be "click here", "Amaya website" and "Amaya".

I think this is a UX problem with screen readers, and actually probably something LLMs might massively help with. If I was designing something for screen readers, I would probably have interactive elements within a context window, i.e.:

    <context>To download W3C's free editor/browser Amaya, go to the <a href="..">Amaya Website</a> and select the latest version on the home page.</context>
The user would hear "Amaya Website" and would then have some way to also hear the link context. For pages missing the context window, some attempt could be made to generate one automatically.
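That automatic fallback could be prototyped crudely: pair each link's label with the sentence it appears in, and let the assistive tool announce that sentence on request. Here is a naive regex-based sketch in Python (the `link_contexts` helper and the parsing approach are my own illustration, not any real screen-reader API; a real tool would use a proper HTML parser and the accessibility tree):

```python
import re

def link_contexts(html: str) -> dict[str, str]:
    """Naive sketch: pair each link's text with the sentence around it,
    approximating the 'context window' a screen reader could announce."""
    contexts = {}
    # Plain-text copy: keep link labels, drop all tags.
    text = re.sub(r"<a\b[^>]*>(.*?)</a>", r"\1", html)
    text = re.sub(r"<[^>]+>", "", text)
    for match in re.finditer(r"<a\b[^>]*>(.*?)</a>", html):
        label = match.group(1)
        # The 'context' is the first sentence containing the label.
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            if label in sentence:
                contexts[label] = sentence.strip()
                break
    return contexts

page = ('<p>To download W3C\'s free editor/browser Amaya, go to the '
        '<a href="https://example.org">Amaya Website</a> and select '
        'the latest version on the home page.</p>')
print(link_contexts(page))
```

Navigating by links alone, the user would still hear only "Amaya Website", but the tool could now offer the surrounding sentence on a keypress.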

> See also https://www.w3.org/WAI/WCAG22/Understanding/link-purpose-in-... , also keeping in mind that since June, the underlying WCAG guideline is a EU-wide legal requirement for company websites.

On this page itself, within the box "Success Criterion (SC)" the listener would hear "purpose of each link", "programmatically determined link context", "ambiguous to users in general". The last one is, well, ambiguous to users in general. Even as a sighted person, without selecting it, I wouldn't know what it is actually going to link to.

I would say that the web is generally actively hostile towards screen readers, and not because of a lack of WCAG adoption. You have text in images (not just static, but also GIFs), JS-heavy components, delayed loading, dependent interactions (such as captchas, navigation drop-downs, etc.), infinite scrolling - the list just goes on. The web is primarily a highly visual space and will likely remain so.

I don't think the EU's accessibility act is actually enforceable [1]. Unlike cookies, some of the changes required are massive, to the point where it may not even be worth existing in the EU market if it's enforced.

> Incorporating captions into video content, as well as providing audio descriptions and transcripts

Even proving you are compliant is costly, between audits and training staff. You can always trust the EU to regulate itself out of being competitive.

[1] https://www.wcag.com/compliance/european-accessibility-act/


I don't think either of the suggested options delivers the best possible UX. Copy of course depends on context, but "click here" is never justified as the best alternative.

You can do:

• [Download X] - immediate download link.

• [Learn more about X] - go to webpage, discover other interactions there

• [Register to download X] - if registration required

Short and concise copy is generally better, extra information rarely makes content better.


I think it really depends on the context. In a news article it rarely makes the content better, but in a documentation wiki, the context can be everything. I think we are fooling ourselves to suggest that there is zero nuance and that there is a 100% correct approach always.

Best:

> You can download the Amaya Browser from [Amaya’s download page]

It’s both explicit for sighted reader and screen readers too.

Yes there’s some duplicated words. But the point of that paragraph isn’t to be artistic, it’s to be informative. You can save the creative word play for your regular paragraphs.


I would even say you could go with if duplicated words is an issue:

  You can [get the Amaya Browser] from the download page

Nice refinement there.

I still think that the missing context is an issue. Imagine the page is some 10k words, by the time you get to the bottom, you might not remember what "Amaya" is. So just saying "Amaya's download page" tells the user that it is a download, but nothing about what it is a download for.

I wonder how successful the screen reader experience is for using the web. Without checking URLs, how can they be sure, for example, that they don't enter their credit card details on http://bank.xyz/scam_page rather than https://bank.com ?

Or how do they know whether the download page automatically downloads the file whilst they are on it?

I can only imagine that using the web is extremely difficult.


Yeah, context matters. If it's an Amaya product page then the context is already there. But if it's a large article that meanders across a few topics, then your approach would be better. Though in that scenario I think you're still better off directing people to a product page instead of a download page.

I wish they would stop with snap; snaps have been nothing but a pain. Ubuntu keeps pushing half-baked ideas into the wild - who asked for a system that would randomly kill apps without warning? It's like the Rust SSH thing: they are going to make it the default whether you like it or not, even though they know it is not 1:1 and probably never will be.

I'm currently having an issue with Firefox where it will not stop crashing all of the time, even whilst using Hackernews. Not a RAM or CPU issue, just buggy software pushed through a "move fast and break things" attitude.


Google “remove snaps Ubuntu 24.04” (or whichever version you’re on). I did so, nuked all snaps and replaced Firefox with an upstream Deb repository. Everything’s working fine so far.

Or run Pop!_OS, which is Ubuntu without the snaps.

May as well run Debian at that point

I’ve found that Ubuntu comes with more things set up out of the box than Debian, so it gets me up and running faster. Or could look into Mint. Sure, to each their own - as long as it has no snaps!

Try MX Linux :)

On a server, 100%, but on a desktop/laptop Ubuntu does bring some conveniences (though Pop!_OS improves that balance: the good stuff minus the over-dependence on snaps).

Or Linux Mint..! :-D

> snaps have been nothing but a pain.

I remember being vocal about it being a bad solution to a problem nobody had while I was working for Canonical. That's probably one of the reasons it seems unlikely they'll ever hire me again.


I like using snap on my LTS servers: I can test new CLI tools there and see whether the new version has some fixes that I need, and if the snap works better I can use it without messing around with installing some PPA to update the tool and its dependencies.

What I dislike about snaps is the performance. Somehow they have managed to make them practically unusable on computers older than a few years.

It's like they saw Red Hat and thought "ah, the reason people complain about that is because they're just not going fast enough."

> However, entry-level opportunities have taken a significant hit, dropping by nearly a third since the advent of widely available generative AI tools at the end of 2022.

As others have pointed out, we're off the back of COVID and money is tight. Most companies are looking to shrink team sizes, hence the lack of hiring. AI tools have almost zero relevance, and the rest of the article does not substantiate the claim at all.

> Despite this, year-on-year growth remained positive at +0.49%, marking the third consecutive month of annual improvement and suggesting a slow but steady recovery from the recent market slump.

There are things that can be done to screw with the numbers, but they only work for a short time. The UK, for example, only counts as unemployed those actively looking for work, but it is well known that there is a growing number of permanently unemployed which is hidden. The number of people applying for PIP (a type of disability allowance typically awarded to those unable to work) is growing by 1k a day [0]. For perspective, the population of the UK grows by ~1.6k a day and is projected to fall.

The growth figure is likely based on the ONS [1], which is constantly having to revise its figures for data that was already available at the time of publication [2] - they do this a lot. One hack by this government was growing the public sector at the cost of the private sector, which is unsustainable. A company I know halved its employees but still pays the same amount of national insurance, so the economic value per employee is reduced due to increased overheads.

This is not unique to the UK, though. I speak to many Europeans who are going through a similar situation, and I hear similar stories throughout the West. The EU for example has a growing issue with ECB bond management, where it plays games to try to bring down borrowing costs. The Japanese market, a large purchaser of foreign debt, is starting to get concerned [3], which signals a potential crisis for the US's ability to borrow at low rates. From all accounts China is just about keeping its head above water and is pulling a lot of tricks.

[0] https://www.theguardian.com/politics/2025/jun/24/pat-mcfadde...

[1] https://www.ons.gov.uk/economy/grossdomesticproductgdp/bulle...

[2] https://www.bbc.co.uk/news/live/business-68680004

[3] https://www.reuters.com/markets/asia/japan-consider-buying-b...


Interesting project. I'm quite interested in developing a small programming language myself, but am not sure where to start. What resources do you recommend?

Crafting Interpreters (https://craftinginterpreters.com) is a super friendly, step-by-step guide to building your own language and VM. Looking forward to seeing what kind of language you come up with too!

I'll second this. It's fantastic.

The concepts that the OP talks about (liveness analysis, constant folding, dead code elimination), and similar stuff revolving around IR optimization, can be found explained in great detail in Nora Sandler's "Writing a C compiler".
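For a flavour of what one of those passes does, constant folding can be sketched in a few lines over Python's own AST. This is a toy illustration of the general technique, not code from either book, and it only handles the four basic arithmetic operators:

```python
import ast
import operator

# Map AST operator nodes to their concrete arithmetic functions.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

class Folder(ast.NodeTransformer):
    """Fold binary operations whose operands are both literals."""
    def visit_BinOp(self, node: ast.BinOp) -> ast.AST:
        self.generic_visit(node)  # fold children first, bottom-up
        fn = OPS.get(type(node.op))
        if (fn and isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)):
            try:
                value = fn(node.left.value, node.right.value)
            except ZeroDivisionError:
                return node  # leave e.g. 1/0 for runtime to report
            return ast.copy_location(ast.Constant(value), node)
        return node

def fold_constants(src: str) -> str:
    tree = Folder().visit(ast.parse(src))
    return ast.unparse(ast.fix_missing_locations(tree))

print(fold_constants("x = 2 * 3 + 4"))  # → x = 10
```

A real compiler does the same thing on its IR rather than source, but the bottom-up "fold children, then try the parent" shape is the same.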

I remember the Apple store in my local area being the most elegant of the available stores, well lit and surrounded with glass windows, being exceptionally clean inside and always busy. The staff were well dressed and the atmosphere pleasant.

I ventured inside a few times to check out the latest technological offerings from Apple, and was impressed. None of the sales staff ever approached me, but I was able to afford the devices, despite perhaps being dressed as though I couldn't. The irony is that I see even some of the poorest people in the UK walking around with iPhones, and their children use iPads.

To this day I have never purchased or owned an Apple device, but I did appreciate the well-built hardware and snappy software.


Needs to be removed from front page.

It's literally cheaper to build this kind of thing from scratch than to try and re-use existing components like this.

Maybe there is still a market at this price point, for example if there are tax breaks, or the price of the thing you are selling is so much that the customer just swallows the extra price.

I still think it would be better if we were to go the way of modular systems. I'm currently building out a controller system that has a modular interface and should be upgradeable as I swap out components and improve it, without adding much to the overall footprint. I think this really is the way forwards with this kind of thing.


> I still think it would be better if we were to go the way of modular systems.

Modularity can be expensive, though. The unused IO soaks up pins and pushes you to bigger packages and up the SOIC/QFP/QFN/BGA chain. You add multiplexers and transceivers and buffers and so on. The traces take board space and layers and the connectors cost a big chunk of the BOM. Separate modules add SKUs and manufacture, assembly and inventory overhead, and the offboard interfaces take space, power and time.

Whenever you have any appreciable volume, it's almost always cheaper to integrate and demodularise, even before you consider the physical size and form factor of the device.

Otherwise all embedded systems would be made of dev boards wearing a hat. Now, yes, there are many systems that use something like a RPi Compute Module or a TI ControlCard, but once you crack a certain volume, it's an easy cost optimisation to "flatten" it into a single PCB.

And the one thing you do not want from designing around a module is the possibility that the supply of surplus OldPhone X3 mainboards or whatever dries up in two years and it turns out the new generation of modules are just a bit different.


Yeah, the website says a whole bunch of nothing IMO, and doesn't really define a problem needing to be solved. Perhaps they've struck a deal with phone carriers to get unsold phones that are destined for landfill - they have a T-Mobile logo on their site. That's the only business angle I can imagine: get tens of millions worth of components for like 1/10 of the price.

Google is telling me around 400k phone-like devices are thrown into landfills every day, so there might be a market to bring down costs eventually if they get the logistics properly moving.


I think this is proving out the concept. A dev board costing 150 doesn't matter for professional projects; it matters for tinkerers. What matters is the unit price at the desired quantity.

And this has 4G/LTE (because it is a smartphone) so comparisons to base RPis are largely irrelevant.

And in industrial embedded Linux stuff there is essentially no correlation between price and performance. Most don't need performance and they aren't really cost-optimizing this bit of the production line very hard. It just needs to be certifiable, reliable and replacable.

I do hope they come down a lot in price and prove this out over many more phone variants.


> And this has 4G/LTE (because it is a smartphone) so comparisons to base RPis are largely irrelevant.

Yes? So have countless new phones at around 150€. Including screen, battery, case, and warranty.

Edit: Just for fun, a list from a german shopping/comparison site, aptly named 'scrooge', selected for LTE, at least 2GB RAM, Octacore, Android 15 to not get too old stuff, in stock, 4 days delivery max, capped at 150€ incl. delivery. Sorted for lowest price first:

https://geizhals.de/?cat=umtsover&xf=10063_15.0~2607_2048~26...

Editoftheedit: To stay with the terminology of the 'largely irrelevant base RPI', they've built (or intend to?) a base board for whatever they are using as CM/Computemodule to plug into. I see some GPIO, some USB, one Ethernet.

A little bit of board layout, soldering of mostly passive components, and that's it.

Best of luck. (LOL)


> It just needs to be certifiable, reliable and replacable.

I think those are some good unanswered questions here. The supply of used phones is pretty cyclical, and almost all of them are out of production when their supply peaks.

Also pretty much all smartphones rely heavily on components without data sheets and with proprietary firmware blobs that won't be updated or patched without first-party support, or at all.


If you only built it for the most popular models of end-of-life phones, maybe you could get the price point down enough to sell them at a profit. But for everything else, just forget it. A RasPi is cheaper with a better community.

They seem to be treating the old phone as modular, they mount the old PCB on a carrier board with more I/O, they don't look to be desoldering individual chips.

You should be able to just reflash the phone and maybe point a small fan at the case. OEMs do everything they can to make that impractical though.

Try adding "ass": it explodes [1]. Not sure if that's because of keywords such as 'class' or something else? "dumb" is really on the uptick [2].

[1] https://www.vidarholen.net/contents/wordcount/#fuck*,shit*,d...*

[2] https://www.vidarholen.net/contents/wordcount/#fuck*,shit*,d...*


Assembly, assign, assert, assume, associate... I think most of what you're picking up is not actually naughty.


Good ol' Scunthorpe problem.


Yea, if you remove the star at the end, it goes back to normal


Report: Adding ass makes stuff explode. Dumb is on the uptick.

Resolution: Behaving as expected. Won't fix.
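The failure mode is easy to reproduce: a trailing wildcard like `ass*` effectively turns the query into a stem match, which drags in ordinary identifiers - the classic Scunthorpe problem. A small Python sketch (the helper names and sample line are my own, not the site's actual code):

```python
import re

CODE = "class Assembler: pass  # assert assumptions; associate labels"

def naive_hits(term: str, text: str) -> list[str]:
    """Wildcard-style search: any word containing the stem matches."""
    return re.findall(rf"\w*{term}\w*", text, re.IGNORECASE)

def strict_hits(term: str, text: str) -> list[str]:
    """Whole-word search: the stem must be the entire word."""
    return re.findall(rf"\b{term}\b", text, re.IGNORECASE)

print(naive_hits("ass", CODE))   # picks up class, Assembler, pass, assert, ...
print(strict_hits("ass", CODE))  # empty: no standalone occurrence in this line
```

Dropping the wildcard is equivalent to switching to the whole-word variant, which is why the chart "goes back to normal" without the star.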


Combine this with the Meta Pixel illegal localhost tracking that bypasses privacy measures [1] [2] and the privacy leaking could be off the scale.

I think this goes for all things - medical data such as heart rate, blood sugar, steps, weight, VO2 max, etc, could all be seriously misused.

Personally I try to use apps that are not cloud-based, or make my own, but this isn't an option for everybody.

[1] https://www.zeropartydata.es/p/localhost-tracking-explained-...

[2] https://news.ycombinator.com/item?id=44235467


You don't need a Meta pixel if the app simply... shares the data with Facebook, as Flo was caught doing.

https://en.wikipedia.org/wiki/Flo_(app)#Privacy_and_security...


LLMs for code review, rather than code writing/design could be the killer feature. I think that code review has been broken for a while now, but this could be a way forward. Of particular interest would be security, undefined behaviour, basic misuse of features, double checking warnings out of the compiler against the source code to ensure it isn't something more serious, etc.

My current use of LLMs is typically via the search engine when trying to get information about an error. It has maybe a 50% hit rate, which is okay because I'm typically asking about an edge case.


ChatGPT is great for debugging common issues that have been written about extensively on the web (before the training cutoff). It's a synthesizer of Stack Overflow and greatly cuts down on the time it takes to figure out what's going on compared with searching for discussions and reading them individually.

(This IP rightly belongs to the Stack Overflow contributors and is licensed to Stack Overflow. It ought to be those parties who are exploiting it. I have mixed feelings about participating as a user.)

However, the LLM output is also noisy because of hallucinations — just less noisy than web searching.

I imagine that an LLM could assess a codebase and find common mistakes, problematic function/API invocations, etc. However, there would also be a lot of false positives. Are people using LLMs that way?


If you do "please review this code" in a loop, you'll eventually find a case where the chatbot starts by changing X to Y, and a bit later changes Y back to X.

It works for code review, but you have to be judicious about which changes you accept and which you reject. If you know enough to know an improvement when you see one, it's pretty great at spitting out candidate changes which you can then accept or reject.


Why isn't this spoken about more? I'm not a developer, but I work very closely with many. They are all on a spectrum from zero interest in this technology to actively using it to write code (which correlates inversely with seniority in my sample set), yet there is very little talk of using it for reviews/checks. Perhaps that needs to be done passively on commit.


The main issue with LLMs is that they can't "judge" contributions correctly. Their review is very nitpicky on things that don't matter and often misses big issues that a human familiar with the codebase would recognise. It's almost just noise at the end.

That's why everyone is moving to the agent thing. Even if the LLM makes a bunch of mistakes, you still have a human doing the decision making and get some determinism.


So far, it seems pretty bad at code review. You'd get more mileage by configuring a linter.


My work has been adding more and more AI review bots. It's been like 0 for 10 for the feedback the AI has given me. Just wasting my time. I see where it's coming from, it's not utter nonsense, but it just doesn't understand the nuance or why something is logically correct.

That said, there have been some reports where the AIs have predicted what later became outages when they were ignored.

So... I don't know. Is it worth wading through 10 bad reviews if 1 good one prevents a bad bug? Maybe. I do hope the ratio gets better though.


> LLMs for code review, rather than code writing/design could be the killer feature

This is already available on GitHub using Copilot as a reviewer. It's not the best suggestions, but usable enough to continue having in the loop.


Totally agree - we’re working on this at https://sourcery.ai


Yeah Claude Code (Opus 4) is really marvelous at code review. I just give it the PR link and it does the rest (via gh cli). — it gives a md file that usually has some gems in it. Definitely improving the quantity of “my” feedback, and certainly reducing time required.

