Hacker News | devinprater's comments

Wait, you mean there are senior software engineers who don't know how to use a *nix terminal? I've been using them since I was like 16 or so.

You can 100% work your way to a senior position without ever leaving windows. The people who are like that just don't tend to be hanging out on platforms like HN.

To be clear, this was developing software running on *nix environments, and they were all using Mac (or WSL) and the usual open-source *nix dev tools. This is not a case of developers purely targeting Microsoft environments (indeed it would be excusable in that case).

I am, although I have used *nix occasionally.

In Europe C# fills the role of Java.

You're just in an American echo chamber.

Now, the number of senior C# engineers in Europe who couldn't fix a broken deploy on IIS or an SSL cert problem on a Windows server? That is rather high in the Windows world too.


I was hiring for a senior devops role a few years ago. Part of the interview was to ssh into a machine and debug some web server configs we had purposely broken. Step one was to email an ssh public key to me. Now, I don't remember the command since I do it so rarely, and I don't expect them to either, but for a senior role we expect you can google this; it's not supposed to be hard. The number of people who could not generate an ssh key was crazy. I had people emailing me their current company's private key. And even when we did spend half the interview on the key, they never could pass the trivial part of finding what we broke, which effectively just required reading the log file.
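For the record, generating a key really is a one-liner (ssh-keygen ships with OpenSSH; the filename and email comment here are placeholders):

```shell
# Generate a fresh ed25519 keypair in the current directory.
# -N "" means no passphrase (fine for a demo; use one for real keys),
# -f sets the output path, -C adds a comment to the public key.
ssh-keygen -t ed25519 -N "" -C "you@example.com" -f ./demo_key

# demo_key.pub is the public key -- the only part you ever share.
cat ./demo_key.pub
```

The private key (`demo_key`) never leaves your machine; only the `.pub` file gets emailed.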

They know that it's a magic box into which you paste whatever incantation is in the README.md or spoonfed to you by an LLM, but otherwise have no mental model of how it works. Hell, they didn't even have the reflex of pressing "arrow up" to correct a mistyped command. And don't get me started on the lack of mastery of their tools - whether Docker, package managers or other tools they use daily.

(and speaking of LLMs, those can actually be a wonderful teaching aid - but they don't seem to be bothered by their lack of knowledge and so don't even try to take advantage of them)

I bet the guys are good at Leetcode though, or whatever bullshit interview process that hired them. This is in a Western European company that has adopted all the "best practices" possible, and places high importance on career progression, and these are considered senior SWEs on track to become engineering managers.


@noprocrasted - thank you for your 100% spot on comments. +1. And you summarised it so well that I hope they will be remembered by job seekers of today.

Ugh, such an overreaction. ADB is still a thing. Apple doesn't even have an official command-line tool where you can just push an IPA to your phone. Goodness.
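For context, sideloading over ADB is about two commands once USB debugging is enabled. Shown here as an echoed dry run, since running it for real needs a connected phone; `my-app.apk` is a placeholder name:

```shell
# Dry run of sideloading over ADB -- drop the echo to run for real.
# Requires USB debugging enabled on the phone and platform-tools installed.
echo adb devices                # confirm the phone shows up as a device
echo adb install -r my-app.apk  # -r replaces an existing install, keeping its data
```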

For how long will ADB work, though? Obviously Google doesn't want users to install apps outside of its control.

Google doesn't want millions of people to have every cent of their money stolen.

This measure is about making it harder to pull off a specific type of scam that is plaguing South East Asia. No conspiracy.

For actual information on the purpose of this change rather than conspiracies, I refer you to https://android-developers.googleblog.com/2026/03/android-de...

Since the victims of these scams do not typically own a traditional computer/cannot be pressured to get to one quickly, ADB will remain a thing.


With that reasoning, any action would be justified to stop scammers. Google should capture all your calls and check whether there could be scamming going on, right?

The current malware situation in the Android store does not help carry that point:

> https://www.forbes.com/sites/daveywinder/2025/03/18/60-milli...

> https://www.theregister.com/2025/08/26/apps_android_malware/

> https://www.androidheadlines.com/2026/04/novoice-android-mal...


> Google should capture all your calls and check if there could be scamming going on, right?

If you're dumb enough to own a Pixel then arguably they're doing something just as bad. [1]

[1] https://www.reddit.com/r/GooglePixel/comments/1097qm0/manual...


> Google doesn't want millions of people to have every cent of their money stolen.

Megacorporations like Google do not care a single bit about ordinary people. They only care about making more money. How do they make more money? By preventing people from installing NewPipe and Blokada.


I sorta get that reasoning, but is a 24 hour cooldown really going to stop scammers? They're already used to multi-day scams, so wouldn't they just say they'll call back in a day to finish the process?

Yup. The specific scam here is built upon preventing the victim from talking to trusted individuals. A cooldown breaks the spell.

Complex, multi-day pig butchering stuff is not what Google is going after here or would have any hope to defeat. But they can deal with banking malware.


So I could still push an app to my phone via adb after this nonsense gets implemented?

Google is altering the deal. Pray Google does not alter it any further.

> Integrating AI where it’s most meaningful, with craft and focus.

Spoken like a true AI.


A lot of these models struggle with small text strings, like "next button", which screen readers are going to speak a lot.


I think I tried everything I could on my Android, and: 1. outside webpage reading, not many options; 2. as browser extensions, also not many (I don't like copying URLs into your app); 3. they all insist on reading every little shit, not only buttons but also "wave arrow pointing directly right", which some people use in their texts. So basically reading text aloud is a bunch of shitty options. Anyone jumping on this market opening?


We'd love to serve this use case. I'll make a demo for this next week and comment here with it.


Lol last night, on a forked and accessible version of Termux I vibecoded into existence, on an Emacs and Emacspeak vibejiggered to work on Termux, I vibecoded, with gptel-agent, an Emacspeak package to make it speak when tool calls are being asked for by the model, and automatically speak any explanatory text after all the tools are called and edits are made. All on my phone with a Bluetooth keyboard. It's so easy, even a blind man can do it! :)

And because it's all controlled by me, I can tell it how to have the package speak and what it should ignore, and I'm not stuck with whatever some sighted person at some big company thinks a blind person wants. Everything should at best be open source, and at the very least be hackable.

All that to say, AI has helped me out a ton. Now I can be as productive as Emacs, a Linux terminal, and maybe one day a Linux GUI with real Firefox and such, allow. And it would have *never* happened without AI.

So let's please do continue bringing on the AI. Make it smart and local, so I can have continuous AI descriptions right on my phone, with the ability to screen share or even agent-control my phone to get around inaccessible apps. Oh and fix AI app accessibility so the app sends output to screen readers when I type to it cause I hate talking to my phone and not every blind person wants to speak all the time. Ugh I hate that stereotype.


This is so amazing! I am stunned and deeply interested in how you set it up and in your workflow. To me it sounds like you are already living in the future many of us sighted people imagine with AI.

I am not sure if you are able to use it, but I saw Droidrun.ai (https://www.droidrun.ai/) the other day, and agents should be able to drive the phone.


I'm glad I have ChatGPT to turn that image with benchmarks into an accessible table lol. I like Claude Code, but their accessibility in anything other than accidental CLI accessibility is frustrating. Try it: load a screen reader like VoiceOver for Mac (cause I know most programmers use Macs) and go to claude.ai. In the "write your prompt to Claude" box, type something like "What will the weather be like tomorrow?" and press Enter/Return. Then close your eyes for a good 30 seconds, and within those 30 seconds, tell me how you'd know whether the model has replied. Then try the same thing with ChatGPT. I would /love/ to be proven wrong.


Thanks for sharing! Just tried it for the first time... Anthropic should really do better.


Over the past month, with vibe-coding, I've:

* Made Termux accessible enough for me to use.

* Made a MUD client for Emacs.

* Gotten Emacs and Emacspeak working on Termux.

* Gotten XFCE to run with Orca and AT-SPI communicating to make the desktop environment accessible.

None of this would have happened without AI. Of course, it's only useful for the few people who are blind, use Android, and love Linux and Emacs and such. But it's improved my life a ton. I can do actual work on my phone. I've got Org-mode, calendar, Org-journal, desktop Chromium, etc., all on my phone. And if AI dies tomorrow, I'll still have it. The code is all there for me to learn from, tweak, and update.

I just use one agent, Codex. I don't do the agent swarms yet.


I'm completely blind. I like Linux. I've started to love Android since getting a Samsung and getting rid of OnePlus, cause accessibility. Termux is cool, but its accessibility wasn't. So, I had Gemini wrangle it up a bit into my fork of Termux [1].

Now it reads (usually) only newly incoming text, I can feel around the screen to read a line at a time, and cursor tracking works well enough. Then I got Emacs and Emacspeak working, having Gemini build DecTalk (a TTS engine) for Termux and get the Emacspeak DecTalk speech server working with it. I'm still amazed that, with a Bluetooth keyboard, I have Linux, and Emacs, in my pocket. I can write Org and Markdown, read EPUB books in Emacs with Nov.el, look at an actual calendar and not just a list of events, and even use Gemini CLI and Claude Code, all on my phone! This is proof that phones, with enough freedom, can be workstations. Next up is getting Orca working on a desktop environment in Termux-GUI. But even with just Emacs and the shell, I can do quite a bit.

Then I decided to go wild and make a MUD client for Emacs/Emacspeak, since accessible ones for Android are limited, and I didn't trust my hacks to Termux to handle TinTin++ very well. So, Emacs with Emacspeak it was, and Elmud [2] was born.

Elmud has a few cool features. First of all, since Emacspeak has voice-lock, like font-lock but for TTS, ANSI colors can be "heard", like red being a deeper voice. Also, a few MUD clients on Windows have sound packs, which make them sound more like a modern video game while still being text-based. I got a few of those working with Elmud: you just load one of the supported MUDs, and the sound pack is downloaded and installed for you. It's easy and simple. And honestly, that's what I want my tools to provide: something that I, or anyone else who chooses to use them, can easily get the most out of.

None of this would have been possible without AI. None of it would have been done; it would have remained a dream. And yes, it was all vibe-coded, mostly with Codex 5.2 on high thinking. And yes, the code may look awful. But honestly, how many closed-source programs look just as bad, or even worse, under the covers?

[1] https://github.com/devinprater/Talking-termux-app

[2] https://github.com/devinprater/elmud


Nah, just Microsoft Copilot. No OS.


No way. There's just no way.

