Hacker News | mark_l_watson's comments

I have thought about dropping all the big tech companies: run LLMs only locally via Hugging Face, use only a small third-party email provider, use only open source, and do social media only via Mastodon.

What would the effect be? Ironically, I might be more productive.

I am pissed at Microsoft now because my family plan for Office365 is set to renew and they are tacking on a surcharge of $30 for AI services I don’t want. What assholes: that should be a voluntary add-on.

EDIT: I tried to cancel my Office365 plan, and they let me switch to a non-AI plan for the old price. I don’t hate them anymore.


I agree. I am happiest just using plain Emacs for coding, separately consulting an LLM every once in a while, or using gemini-cli or codex once or twice a day for a single task.

My comment is about coding, but I have the same opinion for writing emails: once in a blue moon I will use an LLM manually.


You raise a good point. For specific programming tasks, I don't really want token-by-token suggestions in an IDE. And, like you, when I have a specific problem (e.g., "I need to do Kerberos auth like this in that language"), I ask an LLM, and it is generally very useful. Then I look at the produced code and say: "Oh, that's how you do it." I almost never copy/paste the results from the LLM into my code base.

I basically agree. OK: small focused models for specific use cases. Small models like the new mistral-3-3B, which I found today to be good at tool use, are thus suited to building narrowly scoped applications.

I have mostly been paid to work on AI projects since 1982, but I want to pull my hair out and scream over the big push in the USA to develop super-AGI. It is such a waste of resources, and such a hit on a society that needs those resources used for better purposes.


I agree. Re: energy and other resource use, the analogy I like is driving cars: we use cars for transportation knowing the environmental costs, so we don’t usually go on two-hour drives just for the fun of it; rather, we drive to get to work or go shopping. I use Gemini 3, but only in specific high-value use cases. When I use commercial models I think a little about the societal costs.

In the USA we have lost the thread here: rather than maximizing the use of small tuned models throughout society and industry, we use the pursuit of advanced AI as a distraction from the reality that our economy and competitiveness are failing.


Most of the energy for AI does not go into chatbots, and using Gemini is not remotely close to driving a car for two hours. At roughly 0.3 Wh per prompt (https://cloud.google.com/blog/products/infrastructure/measur..., https://andymasley.substack.com/p/a-cheat-sheet-for-conversa...), each prompt is closer to riding an e-bike for 50 metres.

You could have your morning shower 1°C less hot and save enough energy for about 200 prompts (assuming 50 litres per shower). (Or skip the shower altogether and save thousands of prompts.)
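A quick back-of-the-envelope check of that figure, assuming water's specific heat of about 4.2 J/(g·°C) and the 0.3 Wh/prompt estimate above:

    50 L of water ≈ 50,000 g
    energy saved = 50,000 g × 4.2 J/(g·°C) × 1 °C ≈ 210 kJ
    210 kJ ÷ 3,600 J/Wh ≈ 58 Wh
    58 Wh ÷ 0.3 Wh/prompt ≈ 195 prompts, i.e., roughly 200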


I think it's also worth comparing to the CO2 impact of consuming meat, especially beef, which is pretty high.

(It's the training, not the inference, that's the biggest energy usage.)


+1 interesting

I used DeepSeek-v3.2 to solve two coding problems by pasting code and directions as one large prompt into a chat interface and it performed very well. VERY WELL!

I am still happy to pay Google because of their ecosystem of the Gemini app, NotebookLM, Colab, gemini-cli, etc. Google’s moat, for me, is all the tooling and engineering around the models.

That said, my one-year Google AI subscription ends in four months and I might try an alternative, or at least evaluate options. Alibaba Cloud looks like an interesting low-cost alternative to AWS for building systems. I am a retired ‘gentleman scientist’ now, and my personal research is inexpensive no matter whom I pay for inference compute, but it is fun to spend a small amount of time evaluating alternatives, even though mostly using Google is time-efficient.


In reading the comments here I only saw two references to Apple's local system LLM. I wrote my own chat app using it; it handles simple queries locally and otherwise sends queries to Apple's secure enclave servers, which protect privacy according to Apple's privacy statement.

For tech people, using Ollama or LM Studio for routine tasks works fairly well.

Some of the small Chinese models like Qwen really are good. In my workflows it is usually obvious to me whether I want a local model or something like Gemini 3 research with its many built-in tools. It takes work, but writing custom tools specific to my needs for use with LM Studio increases the fraction of use cases I can run locally.


Good guidelines. My primary principle for using AI is that it should be a tool under my control that makes me better, by making it easier to learn new things and by offering alternative viewpoints. Sadly, AI training seems headed towards producing ‘averaged behaviors’, while in my career the best I had to offer employers was an ability to think outside the box and bring different perspectives.

How can we train and create AIs with diverse creative viewpoints? The flexibility and creativity of AIs, or the lack thereof, should guide proper principles for using AI.


I'm not optimistic about this in the short term. Creative and diverse viewpoints seem to come from diverse life experiences, which AI does not have and, if they are present in the training data, are mostly washed out. Statistical models are like that. The objective function is to predict close to the average output, after all.

In the long term I am at least certain that AI can emulate anything humans do en masse, where there is training data, but without unguided self-evolution I don't see them solving truly novel problems. They still fail to write coherent code if you go a little out of the training distribution, in my experience, and that is a pretty easy domain, all things considered.


The vast majority of advances seem to be of the form "do X for Y", where neither X nor Y is novel but the combination is. I have no idea whether AI is going to be better than humans at this, but it seems like it could be.

I am going to sound like a mouse-using Luddite, but I configure .emacs with a one-line addition that lets a mouse click reposition the cursor; fast scrolling works as well.
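The one-line addition is probably something like this (a sketch; xterm-mouse-mode is the standard built-in way to get mouse clicks and wheel scrolling in terminal Emacs):

    ;; enable mouse clicks and wheel scrolling in terminal Emacs
    (xterm-mouse-mode 1)

GUI Emacs has mouse support on by default, so this mainly matters in ssh/tmux sessions.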

I have been using Emacs for 40 years. Decades ago, before ground-based fiber, there was a two-satellite-bounce ping time between my office in San Diego and the 38 data collection sites around the world for a DARPA project, and the mouse click in Emacs saved a ton of time.

At present, I use Emacs on remote servers and for local editing, either with my iPad Pro, an Apple mouse, and a Studio Display, or with a MacBook; in both environments I find occasional mouse clicks or fast scrolls with the mouse or Apple trackpad still save time, even with essentially zero latency.


> fast scrolling works also

Grokking efficient navigation within Emacs buffers completely removed the necessity for scrolling for me - most of the time it's all about finding specific content - using consult-line, imenu, various jump methods, ex-commands, etc. - there are so many different tools in Emacs to rapidly move around, it makes scrolling feel like useless fiddling, not efficiency.

The mouse is nice for operations that don't require exact precision, like resizing windows in your WM. Another cool, albeit pretty rare and gimmicky, use for it is setting mouse clicks for multiple-cursor selection; it feels like shooting lasers in a video game, i.e.,

    (global-set-key (kbd "C-s-<mouse-1>") 'mc/add-cursor-on-click)
   
Selecting regions of text with the mouse? Why, when vim-style navigation lets me quickly grab: anything between delimiters (parens, brackets, quotes, etc.); anything including those delimiters; anything up to a char; including the char; until some text; backwards up to the text; and so on.

With expreg (expand-region) I can quickly expand and contract my selection - it's so smart - it first selects the word, then line, then sentence, then paragraph - similarly it expands/contracts structurally for code, it understands org-mode, yaml, markdown and Lisp structure. After developing the muscle memory for these things, selecting and moving text with the mouse feels so crude and annoyingly inaccurate. Makes me feel sorry for the vscode kiddos to be honest.
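For anyone who wants to try expreg, a minimal setup might look like this (the keybindings here are my own illustrative choices, not from the parent comment; expreg-expand and expreg-contract are the package's two commands):

    (require 'expreg)
    (global-set-key (kbd "C-=") 'expreg-expand)
    (global-set-key (kbd "C--") 'expreg-contract)

Repeated presses of the expand key grow the region word, line, sentence, paragraph, as described above.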


Tramp isn't an option? I know people say it's slow (it is) and synchronous (yes, it is) but I'd rather pay a high latency cost only upon saving than a moderate one ALL THE TIME.

It is an option. You are talking with an old man (me!), and I usually just like simple Emacs setups, along with mosh, ssh, and tmux.

How do you go about editing via iPad? I'm just getting into this whole world and am finding it difficult to figure out a system.

I use an app like Termius or Prompt to access a remote server. I have a keyboard and a mouse for my iPad Pro.

I love Racket. Just for fun, I wrote a Racket book, read online: https://leanpub.com/racket-ai/read

For Scheme-family languages I recommend Racket or Gerbil. Racket is great for beginners since the IDE is pretty good and the standard and contributed libraries are solid. Gerbil is good for systems programming, network utilities, etc.


I just bought your book. Thanks for writing it. I look forward to reading it!

This is my question also. I tend not to use apps; I use the DuckDuckGo browser.

I sometimes do use Safari, which is a more convenient browser; it would be ironic if the DDG browser were less private than Safari.

