Hacker News | danielovichdk's comments

Mark Seemann has written extensively about the subject.

He is a tremendous source of knowledge in that regard.

https://blog.ploeh.dk/2017/01/27/from-dependency-injection-t...


His AutoFixture NuGet package for C# takes away so much of the pain of unit test maintenance. It does have a learning curve.

https://github.com/AutoFixture/AutoFixture


I wonder if those trains were imported from Italy? During the Danish transition to the IC4, some of those trains ended up with Gaddafi in Libya (https://www.oryxspioenkop.com/2021/02/this-was-gaddafis-pers...)


No, they're Spanish trains: the Talgo Avril (https://en.wikipedia.org/wiki/Talgo_AVRIL, https://youtu.be/6iFfVpZwLJ4?si=ahxuQnauNw1-ucqR). It's a model specifically designed for that.

As an armchair expert, I think it turned out badly because they had to develop cutting-edge technology (no trains with that top speed and support for gauge change existed before, and it has other quirks, like being uncommonly wide to support five seats per row) while, at the same time, making it very cheap (the project started in the context of harsh austerity in the years after the financial crisis, with the PIGS accused of overspending, etc.). They promised too much for the budget and ended up delivering a half-baked train. At the beginning, a year ago, it was a disaster (lots of incidents with trains stopping mid-route, etc.). Now they seem to be ironing out the problems and things are getting better, but the trains are still much more unreliable than others.

I hope at least the lessons learned help towards making a better model in the future.


Does that train have the British Advanced Passenger Train in its ancestry? The carriage shape, narrower at the top, looks familiar.


All I wanted to say is that this is not because of AI. Or at least, I have not seen data that shows that. The loss of jobs is happening in all sectors, all over the world. Some say it's because of higher interest rates, but I'm not sure.


I wouldn't buy one. But it's a fun photo at least. Looks like something that took a long time to build and yet again showed how incapable man really is.


And yet some people think AI will take over jobs. I am amazed this robot was not in place 20 years ago. Really?


Human labor is shockingly cheap.


Amazon warehouse base pay is ~$21-22/hr due to labor supply shortages.

https://www.google.com/search?q=amazon+raise+wages+warehouse


That's about $60k a year for the employer, I think? That probably doesn't even cover the BOM cost of this robot installation.
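For what it's worth, the back-of-the-envelope math behind that $60k figure (the hourly rate is from the comment above; the 35% employer overhead for payroll taxes, benefits, and insurance is my own assumption):

```python
# Rough yearly cost of one warehouse worker to the employer.
# Assumptions: $21.50/hr, a 2080-hour work year, 35% employer overhead.
hourly = 21.50
hours_per_year = 40 * 52            # 2080 hours
wages = hourly * hours_per_year     # gross pay: $44,720
overhead = 0.35                     # payroll taxes, benefits, insurance (my guess)
total_cost = wages * (1 + overhead)
print(round(total_cost))            # -> 60372, i.e. roughly $60k
```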


A robot lasts 6-7 years under typical duty cycles (per FANUC). Think of how an EV is cheaper to operate over its lifetime than a combustion vehicle: capex vs. opex. The price of labor will continue to increase due to structural demographics. And let's be real, Amazon warehouse jobs are not good jobs, and they're terribly hard on the human body. No one yearns for the Amazon mines. These are jobs that should absolutely be automated.


Oh yeah, I agree with what you're saying, and of course they're working on this because they think it makes sense to do so. I just figure it's still a tricky sell, even to replace $20/hr humans.

FWIW I design industrial equipment for meat processing plants, where you'd be lucky to get 6-7 months out of a robot arm. I wish it was affordable to use robotics there, because there's a lot that could be done to eliminate some truly awful jobs.


What are the key contributors to reduced robot longevity, both environmental and duty-cycle related? How could a robot last longer in harsh environments with aggressive duty cycles?


Harsh environment and poor maintenance.

High pressure hot washdown followed by cold temperatures is kind of a nightmare scenario for equipment. Also some of the plants use cleaning chemicals that will strip off paint and anodizing. The cleaning crews are poorly paid and poorly treated, so they're not going to be careful with equipment, which means you get damage to wiring and sensors from pressure washers.

On the maintenance side, technicians just have too much going on, and it's rare to find someone who has robotic-equipment-level skills. Of all the plants I've been to, I can think of only a couple that are suitable for that level of sophisticated automation. The rest would be SOL if their robot went offline, and we wouldn't be able to train them past that point.

I think for this to work, either the company running the plant needs to own the system and set up specific training and tasks to care for it, or it needs to be provided near-constant support from the manufacturer.

I think you can buy stainless robots that might be good for this sort of thing, but I've never looked into it much because we have a hard enough time supporting our much more basic products.


Appreciate the time you took to reply, thank you for indulging my curiosity.


Where I live, your basic fast food job starts at $16/hour. When I was a kid, $20/hr was a nice wage for an adult; these days it is very low.


Actually it's pretty expensive in the long run. They want raises, are finicky about their health, have pesky habits like going home, having life partners and something silly called work/life balance. Also, they sometimes organize and become collective bodies under something called a union.

For the record, I'm a strong supporter of everything above. Maybe we really can provide people better jobs by delegating repetitive and boring things to machines, allowing everyone to earn a living doing something they enjoy.

One can dream, I guess...


If you have the misfortune to be in a developed country (not the USA), then yes. Workers without rights are evidently pretty cheap. They go home, but you can just get twice as many. Catastrophic self-organization happens on scales comparable to robot crashes, and you can just recycle the offending units and replace them with new ones.


>are finicky about their health

Machines are anything but reliable. They need constant servicing and maintenance, and they still break entirely.


Depends on how they're built, and building them well takes experience. Plus, if their MTBF is long enough and you have enough hot spares, you can rotate the problematic ones out and fix them while replacements come from the hot-spares pool.

When you are not budget constrained, and building things for businesses, a little overengineering goes a long way.

I have a Xerox 7500DN color laser printer next to me, and it's been working for more than 20 years at this point. It has gone through a lot of spares, but most (if not all) issues are from parts wearing down naturally. Nothing breaks unexpectedly on it. Same for robots: give it enough design budget, overengineer a little, and that thing will be one hell of an ugly but reliable machine.

When you work with real "industrial" stuff, the landscape is very different.


All moving parts degrade. A nice thing about machines is you can service and refurbish them to like-new condition.

There are options to deal with your shitty knees, hips, and back, but none of them get you back to 100% of your original capabilities, they carry an element of gambling, and they will involve the kinds of painkillers that can ruin you far more comprehensively than a shitty joint will.


Humans are not reliable either. Humans are much more likely to be out sick unexpectedly.

If you keep up the maintenance plan for machines they rarely break before their predicted retirement date when you replace them. And since the maintenance and retirement dates are predicted in advance you can plan for them and thus ensure they happen when you want them to.


These "what if we give everyone jobs they are interested in" remarks are just bullshit. You're not going to give people more interesting jobs; the result will just be flooded job markets everywhere. Then more jobs will become automated, and people will flood into whichever sectors aren't automated yet. What a stupid dream. Let people have meaningless jobs if they want that.


AI does, and it's not just "AI".

We are now switching over to a self-optimizing system approach.

We had big data and didn't do anything with it, but now, whenever we do something with an LLM, we give it feedback, and that feedback gets processed, benchmarked, stored, and used.

ChatGPT-3 was not impressive because it was good; it was impressive because it showed everyone that this era had started. That led to a massive reallocation of resources around the globe, in both people and money.

Whatever we had with ChatGPT-3 was built with significantly fewer people and less money than what we have now. That leads to progress unseen before, and it will continue, at least for now.


I can tell you that, since last month, a company now does all their training translations via AI; no more need for the whole translation team.

Additionally, this is now a common feature in CMS space, automated translation of content and assets.


I don't understand why people still express doubt about this - AI already is and has been taking over jobs.


The hype is really the problem, and the fact that people often mean LLMs when they say AI.

An LLM isn't going to drive a forklift; it needs more agency than a textbox to do that.

But it's really going to be products (e.g. Microsoft Word) rather than a technology (e.g. electricity) that replace jobs (e.g. typists).


Listen. Pi-hole is forever something I associate with American Pie.

Good luck with whatever it is. Can't go there.


Claiming to use WinDbg for debugging a crash dump, and the only commands I can find in the MCP code are these? I am not trying to be a dick here, but how does this really work under the covers? Is the MCP learning WinDbg? Is there a model that knows WinDbg? I am asking because I have no idea.

        results["info"] = session.send_command(".lastevent")
        results["exception"] = session.send_command("!analyze -v")
        results["modules"] = session.send_command("lm")
        results["threads"] = session.send_command("~")
You cannot always debug a crash dump with only these four commands.


It looks like it is using the Microsoft Console Debugger (CDB) as the command-line interface to the WinDbg engine.

Just had a quick look at the code: https://github.com/svnscha/mcp-windbg/blob/main/src/mcp_serv...

I might be wrong, but at first glance I don't think it is only using those 4 commands. It might be using them internally to get context to pass to the AI agent, but it looks like it exposes:

    - open_windbg_dump
    - run_windbg_cmd
    - close_windbg_dump
    - list_windbg_dumps
The most interesting one is "run_windbg_cmd" because it might allow the MCP server to send whatever the AI agent wants. E.g:

    elif name == "run_windbg_cmd":
        args = RunWindbgCmdParams(**arguments)
        session = get_or_create_session(
            args.dump_path, cdb_path, symbols_path, timeout, verbose
        )
        output = session.send_command(args.command)
        return [TextContent(
            type="text",
            text=f"Command: {args.command}\n\nOutput:\n```\n" + "\n".join(output) + "\n```"
        )]

(edit: formatting)


Yes, that's exactly the point. LLMs "know" about WinDbg and its commands. So if you ask it to switch the stack frame or to inspect structs, memory, or the heap, it will do so and give contextual answers. Trivial crashes are analyzed almost fully autonomously, whereas for challenging ones you get quite a cool assistant on your side, helping you analyze data, patterns, structs - you name it.


I think the magic happens in the function "run_windbg_cmd". AFAIK, the agent will use that function to pass any WinDBG command that the model thinks will be useful. The implementation basically includes the interface between the model and actually calling CDB through CDBSession.


Yeah, that seems correct. It's like creating an SQLite MCP server with a single tool, "run_sql". Which is just fine, I guess, as long as the LLM knows how to write SQL (or WinDbg commands), and they definitely do. I'd even say this is better, because it shifts the capability to the LLM instead of the MCP server.
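To make the analogy concrete, here's a minimal sketch of what such a single-tool design boils down to. The tool name "run_sql" and the table are hypothetical, and the actual MCP server wiring and tool registration are omitted; this only shows the one function the LLM would call with raw SQL it wrote itself:

```python
import sqlite3

def run_sql(conn: sqlite3.Connection, query: str) -> str:
    """The single tool: execute whatever SQL the model sends, return rows as text."""
    rows = conn.execute(query).fetchall()
    conn.commit()
    return "\n".join(str(row) for row in rows)

# The LLM, not the server, supplies the SQL:
conn = sqlite3.connect(":memory:")
run_sql(conn, "CREATE TABLE crashes (module TEXT, count INTEGER)")
run_sql(conn, "INSERT INTO crashes VALUES ('ntdll', 3), ('kernel32', 1)")
print(run_sql(conn, "SELECT module FROM crashes ORDER BY count DESC"))
# -> ('ntdll',)
#    ('kernel32',)
```

The point being that all the domain capability lives in the model; the server is just a dumb pipe, exactly like "run_windbg_cmd" passing arbitrary commands to CDB.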


The magic happens in the "!analyze -v" part. This does quite a long analysis of a crash dump (https://learn.microsoft.com/en-us/windows-hardware/drivers/d...)

After that, all that is required is interpreting the results and connecting it with the source code.

Still impressive at first glance, but I wonder how well it works with a more complex example (like a crash in the Windows kernel due to a broken driver, for example)


Some of these open source licenses are somewhat flawed when it comes to building a business on things that are "free".

Wouldn't it be possible to add a clause to some of these licenses so that if you are using the open source software and generate a certain amount of revenue from it, something has to be given back to the project?

I totally understand that the software is meant to be free, but isn't there a balance here, where at some point it must be enforced that some of the money it generates for a business must flow back to the project and its contributors?

I have worked in plenty of places where Redis served as at least part of the backbone of a business's success. Couldn't those places pay a fee for generating revenue based on free software?

Does it make sense?


IMO, for that to make sense the license would have to be infectious, and even then you run into the anything-but-trivial and gameable problem of splitting royalties among the tree of dependencies; Redis itself uses other open source libraries, and so on. It's a tricky problem, but that shouldn't stop us from trying: any amount of no-strings-attached funding for open source is better than what we have today. Personally, I'd like to see it solved by diverting some amount of taxpayer money to open source maintainers, via some (hopefully non-conflicted) government agency that ascertains which projects are more funding-worthy than others.


Yes, and it's called proprietary software that is free for non-commercial use and for companies under a certain size. Docker Desktop uses this kind of license.


Great piece of writing. Thank you.


I've been riding steel, carbon, and titanium frames. Steel failed the most, especially after many years (30+) of service. Welds don't last.

Carbon frames and forks I have seen fail in freak accidents, snap-cracking without even hitting anything hard. To me this has been a purely structural issue, since with metal it has always been the welds.

Titanium I have seen torn apart as well; again, it's all about the welds.

As for component failures, I have experienced rim walls exploding from brake heat, rims falling apart on flat tires, chains snapping, and seat post clamps suddenly breaking, leaving riders without their saddle :)

I am going for a ride in a few hours. Can't wait!


Huh. I've been riding steel bikes pretty much all my life too (from cheap local makes with national tubing to Columbus SL tubing and the like) and have never seen a single one fail at a weld. In the rare event they do break, it's from corrosion.


Haha, so much for "steel is real".

I must say, the more I research this, the more I admire aluminium. Other than street cred and the alleged road vibrations, it's incredible how strong, light and cheap it is.


Aluminum has a finite fatigue life though, unlike carbon. Something to keep in mind if you are considering it.


Well, sort of. But as I understand it, that limit is orders of magnitude beyond the mileage I, or almost any cyclist, will put in over, say, five years.

Over the 10k km or so on my current alu frame, I have had zero issues so far, despite a few crashes.


Carbon is basically "fancy fibrous plastic" though. It still degrades. Just in different ways than aluminum or steel.


> Steel failed the most, especially after many years (+30) of service.

Congratulations on reaching your 90th birthday!

