codaphiliac's comments | Hacker News

watching all agile coaches turn into claude experts in 3 2 1 …

You joke, but that does seem to be happening from what I've seen - Agile Coaches are rebranding to become "AI coaches" or "AI Enablers".

Figures, gotta keep that grift going somehow...

Honestly I kind of envy the energy of these people.

Studio to close in 3 2 1 … ?


ta da:

"Video game giant Ubisoft closes Halifax studio, cutting 71 jobs"

https://www.cbc.ca/news/canada/nova-scotia/ubisoft-studio-ha...


almost the same as RTO mandates:

we’ll force you to come back to justify the money sunk into office space.


I personally think all the productivity gains that came with WFH happened because people were stressed and WFH acted as a pressure-relief valve. But too much of a good thing and people get lazy (I'm seeing it right now: some people are filling out full timesheets while not even starting, let alone getting through, a day's worth of work in a week), so the right balance is somewhere in the middle.

Perhaps… the right balance is actually working only 4 days a week, always from the office, and just having the 5th day proper-off instead.

I think people go through “grinds” to get big projects done, and then plateaus of “cooling down”. I think each person only has so much grind to give, and extra days don't mean more work, so the ideal employee is one you pay for only 3-4 days per week.


We just need a metric that can't be gamed which will reliably show who is performing and who is not, and we can rid ourselves of the latter. Everyone else can continue to work wherever the hell they want.

But that's a tall order, so maybe we just need managers to pay attention. It doesn't take that much effort to stay involved enough to know who is slacking and who is pulling their weight, and a good manager can do it without seeming to micromanage. Maybe they'll do this when they realize that what they're doing now could largely be replaced by an LLM...


Not for nothing did the endless WSJ and Forbes articles about "commuting for one hour into expensive downtown offices is good, actually" show up around the same time RTO mandates did.


Don't forget about the poor local businesses. Someone needs to pay to keep the executives' lunch spots open.


Well, not if rents crash because all the offices moved out of the area, and the lunch spot can afford to stay open and lower its prices.

We don't talk enough about how the real estate industry is a gigantic drag on the economy.


Hey now. Little coffee shops and lunch spots and dry cleaners are what make cities worth living in in the first place.


Wonder how many SQLite databases would be too many. At some point I assume not all databases can be kept open at all times. What sort of overhead would there be serving a tenant that isn't opened up yet? There have to be caches etc. that aren't warmed up, causing lots of disk I/O.


At some level, it doesn't make a big difference whether you've got a file open or not: once the file's data falls out of the disk cache, you'll have the same kinds of latency to get it back. Sure, you'll avoid a round or two of latency pulling in the filesystem metadata to get to the file, but that's probably not a big deal on SSD.

Chances are, the popular databases stay in cache and are quick to open anyway; and the unpopular ones are rarely accessed so delay is ok. But you'd also be able to monitor for disk activity/latency on a system level and add more disks if you need more throughput; possibly disks attached to other machines, if you also need more cpu/ram to go with it. Should be relatively simple to partition the low use databases, because they're low use.


In this scenario you would use short-lived connections, and the overhead would probably be approximately the same as reading a file.
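A minimal sketch of the short-lived-connection approach with Python's stdlib `sqlite3` (the helper name and demo database path are hypothetical, not from any particular project):

```python
import sqlite3
from contextlib import closing

def query_tenant(db_path, sql, params=()):
    # Open the tenant's database only for the duration of one request,
    # so idle tenants hold no file descriptors or memory. Hot databases
    # stay in the OS page cache and reopen cheaply.
    with closing(sqlite3.connect(db_path)) as conn:
        return conn.execute(sql, params).fetchall()

# Demo: create a throwaway on-disk tenant database, then query it.
with closing(sqlite3.connect("tenant_demo.db")) as conn:
    conn.execute("CREATE TABLE IF NOT EXISTS notes (body TEXT)")
    conn.execute("INSERT INTO notes VALUES ('hello')")
    conn.commit()

print(query_tenant("tenant_demo.db", "SELECT body FROM notes"))
```

The open itself is cheap; the real cost is whether the database's pages are still in the page cache, which is the same question as for a plain file read.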


That would be the FD limit, divided by 3 (the DB itself, the shm file and the WAL).


But each SQLite connection (even to the same DB) will also consume 2 FDs for the DB and the WAL.

You'll more easily pool connections to the same DB, of course, but the difference might not be so stark.
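Back-of-the-envelope version of that FD budget, assuming roughly 3 descriptors per open WAL-mode database (DB file, `-wal`, `-shm`) and reserving some headroom for sockets and logs (the function name and reserve count are illustrative):

```python
import resource

def max_wal_mode_tenants(fd_limit=None, fds_reserved=64):
    """Rough ceiling on simultaneously open WAL-mode SQLite databases,
    assuming ~3 file descriptors each (the DB file, -wal, and -shm)."""
    if fd_limit is None:
        # Soft RLIMIT_NOFILE for this process (Unix only).
        fd_limit, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
    # Keep some descriptors free for sockets, log files, etc.
    return max(0, (fd_limit - fds_reserved) // 3)

print(max_wal_mode_tenants(1024))  # → 320
```

With the common default soft limit of 1024 that's only a few hundred concurrently open databases, which is why raising `ulimit -n` (or pooling/evicting handles) comes up quickly in this design.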


Something like that, yes. A tenant that hasn't been opened yet - well, you create the tenant first, and then proceed "as normal". With ActiveRecord, your `pool:` configuration defines how many database handles you want to keep open at the same time. I set it relatively high but it can be tweaked, I'd say. And there is automatic eviction from the pool, so if you have a few sites which are popular and a lot of sites which are not popular - it should balance out.

There could be merit to "open and close right after" though, for sure.
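For context, the `pool:` knob referred to lives in the ActiveRecord database configuration; a minimal sketch, with the per-tenant path and pool size purely illustrative:

```yaml
# config/database.yml -- values are illustrative, not a recommendation
production:
  adapter: sqlite3
  database: storage/tenants/acme.sqlite3   # hypothetical per-tenant path
  pool: 25       # connection handles kept open per process
  timeout: 5000  # ms to wait for a free handle before raising
```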


Hi! are the SWE openings open to remotes in Canada?


Thinking this could be useful in a multi-tenant service where you need to fairly allocate job-processing capacity across tenants among a number of background workers (like data-export API requests, encoding requests, etc.).


That was my first thought as well. However, in a lot of real world cases, what matters is not the frequency of requests, but the duration of the jobs. For instance, one client might request a job that takes minutes or hours to complete, while another may only have requests that take a couple of seconds to complete. I don't think this library handles such cases.


Lots of heuristics continue to work pretty well as long as the least and greatest are within an order of magnitude of each other. It’s one of the reasons why we break stories down to 1-10 business days. Anything bigger and the statistical characteristics begin to break down.

That said, it’s quite easy for a big job to exceed 50x the cost of the smallest job.


Defining a unit of processing (like duration or quantity) and then feeding the algorithm the equivalent number of units consumed (pre- or post-processing a request) might help.
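One way to sketch that idea (class and method names are hypothetical): track accumulated cost units per tenant and always serve the pending tenant that has consumed the least, so a tenant running hour-long jobs naturally yields to one running two-second jobs.

```python
class CostFairPicker:
    """Fair selection by consumed cost units (e.g. seconds of CPU),
    not request count: expensive jobs weigh more than cheap ones."""

    def __init__(self):
        self._consumed = {}  # tenant id -> total cost units so far

    def next_tenant(self, tenants_with_work):
        # Serve whichever pending tenant has consumed the least so far.
        return min(tenants_with_work, key=lambda t: self._consumed.get(t, 0.0))

    def record(self, tenant, cost_units):
        # Report actual cost after processing (or an estimate before).
        self._consumed[tenant] = self._consumed.get(tenant, 0.0) + cost_units
```

Recording estimated cost before a job starts (and correcting afterwards) avoids the window where one tenant grabs many long jobs before any cost is booked.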


To mitigate this case you could limit capacity in terms of concurrency instead of request rate. Basically it would be like a fairly-acquired semaphore.
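A minimal sketch of that per-tenant concurrency cap with stdlib primitives (names are illustrative): each tenant gets its own bounded semaphore, so a tenant with long-running jobs can hold at most N worker slots regardless of job duration.

```python
import threading
from collections import defaultdict

class PerTenantLimiter:
    """Cap each tenant's in-flight jobs (concurrency) rather than its
    request rate, so long jobs can't starve the other tenants."""

    def __init__(self, max_in_flight_per_tenant):
        self._max = max_in_flight_per_tenant
        self._lock = threading.Lock()
        self._semaphores = defaultdict(
            lambda: threading.BoundedSemaphore(self._max))

    def try_acquire(self, tenant):
        with self._lock:  # guard lazy creation of per-tenant semaphores
            sem = self._semaphores[tenant]
        return sem.acquire(blocking=False)  # True if a slot was free

    def release(self, tenant):
        self._semaphores[tenant].release()
```

A worker calls `try_acquire(tenant)` before starting a job and `release(tenant)` when done; jobs that can't acquire a slot stay queued.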


I believe nginx+ has a feature that does max-conns by IP address. It’s a similar solution to what you describe. Of course that falls down wrt fairness when fanout causes the cost of a request to not be proportional to the response time.


Per capita means nothing... Our environment cares only about total emissions. The rest is politics.


If per capita means nothing, the logical solution for China is to split itself into 100 countries. Somehow that would make the contribution from each new country less important in your view.

Measuring by per capita pollution and getting that measure down is the only sane way to count emissions. The end product is the same, total emissions go down, but we can focus on a metric that makes sense no matter how large the country you’re measuring is.


I'm not following. All other things equal, how would the national division of China affect the sum of emissions?


We don't generate emissions for the fun of it, we do it to support the lifestyles of humans.

Sure, total emissions is what matters, but the formula for total emissions is:

    Total Emissions = Emissions per capita * People
Unless you're proposing a reduction in people (don't), then per capita means quite a lot.
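To make the identity concrete with deliberately made-up numbers (not real emissions data): two populations with identical per-capita emissions differ in total only by head count.

```python
# Deliberately hypothetical numbers -- not real emissions data.
per_capita_tonnes = 8.0
population_small = 5_000_000
population_large = 1_400_000_000

total_small = per_capita_tonnes * population_small   # 40 Mt
total_large = per_capita_tonnes * population_large   # 11,200 Mt

# Same per-capita behaviour; the totals differ only by population.
print(total_large / total_small)  # → 280.0
```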


Money. Ego.


AWS will spin-off to be a wildly successful entity.


Email security scanner following links?

