Hacker News new | past | comments | ask | show | jobs | submit | Sleaker's comments login

Love hearing about new methods in the voxel space!

It's a bit unfortunate the article conflates voxels with a specific rendering technique. Voxels are just the use of a 3D grid. Based on the middle section, it seems the author is equating voxel usage with cubic-style rendering, or what we often call bloxel renderers (Minecraft).

It is also mentioned that the triangle geometry can be imported and used in engines directly, but I think the author is forgetting that bloxel rendering, or really any rendering process, can do the same thing; this is how typical voxel rendering plugins that use other styles already get first-class support in existing engines.


Great article, but on ECS I thought the primary point was memory locality, so that you don't get cache misses while iterating over elements. Yes, you are preferring composition over inheritance, but I thought that was more of a side benefit to the main thing ECS was trying to solve.


I think WSJ started blocking anyone that uses Bypass Paywall. Anyone got archives of this yet?


Also been running HA for about 3 years, but have been veerrrry slow to add devices and only had a couple automations set up around Christmas. I actually tried the smart plugs thing and found the Hue bulbs to be a way better experience, because I want the RGB and dimmable capabilities on those rather than just on/off. It's something I might look into for some other areas of the house, though. Nice write-up!


It doesn't sound like they have a great security team; they added the "Associate a secondary email address" feature recently. This isn't something that has always been in the software. It seems more like they were cutting corners and not properly testing ways to exploit their own new feature, even though it related to account security.

On top of that it looks like they had a 9.6 CVE that allowed integrations to perform commands as other users...

From the outside it looks like they are trying to ship features faster than they can keep them safe and tested. Perhaps because they are having an incredibly difficult time monetizing well? It makes sense from a business standpoint in some respects, but the security stuff could absolutely tank the business, when the whole point of a (self-)hosted git solution is essentially just account management.


I don't put much stock in this "9.6" stuff; CVSS is a ouija board that will say whatever people want it to say. But regardless: the best security teams in the world still see critical vulnerabilities in their software, because software is all garbage.


I've recently stopped two design choices in external-facing resources that posed significant security risks.

One of which was around credentials resetting to emails that aren't stored in the API auth system itself, but rather come into Salesforce as a support case. "Don't worry, a support team member has to action the request" was meant to be reassuring, until I explained that this translated to "the only mechanism in place to prevent credentials being stolen comes with a massive social engineering vulnerability".

But it's the previous choices I haven't come across yet that worry me.


> From the outside it looks like they are trying to ship features faster than they can keep them safe and tested.

this sums up their entire product

every feature you could possibly imagine, somewhat working


A few years ago there would be people defending GitLab for “transparency” every time something went wrong.

They even went overboard with the transparency and made some Slack conversations public, which for me would have made it one of the worst places to work.


They moved on to kagi


Every time I end up opening a gitlab page I wonder how they’ve redesigned their nav again. Seems to be a new design every 6 months.


exactly, it really confuses me!


> It doesn't sound like they have a great security team

That's an unfair comment. Even the best teams ship bugs. If you want to measure the quality of a security team, you look at their performance trajectory (for both detection and response) relative to the size of their total threat surface.


All I needed to know about the quality of software GitLab ships can be found by using their CI system on any half-decent-sized project. You can tell it was half-baked, with many bugs and edge cases that could easily have been avoided. When you look at the bug tracker, all of them have been documented for years, and they just ignore them.

My favorites are

* Included files that run no jobs count as a failure. The only real workaround is adding a no-op job all over your CI config.

* Try to use code reviewers based on groups: the logic is so complex and full of errors that I can't even explain it without spending an hour reading the docs.

* When using merge trains with merged-results pipelines enabled, you end up with two different jobs per commit. That's fine, except the UI always shows the merged results first; if you have ten commits, you need to look on the second page to find the most recent commit's CI jobs. That is just annoying, but worse, no environment variables overlap to tell you which MR or commit a job belongs to. This makes doing trivial things like implementing break-glass pipelines almost impossible.

Anyway, GitLab sucks. I wanted to avoid GitHub, but really it's just bad. Not to mention we have monthly outages that we always know about 30 minutes to an hour before GitLab does; then we look at the status page and see the reported downtime is 10 minutes when it's been 40 for us, and likely everyone else. In the last year we've had close to 2 full days combined of downtime from GitLab. Of course they report 99.95% uptime.


GitHub post-Microsoft was also a major pain when adding code review for groups that weren't a per-project, manually curated list of users.

There were some "addons", like panda something, that made it less bad, but still a crap fest in terms of usability and compliance.

Not to mention that now you can barely use it without being logged in. I'm overall glad to have moved to GitLab and Codeberg. Do not miss GitHub AT ALL.


I believe the article partially rebuts this argument by showing hospital admittance statistics. And if this were a factor, I trust that the authors would absolutely delve into the idea. They call out that self-reported mental state is not an accurate enough indicator, and choose to couple it with other statistics.


Good to see more in-depth research on this. The last article I read asserted that the social media access theory was correlation, not causation, because studies on mental resiliency showed that if you have built the resiliency, then you can safely handle the social media... But it seems like that explanation doesn't take the developing mind into account. Love Haidt's work, and glad to see there's more research being done to find out exactly why.


I've seen similar claims (social media is correlation and not causation) and it does seem plausible to me.

What I haven't seen so far (and it's possible this data exists, I'll admit to not exactly looking for it) is studies around parental consumption of social media and the impacts it has on their children.

IMO social media/advertising can have amplification effects by targeting parents.


Yea, we try to set a good example as parents. You can't expect kids not to get addicted to social media when their parents are… constantly scrolling through their phone and social media! I bet even basic ground rules help. We try to adhere to: don't be seen by our kid doing it. Some parents hold themselves to this rule when it comes to drinking and smoking, so why not apply it to smartphone scrolling too?


This seems like it was already part of the plan; see the Lost Ark and New World launches and ongoing Prime support.


I feel like the point about wanting more, or wanting what your parents had, while not seeing that the things you have access to are better quality or cost less, is pretty huge. People like to compare themselves against other people and seem to be very materially driven, and they lose sight of the more global or historical contexts that preceded them.


It's not just "better quality" or "for less". It's when. You can't compare the life you can afford in your 20s to the life your parents could afford in their 40s or 50s.

It's unfair for another reason. Your parents may have been just scraping by, but tried to hide the financial stress from their kids. When you're the parent, there's no hiding the financial stress from yourself.


But you absolutely can compare what your parents could afford in their 20s to what you could afford in your 20s.

And if you look at that, you'll realize that, correct, Americans are not as poor as they think they are - they are significantly poorer than that.

Think of the major purchases Americans made in their 20s: higher education, housing, etc. Compare the cost and quality with today. Now compare the income one needed then to survive and raise a family versus what one needs now. Compare minimum wage from that time to today. Compare the average income of a large-company CEO vs. their average worker between then and now.

You'll see that real wages have plummeted (and not kept pace with inflation), the C-Suite class has made a killing and not shared the wealth with their workers, inflation has skyrocketed, cost of living has grown to an unsustainable level, and things such as education have become so unaffordable that people are skipping out on it altogether.


I would counter the housing/education one by suggesting that it's cherry-picking the points that support the underlying claim. Other tech-related items that people engage with every day without even thinking about it seem to be orders of magnitude less expensive, e.g. communication and (especially global) information access. But even this misses the underlying contextual claim, and your arguments kind of highlight exactly that: Americans primarily compare against their immediate history (one generation back) or against what others around them have. They don't typically compare against other cultures or against 100+ years ago.

Real wages absolutely have plummeted over time for Americans, but taking them in the context of non-Americans is another point that was trying to be made.


What's the main benefit of using a GUI with git for people? For me it's usually just the ease of not needing to know a commit hash to perform a rebase after a merge, but I generally just do those ops from within an IDE's built-in git support, and everything else is command line. Do others find a huge benefit from GUIs?


In magit, I find being able to stage chunk by chunk invaluable. It enables a whole different workflow, where I review my own code while staging it step by step.

Also being able to navigate blame history inside the editor is pretty cool.

No idea about other GUIs; if I have to get out of the editor to use git, I might as well just use the command line.


For workflow reviews I tend to just push to GitHub and then use its review diff when making PRs, to catch anything I may have missed (typos, usually).

Ah, I do use git blame sometimes, but it's built into the IDE, so I get it while editing code live. I guess this is the biggest difference behind my not wanting a git GUI: any of those ops I feel like I can get with my IDE.


Sure, magit is in some sense "getting it with the ide".

Not sure how advanced blame is in other editors; it's not just blame annotation, it's being able to navigate the history of a line, jumping from one change to the previous one and back. It's incredibly helpful with large code bases for understanding how some code evolved and why it's implemented like that.

Another great magit feature is being able to open a file from any git revision. Say you want to copy some line from a file that was removed long ago: you can just call magit-find-file on a revision that still has it, open it in a buffer, and copy what you want.
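For anyone without magit, plain git can approximate both of these from the CLI (the paths, line numbers, and revisions below are made up for illustration):

```shell
# Walk the history of a single line, change by change:
git log -p -L 42,42:src/main.c

# Print a file exactly as it was at an older revision:
git show HEAD~5:src/main.c

# Recover a long-removed file from a revision that still had it:
git show v1.2:src/old_module.c > old_module.c
```

magit wraps this in a much nicer interface, but the underlying plumbing is the same.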


I agree, navigating blame history is incredibly useful, if only to save you from asking the wrong person about a particular change.

Vim's Fugitive[1] can do this, and TextMate can too. So I would hope that most editor git plugins can.

1. https://github.com/tpope/vim-fugitive


`git add -p`


Working often with multi-branch projects across repos (forks and upstream), using GitKraken simply makes me more productive. I don't think anymore about how I'm going to merge or rebase stuff; it makes everything more intuitive. I still use the CLI for advanced stuff that the GUI doesn't handle, but for day-to-day operation, it's a panacea.


Lately I've been using this: https://martinvonz.github.io/jj/v0.10.0/

It's fully compatible with git.

What I used to need a GUI for now I can easily do with jj.


I am perfectly comfortable with CLI git, certainly when measured against my peers, but I still prefer the visual tree, context menus, command palette, etc. I like to right click and edit commit messages. I like to copy branch names. I like to do visual three way merges. I can understand what went wrong more quickly when a PR has unexpected changes. I can see what branches are in origin at a glance. I don’t have to remember the name or hash of anything.


- (un)staging only parts of files instead of whole files

- seeing the full content of a branch without checking it out

- seeing diffs between local/remote/stash branches instantly just by Ctrl + mouse click

- automatically stashing and reapplying local changes when pulling or checking out a branch

- easily using multiple SSH keys (for multiple remotes)

- easily showing/hiding parts of the (nicely rendered) tree

- just one mouse click to show a commit on GitHub (and probably other sites)

etc.

most of them could be done with the CLI, but it would just be suffering for me
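For reference, the rough CLI spellings of a few of these (branch names and key paths are made up):

```shell
# Diff two branches without checking either out:
git diff main..feature

# See a branch's version of a file without checking it out:
git show feature:README.md

# Stash local changes, pull, and reapply them automatically:
git pull --rebase --autostash

# Use a specific SSH key for just this repo:
git config core.sshCommand "ssh -i ~/.ssh/id_ed25519_work"
```

…which is exactly the kind of incantation I'd rather click through.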

