Hacker News | cma's comments

Isn't that a less efficient way than a $110m dividend or buyback?

Dividends mean taxes. A buyback requires someone to sell (i.e., exit), and usually no one wants[1] to, especially when the company is running hot. Investors would rather have (more) stock in a hot company than cash.

Fund managers and staff also have disincentives against early exits: they have to find and invest in another company and cannot just keep the money, which means more work. They would rather exit by switching stock into a hotter, in-demand, hard-to-get-into company if they can.

[1] There are always some employees and founders who would prefer some liquidity, but either they don't hold large enough positions (employees) or investors don't want to give up a lot of liquidity (founders).

For public companies it is different: buybacks work because there is always someone ready to sell, usually retail but also short-term funds that don't mind liquidating. ETFs and other big institutional investors, or those into Buffett-style long-term investments, will not sell easily.


Overrated is great for raising the next investment round. It is much easier to go with someone overrated than to convince investors they are underrating someone you think would be a better choice. If he is overrated enough, the valuation bump from that overratedness in the next investment round will more than pay for the acquisition cost.

> They are gambling that Jony has another iPhone in him

All they have to do is convince investors of that before the next round and they get a net return on him.


>The problem is Windows IO filters

Not the biggest of the issues: 'find' and 'git status' from WSL2 in a big project are still >100 times slower on a Windows Dev Drive (which avoids those filters) than they are with WSL1 on a Dev Drive.

WSL1 on regular NTFS with Defender disabled is about 4x slower than WSL1 on a Dev Drive, so those filters do cause some of it, but WSL2 feels hopelessly slow. And WSL2 can't share memory as well or take as much advantage of the filesystem cache (doubling cache usage if you use the Windows drive from both sides, I think, unless the network-drive representation of it doesn't get cached on the WSL2 side).
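A rough way to reproduce that kind of comparison yourself (a sketch only; the paths and project are made up, and a directory walk is just a proxy for the stat-heavy work `git status` and `find` do):

```python
import os
import time


def walk_time(root):
    """Walk a tree and count files, timing the traversal.

    A crude proxy for the per-file stat() costs that dominate
    'git status' and 'find' on large trees.
    """
    start = time.perf_counter()
    count = 0
    for _dirpath, _dirnames, filenames in os.walk(root):
        count += len(filenames)
    return count, time.perf_counter() - start


# Compare a 9P-mounted Windows path against native ext4 (paths illustrative):
# walk_time("/mnt/c/dev/bigproject")             # Windows drive via 9P: slow
# walk_time(os.path.expanduser("~/bigproject"))  # ext4 inside the VM: fast
```

Running it against the same tree under /mnt/c and under the VM's own ext4 makes the 9P overhead obvious.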


WSL2, in my testing, is orders of magnitude faster at file-heavy operations than anything outside WSL, Dev Drive or not. We have an R&D department that's using WSL2 and jumping through hoops of forwarding hardware because it's night and day compared to trying under Windows on the same machine. It provided other benefits too, but the sheer performance was the main selling point.

WSL2 does not take less advantage of filesystem caches. Linux's block cache is perfectly capable. Hyper-V is a semi-serious hypervisor, so it should be using a direct I/O abstraction for writing to the disk image. Memory also uses ballooning, and can dynamically grow and shrink depending on memory pressure.

Linux VMs are something Microsoft has poured a lot of money into optimizing, as they're what the vast majority of Azure runs. Cramming more out of a single machine, and therefore more things into a single machine, directly correlates with profits, so that's a heavy investment.

I wonder why you're seeing different results. I have no experience with WSL1, and looking into a proprietary legacy solution with known issues and limited features would be a purely academic exercise that I'm not sure is worth it.

(I personally don't use Windows, but I work with departments whose parent companies enforce it on their networks.)


> Linux's block cache is perfectly capable. HyperV is a semi-serious hypervisor, so it should be using a direct I/O abstraction for writing to the disk image.

Files on the WSL2 disk image work great. They're complaining about accessing files that aren't on the disk image, where everything is relayed over a 9P network filesystem and not a block device. That's the part that gets really slow in WSL2, much slower than WSL1's nearly-native access.

> Memory also uses ballooning, and can dynamically grow and shrink depending on memory pressure.

In my experience this works pretty badly.

> a proprietary legacy solution with known issues and limited features

Well, at least at the launch of WSL2 they said WSL1 wasn't legacy; I'm not sure if that has changed.

But either way you're using a highly proprietary system, and both WSL1 and WSL2 have significant known issues and limited features, neither one clearly better than the other.


> WSL2 does not take less advantage of filesystem caches.

My understanding is that when you access files on the Windows drive, the Linux VM in WSL2 caches them in its own memory, and the Windows side caches them in its own as well: now you have double the memory usage on disk cache wherever files are active on both sides, taking much less advantage of the caches than with WSL1, where Windows serves as the sole cache for Windows drives.

I'm only comparing work on Windows filesystems that can be accessed by both. My use case is developing on large Windows game projects, where the game needs the files fast when running, and WSL needs the files fast when searching code, using git, etc. WSL1 was usable on plain NTFS, and is now much closer to ext4 with Dev Drive NTFS. WSL2 I couldn't make fast.

You could potentially keep the Windows files on the WSL2 side, living on native ext4 and exposed to Windows as a network drive, but then you get the double filesystem-caching issue, you might slow a game-editor launch on the Windows side by far too much, your files are inaccessible during upgrades, and you always have to keep RAM dedicated to a running WSL2 just to read your files. MS Store versions of WSL2 will even auto-upgrade while running and randomly make that drive unavailable.
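To sanity-check which side of the 9P bridge a given path actually lives on, you can parse /proc/mounts; a small sketch (the helper name and sample data below are mine, not anything WSL ships):

```python
def fs_type(path, mounts_text):
    """Return the filesystem type of the longest mount-point prefix of `path`.

    `mounts_text` is the content of /proc/mounts; inside WSL2 the Windows
    drives typically show up as 9p (drvfs) mounts under /mnt.
    """
    best_mnt, best_type = "", ""
    for line in mounts_text.splitlines():
        parts = line.split()
        if len(parts) < 3:
            continue
        _dev, mnt, fstype = parts[:3]
        if path.startswith(mnt) and len(mnt) > len(best_mnt):
            best_mnt, best_type = mnt, fstype
    return best_type


# Live usage inside WSL2 (illustrative):
# fs_type("/mnt/c/dev/project", open("/proc/mounts").read())  # e.g. "9p"
```

Anything that comes back as 9p is going over the network-filesystem bridge rather than a block device.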


Running WSL2 on Dev Drive means that you're effectively doing network I/O (to localhost); of course it's slow. It's also very pointless since your WSL2 FS is already a separate VHD.

Not pointless if you are working on a Windows project but using Unix tools to search code, do commits, etc. WSL2 just isn't usable for that in large projects. `git status` can take 5 minutes on Unreal Engine.

If you just need the stock Unix command line tools, MSYS2 will give you them at native speed, no VM needed, no funky path mappings etc.

WSL is for when you actually need it to be Linux.


Ironically, they usually make unit tests and testing by hand in the REPL harder.

It says it doesn't use VNC while others do:

     While current solutions depend on VNC to display the Linux interface, we got rid of VNC altogether along with the problems it causes.

It's doing VNC-like "graphics remoting", but with less isolation/security in the name of efficiency/performance:

> We thought this was too inefficient. So we decided to combine both into a single application, to eliminate most of the interprocess communication, and avoid having the Linux server run in the background and thus suffering from power optimizations. We still have a framebuffer, but we do the scraping and updating directly. We have reduced all the hassle to mere memcpy and texture-update operations. This turned out to be huge! In the future, we hope to reduce this overhead even further by rendering directly to the texture, thus saving the need to scrape and copy memory.
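As an illustration only (this is not their code; the names and framebuffer geometry are made up), the scrape-and-copy step they describe reduces to a plain shared-memory copy, roughly like:

```python
from multiprocessing import shared_memory

WIDTH, HEIGHT, BPP = 640, 480, 4  # hypothetical framebuffer geometry
FB_SIZE = WIDTH * HEIGHT * BPP

# "Guest" side: allocate a shared framebuffer and render into it.
shm = shared_memory.SharedMemory(create=True, size=FB_SIZE)
shm.buf[:FB_SIZE] = b"\x33" * FB_SIZE  # stand-in for a rendered frame

# "Host" side: the whole scrape-and-update step is one copy; in a real
# compositor `texture` would then be uploaded to the GPU.
texture = bytes(shm.buf[:FB_SIZE])

shm.close()
shm.unlink()
```

The point is that once both ends map the same buffer, there is no socket protocol or encoding in the path, just the copy itself.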

When the next release of the Android Linux Terminal ships vGPU support via virtio, it will provide better graphics performance than VNC while retaining strong security isolation between the Debian guest VM and the Android host.


It's not clear it is much less secure. It could be using subprocesses and shared memory, since they say they are just writing a buffer. I think you can't get that with cross-app communication in Android, only with subprocesses, right?

I have a feeling the iPod's popularity had more to do with buying up exclusive access to mini hard drives than with the industrial design. The same hard-drive-maker deals extended to the smaller drives in the iPod mini. The iPhone typically gets exclusive access to TSMC's latest nodes a year ahead of competitors. Same with AirPods: they got the power draw down to that level about a year before most others by using their exclusive access to a TSMC node.

Having a big enough brand that buying out exclusive access to new tech isn't a huge risk is key, though they probably got the iPod HD exclusivity very cheap and weren't so big then. Then having the exclusive access builds on the quality and mystique of the brand and makes it less risky to buy in again on the next wave of exclusivity.


> though they probably got the iPod HD exclusivity very cheap and weren't so big then.

Toshiba were struggling to find a market for the 1.8" disk they'd invented. It was mentioned in passing after a routine meeting with Apple engineers and, to the latter's credit, they immediately saw the potential and called Steve Jobs to get the cash to sign an exclusivity deal. It cost them $10 million, absolute peanuts.

The Creative Nomad had a vast capacity but used standard 2.5" disks to minimise costs, making it bulky. If Creative had had the opportunity and foresight to grab the 1.8" supply chain, history might have been very different. If...


I think that's why they put the supply chain wizard in as CEO after Jobs and not the designer.

> No longer a leader in the industry and simply following trends and riding the waves (the AI trend…)

While they missed out on the first major commercialization of LLMs, they invented transformers and now have the leading LLM for coding and second best or potentially best training hardware in the world, designed in house, which they started working on before the current boom and kept improving.


they are the leading LLM for coding like I am the leading candidate for President in 2029

2.5 is better with agentic coders than anything else, and way better on API price compared to OpenAI and Claude.

> Yeah, there are threat models that won't be stopped here

Like running Windows in a VM or using an HDMI capture card. And are they going to break Teams meetings run over Moonlight etc. with this? If you are capturing with OBS during the meeting, does it get blacked out, or does it just break your recording?


You don't need to elaborate on mechanisms for bypassing because you're already imagining a threat actor that is out of scope.

This is primarily about blocking accidental leaks by regular employees who were asked to not record but ignored it. This kind of reuse of content happens all the time in companies of any significant size and isn't entirely stopped by simple requests or watermarks. This tool gives companies one more option to protect against this very lame and boring but also very real threat.


> regular employees who were asked to not record

I think it should not be possible to ask this.

For example, an employee might want to record to cover their own ass (e.g., if being asked to do some morally questionable things, which the employee could record and then use as protection against the company going back on its word).

Having the ability to _control_ whether an employee can keep records independently of the company only serves to move more control away from the employee.


Trade secrets are a thing. Legal requirements are a thing as well. In some cases (think HIPAA) an employee recording something that they're shown can translate to significant legal liability for both themselves and the company.

> This is primarily about blocking accidental leaks by regular employees who were asked to not record but ignored it.

I think you're seriously overestimating regular employees. A significant number of people will send you smartphone pictures when you ask for a screenshot - why would they suddenly start looking into on-device screen capture when taking a picture or video of some random presentation?


> A significant number of people will send you smartphone pictures when you ask for a screenshot

n=1, but this is also my experience at $JOB the majority of the time.

I already addressed this possibility in my first comment. Points 1+2.

It also gives legal teams more of a foundation to stand on: bypassing this isn't trivial, so it shows real intent.

"isn't trivial"? When the sibling comment mentions how a lot of employees will by default use a camera when asked for a screenshot?

By "isn't trivial", I mean that it takes real effort on the user's end to bypass (as they can't simply take screenshots via the traditional methods). Bad choice of words on my end.

They have absolutely no way to enforce anything of the sort on Linux, technically.

A typical corporate workspace won't have any employees running Linux on their work machines.

That is not my experience in the last 10 years.

The easiest one is already mentioned in the article: Someone pulling out their phone and snapping a photo.

People know it’s not perfect. However, raising the bar discourages the spontaneous captures that people might try out of habit.


The beginning of Barton Fink is a good watch if you want to immerse into this comment.
