
Both iOS and macOS seem to have a strange number of features from their updates "coming later," and that's not even counting Apple Intelligence, which they really made the cornerstone of the updates in the initial marketing. Very bizarre.

Same thing happened with Universal Control.

I'm ok with this. I'd rather they get the core right than scramble to deliver all these nice-to-haves in time and botch it up.


This effect has been compounding too. I used to upgrade almost every cycle but have been on an 11 Pro and haven't been motivated to upgrade until this 16. It now takes the combined changes of about 4 cycles to generate the level of interest in upgrading that a single cycle did in the first 10 years.

If I do get a 16 Pro, the only thing I can even think of from an upgrade perspective that would make it last less than 5 years would be if on-device inference becomes a major thing and there are huge and relevant improvements; everything else is basically mature, and I can't see it getting enough better in a few years to motivate an upgrade.

People bring up the battery as a big reason to upgrade, but smarter charging has changed that a lot; my 11 Pro is still at 80% max capacity after 4+ years, which isn't materially different day to day for me.


You are lucky, or have excellent battery discipline. I just had to trade my 11 for a 15; the battery life was terrible (enough to force me, knowing the 16 was imminent). Also, backgrounded apps would reload almost every time I swapped, though maybe I am just using memory-hungry apps.

Guessing you guys have already read it but here is a good related book for others in the thread: https://www.goodreads.com/book/show/42036377-where-is-my-fly...

The project sounds like an ambitious, exciting idea, and I'd take the pessimism/skepticism as a positive sign that you are on the right track. It feels like something has changed in tech for a lot of people with regards to tackling really ambitious projects compared to years past.

I'm curious how the relative difficulty of fully automated driving would compare to fully automated flying. I'm no pilot, but in a lot of regards flying might actually be easier given the number of possible scenarios cars can be faced with.

I imagine trying to compare the risk between flying and driving is pretty interesting. I would think in flying risk is significantly more dependent on the individual vs the tremendous risk in driving posed by every other car on the road. That and I would guess the accident vs fatal accident balance is completely different.

Considering we fairly readily give basically any 16 year old a driver's license and just accept a rather high risk rate with driving, I wonder how much the gap between driving and flying could be closed. That and the financial gap as well -- from a raw materials perspective, could we in theory have a Honda Civic of the sky that a significant percentage of Americans (say 30-40%) could find accessible?


This really needs a video demo, or at least a more in-depth text description of the features. Will download later to try, but I'm curious: does this just do simple hard cuts based on the audio's text, or is there any AI magic for blending sentence timing, if that makes sense?

A number of comments turned me onto Descript -- made a similar comment on another audio thread recently: it drives me absolutely insane how all audio tools with any AI are web-based monthly SaaS instead of offline, private, GPU-powered upfront purchases.


The web-based tools launch and move faster. There's no lack of offline tools, if you're the kind of person that files issue tickets in their spare time.


"if you’re the kind of person that files issue tickets in their spare time"

What does that have to do with non-Web-based applications?


<crickets>


Elevenlabs has some pretty cool stuff, but I really despise how it's all cloud-based. Wish there were an audio AI company following a path similar to what Topaz has been doing for video/photo AI with desktop software. Open source has been lagging more than I expected in this area too.


GPTSOVITS, StyleTTS2, and RVCv2 are still the open source SOTA for TTS and voice conversion. These models are unfortunately really far behind Elevenlabs' offerings. We're not much further along than the Tacotron2 (2018) days.

Elevenlabs is the only model company I can think of that is ahead of everyone else in their category. Video and LLMs are hyper competitive, but voice is a one-company game. Elevenlabs hired up everyone in the space and utterly dominates.

I'm hoping this changes. They've been in pole position for over a year and a half now with nobody even coming close.

There's probably a reason why they're so research-oriented. The minute an open source model is released that rivals Elevenlabs in quality, they're in big trouble. There's absolutely zero moat for their current products and there are fifty companies nipping at their heels that want to be in the same spot. Elevenlabs' current margins are juicy.


Blacking out single-frame screenshots for copyright reasons is completely psychotic. Copyright is truly out of control and just comically ridiculous at this point.


It's not blacking it out on purpose; rather, it's a side effect of the DRM chain. The DRM content is rendered outside of the UI chain, so it cannot be captured as part of a normal screenshot or video. The same happens on macOS (and even Windows) if you try to screen record while watching DRM L1 content such as Netflix, Apple TV+, etc.


Widevine and HDCP can seriously suck it. Look at any torrent site and tell me that it stops piracy


DRM is on purpose.


Always has been. Tom Scott did a great video on copyright [1] - it's long overdue for a revamp.

1 - https://www.youtube.com/watch?v=1Jwo5qc78QU


While I still use Reddit often, I kind of hope this marks the decline and something new emerges. Pretty much anything and everything their army of devs have built over the past 5+ years has been anti-user. I can't even remember the last time a positive new feature was added. (This also feels broadly true for the majority of consumer apps in recent years -- remember the 2010s, when devs actually added new features for users to apps on a regular basis?)

There are countless things I assumed would have been fixed years and years ago that never have been. For example, the trash search engine, where you are better off using Google with site:reddit.com. I do wonder if it's incompetence or intentional.

Would love to see something for Reddit in a vein similar to what Bluesky is attempting with its Twitter clone. Have a lot of ideas in this area lately.


> Pretty much anything and everything their army of devs have built over the past 5+ years has been anti user.

That's because you're a power user.

Reddit owners figured out that even though the service gained popularity as a middle ground between 4chan and Facebook, they can make the most money if they kick out the weirdos and cater to a general audience, so they're consistently making changes to make it appealing to the average Joe. You can clearly notice how they're slowly but surely removing controversial content and promoting userbase growth over everything else.

My prediction is that Reddit will keep growing, but it's simply going to be "Facebook, but for people 15 years younger".


What are some example "general audience" features added these last 5 years?


- Realtime chat

- Design that is much less customizable via userscripts/CSS

- More emphasis on users rather than communities (enhancements to user pages, profile pictures that are displayed in comments)

- Garbage native media hosting alongside worse handling of offsite (youtube/imgur) media hosting

- Inline gifs in comments

- Algorithms that emphasize clickbait

- Algorithms that try to guess what you want to see rather than letting you tell it what you want to see

- Suppression of content deemed unsavory by advertisers

- Emphasis on mobile design over desktop design

- Backward incompatible changes


So, the userbase is increasing? Because those "15 years younger" are already on other platforms shaking their donkey.

Sounds like a great meeting summary but did it yield results?


Tons of mod tools built on top of shadow comment removals: crowd control, comment nuke, disruptive comment collapsing, contributor quality score, subreddit shadow bans via automoderator ...

Check your account here [1], you probably have removed comments you don't know about. Or comment here [2] to see how it works.

[1] https://www.reveddit.com

[2] https://www.reddit.com/r/CantSayAnything/about/sticky


That obnoxious FB-feed-style UI they try to force you to use.


dark mode


  > remember the 2010's when devs actually added new features for users to apps on a regular basis
AnkiDroid, Telegram, and the Tesla app are the only three applications that I've seen add actual features for end users, in years. Even Firefox has stagnated, and some apps like Google Translate have become difficult to use for anything other than the happy path. I just bought a new phone, a three year old model still in stock, and I'm not even updating the OS as the current OS allows me to record phone calls but the newer ones do not.

I am completely off the update treadmill.


Does Firefox need to be continuously adding features "for the users"?

I kind of like that their privacy efforts have been trimming back unnecessary features, at least from 3P hosts.


Honestly, most apps shouldn’t be “adding features.” Simply because of what OP said: the new features are almost always anti-user. Most applications I use today are either as good as or worse than the apps were in 2014. Developers throughout the industry have been furiously developing for ten years and we’re not making anything better.


Lemmy is to Reddit as Bluesky is to Twitter, and both run on ActivityPub, IIRC.


Mastodon instead of Bluesky and you've got it, I think. (Bluesky has its own protocol.)


I've really fallen in love with mastodon.

I follow a whole bunch of developers, artists and some writers. It's such a breath of fresh air compared to what twitter ended up being.


Yes it does. Bluesky uses the AT Protocol.


Interestingly enough, the best kind of development I see is happening in the public sector in Australia. For example, both the official Bureau of Meteorology weather app and the general car rego / council matters app get regular, worthwhile improvements, with actual meaningful changelogs in the Play Store updates. None of that stupid "bug fixes and performance improvements" bullshit that every other popular app puts in there.

Still boggles my mind at times.


At least they let you keep the old web interface, instead of forcing the new stuff on you.


You have to wonder how long this will last, especially now that they're public.

One day, they'll need to squeeze a few extra percentage of revenue to meet their quarterly target and decide that dropping old.reddit.com will move enough people to their revenue optimized new page to get there.

Or there will be a breaking change in the API and they'll decide they don't want to bother supporting the old one anymore.

In any case, the days of old.reddit.com are numbered. I already stopped using Reddit on my phone after they shut down third-party apps. Just waiting on old Reddit to disappear to finally say goodbye to this website.


Reddit killed off the former mobile (i.reddit.com) interface, however. Amongst other uses, that was great in terminal browser clients.


Building an actually successful business is the most pro-user thing any company can do.


This has been my main complaint with Oculus all the way since the Rift days. At that point I assumed it was forthcoming within a few years yet here we are 8 years later and somehow it's not all that different. I don't understand how Oculus/Meta isn't drastically ahead at this point on software.


Actually, MSFT made the same blunder when it came to HoloLens. Well... they did start to build some of the core spatial context (and had a fabulous head start). But somewhere along the way, they yielded to Unity/Unreal. This was mind-boggling to me, as giving away the keys to the platform to another party was literally the founding story of Microsoft (with IBM having made the blunder). I wonder if engineering leadership recalls history when making such strategic goofs.


Layman's take: All of them are afraid of making the system that fulfills the promise nominally, but that lacks some key component or is on hardware that doesn't get adopted, only to have a competitor swoop in, clone that system with the necessary fixes, and essentially do what Apple did with MP3 players and smartphones. They're all trying to establish market dominance BEFORE giving us a reason to use the devices (bass ackwards) - and are even happy to see the market collapse, if it meant that, simply, no one cracked that particular nut.

Apple, Meta, Microsoft like how things are right now. These pushes are much-hyped, but they're made less out of real passion for the promise and more desperation to avoid being left behind.


What’s more insulting, after the announcement of the vaporware known as “infinite office,” is Meta’s total lack of attention to their PC software. The work-related features of Quest would be near non-existent if it weren’t for 3rd parties.


I totally agree. While Apple has a north star with this device (or looks like it does), Meta's endeavors always seemed like diversification. Meta seems to be looking for the north star. Apple just pointed it out, so now everyone is going to head that way.


Well it's easy to understand why. How could they build an MR ecosystem when their latest device is just barely MR?

They can only just now move towards MR with the Quest 3 and really it'll need another generation to be MR native.

They have a good relationship with developers and focused on what their current hardware is capable of, which is running one VR app. They spent the last 8 years on that use case and I think that was the right choice given the hardware realities at the time.


They are drastically ahead. They have VR games, which are the only real reason to own a VR headset at the moment (and for at least 5 years).


> VR games, which are the only real reason to own a VR headset

Because the rest of the experience is so unpolished.

I have a Meta Quest 3 and overall it doesn't exactly feel like they invested tens of billions into that ecosystem. The headset's UI is basically a 2D desktop with taskbar and app launcher covering a small fraction of the field of view, including some buttons that are so small it's tricky to aim at them with the controllers. The Oculus desktop client fails to recognize it via USB and the official remote desktop app is still in Beta while Steam lets me play games or use the desktop remotely with two button presses. To this day I have not managed to just copy files directly onto the device, no USB connection (other than to Steam) works. Only some semi-reliable wifi transfer from a third-party application worked but that required enabling developer mode.

On top of that they decided to ship it with a head strap that never fits well and gets painful within 30 minutes, and then made it unnecessarily complicated to swap. Yes, of course people aren't going to do more on that thing than play a few rounds of Beat Saber, because many simply don't want to jump through hoops like that. I think it's a great device overall but some things are just so...unnecessary.

Apple not focusing on games might be a good thing because it means they can't just rely on games for free sales numbers.


> To this day I have not managed to just copy files directly onto the device, no USB connection (other than to Steam) works. Only some semi-reliable wifi transfer from a third-party application worked but that required enabling developer mode.

To echo this: today I wanted to watch something on my Vision Pro, so I was on the TV app on my phone, saw a movie I wanted to watch, and then after a good amount of time moved over to my Vision Pro.

Being the scatterbrain that I am, I forgot what the movie was, unlocked my phone, and the movie listing view was there. In my head I was like “damn, wish I could share this page over to my Vision Pro like I do for my iPad.”

And that’s exactly what I did with AirDrop, the already existing way to share anything between Apple devices. I would not be surprised if universal clipboard works as well.


You fail to realize that wearing such a headset 8h a day is not the holy grail for most people; I'd never work that way. It's horrible for your eyes and overall health in many ways we already know, and many that will be discovered after this beta testing runs for a decade+.

Entertainment maybe, but definitely not work. Like it or not, outside a few tech bubbles this is how the world sees VR, and it's not changing anytime soon. Still, to sporty, outdoorsy people this is a kids' toy (that shouldn't ever be on a kid's head); reality is and always will be better and healthier.


There's a real chicken-and-egg recursion there: VR headsets are only good for VR games because the only thing made for VR headsets is VR games because VR headsets are only good for VR games because...


It’s not chicken and egg, it’s simply the reality about the hardware. Even Apple, with all their resources and a $3500 price tag, could only make a mediocre passthrough on a very heavy headset. The hardware isn’t ready for AR yet.

Games are where it’s at for the foreseeable future. Games don’t need passthrough and they don’t need especially high resolution.

Look at how much of the vision pro is about giving people a connection to the real world while they are using the device. Games don’t need that, people want to be immersed while they are playing a game.


> Even Apple, with all their resources and a $3500 price tag, could only make a mediocre passthrough on a very heavy headset. The hardware isn’t ready for AR yet.

Hugo disagrees:

> thanks to a high-fidelity passthrough (“mixed reality”) experience with very low latency, excellent distortion correction (much better than Quest 3), and sufficiently high resolution that allows you to even see your phone/computer screen through the passthrough cameras (i.e. without taking your headset off).

> Even though there are major gaps left to be filled in future versions of the Vision Pro hardware (which I’ll get into later), this level of connection with the real world — or “presence” as VR folks like to call it — is something that no other VR headset has ever come even close to delivering and so far was only remotely possible with AR headsets (ex: HoloLens and Magic Leap) which feature physically transparent displays but have their own significant limitations in many other areas.


I admit I don’t own one of these things but reviewers seem to be unanimous that the passthrough on the Vision Pro is both the best of any headset on the market, yet also very mediocre compared to seeing things through your own eyes, especially in low light.

Given that it’s designed to be used indoors, poor low light performance is a big problem.

There’s a latency/acuity tradeoff whereby the more post-processing Apple applies to improve acuity, the worse the latency and more nausea they create. It’s going to require a lot more research into hardware post-processing.


Seems like the best passthrough was a fairly easy goal to achieve since nobody else was even really trying. Heck, the Quest applies quality degrading filters to passthrough video (add noise, remove chroma) to discourage using it.


Filters to discourage use? Do you have a source for this? Surely they are just low-res, infrared cameras.


They probably mean the Quest 3, which has RGB cams, unlike the prior Quests 1 and 2. I also disagree that it would have been artificially muddled to discourage usage; if that were the case, they'd not have presented it so proudly. It's just the kind of cam setup that $500 buys (in fact, it probably is a bit subsidised).


But VR only needed DK1 to take off.


They are drastically ahead in the VR gaming space. But the potential VR/AR market is hundreds of times bigger than that.


The potential market in 10 years. Apple has jumped the gun here. This is their Apple Newton moment for AR.


That's probably a fair argument at the pace of innovation pre-AVP. Depending on how quickly they iterate on this (and, as the article says, push developers), they may create a self-fulfilling prophecy and significantly reduce the time until this market exists.


I hope that seeing where the Newton could’ve gone gives them the confidence to continue with the AVP. A few iterations could really show a great product both in terms of quality and practicality.

I had a Newton and loved it, and eventually tried a few Palm devices but nothing ever quite hit like the Newton for me, a real shame they dropped it imo.


Effectively, though, they've created their own mini innovator's dilemma. They can't do anything to alienate those users, but they might have to if they want to stay competitive in the long run.

Innovator's dilemma is a great problem to have if you are dominating a profitable industry already. But it's a terrible problem to have if you are barely hitting break even or even losing money. Then you really can't afford to go backwards first to go forwards later.


Are we gonna ignore the fact that VR pornography exists?

Also I'm pretty sure those VR games that are run on a computer connected to a headset could display on any headset, Apple or Oculus. A cursory search reveals people have already been getting SteamVR to work on the Vision Pro.

Running stuff directly on the headsets is neat, but there's no headset on the market with enough power to match what you can have when plugging them into a computer.


> Are we gonna ignore the fact that VR pornography exists?

On the Apple Vision Pro?


The point is that VR games are not "the only real reason".


Through Safari, or forthcoming VR video player apps (Apple doesn't censor generic utilities).


I'm not saying they should (in fact I think the Valve approach is better), but Apple does seem to be strongly against porn. Why wouldn't or shouldn't they censor generic utilities? If it's keeping users safe in apps, wouldn't it be keeping users even safer in a browser or other content browsing app?


You’re suggesting Apple would block porn altogether on their platform. You can go to any XXX site right now, find VR videos, and play them in your mobile browser. Throw on a Cardboard or even a crappy Polaroid “VR” phone case and you’re set. There’s no way Apple would say “well, on the Vision Pro, we will actively block adult websites”.


The players are already there. Those who want VR porn have been able to view it on AVP since days after release.


VR porn is largely just WebXR on webpages or SBS VR 180 videos. WebXR has been available on the Vision Pro since day 1 if you enabled it in the Safari advanced options and there are now multiple video apps that can handle SBS VR 180 playback.

All the news about it not being possible on the AVP was largely a bunch of hyperbole, misunderstandings, and misinformation.
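
For the curious, here is a rough sketch of the WebXR entry point that browser-based VR video players rely on. These are the standard WebXR Device API calls (the Safari feature flag mentioned above just exposes navigator.xr); the function itself is a made-up illustration, not any particular site's player:

  async function startImmersiveSession(): Promise<void> {
    // Cast to any in case the WebXR type definitions aren't installed.
    const xr = (navigator as any).xr;
    if (!xr || !(await xr.isSessionSupported("immersive-vr"))) {
      console.log("WebXR immersive-vr is not available in this browser");
      return;
    }
    const session = await xr.requestSession("immersive-vr");
    // A VR 180 player would now attach an XRWebGLLayer and render the left
    // and right halves of the side-by-side video frame to each eye's view.
    session.addEventListener("end", () => console.log("session ended"));
  }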


I have been loosely following Bluesky for a while and have read some blog posts now, but haven't delved super deep. Can you expand on the "infrastructure takedowns"? Does this still affect third-party clients? I am trying to understand to what degree this is a point of centralization open to moderation abuse, versus Bluesky acting as a protocol where, even if we really want to, we can't take something down other than off our own client.


The network can be reduced to three primary roles: data servers, the aggregation infrastructure, and the application clients. Anybody can operate any of these, but generally the aggregation infra is high scale (and therefore expensive).

So you can have anyone fulfilling these roles. At present there are somewhere around 60 data servers, with one large one we run; one aggregator infra; and probably around 10 actively developed clients. We hope to see all of these roles expand over time, but a likely stable future will see about as many aggregator infrastructures as the Web has search engines.

When we say an infrastructure takedown, we mean off the aggregator and the data server we run. This is high impact but not total. The user could migrate to another data server and then use another infra to persist. If we ever fail (on policy, as a business, etc) there is essentially a pathway for people to displace us.
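
To make the "anyone can operate these roles" point concrete, here is a minimal sketch of tailing the relay firehose, the event stream an aggregator indexes. It uses the documented com.atproto.sync.subscribeRepos XRPC method on the public relay; decoding of the binary DAG-CBOR event frames is deliberately omitted (a real indexer would use the @atproto packages for that), so treat this as an illustration rather than a full consumer:

  import WebSocket from "ws"; // npm install ws

  const RELAY = "wss://bsky.network/xrpc/com.atproto.sync.subscribeRepos";

  const socket = new WebSocket(RELAY);
  let frames = 0;

  socket.on("message", (frame: Buffer) => {
    // Each binary frame describes a signed commit to some user's repo on
    // their data server; an aggregator decodes these and builds its indexes.
    console.log(`frame ${++frames}: ${frame.length} bytes`);
    if (frames >= 10) socket.close(); // just sample a handful of events
  });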


Why would anyone run their own aggregator? (i.e. if you run a search engine, you can show contextual ads to recoup your investment and then some.)

Sorry about going off-topic, I realise it's only tangentially about labelling.


We'll let you know when we figure out why we're doing it.


I guess I should have asked about anyone else :) I know why you would - you're planning to sell services around Bluesky [0], and thus need Bluesky itself to be working.

But if it's already working (because you're running an aggregator), there doesn't seem much reason for anyone else to run one? In other words, isn't there a significant risk that there will be fewer aggregators than there are search engines, i.e. just a single one?

[0] https://bsky.social/about/blog/7-05-2023-business-plan


I think this is a really good question. Let me offer one possible answer:

It might not be necessary or useful to have multiple aggregators right now. However, I do feel better knowing that if Bluesky the company goes under or changes to a degree where I'm not happy with their decisions, it's possible for someone to stand up a second aggregator.

For that matter, if someone's a free speech absolutist and if they care enough about it to spend the money, they could stand up an aggregator right now with more permissive standards.


Even at scale, running a Relay should be well within the means of a motivated individual or org that is willing to spend hundreds of dollars per month. Right now it'd cost just tens of dollars per month to run a whole network Relay. Some people are already doing this I believe.

Running an "AppView" (an atproto application aggregator/indexer/API server) is generally an order of magnitude more expensive and complicated. But still not beyond the reach of a user coop, non-profit, or small startup.

So these services should all be well within the capabilities of at least multiple companies operating in the atproto ecosystem as it scales.

And in many cases it should make good sense for these companies to do this since it will improve their performance by colocating their services and enable them to do things like schedule their own maintenance windows, etc.


Thanks! "It costs thousands of dollars a month, which is feasible enough that people will find a way" sounds pretty reasonable.


Would it be possible to do a p2p aggregator (like YaCy, but for atproto)?


It might be worth trying, but essentially what you're trying to do is cost/load sharing on the aggregation system. You could do that by computing indexes and sharing them around, to reduce some amount of required compute, and I suspect we'll be doing things like that. (For example, having the precomputed follow graph index as a separate dataset.) However if you're trying to replace the full operational system, I think the only kind of load sharing that could work would require federated queries, which I consider a pretty unproven concept.


Nice work; I like the simplicity, math blocks, and mixed syntax highlighting. My workflow for the past few years has been to have, within all my Sublime projects, a gitignored folder named scratchpad with mixed files for this exact use case. It's annoying having to create separate tiny files if I want syntax highlighting for a mixed bag of stuff, though. I would LOVE this exact app as a Sublime plugin that, on a per-project basis, created a .scratchpad file I could gitignore with this same feature set. I haven't ever gone deep on the capabilities/limitations of Sublime Text plugins and whether this would even be feasible. I definitely would want the ability to have different blocks, but on a per-project basis, or I can see the scrolling easily getting out of hand.

This project definitely hits on one of those things just about every dev probably has a slightly different approach to and nobody has really targeted a solution towards so kudos on that.


Why gitignore? Wouldn't you lose all your notes if your computer died or similar?

