resharpe105's comments

The key question is: will there be a hard switch to only ever use on-device processing?

If not, and if you don’t want practically every typed word to end up on someone else’s computer (as the cloud is just that), you’ll have to drop iOS.

As for me, that leaves a choice between a dumbphone or GrapheneOS. I’m just thrilled with these choices. :/


It’s not sending every word to the cloud. I think you must invoke the AI features. Am I wrong?


I understood that it will have the full context of the data on your phone, in order to be “useful”.

We have yet to see whether that means only the data you’ve invoked AI features on, or the totality of your emails, notes, messages, audio transcripts, etc.


From the presentation it sounds like the on-device model determines what portion of the local index is sent to the cloud as context, but is designed for none of that index to be stored in the cloud.

So (as I understand it) something like "What time does my Mom's flight arrive?" could read your email and contacts on-device to find the flight, but would have to send the flight information, and only the flight information, to the cloud to answer the question.


My guess: It’s not the shiniest, newest toy model anymore. Many people want the latest greatest just for the sake of having it.


I do understand my machine is still (more than) capable. But it's frustrating when the release cycle is so rapid: it's not just a desire for the 'latest and greatest', it's an almost immediate loss of bang for buck unless you buy basically at launch.

They released the M2 Max less than a year ago.


The 13.3” MacBook Pro is discontinued in favor of the 14” MacBook Pro with M3.

I say good riddance.

Would love it if they made the bezel smaller on the iMac too, alongside the M3 upgrade.


Meanwhile, a company I used to work at keeps using a self-hosted MantisBT instance.

Its maintenance takes maybe 2 engineer-days per year, if that. The initial setup, with all the plugins (notably those for project management and release tracking) and integrations (e.g. git), took about one engineer-week if I remember correctly.

And it has worked just fine for 7 years now, to the point that the only advantage of Jira I see is a slightly fancier-looking interface for the regular user. All while costing peanuts.

I even prefer the old-school MantisBT look and feel.


Mantis! Last time I used that system was in 2012 - also self-hosted by the company.

My impression of JIRA was always that it felt slow - at least compared to our Mantis instance. Mantis might not have such a ridiculous range of customizations, but does a small to mid-sized business ever use most of them?


It will be taken to the next level by AI, in terms of both defensive and offensive capabilities.

We already have interesting startups and projects dealing with the defensive aspect.

I’ll note glog.ai as something I’ve seen most recently in this domain: a machine-learning algorithm that classifies SAST output and assists the developer in triaging potential vulnerabilities.


This is nonsense.


I can attest to this. Sqlc is awesome.


You can easily do nested rows/objects in SQL.

E.g. in Postgres:

  SELECT u.*,
         (SELECT JSON_AGG(p)
            FROM (SELECT p.*
                    FROM purchases p
                   WHERE p.user_id = u.id
                   LIMIT 10) p) AS purchases
    FROM users u
    JOIN purchases p ON p.user_id = u.id
   GROUP BY u.id
   LIMIT 100;
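The JSON_AGG pattern above can be exercised end-to-end with nothing but Python's stdlib; here is a hypothetical sqlite3 analogue (json_group_array standing in for JSON_AGG, schema and data invented, and the LIMIT-10 derived table dropped because SQLite lacks LATERAL-style correlation):

```python
# Hypothetical stdlib-only analogue of the Postgres query above:
# each user row carries its purchases as a nested JSON array rather
# than as extra joined tuples.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE purchases (id INTEGER PRIMARY KEY, user_id INTEGER, item TEXT);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO purchases VALUES (1, 1, 'book'), (2, 1, 'pen'), (3, 2, 'mug');
""")

rows = conn.execute("""
    SELECT u.id, u.name,
           (SELECT json_group_array(json_object('id', p.id, 'item', p.item))
              FROM purchases p
             WHERE p.user_id = u.id) AS purchases
      FROM users u
     LIMIT 100;
""").fetchall()

# Decode the nested JSON column into per-user purchase lists.
nested = {name: [p["item"] for p in json.loads(purchases)]
          for _id, name, purchases in rows}
print(nested)
```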

Starting queries with FROM: what is the benefit of that besides personal preference?

And you can simply use a CTE or a view instead of a field set.


I agree my off-the-cuff syntax is probably not great, but I do think there's a benefit to a more native version of json_agg that returns typed data rather than strings/JSON.

The "start queries with FROM" point is about tooling: you get easier and better autocomplete and error detection when the tables being used are declared up front. When you start typing "select some_field", there's basically no way to offer good autocomplete or correctness checks until the FROM clause appears. IMO there are more philosophical reasons too.

Views and CTEs have their place, but they also have shortcomings that I don't think this solves. And I don't think they really fill the gap I'm thinking about either.


I have to disagree with the following paragraph:

“SQL is particularly inflexible in terms of control over response: A query result is always a set of tuples. A query is incapable of returning a rich structure that is aligned with how the application needs to consume the result. ORMs solve this partially at best at the cost of hiding the capabilities of the underlying database. They also obscure the structure of generated queries, making it difficult to understand the resulting performance implications.”

ORMs, or the later-mentioned GraphQL, are not the only approaches to solving the object-relational impedance mismatch.

SQL is perfectly capable of serializing sets of tuples to XML (SQL/XML is part of the SQL standard), and most SQL RDBMS implementations now support working with JSON data as well.

Serializing to XML using SQL (and, in the past couple of years, to JSON), then deserializing that XML/JSON into an object model in the programming language of choice, is something I’ve seen used fairly often in my career.

Heck, I’ve even seen the entire business logic of a very complex system implemented via thousands of stored procedures that always returned XML. Prior to 2010, this was not the blasphemy it is made out to be today…

So, to reiterate my point: SQL is not inherently as inflexible as it is made out to be here, so neither ORMs nor GraphQL are a necessity for dealing with SQL output inflexibility (and both can be very useful tools, depending, as always, on the context).
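The deserialization half of the approach described above can be sketched in a few lines; this is a minimal Python illustration where the payload shape and class names are invented, standing in for a JSON document as an RDBMS might emit it (e.g. via Postgres JSON_AGG):

```python
# Hypothetical sketch: a JSON document emitted by the database,
# deserialized into a typed object model in the application.
import json
from dataclasses import dataclass, field
from typing import List

@dataclass
class Purchase:
    id: int
    item: str

@dataclass
class User:
    id: int
    name: str
    purchases: List[Purchase] = field(default_factory=list)

# What a single serialized row might look like coming off the wire.
db_payload = ('{"id": 1, "name": "alice", '
              '"purchases": [{"id": 10, "item": "book"}, {"id": 11, "item": "pen"}]}')

raw = json.loads(db_payload)
user = User(id=raw["id"], name=raw["name"],
            purchases=[Purchase(**p) for p in raw["purchases"]])
```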


I'm clearly biased, but at least in my experience, while this is technically true, you're still dealing with XML (and now JSON) shoehorned into a tuple-based context. In other words, there is still a (lossy) translation layer, it just happens to be in the RDBMS rather than in-app.

Fauna's advantage here is that this way of structuring queries is deeply integrated with the language (and underlying wire protocol) itself. For example, Fauna's response format supports returning independently iterable result sets (supported by cursor-based pagination under the hood), allowing you to lazily populate the result graph in your app based on further user interaction.


> In other words, there is still a (lossy) translation layer, it just happens to be in the RDBMS rather than in-app.

It's not lossy if your application can guarantee a JSON <-> datatype roundtrip and the JSON is validated with a JSON Schema (generated by your application).

In Rust, it's something like this:

https://serde.rs/ to do the data type <-> json mapping

https://docs.rs/schemars/latest/schemars/ to generate jsonschema from your types

https://github.com/supabase/pg_jsonschema to validate against the JSON Schema in your database (Postgres). With this setup it's interesting (but not required) to also use https://docs.rs/jsonschema/latest/jsonschema/ to validate against the schema in your application.
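The roundtrip guarantee being claimed here can be illustrated in Python rather than Rust (which the links above target); a hand-rolled key/type check stands in for the generated JSON Schema validation that schemars/pg_jsonschema would provide:

```python
# Sketch of the "lossless roundtrip" idea: serialize a typed value to
# JSON, parse it back, and verify nothing was lost. The Purchase type
# is invented for illustration.
import json
from dataclasses import dataclass, asdict

@dataclass
class Purchase:
    id: int
    item: str

original = Purchase(id=7, item="book")

encoded = json.dumps(asdict(original))     # datatype -> JSON
decoded = Purchase(**json.loads(encoded))  # JSON -> datatype

# Minimal stand-in for schema validation: required keys and types.
parsed = json.loads(encoded)
assert set(parsed) == {"id", "item"}
assert isinstance(parsed["id"], int) and isinstance(parsed["item"], str)

assert decoded == original  # the roundtrip is lossless
```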


I agree with the sibling; the problem is that this requires you to shoehorn all your rich data types into JSON.

