
Years ago, over a decade ago now, I was a .Net developer. Microsoft introduced Entity Framework, their new way of handling data in .Net applications. Promises made, promises believed, we all used it. I was especially glad of Lazy Loading, where I didn't have to load data from the database into my memory structures; the system would do that automatically. I could write my code as if all my memory structures were populated and not worry about it. Except, it didn't work consistently. Every now and again a memory structure would not be populated, for no apparent reason. Digging deep into technet, I found a small note saying "if this happens, then you can check whether the data has been loaded by checking the value of this flag and manually loading it if necessary" [0]. So, in other words, I have to manually load all my data because I can't trust EF to do it for me. [1]

Long analogy short, this is where I think AI for coding is now. It gets things wrong enough that I have to manually check everything it does and correct it, to the point where I might as well just do it myself in the first place. This might not always be the case, but that's where I feel it is right now.

[0] Entity Framework has moved on a lot since then, and apparently now can be trusted to lazily load data. I don't know because...

[1] I spat the dummy, replaced Windows with Linux, and started learning Go. Which does exactly what it says it does, with no magic. Exactly what I needed, and I still love Go for this.




Pardon me for the tangent (just a general comment not directed to OP).

What I have learned over the years is that the only way to properly use ORM is as a fancy query tool. Build the query, fetch/update data, MOVE THE DATA to separate business objects. Don't leave ORM entities shared across the sea of objects!

Phew, thanks, I got that off my chest.
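
To make the "fancy query tool" idea concrete, here's a minimal hedged sketch in EF Core terms (AppDbContext, Order, and OrderSummary are hypothetical names): the ORM builds and runs the query, but only a plain business record leaves the method, so no tracked entities leak into the rest of the app.

    using Microsoft.EntityFrameworkCore; // assumes .NET implicit usings for the rest

    // Plain business object: no ORM baggage, safe to pass around.
    public sealed record OrderSummary(int Id, string CustomerName, decimal Total);

    public static class OrderQueries
    {
        public static async Task<List<OrderSummary>> RecentOrdersAsync(AppDbContext db)
        {
            return await db.Orders
                .AsNoTracking()                                    // read-only query
                .Where(o => o.CreatedAt >= DateTime.UtcNow.AddDays(-30))
                .Select(o => new OrderSummary(o.Id, o.Customer.Name, o.Total))
                .ToListAsync();                                    // data is copied out of EF here
        }
    }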


I wouldn't have believed you until I moved from ActiveRecord (Rails's ORM) to Ecto (Elixir/Phoenix's data mapping library which is decidedly not an ORM.) It's a million times better and I'm never going back.


Ecto is hands down my favorite part of the elixir ecosystem.

It’s so elegant and the Lego blocks (query, schema, change set, repo) can be mixed and matched in different ways.

I’ve even used schemas and change sets to validate API requests and trivially provide very nice, clean, and specific errors, while getting perfectly typed structs when things are validated.


same, I wish more libraries would go the ecto design route. my ecto queries map pretty close to 1:1 with the sql counterpart. no guessing what the output is going to look like. I spend my time debugging the query and not trying to get the orm to output the query I want.


Yes, same experience here. I felt (and still feel) that ActiveRecord is one of if not the best ORMs out there, but it was always a source of debugging and performance optimizations/pain, and the trick was basically taking hand-written SQL and trying to get ActiveRecord to generate that. After briefly moving to node.js full time I actually got very anti-ORM, although query building libraries all sucked too which left me unsure of what the best approach is.

Then I learned Ecto/Phoenix and this is truly the best way. Ecto is so close and translatable to raw SQL that there's little to no friction added by it, but it handles all the stuff you don't want to have to do by hand (like query building, parameterization, etc). Ecto is a real breath of fresh air and I find myself writing quick scripts that hit a database in Elixir just so I can use Ecto! I also love how easy Ecto makes it to model database tables that were originally defined by another language/framework or even by hand. Trying to do that with ActiveRecord or another ORM is usually a recipe for extreme pain, but with Ecto it's so easy.


Yeah, I hear some people say that they find Ecto.Query confusing, and I think it's because they never learned SQL properly. That's understandable because it's possible to use something like ActiveRecord for years without ever learning to write even a simple SQL query. But if you have a good grasp of SQL then Ecto.Query is trivial to learn - it's basically just SQL in Elixir syntax.


> it's basically just SQL in Elixir syntax.

it's SQL in Elixir syntax with a bunch of QoL improvements.

for one thing, I can separate my subqueries into separate variables:

    sub_q =
      from(l in Like)
      |> where([l], l.user_id == ^user_id)
      |> select([l], %{user_id: l.user_id, likes_count: count(l.id)})
      |> group_by([l], l.user_id)

    main_query =
      from(u in User)
      |> join(:left, [u], likes_count in ^subquery(sub_q),
        on: likes_count.user_id == u.id, as: :likes_count)
      |> select([u, likes_count: l], %{name: u.name, likes: l.likes_count})
      |> where([u], u.id == ^user_id)

    user = main_query |> Repo.one()

Being able to think directly in SQL lets you write optimal queries once you understand SQL. And IMHO this is much cleaner than the equivalent SQL would be to write. It also takes care of input sanitization and bindings.


Adding an off-the-shelf ORM layer creates so much more opacity and tech debt than writing queries that I don't understand why anyone would willingly put one into their stack. Sure, they're neat, although I don't even know if they save time. There's something very satisfying about well-crafted queries. And is it ever really well crafted if you can't tweak the queries to improve their execution plan? I've never had a client or boss who asked to use an ORM framework. I suspect it's something people think looks cool - treating SQL as OOP - until they run into a problem it can't solve.

[edit] for instance, I have a case where I use a few dozen custom queries on timers to trawl through massive live data and reduce it into a separate analytics DB. Using everything from window functions to cron timers to janky PHP code that just slams results from separate DBs together to provide relatively accurate real-time results. At the end from that drastically reduced set in the analytics DB... sure, I'm happy to let the client summarize whatever they want with Metabase. But those things just couldn't be done with an ORM, and why would I want to?


Yes, I would not put it just anywhere. But I have a few rules about ORMs:

- Proper DB design first. You should be able to remove the ORM and DB should still function as intended. This means application-side cascade operations or application-side inheritance is banned.

- No entities with magical collections pointing to each other. In other words, no n-to-n relations handled by the ORM layer. Create an in-between table, for god's sake. Otherwise it becomes incredibly confusing and barely maintainable.

- Prefer fetching data in a way that does not populate collections. In other words, fetch the most fine-grained entity and join related data. Best if you craft special record entities to fetch data into (easy with EF or Doctrine).

- Most ORMs allow you to inspect what kind of queries you create. Use it as query building tool. Inspect queries often, don't do insane join chains and other silly stuff.

I would use an ORM in one kind of app: where I would work with data that shares records that might need to be either inserted or updated, and there are several nesting levels of this kind of fun. You know: you need to either insert or update an entity; if it exists, you update it and then assign related entities to it; if it does not, you insert it and assign related entities to the newly created id. The ORM can easily deal with that, and on top of that it can do efficient batched queries, which would be really annoying and error-prone to hand-craft.
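
A hedged EF Core sketch of that insert-or-update-with-children case (Invoice, InvoiceLine, and ExternalId are hypothetical names): the change tracker works out which rows need INSERTs and which need UPDATEs, and SaveChanges batches the resulting statements.

    // Upsert a parent plus a child and let the change tracker sort it out.
    var invoice = await db.Invoices
        .Include(i => i.Lines)
        .FirstOrDefaultAsync(i => i.ExternalId == externalId);

    if (invoice is null)
    {
        invoice = new Invoice { ExternalId = externalId };
        db.Invoices.Add(invoice);                        // new parent -> INSERT
    }

    invoice.Total = newTotal;                            // tracked change -> UPDATE (or part of the INSERT)
    invoice.Lines.Add(new InvoiceLine { Description = "new line" });

    await db.SaveChangesAsync();                         // EF batches the generated statements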

If the app does not require this kind of database with these kinds of relations, I would not use an ORM.


> No entities with magical collections pointing to each other. In other words, no n-to-n relations handled by the ORM layer. Create an in-between table, for god's sake. Otherwise it becomes incredibly confusing and barely maintainable.

So, I have a database that looks like this. My method was to lay out the database myself, by hand, and then use EF's facility to generate EF code from an existing database. The bridge table was recognized as being nothing but the embodiment of a many-to-many relation and the EF code autogenerated the collections you don't like.

Is this a problem? If you do things the other way around, the ORM creates the same table and it's still there in the database. It isn't possible not to create the bridge table. Why is that case different?


This is more of a preference for the bridge to be visible in the application. Also, the bridge may seem simple at first, but it may gain associated data later, like created_at, order, etc.


> adding an off the shelf ORM layer creates so much more opacity and tech debt than writing queries I don't understand why anyone would willingly put one into their stack.

Simple: because if I don't, I'm going to spend the rest of my career explaining why I didn't to people extremely skeptical of that decision. Meanwhile even people like me tend to just shrug and quietly go "oh, an ORM? Well, that's the price of doing the job."

Also, ORMs are an endless source of well-paid jobs for people who actually learned relational algebra at some point in their lives, and that's not a compliment to ORMs.


ORM is not for writing analytics queries. It's for your CRUD operations. Something like Django Admin would be impossible without an ORM. You create tables for your business logic and customer support or whoever can just browse and populate them.


Wouldn't standard ANSI SQL's information_schema be sufficient to build such an interface? I'm struggling to see how an ORM is necessary.


I consider an ORM to be any SQL generating API, without which it would indeed be impossible to have a generic Admin class to make Admin views in Django.


Funny how ORM no longer means Object-Relational Mapping.


What should I call a program that generates SQL, executes it, and stores the result in a tuple, object, or whatever data structure in the programming language that I'm using? Does it magically stop being an ORM the second I use a tuple instead of a class instance, or is it now an ORM plus another nameless type of program? Are tuples also objects?


Whatever you want. It's your life.

Traditionally, though, SQL generation was known as query building. The query was executed via database engine or database driver, depending on the particulars. ORM, as the name originally implied, was the step that converted the relations into structured objects (and vice versa). So, yes, technically if you maintain your data as tuples end to end you are not utilizing ORM. Lastly, there once was what was known as the active record pattern that tried to combine all of these distinct features into some kind of unified feature set.

But we're in a new age. Tradition has gone out the window. Computing terms have no consistency to speak of, and not just when it comes to databases. Indeed, most people will call any kind of database-related code ORM these days. It's just funny that ORM no longer means object-relational mapping.


I think the core thing that ORMs do is create a 1:1 mapping between the data structures in the database (that are, or should be, optimised for storage) and the data structures in the application (that are, or should be, optimised for the application business logic).

ORMs create this false equivalence (and in this sense, so does Django's admin interface despite using tuples instead of classes). I can see the sense of this, vaguely, for an admin interface, but it's still a false equivalence.


I agree with you, but I do think there's a little fuzziness between full-blown ORM and a tuple-populating query builder in some cases. For example Ecto, which can have understanding of the table schema and populate a struct with the data. It's just a struct though, not an object. There's no functions or methods on it, it's basically just a tuple with a little more organization.


> It's just a struct though, not an object. There's no functions or methods on it

Object-relational mapping was originally coined in the Smalltalk world, so objects were front of mind, but it was really about type conversion. I am not sure that functions or methods are significant. It may be reasonable to say that a struct is an object, for all intents and purposes.

A pedant might say that what flimsy definition Kay did give for object-oriented programming was just a laundry list of Smalltalk features, meaning that Smalltalk is (probably) the only object-oriented language out there, and therefore ORM can only exist within the Smalltalk ecosystem. But I'm not sure tradition ever latched onto that, perhaps in large part because Kay didn't do a good job of articulating himself.


Thanks for the thoughts, that's a good point. It certainly makes sense that the "object" merely needs typed properties to qualify.


Most queries are pretty trivial, ORMs are great for 90% of queries. As long as you don't try to bend the ORM query system to do very complicated queries it is fine. Most (all?) ORMs allow raw queries as well so you can mix both.
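
For example, in EF Core the escape hatch can be as small as one hand-written query next to ordinary LINQ usage. This is only a hedged sketch: Users, Reports, and the table/column names are hypothetical, while FromSqlInterpolated is EF Core's real parameterized raw-SQL entry point.

    // Routine lookup through the ORM.
    var user = await db.Users.FindAsync(userId);

    // The one hairy query, hand-written but still parameterized by EF.
    var rows = await db.Reports
        .FromSqlInterpolated($"SELECT * FROM reports WHERE tenant_id = {tenantId}")
        .AsNoTracking()
        .ToListAsync();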

On top of that most ORMs have migrations, connection management, transaction management, schema management and type-generation built-in.

Some ORMs have inherently bad design choices though, like lazy loading or implicit transaction sharing between different parts of the code. Most modern ORMs don't really have that stuff anymore.


How do you map rows to objects? How do you insert into/update rows in your databases? These are the basic problems ORMs solve splendidly. They are for OLTP workloads, and have deliberate escape hatches to SQL (or some abstraction over it, like JPQL in java-land).

I just fail to see what else you would do, besides implementing a bug-ridden, half-ORM yourself.


Rows are tuples, not objects, and treated as such throughout the code. Only the needed data is selected in the form most appropriate to the task at hand, constructed in a hand-written SQL query, maybe even tailored to the DB/task specifics. Inserts/updates are also specific to the task, appropriately grouped, and also performed using plain SQL. Data pipelines are directly visible in the code, all DB accesses are explicit.


This. The right way to structure database access is a result type per tuple, not an object type per table.


ORMs don’t mandate mapping the whole table either, you are free to create multiple entities per table/view.


Maybe we need to use a different acronym than ORM, because to me the thing we can all agree we need is code that emits SQL. If you can't agree that projects need generated SQL because SQL is dog water for composition, then we can't really agree on anything.


Probably so: I can't agree with that particular inference.

1. Very often we need generated SQL because writing SQL for primitive CRUD operations is hell tedious and error-prone (as well as writing UI forms connected to these CRUD endpoints, so I prefer to generate them too).

2. Structured Query Language being very poorly structured is indeed a huge resource drain when developing and maintaining complex queries. PRQL and the like try to address this, but that's an entirely different level of abstraction.

3. Unfortunately, when efficiency matters we have to resort to writing hand-optimized SQL. And this usually happens exactly when we terribly need a well-composing query language.


I'd argue that "code that emits SQL" is never an inherent need but a possible development time-saver - we need code that emits SQL in those cases (and only those cases) where it saves a meaningful amount of development time compared to just writing the SQL.


Every RDBMS has multiple connector libraries that solve this for you, without requiring the overhead of a full ORM.


If the connector library solves this problem then the connector library is an ORM.


That is exactly where ORMs help. The problem is all of the other stuff that comes with them, when most people just need a simple mapper. Not something to build their SQL statements for them (which seems to be why most people pick it).

But that comes to the second problem. Most devs I meet seem to be deathly allergic to SQL. :)

On one project I had a dev come to me asking me to look at a bug in the thing. Having never seen that particular ORM before, I was able to diagnose what was wrong, because MS ORMs have the same issues over and over (going back to the 90s). You better read those docs! Because whatever they did in this stack will be in their next one when they abandon it in place 3 years from now.


> These are the basic problems ORMs solve splendidly.

Depends on the ORM.

I have noticed that typically, 'unit of work' type ORMs (EFCore and Hibernate/NHibernate as examples) prevent you from being both 'true to the ORM' and 'efficient'.

i.e. Hibernate and EFCore (pre 7 or 8.0ish) cannot do a 'single pass update'. You have to first pull the entities in, and it does a per-entity-id update statement.

> I just fail to see what else would you do, besides implementing a bug-ridden, half-ORM yourself.

Eh, you can do 'basic' active-record style builders on top of Dapper as an afternoon kata; if you keep the feature set simple, it shouldn't have bugs.
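
Roughly this shape, as a hedged sketch: User and the users table are hypothetical, while QuerySingleOrDefaultAsync/ExecuteAsync are Dapper's real extension methods.

    using System.Data;
    using System.Threading.Tasks;
    using Dapper;

    public sealed class User
    {
        public int Id { get; set; }
        public string Name { get; set; } = "";
    }

    // A tiny hand-rolled "active record-ish" store on top of Dapper.
    public sealed class UserStore
    {
        private readonly IDbConnection _conn;
        public UserStore(IDbConnection conn) => _conn = conn;

        public Task<User?> FindAsync(int id) =>
            _conn.QuerySingleOrDefaultAsync<User?>(
                "SELECT id, name FROM users WHERE id = @id", new { id });

        public Task<int> RenameAsync(int id, string name) =>
            _conn.ExecuteAsync(
                "UPDATE users SET name = @name WHERE id = @id", new { id, name });
    }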

That said, I prefer micro-ORMs that at most provide a DSL for the SQL layer. Fewer surprises and more concise code.


For me the biggest reason is automated database initialization and migration. After defining or updating the ORM model, I don't have to worry about manually CREATing and ALTERing tables as the model evolves.
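
As a hedged sketch of what that buys you at startup (app and AppDbContext are hypothetical ASP.NET Core-style names; Database.Migrate() is the real EF Core call that applies pending migrations):

    // Apply any pending migrations on startup; no hand-written CREATE/ALTER scripts.
    // Requires Microsoft.Extensions.DependencyInjection for GetRequiredService.
    using var scope = app.Services.CreateScope();
    var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
    db.Database.Migrate();   // creates the database if needed, then applies pending migrations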

This is compatible with the OC suggestion of using ORMs as a "fancy query builder" and nothing more, which I strongly support.


You always have to worry about your model changes if you run at any sort of scale. Some ORMs will get it right most of the time, but the few times they don't will really bite you in the ass down the line. Especially with the more “magical” ORMs like EF, where you might not necessarily know how it builds your tables unless you specifically designed them yourself.

This is where migrations also become sort of annoying. Because if you use them, then it is harder to fix the mistakes, since you can't just change your DB without using the ORM, or you'll typically break your migration stream or at least run into a lot of trouble with it.

And what is the plus side of having a code-first DB, really? You can fairly easily store those “alter table” changes as you go along and have the full history available in a very readable way to anyone, including people not using C#, Java, or Python.

Which is the other issue with ORMs. If you have multiple consumers of your data, then an ORM most likely won't consider that as it alters your “models”.

For a lot of projects this is a non-issue, especially at first. Then 10 years down the line, it becomes a full blown nightmare and you eventually stop using the ORM. After spending a lot of resources cleaning up your technical debt.


> And what is the plus side of having a code-first DB, really? You can fairly easily store those “alter table” changes as you go along and have the full history available in a very readable way to anyone, including people not using C#, Java, or Python.

The benefits should be obvious if you've used ORMs. They give you an object that represents your database data in code rather than in a table where you can't touch it. If you have code that brings data from a database into code, congratulations, you've implemented part of an ORM. Having the data model defined "in code" treats the code as first-class instead of the SQL, which makes sense from an ergonomics perspective: you will spend much more time with the code objects than you will with the SQL schemas. Either way, you will have two versions: a SQL version and a code version. You might as well get both from writing one.

If you can read alter table in SQL, you can probably read migrations.AddField in Python, and whatever the equivalent is in the other languages. I still am waiting with bated breath for the problems with much maligned (by some) ORMs to arrive.


The only area of development where ORMs haven't been the cause of at least some trouble in my career has been with relatively small and completely decoupled services. Even here I've had to replace countless ORMs with more efficient approaches as the service eventually needed to be built with C/C++. That being said, I don't think any of these should have been built without the ORM. The rewrite would have been almost as much of a hassle if there hadn't been an ORM after all.

I'm not really against ORMs as such. I'm not a fan of code-first databases for anything serious, but as far as CRUD operations go I don't see why you wouldn't use an ORM until it fails you, which it won't in most cases. And in those cases where it does... well, similar to what I said earlier, you just wouldn't have built it to scale from the beginning anyway, and if you had, and it turned out it didn't need to scale, then you probably wasted a lot of developer resources to do so.


I'm not sure if you're talking about creating and altering model tables or if you mean ORMs provide safety in case underlying tables are modified. I'd argue that well-built queries should be resistant to alteration of the underlying tables, and that views and functions and stored procedures already exist to both red flag breaking changes and also to combine and reduce whatever you need without relying on third party code in another language layer to do the lifting.


Doesn't it also mean that any non-trivial migration (e.g. which requires data transformation or which needs to be structured to minimize locking) has to be defined elsewhere, thus leaving you with two different sources for migrations, plus some (ad-hoc) means to coordinate the two?

(I would say that it is conceptually perverse for a client of a system to have authority over it. Specifically, for a database client to define its schema.)


Agree completely, as does most of the Go community :) Newbie gophers are regularly told to learn some SQL and stop trying to rebuild ActiveRecord in Go ;)

But in .Net, EF is still the most common way of accessing data (I have heard, because I stopped using it over a decade ago).


EF is the common way of saving data.


That doesn't really help you with EF because there's plenty of stuff shared at context level. So depending on the order of queries in the context the same query can return different data.

I hate EF and everything it stands for. :)


Yeah, in a web app, one context per request. In desktop app... I have never used EF there.


> use ORM is as a fancy query tool

As an alternative to query-by-example (QBE)?

https://en.wikipedia.org/wiki/Query_by_Example


Well, this month we had to debug an issue where EF was NOT populating fields on classes from the db, that it definitely should have been!

So it still seems flakey. I've never worked a single job that chose EF that didn't end up regretting it. Either from it being unreliable, migration hell or awful performance.

"It allows you to treat your database like an in-memory enumerable"

Then devs go and do exactly that and wonder why performance is so terrible...

I hate EF.


Which version?


We are currently on the latest.

We had an issue last week where we had an object like

    public class Foo
    {
        public List<Bar> Bars { get; set; }
    }
We'd query for some Foos, like:

    await _dbContext.Foos.ToListAsync();
and some amount of them would have Bars be an empty list where it should definitely be populated from the db. And it wasn't even consistent, sometimes it would populate thousands, sometimes it would populate a handful and then just stop populating Bars.

No errors, no exceptions, just empty lists where we'd expect data.

And so often we have to debug and see what SQL it's actually generating, then spend time trying to get it to generate reasonable SQL, when if we were using sprocs we could just write the damn SQL quicker.

Another issue we have is the __EFMigrationsHistory table.

Sometimes we will deploy and get a load of migration errors, as it tries to run migrations it's already run... So they all fail and then the API doesn't come back up... The fix? Turn it off and on again and it stops trying to re-run migrations it's already run!


Navigation properties are not loaded automatically, because they can be expensive. You need to use `.Include(foo => foo.Bars)` to tell EF to retrieve them.
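
For example, with the Foo/Bars shape from upthread, the eager-loading version looks like:

    // Eagerly load the related collection instead of relying on in-memory fix-up.
    var foos = await _dbContext.Foos
        .Include(foo => foo.Bars)
        .ToListAsync();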

EF tries to be smart and will fix up the property in memory if the referenced entities are returned in separate queries. But if those queries don't return all records in `Foo.Bars`, `Foo.Bars` will only be partially populated.

This can be confusing and is one of the reasons I almost never use navigation properties when working with EF.


We have those, and when I say inconsistent I mean inconsistent on the same query / exact same line of code on the same database.

e.g. stick a breakpoint, step over, see in the debugger that it was not populating everything it should. Then run it again, do the same and see different results. Exact same code, exact same db, different results.

5000 results back from the db, and anywhere from a handful to all 5000 would be fully and correctly populated.


If that happens with the correct `.Include()`, you really should raise an issue with EF, trying to reproduce it. If it's not a random mistake in your code, that's a really big deal.


Like your parent said, the same line of code will or won't populate the navigation property depending on whether EF is already tracking the entity that belongs there (generally because some other earlier query loaded it). You get different behavior depending on the state of the system; you can't look at "one line of code" in isolation unless that line of code includes every necessary step to protect itself against context sensitivity.


EF Core 8? Inconsistent behavior is not expected.

Assuming you haven't forgotten to add an .Include [0], please consider submitting an issue to https://github.com/dotnet/efcore

[0] https://learn.microsoft.com/en-us/ef/core/querying/related-d...


Are you saying that in previous versions inconsistent behaviour is the expected outcome?


I've been using Entity Framework for the last 5 years and have not encountered this issue, as long as I've got all my Includes specified correctly.

There is also the AutoIncludeAttribute that you can specify on entity fields directly to always include those fields for every query.

My main complaints with EF are that the scaffolding and migration commands for the CLI tool are nearly impossible to debug if they error during the run.

But when they run right, they save me a ton of time in managing schema changes. Honestly, I consider that part worth all the rest.

There can also be some difficulty getting queries right when there are cyclical references. Filtering "parent" entities based on "child" ones in a single query can also be difficult, and also can't be composed with Expression callbacks.

But in any difficult case, I can always fall back on ADO.NET with a manual query (there are also ways of injecting manual query bits into EF queries). Which is what we'd be doing without EF, so I don't get the complaints about EF "getting in the way".


Lazy loading was a mistake in EF. A lot of apps had awful performance due to lazy loading properties in a foreach loop creating N+1 queries to the database. It would be fine in dev with 50-100 rows and a localhost SQL and blow up in prod with 1000s of rows and a separate Azure SQL.
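
The shape of the problem, as a hedged sketch (Order/Customer are hypothetical names): one query for the list, plus one lazy query per iteration, versus a single eager query.

    // N+1: one query for the orders, then one lazy-load query per order.
    var orders = db.Orders.ToList();
    foreach (var order in orders)
    {
        Console.WriteLine(order.Customer.Name);    // each access triggers another query
    }

    // Explicit alternative: bring the related rows back in the initial query.
    var ordersWithCustomers = db.Orders
        .Include(o => o.Customer)
        .ToList();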

Also if you relied on lazy loading properties after the DbContext had been disposed (after the using() block) you were out of luck.

With old EF we would turn off lazy loading to make sure devs always got an exception if they hadn't used .Include() to bring the related entities back in their initial query. Querying the database should always be explicit, not lurking behind property getters.

Fortunately with EF core MS realized this and it’s off by default. EF with wise use of .Include and no lazy loading is a pretty good ORM!


I switched to Dapper a long time ago, with explicit SQL queries and really haven't looked back.


> [0] Entity Framework has moved on a lot since then, and apparently now can be trusted to lazily load data

To some degree. If you're using it for anything serious you're still going to help it along a lot. It's rather easy to do so, however, and I certainly wouldn't consider writing your own code as fast or easy as simply telling EF how you want it to do certain things.

I'm not an overall fan of EF. I especially dislike how its model builder does not share interoperability with other .Net libraries which also use it. I also don't really like the magic .Net does behind the scenes. EF as a whole has been one of the better ORMs for any language since .net core. I'd still personally much prefer something like Rust's Diesel, but whenever I have to work with C# I tend to also use EF.


You might want to try Linq2Db, it is much closer to Diesel in how it works (More SQL DSL with parameter+reader mapper, less Unit-of-work ORM).

FWIW, it can actually work 'on top' of an Existing EF Core context+mappings (i.e. if you have an existing project and want to migrate, or just need the better feature-set for a specific case.) or you can get pretty close to 'yolo by convention' depending on case. In general though it's a lot less ceremony to start messing around.


Diesel is not an ORM, but a typical Rust library...


The strapline seems to suggest it is an ORM (I've not used Diesel yet):

>Diesel: A safe, extensible ORM and Query Builder for Rust

https://github.com/diesel-rs/diesel


Even more important than the question of productivity is that this turns a joyous activity into a depressing probabilistic shitshow where you describe what you're trying to do and hope for the best. Instead of feeling engaged and challenged, you're just annoyed and frustrated. No thanks!


> this is where I think AI for coding is now. It gets things wrong enough that I have to manually check everything

This might be dependent on the programming language; some languages are way more popular and have way more questions on StackOverflow and Reddit and repos on GitHub, so the answers will be better.

When I use copilot for JS it's right 90% of the time.

And where it's 'wrong' it's usually just stuff it skipped over because it didn't have proper context.


>It gets things wrong enough that I have to manually check everything it does and correct it

Thing is, reading code is way faster than writing code


Huh, for me it's always the other way around.

When I'm not sure about someone's code, I have to double- or triple-check it to be sure that I understand it correctly and to verify that there are no hidden missed steps or side effects.


> Long analogy short, this is where I think AI for coding is now. It gets things wrong enough that I have to manually check everything it does and correct it, to the point where I might as well just do it myself in the first place.

Even if that were true, reading code is reasonably faster than typing it out and then reading it again to check it.


I'm left wondering what AI everyone is using. I can prompt Copilot and it gives me exactly what I need. Sure, if I barf out a lazy, half-baked prompt it yields a waste of time.

My problem is running into its limitations, mostly around resources. I have tried giving it larger tasks and it takes bloody forever.

"Given this unstructured data, create CSV output for all platforms, with each line containing the manual, and model, ignoring the text in parenthesis."

Works great except for God-awful performance and stopping halfway through. I had to break out each section and paste it into the prompt and let it work on small pieces. We need to get to the next level with this, especially for paying customers.

More concerning is that I see a clear pattern in smaller companies of hiring seniors and turning them loose with AI assistants instead of hiring junior devs. The prospect is attractive to nearly every stakeholder and the propensity to put off hiring "until next quarter" in light of this is a constant siren song. There is a lot of gravity pulling in this direction with the short-term thinking and distractions that are thoroughly soaked into the business world these days. Supposedly, one third of Gen Z (20-25 yrs old) are sitting at home, up from 22% in 1990.

I'm one of those seniors happily putting off hiring, but I find the situation and its wider impact on the future very unnerving.


Well, having AI transform some data into a certain CSV format is orders of magnitude simpler and more straightforward of a programming task than what I try to use it for.

A lot of the discrepancy between people's experiences is simply due to the fact that there's a massive range of programming complexity/difficulty that people can be trying to apply AI to. If your programming is mostly lower-complexity stuff, non-critical stuff, or simply defined stuff, it obviously works better.

I try to use AI when I get stuck on a hard problem/algorithm, hoping that it can provide an answer/solution to unblock me. But when I'm stuck the problem I'm facing is so complicated that there's no chance at all that AI is actually going to be able to help me with it. I see absolutely no point in using AI when I already know how to solve a problem, I just solve it. I only turn to it when I need help, and it can never help me.


>>It gets things wrong enough that I have to manually check everything it does and correct it, to the point where I might as well just do it myself in the first place.

I have had personal experience with this, and seen others telling me as well. These AI things often suggest wrong code, or code with bugs. If you begin your work by assuming the AI is suggesting the correct code, you can go hours, or even days, debugging things in the wrong place. Secondly, when you do arrive at a place where you find the bug in the AI-generated code, it can't seem to fix or even modify it, because it misses the context in which it generated that code in the first place. Thirdly, the AI can interpret your questions in a way you didn't mean.

As of now AI generated code is not for serious work of any kind.

My guess is that a whole new paradigm of programming is needed, where you will more or less talk to the AI in a programming language itself, somewhat like Lisp. I mean a proper programming language at a very abstract level, which has only one possible meaning, and hence is not subject to interpretation.


"My guess a whole new paradigm of programming is needed, where you will more or less talk to AI in a programming language itself, some what like lisp. I mean a proper programming language at a very abstract level, which can be interpreted in only one possible meaning, and hence not subject to interpretation."

Code generation is quite old though, and also quite common outside the Lisp family. When doing non-trivial systems development in Java you tend to use it a lot, especially with XML as an intermediary, abstracted language.


> I was especially glad of Lazy Loading, where I didn't have to load data from the database into my memory structures; the system would do that automatically.

oh god, I have used Java with Hibernate a lot and once I read "Lazy Loading" I didn't even need to finish reading the post.


I've always found that it's easier to code something from scratch than to review and fix someone else's code, and that's been my experience with Copilot up to this point. I'm not sure if it's better than just writing code from scratch productivity-wise, but it makes coding kind of unpleasant for me.

One thing I've found about Copilot is that it introduces me to novel ways to solve problems and more obscure language features. It makes me a better coder because I'm constantly learning. But do I want to be spending my time learning or do I want to make that deadline that's coming up?


I feel like pre-November “dev day”, 90% of the time I could trust GPT-4 output to just work, but since the downgrades there's been an increase in the number of times I've copied and pasted, then seen the error and realized there's unfinished placeholder stuff, parts straight up not done, or important previous code removed.

It just means I now spend a lot of time rewriting it, which I could have just done in the first place, but now I've wasted time asking GPT too.


A key difference between database mapping and interactive AI tools is the position of the user.

I would not be enthusiastic about a system where I receive database query results for review, before delivering them to an end user somewhere on this planet. However, I am more than happy to get some extra help in communicating code from my brain to a compiler.


Object-Relational Mappers purport to mitigate the impedance mismatch between object-oriented and relational data structures.

For your analogy to hold, what is the impedance mismatch between programming and Copilot?


From what I remember, lazy loading wasn't part of EF for a long while and even longer for navigational properties. I am not even sure if it was part of EF4.


> this is where I think AI for coding is now. It gets things wrong enough that I have to manually check everything

The good way to do this is to write good unit tests for the code.

Which we should be doing anyway!


> does exactly what it says it does, with no magic

I strongly believe that software we develop should feel like magic to the users

The tools that we use to build them should not


> and started learning Go. Which does exactly what it says it does, with no magic

Too little abstraction is just as bad as too much.


The amount of abstraction available in Go is just about right. It gives you higher level constructs, while still being reasonably straightforward to predict memory and CPU performance and behavior.


Assembly is too little abstraction. Go is not.


It is. Oh, and also, Go managed to screw up even the assembly, inventing a supposedly portable dialect that uses ugly bits of AT&T syntax and custom operator precedence, and in practice is non-portable, forcing you to mix Go-only mnemonics (which might even collide with opcode names on certain platforms), supported opcodes of the target platform, and BYTE literals for opcodes it doesn't support, making a lot of your prior (N)ASM knowledge useless. Isn't that magnificent?

Gee, I wonder if there's a better way to do so that is not such a lazy job. But doing it properly, like .NET does, is supposedly too much effort!


It would be great if C# could be discussed as a language on its merits, but Microsoft has been a terrible steward. It's too bad.


So you are saying it's even worse than suing anyone who uses the language, like a certain Java-related company, or laying off people from the core language team, like a certain Dart-related company?


And it’s by far the best one now that all the issues that made old EF unsound were solved in EF Core.

Good developers know to appreciate that and wouldn't want to touch the Go ecosystem afterwards with a 10-foot pole.



