OO is not Computing, Newer is not More modern (ormomg.blogspot.se)



I would like to meet one of these fantastical Java-only OO proponents who are often brought up in posts like this, claiming that Java and OO are the be-all and end-all of programming. To be honest, I never have.

Some people seem to have a disposition to modeling problems with objects, but I fear I'm too young to have ever met any Java zealots. To me they seem more like straw men invented to help set up posts like this one.


I went to university starting in 2004 for an electrical engineering degree, and our programming classes were basically structured around OO Java as a panacea.

Later in life, listening to the Berkeley SICP lectures made me realize what a joke the whole thing was.

I don't think Java OO zealots are a straw man. I think they're very common, and many of us are living in a happy Hacker News and startup bubble, where people we meet are surprisingly competent.


SICP/Scheme is usually taught as a first language (to blow their minds) for CS students; then eventually they get taught Java anyway at most universities...


This is not very accurate. SICP in [yesterday Scheme, today Python] is taught first to MIT students (where SICP comes from). Some universities follow MIT, but most do not; I don't think there are many schools actually using SICP at the high end, maybe some lower-end schools trying to emulate MIT in order to capture their greatness (which rarely works, from what I hear).

Example: U of Washington CSE 142 (the entry-level course for one of America's better programs) starts with Java and continues it through 143.


See the other reply. I meant to say that most universities that USE Scheme use it as an introduction. Not that most universities use Scheme...

As you said, a lot of universities use Java or C straight away.


I don't even think this is true. There are many universities with PLT faculty (DrScheme, Racket) where Scheme is used in some courses but not the intro ones.

At any rate, I don't see how this assertion is even useful, even if it were true.


http://www.cs.berkeley.edu/~bh/ss-toc2.html - an introductory course.

MIT - introductory course - back when it wasn't Python.

Can you give any examples of where it was an advanced course?


Not in my experience. C++ is taught at my school, and a mix of C++ and Java is used at most (all?) of the other schools in my state as the intro language. Even MIT, the source of SICP/Scheme, is using Python for its intro language. It's only much later that you're exposed to other languages (at my school, Java, ML, and Prolog are explicitly covered).


Maybe I should clarify: most schools that USE SICP/Scheme use it as the introductory language. Most schools don't use it at all, however. MIT only recently moved to Python; the introductory course used to be Scheme-based before it was converted to Python. I believe they have an optional Scheme course now.


ML was taught at my school for first-years, but they changed it to Java while I was there, much to some professors' (rightful IMO) consternation.


When I was at university (early '00s) the default opinion was that OO was the way to go, and everything else was obsolete or esoteric.

I thought that way then, too. I remember complaining, along with my friends, to a teacher that he was teaching us algorithms and data structures in procedural Turbo Pascal when everybody knew it was obsolete and we should be doing everything in C++ or Java. Yeah, I was clueless, and that particular prof was right.

But most teachers were OO fundamentalists, and it was assumed everything should be OO. One stupid exercise: design a system to keep data about students and teachers at a university, using OO.

And the answer was obviously a Student and a Teacher class extending a Person class. No matter that sometimes people study at two faculties, sometimes a Student is also a Teacher, or becomes a Teacher after he graduates. And when we could use a relational DB, it was always with an ORM, because objects are needed in any modern system.


"Student is also a Teacher, or becomes Teacher after he graduates" - That's the result of not using composition...

"No matter that sometimes people are studying on 2 facoulties" -- Why doesn't this work? Just use a collection, several students inside a collection object...

"And when we could use relational DB, it was always with ORM, because objects are needed in any modern system." - Because you manually want to populate objects with tons of boilerplate?

Just sounds like results of bad design in general, not OO...
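
Roughly what I mean by composition - a hedged sketch, with illustrative class names (not the ones from the exercise):

    // A Person holds roles by composition instead of subclassing,
    // so the same person can be a Student twice and a Teacher at once.
    import java.util.ArrayList;
    import java.util.List;

    interface Role {}

    class Student implements Role {
        final String faculty;
        Student(String faculty) { this.faculty = faculty; }
    }

    class Teacher implements Role {
        final String department;
        Teacher(String department) { this.department = department; }
    }

    class Person {
        final String name;
        private final List<Role> roles = new ArrayList<Role>();

        Person(String name) { this.name = name; }
        void addRole(Role role) { roles.add(role); }
    }

    // person.addRole(new Student("Math"));
    // person.addRole(new Student("Physics"));
    // person.addRole(new Teacher("CS"));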


"That's the result of not using composition..."

Historically, OO's acceptance of composition over inheritance only came after a decade of experience showed that inheritance doesn't work as promised (or two decades, depending on your accounting). Composition as an answer came later.

"Just use a collection, several students inside a collection object..."

So, embed an ad-hoc, informally-specified, bug-ridden, slow implementation of half of a relational database in OO and its problems go away?

I've found some places where OO works (the traditional UI widgets still tend to work pretty well), but it isn't even close to a panacea and if you compare the initial promises to what it can actually deliver, history has not treated it that well.


"I've found some places where OO works (the traditional UI widgets still tend to work pretty well)"

And that's where OO initially gained traction. At the time, UI was a major pain. OO was a good fit. UI elements have state, and can do things. Bundling them together made a lot of sense. Inheritance also worked well, as all the objects were purely artificial creations. They were precisely how one chose to define them in code.

Problem was, the rest of the world is not like UI widgets. The main difference being, it has a reality external to the code. State and behavior are not so obviously bundled into types and instances. Verbs are sometimes more important than nouns.

But OO had been sooo successful w/ UI widgets, that many people hoped to repeat that success more widely. OO also provided an easy way to dumb things down--"everything is an object" obscures all the hard questions--just start with objects, and keep writing code until the thing works.


"But OO had been sooo successful w/ UI widgets, that many people hoped to repeat that success more widely. OO also provided an easy way to dumb things down--"everything is an object" obscures all the hard questions--just start with objects, and keep writing code until the thing works."

That's a silly argument. It could be applied to almost any programming language. Just start with functions...

The hard choices are what data structures and algorithms to use, and OO does not get rid of that.


"So, embed an ad-hoc, informally-specified, bug-ridden, slow implementation of half of a relational database in OO and its problems go away?"

When did I ever say that?

If you wanted to search the database, say for people with a certain name belonging to a certain faculty, you'd have a separate method that indirectly runs an SQL statement or ORM statement for you... perhaps belonging to the faculty object, with the method taking the name as the param.

You could even have a single generic search method belonging to the faculty object that allows a whole range of relational features (sets etc.) to be used in the search.

If you wanted to search all students, you'd have a generic search method on the student object?

That's the most obvious design ever? Unless you wanted to make it difficult?
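
Something like this, as a rough Hibernate-style sketch (the entity names are illustrative, not from the original exercise):

    // A search method that delegates to HQL rather than looping
    // over objects in memory; the database does the relational work.
    import java.util.List;
    import org.hibernate.Session;

    class FacultySearch {
        private final Session session;

        FacultySearch(Session session) { this.session = session; }

        @SuppressWarnings("unchecked")
        List<Student> findStudents(String facultyName, String studentName) {
            return session.createQuery(
                    "select s from Student s join s.faculties f "
                    + "where f.name = :faculty and s.name = :name")
                .setParameter("faculty", facultyName)
                .setParameter("name", studentName)
                .list();
        }
    }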


"When did I ever say that?"

You answered the question as if that really was the only many-to-many relationship in the system. I observe that it is only an exemplar, and that in practice any real system is shot through with similar relationships, and the answer isn't to implement each one of them in OO.

If you're using an ORM, btw, you're using a relational system under the hood, which is my point. You seemed to be implying pure OO had answers to that problem, and it really doesn't unless it imports a foreign paradigm like a relational database. If that wasn't your intent, then we're in agreement.


Yes, of course ORMs use relational systems under the hood; everyone knows that. Relational databases are easy to use, but that's not the problem. The problem comes from interfacing the two different systems together. ORMs let you remove boilerplate, such as manually populating objects or connection code, while still accessing relational features. They don't force you to use PURE OO to access data. If you wanted that, you'd use an object database... they've existed for years.

ORMs allow you to use those relational features (HQL, LINQ), just without the boilerplate of manually populating objects, connecting to the database, or whatever...

Nearly every search you perform via an ORM uses relational operations...


If you're looking to replace a relational database, then you're looking for orthogonal persistence.


There is nothing informal about using LINQ.

    var y = x.Where(p => p.Name == "Bob");

PS: You can also use the query (select) syntax if you want.


> "No matter that sometimes people are studying on 2 facoulties" Why doesn't this work? Just use a collection, several students inside a collection object...

Student <-> Facoulty is many to many relation. I can make Faculties keeping arrays of pointers to Students, and Students keeping arrays of pointers to facoulties, and update both arrays accordingly when something changes.

Then I want to save the information, that Student X finished Facoulty Y, but still studies at Facoulty Z.

So pointers from student to facoulties won't suffice. I need to have a place to keep this data. So I introduce Relation object, like I should have from the start. And I end the Student/Teacher maddness, and make Person<-Relation->Facoulty. Much better.
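
Roughly, as a sketch (the names are just illustrative):

    // Model the relationship itself as an object so it can carry
    // its own state, like status and dates.
    import java.time.LocalDate;

    class Person { final String name; Person(String n) { name = n; } }
    class Faculty { final String name; Faculty(String n) { name = n; } }

    enum EnrollmentStatus { ACTIVE, FINISHED }

    class Enrollment {                    // the "Relation" object
        final Person person;
        final Faculty faculty;
        EnrollmentStatus status = EnrollmentStatus.ACTIVE;
        LocalDate since = LocalDate.now();

        Enrollment(Person person, Faculty faculty) {
            this.person = person;
            this.faculty = faculty;
        }
    }

    // "Finished Faculty Y but still studies at Faculty Z" is just
    // two Enrollment objects with different statuses.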

Then I want to be able to filter students according to different conditions. And I end up writing nested loops for every possible condition and join I need. I refactor. I generalize. After a few months or years I've implemented a nonstandard relational database.

> Because you manually want to populate objects with tons of boilerplate?

Because I don't want to have objects at all when a relational DB is the right tool.


What? You don't keep the objects permanently in memory. You call up the faculty object via the ORM, perform your action, such as deleting a student, then let go of the faculty object (it goes out of scope in the method). The next time you call up the faculty object via the ORM, everything's updated.

"Then I want to be able to filter students according to different conditions. And I end up writing nested loops for every possible condition and join I need. I refactor. I generalize. After a few months or years I've implemented nonstandard relational database."

Only a moron would write loops to search using an ORM. You implement a generic search method on the object, which executes the relevant SQL statement(Via HQL or LINQ) and returns the result. ORMS do this stuff automatically, no need to write loops to search...

I mean how badly have you been abusing ORMS?

You've been using ORMS wrong. ORMS provide the whole range relational features. They just map the results of the SQL statement into objects.

Relational Databases are good keeping data integrity correct, not so good at modelling computations. So you use the best tool for the job, relational database for storing data. Then extract that data from the database, and then use whatever (functional, OO, logic languages) to perform operations on that data.

Relational is not a way to perform computations on data, other than searching and using the various relational search features. Try implementing a machine learning algo on data using solely SQL...


Maybe I wasn't clear. I do know how I'm supposed to use an ORM. I've worked with Hibernate and EJB3 for a few years already, and no, our system doesn't work the way I described above :)

I described how I was taught OO. First without using databases, and with the wrong abstractions (nouns instead of relations, when the system was all about relations). I think it's the default experience, because I've seen many such examples of bad OO design.

Then we were taught databases, but using ORMs, as if it were the only possible way, because objects are required for a system to be modern. And I don't like that either, because IMHO (OO + ORM + relational DB) doesn't add any value over (procedural programming with a relational DB), but adds complexity and is less transparent.


It's a personal choice; I think OO makes modelling large systems easier. At that point you can use an ORM.

If you prefer procedural code, then obviously ORMs are no use to you...

I mean, I love various programming paradigms. I like Haskell, but it can make things difficult. For example, programming a basic game in Haskell is far from simple; you normally have to introduce reactive programming to make it sane.

I find OO makes large systems easier.

In my opinion, the amount of boilerplate required for JDBC makes an ORM a must in Java...


You should get interviewed for enterprise jobs sometime. It's all about objects, patterns, and lots of factories... sometimes it makes me wonder if we intentionally make stuff complex to find meaning in what we do.


There are a lot of Java-only OO-proponents in the corporate world.


Agreed with parent. Come take a stroll in my office; I work in a corporate bank. Most of my coworkers learned "real" programming in the '90s, when Java was the hot new thing on the block. To most of them, the whole programming-language ecosystem can be summarized as an endless "C++ vs Java vs C#" flamewar.

Just yesterday a colleague of mine looked at me with shock and awe when I told him I enjoy coding in C. "But... it's not OO!" he mumbled, puzzled.

Pushing the caricature a little further, Python and Ruby are toy languages that only hipsters use, Perl is only to be used by the sysadmin down the hall, and Lisp is the biggest failed experiment programmers ever came up with.

Pointy-haired bosses encourage this kind of thinking: they would never get in trouble for choosing a language from among the big 3 (C++/C#/Java).

I once heard someone say: "I don't understand why there are still new languages coming out every day. Shouldn't everyone just use Java?"


> can be summarized as an endless "C++ vs Java vs C#" flamewar.

As opposed to an endless "Ruby vs Python vs PHP" flamewar?

Pushing the caricature a little further, C++ and Java are enterprise languages that only old, out-of-touch programmers use.

Hipster Apple-using kids encourage this kind of thinking: They would never be un-cool for choosing a language among Ruby, Python, or Lisp.

It's far too easy to become that type of programmer. Choice of language has little to do with it. No doubt you understand this, just making the point clear.


You sir made my day, thank you very much!


There is an equally large number of people who hate Java for no good reason. It's all the effect of "once a fanboy, always a fanboy".


While I agree that some people hate Java because of hearsay, many people do have good reasons -- reasons that often have little to do with Java as a language and much more to do with the Java ecosystem. Oh, the horrors I've seen in Java codebases!


I was just talking with a coworker about this. We agreed that much of our existing C++ code could have been written in Java without harm (we're doing compute-heavy finance, but not fluid simulation), and that would let us skip a whole lot of infrastructure pain. The only uncertainty was how easy it would be to make a Java-based Excel plug-in, but Java on Windows seemed a smaller evil than C# on Linux.

F# is interesting, Scala is interesting, and we both wondered what it would be like to try OCaml for real. But Java's just pretty solid. It's not exciting, but it's quality engineering. It makes some questionable trade-offs, but it seems to provoke more hate than it deserves.

Of course, we then rolled our eyes about all the crazy Java stuff we've seen, with XML-everywhere, FactoryFactories, and gratuitous IoC.

That started me wondering: what is it about Java that encourages the craziness? My theory is that it's a combination of garbage collection and lack of easy blocks/anonymous functions. Garbage collection lets you get away with things that you'd never try in C++, and lack of blocks means you're stuck with a pretty limited API, so you end up inventing extra-language channels for information.

Note: I intentionally didn't demand closures. I think the original blocks in Smalltalk weren't closures, but they were still useful for things like "monkeys select: [:m | m throwsPoop]". I'm not sure if this is really enough; closures make some things a lot easier.


"That started me wondering: what is it about Java that encourages the craziness?"

It doesn't hurt when you're trying to rack up a lot of billable errors and keep clients dependent on you.


I think the lack of higher-order functions is a huge part. If you don't have built-in higher-order functions, you have to add one interface per higher-order function signature and one class per distinct call. From there on, you have to manage all those extra classes and interfaces somehow, which easily spawns more classes you wouldn't need otherwise.

These resulting structures then end up being factories (you could have passed the object-creation function), command objects (you could have passed a function to call instead of a command object to pass back), and so on.
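
A small sketch of the pattern in pre-lambda Java (the names are hypothetical):

    import java.util.ArrayList;
    import java.util.List;

    // One interface per higher-order function signature...
    interface Predicate<T> {
        boolean test(T t);
    }

    class Filtering {
        static <T> List<T> filter(List<T> items, Predicate<T> p) {
            List<T> out = new ArrayList<T>();
            for (T item : items) {
                if (p.test(item)) {
                    out.add(item);
                }
            }
            return out;
        }
    }

    // ...and one anonymous class per call site:
    // List<String> shortNames = Filtering.filter(names,
    //     new Predicate<String>() {
    //         public boolean test(String s) { return s.length() < 5; }
    //     });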


You can see horrors in any language.

The problem is a lack of skill, not the language.


Was my post too long for you? You must have missed this part: "reasons that often have little to do with Java as a language".


I saw it, but the remark about the ecosystem still reflects on the language, hence my post.


My point isn't that there's something intrinsic to Java that leads to nightmarish codebases, but there must be something associated with the Java brand name that leads to it. I haven't seen such a large percentage of this lack of skill affiliated with any other language (granted, I haven't spent much time working with, e.g., VB or PHP codebases).

I don't know what it is about Java that seems to encourage the mess, but there is something.


You can see this in any enterprise code base.

I have seen lots of awful code when you combine software design done by out-of-touch architects with lousy enterprise coders, regardless of the chosen language.


Go to LinkedIn, search for Java jobs around you, find the ones that look the most boring, and apply. Enjoy the perplexity when you offer a non-OO solution to a problem.


Depends on how old you are, perhaps?

I remember being told that my way would work but wasn't OO enough, and because of that it was wrong - i.e., I hadn't done well on that portion of the interview.


The older you are, the more likely it is that you were exposed to other ideas before - and that can help you contextualize object orientation. I learned OOP in the late '80s with Smalltalk/V and used Actor (an Algol-ish Smalltalk workalike), but the first computer I ever programmed was a Sinclair ZX-81 clone (http://www.old-computers.com/museum/computer.asp?c=910&s...), quickly moving on to an Apple II clone (http://www.homecomputer.de/pages/f_info.html?cce_Exato_Pro.h...).

So, before my first contact with OO, I had learned BASIC, 6502 assembly, FORTRAN, GraFORTH, C (on QNX!), some Pascal (Turbo on CP/M), and a little bit of Lisp (which I ran away from). Dijkstra would consider me hopeless.


It would be interesting to know what non-OO means, precisely. I am reading SICP right now and am a bit confused: functional programming is not newer, nor more powerful. It just seems easier to debug.


I once saw a "Library" class with lots of public static methods. That's as non-OO as it gets when you're writing Java code.

The worst part about it is that it actually made sense. You don't want your Order class to worry about localizing dates, and you don't want to clutter your glue code with Java's cruel date/time system either. You also don't want to make yet another subclass of a date/time and glue it to your ORM.
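
Something like this hypothetical sketch (the names are mine, not from that codebase):

    import java.text.DateFormat;
    import java.util.Date;
    import java.util.Locale;

    // No state, no instances: just static helpers. Non-OO, but it
    // keeps date localization out of the Order class and the glue code.
    final class Library {
        private Library() {}

        static String localizeDate(Date date, Locale locale) {
            return DateFormat.getDateInstance(DateFormat.MEDIUM, locale)
                             .format(date);
        }
    }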


There are a lot of people who think like this. It does seem to be more enterprisey people generally who are working with large, older codebases. I'm not sure how many people would design a product from the ground up using only Java today.


Maybe people who don't care how trendy the language is? There are thousands of well-tested, well-maintained libraries for Java, mainly because of the funding from the corporate world.

What was the first API for Amazon AWS in? Java. Nearly everything has a Java API because it is so well supported.

Java developers don't get off on programming-language one-upmanship. Some people like being boring...

SAP? Java APIs. Oracle? Java APIs. Google stuff? Java APIs. Android? Java APIs.

Nearly every piece of commercial software has Java APIs.

I remember having to use a third-party Django PayPal plugin and it was half broken (IPN callbacks); that would be unlikely with Java. I ended up faffing about in its source code to get it working.


Good point. Sometimes I ask myself why everyone thinks Ruby gems are so amazing. I've had all kinds of awesome Java libs for years that work without much problem.

Maven obviously isn't as nice as the way Ruby handles gems, but to me it feels more structured.


As far as I know, you could use those APIs with any JVM language, not just Java. There's Clojure, Scala, Groovy, Jython, JRuby and probably a few others.


True; but for the random programmer off the street, these languages impose an additional cognitive load. For a given Java library, most of the example code and documentation will be geared to Java itself, and translating that to "another JVM language" isn't always trivial.

Especially since most programmers out there are not polyglots, work in big companies, and may not even be "programmers" but just "engineers/accountants/business people who program a bit".


Many people do.

What is the business value of rewriting the code in the flavor-of-the-month language?

In what sense does it improve profit?


I've met a lot. Java, .NET, C++ guys.

Sometimes it's hard to convince them that even their particular OO flavor is not the end of the road.

C++ guys think that OO in C++ is the best, Java guys think that extension methods ruin OO in C#, and all of them look with horror at Python. (Where are my private and protected??)


So what you're saying is that you are too young to have a real opinion about the article because you don't have enough experience. Instead you invent a strawman to attack some other alleged strawmen.


I was a total Java zealot from about '95 to about '00 when I was pretty much your standard architecture astronaut in a startup environment (back when Java was "cool").

Totally reformed now - I hope.


I work with one; it isn't hard to get him to start the "this isn't OO, it is complete and utter crap" routine. To be fair to him, though, we just inherited a large code base of low quality. He turns out good code, but is mostly a 501 programmer.


Almost all of the observations you read on programmer blogs are summarized in this academic piece: http://www.theonion.com/articles/sociologist-considers-own-b... [Aside: I had a faint memory of that article, having only read it when it first came out, but managed to find it via a domain-filtered search for "cognac" and "trend", the only two words I could remember from it. Thought that memory cluster was humorous.]

Usually it's someone responding to some personal issue with a public post. Perhaps they argued with a peer.

I also found it a serious mental bender to see OO and Java called "new" relative to "old". In most programming circles Java is the new COBOL, considered anything but new. Often it is a legacy language.


Strangely, Java is a legacy language, but also the lingua franca of the largest, most prestigious tech companies.


Such as? It certainly isn't at Microsoft. Nor at Apple. Nor at Amazon, or Facebook, or LinkedIn, or Twitter. Android made the big mistake of throwing in with Java originally (the project originated as a decoupled version of J2ME, as such was the thing at the time), and Google has a preponderance of Javaheads internally, perhaps explaining their continued invention of new languages.

Oracle likes Java, but that's because their products are largely code produced over a decade ago.



None of those hold Java as the "Lingua Franca" of the respective companies. Even remotely. No one doubts that you can find Java somewhere in any reasonably sized company, or as a largely inconsequential basis for a product they use (such as HBase), but only extreme hope can lead someone to conclude that makes it a principal technology.


If you cared to read the given links, or even searched a bit yourself, you'd see that, for example, Twitter has replaced its original Ruby architecture with Java.


As far as I know, Java is the lingua franca of Amazon and Google. Facebook uses some, but it's not their standard. LinkedIn and Twitter each use some Java alongside their Scala. Only Apple and Microsoft exclusively use C++ or made their own languages. Google also uses a lot of C++, but they're predominantly on Java.


>I would like to meet one of these fantastical Java-only OO proponents who are often brought up in posts like this, claiming that Java and OO are the be-all and end-all of programming. To be honest, I never have.

Strange, because I have met many of them.

As well as many VB jockeys who would look at you with a blank stare if you mentioned either OO or functional programming.

Maybe you should work in bad companies more ;-)


"Lots of so called object oriented languages have setters and when you have an setter on an object you turned it back into a data structure."

That may be because data structures is what we actually need. Data structures largely determine the computational complexity of operations and I think it makes little sense to act as if we could always think in terms of abstract responsibilities, interfaces, encapsulation, message passing, etc.

These things are useful organizational principles for coping with large systems, but at the core of most large systems is some algorithmic transformation of one data structure into another.


I think that the point here is that while data structures are objects, not every object should act like a data structure.


Keep in mind that when Alan Kay says "object" he means something different from programmers-at-large. To the general population of programmers, an object is just a data structure that's been bundled with some behavior. To Kay, an object is a computational entity that can only send/receive messages (what Carl Hewitt called an actor even before Smalltalk was on the scene). In Kay's mind, objects do not expose state the way they commonly do in the mainstream.


That's a much better way of putting it than the original quote.


Most of object-oriented programming to me feels like nothing more than trying to decide the best place to put something. As time goes on, though, typically by the third iteration, there is just no good place to put this new thing. A redesign would be best, but I only have one week to finish this new feature.

So a project (well, at least in my experience) progresses like this: "wonderful architecture" > "glad I designed this project well, it's paying dividends!" > "hmmm, I never considered this paradigm before" > "new guy joins team" > "argh, this code is crap"

Object oriented programming can be really elegant... but I can see why people hate on it.


I think you summed it up well. People hate object-oriented code because it has an extra startup burden. It's hard to just dive in and start fiddling with the code, because there are many project-specific abstractions you need to understand beforehand. The new guy on the project has to spend extra time to "grok" the idioms of the project. I personally don't see this as a bad thing.

The fact is, abstraction is key to programming. People who have a hard time managing extra abstractions in their head probably aren't nearly as good a programmer as they think.


On the other hand, people complaining about this don't realize it is very much the same in any other language.

Go ahead and just fiddle around in that million-LOC C codebase. Or that million-LOC Java codebase. (And ignore for a second that the C codebase is a large compiler and the Java codebase is an elaborate hello world.) Or a hypothetical million-line Haskell codebase.

In C you have to understand modules, functions working on structures, and functions controlling the architecture. In Java you have to understand class hierarchies and the classes or functions managing control flow. In Haskell you have to understand abstract data types, type classes, monads, and the functions managing control flow.


OO gets a bad rap; it is incredibly useful for applying ontology to program structure. Yes, OOP in Java isn't ideal, but if you look at purer OOP languages like Smalltalk or Self, its virtues really do begin to shine through.

FP is great for expressing computations, but often we are just gluing nominal entities together, and there OOP excels over FP. I like to have a few FP options in my languages (C#, LINQ), but my mind is firmly rooted in objects when it comes to the overall macro design of a program.


OO is great for systems where you have real objects and only care about their current state. In effect: simulations, games, that sort of thing.

In most systems, changes over time, archival data, and relations between objects are important. And with OO it's ugly to keep them. You end up implementing a relational DB backed by object collections. And your nice objects, instead of representing nouns (Employee, etc.), start representing relations (Employment, etc.).

So people use a relational DB with ORMs and load everything from the database into objects only to show a window/edit data, call one or two methods on these objects, and save the data back into the database. All this ceremony to write employee.giveRaise(raise) instead of giveRaise(employee, raise).


And yet this problem kind of goes away in a key-value or graph database. I wonder if the ORM mismatch is often not one of objects getting in the way, but one of using relational in the first place. At any rate, how you manage your objects (for things like search) is ideally orthogonal to the objects themselves, though this is difficult to accomplish in practice.

I use objects in reactive (non-persistent/DB) systems all the time. Objects aren't the problem, but you do have to manage your dependencies dynamically by tracing, dirtying, and cleaning. Objects actually excel at encapsulating and managing state; it is much harder to do this with functions (see all the work on functional reactive programming).


Hm, I've never used a key-value/graph datastore, and it seems I need to. Is there something like selects with joins for graph nodes? How do you represent changes in objects over time?


So, when Alan Kay implies Java damages the brain, he is quoted. When I say it, I get downvoted. Do I see a double standard here?

BTW, I learned to program on a Texas Instruments calculator (TI-55). What symptoms should I be presenting? ;-)


RSI, perhaps?


Any age-related condition would be a fair guess ;-)


Modernism (Bauhaus/De Stijl) is about 100 years old.

Why is it so hard to understand that there is an implicit set of limitations based on the tools (and/or materials) we use in our projects?

Think skyscraper... now think wooden skyscraper.


Objects are a poor man's closure. Closures are a poor man's object. Objects are a poor man's first-class module, and so are closures. First-class modules are a poor man's type-class. Type-classes are a poor man's first-class module.

Hail, Torat Exists, the holy teachings of the existential type!


That's why I like C++ so much. It doesn't force OO down your throat. It lets you program using whatever style is best suited for the job.


I also take or leave C++ language features as required, and I wish more people shared your point of view.

Among the C++ crowd (particularly over at Stack Overflow) there's this infuriating attitude that any code that doesn't use std::string, std::vector, or boost::magic_super_auto_ptr is language sacrilege, and the developer should be burned with fire, because it's not C++, it's C, and malloc is deprecated, and arrays are deprecated, and, and...


That's because without using these things, people can end up accidentally shooting themselves in the foot quite easily. Like accidentally removing the terminating character from a char* string...

Plain malloc and free can get stupidly complicated when lots of other objects reference one memory location, which is where reference-counting smart pointers come in, with built-in bounds checking etc.

Using those things is defensive programming.


Did you mean to respond to my post?

I agree with what you said. I tried hard to use C++ language features (especially the STL) in a previous job, and honestly it was more hassle than it was worth.


That's why I like C so much.


Yeah, but in C you end up creating your own data structures from scratch every time. You can use data-structure libraries, but without generics or something similar it's difficult....

Then sometimes actually getting an external C library to build (if it's not in the package manager) is trouble: ./configure && make usually ends up in some obscure error or missing dependency... C++ usually makes the errors even more unreadable...


> in C you end up creating your own data structures from scratch every time. You can use data-structure libraries, but without generics or something similar it's difficult....

Hmm... I never ran into too many problems with this myself, but it could just be outside my experience. Can't typedefs help with this?

Also, better to have the code to step through, rather than decrypting STL/template compiler errors, no?

And what's the harm in a bit of repetition of code structures?

> Then sometimes actually getting an external C library to build (if it's not in the package manager) is trouble: ./configure && make usually ends up in some obscure error or missing dependency... C++ usually makes the errors even more unreadable...

C++ is the main thing I'm comparing C to. Yes, sorting out dependencies isn't fun.


An array is a good enough data structure for most uses, and

"Yeah, but in C you end up creating your own data structures from scratch every time."

is not true.


It is true, unless you do some preprocessor hacking (type arguments can have no * or spaces), or use void*, at which point types become useless.


You don't have to create them from scratch, though -- you can reuse old code, even copy-and-paste it.


I'm interested in what alternatives the author has for the object-oriented "zealots." I'm assuming his alternative is a functional language. Personally, I think functional languages are great, but I don't think that's any reason to flame Java/C++/C#. These are great tools. They were created for specific reasons, and they can certainly be used in efficient ways. Of course, they can also be horribly abused. I don't think it's fair to say that any particular language is the end-all be-all for development. You have to pick the right tool for the job.


"ORMs are newer and hipper than those old boring outdated relational databases?" I stopped reading after this point since author obviously doesn't understand what he's talking about.


What does he not understand?


An ORM is a library to map objects onto a relational database. ORMs are not a substitute for the database itself, but a substitute for a DB-interaction library.

I.e., to use an ORM you must use a relational database. Things like Mongoose exist, but they're not ORMs, since they don't map objects to a relational database.
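
For example, roughly, in JPA style (the entity and its fields are illustrative):

    import javax.persistence.Entity;
    import javax.persistence.Id;

    // The ORM maps this class onto a table; the relational
    // database is still doing the storage underneath.
    @Entity
    public class Student {
        @Id
        private Long id;
        private String name;

        public Long getId() { return id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }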


I see what you mean. I thought he was mocking the idea of wrapping the old uncool relational model in a fashionable new OO API.

What is substituted is not the relational DBMS but the relational model. ORMs are supposed to let you think in terms of objects instead of sets of tuples.


No; it turns out most SQL work involves extracting the data from the resulting SQL statement into objects, so you don't end up passing five params from the SQL result everywhere.

ORMs do this automatically, and go beyond it.


That's true for some basic CRUD operations, and that's what ORMs should be used for.

But people tend to just blindly copy everything into an object model and perform operations on those objects that could just as well have been done in a set-oriented way, with a lot less code and a lot less network traffic.

Obviously there are scalability downsides, and not everything is easier to express in terms of sets, so I try to be pragmatic. You can of course do set operations through an ORM, but that's not its purpose, and the performance implications are sometimes tricky.


Most ORMs have something equivalent to HQL/LINQ. HQL allows you to use set operations, which then get translated into the equivalent SQL statement. Thus no extra objects get transferred across the network.

However, there is one thing: most ORMs by default transfer the whole object. What if you only need a few variables?
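
For reference, HQL does let you project just the fields you need - a hedged sketch, with illustrative entity names:

    import java.util.List;
    import org.hibernate.Session;

    class StudentQueries {
        // Select only two fields instead of whole Student entities,
        // so less data crosses the network.
        @SuppressWarnings("unchecked")
        static List<Object[]> namesAndEmails(Session session, String faculty) {
            return session.createQuery(
                    "select s.name, s.email from Student s "
                    + "where s.faculty.name = :f")
                .setParameter("f", faculty)
                .list();
        }
    }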


I've been looking into different software-engineering methodologies, and it seems that all of them rely on specialization. I think this leads to unnecessary complexity.

OOP might not be more modern, but I think it was trying to move in the direction of less specialization. Unfortunately, it has gotten sidetracked.


Java is a new language? It's been around for 15 years or so. OO is even older - it goes back to Simula 67 and Smalltalk in the '70s....



