Ask HN: Do you mostly program in a non-FP or non-OO style? Why?
69 points by open-source-ux on Sept 10, 2017 | 88 comments
Functional and object-oriented styles of programming dominate programming languages today. Are you using a language that has neither of these features? If so, what is the language and why do you use it?

Or are you using an FP or OOP language but rarely use the FP or OOP features? If so, why?




Primarily a Python and Go programmer.

OO: Only rarely, and only where it really makes sense. Most programs I work on are not complex enough to justify the overhead (boilerplate, cognitive).

FP: I use this more, but again, only when I can really justify the added cognitive overhead.

You'll notice the two references to cognitive overhead: I have adapted the "write for a 6th grade level" idiom into programming. I want as many people as possible to be able to pick up the code and modify it. I work with everyone from interns to decades-plus veterans, and my code must be grok-able by both. Pure FP tends to confuse interns, and veterans have a tendency to hate the layers and layers of indirection and abstraction OO brings into a program.

Normal and boring old "an imperative main method with functions" tends to be the least offensive and most understandable to all parties. Compromises are fun.
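
For illustration, a minimal Go sketch of what "an imperative main method with functions" can look like (the file name and helper functions here are invented):

    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"
    )

    // loadLines reads a file and returns its lines; plain data in, plain data out.
    func loadLines(path string) ([]string, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, err
        }
        return strings.Split(strings.TrimSpace(string(data)), "\n"), nil
    }

    // countNonEmpty is a small, single-purpose function anyone can grok in isolation.
    func countNonEmpty(lines []string) int {
        n := 0
        for _, l := range lines {
            if strings.TrimSpace(l) != "" {
                n++
            }
        }
        return n
    }

    // main reads top to bottom: load, compute, print.
    func main() {
        lines, err := loadLines("input.txt")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("non-empty lines:", countNonEmpty(lines))
    }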


Programming for the lowest common denominator? Something about that sounds wrong.

I generally agree with "use only as much cognitive overhead as needed to get the job done" (I'm enjoying that term) - but it seems like a slippery slope that could slowly bring everyone's abilities down to the least capable developer.


It's also an excuse to avoid any sort of abstraction whatsoever, which makes solutions to genuinely difficult problems impossible to read, since one has to wade through the low-level details at every step.

To me, it's a bit like telling a person how to get from point A to point B by telling them when, how hard, and for how long to step on the gas and what angles to put the steering wheel at. Sure, that might work to get someone out the driveway, but good luck getting to the grocery store that way.

I will genuinely never understand the go philosophy that "excessive" abstraction is bad, with no attempt to justify why current levels of abstraction — which are charitably hundreds of times more complicated than the difference between, say, go and ruby — are good, right, and just. Everything we do on a computer from processing to memory access to networked communications to rendering to handling input to relational data modeling to… anything is dozens of layers away from what's actually happening. But somehow right now we're at the optimal level of abstraction and any more would just be too much. Okay.


> no attempt to justify why current levels of abstraction — which are charitably hundreds of times more complicated than the difference between, say, go and ruby — are good, right, and just

It seems to me that Go is part of a reaction from people who are not happy with current levels of abstraction.


I think the grandparent means that Go is already very abstracted from what's actually happening (e.g. programmers don't have to think about quantum mechanics, voltages, machine code (i.e. the actual numbers), microcode, CPU caches, virtual memory, finite memory/memory management (GC), implementing runtime type-checks or virtual dispatch) like many other languages. On one of those "spectrum" diagrams everything is bunched really far to the right:

                                             asm    Go Ruby SQL
  reality <---------------------------------------------------> abstract
Their point is something along the lines of: there's not as much justification for why all that stuff to the left is "good" abstraction, versus the justification for why the tiny extra bit to the right is "bad". The line between good/bad abstraction seems to be fairly arbitrary.


This feels like a false comparison to me. When you write ASM you pretty much know what it's going to do.

When you ADD two registers, you really are making two reads in the register file, which really is a piece of physical hardware, and those reads really do go to a physical adder and get written back to the real, physical register file. And languages like Go translate reasonably (but not exceptionally) straightforwardly to assembly. Even if you don't know all of the details that go into making it fast, your first guess of what it's doing is pretty much right.

In contrast, adding some lazy FP callback into your latest node.js framework on DOM objects might be doing something, somewhere, but who on earth really knows?


> This feels like a false comparison to me. When you write ASM you pretty much know what it's going to do.

This is because it's an excellent layer of abstraction on top of the machine code which is interpreted by the abstraction of microcode which sends it to the abstraction of a physical processor that's itself just an abstraction on top of transistors that themselves are an abstraction around manipulating voltages on a complex network of circuits.

Nobody bats an eye at these levels of abstraction. And they're not perfect! Modern CPUs have hundreds of errata. Skylake's alone is almost 40 pages long, at 3-4 per page.

But you take them for granted because they're several layers lower than what you have to deal with on a regular basis.

Even your example of adding two registers required compiling through an assembler, scheduling execution by a time-sharing operating system that fakes the concept of running hundreds of parallel processes by rapidly looping through running processes, and on and on and on.

We are awash in a sea of abstractions more deep and complex than even a dozen of the world's best engineers put together could hope to fully understand. And yet people earnestly defend language design decisions that prevent a single function from comparing two values of any arbitrary numeric type as being too complex, when issues like that are less than a hundredth of a hundredth of a hundredth of a percent of the complexity that is a modern computer.


Electronics Weekly says[1]

> “The M0 is a third of the size of the M3 in its minimal configuration,” ARM CPU product manager Dr Dominic Pajak told EW – 12,000 against 43,000 gates.

That same page is 2.8MB compressed; the half of it that is Javascript decompresses to 2.6MB.

Yes, a big OoO core has a lot more to throw around, but the vast majority is spent on tricks to make things go faster. The layers are thinner than you expect, and they're built that way on purpose. The hardware below assembly is a far smaller jump than the browser in the sky.

[1]: https://www.electronicsweekly.com/news/products/micros/arms-...


Over two-thirds of that is ad-related code. The HTML, CSS, and basic Javascript needed to run the site appears to be on the order of < 250KiB. Images obviously bump that number higher.

You are not arguing against abstraction. You're arguing against the user-hostile influence of advertising on delivery of content on the web.


And you think adblockers have a legitimate reason for their codebase to be 3x the size of DOOM? If all this abstraction from the low level C code of DOOM to the high level Javascript-on-the-browser bought us anything, you'd think the abstract one that is solving an easier problem would take less code, not more.


s/adblockers/adverts and trackers


> When you ADD two registers, you really are making two reads in the register file, which really is a piece of physical hardware, and those reads really do go to a physical adder and get written back to the real, physical register file

You've elided a pile of abstraction here too, like what it actually means to "read" and "write" the physical register file. Additionally, CPUs do a ton of microcoding of operations, so simple instructions like a 2-register ADD may correspond to the "obvious" thing, but more complicated ones will go through more hardware/do more things.

The gulf of abstraction between the physics and any programming language is vastly larger than differences between programming languages, so it's weird that all of the hardware and operating system and C+GC is the right abstraction, but "tiny" extensions, say, like hardware and operating system and C+GC+generics^ is not right.

^ (Any of the common ways to implement them.)

> In contrast, adding some lazy FP callback into your latest node.js framework on DOM objects might be doing something, somewhere, but who on earth really knows?

This is the false comparison! Even Go can conceal arbitrarily complicated operations behind function calls. I can make a Go API that takes closures to execute later too. (There's even core language support built on this idea, `defer`, that behaves in a way that's resistant to the straightforward (RAII-like) implementation.)
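
As a hypothetical Go sketch of that point (withRetry and its closure are invented names): an API that takes a closure to run later, plus defer, both of which hide when and how much work actually happens:

    package main

    import (
        "fmt"
        "time"
    )

    // withRetry hides an arbitrary amount of work (retries, sleeps) behind one
    // call that takes a closure; the cost is invisible at the call site.
    func withRetry(attempts int, step func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = step(); err == nil {
                return nil
            }
            time.Sleep(10 * time.Millisecond)
        }
        return err
    }

    func main() {
        // defer schedules work for later, much like handing a callback to a framework.
        defer fmt.Println("runs last, scheduled up front")

        err := withRetry(3, func() error {
            fmt.Println("trying...")
            return nil // a real step could do anything, anywhere
        })
        fmt.Println("err:", err)
    }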

If we're going to be talking about things at the instruction-level, you can definitely complain about the dynamism of Ruby forcing even simple + to be (quite) a few more instructions, but you need to consistently look at either the micro-scale or the macro-scale, or explicitly connect the two.

If you were doing DOM manipulations in Go, I'm sure, say, listening for some events (like a "lazy FP callback" might be) would be at least a similar order of magnitude in terms of non-obviousness to the JS version. It's fair that different language/ecosystems may encourage/lend themselves towards different coding styles, but this is most evident with apples-to-apples comparisons.


> The gulf of abstraction between the physics and any programming language is vastly larger than differences between programming languages, so it's weird that all of the hardware and operating system and C+GC is the right abstraction, but "tiny" extensions, say, like hardware and operating system and C+GC+generics^ is not right.

Yes! This is why I argue that the problem isn't "too much" or "excessive" abstraction. We're already so far down the rabbit hole of abstraction that another layer or two is peanuts compared to the amount of abstraction we implicitly accept as being totally normal, reasonable, and okay.

The issue is bad abstractions. And we as an engineering discipline have lots of them, at all layers of the stack! I'll even happily entertain the notion that generics^ (to borrow your use) are a bad abstraction. I don't agree, but at that point I think we're at least debating something worthwhile — as long as the alternative isn't "no more abstractions", but is some other layer of abstraction that better approximates some high-level concept we want to model.

Fighting against more abstractions is a losing proposition. We need to be focused on how to develop good abstractions, so we can progress the field on solid, stable ground instead of on the quicksand it feels like we've been progressing on for the past decade. In my opinion, Rust is a phenomenal leap forward here.


Yes! You have restated my position better than I could have.


Every person who's bet against additional layers of abstraction has been on the wrong side of history so far. I see no reason why that trend ought not continue. Some of those layers will be rethought, of course. Some will be merged. But it's inevitable that new layers will be added on.

Abstraction is fundamentally the process that allows us to solve progressively harder problems, by sharing knowledge of our solutions to similar problems. It's central to our understanding of math, physics, biology, psychology, and every other scientific field you can name. Software engineering is no exception.


The losing minority, sure, but the wrong side? AAA games? Linux? Rust? Vulkan? WebASM?

Yes, some abstraction is beneficial. But too much abstraction leaves you with, you know, the decrepit state of modern software. Abstraction as you actually see it isn't letting us solve "progressively harder problems", it's letting GMail take up more RAM than my OS does from boot. Meanwhile most of the really interesting problems are still done in fairly primitive languages.


> AAA games?

You mean the ones built on a handful of abstracted 3D engines like Unreal, Source, id Tech, CryEngine, Unity, etc?

> Linux

You mean the operating system that abstracts away from us things like filesystems, hardware access, shared memory, time sharing, networking protocols, access control, multicore processors, etc.?

> Rust

You mean the programming language that adds a new abstraction of "borrowing and ownership" to help simplify the confusing existing abstractions of memory management and data race detection?

> Vulkan

You mean the API that abstracts away parallel computation on graphics hardware, upon which they intend for dedicated graphics abstractions like OpenGL and WebASM to be abstracted?

> WebASM?

You mean the API that abstracts away running low-level machine code — not on your own machine, but sandboxed on remote machines via web browsers.

> Meanwhile most of the really interesting problems are still done in fairly primitive languages.

Do you want to take a guess at why I'd argue you consider the obscenely complicated task of "delivering secure, multiuser, interactive applications with offline persistent data storage to hundreds of thousands of remote clients" a non-interesting problem?

It's only because of the incredibly successful and complex network of abstractions that are Ethernet, IPv4, TCP, DNS, TLS, HTTP, HTML, JavaScript, SQL, graphical image formats, web browsers, database servers, caches, switched networks, load balancers, and so on that these kinds of achievements that would have been considered monumental even thirty years ago are not "really interesting problems" today.

Abstraction has allowed us to break apart the component parts of this incredibly complicated problem into small, independent, coordinating layers. The fact that you think that something like delivering web applications is a non-interesting problem practically makes my argument for abstraction all by itself.


Your points are good, but parent's comment has mostly been about OO which is not necessarily a successful abstraction in these areas.

AAA games or kernels are traditionally written at a low level, i.e. C with maybe some convenience C++. Especially for memory management there just isn't a good one-size-fits-all abstraction. Try finding AAA games or kernels written in Java.

Another problem of OO for high-performance or complex architectures is the mindset it inflicts upon developers. It motivates decomposition into "independent" objects, while this is just not possible. You have to think about processes in the large, about dataflows, about how to structure information and how to handle cross-cutting concerns, etc. Approaching these problems with an OO mindset leads straight into disaster. Look up data-oriented programming. Look at how a game engine is architected. There are lots of memory and resource managers, for example. But nobody would think of a vertex as an individual "object", or break their head over how to design a character object when the related data really has to sit all over the project (character world position and state, associated 3d resources, sound resources, ...).

>> Rust

> You mean the programming language that adds a new abstraction of "borrowing and ownership" to help simplify the confusing existing abstractions of memory management and data race detection?

So far, it's just an attempt to enable mindless OO-style programming without inflicting a runtime, but I guess with very limiting mechanics. And it inflicts a huge syntactic overhead. I predict the applications of Rust are very limited.

Game programmers are usually (as far as I know) not very busy with memory management problems.


I'm certainly not arguing for no abstraction! Use as much as you need; nobody argues with that. These are just places where the bar for "need" isn't an afterthought, and effort is taken to keep abstractions minimal to avoid their cost.

> Ethernet, IPv4, TCP, DNS, TLS, graphical image formats, switched networks, load balancers

These abstractions exist because they are necessary. They are a bit of a mess, admittedly, but invested people have worked hard to make what they were given as small a mess as they can manage, and what is left exists because it has to, historically. When your packets get pulled through hardware specifically designed to do exactly that, the abstraction almost ceases to be. (Disclaimer: not a network person)

> HTTP, HTML, JavaScript

These abstractions are why 12 billion instructions per second per core counts as slow now.


Parent's comment was against excessive, i.e. superfluous abstraction. And that pretty much by definition must be avoided.

Abstraction in itself is a good thing. I just think of it as "compression". We have to express solutions to problems in the shortest way possible, otherwise they become unmaintainable.

What most people fail to see is that a dead simple language like C provides all of the tools to solve many problems with good or optimal abstraction. And if you add abstractions like OO and implicit memory management to solutions for many complex or performance-oriented problem domains, these solutions become effectively non-solutions. Because in these problem domains, decisions about memory allocation or data organisation are part of the solution.


"Excessive" abstraction is a misnomer. I can practically guarantee you that whatever level of abstraction we're working at today, we'll be working several layers deeper ten years from now. Arguing that, e.g., generics are "excessive" abstraction when we're already dozens of layers away from manipulating transistor voltages is bordering on the absurd.

The issue people should be fighting against isn't excessive abstraction, since the only way to do so is to counterproductively inhibit abstraction in the general sense. We as an industry should focus on how to develop good abstractions while avoiding bad ones.

Not that I have any particular insights on how to approach that problem. I'm just convinced that avoiding abstractions altogether is throwing the baby, bathtub, and entire bathroom out with the bathwater.


Here's the thing though - most programmers are not working on "genuinely difficult problems". Yes, abstraction is a useful tool but like many tools it's harmful if used improperly. Code has a tendency to live way longer than a programmer expects. Writing code that is simple, quickly grokable, and low risk to change is really important. Especially when 3 generations of software developers have come and gone between the original author and the current maintainer.

I think falcolas's philosophy is spot on - treat most problems plainly and simply. In the rare event a genuinely difficult problem arises, use your tools to get it done right.


Abstraction is the process by which you now have the privilege of saying that what most developers do is not a hard problem.

There's a reason why we're solving the problems of today today, and not two decades ago.

You're right that code is read many more times than it's written. That doesn't argue against abstraction — it argues for it. But it argues for choosing the right abstractions, and that's where I feel like we do a poor job as an industry.

I've worked in large code bases with no abstractions and the grass is assuredly not greener. Every solution to every problem is as I said before — akin to reading directions specified in pedal pressure and steering wheel angles, rather than in terms of streets and turns.


> You're right that code is read many more times than it's written. That doesn't argue against abstraction — it argues for it. But it argues for choosing the right abstractions, and that's where I feel like we do a poor job as an industry.

Completely agree with this point. My go-to example of why abstraction should be used to increase readability is street directions. Imagine somebody asks you for directions to a restaurant, and consider how different your answer would be depending on how well the other person knows the city. If they are a tourist, you might need to give them very detailed directions, down to the level of "go forward X blocks, turn left, continue for Y blocks", etc. Conversely, if they are a native, the directions could simply be "It's right next to Z". Obviously the latter directions are useless to the tourist, but the former are far from ideal for the native -- they're just way too verbose, easy to forget and easy to get confused by. Understand your audience.


> Programming for the lowest common denominator? Something about that sounds wrong.

No, that's almost exactly right. In my experience designing, building, inheriting, and maintaining systems that in some cases predate the dotcom boom, and in others need to run invisibly for just as long into the future, it's the simplest design that's the best design (but, as they say, no simpler).

I'm not talking about avoiding useful language features because nobody wants to learn them. I'm talking about the structure of the system itself. Minimize layers, minimize context, keep things as linear as possible. Don't use polymorphism just because you can, don't use functional tricks that obscure rather than enlighten the flow, don't make version zero an SOA, etc.


Kernighan’s law:

“Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.”


Perhaps, but doesn't the fact that a piece of code has bugs imply that it wasn't actually written as cleverly as possible?


The code may not be buggy, but it may surround bugs, so the non-buggy code still has to be debugged/understood. The easiest code to debug is the code that's never written (fastest too).

Juniors/freshers never get this and always try to impress with their cleverness. KISS will always be true.


No, the presence of bugs in a piece of code only implies that that piece of code exists. ;-)


How do you test your code? When a module of code, C, depends on abstraction X, how do you test C in isolation from X?

The common way to do this in OO (and yes this very much includes Go, whose standard library is incredibly OO) is to have C depend on an interface instead of a concrete X. Then you have full freedom to test C very robustly and at different abstraction levels.

So how do you cope with this in your code? "Imperative main method with functions" doesn't sound like it leaves room for proper testing, unless the code doesn't do much.
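
A rough Go sketch of the approach described above, with invented names: the code under test ("C") depends on a small interface rather than a concrete X, so a test can substitute a fake:

    package store

    // Storage is the small interface the code under test ("C") depends on,
    // instead of a concrete X.
    type Storage interface {
        Get(key string) (string, error)
    }

    // Count only knows about the interface, so any implementation can be swapped in.
    func Count(s Storage, key string) int {
        v, err := s.Get(key)
        if err != nil {
            return 0
        }
        return len(v)
    }

    // In a test file, a trivial fake stands in for the real X:
    type fakeStorage map[string]string

    func (f fakeStorage) Get(key string) (string, error) { return f[key], nil }

    // func TestCount(t *testing.T) {
    //     if got := Count(fakeStorage{"a": "abc"}, "a"); got != 3 {
    //         t.Fatalf("got %d, want 3", got)
    //     }
    // }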


The functions are the interface. Just because the interface isn't formalized by a class or "interface" definition doesn't mean there isn't an interface.
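
To illustrate with a hypothetical Go sketch (names invented): the dependency is just a function value, and a test passes a different function; no class or formal interface declaration is needed:

    package report

    // Build depends on "fetch" only through its signature; the function is the interface.
    func Build(fetch func(id int) (string, error), ids []int) []string {
        var out []string
        for _, id := range ids {
            if name, err := fetch(id); err == nil {
                out = append(out, name)
            }
        }
        return out
    }

    // Production: Build(realClient.FetchName, ids)
    // Test:       Build(func(int) (string, error) { return "stub", nil }, ids)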


If you have an implicit interface then you are implicitly doing OOP. You may be intelligently eschewing the worst parts of so-called OO languages (too much statefulness, and inheritance), but by using the best concepts of OOP you're demonstrating how great OOP can be when really done right.

I will point you to this interview where Joe Armstrong describes Erlang as perhaps the only Object Oriented language (countering his earlier, less considered view that OO sucks): https://www.infoq.com/interviews/johnson-armstrong-oop


I disagree. Just because your code has an interface doesn't make it OOP. If your functions are stateless, yes, they have an interface, but that's not really OOP, which is usually defined as scoped data bound with behavior. A bunch of static classes isn't OOP either; they may be using classes, but the state is global, so it's not OOP. Interfaces are just one aspect of OOP, and the concept of interfaces existed long before OOP and is used in other programming paradigms. An imperative program can have an interface too.


> If you have an implicit interface then you are implicitly doing OOP.

Are you saying that having a set of functions is an implicit interface and therefore every program ever written that contains a set of functions is implicitly doing OOP?


We are talking explicitly about an interface allowing an implementation that can be swapped out for testing. Not every program ever written is organized like this, but those that are are making use of polymorphism and therefore a bit of OOP.


Polymorphism is not OOP, although OOP designs may be polymorphic. Many FP approaches leverage polymorphic functions heavily (and in fact in most they use a parametric style of polymorphism which provides a lot more structure than the OO-family ad-hoc polymorphism).


Thank you for adding some nuance. So then would OpenGL be an OOP API?


What is proper testing? 100% code coverage doesn't mean you covered 100% of the paths in your code. Maybe you hit a line, but didn't hit that line with the particular inputs that expose a bug. I'd argue that unit testing is a design tool, not necessarily a tenet of "proper testing". It will be impossible to test every unit of code super thoroughly. Even 1 line of code that accepts 1 input has an infinite number of states it can be in.

Proper testing is exercising all the functionality via the UI and understanding your program and its use cases. If you're doing end-to-end testing, the code's architecture does not matter because your program is basically a black box.


Data scientist/engineer here. I write Java by day, Rust by night and in both I embrace a data-driven design, where I first carefully lay out the data so that it's easy to use within the code. In hot loops, I avoid dynamic dispatch for its runtime cost.

I don't embrace the FP style either because while I generally limit mutation (`final` almost everywhere), I use mutation where it leads to code that is easy to understand, which would be exceedingly "clever" in FP style.
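
A very rough sketch of that data-driven idea, translated into Go purely for illustration (Sample and WeightedSum are invented; the Java/Rust originals would differ): plain structs in a slice, and a hot loop with no dynamic dispatch:

    package hot

    // Sample is plain data, laid out up front so the code that uses it stays simple.
    type Sample struct {
        Value  float64
        Weight float64
    }

    // WeightedSum iterates concrete structs directly; there is no interface or
    // virtual call inside the hot loop.
    func WeightedSum(samples []Sample) float64 {
        total := 0.0
        for _, s := range samples {
            total += s.Value * s.Weight
        }
        return total
    }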


I mostly use a procedural style (i.e. C-like) in Java, for working out new ideas in solo projects.

It's simpler, more flexible, less verbose and easier to follow than a full-on OO style. A great example is the early calculator example in editions 2 vs 3 of the compiler "Dragon" book: the older uses C, the later uses OO Java... and it's so much worse.

However, OO is great for wrapping up modules of functionality for which you've understood and settled on an informed architecture (or you just need to sweep repetitive boilerplate away).

I think inheritance is just about completely useless (but not quite completely), and (java) interfaces are great - if you have more than one implementation.

Big, multi-person projects are a different story.

I don't use much fp-style, except where recursion is natural (e.g. parser combinators); or for plug-in functions (hardly fp though).


I was writing a small but semi algorithm-heavy library in C# that didn't rely much on other libraries. I later decided to port it to Java and realized that the more C#-like I had written the original code the harder it was to port. Later when trying to port it to JavaScript I ended up re-writing the C#/Java code to rely less on the standard libraries since each standard library differed so much. At that point my code started to look very procedural in the OO languages possibly similar to what you've been doing in Java.

I came to the realization I didn't even need a lot of the language features for what I was doing and decided to switch it to C so I wouldn't have to maintain various versions of the library. It was a steep learning curve, but I've come to really enjoy the simple but powerful capabilities of C.


Thanks for sharing this, it's a great, succinct example of the journey many experience. You could flesh this out and post it on medium or something to get the ideas out there for discussion.


Procedural programming is what I learned first (Pascal), and I trust myself to write good code in a procedural style.

I do some microcontroller programming, and it's usually straight C. The hardware registers are all global, and their contents change based on external stimuli, so that kind of rules out the idea of stateless programming.

The earlier versions of Visual Basic had kind of a compromise, where it came with a lot of pre-made objects, and you could create objects if you got the special kit, but the casual programmer was only expected to use objects. I kind of adhere to that idea when I write in OO languages such as Python.

I use OO sparingly when programming Python, often to encapsulate hardware functionality, but then use those objects within programs that still look procedural.

I avoid using inheritance, mainly because I don't trust myself to do it in a maintainable way.


I've written almost entirely object-oriented code for two decades. My programs have almost no global variables.

I've tried functional programming in Rust, but it's not going well.[1] I don't like the "x.and_then(|foo|).if_even(|bar|).except_on_alternate_tuesdays(|baz|)" style. Rolling your own control structures is not good for readability.

[1] https://github.com/John-Nagle/rust-rssclient/blob/master/src...


1. "Shallow" OOP - minimal use of inheritance, composition ok but without making a Russian doll with 7+ layers with it.

2. "Grand scale Functional characteristics" - the exposed API should try to have "referential transparency" and "composability" (you can write systems with "functional properties" in languages like PHP just fine, btw)... I find not much benefit in "small scale / low level" FP.

3. Wrap stateful algorithms & other code rich in mutable variables in either (a) referentially transparent functions or (b) shallow objects that make it obvious how and when state changes (see the sketch after this list).

4. Avoid the "islands of functional purity in a sea of objects" pattern like the plague - it results in intellectual masturbation at the small scale and incomprehensible systems at the large scale; your monadic fantasy is useless when stuck inside the method of an 8-levels-inherited monster object... functional is important at the whole-system level, while a 10 line method is easy to understand even if it mutates local variables all over the place.
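
A minimal Go sketch of option (a) from point 3, with an invented example (Median): the stateful, mutating algorithm stays inside, while the exposed function is referentially transparent:

    package stats

    import "sort"

    // Median mutates freely inside (copying and sorting in place), but the
    // exposed function is referentially transparent: same input, same output,
    // no visible side effects.
    func Median(xs []float64) float64 {
        if len(xs) == 0 {
            return 0
        }
        local := append([]float64(nil), xs...) // copy, so the caller's slice is untouched
        sort.Float64s(local)                   // the stateful step stays hidden here
        mid := len(local) / 2
        if len(local)%2 == 1 {
            return local[mid]
        }
        return (local[mid-1] + local[mid]) / 2
    }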

In theory FP would be great at all scales when coupled with a good type system... but I've never got to work on projects in languages like Haskell or Scala and I'm not sure I could handle the cognitive overhead of it.

Oh, and don't use Exceptions, ever!


On programming languages: It also doesn't matter much whether the language has good support for functional programming when you don't care about doing it in the small, at the inner-inner-function level, so `map`, `foldr`, functors or whatever... make little difference. As long as you have the basics, like "first class functions" and "lexical closures", you can do "large-scale functional" programming in languages like Go just fine. It actually feels more refreshing in a minimalist language like this and with a minimalist and explicit type system :)
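
A tiny, hypothetical Go sketch of that "large-scale functional" style, using only first-class functions and lexical closures (Quote and NewTaxRate are invented names):

    package pricing

    // Quote is the "functional" core: it touches no globals and mutates nothing
    // it doesn't own; effects and config come in as a function value.
    func Quote(base float64, taxRate func(region string) float64, region string) float64 {
        return base * (1 + taxRate(region))
    }

    // NewTaxRate uses a lexical closure to capture configuration once, at the
    // edge of the program.
    func NewTaxRate(rates map[string]float64) func(string) float64 {
        return func(region string) float64 { return rates[region] }
    }

    // usage: Quote(100, NewTaxRate(map[string]float64{"eu": 0.20}), "eu")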


I use Scala - it can be used or abused to do advanced OOP and advanced FP.

I tend to see programming as writing. The style of writing depends very much on the context and the audience.

I lean towards "basic" FP (immutability/pure functions/composition) but none of the heavy category theory concepts (anything with types that are too complex). It's mostly procedural with a hint of FP.

Most things should be functions, most higher level things (classes/packages) should be primarily ways to bundle related functions.

Prefer to maintain a clear separation between data & operations on data or structures imposed on top of it. I dislike the OOP approach of bundling data + transformations together. Also dislike inheritance because that is baking one particular structure into the definition of data.


My preference recently for side projects has been imperative/procedural style of programming. I prefer it to OO and FP hands down. To me code is so much easier to read when you don't have clever abstractions everywhere, most code reads line by line and isn't trying to hide anything. I think there is a tradeoff between concise terse code you find in OO/FP that reuses other code vs the long step-by-step procedural code. For me personally I'd prefer longer code that is not reusing a bunch of other functions.

It's almost a weird paradox: OO/FP focuses on reusing code yet imperative/procedural code (ie: C) does not reuse code but is itself the most reusable code. You can't have your cake and eat it too I guess.


Reusable code may not be the best code. There is probably a lot of redundancy (cross-cutting concerns). Or alternatively, increased interface complexity, to the point where it's easier to just write a custom-tailored version of the code.

I still like to program in C and make my own thin abstractions / "runtime", tailored to the task at hand, and using only minimal dependencies. That way it's much easier to have only essential dependencies, not accidental ones. It's such a relief seeing a couple thousand lines of cleanly modularized C code compile almost instantly.

For example, by putting a char * into a struct definition a dependency on a more clever string type can be avoided. All that it takes is delegating memory management decisions to another place. Which is actually very beneficial, since most code shouldn't be concerned with mutation or especially allocation.


Redundancy/duplication is not a boon to maintainability when there's a revolving door of co-authors.


PHP. Good old procedural PHP. Cleaner than most OOP and FP projects in the wild and I can code large things quickly and easily. The code is very maintainable as well. I even avoid PHP's OOP solutions with the exception of PDO which I wrap in procedural functions.


Physics student here. I write a lot of python scripts/notebooks, which tend to be very imperative in style. They're usually just one-off programs, and there's little in them to be abstracted out. I rarely drop into C, as numba+scipy is usually fast enough.

Other than that, I use Mathematica quite a bit for playing around with ideas. I've tried sympy, and I hoped to switch to it but it's just not as fluid and integrated.

If I'm coding for fun I tend to come back to FP and OO, I'll use either Haskell, Python or CL. I've been meaning to make something using Rust for a while. I'm planning on building a fermentation chamber (and perhaps a kegerator depending on my budget) for homebrewing at some point over the next year; I'm thinking that I might use Rust for the temperature controller.


Similar case here as a cognitive science PhD student... analyses (Jupyter notebooks) are basically imperative but if I need to write a library to support the analysis (which I almost always need to do) then OO.


When I have an option, I program in a procedural style. The code is easier to maintain and follow this way.

For side projects and for small Python code at work recently I have been applying data-oriented programming patterns. So far they worked surprisingly well.

Perhaps this is because one has an overview of the whole program state, which is not hidden behind multiple layers of OOP abstractions, and there is no mixing of data and code, as happens with functional style.


OOP wasn't really yet a thing and GUIs (which I think of as the killer app for OO) were still experimental when I was first learning programming. I started out with Fortran and Pascal on a timeshared mainframe. Probably because of this background, I still find OO languages ungainly, especially the huge libraries that are usually associated. I'm also a bit of a skeptic about the purported value of the OO abstraction, which always seems to cherry-pick from a few problem classes that lend themselves to representation as interacting objects.

While I sometimes teach programming in C# and C++, my main responsibility is our Algorithms and Data Structures courses, for which we've kept using plain old C. From a teaching perspective, it's a great language for teaching fundamental algorithms and data structures. The students are also able to leverage that work into later courses on microcontrollers and embedded systems.

For my own work, I tend to mostly be writing code for somewhat idiosyncratic one-off data processing (I'm currently working on a genetic algorithm for class, student and professor scheduling) or embedded systems. The former I generally work in Common Lisp (SBCL)(although I do keep promising myself to learn enough Clojure to see if I prefer it). This is probably because of the way I think, as I tend to be comfortable building a system from bottom up, moving the data representation from the general facts I start with toward a representation that meets my needs. I actually came to CL pretty late, about a dozen years ago, but it really seems to suit me, possibly because I'm still primarily a command-line person. Being an emacs user certainly was a strong influence, too, although I don't hack elisp very much. I should add that I do often use CLOS, especially when working with more structured data, so I do use some OO, although it's mostly just to provide a more convenient interface to some complicated data type.

For embedded development, I still work almost exclusively in assembler or C, with less assembler every year. I tend to be working on pretty low-powered special-purpose devices, so code reuse and robust interfaces don't add much value, but close control of exactly what the hardware is doing does. This was normal in that domain until comparatively recently, but the power of even cheap devices and the availability of libraries mean that there are a lot more options. I expect that I'll continue to work mostly in C out of familiarity and inertia, but I will admit to having recently bought some Lua books with the intent of trying it with ESP8266.


> which I think of as the killer app for OO

Was this written in a book somewhere? I keep hearing it from people, but nobody's able to really explain why. OO is terrible for GUIs IMHO.


GUIs lend themselves very well to an object abstraction. The discrete elements (dialogues, menus, controls, etc) are objects that often inherit behaviour from classes of objects (i.e. A modal dialog is a generic dialog is a form is a ...). Events are messages between objects. We even see things like polymorphism eg. anything can get a "click on" message, but different objects behave differently when clicked.
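
A toy Go sketch of that mapping, with invented widget types: anything can receive a "click on" message, and each kind of object responds in its own way:

    package gui

    import "fmt"

    // Clickable is the "can receive a click message" idea.
    type Clickable interface {
        Click()
    }

    type Button struct{ Label string }
    type Checkbox struct{ Checked bool }

    // Each widget responds to the same message in its own way (polymorphism).
    func (b *Button) Click()   { fmt.Println("button pressed:", b.Label) }
    func (c *Checkbox) Click() { c.Checked = !c.Checked }

    // HandleClick dispatches to whatever was hit without knowing its concrete type.
    func HandleClick(target Clickable) {
        target.Click()
    }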


I try to only use capabilities that are common to all languages. Some basic math, basic conditionals, and maybe poking a file or memory address somewhere. I don't give a enough of a crap to get excited for new but short-lived features anymore.


Same for me. Lately I've been wishing there were a translation matrix for these basics across every language.



Now there is an illuminating and world-forwarding idea!


Personally, I learned to code in an OOP style. I cut my teeth on C++ and then a few years of C#, but these days I work in Python.

My problem with non-OOP in a language like Python is that I have trouble scaling large apps and interacting with large libraries, because without type information enforced by the compiler, there is too much left to my own working memory, compared to a strongly typed language built around OOP.

What's more, without rigid encapsulation I feel I'm exposed to an excess of internal code any time I need to look up a parameter or kwarg. Never mind the trouble of navigating five files deep through aliased method names...

As examples, consider matplotlib or tensorflow. Extremely easy to use once you know where everything is and what params to pass where, but I can't help but feel that it would be easier with stronger OOP.

I feel like I have not yet learned to think in a Pythonic manner, but after a few years I'm not sure if I ever will. As an aside, I just don't understand how Python doesn't cause problems for devs when it scales...


I primarily use a declarative language called LogiQL, which is a superset of Datalog. It's basically logic programming. Using predicates and rules.

We have a proprietary database called LogicBlox, that allows us to push all business logic down to the database layer, and almost entirely avoid having a service layer, or at least a very lean one. This allows us to address efficiency problems purely by optimizing database joins between predicates (tables).

Our data is highly normalized (6NF).

http://www.logicblox.com/learn/


FP. Because it makes most things easier while resulting in software that works more consistently.


The last project I did was mainly procedural, and that's my default go-to type of structure, but OO has its place too. For example, in that last one I had a few objects that implemented the same functionality but with different data structures. So if I later wanted to add a new data structure, I'd only need to implement a new class with the predefined and partly dynamic features.

For data iteration, I prefer a functional style (even in Python; even after years, list comprehensions just feel so wrong).


SQL is neither FP nor OO! It's not a general purpose language though, so it may not be a great answer. It's still amazingly useful and incredibly prevalent.


My current gig has me writing Go, which doesn't do much more than wave a hand at OO to start with, and the codebase I'm working with is particularly un-OO.


I mainly write in Go. My adopted approach is to be pragmatic about how I approach my problems.

I kinda tend to start with an OOP style for the data models and a small shake of FP for the functions, but I change this according to my needs.

There is no one true style for programming; there are only tools for a task, and using the best (or good enough) tool is the only important thing. (Though sometimes it's fun to use the wrong tool and see how it goes.)


I strive for FP, but since most external libraries are OOP, I've made my peace with it.

That's probably not the right question though. If we're just focusing on implementation, using the right data structures for the problem would have to be job #1.

Screw that up, and everything else (APIs, UIs, enhancements, training, you name it) is going to be an uphill climb.


I primarily use nouns for the macro architecture but use lots of verbs in the details.

Neither style especially dominates, and most languages have enough support for both even if they tilt one way or the other. Most people pigeonhole other styles into one or the other, sometimes ambiguously (e.g. actors get counted as both OOP and FP; relational has that problem too).


I mostly code in JavaScript.

Try to use FP most of the time, but sometimes a little mutation seems easier to grasp.


I primarily use Python scripts in my day job to parse data, automate job runs, analyze data...etc.

Rarely is anything complicated enough to abstract without making the code longer and more complex. Anybody can follow imperative code. OO is more difficult.


C


Starting with C has made me consider it a travesty whenever seeing a language where "main" is forced into being a method on a class.

OOP as a technique is fine, but any language disallowing freestanding functions is perverted, I tells ya.


I agree. I prefer pragmatic languages as opposed to dogmatic ones which is probably why I'm friends with Kotlin


Python with tricks from Erlang, PHP frameworks, Lisp. I do FP for most things, using OOP to structure the code when reuse is necessary, e.g. connectors to external APIs.


Scala[!] NLP/symbolic math.


Even in OO languages, I'm almost exclusively in an FP style because that style focuses on minimizing redundant code (e.g., don't write loops, write loop-transformer functions), minimizing redundant top-level functions (segmentations, hand-tuned stateful getters/setters, etc.), and maximizing the opportunity for correctness.

The last part is pretty important because properly testing software that talks to an external service is essentially impossible. Either you test in an integration setting where repeatable results are difficult to maintain, or you mock out your services and then you're testing your opinions against other opinions.

Here's an example of how I push this style in TypeScript, heavily redacted to remove company-specific marks, names and structures (this code is simple by intention; we may have a jr dev on the team soon, and an excessive number of generator abstractions could help with this process but would cause readability issues).

    function splitOrders(orders: e.OrderResult[]): {[key:string]: e.OrderResult[]} {
        return orders.reduce(
            (accum, borders) => {
                // I sure would love a better idiom for non-destructive map updates but
                // this isn't my primary language. Suggestions welcome!
                let next = {...accum};
                next[borders.datatype] = (next[borders.datatype] || []).concat([borders])
                return next
            }, {})
    }

    export async function undoifyUserProcess(env: any,
                                            tolkien: client.tolkien,
                                            token: client.APIToken,
                                            spotData: IThingData): Promise<any> {

        const legendaryCustomer = await e.getCustomerById(env, spotData.customer_id);
        const orderResults = await e.findThingContractDetails(env, +spotData.spot_id);
        const workSchedule = splitOrders(orderResults);

        
        let btasks = scheduler.undoifyCustomerFromOrders(
            env, 
            token, 
            { key: legendaryCustomer.userKey }, 
            workSchedule['newstyle'].map((wi => wi.bigId)));

        let ltasks = (workSchedule['legacy'] || []).map(async (workItem) => {
            // .. process elided but...
            const [nodeKey, groupKey] = ["things", "were", "removed"];
            const customerEmail = legendaryCustomer.some_field + "removed work" 
            // Factoring this out would have introduced a function with a bunch of parameters
            // or state that is only synthesized here, so we left it embedded & closed over.
            try {
                return await omg.undoifyUserWithTagGroup(env, tolkien, customerEmail, nodeKey, groupKey);
            } catch(e) {
                console.log(`... malformed database call your mom for help ...`);
                throw e;
            }
        })

        try { 
            const work: Promise<any>[] = [btasks, ...ltasks];
            const result = await Promise.all(work);
            return await e.recordUndoifyingThing(env, +spotData.spot_id);
        } catch(e) {
            console.log(`...`);
        }
    }

If a language can't express a decently abstract functional style, I do my best to avoid it. Still, I've had to write a bit of Golang at my current job despite its utter lack of descriptive capabilities. It's poignantly frustrating to deal with, but a good reminder of how bad life used to be for programmers.


> or you mock out your services and then you're testing your opinions against other opinions.

I'm curious what you mean. Mocking out the dependencies of a piece of code allows you to test that code exhaustively (for every range of possible inputs) if you choose. Even when that's not practical (probably most of the time), it allows you to do your best. It gives you the hooks to simulate any range of behavior.

> and maximizing the opportunity for correctness.

> The last part is pretty important because properly testing software that talks to an external service is essentially impossible

It sounds like you're just trying your best to write correct code and then hoping for the best? This absolutely will not scale, and it sounds downright frivolous.

Still, I would love to know more about where you're coming from. Do you do anything to ensure correctness aside from maximizing your hope of writing it in the first place?


First of all, Tome4/DND reference?

> Mocking out the dependencies of a piece of code allows you to test that code exhaustively (for every range of possible inputs) if you choose. Even when that's not practical (probably most of the time), it allows you to do your best. It gives you the hooks to simulate any range of behavior.

It lets you mock anything at all, but "anything" is not what you want to mock. You want to mock the service (and in a microservices architecture, its interplay with its dependencies).

If you can encode those into a series of constant replies? Good, but you'd better be doing something like FSharp's type providers or Facebook/Marlow's Haxl where you capture real traffic off the wire, otherwise all you've done is inject more opinion and conjecture into your tests and confuse that with correctness.

Outside of a very small number of specific methodologies, I think our industry is at mostly a loss for how to test microservices.

> This absolutely will not scale, and it sounds downright frivolous.

It doesn't scale with junior engineers. But neither does unit testing, which is more of the same "I think up these tests, we try them, and we pretend that's test coverage because this line of code was touched", which is demonstrably false.

In typescript, I try to do queue-based architectures with full docker stack simulation. People like to call these "integration" tests, but I consider them unit tests for impure code.

In a language that lets me do pure/impure effect separation even in a communal setting, I tend to use quickcheck for the pure parts and hand-test the impure parts if an environment exists. When I got to use Haxl, I used its caching features to preload responses and my code didn't even know it was being tested, which was great.

Also, in Haskell, I'll sometimes use MonadMock but only if I can grab raw source response from APIs (or of course I need to do something like fix time or random numbers).

But I really try not to waste my time in the futile tarpit that is unit testing anymore. It's sort of the pop quiz mentality applied to software engineering and all it does is provide false confidence that code is "tested" when the quality of the output still varies radically on the discipline of the developer.

When I _do_ go back to Ruby or untyped javascript, I do end up writing unit tests. But they're mostly to capture scope around components to ease refactoring and make requirements explicit; something every language should have.

Sorry for being so frank, but you seemed like you wanted a bigger discussion. I think the industry has collectively decided to freeze all interest in any technology conceived after 2007 and it bugs me.


Maybe I'm not quite understanding the first function, but you could probably write it like this:

  orders.reduce((accum, borders) => ({
    ...accum,
    [borders.datatype]: (accum[borders.datatype] || []).concat([borders])
  }), {})

although it's a little codegolfy and I'd personally write it more like this:

  orders.reduce((accum, borders) => {
     const datatype = accum[borders.datatype] || [];
     datatype.push(borders);
     return {
       ...accum,
       [borders.datatype]: datatype,
     };
  }, {})


Push modifies the prior data structure. I could use it here, but I do my best to avoid using push unless I'm very deliberately mutating an array iteratively. In this case it would work, but I don't like affording myself mutability as a shortcut.

Nice point on the first part though.


You could do

  orders.reduce((accum, borders) => ({
    ...accum,
    [borders.datatype]: [...(accum[borders.datatype] || []), borders]
  }), {})
although it's essentially concat().


I actually like that quite a bit more. I should have thought of it. Thanks!


Local-scoped mutability is not typically an issue. It's mutability at higher levels that causes things to be tough to track down. It seems like that would generally be overkill to worry about, especially in languages where immutability isn't native.


> Local-scoped mutability is not typically an issue.

But our refactoring tools really don't reveal the extent of variables, so while I may be confident in this outcome in the case where I write it, I leave a bit of a caveat that's difficult to document for subsequent developers.

Same with my use of const that was noted above, it's more a message to future developers than a useful assertion (const does very, very little in js).

If we were using Purescript or Haskell where I could assert this caveat with types, I'd do that instead. Even in Clojure, where it's actually kinda painful to leak the mutable contexts, I might do it. Too easy to lose control of stuff during a refactor in Typescript, so I don't.


By the way, you don't need to use let in your original code.

  const x = {};
  x.a = 123;
You're not changing x, you're changing a property of the object it points to.


This is more a declaration of intent. I am aware that const doesn't stop property modifications, and I think it's ridiculous.

I try to use "const" to mean "I never intend to change this." And I keep lobbying people to give us a better const keyword.

But thank you, I appreciate the feedback.



