The "Vector m a" means that it's a vector (like a Python list) that can only be m elements long, and those elements can only be of type a. E.g. "Vector 3 Int" will always be a Vector with three integers in it. If you try to treat it like a vector of any other length it will refuse to compile (e.g. pass it to a function that requires that it have 4 or more elements).
The whole line is the type declaration for a function (app) that takes a Vector of length m, and a Vector of length n, and produces a Vector of length (m+n). Which is to say, the compiler knows at compile time what length the resulting Vector will be, and will fail to compile if the function you've written doesn't - provably - always meet that requirement.
From a Haskeller's perspective it's amazing because container types are famous loopholes for code that produces runtime errors. For instance, if you call the "head" function (returns the first element) on an empty list it will halt the program at run time with an error, despite the fact that it passed the type check (because an empty list and a full one have the same type).
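For contrast, here's a minimal sketch of how Idris closes that particular loophole (safeHead is a name I made up for illustration; Data.Vect's real head has the same shape):

import Data.Vect

-- The argument type Vect (S n) a means "a vector of length n + 1",
-- so an empty vector can never be passed in.
safeHead : Vect (S n) a -> a
safeHead (x :: xs) = x

ok : Int
ok = safeHead [1, 2, 3]    -- compiles: Vect 3 Int is Vect (S 2) Int

-- bad = safeHead []       -- rejected at compile time: Vect 0 a never matches Vect (S n) a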
As someone who learned Haskell and has subsequently been writing a lot of Python, I keep a mental tally of how many of my bugs (some of which took ages to track down) would have been caught immediately by a type system like Haskell's or Idris's. I'd say it's well over half.
In Haskell those kinds of errors are drastically reduced, but a few still slip through. Languages like Idris can catch even more of them, in a clever way, which is awesome.
You have two parameters - a vector of size "m", and a vector of size "n". The function signature then expects to get back a vector of size "m + n".
Contrast that with a language with method signatures that can only return types like "Vector" or "List", without any information that is dependent on the incoming parameters.
So then, the logic is checked: if the method returns a vector whose size is anything other than the sum of the sizes of the two incoming parameter vectors, it won't compile.
In other words, even more behavior that would normally be a runtime bug is checked at compile time, which is a good thing if you are working in a domain where correctness is important and want to avoid runtime bugs.
In reading Haskell (and apparently Idris) types, a name that starts with a lowercase letter is an unbound type variable - sort of like a template parameter in C++. The `a` in each of the `Vect`s must be the same type, but it can be any type (picked at the call site, for any given call).
`app : Vect n a -> Vect m a -> Vect (n + m) a` is a type signature for a function, `app`, which should append a vector (`Vect`) of length `n` to a vector of length `m`.
What's cool about Idris and other dependently-typed languages is that the length of the resulting vector, `n + m`, can be tracked in the type. This can prevent a whole slew of errors.
I'm not an expert in Idris, but I will try to answer this as best as I can.
That's the Haskell type notation (Idris and Haskell are similar). Here's an example:
plus5 :: Int -> Int
plus5 n = n + 5
The first line is the type declaration of the `plus5` function. You can read it as "plus5 takes an Int as argument and returns another Int". The second line is the function declaration. The equivalent in Python would be:
def plus5(n):
    return n + 5
The syntax is the same when functions have more arguments:
add :: Int -> Int -> Int
add a b = a + b
This means that add takes two integers as arguments and returns another integer. This syntax might look strange (you could be asking yourself "How do I distinguish between the argument types and the return type?"), but that's because in Haskell functions are curried[1]: `add :: Int -> Int -> Int` really means `add :: Int -> (Int -> Int)`, so applying add to one Int yields a function that takes the second Int and returns the result.
Haskell (and Idris as well) also has parametric polymorphism, so you can define functions which operate on generic types. For example, the identity function:
id :: a -> a
id x = x
You can read this as "`id` is a function that takes an argument of type a and returns a value of type a (so the return type and the argument type must be the same)". This means the id function will work for any type of argument (String, Float, Int, etc.). Note that this is not dynamic typing, of course: Haskell and Idris are statically typed, and all the types are checked at compile time.
Now, let's move on to Idris. One of the reasons there's a lot of excitement over Idris is because it has dependent types, i.e., you can declare types that depend on values and these are checked at compile-time.
The example GP gave was this:
app : Vect n a -> Vect m a -> Vect (n + m) a
`app` here stands for append. This is the function that appends two vectors (lists whose length is part of their type) together. In Idris, `Vect x y` is a vector of length x whose elements have type y, so
Vect 5 Int
is a vector of 5 integers. So, essentially, `app` is a function that takes two vectors as arguments, `Vect n a` and `Vect m a` (this means they can be of different sizes but their content must be of the same type). It returns a `Vect (n + m) a`.
Think about it for a minute. That is amazing! We can be sure that, at compile-time, just by looking at the function signature, our append function, given two vectors of size n and m, will _always_ return a vector of size (n + m).
If we accidentally implement a function that does not adhere to the type specification (for example, one that appends the first argument to itself, returning a Vect of size (n + n) instead of (n + m)), the code will not compile, and the compiler will tell us where the bug is.
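For concreteness, here's a sketch of an implementation that satisfies that signature (this is essentially how Data.Vect's own ++ is written), plus the buggy variant from the previous paragraph:

import Data.Vect

app : Vect n a -> Vect m a -> Vect (n + m) a
app []        ys = ys                 -- n = 0, and 0 + m reduces to m
app (x :: xs) ys = x :: app xs ys     -- n = S k, and S k + m reduces to S (k + m)

-- The buggy version does not compile:
-- app2 : Vect n a -> Vect m a -> Vect (n + m) a
-- app2 xs ys = app xs xs    -- type error: Vect (n + n) a is not Vect (n + m) a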
A length n vector of a's, appended to by a length m vector of a's, results in a length (n + m) vector of a's. This is a safety guarantee that is statically checked at compile time rather than dynamically checked at run time.
How does it work, exactly? Both languages check things at compile time, but of course in C++ the length has to be known statically. Does Idris generate code at runtime for the right size?
Think about generics in any language, say Java Stack<String>. The type Stack is parametrized by the type String. What would you do in a language without generic types? To achieve the same effect, you'd need to create a custom StackString type, which itself is trivial. But then, you'd need StackInteger, StackBoolean, StackSomeoneElsesCustomType, etc.
Languages with a type system but without generics have the above downside. Usually you'd just give up making a type for each of these, and simply use the plain Stack type.
Languages without Idris-like dependent types suffer the same downside: instead of creating Stack<String, n>, you're forced to create Stack<String, One>, Stack<String, Two>, Stack<String, Three>, where One/Two/Three are unique types you created to conceptually represent a number. If you don't even have generics, you'd need to create StackStringOne, StackStringTwo, ...
Dependently typed languages, to put it simply, encode numbers (among other things) as unique types. Imagine this:
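(A plausible sketch of the kind of signature meant here, in the same pseudo-notation; append is a made-up name:)

append : Stack<String, Three> -> Stack<String, One> -> Stack<String, Four>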
This isn't too hard to imagine, is it? No runtime checks needed. Once you get the output you can carry it around, along with its type, to places that accept Stack<String, Four>.
So, asking "how come you don't have to generate these checks at runtime?" is similar to a programmer who's never seen generics asking "how come you can parametrize? Don't you have to generate all the StackString and StackInteger, etc.". It just works. Hope that analogy was clear =).
Dependent types in general blur the line between values and types, as you've seen above. Think of the usual values you use, and imagine having a new type to represent each one. Dependent types let you do this in a sane and logical way.
Thank you, I understand that, but I need a little more detail. Let me explain.
Languages that have generics can either erase the generic argument at runtime (like Java or Scala do) or generate a different concrete type for each instantiation (like C++ or Nim do). In either case, you usually know at compile time which concrete instances of the generic parameter are going to be used where.
That is, even in Java you may have a single class Stack<A>, but at the final use site this will eventually turn into some concrete type, such as Stack<String> (it may remain generic in intermediate steps). In other words, you are not going to use Stack with a generic parameter that you did not anticipate. You cannot read the runtime type of some instance, call it X, and then start writing functions that work on Stack<X>.
Now, let me turn to dependent types. A lot of the techniques that Idris promotes can be simulated in other languages, such as Scala, C++ or Nim. But in each case that I know of, you are required to know at compile time which concrete values you are going to use as generic parameters. That is, in C++ you can write functions that operate on types like Vector<A, N>, where N is the length, but you have to eventually specialize N to a concrete number before compiling.
The comment above states that Idris does not have this restriction, but it is not clear to me how that might work. I concede that by using an erasing technique (as opposed to specialization) one is not required to materialize such functions for all N - this is what one does for instance in Scala.
But the problem is: what do you do at the usage site? Say you are reading two lists off two JSON files, and you want to concatenate them using the above function. Wouldn't you need to provide at compile time a concrete value for N and M? (and consequently cast the unsized vectors to sized ones?).
If this is the case, then there is not much difference from letting the compiler materialize a function for concrete values of M and N that are used (because in either case, you have to encode knowledge about concrete values of M and N at compile time).
Maybe everything would be more concrete if I saw an Idris program that reads two vectors from external files at runtime, and then concatenates them, whatever their size is.
I believe the Idris runtime uses erasure rather than code generation[1]. The key here is that Idris will make call sites prove statically that the conditions hold, either through having static conditions of their own or proving that the programmer has checked (at runtime) that the conditions hold.
(This case of append-n-to-m-vector is a bit weird since it basically just amounts to calculating the length of the output, but my previous statement applies a bit more generally.)
[1] So one plausible representation for Vec n a would be a size + array of pointers to a's. I believe they're actually using a linked list for Vec at the moment, though.
Yes, but my question is: even if you use erasure, what are you going to do at the call site? Won't you need to state a concrete size there? For usual generic types, erasure and code generation are equivalent in what they can do, because at some point you will specialize to concrete types.
I would be curious to see an Idris program that reads two vectors out of some file (without knowing the size a priori) and concatenates them.
> Yes, but my question is: even if you use erasure, what are you going to do at the call site? Won't you need to state a concrete size there?
No, Idris doesn't actually care what "n" and "m" are (concretely)... it just knows that if you happen to have a (Vec n a) and (Vec m a) lying around, then you'll get a (Vec (n+m) a) out.
> For usual generic types, erasure and code generation are equivalent in what they can do, because at some point you will specialize to concrete types.
I'm not sure I fully understand what you mean, but I think this is slightly wrong... You can think of the fully erased (Vec n a) representation as (n, void*) in C-style or Array<Object> in JVM style. Note that "a" doesn't actually appear. So the type is actually never needed at runtime -- all functions which have (Vec n a) as a parameter will just assume that it has been (statically) proven[1] that the pointed-to-array has the correct size. Because of parametricity, functions will not be allowed to actually access any 'a' values directly. So it all works out.
I will try to be more clear. Say the size is actually unknown until runtime, for instance a vector is read from a file.
You cannot apply directly any of the size-aware functions, because it will not typecheck. I only see two avenues:
Either you assert statically that the vector has some concrete size, in which case the size is known statically and the code generation will work as well;
Or you use some other size-unaware function, in which case it is false that Idris can use runtime values as type parameters.
If I understand correctly, you are saying that there is a third possibility, but I am missing it.
> I will try to be more clear. Say the size is actually unknown until runtime, for instance a vector is read from a file. You cannot apply directly any of the size-aware functions, because it will not typecheck.
Yes, you can in fact do that! Let's assume you're building the list you're reading from a file as if it were a linked list. That's the point of dependent types -- the types can (and often do) depend on run-time values.
EDIT: I suppose I should expand on this a bit. The idea is that the "read-Vec-from-file" function can look like this:
readVecFromFile : File -> IO (n ** Vec n Int)
(let's say it's reading Ints)
So the readVecFromFile function will return you a run-time value "n" (the number of items) and a vector which has that exact length. When using readVecFromFile, you'd do something like
(n ** v) <- readVecFromFile
and you'd be absolutely free to call any function which expects a vector of any length. (If necessary you could also supply the actual length, but that's just a detail.)
The implementation of readVecFromFile is left as an exercise, but it can be done. One idea is to start with the empty vector (length 0) and an "n" of 0 -- and then to just accumulate into it, all the while keeping a "proof" that the length of the vector is what the runtime value says it is. (Using structural induction, basically, to build a proof that the runtime value is the same as the length of the vector.)
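Here's a hedged sketch of what that can look like in Idris (readVect and the blank-line-terminates-input convention are my own illustration, reading Strings rather than Ints):

import Data.Vect

-- (n ** Vect n String) is a dependent pair: the first component is a Nat
-- produced at runtime, and the type of the second component depends on it.
readVect : IO (n ** Vect n String)
readVect = do x <- getLine
              if x == ""
                 then pure (_ ** [])
                 else do (_ ** xs) <- readVect
                         pure (_ ** x :: xs)

-- Usage: both lengths are unknown until runtime, yet (++) still carries the
-- compile-time guarantee that the result has length n + m.
main : IO ()
main = do (n ** xs) <- readVect
          (m ** ys) <- readVect
          printLn (xs ++ ys)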
I haven't coded in C++ for a very long time, but won't any implementation of that function, and additional templated functions which call it, only get checked at template expansion time? As I understand, buggy templated functions and classes can go unnoticed in a codebase or library simply because no-one has instantiated them in a way which exposes the bug. This is definitely not the case for Idris.
My understanding was that concepts began as an attempt to fix this "problem".
My recollection of C++ (possibly outdated since 11 or 14, though that would - pleasantly - surprise me) is that template parameters must be picked at compile time. With Idris, you can pick X and Y at runtime and still get compile time checking.
Operationally, I'm not sure if there is. It's certainly a lot prettier, though. I'd be curious if there's a deeper difference, too. It feels like there is, at least to me...
Actually it's not that easy. Everywhere you have a type, you need to prove that the expression really has the right type, and it's not just a type annotation; it's a real mathematical proof.
To understand the complexity of the task, try proving that insertion sort really is a sorting algorithm. You write more code, and writing it takes more time than it would in a 'normal' typed language.
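To give a flavor of what "a real mathematical proof" means here, a sketch of just the sortedness part of the specification in Idris (Sorted and its constructors are names I invented; a full correctness proof would also need a permutation argument and is much longer):

-- A value of type Sorted xs is evidence that xs is in non-decreasing order.
-- LTE x y is the standard library's proof type for x <= y.
data Sorted : List Nat -> Type where
  SortedNil  : Sorted []
  SortedOne  : Sorted [x]
  SortedCons : LTE x y -> Sorted (y :: ys) -> Sorted (x :: y :: ys)

A verified insertionSort would then return its output together with a Sorted value, and the compiler would reject any implementation that cannot construct that evidence.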
Unfortunately, the usefulness is limited – the size of all Vects must be known at compile time. While this does prevent a large class of bugs that arise from incorrect compile-time knowledge, the size of the class of runtime bugs affected by this is a lot smaller, as for each differently-lengthed Vect, a distinct instantiation of the Vect type needs to exist at compile-time.
Per my understanding, what's so exciting about this is precisely that you're wrong - it does not need to be known at compile time, but can prove symbolically that the resulting length will be X + Y. And of course, given that, it's not statically creating a distinct instantiation of the Vect type for every N at compile time!
I'm not sure the exact approach, but there's definitely some heavy lifting. My understanding is that most (all?) dependently typed languages are essentially theorem provers at heart.
I've never actually used such a feature, but my understanding is that this is not strictly true. I think that the compiler can generically understand that this function adds two vectors and their lengths.
I'll see if I can contrive an example. Supposing you have a list L1 of vectors of length 5. You also have a vector V1 with user input, with length x (unknown at compile time). You then make a new list L2 by taking each vector in L1 and appending V1 to it. L2 is now a list of vectors of length (5 + x). Even though x is unknown at compile time, the compiler still knows that all vectors in L2 are of the same length. It can make restrictions based on this fact.
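A hedged sketch of that scenario in Idris (appendToAll is an invented name):

import Data.Vect

-- x is only known at runtime, yet every vector in the result provably has
-- length 5 + x, so all of them are guaranteed to be the same length.
appendToAll : Vect x a -> List (Vect 5 a) -> List (Vect (5 + x) a)
appendToAll v1 = map (\v => v ++ v1)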
It seemed strange to me when I first heard of this concept; I thought the compiler would need to consider infinite possibilities, but apparently similar things are possible even in Haskell. For instance, you can define a list as a recursive type whose value is either a Nil constructor (not just a null value of the list type) or an element together with another list. Apparently the compiler can still reason about this.
However, these are things I've heard. Hopefully someone with more experience can chime in and let me know if I'm right here.
Not quite. Types can depend on runtime values. This can in effect _force_ the programmer to perform the required validation when accepting foreign data.
Not only that, but you can encode arbitrary constraints. Want to check something silly, like whether there's an even number of elements in a list? Yea, you can do that, and pass it along. I can do stuff like: okay, I know the max size of the files we're going to accept is 1 MB.
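For instance, a sketch of the even-length idea as an Idris type (EvenLen is a name I'm making up to illustrate):

-- Evidence that a list has an even number of elements: the empty list does,
-- and consing two more elements onto an even-length list preserves it.
data EvenLen : List a -> Type where
  EvenNil : EvenLen []
  EvenTwo : EvenLen xs -> EvenLen (x :: y :: xs)

A function can then demand an EvenLen argument, and callers have to supply that evidence, either statically or from a runtime check.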
Other interesting ones to look at are F* (a joint effort from Microsoft Research and INRIA to create an ML-like dependently typed language) and ATS (a low-level, fast dependently typed language meant to replace C). What's cool about both of these is that they put emphasis on allowing imperative programming despite being dependently typed and seeming very functional.
This is actually perfect for a library on algebraic structures I've been trying to make in Haskell. For example, how does one distinguish between elements of the dihedral group of order 10 vs. the dihedral group of order 16 when, ostensibly, they have the same representation? For now, I think Haskell programmers use type-level arithmetic libraries, but this is a much better solution.
Haskell is actively moving in the direction of adding dependent types too. I believe phase 1[1] of the plan[2] is slated for GHC 8.0 (the upcoming release), and I'm sure the rest of it will follow soon.
You know what, I've found it immensely valuable to get acquainted with Haskell, even though I've never used it (and likely never will). The concepts are timelessly beautiful, simple to understand, and can feel enlightening to run-of-the-mill imperative/OOP programmers.
What's more, it seems to me that Haskell syntax is the lingua franca when discussing anything related to data types and functional programming these days. Those ->'s are everywhere, it's useful to know what they stand for.
Just skim a few chapters of learnyouahaskell.com; you won't regret it.
I'm more of a "learn by building things" type of learner. What kinds of things can I build with Haskell?
For example, I got into Ruby via Rails, because Rails lets you quickly prototype simple web apps. So I could go from "I wish I had an app that does X" to actually building it, deploying it and sharing it with others. What would a similar "learning flow" look like in Haskell? (doesn't have to be web-based)
Put another way, when I come across a problem, how do I recognize it as the type of problem that is best solved using Haskell, vs. an imperative language?
I came from Python to Erlang / Scheme / Haskell and at this point I would answer your question,
> What kind of things can I build with Haskell?
With: Everything.
We use Haskell in production at Plum for our REST APIs, job schedulers, web applications, AWS service interfaces, a static site compiler, DB modeling, command line utilities, etc...
We also use it for two CLI utilities that are cross-compiled for the ARM9 on our IoT product.
I consider Haskell to be superior to any of the dynamically typed languages when writing production-level code, it's cleaner, safer, easier to maintain, easier to refactor, and much more fun IMHO.
[EDIT] I neglected the other part of your question, "What is the learning flow like?"
Definitely a bit rougher than Python or Ruby, I will not lie, but don't be discouraged. It simply means you need to do a bit more studying up-front before you can tinker without being caught at every turn by the straitjacket.
I would first go through Learn You a Haskell because it is pretty accessible and introduces the language basics well enough. Then study the type system. You must learn Haskell's type system and terminology before you can understand more advanced code.
This may be nitpicking, but certain classes of programs cannot realistically be built with Haskell. Any time you need to tightly control latency (soft realtime), it won't really work, since you have both a garbage collector and lazy evaluation. Memory-constrained systems are pretty tough as well, since you don't really get insight into allocation/deallocation, which also makes structuring your data into a particular memory layout tricky compared to C(++), for instance.
Not to say that Haskell isn't awesome. It should probably be used for more systems. It just can't be the "hammer" to make every problem into a nail.
This depends on what you mean by "with". As you say, you can't sit down, bang out some ordinary Haskell code, and expect GHC to give you an executable that will do real-time as well as you can in C.
That said, it's perfectly possible to write a DSL that handles scheduling, uses GHC's type machinery to track memory use and execution time, and has GHC generate a program that will generate C code that meets hard real-time guarantees. In fact, someone wrote one; it's available on Hackage (https://hackage.haskell.org/package/atom), and my understanding is that it's used in production for control software in the auto industry.
I am very much the same way. What I eventually did to get a foothold on the language was power through the first few chapters of 'learn you a haskell', read 'haskell the hard way', and then sit down and type out a tutorial implementing the Kaleidoscope teaching language in Haskell [2].
It was slow at first, and there was a lot of "now what's this arrow doing here?", but I would go back to the books when I had questions and eventually things started making sense. That won't work for everyone, but maybe it will work for you too?
The folks in #haskell are generally pretty helpful, and delightfully easy to troll.
Haskell can actually make a pretty good imperative language (and you can put together little imperative sub-languages as well). That aside...
There's a reasonable perspective that says the type of problem that's best solved in a language is a problem for which there's good library support. For that, I'd probably recommend scanning https://github.com/Gabriel439/post-rfc/blob/master/sotu.md
But in my experience, where Haskell really shines is sketching out operations on some type that I know I'm going to get wrong - substantially or subtly - my first many tries. When I go to fix it, the compiler helps me find everything I need to change in tandem, and feel ahead to find inconsistencies in my model before I get there in the code. One example of this kind of problem is compilers - where we parse into an AST, transform the AST, and then produce other things from the AST. As development goes on, the AST evolves, and you have a lot of help knowing what needs to change to match and what you can ignore. I've recently been doing this for SQL (though with the goal of analyzing the queries, not producing an executable).
You might want to try using stack instead of cabal though to ensure you avoid any dependency issues. I think cabal will work fine these days, but I have been using stack/stackage for some time so can't guarantee it.
Algebraic data types and pattern matching are great for compilers, interpreters, static analyzers, and basically anything dealing with implementation of programming languages. That said, Haskell can pretty much do anything that other languages can these days, so it might be fun doing something you're already familiar with (say, IRC bot) and reimplementing it in a new way.
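For a taste of why, a minimal sketch in Idris (the same shape works in Haskell):

-- A tiny expression AST and its evaluator. Pattern matching follows the
-- shape of the data, and the compiler can check that every constructor is
-- handled (totality checking).
data Expr = Lit Int
          | Add Expr Expr
          | Mul Expr Expr

eval : Expr -> Int
eval (Lit n)   = n
eval (Add x y) = eval x + eval y
eval (Mul x y) = eval x * eval y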
I recently had some murky thoughts on my ideal Matlab replacement, and it would have a feature like this. It would be huge for array-oriented programming:
func train_model(X: [n d], y: [n 1])
So many lines of code are spent verifying that the sizes of two function arguments are compatible.
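I haven't written this in Idris, but a hedged sketch of the idea with dependent types (Matrix and trainModel are invented names; the body is just a placeholder):

import Data.Vect

Matrix : Nat -> Nat -> Type
Matrix rows cols = Vect rows (Vect cols Double)

-- X must have exactly as many rows (n) as y, or the call site fails to
-- compile; no lines spent on runtime size verification.
trainModel : Matrix n d -> Matrix n 1 -> Matrix d 1
trainModel x y = replicate _ (replicate _ 0.0)    -- placeholder: zero weights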
I haven't used Idris, but it sounds like something I might want. I really want a tool that lets me mix and match operational code with proofs. It's common now to write a unit test while debugging something, less common to write a randomized property tester, and rare to see a theorem prover being used.
It would be fantastic if I could casually throw in a proof while debugging a hard problem.
Thanks for the link. I was looking into Liquid Haskell and refinement types and thought it was a great idea, but rued the fact that it wasn't built into the language. I am definitely going to try Idris for one of my projects.
It's nice for the Haskell folks that they'll be getting some sort of dependent types, but suffice it to say that they will be of a very different sort than the Idris ones.
Not to mention, there is hope of giving a semantics to Idris since it is based on a fairly routine variant of type theory, whereas I don't think there is any hope at all of understanding "which" type theory the Haskell folks shall have implemented.
I think it will be a while, if ever, before we're able to handle passing types around as values, in Haskell, the way we do in Idris and Agda. Dependent typing isn't just some feature. It touches a whole other way of thinking about things.
If you go all the way with Agda, you have significantly constrained recursion, to the point that the language isn't Turing-complete anymore, but in return you get termination checking and other nice things. As it turns out, most of the code we write doesn't need unbounded recursion. Seriously, can you think of the last time you wrote something like that?
And I think that'll be a hard pill for people to swallow, much like how it was really hard to sell memory safety to C/C++ guys back in the day.
I would suggest using Python or some other mainstream language, rather than Haskell, in the examples contrasting with Idris. People who know Haskell will most likely already be familiar with dependent types, and for people who don't, the Haskell syntax could be confusing (as some comments indicate).
Many mainstream languages like Python don't even have a syntactic notion of types. The best you could really do is say that Idris would get you some typing reification that you might do in those languages at no cost, but it would seriously sell short the power of dependent types like you see in Idris.
It is! That's what is so cool about it! Idris lets you write an append that will work for any X and Y chosen at runtime, but will check that the result must have length X + Y at compile time.
The story, if I recall correctly, is that the programmer Edwin Brady had a previous project that was a proof engine and, lacking a suitable name for it, named it after an old British children's cartoon called Ivor the Engine.[^1] When it came time to name a newer project, he decided to name it after another character from the same show: Idris, the little red Welsh dragon. This, incidentally, is why the Idris language's logo is a stylized red wing.
but could not infer why a dragon from Ivor the Engine had been chosen. That the author had previously written a proof engine makes it all make sense now. :)
Please, please don't let this be the top comment on Idris. I really want to hear from people who have experience with it in different regards.
For example, what's the editing experience like compared to e.g. OCaml with emacs/Merlin? Is there anyone in a position to compare 1ML's approach to unifying type/value languages vs using dependent types?
1ML doesn't really unify the type and value languages. First-class modules are syntactic sugar over System F-omega, which means that the elaboration/type-checking phase of 1ML duly splits modules into type and value components, which live in separate worlds.
Personally, I find 1ML's approach more pragmatic, or, at least, easier to digest for most programmers - including Haskell programmers. Dependent types are more powerful, but they aren't free from complications. For instance, unifying the type and value languages willy-nilly can cause trouble if you want your value language to be Turing-complete, because now type-level computation can diverge too! In a language based on System F-omega, type-level computation is guaranteed not to diverge, because its type level is the simply typed lambda calculus. Some use cases might warrant providing more powerful type-level computation facilities (e.g. calculating the shapes of multidimensional matrices), but it isn't clear to me that the full power of a Martin-Löf-style dependent type theory is needed.
Haskell allows the use of brackets and indentation to format code, but nobody uses brackets. Do you find Haskell hard to read? And if so, is it because of the indentation? I bring up Haskell because Idris is very similar to Haskell syntactically.
These are functional languages, and there isn't a lot of code to chunk together with brackets. Functions are often just one line, and it's a bit cumbersome to make functions that require multiple lines. Do you believe brackets add to the readability of a one line function?
There's nothing about braces that ensures visual cues to nested code.
Significant whitespace does precisely that. And eliminates brace placement arguments.
int func()
{
while dosomething()
{
dosomething()
dosomething()
doanotherthing()
}
dosomething()
}
def func():
    while dosomething():
        dosomething()
        dosomething()
        doanotherthing()
    dosomething()
> There's nothing about braces that ensures visual cues to nested code.
A machine can look at the braces and re-indent the code, so that you don't have to look at the braces. All while you rest assured that the meaning didn't change:
I just popped it into Vim, selected all and hit =:
int func()
{
        while dosomething()
        {
                dosomething()
                        dosomething()
                        doanotherthing()
        }
        dosomething()
}
Some semicolons are expected so things are a little off.
The braces didn't do anything for a human until indentation was added, whether by human or machine. If you lifted those braces out, you'd still know what the intent was (although in C it would be wrong :)
The only thing I miss about braces is being able to jump to the end of a block easily. In python and similar that's a bit harder to do with an editor.
> The braces didn't do anything for a human until indentation was added
Having explicit delimiters (I don't care if it's {}, begin...end, or whatever) provides several benefits that I have often used.
1. (by far the most important) They trivialize the recovery of indentation that has been trashed by a bad/misconfigured/unfamiliar editor or coworker who inserted "\t" characters into the file. Just run indent-region or send the file through indent(1).
2. Moving code is easier. Just kill/yank it to where you want it and let indent-region or whatever re-indent as needed. Without delimiters I have to either re-indent manually or use an editor with a feature that changes a region's indent level. Even with such a feature, I still have to tell the editor "move this two levels deeper". With delimiters, I can leave that work to the editor.
3. The movement advantages you mentioned.
As for the extra work of having to close blocks (including the infamous LISP ")))))" motif), that's what features like electric-pair-mode are for.
Even better, delimiters can convey more information. Color cues have proven to be very effective at conveying important information; syntax highlighting is very popular, even when showing code outside an editor (such as inside a <pre> tag). I find extending these color cues to also show nesting[1] to be almost as useful as basic syntax highlighting. (I'm using rainbow-delimiters[2] in that screenshot).
I've never used a language where space indentation was used over curly brackets. So question for those who have... When doing so, wouldn't copying and pasting code around potentially cause a lot of accidental issues? Are there some negatives and side effects of the indentation style? Just curious.
If you copy and paste code irrespective of bracket position, you will have the same problems. It turns out not to be an issue.
A big positive of using indentation is that it enforces a visual indication of scope vs. languages where culturally it is normal to find code with lots of giant one-liners and braces on one line.
But overall, you just get used to it and don't notice it. Everyone who yells about languages that use indentation is bikeshedding in the worst way.
I think it depends on the way your brain works. Some people prefer tables full of numbers; others prefer data visualizations. Some people prefer compact, tight code with lots of symbols; others prefer spatially organized code. There are probably people at the extremes and people in the middle, but my observations have found that lots of people generally fall into two main camps: a number/symbol/dense-data aesthetic group, and a visual/spatial group. Left brain/right brain? I don't know. Neither is better or worse. It just depends on who you are and who you are working with.