> But why is slow bad? Fast software is not always good software, but slow software is rarely able to rise to greatness. Fast software gives the user a chance to “meld” with its toolset. That is, not break flow. When the nerds upon Nerd Hill fight to the death over Vi and Emacs, it’s partly because they have such a strong affinity for the flow of the application and its meldiness. They have invested. The Tool Is Good, so they feel. Not breaking flow is an axiom of great tools.
To add some hard data to this, the “eight second rule” [1][2] is the maximum time a user of a system can wait, on average, for the system to return control before some other thought pops into their head and interrupts their flow.
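Concretely, [1] breaks this into three classic limits: roughly 0.1 s to feel instantaneous, about 1 s to keep the user’s flow of thought, and about 10 s to keep their attention at all. A minimal sketch of how those limits might translate into UI feedback (the thresholds are from [1]; the function mapping them to feedback is made up):

```python
# Hypothetical helper: map an operation's expected duration to the
# feedback a UI should give, per the response-time limits in [1].
def feedback_for(expected_seconds: float) -> str:
    if expected_seconds <= 0.1:
        return "none"                  # feels instantaneous; no indicator needed
    elif expected_seconds <= 1.0:
        return "busy cursor"           # flow of thought survives a short pause
    elif expected_seconds <= 10.0:
        return "spinner"               # attention is slipping; show activity
    else:
        return "progress bar + cancel" # user will task-switch; let them bail out
```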
Edit: I think there are still many, many opportunities to improve flow in many processes. Top of my list would be keeping compile times in the single-digit seconds.

[1] Response Times: 3 Important Limits (2010) https://www.nngroup.com/articles/response-times-3-important-... says it’s more like ten seconds

[2] Website Response Times (2010) https://www.nngroup.com/articles/website-response-times/

[3] The Need for Speed, 23 Years Later (2020) https://www.nngroup.com/articles/the-need-for-speed/
Sadly, the future of software is to become bloated and slow, for business reasons. Any software that is not bloated has fewer features, and can be relatively easily duplicated by a competitor. From a business standpoint, this is bad and a danger to the company’s survival, as it can now be replaced by a competitor that can crank more features into the product. By adding more features, companies pretty much guarantee that the software will also be slow, at least for a significant number of use cases.
as long as free software allows users to choose fast software instead of business software this shouldn't be a problem, but the web browser disaster is an instructive tale of how this can go terribly wrong
> I think people's attention span has shortened since then...
Unlikely. Rather, what I think has happened is that as the internet keeps filling up with more and more trash, we have been forced to be harsher with our willingness to pay attention to any one specific thing. In other words, it’s not the attention span that has changed; it’s the signal-to-noise ratio that keeps getting worse and worse. And that’s why you need to demonstrate value to your audience quickly, or they will move on to the next thing. But don’t blame the audience for that. Blame the producers of the content.
What you’re calling “attention span” is probably more accurately called “user expectations”, or “user patience” which, obviously, can easily change.
On the other hand, there seems to be a measurably consistent maximum amount of time that one can tolerate a pause without also breaking out of flow. This might be cultural, but it seems to be consistent enough that it could be physiological.
A friend of mine has a very interesting view with regards to flow: It is a bad thing. You don't want to be in a state of flow, because you are not thinking hard enough about what you are doing then, but just following the flow.
That is a very _interesting_ take. I think it’s ~wrong but, like most interesting takes, there’s some truth to it. The “truth” here is that, yes, you’re not thinking about some things; that’s literally what I consider flow. It’s not thinking about **how** you’re going to do X; it’s operating a level above that, focusing on “do X”.
I think it’s the difference between taking the action “walk to the door and open it” (the X in the flow state) and “OK: get up, left leg forward, right leg forward, left leg forward, right leg forward, move arm to door, grab, twist, pull”. Flow is just not thinking about the things that are not important. A lot of code is just gluing and can be done in a flow state, while still letting you stop and think in the really important places. What your friend is calling out is that, yes, sometimes you can “automate”/flow things that actually are important, but honestly I don’t think that happens very often in this context (software, real use(r)-cases), because those cases are usually so narrow and specific. We should design software to let users “flow” through the unimportant stuff: click Next for the additional questions. Did you notice the button responds on mouse-up? I hope not; I hope we made that piece “flowable”.
There is something to be said about flow in mechanical activities, because on those, yes, you are mostly not thinking, just following the "rhythm". Safety is normally one of the first things to get out of your head during that.
But programming is nothing like mechanical activities.
But the act of typing in the code and running/debugging it should be. Any toolset where you have to wait excessive amounts of time just to be able to edit/compile/launch/debug/review results is not really going to make you think more carefully about your code (an exception could be made for super long compile times, which do force you to be more careful with what you input, but I’m not really convinced that helps you write better code).
For me, programming is two things:

- an often very goal-oriented activity (but sometimes exploratory) where I (try to, in the latter case) transform a part of the system according to my vision of a better system

- the mechanics of moving the caret around (arrow keys, file selection), selecting text (shift and ctrl modifiers in addition to arrow keys), using shortcuts to invoke predefined transformations (automated refactoring), etc.
It is the thinking I get paid the good money for.
But if I was constantly struggling with my tools, it would take more time and I would get exhausted faster.
I thankfully don't suffer from it but ask anyone with RSI about that.
My broader point, however, is that if even a small but non-trivial part of my brain capacity must be used to deal with the mechanics, that capacity is no longer available for the thinking.
In a way it is like when I learned to use a climbing harness for work and learned to trust it, the effect on my output when I worked in the field was amazing because everything I did was no longer interrupted by constant considerations and adjustments to stay safe.
I can see where your friend's coming from - but I'd disagree up to a point. It's about which aspects you want to be in a state of flow.
If you’ve used a computer for any length of time, then when you type you’re not thinking about the act of pressing each key; you’re probably thinking several words ahead of your fingers. If you type half a dozen characters and nothing happens on-screen, and then they all appear at once three or four seconds later, that’s an intolerable break of flow. (And it happens more frequently today than it did when computers were literally 0.1% of their current speed.)
If you're driving, you don't want to be thinking about the act of reaching for the gearstick or indicator stalk, but you do want to be thinking about what's happening on the road - so there you can indeed take "flow" too far.
On the other hand, if you're performing music, your fingers need to be in a state of flow in order to free you to concentrate on the emotion of the music - yet that, too, would ideally be in a state of flow...
It's the absolute opposite, you are totally conscious about what you are doing, and clear headed about the decisions you make.
It does feel effortless, but not in the sense that you are not doing the thing consciously, rather in the sense that the thing seems so obvious and easy as you perform in perfect fluidity.
That is not what I experience. The thing I refer to when I refer to "flow" or being "plugged in" or "in the zone" is that all of the relevant paradigms and information are in my working memory.
When I'm in a flow state, I see and understand not only the function in front of me, but the functions it calls, the functions that call it, and the data structures involved. I can see the tree in minute detail at the same time I can see exactly how it fits into the ecosystem of the forest.
It isn't that I'm not thinking hard. It's that thinking hard is possible, rather than interrupted by a constant need to regain context.
I don't think I'd go as far as to claim flow is bad, but there's something to that.
I’m a long-term Emacs user, and between changing keyboards a few times and an unfortunate spell of a few years using an IntelliJ-based IDE with Emacs keybindings that are far better than most but inevitably off (most infuriatingly, to conclude an incremental search you use Ctrl-g, which in actual Emacs jumps you back to where you started the search), my reflexes are all over the place. I’m therefore having to force myself to slow down and be consciously aware of everything I do.
I'm finding it a really useful exercise for more than just the retraining aspect. It does seem to increase my overall engagement with the task I'm carrying out, beyond just the operation of the tool.
Interesting take: you’re supposed to think about what needs to be done before “going into the flow” to do it.
But yes, sometimes this wastes time, because you discover afterwards that what you did wasn’t what should have been done; if you had been less focused on doing it, you might have changed your mind sooner.
The way it has been described to me is that if you always take the same route to work then your cognitive performance goes down. Staying curious and exploring the different routes is hard with relatively few waypoints but then you can zoom into more granular decisions (or out ... to the causes).
A long time ago, I used vi instead of Emacs because it was much faster.
Recently I tried to replace VSCode with something else because VSCode isn’t responsive enough. But vim is annoying to customize as a C++ IDE, so I tried SpaceVim and discovered that it starts even slower than VSCode :-( so I’m still using VSCode.
There's several GB/s of bandwidth from disk to screen these days, so unless you're processing several GB of data you have no excuse for anything in your programs to take more than a second.
It's an insult really, such incredible waste of all the potential processing power we all have.
They don't need an excuse. They just spend their time adding more features instead of speeding things up. And all the users flock to their tool over yours because the extra feature saved them hours in their process while your performance optimization saved seconds but took the same dev time.
With how much asset sizes have also grown... this isn’t that unlikely, is it? A 4K screen refreshing at 60 Hz pushes closer to that figure in raw pixels than you’d think. Granted, you probably aren’t changing every pixel every refresh, but the point is that a lot of data goes into your screen.
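Back of the envelope, assuming uncompressed 4-byte RGBA pixels and a full redraw every frame:

```python
# Rough pixel bandwidth of a 4K display at 60 Hz, assuming 4 bytes
# (RGBA) per pixel and every pixel redrawn on every refresh.
width, height = 3840, 2160
bytes_per_pixel = 4
refresh_hz = 60

per_frame = width * height * bytes_per_pixel  # ~33 MB per frame
per_second = per_frame * refresh_hz           # ~2 GB per second
print(f"{per_frame / 1e6:.1f} MB/frame, {per_second / 1e9:.2f} GB/s")
```

So just repainting a 4K desktop is already in the ~2 GB/s ballpark, before any application work happens.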
Even just considering data, this is probably more true than you’d realize. You probably have thumbnails for the covers/titles. Then there is the sheer volume of media. And heaven help you if your computer is still trying to index a TB or so drive.
Just check how much ram your email program is using someday. Especially if you are hoping to keep an index of all email ever locally so that you can get rapid searches. The tricks that work for smaller mail boxes start to struggle with folks that insist on keeping every email over decades.
Could it be better? Almost certainly yes. Unfortunately, without taking deliberate effort to do less, I don't see it happening.
I have the same obsession with speed, and I gave up on desktop environments years ago, ending up with i3 and mostly CLI. In recent years I was forced for some periods to work on macOS, Windows and Gnome. While they clearly work, I simply cannot stand them. My muscle memory starts an application and I immediately start typing, and to my surprise something else happens, because the app has not started yet and I still have focus in another window or on the desktop. Or I chain keyboard shortcuts and the same unexpected results happen because the previous command has not completed. It forces me to wait and pay attention to what the system is doing, and this annoys me to no end.
I know plenty of people being a lot more productive than me using slow and not very ergonomic environments. So I understand this is just an obsession of mine and not necessarily justified. But once I got used to instant everything it's so hard going back.
Same here. Nowadays, I check out CLI applications before committing to a GUI version. All those seconds I shave off by not waiting too long for things to load, and by hardly having to leave the keyboard, add up to a real advantage. It just takes more time to get used to initially.
How is starting up an application faster in i3? There is a fundamental problem of slow GUI startup on desktops that mobile OSs have solved, but much faster desktop PCs have not.
I sometimes wonder what would cause software to be faster (or less buggy). Websites often seem to struggle with this despite overwhelming incentives for speed (and if you browse without an ad blocker, things are even slower!), even though typically any internal study will show that increased speed improves key metrics.
SaaS apps also seem to suffer from often being slow or buggy. I guess part of the reason is that switching is hard and performance problems may not be obvious until it’s too late. How do you even evaluate a product like Jira without using it a bunch? A demo is surely insufficient. I think people buying software are also more drawn to features than speed. Maybe they are right, or maybe they need to somehow be taught that they are undervaluing quality/speed, but how do you teach that? And how do you even assess software quality (how well it actually solves your problems and whether it is full of bugs) without a significant trial?
Maybe there is an argument that the software is so valuable that even awful software that is not fun to use is valuable and quickly becomes hard to cut.
One solution might be more heterogeneous services but then you might be paying more per-seat as you still have lots of overlap and it also sucks to have N bug trackers and M wiki/doc systems and so on. So maybe you then want services to be more interoperable but I think that may just force them all to suck.
Speed and responsiveness are two very different things, and the latter is, I think, by far the more important where desktop computing is concerned.
My complaint with modern systems (by which I basically mean anything less than 30 years old) is that I rarely feel like I have the computer's full attention.
Responsiveness / latency issues on mobile are literally driving me nuts.
Icons and links that move between the time my brains initiate the chain of events to press on something and the time my finger actually hits the target are causing me so much frustration :(
That goes beyond loss of productivity and is literally affecting my mental health.
It’s my observation that only developers really notice or care about speed. As long as something is “fast enough” (and the benchmark here is pretty low), most people won’t specifically choose a fast app over a slow app with more features.
This means that sometimes we put too much emphasis on speed. But it is not as important to everyone else as it is to us. A product whose defining feature is “it is fast” usually fails to get noticed, except by other developers.
Exactly this. I develop really “heavy” desktop apps for a living, and this is my experience too. End users are solving a problem, and that problem might be “producing a quote for a machine for a customer”. If that took them an hour in the snappy but feature-lacking old software, and the new software does it with enormous bloat, frustrating pauses and crashes but in half the time, then customers will love it and praise it.
When there is a bug report for a performance issue it’s rarely some minor delay but usually something absurd like a 15 minute wait due to something accidentally quadratic.
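For anyone who hasn’t been bitten by one: “accidentally quadratic” code usually isn’t exotic. It’s most often a linear scan hiding inside a loop. A minimal sketch (the names are made up):

```python
# Deduplicate a list while preserving order.
def dedupe_quadratic(items):
    seen = []
    out = []
    for x in items:
        if x not in seen:   # `in` on a list scans it: O(n), so the loop is O(n^2)
            seen.append(x)
            out.append(x)
    return out

def dedupe_linear(items):
    seen = set()
    out = []
    for x in items:
        if x not in seen:   # `in` on a set is O(1) on average
            seen.add(x)
            out.append(x)
    return out
```

Both are fine on a hundred items; only the second is fine on a million, which is why the bug only surfaces as an absurd wait on someone’s large dataset.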
If you ask customers of course they won’t prefer the 2 second improvement for the function they use hundreds of times per day because it only saves them a few hundred seconds. They’d rather have one more feature that saves them 30 minutes every week instead. And the thing is there is an endless number of such features even after 200 man years of dev.
At the same time, as a developer I have almost no understanding of this mindset. If I was forced to work with slow and frustrating software all day I’d quit and grow potatoes for a living instead (and I use Visual Studio, so my threshold for bloat and frustration is pretty high).
Expectations like that aren’t only around speed - a while ago I was asked to investigate a certain hardware/software platform combination, and with the constraints given, text came out perpetually blurry, so my answer was just “it’s unusable”, but then some non-devs had a look and were like “eh, it’s legible, we can make this work.”
this take could hardly be further from being correct
your users just aren't telling you they're frustrated and having trouble using your software; they may not even be aware of it, but it profoundly shapes how they interact with the software
> All other things being equal, more usage, as measured by number of searches, reflects more satisfied users. Our experiments demonstrate that slowing down the search results page by 100 to 400 milliseconds has a measurable impact on the number of searches per user of -0.2% to -0.6% (averaged over four or six weeks depending on the experiment).

That’s 0.2% to 0.6% fewer searches for changes under half a second!
> Furthermore, users do fewer and fewer searches the longer they are exposed to the experiment. Users exposed to a 200 ms delay since the beginning of the experiment did 0.22% fewer searches during the first three weeks, but 0.36% fewer searches during the second three weeks. Similarly, users exposed to a 400 ms delay since the beginning of the experiment did 0.44% fewer searches during the first three weeks, but 0.76% fewer searches during the second three weeks. Even if the page returns to the faster state, users who saw the longer delay take time to return to their previous usage level. Users exposed to the 400 ms delay for six weeks did 0.21% fewer searches on average during the five week period after we stopped injecting the delay.
> There’s a great MSR demo from 2012 that shows the effect of latency on the experience of using a tablet. If you don’t want to watch the three minute video, they basically created a device which could simulate arbitrary latencies down to a fraction of a millisecond. At 100ms (1/10th of a second), which is typical of consumer tablets, the experience is terrible. At 10ms (1/100th of a second), the latency is noticeable, but the experience is ok, and at < 1ms the experience is great, as good as pen and paper. If you want to see a mini version of this for yourself, you can try a random Android tablet with a stylus vs. the current generation iPad Pro with the Apple stylus. The Apple device has well above 10ms end-to-end latency, but the difference is still quite dramatic -- it’s enough that I’ll actually use the new iPad Pro to take notes or draw diagrams, whereas I find Android tablets unbearable as a pen-and-paper replacement. ...
> Curiously, I rarely hear complaints about keyboard and mouse input being slow. One reason might be that keyboard and mouse input are quick and that inputs are reflected nearly instantaneously, but I don’t think that’s true. People often tell me that’s true, but I think it’s just the opposite. The idea that computers respond quickly to input, so quickly that humans can’t notice the latency, is the most common performance-related fallacy I hear from professional programmers. ...
> Why don’t people complain about keyboard-to-display latency the way they complain stylus-to-display latency or VR latency? My theory is that, for both VR and tablets, people have a lot of experience with a much lower latency application. For tablets, the “application” is pen-and-paper, and for VR, the “application” is turning your head without a VR headset on. But input-to-display latency is so bad for every application that most people just expect terrible latency.
> For very simple tasks, people can perceive latencies down to 2 ms or less. Moreover, increasing latency is not only noticeable to users, it causes users to execute simple tasks less accurately. If you want a visual demonstration of what latency looks like and you don’t have a super-fast old computer lying around, check out this MSR demo on touchscreen latency.
there are mountains of research on this from every angle and it turns out that users do not love and praise software that has high interaction latency
they hate it and reorganize their life if necessary to use it as little as possible
(but it is true that often there is no snappier software that does what they need)
also sometimes customers do love and praise it if they aren't the users
> (but it is true that often there is no snappier software that does what they need)
I think this is the key phrase. Given the choice between snappy software that doesn’t do all they need and slow software that does, they will rearrange their lives and use the slow/bloated software that does what they need.
It's exactly the same question answered as "should we improve performance or add a feature" that we ask users all the time.
> they hate it and reorganize their life if necessary to use it as little as possible
My particular software is of the kind that all users use it 100% of the time (i.e. 40h a week more or less). So there really isn't any way to use it more or less. They just get N dollars of work done in a week and that N will beat any amount of frustration, apparently.
> They just get N dollars of work done in a week and that N will beat any amount of frustration, apparently.
In this case, the companies using it are losing money when software makes workers do less than they could because of bad performance. And frustration is externalized on the users, who are a captive user base.
> also sometimes customers do love and praise it if they aren't the users
This is important and often overlooked. Many times, the customers for some software are not the same as the users of that software. The customers are the only ones who matter: they're the ones who pay for the software. The users don't matter at all: even if they absolutely loathe the software, so what? They have no power to change things and aren't writing the checks. Screw 'em. If they don't like it, they can find another job.
As a result of this, we get "enterprise software".
This is true (in most cases I assume), but it's users that make the free marketing, not the managers that pay the bills. They might say "productivity is up", but they don't say "I like using it". If you don't want to spend money marketing software, you probably want both of those kinds of free marketing.
I don't think most enterprise software users actually talk about the crappy software they use outside of work, or to other people in the same industry but at other companies.
For instance, at one job I had to use time-keeping software to track my hours, vacation time, etc. Like all employees, I was just a user of that software, but it had nothing to do with my normal job, it was just to track attendance and such.
Except for complaining, I never went around to other competing companies to talk to them about their time-keeping software. And my company's upper management sure as heck didn't bother to ask me if I liked the software, or if they should contract with a different vendor. Even if they did, it's not like I had a lot of experience with different, competing products. Plus, being enterprise software, it was heavily customized for that company.
The users of this software weren't providing free marketing to anyone. The marketing was probably done with backroom deals over expensive dinners and sports event tickets, I'm guessing.
I understand where you’re coming from, but when you have expert users of your software, they’ll be extremely sensitive to speed. Maybe a casual user won’t care, though.
For example, my company has a team of experts who use our internal tool for managing company-related tasks. They use this tool every day as a core part of their job. It’s a team of literally hundreds of people who use this tool.
If we add a feature which adds a click to some step of something they’re used to doing, we’ll never hear the end of it. Their team celebrates whenever we can add any sort of speed increase to something they’ve been using for a while. Adding shortcuts to common functionality is always a big user request.
It’s obviously not a core part of our job to consider only performance, we add and change lots of features all the time. But if we ever neglect performance for too long, it can really start to weigh their team down.
It’s interesting working so close to the people who use the software, because they don’t hesitate to give feedback like that, which you’d never get from just an internet software release. Lots of other software will have these “expert users” too, but you won’t hear the feedback.
> As long as something is “fast enough” (and the benchmark here is pretty low), most people won’t specifically choose a fast app over a slow app with more features.
The author chooses to use all sorts of slow software. Slowness won’t stop people using it; they’ll just use it and think it’s poorly designed/engineered.
I’m surprised by this experience because I think most user studies seem to find that speed improvements can have a big impact on important metrics for apps/websites used by many people. Presumably your users don’t have much choice in what software they get. Another argument for speed is that developers tend to have high-end hardware so things that are bearable for them may be significantly slower for users. This matters more for phones (wide range of performance) and things that depend on network speeds.
I work in medical software and can think of multiple times where we ended up with a slew of tickets after we made some change that added a single additional click to radiologists’ workflows. I suspect that a lot of software with professional users would draw similar UX complaints.
If all else is equal, faster is better. But there's tons of examples of slow software that is more functional and/or easier to use winning over faster software. Look at things like Ruby and especially Rails, Python, any JetBrains product, Java (fast now but back in the day quite slow), webapps in general, probably your favourite DE, etc...
Life is short, optimizing everything takes time and most people don't care as long as something is usable.
Awesome! I feel like years ago there was so much focus on speed. We’d optimize websites for 56k modems. Nowadays much of the SaaS software I started using years ago has become worse with modern SPA frameworks. FreshBooks was a big one. Even Shopify I find frustrating.
For the last couple of years I’ve been on Roam Research for note taking. But it takes more than 8 seconds to load for me. It’s so slow that I’m on a DIY mission to make my own solution. Will have to check out nvALT though.
Also try Logseq, which is kind of a free Roam. It’s fairly fast to load once it’s built its index of your notes, and once opened and loaded it’s basically instant.
> Similarly, I started using Lightroom around 2007 because it was so much faster than Apple’s Aperture.
I had the exact opposite experience. I used Aperture because it was so much faster than Lightroom. I could apply the same changes to multiple files at once, which at the time, at least, Lightroom couldn’t do. You had to work on each file separately. If you did a burst on manual, Aperture’s workflow was waaaay faster. I’m still sad Apple end-of-life’d it.
Other than that I agree with the spirit of the article. I love it when software is so fast you don’t even have to think about the software. It’s so rare these days, especially with web apps.
It is my experience as a developer that the shorter the feedback loop is, the more productive I am. Fast auto tests are really useful for this. Being able to test against an in-memory DB instead of a much slower “real” DB is another example.
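A minimal sketch of the in-memory DB trick, using SQLite’s `:memory:` mode (the schema and test are illustrative, not from any particular project):

```python
import sqlite3

def make_test_db():
    # The database lives entirely in RAM: created in microseconds,
    # thrown away when the connection closes. No network, no disk.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    return conn

def test_insert_and_query():
    db = make_test_db()
    db.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
    row = db.execute("SELECT name FROM users").fetchone()
    assert row[0] == "alice"
```

Each test gets a fresh database in well under a millisecond, which is what keeps the feedback loop tight.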
So it is kinda obvious to me that non-developers also will benefit from fast feedback loop software.
It is commonly agreed that the marginal cost of software is $0.
And yet, for some reason, a large part of the software that we use, is, well, not great.
I don’t think the reason we’re not making better/faster software is economic.
I think that, sadly, we're not collectively able to do better, even if we wanted.
It is the same with all complex cultural products, the marginal cost of a book is also close to zero, and yet we're not collectively producing that much great literature...
I don’t understand what marginal cost has to do with this? The marginal cost is 0 because it is the cost of copying. The cost of creating new software is not zero.
Because almost no software starts with millions or billions of customers. Almost no software reaches that many users. Of the software that does, a lot of it is monetized indirectly (ads, telemetry, being a "value-add" for something else).
And, importantly, because as much as it's natural for software to be commoditized (thanks to low natural barriers to entry), it's also the core goal for pretty much every software company to resist commoditization. This is especially pronounced with SaaS. One way or another, the users are to a degree a captive audience.
Or, in other words, the software vendors don't give a flying fuck about performance, and ergonomics in general, because they don't have to - it's not like you can just switch to a competitor. Almost none of the software used by masses has a real alternative - there are always incompatibilities, feature mismatch, and data ownership issues that make it too big of a hassle for most people, most of the time.
Lots of software has monopolistic tendencies and/or large switching costs.
Most customers (outside of developers) don't have the ability to reliably ascertain the true quality of software, at least not before buying it (as they aren't software experts).
And (especially for the worst software, aka "enterprise") the people buying it are not the people who will actually use the software, so there is little incentive to improve the UX of it beyond the bare minimum.
And fundamentally... businesses don't compete on quality. They compete on profit.
1000x yes. I always complain about how JIRA is unintuitive, clunky, and confusing. But all of that would be fine if it were fast, because at least I’d be able to try more options faster while figuring out which button I actually wanted to press.
I die a little inside every time I lose an argument over whether or not we should spend time optimizing our software (I lose that argument every time I have it).
Pro tip, as soon as you are arguing over the code, you've already lost.
Most places are so far behind on priority tasks, that to slow down and discuss/debate the priority of tasks is likely to just cause more issues.
It isn't that you are even wrong. You almost certainly aren't. But nobody will wait for things to get better. If you stall out on features while prioritizing, you will lose folks.
Unfortunately, I don't have a legit answer/solution. My view is it is a lot like keeping a clean kitchen. That is table stakes and assumed. Such that folks in a kitchen have to create a somewhat self cleaning flow of things. Similarly, you have to constantly optimize and clean up the code as you go. Double down on what you have, and constantly work to improve it. Not to replace it.
Whenever you hear that, add the whole quote: “We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.”
The full quote is: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."
It doesn't mean "don't optimize" at all. It means: optimize those parts that actually make the application slow. I can tell from experience that intuition has repeatedly failed me about which parts those are. Measure, don't guess, before you optimize.
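A minimal sketch of what “measure, don’t guess” looks like in practice, using Python’s built-in profiler (`slow_report` is a made-up stand-in for your own entry point):

```python
import cProfile
import pstats

def slow_report():
    # Deliberately wasteful hot spot, standing in for real work.
    total = 0
    for i in range(10_000):
        total += sum(range(i))
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_report()
profiler.disable()

# Print the ten functions where the most cumulative time was spent;
# optimize those, and ignore the other 97%.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```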
This was quite a ride, with a very stream-of-consciousness feel, so I don’t want to get too caught up in the examples.
> I don’t think I’m invoking some halcyon fantasy.
That said, the above is in fact a fantasy. It isn’t necessarily wrong, but the average computer’s data footprint in the 90s was laughably small compared to today’s. Just opening an app today likely processes more data than was even possible in the 90s.
This also means strategies that were fine back then, such as indexing everything on a personal computer so that it could feel snappier, work against the goal.
People want faster applications, but we don't really have a route to letting them do fewer things. And I assert that is a large part of why things were maybe faster back in the day.