Phoronix's initial testing suggests 10-20% lower power usage at idle, which is fantastic news for anyone who owns a Linux laptop.
Since 4.0 or so, Linux has been rock solid for me, so I no longer anticipate new kernel releases; but this is one of those times I'll run the cutting-edge kernel, because it is that cool. If you run a relatively recent version of Ubuntu, you can test it too by googling for the kernel PPA. It's three clicks away at most.
Imagine the carbon footprint difference on a server farm...
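A rough back-of-the-envelope (all numbers here are my own assumptions, not from the Phoronix tests, and the savings only apply while machines sit idle):

    10,000 servers x 200 W average draw x 15% idle savings = 300 kW
    300 kW x 8,760 h/year = ~2.6 GWh/year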
Even AWS has no reasonable automated way to scale elastically in the vertical direction, e.g. automatically changing instance size. Some apps can't scale well horizontally.
My database servers, for example, are built for "Easter Sunday Attendance" and underutilized the rest of the time. We do better with things like app and web servers, but there are inefficiencies.
>10-20% less power usage
Is this one of those "not big and professional like GNU" Linus moments?
That's insane. Does this apply to all laptops running Linux?
Which never happens on laptops because of the web browser /s
FYI link to packages: http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.17-rc1/
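If you'd rather not click around, something like this should work (a sketch; the exact .deb filenames vary per build and arch, so check the directory listing first):

    # download the linux-image and linux-headers .debs for your arch
    # from the directory above, then install them together
    sudo dpkg -i linux-*4.17*rc1*.deb
    sudo reboot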
My best year, I removed 10,000 more lines of code than I added, without removing any functionality from the project (and in fact adding some new features). It isn't always a good thing to do this, but when it is, you know it as soon as you start reading the code.
I often wish I could be paid just to refactor and rewrite existing code bases to make them more maintainable.
Yeah, me too; I'm very good at it and it's quite satisfying. Unfortunately it's very hard to communicate the business value of it (although the business value is huge in some cases).
If we can solve a problem without writing any code at all, then that's the most maintainable and bug free solution.
See "The Best Code is No Code At All": https://blog.codinghorror.com/the-best-code-is-no-code-at-al... .
> Every new line of code you willingly bring into the world is code that has to be debugged, code that has to be read and understood, code that has to be supported. Every time you write new code, you should do so reluctantly, under duress, because you completely exhausted all your other options.
It's worth noting: code you import counts toward your total line count. Don't think that because someone else who doesn't work with you wrote the code, it doesn't count. In some ways, that's worse. I've spent all day today debugging a no-longer-maintained library which is used in a legacy codebase I'm maintaining.
Today, while I love the simplicity of Go, I shudder to think how many of the copy-pasted lines of Go code I've written will be commoditized in the next year or two, automatically creating legacy. And there will be nobody to give a ring to except my past self.
Was your code written by amateurs? Are you running your 50-person office off Excel? Give us a call.
One of my most productive days was throwing away 1,000 lines of code. -Ken Thompson
> we also got rid of some copyright language boiler-plate in favor of just the spdx lines
So I looked up SPDX.
I’m going to start using this too.
But when an earlier project has a license that explicitly states something like "The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.", can you replace it with a single SPDX line?
I think that technology allows you to easily link to something on the internet instead of providing an offline text file, but is it lawful? And justifiable, since having _n_ text files saying the same thing is easily compressible?
Got me thinking about it: does removing lines-of-not-code really make it smaller, or just a little?
IANAL, so I can’t say what you legally can and cannot do with other people’s code, but I would think that placing a copy of each of the licenses used by others in your repo, naming them LICENSE_THIRDPARTY_MIT, LICENSE_THIRDPARTY_GPL, etc., and mentioning in your README that these correspond to the SPDX references found in the source files would mean that you were still in compliance.
As for the question “does removing lines of not-code make the file smaller or just a little”: it does not reduce the SLOC count, of course. Personally I just find the idea of SPDX appealing because it means that when I open files to edit them I don’t have to scroll past a lot of license text first. Additionally, scanning source files for what license is being used will be simplified.
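For the scanning part, a one-liner is enough once everything carries an SPDX tag. A sketch, assuming GNU grep and a C tree (adjust the globs to taste):

    # tally which licenses are used across a source tree via its SPDX tags
    grep -rhoP 'SPDX-License-Identifier:\s*\K.+' --include='*.c' --include='*.h' . \
        | sort | uniq -c | sort -rn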
Here is the list of survivors: after 4.17-rc1, Linux still supports 23 archs.
I'm not really sure whether I'm arguing for or against the dropping of support; I'm more just curious about others' thoughts.
Leaving it in gives the wrong impression that this is "known working good code", which is one reason to remove it. Another is that removing them makes it easier to be agile and refactor things, because you don't have to worry about these archs anymore.
We should be able to test that code. We could if, in order for an architecture or piece of hardware to be considered for inclusion, there were a software-based emulation tool that could be used to run a full set of tests against a kernel build.
It's a lot of work to produce that and to write such a test suite, but it can grow incrementally. We have very good (and reasonably precise) emulators for a lot of the PC and mobile space and having those running would give us a lot of confidence in what we are doing.
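For several architectures this already works today with QEMU. A minimal smoke test might look like this (the machine model, console and image path here are illustrative assumptions; without a rootfs the boot ends in a panic, but getting that far already exercises a lot of the kernel):

    # boot a freshly built arm64 kernel under emulation and watch the console
    qemu-system-aarch64 -M virt -cpu cortex-a57 -nographic \
        -kernel arch/arm64/boot/Image -append "console=ttyAMA0"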
Video game emulators are possible because there is only one (or a very small number of) hardware configuration(s) for each console. Emulators for mobile development are possible because they simulate only the APIs, not the actual physical hardware, down to the level of interrupts and registers.
Intel probably has internal tools that (somewhat) precisely emulate their chips, and it'd probably be very hard to persuade them to share, but they seem committed to making sure Linux runs well on their gear, so it's probably not a huge problem.
I think of this as a way to keep the supported architectures as supported as possible even when actual hardware is not readily/easily (or yet) available for testing. One day, when x86 is not as easy to come by as today, it could prove useful.
It's good to keep the software running on more than one platform, as it exposes bugs that can easily elude us. Also, emulators offer the best possible observability for debugging. If they are cycle-accurate, then it's a dream come true.
I'd recommend reading the Dolphin emulator release notes for a reference on how much work is required to properly emulate hardware such that actual software runs without glitches and errors, even for (AFAIK) 3 hardware sets.
I believe quirks would be added as they are uncovered. It'd also become a form of documentation. Emulation doesn't need to be precise to be valuable: if we can test against an emulator on a commodity machine before testing on metal on a somewhat busy machine, it's still a win. We don't need to waste time on a real multimillion-dollar zSeries or Cray if it crashes the emulator.
If that was true, services such as AWS Device Farm wouldn't exist: https://aws.amazon.com/device-farm/
In this case, it looks like it was a number of unused architectures that were being put to pasture - anyone who is interested can look through commit history and pull back what they need if they're invested enough.
Wrong google hit. It's a minor 32-bit architecture from Renesas that seems to be targeted at ECUs. They're still on sale but I doubt there's much consumer demand to run Linux on them. They have fairly limited Flash.
Some of these haven't been sold for over 10 years, and no one knows who still has one or where to get a compiler for them. Some of them are only a few years old but have never run an upstream Linux kernel (they always ran their original hacked-up/ported kernel), and again you can't find a C compiler from the last 5 years that supports them.
Linux does not drop support for architectures lightly; it was hanging on to most of these for years when they were clearly unused, untested zombies. And, FWIW, Sound Blaster sound cards from the 90s are still supported ;)
Also, I guess the architecture pruning was one reason he decided not to go with the 5.0 version. It would give even more meaning to the version number.
On a side note, I'm annoyed that Skype now forcibly updates itself on app launch. I have no idea how big that app is getting, but I just assume it's slowly getting bigger and bigger.
Then again, for as much trouble as that policy causes it is difficult to argue with its success, since Linux supports more hardware than any other open source OS and most proprietary ones too (or all, depending on how you count ancient hardware).
Oh, and you can compile your own kernel if you like - it's a really simple process. :)
* make config
* make menuconfig
* make xconfig
Only one of those was of any use, the others were dumb.
The modern procedure (for upgrading) is pretty straightforward:
make oldconfig; make; make modules_install; make install; reboot
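Spelled out with comments (the same commands; I'm assuming you run the install steps as root and that your old .config is already in the source tree):

    make oldconfig        # carry the existing .config forward, asking only about new options
    make                  # build the kernel image and modules
    make modules_install  # copy modules to /lib/modules/<version>
    make install          # install the image and, on most distros, update the bootloader
    reboot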
It means that when I install a new kernel (as a deb), proprietary drivers like the Blackmagic DeckLink ones are automatically recompiled.
It's very different now than in 2000.
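If you'd rather go the deb route, the kernel tree can package itself; a sketch (the bindeb-pkg target writes the .debs to the parent directory, and DKMS rebuilds out-of-tree drivers via the kernel postinst hooks, provided matching headers are installed):

    # build .deb packages from a configured kernel tree, then install them;
    # DKMS hooks fire on install and recompile out-of-tree drivers
    make bindeb-pkg
    sudo dpkg -i ../linux-image-*.deb ../linux-headers-*.deb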