I use DaVinci Resolve 17 for primary video editing, plus Fusion for basic compositing. Most of the animation work was done in Blender (the semaphores are Fusion compositions instead, because I was lazy). Audio is recorded in Resolve but post-processed through RX Elements 8, and the music is from the YouTube library.
Wow, this is horrible advice. I'm a Debian Developer and an Ubuntu Core Developer. What this will do is create a package that doesn't properly handle dependencies and will be incredibly fragile.
It's not horrible advice, and none of the links are helpful if you want to achieve the goal stated in the article: simply distributing a binary.
It's completely fine to package them in a deb. You especially should know that debs are really simple and work quite well for that: you place the binary, along with any needed resource files (configuration, for example), in the appropriate relative directory structure, add a minimal amount of metadata, and call one command to create the .deb. Easy and fast, and the result is very reliable; it will work everywhere.
And then you have the official way: abstraction over abstraction, "explained" in completely unreadable documentation that requires a dozen helper tools and environment variables, yet fails to explain even the basics of what you'd actually do if you followed it.
No, the simple way shown by the submitted article is exactly correct.
Start with that. If it works, you can maybe take the next step later and set up a PPA, assuming the deb is not only for yourself.
I think it's bad advice, but not necessarily for the same reason. For the simple case of "I want to distribute a binary", there are far easier ways to make a deb: I use FPM, but there are other tools that will also do it as a one-liner.
For someone new to this, the article's five steps are certainly not "practical".
Well, a binary often needs at least one configuration file, so it's probably not just that. I think it's easy enough: you just pre-create the directory structure, place the files, add DEBIAN/control, and then it's one command. If it can be even easier, that's really cool :)
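A minimal sketch of those steps, for a hypothetical package `myapp` (the stub shell script stands in for the real binary; all names and fields are illustrative):

```shell
# 1. Pre-create the directory structure and place the files.
mkdir -p myapp_1.0.0/DEBIAN myapp_1.0.0/usr/bin myapp_1.0.0/etc/myapp
printf '#!/bin/sh\necho hello\n' > myapp_1.0.0/usr/bin/myapp
chmod 755 myapp_1.0.0/usr/bin/myapp
printf 'greeting=hello\n' > myapp_1.0.0/etc/myapp/myapp.conf

# 2. Add the minimal metadata.
cat > myapp_1.0.0/DEBIAN/control <<'EOF'
Package: myapp
Version: 1.0.0
Architecture: amd64
Maintainer: Example Person <you@example.com>
Description: Example binary packaged the simple way
EOF

# 3. The one command (--root-owner-group avoids needing fakeroot).
dpkg-deb --build --root-owner-group myapp_1.0.0
```

The result is `myapp_1.0.0.deb`, installable with `dpkg -i`.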
With respect, I think this is a harmful attitude to take. People end up writing huge masses of Puppet/Chef/Ansible, or resorting to heavyweight technologies like Docker, just to place some files on a server. This problem is solved quite well by dpkg since forever, and it's a disservice to the community that this fact is obscured by so much cruft that only distro maintainers really need to know.
Seconded. I tried to package up a publicly available header-only tar.gz into a Debian package for our private repo and found it impossible to do. I gave up.
I've read more than one complaint about Debian's packaging process being a nightmare. I have no horse in the race, but since you're here, I wanted to mention this because it sounds like you're not aware of it (and could possibly help change things).
No, I'm aware it is. Most of the problems are people trying to use it to distribute binary source code. Using it for source builds and debian/rules is a lot saner and more straightforward.
These types of debs cause problems with system upgrades and I groan whenever I see something that uses checkinstall or similar; there are add-on deb sites that basically do that, and you get a mess in the resulting system. A lot of this is why snap got made.
I'll admit that the Debian package format could be better in this regard, but the packaging format is primarily for use by dpkg-buildpackage and building the distro.
Disclaimer: these are my own views and not those of either the Debian or Ubuntu projects.
Interesting... by "binary source code" do you mean "binary" (executables)? Do you mean you see distributing binaries as a not-intended use case for .debs, and that they should all result in builds straight from source? Or am I misunderstanding what you're saying? Because I feel like the people who use Debian or Ubuntu (or Arch or Fedora or...) aren't really longing for the experience of building everything from source like Gentoo users might.
I've been going through the Debian Maintainer process by reading the official Debian docs. The internalpointers blog post is hands down more lucid than anything I read on the wiki. I think the other comments here reflect that sentiment.
The best equivalent I can give is that this is roughly like opening up an MSI file and replacing its guts. It works, but you're asking for a really fragile system in return.
What is the problem with distributing a tarball if you're sending binaries?
If you're making debs, you still need to make a Debian repo and sign it, and then you still have the problem that your dependencies are fucked up or won't easily work across versions.
What are you trying to solve that a tarball or rsync can't?
Not OP, but also packaging binaries into .debs, and the primary reasons for that (as opposed to using tarballs or rsync) are centralized package management, including dependency checks, and scripts to execute on installation (e.g., add system users).
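For example, the "add system users" part is typically just a few lines in a maintainer script. A sketch of a hypothetical DEBIAN/postinst (package and user names are illustrative):

```shell
#!/bin/sh
# Hypothetical DEBIAN/postinst: create a service account on install.
set -e

case "$1" in
    configure)
        # Create the system user only if it doesn't already exist.
        if ! getent passwd myapp >/dev/null; then
            adduser --system --group --no-create-home \
                --home /var/lib/myapp myapp
        fi
        ;;
esac

exit 0
```

dpkg runs this automatically after unpacking, which is exactly what a plain tarball or rsync can't give you.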
The entire point of Authenticode wasn't to protect users; it was to make sure software wasn't modified in flight (a la what SourceForge did), and to make those binaries accountable.
Now we're getting to the point where this feels like a protection racket. It is, at least in theory, possible for individuals to get EV certificates for websites. Worst case, you can set up a one-man business for the paperwork.
A lot of viruses can hit either via remote code execution or by exploiting a bug triggered through a loaded data file. Neither of those scenarios is stopped by SmartScreen. At best, it stops someone from clicking "WannaCry.exe".
MSFT is basically doing everything to make you use the Store; it reeked back in the Windows RT days, and it reeks even more now.
Unfortunately, it seems MSFT is incapable of creating a version of Windows that doesn't have live tiles or constantly track what applications you run. I switched to Linux years ago, but I realize that most people live in a Windows ecosystem, and that they're subject to the whims of MSFT.
I'm happy to see this project going onward, given that systemd continues to be a giant sore that keeps encroaching on the entire stack. If nothing else, it means that a usable Linux distro without systemd is viable.
Love it or hate it, choice is a good thing. It may be getting to the point where I'd seriously look at it for a few non-essential production servers currently running Debian and Ubuntu.
If you are the copyright holder, you can put an OpenSSL exception clause in your license, which allows this to work. Debian required such a clause for GPL software to link with OpenSSL prior to this change.
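The exception is a short paragraph appended to the license grant. One commonly circulated wording runs roughly as follows (quoted from memory; verify against an authoritative template before relying on it):

```text
In addition, as a special exception, the copyright holder gives
permission to link the code of this program with the OpenSSL library
(or with modified versions of OpenSSL that use the same license as
OpenSSL), and distribute linked combinations including the two. You
must obey the GNU General Public License in all respects for all of
the code used other than OpenSSL.
```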
AIX at least has LLVM, and IBM had unofficial GCC ports for years.
If you want a dumpster fire for development work, I'd highly recommend HP-UX. HP-UX's stdio wasn't very std back in the 1990s. What I remember is a bunch of syscalls seemingly existing but not actually working, despite the same code being fine on Solaris and Linux.
Somewhere around 2003-04, it appears all development basically consisted of security patches and new Itanium hardware enablement, and aCC barely supported anything C++-related; it was more like trying to use Borland C++. GCC was really iffy, although it had the advantage that, post PA-RISC, HP did adopt ELF.
Yeah, I've used HP-UX as well. All the fun of an OS and compiler no one uses, with the added bonus of all the quirks of a CPU that really no one uses. I wonder if the soldiers thousands of libraries deep in the Oracle link line read HN and have some fun IA-64 porting tales.
IBM mainframes laugh at Windows backward compatibility: 50-60+ years isn't unheard of. I actually want to climb that cliff at some point, but my next write-up is exploring Windows 3.11's networking features, and then Novell NetWare; the Windows one should go up tomorrow on SN, and NetWare next week.