
Bash 5.0 released - siteshwar
http://lists.gnu.org/archive/html/bug-bash/2019-01/msg00063.html
======
_kst_
With this release, bash now has three built-in variables (um, I mean
"parameters") whose values are updated every time they're read:

$RANDOM yields a random integer in the range 0..32767. (This feature was
already there.)

$EPOCHSECONDS yields the whole number of seconds since the epoch.

$EPOCHREALTIME yields the number of seconds since the epoch with microsecond
precision.
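A quick look at them in a bash >= 5.0 session (values shown are illustrative):

```shell
#!/usr/bin/env bash
# Each read of these parameters fetches a fresh value (bash >= 5.0):
echo "seconds:  $EPOCHSECONDS"      # whole seconds since the epoch
echo "realtime: $EPOCHREALTIME"     # same, with microsecond precision
# $RANDOM is likewise re-generated on every expansion:
echo "random:   $RANDOM $RANDOM"    # two (almost certainly) different values
```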

I'm thinking of a new shell feature that would allow the user to define
similar variables. For example, I have $today set to the current date in YYYY-
MM-DD format, and I have to jump through some minor hoops to keep it up to
date.
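The hoop-jumping looks something like this sketch: refresh the variable before every prompt (the helper name `_refresh_today` is mine):

```shell
#!/usr/bin/env bash
# Keep $today current by re-computing it before each interactive prompt.
_refresh_today() { today=$(date +%F); }
PROMPT_COMMAND="_refresh_today${PROMPT_COMMAND:+;$PROMPT_COMMAND}"

# Non-interactive scripts can just call the helper directly:
_refresh_today
echo "$today"
```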

Does anyone else think this would be useful enough to propose as a new bash
feature? Would it create any potential security holes? Should variables like
$PATH be exempted?

(Of course this doesn't add any new functionality, since I could use "$(date
+%F)" in place of "$today". It's just a bit of syntactic sugar.)

~~~
aexaey
How does a function sound as syntactic sugar?

    
    
      $ today() { date +%F; }
      $ echo Today, $(today) is a great day!
      Today, 2019-01-08 is a great day!

~~~
flukus
It's not as nice: "cat $today" is easier to type than "cat $(today)" and would
give better completion, offering just the matching variables instead of
matching functions, files, and executables.

On the plus side, TIL the subshell syntax plays well with the eval/expand
shortcut (ctrl+alt+e).

~~~
hawski
Wouldn't "cat $today" result in "cat: No such file or directory:
2019-01-08"? Did you mean echo instead of cat?

~~~
flukus
My real life use case for these dynamic variables would be more like
"cat/vim/cp $log" to get today's log file which would expand to something like
/somedir/logs/201901/09/product.log. Handy when you have a large matrix of
products/environments.
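A function-based version of that use case might look like this sketch (the path layout is made up to match the example):

```shell
#!/usr/bin/env bash
# Hypothetical layout: /somedir/logs/YYYYMM/DD/<product>.log
logfile() {
  printf '/somedir/logs/%s/%s/%s.log' "$(date +%Y%m)" "$(date +%d)" "${1:-product}"
}

echo "$(logfile)"       # today's product log path
echo "$(logfile api)"   # today's log path for another product
```

Then `vim "$(logfile)"` or `cp "$(logfile)" backup/` work anywhere, at the cost of the extra `$(...)` typing.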

------
wlib
Why did we keep the language of the shell and the OS separate? It seems like a
needless abstraction which creates more harm than good (compare reading a
shell script vs. any other language). While I'm at it, why are the filesystem
and syscall API not just part of a standard userland language? For example,
the filesystem could be exposed like an object tree rather than some syscall
ritual. The syscalls could just be invisible, where the language compiler
deals with them instead of the programmer. I think the old LISP machines got
this right while we are stuck in a useless POSIX compatibility trap. The only
reason I think they didn't design Unix this way was that C was too low level,
but we could write the OS in a "higher level" functional language.

~~~
charlesdaniels
If you're doing it right, you are solving very different problems with shell
vs any other language. Shell is best used as a tool for orchestrating other
programs, you should not be implementing your programs in shell.

Syscalls, in general, are used in lieu of objects or other abstractions
because they more accurately mirror what the underlying hardware is doing.
This isn't always the case: some syscalls are maintained for POSIX
compatibility and add a lot of complexity to emulate behavior that is no
longer reflective of the hardware.

At the end of the day, you'll find that it's very difficult to maintain the
highest levels of performance while also presenting an API that has a high
level of abstraction. Things like dynamically-resizable lists, using hash
tables for everything, runtime metaprogramming, and other such niceties of
modern HLLs aren't free from a performance perspective.

If you really want to know more, I would suggest reading one of McKusick books
on operating system design (the most recent being The Design and
Implementation of the FreeBSD Operating System 2/e, but even the older ones
are still largely relevant).

Maintaining this "useless POSIX compatibility trap" has a certain amount of
utility; I for one like not having to re-write all of my programs every few
years. I imagine others feel the same.

In closing, some projects that are pushing the boundaries of OS design which
you may want to check out include:

* Redox OS ([https://www.redox-os.org/](https://www.redox-os.org/)) - a UNIX-like OS done from scratch in Rust

* OpenBSD ([https://www.openbsd.org/](https://www.openbsd.org/)) - one of the old-school Unices, written in plain old C, but with some modern security tricks up its sleeves

* Helen OS ([http://www.helenos.org/](http://www.helenos.org/)) - a new microkernel OS written from scratch in C++; Helen OS is not UNIX-like

* DragonFlyBSD ([https://www.dragonflybsd.org/](https://www.dragonflybsd.org/)) - a FreeBSD fork focused on file systems research

* Haiku ([https://www.haiku-os.org/](https://www.haiku-os.org/)) - binary and source compatible with BeOS, mostly written in C++, but also has a POSIX compatibility layer

~~~
jxy
It's odd not including Plan 9 here. But I guess that is the fate of Plan 9.

~~~
charlesdaniels
Yeah, I did not include anything without at least some recent development.
AROS and plan9 both got cut for that reason.

I was on the fence about including ReactOS but wound up not including that
either.

~~~
jxy
I would consider 9front quite active from their Hg repo.

[https://code.9front.org/hg/plan9front](https://code.9front.org/hg/plan9front)

------
wicket
Seeing this release makes me cringe. I've used Bash as an interactive shell
for decades but really I'm sick and tired of it.

As a scripting language, I loathe it and really don't understand its purpose.
I always write shell scripts in POSIX shell for portability reasons. Most of
the time I don't need to use any of Bash's features. In cases where advanced
features are needed and portability is not a concern, there are other
scripting languages much better suited for this (Python, Ruby, etc).

As an interactive shell, the only features I ever use are command history and
tab completion. Bash is way too bloated for my use case (it's only a matter of
time before the next Shellshock is discovered). Other lightweight shells are
missing the couple of interactive features which I do use.

If anyone knows of a shell which meets my criteria of being lightweight but
with command history and tab completion (paths, command names and command
arguments), I'd really appreciate any suggestions. Otherwise I may have to
look into extending dash or something.

~~~
manquer
zsh has much better tab completion; you should check it out.

~~~
p4bl0
I read that often but I don't understand how it can be true. Bash (and I
suppose zsh's) tab-completion is programmable so you can make it do whatever
you want to.

~~~
TylerE
Zsh, especially with an addon package like oh-my-zsh, isn't just programmable
but actually programmed. Like, it just works, near magically; for instance,
make <tab> looks for the Makefile in the current dir and actually scrapes the
targets from it.

~~~
p4bl0
AFAIK Bash does that by default too (at least it's been the case on my Debian
setups for _years_). It also works with git for example, not just with its
subcommands but also with your commits, tags, branches, remotes, etc.

Many Debian packages come with a completion script for Bash so you get it
when you install them :).

------
offmycloud
It's sad that lists.gnu.org is running obsolete TLS 1.0 crypto with weak
1024-bit DH. Either upgrade to TLS 1.2 with reasonable cipher suites, or just
go back to plain HTTP.

~~~
ltc5505
Since I'm clueless on the subject, can I ask how you determined that
information and what resource I could use to become better informed?

~~~
offmycloud
A good first step is disabling SSL 3.x and TLS 1.0 in your daily browser. I
would also recommend the excellent Qualys SSL Server Test:
[https://www.ssllabs.com/ssltest/](https://www.ssllabs.com/ssltest/)

~~~
avarun
Is there any way to do that on Chrome macOS?

------
BurritoAlPastor
I can’t imagine what BASH_ARGV0 is for. Can someone more sage supply an
example of what problem it solves?

~~~
LukeShu
A use-case I've long wanted it for is better "\--help" messages. If you want
to tell the user how to invoke the program again, argv[0] is the right thing:

Given:

    
    
        #include <stdio.h>
        
        int main(int argc, char *argv[]) {
        	printf("Usage: %s [OPTIONS]\n", argv[0]);
        	return 0;
        }
    

Running it as `./dir/demo --help` gives:

    
    
        Usage: ./dir/demo [OPTIONS]
    

Put it somewhere in $PATH, and run it as `demo --help`, and it will give:

    
    
        Usage: demo [OPTIONS]
    

Perfect!

But with a Bash script, argv[0] is erased; $0 is set to the script path
passed to `bash` as an argument.

Given:

    
    
        #!/bin/bash
        echo "Usage: $0 [OPTIONS]"
    

Running it as `./dir/demo --help` gives:

    
    
        Usage: ./dir/demo [OPTIONS]
    

So far, so good, since the kernel ran "/bin/bash ./dir/demo --help". But once
we get $PATH involved, $0 stops being useful, since the path passed to Bash is
the resolved file path; if you put it in /usr/bin, and run it as `demo
--help`, it will give:

    
    
        Usage: /usr/bin/demo [OPTIONS]
    

Because the call to execvpe() looks at $PATH, resolves "demo" to
"/usr/bin/demo", then passes "/usr/bin/demo" to the execve() syscall, and the
kernel runs "/bin/bash /usr/bin/demo --help".

In POSIX shell, $0 is a little useful for looking up the source file, but
isn't so useful for knowing how the user invoked you. In Bash, if you need the
source file, you're better served by ${BASH_SOURCE[0]}, rendering $0
relatively useless. And neither has a way to know how the user invoked you...
until now.

It's a small problem, but one that there was no solution for.

~~~
ajross
> If you want to tell the user how to invoke the program again, argv[0] is the
> right thing

Some pedantry: it's actually not. The argv array is a completely arbitrary
thing, passed by the caller as an array of strings and packed by the kernel
into some memory at the top of the stack on entry to main(). It doesn't need
to correspond to anything in particular; the use of argv[0] as the file name
of the program is a side effect of the way the Bourne shell syntax works. The
_actual_ file name to be executed is a separate argument to execve().

In fact there's no portable way to know for sure exactly what file was mapped
into memory by the runtime linker to start your process. And, getting really
into the weeds, there may not even be one! It would be totally possible to
write a linker that loaded and relocated an ELF file into a bunch of anonymous
mappings, unmapped itself, and then jumped to the entry point, leaving the
poor process no way at all to know where it had come from.

~~~
LukeShu
Sure, argv[0] is just a string that the caller can set when they call
execve(). That doesn't mean it has no meaning. You are correct that there is
no way to know what file was actually passed as the first argument to
execve(). But argv[0] is specified to mean roughly "welcome to the world, you
are argv[0]", and to tell the program what it is. Sure, you could lie to the
program, and tell it that it's something it's not, by passing a different
string to execve(); you can even do this from a Bash script with `exec -a`.
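A quick illustration of the `exec -a` trick (the fake name is arbitrary):

```shell
#!/usr/bin/env bash
# Start a child bash whose argv[0] is replaced via `exec -a`. When
# `bash -c` gets no arguments after the command string, it sets $0
# from its own argv[0], so the lie is visible:
out=$(bash -c 'exec -a fake-name bash -c "echo \$0"')
echo "$out"    # prints: fake-name
```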

I stand by my original statement: _If you want to tell the user how to invoke
the program again, argv[0] is the right thing_. I didn't say that running
argv[0] will necessarily _actually_ invoke the program again, I said that it's
the right thing to tell the user. If the caller set argv[0] to something else,
it's because they wanted your program to believe that is its identity, so
that's what it should represent its identity as to the user.

~~~
tjoff
Surely the user already knows how to invoke the program, since he/she did it
literally two seconds ago.

What if I invoke it from a distant path? Do I want my 73-character-long path
prepended in the --help?

~~~
majewsky
Especially on something like NixOS, where /bin/bash is actually
/nix/store/3508wrguwrgu3h5y9354rhfgw056y-bash-5.0/bin/bash.

~~~
LukeShu
Do note that this was my _complaint_ with $0: that when using $PATH it was
set to that full gross path.

If /bin/foo is actually
/nix/store/3508wrguwrgu3h5y9354rhfgw056y-foo-5.0/bin/foo, then when you run
"foo", 0="/nix/store/3508wrguwrgu3h5y9354rhfgw056y-foo-5.0/bin/foo" and
BASH_ARGV0="foo".

------
giancarlostoro
Any recommended reading for Bash? I'm somewhat new to it and its interesting
ways of getting things done. I've used it minimally in the past, but have
found myself writing a 100+ LOC script, which I can't help but feel I'm
likely over-complicating certain bits and pieces of.

~~~
JeremyBanks
It can't be done. If you want to write reliable code, and actually notice all
of the possible error conditions instead of silently ignoring them, your code
needs to get more verbose and complicated than it would be to just use a more
capable tool like Python or Node, and it still won't be as reliable.

If you have more logic than a couple of string comparisons, Bash is not the
right tool for the job.
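For what it's worth, the standard defensive preamble (`set -euo pipefail`) catches a lot of silent failures, though it rather proves the point about ceremony; a sketch:

```shell
#!/usr/bin/env bash
# Under `set -e`, the first failing command aborts the script, so the
# second echo below is never reached:
out=$(bash -c 'set -euo pipefail; echo one; false; echo two' 2>/dev/null || true)
echo "$out"    # prints: one
```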

~~~
nwatson
I recommend Greg's Bash Wiki ...
[https://mywiki.wooledge.org/BashGuide](https://mywiki.wooledge.org/BashGuide).
See general notes, then at the bottom of the page are many links to additional
considerations.

Like others say, "bash" is a hard tool to get right (and I'm not saying I do
it right either, necessarily, but Greg's Wiki was real helpful!). I'm building
a hybrid bash/python3 environment now (something I'll hopefully open-source at
some point), and bash is just the "glue" to get things set up so most aspects
of development can funnel through to python3 + other tools.

But ... things that make bash real useful:

    
    
        * it's available everywhere (even in Windows with Ubuntu-18.04/WSL subsystem)
        * it can bootstrap everything else you need
        * it can wrap, in bash functions, aliases, and "variables" (parameters), the
          real functionality you want to expose ... the
          guts can be written in python3 or other tools
    

Without a good bash bootstrap script you end up writing 10 pages of arcane
directions for multiple platforms telling people to download 10 packages and 4
pieces of software per platform, and nobody will have consistent reproducible
environments.

EDIT: I think there's a revised version of Greg's Bash Wiki in the works.

~~~
dorfsmay
It is available almost everywhere, but be careful with the version: different
Linux distros are at different versions, the last time I used OSX it was
stuck on a very old version, and I expect the different BSD OSes to run
fairly new versions.

~~~
int_19h
BSDs don't ship Bash as part of the base system - you have to install it from
packages or ports. And what you get is the most recent version the maintainer
bothered to package. E.g. FreeBSD is on 4.4.23 right now, which actually
appears to be newer than e.g. Debian unstable.

------
nerdponx
What is the value in creating built-in replacements for binaries like rm and
stat?

~~~
chrisshroba
It seems like this would eliminate the need to read a file from disk and fork
a new process, both of which take time. If you're just removing a single file,
this is probably negligible, but if you have a script iterating over, say,
10k files, this speed-up may be more welcome.
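Bash ships such replacements as loadable builtins (in examples/loadables); a sketch of enabling one, assuming Debian's /usr/lib/bash install path, which will differ on other systems:

```shell
#!/usr/bin/env bash
# Try to load the 'rm' loadable builtin; fall back gracefully if absent.
if enable -f /usr/lib/bash/rm rm 2>/dev/null; then
  msg=$(type rm)     # now a shell builtin: no fork, no exec needed
else
  msg="loadable rm not installed on this system"
fi
echo "$msg"
```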

~~~
viraptor
> if you have a script iterating over 10k files

That's probably a good sign you should advance from shell. If the script is
trivial, it's going to be trivial in python / ruby / go / crystal / ... as
well. If it's not trivial, that's another reason to move.

~~~
adrianN
I agree that as scripts get more complex, you should migrate from bash. But
the number of files a script touches says almost nothing about its complexity.

~~~
viraptor
Wasn't saying otherwise. One reason is complexity. Another is performance.

------
timvisee
Yet macOS is still on 3.2.

~~~
Tsiklon
macOS doesn't appear to ship GPLv3-licensed code. 3.2 is the last Bash
release under GPLv2.

Alternatively, newer versions of zsh are frequently provided by Apple.

~~~
adtac
What's wrong with shipping GPLv3 code? Can't they just provide the source (are
they making significant changes that they want to keep proprietary?) to comply
with the license?

~~~
jordigh
Apple likes to push DRM, which GPLv3 forbids. Apple also is afraid to give
patent grants, which GPLv3 requires. Thus, Apple refuses to get anywhere near
GPLv3 code, and won't even make it easy to give you GPLv3 source that you can
build yourself.

------
ris
> The `history` builtin ... understands negative arguments as offsets from the
> end of the history list

At last!
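For example (a sketch; non-interactive scripts have to turn history on explicitly, and need bash >= 5.0 for the negative offset):

```shell
#!/usr/bin/env bash
set -o history            # scripts record history only on request
echo alpha >/dev/null
echo beta  >/dev/null
history -d -1             # bash >= 5.0: negative offset counts back from the end
count=$(history | wc -l)
```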

------
nurettin
BASH_ARGV0: does that mean we can set the process title after the script
starts?

~~~
majewsky
It would appear so:

> New features [...] BASH_ARGV0: a new variable that expands to $0 and sets $0
> on assignment.
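A sketch of the assignment side (whether external tools like ps pick up the change is platform-dependent; within the script, $0 does change):

```shell
#!/usr/bin/env bash
# bash >= 5.0: assigning to BASH_ARGV0 also rewrites $0.
echo "before: $0"
BASH_ARGV0="my-worker"
echo "after:  $0"    # $0 now expands to "my-worker"
```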

------
sabujp
How about a good way to pass around associative arrays and arrays?
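Meanwhile, the usual workarounds are bash 4.3+ namerefs or `declare -p` serialization; a nameref sketch (names here are illustrative):

```shell
#!/usr/bin/env bash
# Pass an associative array "by name"; the callee aliases it via a nameref.
print_map() {
  local -n _map=$1          # bash >= 4.3; must not shadow the caller's name
  local k
  for k in "${!_map[@]}"; do
    printf '%s=%s\n' "$k" "${_map[$k]}"
  done
}

declare -A conf=([host]=localhost [port]=8080)
print_map conf              # order of keys is unspecified
```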

------
pornel
Reminder that the 2019 version of macOS ships with the 2007 (last GPLv2)
version of Bash, and will never ship any newer version.

    
    
        /bin/bash --version
        GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin18)
        Copyright (C) 2007 Free Software Foundation, Inc.
    

macOS used to be an awesome developer machine with good tools out of the box.
Now the built-in tools are just a bootstrap for their own replacement via
Homebrew. Like IE for downloading Chrome.

~~~
petre
It also ships a recent version of zsh. But I agree, you're better off with
Linux if you want a developer box and deploy on Linux.

~~~
h1d
How's that? The quality and number of GUI tools are far greater on macOS than
on the Linux desktop, and you can just run a Linux VM to mimic the deployment
server.

~~~
petre
I don't care about the GUI tools, I just need an editor, a terminal and a web
browser. And maybe Sequel Pro once in a blue moon. The MacOS GUI is quite
annoying when you want to do stuff fast like for instance move windows between
virtual desktops or switch the desktop with the mouse wheel.

The tooling is just not there (old Python, old Perl, old Ruby and of course
different versions from your deployment environment), you have to resort to
third-party tools such as Homebrew or MacPorts, you have to install Xcode to
get gcc, you need an Apple ID to do that, the system-level API is
incompatible with Linux, and the filesystem is, or at least was, case
insensitive. New MacOS versions after El Capitan are also getting worse at
compatibility with other Unix-like platforms. It's a pain to set up a
development environment really, especially if you use any dynlibs. Instead of
a VM we have a staging server where we deploy, and there are almost always
surprises.

In Linux the tooling is just there, a few seconds and one package manager
command away. If your package is not there then there are PPAs or OBS repos.
You can reproduce the platform you're deploying as closely as possible and
there are fewer surprises.

------
sirjaz
I know I may be downvoted/flamed for this, but why doesn't everyone start
looking at PowerShell as the default shell? All the default parameters you
are looking for are already there. Plus, you can use all the other standard
shell tools.

------
fxfan
As someone who lives in zsh and bash for interactive usage, I want to say:
please do not write scripts in bash or zsh. Use PowerShell; it's an amazingly
well-designed scripting language.

Also, there is Ammonite, written for scripting.

~~~
flatline
Powershell has been around for quite a while now, but only recently could you
rely on it being installed on Windows systems, let alone be available on
Linux. Maybe in another decade, but if you are targeting Unix-like systems
bash is probably still your safest bet for portable, interpreted code. Python2
is a close second.

~~~
emersion
POSIX sh is your safest bet for portable, interpreted code.
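And the portable subset covers more than people assume; a sketch of common bashism replacements:

```shell
#!/bin/sh
# POSIX stand-ins for common bashisms:
s="hello world"
case $s in hello*) matched=yes ;; *) matched=no ;; esac  # vs [[ $s == hello* ]]
i=$((2 + 3))                                             # arithmetic is POSIX
printf '%s %s\n' "$matched" "$i"                         # prints: yes 5
```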

