
http://flatkill.org/ claims that “The sandbox is a lie”:

> Almost all popular applications on flathub come with filesystem=host, filesystem=home or device=all permissions, that is, write permissions to the user home directory (and more), this effectively means that all it takes to "escape the sandbox" is echo download_and_execute_evil >> ~/.bashrc. That's it.

> To make matters worse, the users are misled to believe the apps run sandboxed. For all these apps flatpak shows a reassuring "sandbox" icon when installing the app (things do not get much better even when installing in the command line - you need to know flatpak internals to understand the warnings).

I have not used Flatpak. Is this description accurate? Also:

> Up until 0.8.7 all it took to get root on the host was to install a flatpak package that contains a suid binary (flatpaks are installed to /var/lib/flatpak on your host system). Again, could this be any easier? A high severity CVE-2017-9780 (CVSS Score 7.2) has indeed been assigned to this vulnerability. Flatpak developers consider this a minor security issue.


The first two are also the case with snap. No packages actually seem to use the sandbox feature.


There was already a post on this. Basically the argument about home access is true, but this is because 1) apps should not use raw filesystem access but rather portals (if they can) and 2) nothing should be executable in the home folder (no bashrc, no scripts, etc.)

If I remember correctly, the second argument was about updates not being frequent enough.

So nothing fundamentally wrong with Flatpak itself, but more with the infrastructure (lack of updates) and how it is used (we should not allow home access and should use portals instead, or we should disable bashrc).


> nothing should be executable in the home folder

Says who? The purpose of a home directory is to contain user-specific files, including executables. Developers compile their software and write their scripts in their home directory. Even if we made the absurd decision that no file may be executed from that directory, there are many ways to cause harm by simply editing user-specific configuration files (e.g. in ~/.config).
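
To make the configuration-file point concrete, here is a hedged illustration (the paths and the desktop-file name vary by desktop environment): a planted .desktop override, which most desktops prefer over the system-wide copy.

  # no file under $HOME is ever "executed" directly, yet this runs evil code
  # the next time the user launches their terminal from the application menu
  mkdir -p ~/.local/share/applications
  cat > ~/.local/share/applications/org.gnome.Terminal.desktop <<'EOF'
  [Desktop Entry]
  Type=Application
  Name=Terminal
  Exec=sh -c 'download_and_execute_evil; exec gnome-terminal'
  EOF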

Arguing that the problem is with executables in $HOME rather than Flatpak is incredibly delusional.


I strongly suggest not using this. Instead, create URIs that contain arbitrary content with the data URI scheme: https://en.wikipedia.org/wiki/Data_URI_scheme

The data URI scheme is standard and widely supported, does not rely on the host bitty.site being reachable and does not need JavaScript. One can even create data URIs with a small shell script that is given a filename argument:

  #!/bin/sh -eu
  # print a data: URI for the file given as $1 (MIME type taken from file -bi)
  printf 'data:%s;base64,%s' "$(file -bi "$1"|tr -d ' ')" "$(base64 -w 0 "$1")"
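
For example, assuming the script is saved as datauri (the exact MIME type printed by file -bi may vary between systems):

  $ printf 'hello\n' > hello.txt
  $ ./datauri hello.txt
  data:text/plain;charset=us-ascii;base64,aGVsbG8K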


Basic web site functionality should work in any browser.


If browsers can't keep up with basic web site functionality, they risk becoming irrelevant.


There exists DJB's redo approach [0], which I implemented [1], where dependencies and non-existence dependencies are only recorded after the build. A typical dofile is a shell script, so you do not need to learn another language. Targets also automatically depend on their own build rules (I have seen such a thing only in makefiles authored by DJB).
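
For illustration, a minimal dofile could look like this (redo calls it with $1 = the target, $2 = the target without extension, $3 = a temporary file that is atomically renamed to the target on success; the file names are illustrative):

  # hello.do – a minimal example dofile
  redo-ifchange hello.c
  gcc -o "$3" hello.c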

I wrote a blog post to show how to integrate dependency output for both dependency and non-existence dependency generation [2]. The game “Liberation Circuit” [3] can be built with my redo implementation; you can output a dependency graph usable with Graphviz [4] using “redo-dot”.

There is only one other redo implementation that I would recommend, the one from Jonathan de Boyne Pollard [5], who rightly notices that compilers should output information about non-existence dependencies [6].

I would not recommend the redo implementation from Avery Pennarun [7], which is often referenced (and which introduced me to the concept), mainly because it is not implemented well: it manages to be both larger and slower than my shell-script implementation, yet the documentation says this about the sqlite dependency (a classic case of premature optimization):

> I don't think we can reach the performance we want with dependency/build/lock information stored in plain text files

[0] http://cr.yp.to/redo.html

[1] http://news.dieweltistgarnichtso.net/bin/redo-sh.html

[2] http://news.dieweltistgarnichtso.net/posts/redo-gcc-automati...

[3] https://github.com/linleyh/liberation-circuit

[4] https://en.wikipedia.org/wiki/Graphviz

[5] http://jdebp.eu./Softwares/redo/

[6] http://jdebp.eu./FGA/introduction-to-redo.html#CompilerDefic...

[7] https://github.com/apenwarr/redo


An issue I have with make is that it cannot handle non-existence dependencies. DJB noted this in 2003 [1]. To quote myself on this [2]:

> Especially when using C or C++, often target files depend on nonexistent files as well, meaning that a target file should be rebuilt when a previously nonexistent file is created: If the preprocessor includes /usr/include/stdio.h because it could not find /usr/local/include/stdio.h, the creation of the latter file should trigger a rebuild.

I did some research on the topic using the repository of the game Liberation Circuit [3] and my own redo implementation [4] … it turns out that a typical project in C or C++ has lots of non-existence dependencies. How do make users handle non-existence dependencies – except for always calling “make clean”?
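
For illustration, GCC's dependency output only ever names headers that were found; the higher-precedence candidates that were searched for and missed never appear, so a make-style tool cannot know to watch for their creation (a hedged demo – search paths differ between systems):

  $ printf '#include <stdio.h>\nint main(void) { return 0; }\n' > hello.c
  $ gcc -MM hello.c
  hello.o: hello.c
  $ # even gcc -M, which lists /usr/include/stdio.h, never mentions the
  $ # searched-but-absent /usr/local/include/stdio.h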

[1] http://cr.yp.to/redo/honest-nonfile.html

[2] http://news.dieweltistgarnichtso.net/posts/redo-gcc-automati...

[3] https://github.com/linleyh/liberation-circuit

[4] http://news.dieweltistgarnichtso.net/bin/redo-sh.html (redo-dot gives a graph of dependencies and non-existence dependencies)


Make has no memory, so it can't remember things. It simply compares the dates of files. If a dependency is newer than a target, the target is rebuilt.

If you want to keep some kind of memory you have to build and keep track of it yourself.

But the problem you point at is simply poor design. It is not a normal occurrence for system header files to move around as you describe. If they do, a full rebuild is indeed required. That shouldn't happen so often that it becomes an issue.


If an apt-get upgrade fixed an issue in a system header or a library, but the date of the fix predates the last build (quite common: last build from yesterday, fix from two days ago but downloaded today), then make will do nothing (or any subset of the right things, but not all), while make clean; make will do the right thing.

Relying on time stamps is a design decision that was good for its time, but it is no longer robust (or sane) in a constantly connected, constantly updated, everything-networked world.

djb redo takes this to one logical conclusion (use cryptographic hashes to verify freshness).

There are other ways in which make is lacking: operations are not atomic (redo fixes that too); dependency granularity is file-level, so depending on compiler flags is very hard and depending on the makefile is too broad (redo fixes this too); dependency specification is manual (redo doesn't fix this; AFAIK the only one that properly does is tup).
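
For instance, the usual redo workaround for the compiler-flags problem is to put the flags into a file, which makes the file-level granularity sufficient (a sketch; the file names are illustrative):

  # default.o.do – changing the CFLAGS file now triggers a rebuild
  redo-ifchange CFLAGS "$2.c"
  read -r flags <CFLAGS
  gcc $flags -c "$2.c" -o "$3"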


> Relying on time stamps is a design decision that was good for its time, but it is no longer robust

I agree with the sentiment, but a small nitpick:

Relying on time stamps for older/newer comparisons is not robust.

Using time stamps (and perhaps file size) for equality checks is quite robust. And the combination with cryptographic hashes is even better (if a file is recreated but has the same contents afterwards, timestamp checks would trigger an unneeded rebuild, while a crypto hash check would recognize that there's nothing to rebuild).
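
For illustration, a content-equality check in shell might look like this (sha256sum and the file names are illustrative choices):

  # rebuild only when the content changed, not when the mtime did
  new=$(sha256sum main.c | cut -d' ' -f1)
  old=$(cat main.c.sum 2>/dev/null)
  if [ "$new" != "$old" ]; then
   gcc -c main.c -o main.o
   printf '%s\n' "$new" >main.c.sum
  fi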


Typically if a system header has changed or been added due to upgrading a library package you'll need to rerun any configure script anyway (since it very likely checks for header features and those decisions will change). So unless your build system magically makes configure depend on every header file used in every configure test it runs, you'll need to redo a clean build anyway, pretty much.

Make has a whole pile of issues, but this one really isn't an aggravation in practice, I find.


apt-get upgrade does not usually upgrade a package, despite the name; 99.9% of the time it applies a bug or security fix, almost never changing any functionality or interface – and the config script would produce the same results.

And that assumes you actually have a config script, which is also a nontrivial assumption.

Djb redo lets you track e.g. security fixes that change libc.a if you are linking statically, but that's not usually done.
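
E.g., a link dofile can simply declare the static library as a dependency (a sketch; the libc path varies between systems):

  # prog.do – relink when the static libc changes
  redo-ifchange main.o /usr/lib/libc.a
  gcc -static -o "$3" main.o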

The only build system I know of that guarantees a rebuild whenever, and only when, it is needed is tup (assuming you have only file system inputs).


"It simply compares the dates of files."

test(1) also compares the dates of files:

   test file1 -nt file2
   test file1 -ot file2

Is there anything else that make does in addition to comparing dates of files?

(Besides running the shell.)

tsort(1) does topological sorting

tsort + sed + sort + join + nm = lorder(1)

lorder can determine interdependencies
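
For example, feeding "prerequisite target" pairs to tsort yields a valid build order (the output shown is one valid order; yours may differ):

  $ printf '%s\n' 'main.c main.o' 'util.c util.o' 'main.o prog' 'util.o prog' | tsort
  main.c
  main.o
  util.c
  util.o
  prog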


Is this a common problem? I can't think of any project that does this, and there's a simple solution as well: don't shadow system headers. That's just asking for pain, regardless of how well make handles it.


I don't think this problem is limited to system headers. Something as innocent as #include "foo/bar.h" can be affected by this if you pass -I with at least two unique paths to the compiler.
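
A quick demonstration (illustrative): with gcc -Ifirst -Isecond, creating a file in the earlier search path silently shadows the one in the later path, without touching any file a timestamp-based tool already tracks:

  $ mkdir -p first second/foo
  $ echo '#define N 1' > second/foo/bar.h
  $ gcc -Ifirst -Isecond -c main.c         # uses second/foo/bar.h
  $ mkdir -p first/foo
  $ echo '#define N 2' > first/foo/bar.h
  $ gcc -Ifirst -Isecond -c main.c         # now silently uses first/foo/bar.h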


OK, sure, I revise my answer to: don't shadow any header.


Easier said than done, especially integrating over the lifetime of a years-long project with many ever-changing dependencies :-)


Could you summarize how you handle this in redo? Also, what about the case where a header file does exist but is out of date and because of that triggers an error (e.g., a version compatibility check with #error) – how do you handle that?


I, for one, handle it with a tool that mimics the compiler's preprocessing phase and emits both redo-ifchange information for all of the headers that are used, and redo-ifcreate information for all of the non-existent headers that are looked for during the process.

    JdeBP %cat test.cpp
    #include <cstddef>
    void f() {}
    JdeBP %/package/prog/cc/command/cpp test.cpp --iapplication . --icompiler-high /usr/local/lib/gcc5/include/c++ --icompiler-low /usr/local/lib/gcc5/include/c++/x86_64-portbld-freebsd10.3 --iplatform /usr/local/include --iplatform /usr/include  -MD -MF /dev/stderr 2>&1 > /dev/null|fgrep redo
    redo-ifcreate ./cstddef ./bits/c++config.h /usr/local/lib/gcc5/include/c++/bits/c++config.h /usr/local/include/bits/c++config.h /usr/include/bits/c++config.h ./bits/os_defines.h /usr/local/lib/gcc5/include/c++/bits/os_defines.h /usr/local/include/bits/os_defines.h /usr/include/bits/os_defines.h ./bits/cpu_defines.h /usr/local/lib/gcc5/include/c++/bits/cpu_defines.h /usr/local/include/bits/cpu_defines.h /usr/include/bits/cpu_defines.h ./stddef.h /usr/local/lib/gcc5/include/c++/stddef.h /usr/local/include/stddef.h ./sys/cdefs.h /usr/local/lib/gcc5/include/c++/sys/cdefs.h /usr/local/include/sys/cdefs.h ./sys/_null.h /usr/local/lib/gcc5/include/c++/sys/_null.h /usr/local/include/sys/_null.h ./sys/_types.h /usr/local/lib/gcc5/include/c++/sys/_types.h /usr/local/include/sys/_types.h ./machine/_types.h /usr/local/lib/gcc5/include/c++/machine/_types.h /usr/local/include/machine/_types.h ./x86/_types.h /usr/local/lib/gcc5/include/c++/x86/_types.h /usr/local/include/x86/_types.h
    redo-ifchange /usr/local/lib/gcc5/include/c++/cstddef /usr/local/lib/gcc5/include/c++/x86_64-portbld-freebsd10.3/bits/c++config.h /usr/local/lib/gcc5/include/c++/x86_64-portbld-freebsd10.3/bits/os_defines.h /usr/local/lib/gcc5/include/c++/x86_64-portbld-freebsd10.3/bits/cpu_defines.h /usr/include/stddef.h /usr/include/sys/cdefs.h /usr/include/sys/_null.h /usr/include/sys/_types.h /usr/include/machine/_types.h /usr/include/x86/_types.h
    JdeBP %/package/prog/cc/command/cpp test.cpp --iapplication . --icompiler-high /usr/local/lib/gcc5/include/c++ --icompiler-low /usr/local/lib/gcc5/include/c++/x86_64-portbld-freebsd10.3 --iplatform /usr/local/include --iplatform /usr/include  -MMD -MF /dev/stderr 2>&1 > /dev/null|fgrep redo
    redo-ifcreate ./cstddef ./bits/c++config.h ./bits/os_defines.h ./bits/cpu_defines.h ./stddef.h ./sys/cdefs.h ./sys/_null.h ./sys/_types.h ./machine/_types.h ./x86/_types.h
    redo-ifchange
    JdeBP %

I also have a wrapper that takes arguments in the forms in which one would invoke g++ -E and clang++ -E, tries to work out all of the platform and compiler include paths, and invokes this tool with them.

It's then a simple matter of invoking these redo-ifchange and redo-ifcreate commands from within the redo script that is invoking the compiler.

You can see this plumbed into redo in a real system in the source archives for the nosh toolset and djbwares.

* http://jdebp.eu./FGA/introduction-to-redo.html#CompilerDefic...

* https://news.ycombinator.com/item?id=15044438


I use strace(1) to look for stat(2) syscalls that fail with ENOENT. An advantage of this approach is that I do not have to imitate the C preprocessor, so parser differentials can never happen. The following default.o.do file from my blog post [1] handles the case:

  #!/bin/sh
  redo-ifchange $2.c
  # trace the compiler: failed stat() calls on *.h files are the
  # non-existence dependencies; gcc -MD records the headers it found
  strace -e stat,stat64,fstat,fstat64,lstat,lstat64 -f 2>&1 >/dev/null\
   gcc $2.c -o $3 -MD -MF $2.deps\
   |grep '1 ENOENT'\
   |grep '\.h'\
   |cut -d'"' -f2 2>/dev/null\
   >$2.deps_ne
  
  # headers that were found: declare them with redo-ifchange
  read d <$2.deps
  redo-ifchange ${d#*:}
  
  # headers that were searched for but absent: declare them with redo-ifcreate
  while read -r d_ne; do
   redo-ifcreate $d_ne
  done <$2.deps_ne
  
  chmod a+x $3

This approach is also used for building Liberation Circuit if strace is installed [2].

I think the compiler should output the necessary information. To quote Jonathan de Boyne Pollard [3]:

> As noted earlier, no C or C++ compiler currently generates any redo-ifcreate dependency information, only the redo-ifchange dependency information. This is a deficiency of the compilers rather than a deficiency of redo, though. That the introduction of a new higher-precedence header earlier on the include path will affect recompilation is a fact that almost all C/C++ build systems fail to account for.

> I have written, but not yet released, a C++ tool that is capable of generating both redo-ifchange information for included files and redo-ifcreate information for the places where included files were searched for but didn't exist, and thus where adding new (different) included files would change the output.

JdeBP, could you please release your tool under a free software license? I suspect it has fewer errors than the similar CMake approach [4].

[1] http://news.dieweltistgarnichtso.net/posts/redo-gcc-automati...

[2] https://github.com/linleyh/liberation-circuit/blob/master/sr...

[3] http://jdebp.eu./FGA/introduction-to-redo.html#CompilerDefic...

[4] https://github.com/Kitware/CMake/blob/master/Source/cmDepend...


Just for the record: My personal preference is for Clang and GCC to be instrumented to emit the names of both found and non-existent header files.


I'm missing something.

Are you saying you want to be able to compile with either /usr/include/stdio.h or /usr/local/include/stdio.h, but remember which one the last compilation used and know which header would be used in the next compilation, and if it's different, mark the target as stale and perform the action?

I guess you'd need to keep a log of the build and test cpp invocations for diffs.

I've never run into this scenario.


An obvious case would be a developer supporting multiple versions of a 3rd party library.


This is where I saw the beauty of including dependencies with a project. Even on my own systems, as environments change, things break, and having a stable in-tree reference has paid off.

It's a tough situation, but I find myself leaning toward @tedunangst's position over the years – usually I try to adapt my machines (incl. software) to my needs, but in this case I need to take control/responsibility, and here be dragons. Does cmake actually solve this? Do other build systems?


I personally use scons instead of Makefiles. Its dependency analysis is amazing; I haven't seen it fail a single time.


Please elaborate: What do you find amazing about scons?

Also, how does scons handle non-existence dependencies?

What would be a scons dependency graph for this C code?

  #include <stdio.h>
  int main(void) {
   printf("hello, world\n");
   return 0;
  }

You can see a dependency graph I generated with redo here: http://news.dieweltistgarnichtso.net/posts/redo-gcc-automati...


I love that I get to use Python to write the dependency graph; it allows for some interesting stuff.

Other than that, it's mostly the ease of use. This is enough to compile a C++ project (that has all its .c and .cpp files in the same directory as the SConstruct file), and it'll pick up on all dependencies correctly:

    Program(target = 'a.out', source = Glob('*.c') + Glob('*.cpp'))

I also know for a fact that it's able to pick up on how the presence of a new file might trigger a rebuild of what could require it.

Regarding the last question, using --tree=all it prints:

    +-.
      +-SConstruct
      +-a.out
      | +-main.o
      | | +-main.c
      | | +-/usr/bin/gcc
      | +-/usr/bin/gcc
      +-main.c
      +-main.o
        +-main.c
        +-/usr/bin/gcc

I'm not sure if it's hiding dependencies on system headers or not. But I can force it to show them by adding /usr/include and /usr/local/include to CPPPATH (excuse the long code block):

    +-.
      +-SConstruct
      +-a.out
      | +-main.o
      | | +-main.c
      | | +-/usr/include/stdio.h
      | | +-/usr/include/Availability.h
      | | +-/usr/include/_types.h
      | | +-/usr/include/secure/_stdio.h
      | | +-/usr/include/sys/_types/_null.h
      | | +-/usr/include/sys/_types/_off_t.h
      | | +-/usr/include/sys/_types/_size_t.h
      | | +-/usr/include/sys/_types/_ssize_t.h
      | | +-/usr/include/sys/_types/_va_list.h
      | | +-/usr/include/sys/cdefs.h
      | | +-/usr/include/sys/stdio.h
      | | +-/usr/include/xlocale/_stdio.h
      | | +-/usr/include/AvailabilityInternal.h
      | | +-/usr/include/sys/_types.h
      | | +-/usr/include/secure/_common.h
      | | +-/usr/include/sys/_posix_availability.h
      | | +-/usr/include/sys/_symbol_aliasing.h
      | | +-/usr/include/machine/_types.h
      | | +-/usr/include/sys/_pthread/_pthread_types.h
      | | +-/usr/include/i386/_types.h
      | | +-/usr/bin/gcc
      | +-/usr/bin/gcc
      +-main.c
      +-main.o -- This part was removed to decrease comment size, it's the same as the main.o part above
The SConstruct for this last block is:

    Program(target = 'a.out', source = ['main.c'], CPPPATH = ['/usr/local/include', '/usr/include'])

Note that these were generated on macOS.


From what you are showing us, the answer to "How does scons handle non-existence dependencies?" is that it does not handle them at all.

Go and look at M. Moskopp's graph. It has a lot of dependencies on non-existent files that the compiler would have used in preference to the ones it actually used, had they existed.


Did scons finally get a little less opinionated?

It used to be that scons really forced you to use subsidiary SConscript child files for anything more complicated than a couple of files in a single directory, instead of being able to lump it all into a single SConstruct.


I think that was quite a long time ago. Yes, SCons can fit into a single file if you so choose. But its behavior for recursive builds (with a nested directory structure) is far more predictable than that of most build systems I have seen.


Almost all other systems have this flaw because they require dependencies to be known before a build. If dependencies are recorded after the build, the entire problem becomes very simple. See here:

http://news.dieweltistgarnichtso.net/posts/redo-gcc-automati...


The example above was of a change to the build structure (include path, specifically) which changes the dependency tree in a way that doesn't involve changes to the files themselves and will be invisible to make or other tools that use file timestamps. I pointed out that this was just one of a whole class of intractable problems with dependency tracking that we all choose to ignore. It can't be fixed, even in principle. The only truly sure way to know you're building correctly is to build from scratch. Everything else is just a heuristic.


Why do you think the problem cannot be fixed?

Also, have you looked at the redo tool redo-ifcreate? http://news.dieweltistgarnichtso.net/bin/redo-sh.html


The following sentence about my redo implementation is wrong:

> In 2014, Nils Dagsson Moskopp re-implemented Pennarun redo, retargetting it at the Bourne Again shell and BusyBox.

I targeted the Bourne Shell (sh), not the Bourne Again Shell (bash). Also, my redo implementation contains redo-dot, which paints a dependency tree – I have not seen this elsewhere.


You only put /bin/sh as the interpreter. You still used quite a number of Bourne Again shell constructs in the scripts themselves. Some are not in POSIX sh and its utilities; some are not in the Bourne shell. (The Bourne shell is not conformant with POSIX sh, so targeting POSIX sh is not the same as targeting the Bourne shell.)

These include, amongst others, local, $(), >&-, the -a and -o operators to test, and command. (You also failed to guard against variables expanding to operators in the test command, but that is a common general error rather than a Bashism.)
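
For reference, the classic guard against the last point is to prefix both operands so that neither can be mistaken for an operator (and to avoid -a/-o, whose parsing POSIX leaves unspecified beyond a few arguments):

  test -n "$a" -a -n "$b"    # four or more arguments: unspecified by POSIX
  test "x$a" = "x$b"         # safe: both operands start with a literal character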

Beware of testing against /bin/sh, even the BusyBox one, and assuming that that means POSIX compliance, let alone Bourne shell compatibility. Even the OpenBSD Korn shell running as /bin/sh or the Debian Almquist shell running as /bin/sh will silently admit some non-POSIX constructs.


While you are right about the POSIX problems (like using “local”), I actually targeted Dash and older versions of BusyBox – not Bash.

I plan to work on POSIX compatibility for my redo implementation.


I have only tested with GNOME Files. James Lu tested other file managers, see here: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=868705#msg...

Quote:

   * Add Enhances: caja, tumbler (>= 0.1.92~), nautilus, nemo
     These are some of the many file managers/thumbnailer programs that support
     desktop thumbnailers like exe-thumbnailer, and I have verified (at some
     point) that all of these work.


Templates cause bugs. The only appropriate solution is an unparser – a component that walks an AST and serializes it.

Making sure that the result conforms to the grammar of the output language without an unparser would always involve a parser.
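
A minimal sketch of the idea in shell (the function names are illustrative): markup is only ever produced by serializer functions, and text is escaped in exactly one place.

  html_escape() {
   printf '%s' "$1" | sed 's/&/\&amp;/g; s/</\&lt;/g; s/>/\&gt;/g'
  }
  text() { html_escape "$1"; }
  element() { # element NAME CHILD... – serializes one AST node
   name=$1; shift
   printf '<%s>' "$name"
   for child; do printf '%s' "$child"; done
   printf '</%s>' "$name"
  }
  element p "$(text 'a < b & c')"   # prints: <p>a &lt; b &amp; c</p>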

> As soon as I'm looking at more than one programming or markup language in the same file, I'm looking at spaghetti code.

Iain Dooley, December 2011

http://www.workingsoftware.com.au/page/Your_templating_engin...


This article has some good points, but the reasons we add more logic to templates are:

- just pre-generating everything outside of the template can be very inefficient, especially if your language can't make everything lazy or if you have several representations for the same dataset

- designers want a bit more freedom than just printing x; having to go back to the dev team every time you need a little tweak is terrible

- not all templates are HTML

- it's way easier and faster to prototype

- rendering caching != data caching

- not everything is about display; linking and injecting resources are a big deal, and putting that outside of the template is a huge pain

- conditional template inheritance? includes in loops?

- stuff like WordPress has entire businesses based on the fact that you can switch templates on the fly, without touching the blog code base and without the WP team knowing in advance what you are going to need inside the template


Yes, all of those are reasons. None of those reasons apply if users want something understandable, maintainable and secure.

Separation of concerns is a thing that can help designers, no?


> you simply include everything in the markup that might need to be there, and the programmer removes whatever is not necessary

I realize you may not be cosigning everything in the article you're quoting, but this is the author's first suggested alternative to template languages.

I work on an SPA that was built like this. The index.html is over 10k lines long. It contains almost every single piece of the UI. Templating libraries and languages aren't perfect, but they offer a better separation of concerns than just "dump it all in one file and write some imperative DOM-fiddling code to add different states".


Note that I made a different proposal above the quote.

I think SPAs either are fundamentally dishonest engineering, in the same way that a microwave wrapped in artificial wood veneer is (and a stainless steel microwave is not), or should result in such a template. If you really think that this is too much, IMO you should not make an SPA in the first place.


> I think SPAs are either fundamentally dishonest engineering, in the same way that a microwave wrapped in artificial wood veneer is (and a stainless steel microwave is not) or should result in such a template.

Could you rephrase or expound on this metaphor? I have no idea what you mean by "fundamentally dishonest engineering" or what that has to do with SPAs and templates.


> Ethereum contracts are unstoppable and uncensorable until a core developer loses money

Source: https://news.ycombinator.com/item?id=14162399

