Running GUI apps within Docker containers (trickster.dev)
330 points by rl1987 on March 26, 2022 | 102 comments



You can also do it with this one-liner:

docker run -it --rm -e DISPLAY --net=host -v $XAUTHORITY:/root/.Xauthority -v /tmp/.X11-unix:/tmp/.X11-unix debian:11-slim

Then inside the container, run:

    apt update
    apt install firefox-esr
    firefox
Now Firefox runs inside the container but displays on your host's screen, and you can use it right away.


This has less isolation. The container has full access to your X server and can record your screen or log all key presses, for example. It also has access to the host's network interfaces, and can access services bound to localhost (even if firewalled). There is little point in running software this way, if the software is available on the host.

On the other hand, the use cases that the article describes can already be accomplished with Firefox's "account containers" feature, or just by creating another profile and running `firefox --no-remote`.
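
For instance (the profile name is illustrative):

    firefox -CreateProfile sandbox   # one-time: create a second profile
    firefox --no-remote -P sandbox   # run it alongside your default instance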


> There is little point in running software this way, if the software is available on the host.

1. That's a big "if"; I'm fond of docker precisely for fixing software availability issues.

2. It's diminished, but it still provides significant protection, since the containerized program still can't see the real filesystem or process table without going through X or something networked. That's still pretty significant; it would, for instance, protect against a Firefox zero-day that was being used to read people's private SSH keys (not a hypothetical; that incident is what pushed me personally to start sandboxing my browser).


>Firefox zero-day that was being used to read people's private SSH keys

Back up there. I somehow have not heard about this. What was the vector? What was the exposure? E.g. have I leaked the contents of ~/.ssh to some (hopefully small, at least) subset of sites I've visited? Where can I learn more?




Using it for fixing availability is fine, but I disagree that it provides protection.

I don't deny that in theory someone could deploy a Firefox zero-day that steals ssh keys, and that having an abnormal setup via docker could throw a wrench in that, but you are essentially relying on the obscurity of your setup, which is not an acceptable approach - in the same way that the obscurity of TempleOS would not be a good security approach.


I feel like security through obscurity is the most prominent example of the Dunning-Kruger effect

Where beginners in security wrongly rely on it entirely

Then intermediate experts proudly proclaim that it's not acceptable

Then expert experts realize it's an excellent part of defense in depth.

-

I mean I know it's a hyperbolic example in your comment, but do you really think leveraging the obscurity of TempleOS wouldn't be an extremely potent way of dodging 99.99999% of malware in existence?

Forcing a tailored attack to breach your isolation is already miles ahead of just running it on your desktop and is most certainly an "acceptable approach", even if it's not infallible.


There are quite a number of vectors that are activated similarly regardless of large swaths of the implementation.

For example: if you're hosting a self-made shopping cart versus an open-source/commercial product, there's probably a higher risk of being vulnerable to SQL injection. It's conceivable that open-source and commercial offerings have been beaten on more and as a result hardened against more attack vectors.

Of course to your point, it means that you're not vulnerable to a zero day that impacts many deployments. I imagine the shops that made their own logging library were quite pleased with themselves when the log4j vulnerabilities came out.

My point being that it's not a simple calculus, and it's really hard to evaluate the relative risks of one versus the other. It's probably best to look at the cost of implementing and maintaining security through obscurity and using that as a litmus test for if it's reasonable or not.

Running something on a non-standard port has a very different cost than making your own operating system.


The container should only have access to the server's keyboard input if you give it the magic cookie to do so. If you want a secure container, you would only grant limited permissions.

https://www.x.org/releases/X11R7.6/doc/xextproto/security.ht...
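
A hedged sketch of what that looks like in practice (file paths are illustrative, and the hostname stored in the cookie has to match inside the container, which this sketch glosses over):

    # Generate an untrusted cookie via the SECURITY extension; clients using
    # it can't snoop on the windows or input of trusted clients:
    xauth -f /tmp/container.xauth generate :0 . untrusted timeout 3600
    docker run -it --rm -e DISPLAY -e XAUTHORITY=/container.xauth \
        -v /tmp/container.xauth:/container.xauth \
        -v /tmp/.X11-unix:/tmp/.X11-unix debian:11-slim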


You're in a thread about mounting $HOME/.Xauthority and /tmp/.X11-unix in the container


The container would only be able to record the screens or key presses that occur on the same X server, right? I never thought about access to localhost services. I'm guessing there's some X server configuration that could prevent that.


Nobody runs more than one X server. And the X server isn't involved with other localhost services.


Wait, I thought distros ran one X server per logged-in user account and put them on different ttys. Out of the box on Ubuntu I can log in as two separate users simultaneously and switch between those unlocked sessions, as well as the login screen, by changing the tty.

If you pass the GPU device node through to an LXC container I believe you can run an X or Wayland server fully inside of it but I'm not sure if changing ttys will work correctly because you might need access to more than just the GPU to handle the handoff with the kernel correctly.


I don't intend to run more than one X server. Most of my GUI applications don't use X.


The parent's setup makes the host's X server socket available to the container, so there's only one X server in that case. In the article, they run a separate X server, which offers some isolation as you note.


All X programs can already log all other apps' key presses.
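
A quick way to see it for yourself, assuming the xinput utility is installed (the device id is a placeholder):

    xinput list        # find your keyboard's device id
    xinput test <id>   # prints key press/release events no matter which window has focus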


I can’t find a source for this. I believe this to be true, but I can’t find evidence of it. Do you know of one?


The original article uses a separate X server with VNC. So those can't.


Thanks, this is much simpler than the original post.

Here's the equivalent Dockerfile + docker-compose.yaml for convenience:

    # Dockerfile
    FROM debian:11-slim
    RUN apt-get -y update && apt-get -y install firefox-esr 
    CMD ["firefox"]


    # docker-compose.yaml
    services:
      firefox:
        image: firefox
        build: .
        environment:
          DISPLAY: ${DISPLAY}
        network_mode: host
        volumes:
          - ${XAUTHORITY}:/root/.Xauthority
      - /tmp/.X11-unix:/tmp/.X11-unix

This way you can run it with just:

    docker-compose up


If you want to remove `network_mode: host`, you need to run Firefox with the --no-xshm flag.
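
A sketch of what that looks like against the compose file above (taking the parent comment's claim as given; MIT-SHM wants memory shared with the X server, and sharing the host IPC namespace is the other workaround one sees):

    services:
      firefox:
        image: firefox
        build: .
        command: ["firefox", "--no-xshm"]
        environment:
          DISPLAY: ${DISPLAY}
        volumes:
          - ${XAUTHORITY}:/root/.Xauthority
          - /tmp/.X11-unix:/tmp/.X11-unix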


Yes, I think few people realize X Windows can be operated over a network as well. I used to do that with the Windows Subsystem for Linux to use KDE's Konsole and other great tools on my Windows machine, using an X server running on the Windows side and a few environment variable tweaks on the Linux side. My colleagues were always confused when they saw the Linux UI seamlessly mixed with the Windows stuff on my display. For the use case mentioned in the article xvfb probably makes more sense though, as that stuff often runs on a server and you don't want to stream the output somewhere to work with it interactively.


This is not running X on any network, just sharing access to the resources that the containerized app needs to find the X server on the same host. There's probably an equivalent incantation for Wayland, too.


Well, that's sort of splitting hairs... It is a unix socket, which at the API level is the same as a network connection, but the kernel skips the hassle of attaching protocol headers and routing the packets.

The example above is writing and reading every X11 message over a socket, and not reaching into /dev on the host, else it would need additional parameters added to the docker command.

I'm not current on Wayland, but as of a few years ago all attempts to run it non-locally required the X11 back-compat layer. I have no idea what /dev nodes would be required to be shared for wayland to run in a chroot.


I found your comment grepping for someone sharing how to do it in Wayland, so here it is for the next person:

    docker run -e XDG_RUNTIME_DIR=/tmp \
           -e WAYLAND_DISPLAY=$WAYLAND_DISPLAY \
           -v $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY:/tmp/$WAYLAND_DISPLAY  \
           --user=$(id -u):$(id -g) \
           imagename waylandapplication
Pretty simple and reasonable really, just needs to use the same Wayland socket and (in order to access it) user ID. From: https://unix.stackexchange.com/a/359244

I suppose if you were adventurous you could give it whatever other known user ID, and start a separate Wayland compositor for it. But I don't know why you would, when surely the point is to get containerised GUIs visible to you alongside others. Containerised multi-seat I suppose?


You got me curious, so I looked it up:

  Transports
  
  To date all known Wayland implementations work over a
  Unix domain  socket. This is used for one reason in
  particular: file descriptor messages. Unix sockets are
  the most practical transport capable of transferring file
  descriptors between processes, and this is necessary for
  large data transfers (keymaps, pixel buffers, and
  clipboard contents being the main use-cases). In theory,
  a different transport (e.g. TCP) is possible, but someone
  would have to figure out an alternative way of
  transferring bulk data.
So, that is why I hear about needing X11 compat for remote connections; Wayland connections are still sockets (not e.g. shared mem), but they require the ability to pass file descriptors through the socket, which can only be done with Unix sockets (and those file descriptors provide access to shared mem).

I don't see any details about how this interacts with OpenGL, like whether indirect rendering is possible or if applications would need /dev mounted.


Thanks for posting this, but I don't think it meets the definition of "pretty simple".


It's verbose, but that's why I included the description 'just needs the same socket and user ID'. That's simple, isn't it? Of course it'll always need something.

It could be made less verbose by encapsulating in a compose file, or by removing the env var to point into /tmp and instead using whatever the default is for the chosen image. Also it's not necessary to specify the value for an env var being passed through with the same name (I just copied from SO).
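
For example, a hypothetical compose equivalent (the image, command, and user ID are placeholders):

    services:
      waylandapp:
        image: imagename             # placeholder image
        command: waylandapplication  # placeholder command
        user: "1000:1000"            # substitute the output of `id -u` and `id -g`
        environment:
          XDG_RUNTIME_DIR: /tmp
          WAYLAND_DISPLAY: ${WAYLAND_DISPLAY}
        volumes:
          - ${XDG_RUNTIME_DIR}/${WAYLAND_DISPLAY}:/tmp/${WAYLAND_DISPLAY}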


> X Windows can be operated over a network as well

Yes, but GP's example is X over a unix socket. Not sure if that counts as network.


> Yes, I think few people realize X Windows can be operated over a network as well.

Because in practice it generally can't; I'm not aware of any Linux distribution not shipping their X server with `-nolisten tcp` by default.
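
Easy to verify locally; a quick sketch:

    # X11 over TCP would be port 6000 + the display number; usually nothing listens:
    ss -tln | grep ':60[0-9][0-9]'
    # Modern Xorg only listens if started with "-listen tcp" explicitly.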


LTSP?


Hah, good point:) Forgot that existed.


So is WSL a must? I've already tried countless times without it and it never worked.


You could do the same thing with User Mode Linux or a virtual machine. WSL is just the easiest/fastest way to run native Linux binaries on a Windows system. If you have an X11 server installed properly on Windows and can run Linux apps over ssh (which I've done with Cygwin, but there are also paid options), then you can run them over ssh to a virtual Linux on your own host, or look for ways to skip the ssh tunnel if they can route packets to each other.

I'm a little surprised the examples don't require fiddling with X permissions, which I always had to do when not using ssh.


Also, this method avoids "xhost +", which is a huge security hole.
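
If you can't avoid xhost entirely, narrow grants are the lesser evil:

    # "xhost +" disables access control for every client on every host.
    # A server-interpreted address scopes the grant to one local user:
    xhost +SI:localuser:root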


I know this isn't the focus of the article, but this license provision in a linked project is outright strange.

>By using this software you agree that the following non-PII (non personally identifiable information) data will be collected, processed and used by the maintainers for the purpose of improving the docker-android project. Anonymisation with respect of the IP address means that only the first two octets of the IP address are collected.

https://github.com/budtmo/docker-android/blob/master/LICENSE...

How can you call a project Apache-licensed when you're forcing people to pay via their data? Why is there no opt-out option?

Absent that this seems like a great QA automation tool. Or a Tinder bot farm...


While nasty, the Apache license does not concern itself with what an application does, only how it is distributed.


The especially weird thing is putting something like that in the license.


According to the documentation about analytics, there is an opt-out.


Where exactly? I'd expect them to mention both the analytics and the opt-out option in the readme.

Just rubs me the wrong way; you can easily use this without knowing it phones home.


Sadly, today this is more common than not, even with OSS. Even Elasticsearch has been collecting and transmitting their telemetry shit for many years now (since well before Kimchi decided to change the license from an OSS-friendly one to a hostile one).


I am actually running a few of my daily applications, such as Firefox, VSCode, or Spotify, inside a podman container (rootless makes me feel a little safer). I built a small Python script around it, which creates a desktop icon, tags the current version (so you can roll back), and updates the images after x amount of time. I'll clean it up and put it on GitHub if someone is interested :)


Isn’t this what Flatpaks are for? Idk if they use podman, but they do use similar sandboxing features.

Having a full blown container like this can be useful in some scenarios, but I think it’s overkill for general purpose apps.


I think they use bwrap, as mentioned in a comment below. My use case is to restrict network access, for example, or to run multiple Firefox instances in parallel (so they don't share the same parent process / cookies etc.), or to restrict memory to 2G per container. There were just a few things I wanted to do that didn't quite work with Flatpak or Snap.


Please do share! I would be interested in seeing it and doing something similar (and use podman for the same reason).


I will! Cleaning it up now, going to publish it on GitHub later. The main idea was a least-privilege approach to running simple desktop applications independent of the host OS, and being able to control filesystem/network access on a per-app basis. (Spotify on Fedora without Flatpak or the RPM Fusion repos, not even sudo needed to install.)


No real need for full Flatpak, Bubblewrap (bwrap) is intended to be a lightweight sandbox providing this out of the box, with Flatpak (and other stuff besides) building upon it. The Arch wiki has a nice introductory page: https://wiki.archlinux.org/title/Bubblewrap


Oh, I didn't know about bwrap yet! If I understand the wiki page correctly, you still need to get those binaries onto your PC. That's why I went with plain and simple Dockerfiles.


Here you go: https://github.com/mody5bundle/capps - please feel free to open issues and pull requests :)


Firejail is really nice...


>>"This would provide a degree of protection against social media platform cracking down on sock puppet accounts being used from single setup because traffic is kept separate for each account and cookie cross-contamination is being prevented."

This can also be accomplished by using Firefox's Multi-Account Containers extension in conjunction with the Container Proxy extension.

The Multi-Account Containers extension is also great for non-dirtbag purposes, for instance testing different user profiles in an application: create one container for User, one for Admin, etc.


I cannot recommend the mentioned x11docker command-line tool enough for this. It takes care of all possible edge cases, supports different output backends (like nxagent or Xephyr), focuses on security, and exposes an easy interface. I'm using it for day-to-day work with VSCode, for example, to fix its atrocious security model, or more recently, to try out JPEXS (Java), IDA (Wine) and AHK (Wine). I especially like that it leaves no config or cache files behind.

The article says you need a compatible image for that, but in my experience, everything launches just fine.


This is also supported directly in VSCode Dev Containers by using just a feature switch[1]

[1] https://github.com/microsoft/vscode-dev-containers/blob/main...


For getting a proper Wayland session I recommend doing this with Phosh and wayvnc (WLR_BACKENDS=headless WLR_LIBINPUT_NO_DEVICES=1 as env vars for starting Phosh and then "wayvnc 0.0.0.0 7050").

You can even have a whole systemd setup in a container using this VNC backend, here in this Dockerfile with CPU rendering (LIBGL_ALWAYS_SOFTWARE=1) to avoid requiring a GPU:

    FROM docker.io/fedora
    RUN dnf -y update && dnf install -y phosh phoc wayvnc sudo socat iproute mutter xorg-x11-server-Xwayland dbus-x11 mesa-libgbm mesa-libOpenCL mesa-libGL mesa-libGLU mesa-libEGL mesa-vulkan-drivers mesa-libOSMesa mesa-dri-drivers mesa-filesystem gnome-shell gnome-terminal
    RUN mkdir -p /etc/systemd/system/phosh.service.d && echo -e '\
    [Unit]\n\
    ConditionPathExists=\n\
    [Service]\n\
    Environment=WLR_BACKENDS=headless\n\
    Environment=WLR_LIBINPUT_NO_DEVICES=1\n\
    StandardInput=null\n\
    TTYPath=/dev/console\n\
    TTYReset=no\n\
    TTYVHangup=no\n\
    TTYVTDisallocate=no\n\
    ExecStart=\n\
    ExecStart=/usr/bin/phoc --exec "bash -lc \"/usr/libexec/phosh --unlocked & wayvnc 0.0.0.0 7050\""\n\
    '  > /etc/systemd/system/phosh.service.d/10-headless.conf
    RUN systemctl enable phosh.service
    # Work around a problem with a missing dbus.service unit file
    RUN mkdir -p /etc/systemd/system && ln -fs /usr/lib/systemd/system/dbus-broker.service /etc/systemd/system/dbus.service
    # Mask udev instead of making /sys/ read-only as suggested in https://systemd.io/CONTAINER_INTERFACE/
    RUN systemctl mask systemd-udevd.service systemd-modules-load.service systemd-udevd-control.socket systemd-udevd-kernel.socket
    ENV container=docker
    RUN groupadd --system sudo
    RUN useradd --create-home --shell /bin/bash -G sudo user
    RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
    ENV LIBGL_ALWAYS_SOFTWARE=1
    RUN sudo -u user gsettings set sm.puri.phoc auto-maximize false
    EXPOSE 7050
    # Set up a /dev/console link to have the same behavior with or without "-it", needs podman run --systemd=always
    CMD [ "/bin/sh", "-c", "if ! [ -e /dev/console ] ; then socat -u pty,link=/dev/console stdout & fi ; exec /sbin/init" ]
Use as: podman run --systemd=always --rm -p 7050:7050 imagename


Very cool! systemd is so underrated and underappreciated in the container world.


Is Sandboxie still around? It used to do that (containerised GUI apps) pre-Docker on Windows very effectively.

Ah (edit), there it is:

https://github.com/sandboxie-plus/Sandboxie


Reading this makes me want to revisit running i3wm in a docker container on macOS. After having tried Amethyst (current daily driver), yabai, and hammerspoon, nothing comes close to i3wm, in my experience.


I looked into the equivalent setup for WSL a while ago. The thing I could never solve was how to manage the host's windows via the virtual environment's WM. Without that integration, the setup is just extra config for no gain.

Because if most tools belong to the VM, then just run a VM in full screen and use that. No need to have the display server shared if you're using mostly Linux tools that display on the VM's X server.


That’s a good point. I’ll try out the VM route and see how that goes. I mainly just want to tile the web browser, some docs, a terminal emulator, and vim/an IDE, and be able to switch between layouts and apps seamlessly; a VM does seem like a good option. Kinda sad that the macOS WM is a joke for all this compared to some of the TWMs out there.


Additional reading on this topic, if you fancy it. This is the first one I read regarding Docker for GUI apps:

https://blog.jessfraz.com/post/docker-containers-on-the-desk...


Distrobox lets you do the same (using docker or podman) imo, and is pretty good.

https://github.com/89luca89/distrobox


Distrobox (or Toolbox) is a nice tool[0], but honestly the reliance on Podman doesn't make a lot of sense to me. The necessary container functionality is available directly from the kernel[1] with little of the complication that Podman brings along, and from what I can tell[1] it is pretty straightforward to pull docker and OCI images with simple HTTP interaction (or wget/curl) and extract with tar (rough sketch after the footnotes). This could be a stand-alone statically compiled executable with no dependencies that could work on any Linux distribution with no hassle at all, but isn't for some reason[2].

[0] I was exposed to it via Fedora Silverblue, where something like it is a necessity since the base OS is immutable. I've found it very useful even outside that use case however.

[1] Seems to me you could even use bwrap if you were reluctant to use syscalls directly.

[2] having done a little research towards creating my own tool because I'd rather not depend on a package that isn't available in many LTS distros, among other reasons.
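
Roughly what that HTTP interaction looks like against Docker Hub (a sketch, assuming a public image and jq; real images have multiple layers, and manifest lists add a step):

    # Fetch a pull token, then the image manifest:
    TOKEN=$(curl -s 'https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/debian:pull' | jq -r .token)
    curl -s -H "Authorization: Bearer $TOKEN" \
         -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
         https://registry-1.docker.io/v2/library/debian/manifests/11-slim
    # ...then fetch each layer blob by digest and unpack it into a rootfs:
    # curl -sL -H "Authorization: Bearer $TOKEN" \
    #      https://registry-1.docker.io/v2/library/debian/blobs/<digest> | tar -xz -C rootfs/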


This is what Flatpak (https://flatpak.org/) is designed for... why would you use Docker?


Because Flatpak imposes some design choices that are... less than optimal for porting existing applications, and if you ask them to change, the maintainers will insist this is the way to go.

They have told users that if they want, for instance, Jetbrains IDEs to work, they should simply get a job at Jetbrains and convince them to rewrite their entire IDE to support the flatpak model.[1]

This is the reason I avoid distros with Flathub enabled by default. Half the software on there is broken in some pretty substantial way, and nobody at Flatpak or Flathub cares. They really need to realize they're not Apple; they simply can't tell everyone to do things their way and hope to build a working and reliable ecosystem.

They have the equivalent of snap's classic confinement, but it needs to be explicitly enabled every time you launch an app.

[1]: https://github.com/flathub/com.jetbrains.IntelliJ-IDEA-Commu...


Seems like just allowing the user to specify 'unconfined' as an override. Then the user would just need to do that with flatseal. Alternatively, and with much more needless complication, some kind of fuse-mounted /bin directory that acts as a portal could be used. Point is there are solutions that don't require huge perversions of the Flatpak model and also don't require Jetbrains to rewrite everything.

Personally I like that the Linux Desktop community is finally starting to wake up to the idea of universal application distribution that doesn't require armies of unpaid third party maintainers, and while Flatpak is definitely not perfect it is, in my opinion, a huge step forward in that regard[0].

[0] Not that the idea is really that new, there have been many attempts at bringing sanity to Linux application distribution, many of them better (IMO), but they've just never really been embraced by the community the way Flatpak has.


That's the problem, the flatpak developers outright refuse to compromise.

The person I linked above is in fact a flatpak maintainer.

Also, I hate that they are shipping broken software. Almost nothing in that IDE works properly, yet Fedora now shows it by default when I search software.

At least snaps work most of the time.


You're under the impression that they can compromise here, when this is false. That's a misreading of that comment. The "compromise" used by snap is to just not have sandboxing at all. Mounting /bin fundamentally breaks the sandboxing, it's never going to work. The only other option is to change the entire OS to support this (and maybe the kernel too), and that's going to break everything even more. You'll have the same problem attempting to do this in docker.

The main mistake you're making is thinking that Flatpak imposed this design constraint, when really it's a fundamental property of the underlying OS that sandboxing solutions are forced to work around. It's possible to fix this with a distribution built in a very specific way, like NixOS, where you can theoretically mount different systems together, but that comes with its own set of problems and things that it breaks.


If you can't make it work, don't ship it. Please. In this instance, not only is the terminal broken, but so are large parts of code analysis, task running, compiling, testing... essentially everything but the editor. And this experience is essentially universal. The next app I installed had sync support that didn't work because the sync working directory was unwritable.

It makes me want to rip out flatpak from every system I see.


I agree the quality of packages on flathub is pretty inconsistent. That part sucks. They're all made by third party packagers.

You could make a curated repository but that would just be taking flathub and removing a lot of packages.


IDEs usually expect/require access to all sorts of things on the host; tools like VSCode can be used for remote-container support on the host, etc. But it's still an area that needs improvement.

Personally I just layer my IDEs with ostree as a compromise.


Another solution, which makes both Wayland and X11 applications accessible from the browser (disclaimer: I am the author; it's also still very much under development): https://github.com/udevbe/greenfield


One advantage of running GUI apps within a Docker container is that it is, in my experience, totally trivial to put RAM/CPU quotas on piggy apps. Sure, you can set quotas by other means, but it can get tricky (if I'm not mistaken it's particularly tricky with processes that fork themselves like there's no tomorrow).

Using a Docker container, which you may already have anyway, setting CPU/RAM quotas is a one-liner.
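
For example, with the standard `docker run` resource flags:

    # Cap the container at 2 GiB of RAM and 1.5 CPUs:
    docker run -it --rm --memory=2g --cpus=1.5 \
        -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix debian:11-slim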


If that's all you want, it's easier to use systemd-run and start your process as a transient service.

I do this to put many of my resource-hungry apps like FF in slices.
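
For example (MemoryMax and CPUQuota are standard systemd resource-control properties):

    # Run Firefox in a transient user scope with resource limits:
    systemd-run --user --scope -p MemoryMax=2G -p CPUQuota=150% firefox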


Can you also add bandwidth quotas?


That's a good question; I never tried, so I don't know either.


I use some projects that use VNC and have also set up noVNC on them.

Here’s an example of one: https://github.com/accetto/ubuntu-vnc-xfce-g3

noVNC just allows accessing things via a browser too. I use it for quick checks, but VNC when I want a client.


Truthfully, I think systemd-nspawn and Firejail are better solutions than Docker for GUI apps and desktop use.


This is fun, I did it as well to get an old flash player from 2011 working in modern Linux: https://raymii.org/s/tutorials/Running_gnash_on_Ubuntu_20.04...


That's exactly what https://subuser.org does.


Looks like it's the only solution in this thread leveraging Xpra.

https://www.xpra.org/


jessfraz maintains quite a collection of Dockerfiles you can borrow: https://github.com/jessfraz/dockerfiles


Of course you can also play DOOM in docker:

https://earthly.dev/blog/dos-gaming-in-docker/


I love the idea of this. I’ve struggled to get a setup like this to work on macOS though. Which X server is the “right” one to use? Which is the easiest? Is there some way to make it work seamlessly with built-in tools?

As an aside, instead of talking about growth hacking and OSINT, I think this whole thing could be a lot more relatable if the author chose a simple motivation like privacy. Another comment here mentioned subuser (?), which clearly highlights that not everything should be trusted with full access to your system.


It sounds like you missed the x11vnc part of this. You connect to the container using VNC, not X11, and potentially via a browser-based VNC client. If you want to see a working example of a GUI app running successfully in Docker, check out https://github.com/jlesage/docker-handbrake


The VNC solution was one of several presented in the article; the second was using the host X11 server by sharing the socket.


I've used xquartz for this on macOS in the past.


Distrobox is handy for this: https://github.com/89luca89/distrobox

It just lets you reuse any docker image on your desktop (both podman and docker), and has some nice features like the ability to create a launcher shortcut on the host and transparently mapping your home directory. I switched to it full time, all my CLI work is just done in a container, and it's pretty transparent.
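
The typical workflow, for anyone curious (the container name is arbitrary):

    distrobox create --name deb11 --image debian:11   # container from any OCI image
    distrobox enter deb11                             # shell inside; GUI apps show on the host
    distrobox-export --app firefox                    # run inside: adds a launcher on the host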


Oh hey, I just automated this a bit to run the Linux version of Scantailor Advanced on a Mac. I contributed it back upstream. It's not too complicated, but it took a while to get it working. https://github.com/ryanfb/docker_scantailor

Edit: this method uses xquartz on the Mac side, and socat to bridge between a network port and a file socket.


Nice article.

ps. you know, for some people it's very hard to read bright text on a black background, especially if you're flipping from the opposite contrast.


Agreed; I bring this up all the time. FYI, it's about 40% of the population, according to several decades of research.


I've been doing this for quite a while and I am happy with the approach, especially with applications that tend to pollute my machine in every folder possible, like IntelliJ:

https://github.com/madduci/docker-intellij


For macOS, you can use -e DISPLAY=host.docker.internal:0

If your Docker version is >20.10.x, this also works with Linux.
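
Put together, a sketch assuming XQuartz with "Allow connections from network clients" enabled:

    xhost +localhost    # let local network clients talk to the X server
    docker run -it --rm -e DISPLAY=host.docker.internal:0 debian:11-slim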


And how do you launch a GUI app from Docker via ssh?


ssh -X will do, but there's a project in this thread using Xpra. Assuming your Docker host is remote, that's probably better.

https://www.xpra.org/


Run X and set the display host?


What about in Wayland environments?


I've successfully used systemd-nspawn with my native Wayland session and nested Wayland sessions.


Any way to do this on Windows?


Sandboxie


> This is beneficial for things like social media management, growth hacking (either via social media automation or manual labour done by VAs) or OSINT investigations.

Ugh, sounds like the software equivalent of using a huge stack of ingenious hardware just to do something parasitic like high-frequency trading.


The author seems to focus on something called "growth hacking", which seems to boil down to social media fraud akin to the cyberwarfare psyops various nation-states have been doing.

Fast capitalism, to me, seems to be an ideology entirely centered on an authoritarian we-versus-them mindset where profit is king, civilian collateral be damned.


Well, growth hacking is a buzzword for a marketing position at a company encountering the steep part of the growth curve. I suppose one of the innovations is that a widget becomes more attractive to a segment of the population when you bolt on some social functionality, which is what you're seeing.

At the top of growth is incumbency. I think that’s where your second paragraph comes in. Incumbents surveil innovations, and “growth hackers” don’t like to talk about this obvious conclusion. They’d rather inhabit a role that concludes once growth has succeeded and slowed.

This is no more revolutionary than traditional marketing. Marxism talks about capitalist marketing as a way to channel alienation among working classes away from collective organization. But, Marx from the beginning needed to market his ideas to different groups. He needed those groups to realize that they needed his political orientation. He tries to do this with abstractions, and explicitly gives up on whole classes he deems unfit to understand those ideas (the lumpenproletariat).

As the product of a growth hacker reaches incumbency, marketing gives way to public relations. We might consider Cambridge Analytica public relations hackers. Same level of statistical sophistication, same single-focus of corporate promotion over all other advancements.



