At least the $6.22 shipping cost to my European country is reasonable, and it's the same for two CHIPs (3: $7, 4: $9, 5: $11). I recall it was much higher during the Kickstarter (and they worked to reduce it, as it seems from the campaign page).
Edit: PocketCHIP shipping is $11
I'm going to be posting my own soon using the new Pi Zero w/ camera connector - made my own text message powered camera doorbell.
Spend a bit more (say, $30) and the quality will greatly surpass expectations.
Might also google office or company liquidators in your area. You can often snag IT equipment and servers for fairly cheap from them as well.
2.66 GHz quad-core Xeon, 16 GB RAM.
Not exactly the original poster's specs, but pretty close.
Other than that, the default GPGPU picks are either the OG GTX Titan or the 780 Ti since either has the same processor as the Tesla compute cards.
Personally I've bought from this one: http://www.aliexpress.com/item/NodeMcu-Lua-WIFI-development-...
So far I've tested 3 units and they work without any problem; the packaging was also very good.
I don't see any cheaper than those currently, but once you have everything tested, if you want to cut costs you can always use the ESP8266 directly, which goes for $1.80.
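If you do go the bare-module route, you flash the NodeMCU firmware onto the ESP8266 yourself. A rough sketch using the esptool utility, assuming a 3.3V USB-serial adapter that shows up as /dev/ttyUSB0 and GPIO0 held low to enter the bootloader (the port and firmware filename are examples, not gospel):

pip install esptool
esptool.py --port /dev/ttyUSB0 --baud 115200 write_flash 0x00000 nodemcu-firmware.bin

After a power cycle you get the Lua prompt over the same serial line.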
Furthermore, their processing power is orders of magnitude apart, as are RAM and storage. The NodeMCU targets embedded use, and there are plenty of cases where an RPi is used but a NodeMCU would be more than sufficient. But that doesn't mean they're equivalent; there are also plenty of use cases where a NodeMCU is horribly underpowered.
They also have the Orange Pi PC Plus, which has both ethernet and wifi, 1GB of RAM instead of 512MB, and an 8GB eMMC. But that's getting up in price to ~$25 shipped.
Does anyone here know?
¹ — https://en.wikipedia.org/wiki/Coaxial_power_connector
EDIT: Just realized that’s the same size barrel connector used for Sony’s PlayStation Portable², PlayStation TV³, etc., and the power supply⁴ those devices come with is 5V ⎓ 2A with centre-positive polarity, so it seems like they’ll be perfect for use with the Orange Pi One.
¹ — https://en.wikipedia.org/wiki/Polarity_symbols
² — https://en.wikipedia.org/wiki/PlayStation_Portable
³ — https://en.wikipedia.org/wiki/PlayStation_TV
⁴ — http://f.cl.ly/items/3I3N1m0U1E3o3I1q0v19/sony_ps_tv_ac_adap...
If not -- why‽
There are several standards, and at least some of them specify voltage ranges for particular sizes, but there's no universal standard.
List of plug sizes and their common uses: https://en.wikipedia.org/wiki/Coaxial_power_connector#Listin...
edit: outer diameter 4mm
BTW, I'm criticizing the fact that the shipping costs are not easy to find upfront, not that they exist at all. I'm even acknowledging that they worked to reduce them.
So you want to know more than a country (perhaps a postal/zip code) before showing me the shipping price? Then I simply don't bother going further and look for another vendor (that would not work here, I know).
It's often advertised as a 35 dollar computer, etc.; however, when I decided to get one, all the recommended partners and distributors were selling it for $40-45 minimum, excluding tax and shipping costs.
It's termed the $35 computer because components are purchased in dollars, so that's the base price without having to take into consideration the daily fluctuations in currencies.
It's manufactured in the UK, however (Wales, specifically), so there will be inevitable shipping costs to destinations outside the UK. This is simply unavoidable and will depend on factors such as the price of oil.
I just mean that there have been times I've looked for one only to find marked up prices before shipping/other costs.
It'd just be nice if the costs were itemized rather than silently rolled into the price without explanation.
This most likely also includes handling costs, which in this business are likely going to be one of the biggest expenses once you've conquered production.
They must be Amazon Prime members.
Not every international person knows what is usual in the US. For example, in Germany prices always include taxes.
My dad ran a restaurant when I was a kid in a decent part of town. I remember him being happy and having all these friendships with customers. Due to economic issues he had to re-open in a worse part of town and offer a more fast-food-like menu. Holy hell, every customer was just itching for an argument about, well, everything. Getting extra free stuff, complaining about price, being very rude, etc. It was quite the eye opener. I can re-experience this anytime I dare visit a Walmart (we don't go there anymore).
This is why entrepreneurs and businesspeople always say not to compete on price unless you have to, but instead on service or quality. Price just leads down a rabbit-hole of misery and ultimately hurts the customer who, for a little more, could get a vastly better experience. In my personal life I make a special effort not to be drawn to the low end, as I tend to min-max things. I fight to stay a step or two above the low end, and every time I fail to do this I usually regret it.
* powered by Allwinner R8 (ARM Cortex-A8) with some proprietary bits
* Debian-based CHIP O/S preinstalled on 4 GB flash
* one micro USB port for power (supports USB OTG if powered by battery)
* power connector for battery
* one USB 2.0 port
* one TRRS port for audio and composite video
* built-in WiFi and Bluetooth
* VGA adapter available for $10
* HDMI adapter available for $15 (no audio)
* case available for $2
They don't support the open source community (they don't provide an open API for every part of the chip), so you either use their blobs or don't get access to everything the hardware can do.
And AFAIK they're a heavy GPL violator:
Because in every image on the main page (the row with three possible use scenarios), it's shown connected to a display.
This IS freebie marketing.
Not at all. A complete brand-name HDMI connector assembly costs 29 cents in 10K quantity.
The Raspberry Pi machines use a family of Broadcom chips. These include some kind of general-purpose processor built into the GPU (sometimes called the VC4 or VPU). During operation, it runs an RTOS that handles OpenGL ES calls, translating them to QPU instructions. During boot, it's the first thing brought up; it initializes the RAM, loads the next level bootloader, then starts the ARM CPU. There's an open-source video driver that does the OpenGL work (and I think actually supports OpenGL, not just the ES variant). There's work being done on making a bootloader replacement, but it's in the fairly early stages (https://github.com/christinaa/rpi-open-firmware).
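On a running Pi you can query that firmware from Linux; for instance, with the stock vcgencmd tool on Raspberry Pi OS (a small illustrative sketch, which talks to the VPU firmware):

vcgencmd version      # build date/version of the VC4 firmware blob
vcgencmd get_mem arm  # RAM allotted to the ARM side
vcgencmd get_mem gpu  # RAM reserved for the VPU/GPU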
Maybe someone who knows about the CHIP itself will chime in with more detail on how it compares.
Anyway, I've ordered two pieces. They're probably going to gather dust alongside my Raspberry Pis and Arduinos once the initial excitement has worn off. :)
[EDIT] OK there's lots of info on the hardware, just not easy to find on their sales page: https://github.com/NextThingCo/CHIP-Hardware/
Not to mention the CHIP uses an Allwinner processor, which has a record of not playing well with open source and a history of security issues.
$4.24 all up. Not quite under $4, but pretty damn close.
US6932640 | Oct 22, 2004 | Aug 23, 2005 | Yun-Ching Sung | HDMI connector
US7059914 | Feb 20, 2004 | Jun 13, 2006 | Advanced Connectek, Inc. | HDMI plug connector
US7192310 | May 16, 2006 | Mar 20, 2007 | Cheng Uei Precision Industry Co., Ltd. | HDMI connector
US20060148319 | Mar 3, 2006 | Jul 6, 2006 | Advanced Connectek Inc. | HDMI type electrical connector assembly
US8500489 | Jul 15, 2010 | Aug 6, 2013 | Luxi Electronics Corp. | HDMI locking connectors
Legit question, not sarcasm or anything.
HDMI got their foot in the door first, but I think we'll be seeing more DP stuff in the very near future. The only downside of DP is the cables are usually more expensive.
If the rest of the HDMI adapter board looks anything like the VGA adapter, with just passives and a connector, then they would be making a profit charging only $3 for it, let alone the $15 they are asking.
The annual fees are $5-10k for each (even at small volumes), and they may just be trying to make up the cost.
Compared to the original RPi which required an $11 WiFi USB dongle and a powered USB hub this is a lot simpler. I primarily used it as a headless sensor node or wireless/networked LCD display. It's perfect for that and still one of the lower-cost options even after shipping $$.
Their documentation (http://docs.getchip.com/) and forum are actually pretty great. I think this will be a good contender if/when they reach general availability.
I love that it includes a game dev kit that includes a music tracker...that's what I want it for. I have an original GameBoy for making music with LSDJ, and it's a lot of fun. But, it is difficult to find good condition GameBoys for anything approaching a reasonable price these days. I'd love to have something a bit more modern with the same basic feel and sound.
The PocketCHIP has the advantage of having a "real" computer inside and a QWERTY keyboard, so if I get bored with four note polyphony, I could run something like SchismTracker or SunVox or whatever. It is in the sweet spot for me for this kind of device, in a way that the Raspberry Pi hasn't been (though the Pi is cool, too).
There is one thing that bothers me, however. With the Open Pandora, the community has been amazing - and much of that has been because the focus is on developers. The online repo of apps for the OpenPandora is a true treasure trove of amazing things (see http://repo.openpandora.org/) - an app store done right, in that you have total freedom to do whatever you want with the platform, but you can also just have the plug 'n play experience of browsing a well-curated and maintained list of apps, which can be installed with a single click - no BOFH'ism required. The Pandora has proven to be a very good balance of free and curated apps. Developers can make money as well, if they choose to, and for the most part the community has been very generous towards the key devs pushing the platform forward.
However, this doesn't seem to be a key strategy for the guys behind CHIP, who are a bit behind the ball in setting up a common, community-focused repository for developers to contribute to... alas, it seems that it's going to be a total free-for-all with CHIP development. The best we will have is "at least we can push our own .deb's up on a website somewhere to distribute our software".
I seriously hope that, when the PocketCHIP starts to launch (it's trickling out now, and will be ramping up towards the end of the month), the NextThing guys will realize that they've got to get on top of this issue before someone else does - it'd be quite feasible, for example, to turn on a "PocketCHIP apps" section of repo.openpandora.org, and if NextThing doesn't do it - someone will. Such is the nature of the Open Handheld community.
As a developer and user, I'd much rather have an 'official' repo, with curated apps and quality control for the end user, than just a free-for-all wild frontier of .deb's being passed around by all and sundry.
Actually, what I'd really like to see happen is the guys behind the OS for the Open Pandora/Pyra consoles working in coordination with the NextThing team, so that maybe - just maybe - all systems could be running the same basic OS core. There really isn't any good reason for this not to happen - it's only because of politics and control issues and NIMBY'ism/DRY'ishness that it's not on the table at the moment.
Exactly. Now that the Pyra will run Debian instead of Angstrom, and the CHIP runs Debian as well, there could be an option of unifying. Angstrom was kind of horrible to work with, IMHO; I use my Pandora a lot, but from a Debian chroot.
The PocketCHIP, on the other hand, looks more interesting.
But this is the reality: you can't make a device like this, for such a low price, if you want to keep it open and available to your customers without having to make serious compromises. The Pandora has excellent hardware controls you won't find anywhere else - the nubs are superb - and is an entirely grass-roots effort: designed, manufactured and supported by a rag-tag team of hackers who are doing everything they can to build the ideal device that we all like. The price reflects the economic reality of the circumstances.
And this is proven again with the Pyra, where the community is self-funding all of the development, manufacturing and support costs - a true, well-managed startup. Perhaps things will get cheaper when the money is on their side to afford massively larger scales of manufacturing - but remember, there are only going to be 500 Pyra at first. That price helps get the next 500+ Pyra made. This was true for the Pandora too - it wouldn't have been able to survive as long as it has, and evolve into such a cool product, without traditional consumer-level economics of scale being discarded by the community and early adopters. We're paying a fair price for an amazing machine, getting value you will find nowhere else, entirely because the economies of scale are so difficult. If (when) Pyra goes well, there will definitely be opportunities for the price to come way, way down. But for now, those of us who can invest in the product properly are the ones pushing it forward.
Never forget: Pandora and Pyra have been a real hacker-oriented project from the very beginning, and probably will be well into the future. Nobody but us (well, Evildragon & Co.) controls this, and it's been kept on the rails as a project so far precisely because the costs have been managed at a scale that is acceptable to those of us who understand what is being built here: the ideal, pocketable, 100% OPEN, Linux workstation platform.
First image shows a battery connector top right.
Although the Raspberry Pi Zero seems interesting, I don't know if I can plug a minimalist, small, and cheap screen into a mini-HDMI port. Overall there is no point using a classic screen on such tiny devices.
This seems to compete with the Raspberry Pi Zero, and the RPi Zero doesn't have WiFi.
It would seem like you can, or is there something I'm missing? Is it a concern with how you'd actually do the initial setup?
20 seconds from opening the box to bash$
What bums me out is that there is no easy-to-find board of the CHIP/Pi Zero ilk that comes with an Ethernet port. I know I can get a regular Pi, but it's too much for my use case.
On a related note, I've been looking for a low cost smart power plug with ethernet (10/100/1000) without much success. If anyone knows of such a beast, please let me know.
IMO, $80 for something like this https://www.amazon.com/ezOutlet-Internet-IP-Enabled-Android-... is too much
I am looking forward to them starting to deliver.
I hope that VAT won't be too high.
edit: looks like one of them can run in OTG mode (i.e. client), that's wonderful!
Great game and astonishing feat for a $9 computer!
Why carry a laptop, when the location you're going to has a projector screen you'll use, likely has a keyboard (or you carry a portable input device), and a power supply? And your files are cached in your favourite cloud.
Make a nice looking case for these, and they're impressive novelties, lighter than the lightest laptop, and probably a bit more stable than driving a projector from a phone.
Of course, the projector currently doesn't run a general-purpose operating system, so your suggestion is more useful if you need to do something beyond showing slides or video.
Especially in the context of a hobby/experimental system.
I grew up on Solaris - my first ever UNIX was a Solaris 2.5.1 system on a SPARCstation 20. I've been running Solaris on Intel since my first Pentium 90 workstation, on Solaris 2.5.1.
Since I know how to build and package software for Solaris, I have everything I could ever want or need on it.
It's a comfortable system, and it's elegant, once one fully understands all of its capabilities. And it's extremely reliable and high-performance, especially on Intel processors.
For some context, I am forced to work on Linux and I spend my entire working day working on it. Compared to the reliability and ease of use of Solaris, I have grown to dislike Linux in the extreme. If you are thinking, "but that is insane, Linux is so great, how is that possible!", remember that I grew up on UNIX, so I have different criteria for what is comfortable and reliable (even in terms of development) than your average Linux user or Linux system administrator does. I dislike the GNU tools and userland (with very few notable exceptions) because I'm used to AT&T System V tools and that is how I expect the tools to behave; the GNU toolchain usually frustrates me to no end. Working with Linux frustrates me just as much (I do professional development and system engineering on it).
For example: --some-long-option comes to mind, or the lack of proper manual pages ("see the texinfo page"), the lack of backwards compatibility, tar -z (tar is a tape archiver, not a compressor!), and so on, and so on... I miss my ZFS, I miss my mdb, I miss my dbx, I miss my SMF, I miss my fmadm, I miss the simple and effective handling of storage area network logical units, I miss the Fibre Channel stack that actually works... I don't have any of those issues on illumos based systems, but it drives the point home:
the last thing I want is yet another Linux based computer. I have enough of that as it is at work - almost 71,000 servers, 49% of them running Linux, and it sucks.
What are you even talking about here? man/info works wonderfully, and if I want more readable information a terminal sure as hell isn't going to give it to me more easily than searching a wiki. And Solaris absolutely had problems with documentation for its larger packages.
> lack of backwards compatibility support
Hardly even a real issue if you actually maintain your damn systems more than once every half decade.
>tar -z (tar is a tape archiver, not a compressor!)
... It still is a tape archiver AND a compressor AND 100 different but completely valid and usable things.
ZFS absolutely is usable.
Why do you enjoy DBX over GDB?
SMF? One would think you would love and embrace systemd.
FibreChannel stack that works?
I can't refute all that you have said here since I am not familiar with all of it. But, have you considered you are just doing it the wrong/difficult way?
Like I wrote before, on UNIX we have different expectations in different areas than what people are used to and accept as given on Linux. The focus is different on UNIX.
Apropos dbx versus gdb: dbx has a 1,000 page manual, and makes it really easy to step through assembler code while listing the original source. How many pages of documentation does gdb have again? On top of that, gdb doesn't even fully support my OS; I don't think gdb properly supports anything that is not Linux... hmmm, that reminds me an awful lot of the Microsoft Windows monoculture.
systemd versus SMF: systemd is a shoddy copy of SMF with a Windows twist, trying to replace every service in the system. Unlike SMF, which is part of the fault management architecture, which is part of the self-healing technology, systemd has no such concept; self-healing and a contract filesystem are science fiction for systemd. SMF watches over services, but it doesn't try to replace them: "do one thing, and do it well."
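For those who have never seen it, the day-to-day SMF workflow is a handful of commands; a quick sketch on illumos/Solaris (the ssh FMRI below is the stock one):

svcs -x                                   # explain any services in a degraded/maintenance state
svcs -d svc:/network/ssh:default          # list a service's dependencies
svcadm restart svc:/network/ssh:default   # restart under the restarter's supervision
svcadm clear svc:/network/ssh:default     # clear maintenance state after fixing the fault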
InfiniBand is a different technology from Fibre Channel.
GDB also works on a large number of platforms: Windows, Linux, NetBSD, etc.
>>>However, its use is not strictly limited to the GNU operating system; it is a portable debugger that runs on many Unix-like systems and works for many programming languages, including Ada, C, C++, Objective-C, Free Pascal, Fortran, Java and partially others. 
>hmmm, that reminds me an awful lot of Microsoft Windows monoculture.
What? They actually support Windows, which is exactly the opposite of what you are trying to say here... I use GDB DAILY on Windows (at work) with zero issues.
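For what it's worth, gdb does source-interleaved assembly stepping too; a minimal sketch, assuming a reasonably recent gdb (the /s disassembly mode needs 7.11+) and a toy prog.c:

cc -g -O0 -o prog prog.c
gdb -q ./prog -ex 'break main' -ex run -ex 'layout split'
# inside gdb: stepi/nexti step one machine instruction; 'disassemble /s' interleaves source and asm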
I'll agree that perhaps systemd doesn't cover all use cases or wants. But calling it a shoddy copy of SMF with a Windows twist is disingenuous. I don't care for the for-or-against systemd arguments, but after the initial reaction/learning phase when pulling away from upstart/sysv/init based shit/etc, many of us are actually starting to warm up to systemd. It handles services wonderfully, it handles logs wonderfully; perhaps it's a bit bloated, but whatever, you can always revert to what you want if you decide to spend the time to actually do it.
>InfiniBand is a different technology than fiberchannel.
Fair enough, I'll have to read up more on it then.
You are making quite a lot of generalizations without doing proper research. If you want to be stuck in your "In the old days us Unix people had it right!" mindset, then this discussion is pointless. Otherwise I would love to continue butting heads on this.
`info gdb` is completely unacceptable, and an outrage: the standard documentation on UNIX is manual pages, not to mention that systems other than GNU/Linux do not use GNU info.
> Man pages are quite limited correct,
Incorrect; manual pages are rendered by the nroff document typesetting system. Entire books have been typeset for printing with its typesetter sibling, troff. Case in point: the UNIX Text Processing book, the AWK book, the ANSI C book. The system is extremely flexible and very powerful, once one understands what is going on. When you hold the printed versions of these books in your hand, you can see that they are beautifully typeset and rendered. Brought to you by the same programs which render UNIX manual pages when you type `man some_command`!
What you see on the screen (on UNIX, cannot vouch for Linux) when you type `man ls` is an actual professional typesetting system rendering the content for stdout instead of a printing press!
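To see the machinery for yourself, here is a toy manual page and how it gets rendered; a minimal sketch (hello.1 is a made-up page, and on Linux the nroff command is usually groff's compatibility wrapper):

cat > hello.1 <<'EOF'
.TH HELLO 1
.SH NAME
hello \- greet the user
.SH SYNOPSIS
.B hello
.SH EXAMPLES
Running hello prints a greeting.
EOF
nroff -man hello.1 | less    # essentially the same pipeline man(1) runs for you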
> I don't care for the for or against systemd arguments but after the initial reaction/learning phase when pulling away from upstart/sysv/init based shit/etc, many of us are actually starting to warm up to systemd.
That's because you haven't had the opportunity to enjoy SMF. When you've worked with SMF, systemd looks like a cobbled-together toy. For example, systemd turns ASCII logs into binary format, just like on Windows. This in turn goes against the UNIX philosophy of
Write programs to handle text streams, because that is a universal interface. [McIlroy]
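To make that concrete: with classic syslog the log already is a text stream, while the systemd journal is a binary store you query through journalctl to get text back out (log file paths vary by distribution):

grep sshd /var/log/auth.log | tail                 # classic syslog: plain text, any tool works
journalctl -u ssh.service --since today | tail     # journald: text only via the dedicated tool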
> You are making quite a lot of generalizations without doing proper research.
That is quite ironic, telling that to someone who does professional system engineering and software development on GNU/Linux for a living. I have been doing UNIX and Linux professionally since 1993, and working with computers in general since 1984; how many years is that? I spend every waking moment of what free time I have researching UNIX and Linux. To tell me that I'm "generalizing without doing proper research" just because I am not succumbing to GNU/Linux groupthink is what one could call disingenuous.
In fact, TeX is used and preferred over nroff and others for a huge majority of physics/mathematics academic journals, and quite a bit outside of them. [0 - 3]
I will admit that for stuff I already know and understand well enough to be considered proficient with, man pages can be quicker. For something I just installed and still need to learn, info pages provide a much better platform.
You may find the following link enjoyable to skim through.
> What you see on the screen (on UNIX, cannot vouch for Linux) when you type `man ls` is an actual professional typesetting system rendering the content for stdout instead of a printing press!
Love the enthusiasm but (La)TeX falls into that description as well.
> That's because you haven't had the opportunity to enjoy SMF.
Maybe, I've put it on my list of things to tinker with more. Thanks for the link.
> That's is quite ironic [...] I am not succumbing to GNU/Linux group think is what one could call disingenuous
I don't care about you succumbing to any groupthink or whatever other word you can come up with. I am trying to show you why it is actually superior in many ways. Just because you are comfortable with nroff absolutely 100% does not make it better. To put it simply, you may be a professional system/software engineer, but if you can't keep up with why these systems are considered (and shown to be) better than what you have now, then you will just continue to be frustrated and fall behind.
Quoting from the link above:
ADDENDUM: While not strictly relevant to the question, note that man pages are still considered the standard documentation system on free Unix-like systems like those running atop the Linux kernel and also the various BSD flavors. For example, the Debian package templates encourage the addition of a man page for any commands, and also lintian checks for a man page. Texinfo is still not widely used outside the GNU project.
Which I can confirm and concur with. Long story short, I would forget GNU info, because it is an invention not suitable to the task at hand, which is efficient and fast lookup of information in a reference manual.
LaTeX is a successor of TeX, which was designed with the goal of writing academic research papers, with a specific focus on mathematics research, not writing reference documentation; it is great for what it is designed to do, however it was not designed to be an online reference manual system, and it shows in the browser-like nature of the GNU info usage paradigm.
Manual pages have a certain structure, which, when one understands it, makes them extremely efficient at locating the information:
SYNOPSIS - shows me the valid forms of using the command in question, in one to three concise lines.
OPTIONS - lists all the available options which might not be present in the examples, but which I might need.
EXAMPLES - the most important part of a manual page; on GNU/Linux this part is usually non-existent, but on UNIX the EXAMPLES section is almost always there, and it almost always contains several detailed treatises on how to use the command, system call, or library in question. After SYNOPSIS, this is the first part I jump to with the "/" character (forward search in less(1)), and it often contains enough information for me to start using the program in question and be productive immediately.
SEE ALSO - if I cannot remember exactly which command I am looking for, but I know commands related to it, just by calling up the manual page of the related command I can look in the SEE ALSO section and find the manual for the command I could not remember.
FILES - tells me which files are affected. This information is vital for knowing which files to inspect, monitor, or modify.
AVAILABILITY - sometimes I just need to know which package a file or a command belongs to, whether it is multithreading-safe ("MT safe"), or whether the interface I am about to use is stable, uncommitted, deprecated, or external; the AVAILABILITY section will tell me that. This section also does not exist on GNU/Linux, where it is science fiction for the developer to have even thought about forward and backward compatibility; often the Linux developers are so undisciplined that they do not even deliver built-in documentation, and the manual page is written by someone else as a placeholder, so the AVAILABILITY section won't exist in it, because the third party that wrote the manual page cannot know that. For example, Debian GNU/Linux often has such manual pages. That is unthinkable and intolerable on UNIX!
By convention, all the manual pages on UNIX contain these (and additional) sections. The order of locating pertinent information in a manual page, then, becomes as follows:
1. SYNOPSIS;
2. EXAMPLES;
3. OPTIONS;
4. SEE ALSO;
5. FILES;
6. AVAILABILITY.
With the order of scanning listed above, I often locate the pertinent information within five seconds, and within 35 seconds at the maximum (we timed it: ten runs, taking the mean and the median, and computing the standard deviation).
With GNU info, on the other hand, I'm stuck trying to navigate "topics" as if I were in a web browser. The navigation is haphazard, because everybody has their own idea of what the documentation for their program should look like - something that is well defined and uniform in the manual pages.
When you are troubleshooting a problem or need to scan through large amount of documentation quickly and efficiently, if you understand the structure (1 - user commands, 1M (or 8 on BSD and GNU/Linux) - system administration commands, 2 - system calls, 3C - standard C library, 3LIB - libraries, 4 (or 5 on GNU/Linux) - file formats, 5 - standards and macros, 6 - games, 7 - special files, 7D - device drivers, 9 - device driver interfaces), searching through the correct manual page becomes even faster, like a search on steroids, or with a twin turbo and a supercharger combined.
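In practice that means going straight to the right section; for example (the -s syntax is the Solaris one, the bare-number syntax works on BSD and GNU/Linux):

man 1 printf        # the shell utility
man 3 printf        # the C library function
man -s 3c malloc    # Solaris/illumos: section 3C, the standard C library
apropos open        # keyword search across sections when you don't know the name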
None of that structure is present in a GNU info manual; there, as is usual with GNU/Linux, it's a "free for all".
Any software I write is delivered with a manual page strictly following norms described above, because on UNIX, that is what we do, and it would be shameful and unprofessional not to do it (shoddy product), even if what one writes is freeware, in one's spare time. It's completely unacceptable and unthinkable to deliver a piece of software without a manual page. We have completely different quality standards and expectations of software on UNIX, even for free and gratis software.
This book, sometimes available in printed form and as a free PDF, explains how to use the nroff typesetting system:
The book is gratis to download, as it has been out of print for several decades, but it is invaluable when learning how to typeset documents with nroff(1), including manual pages.
Did you not want -z to exist at all (so you would pipe through gzip separately), or not want it to be a magical default?
GNU changed the -z handling at some point in the last decade (so that it autodetects whether input is compressed upon extraction and decompresses it without being told to), so now tar -xzf foo.tar.gz and tar -xf foo.tar.gz both work, where previously the second one would have failed because tar wouldn't have tried to decompress. Is that change what you're bothered by (it's pretty counterintuitive to me!), or did you just not want compression built into tar at all?
GNU tar now includes flag-based support for -j (bzip2), -J (xz), --lzip, --lzma, --lzop, -z (gzip), and -Z (compress).
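Concretely, with a modern GNU tar (older versions may lack some of the newer filters):

tar -czf backup.tar.gz src/    # gzip
tar -cjf backup.tar.bz2 src/   # bzip2
tar -cJf backup.tar.xz src/    # xz
tar -xf backup.tar.xz          # on extraction, the compressor is autodetected; no flag needed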
Implementing UNIX tools inside of other UNIX tools is not how UNIX works; that might be acceptable on Windows, but it sucks on UNIX.
The UNIX way:
xz -dvc archive.tar.xz | tar xf -
The GNU way:
tar xf archive.tar.xz
GNU way is broken, because it is the Windows way, and Windows is busted.
I'd assume that Linux would have a lot more software available to it, as well as more maturity to its ARM ports.
I'm not interested in running this computer as a desktop, but as a UNIX server which I can carry in my pocket.
As for software, the package library of illumos based systems can stand shoulder to shoulder with Debian based ones:
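For example, on a pkgsrc-based illumos distribution such as SmartOS, grabbing a package looks like this (nginx is just an arbitrary example package):

pkgin update          # refresh the package index
pkgin search nginx    # look the package up
pkgin -y install nginx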
Linux on these types of devices is not interesting to me, as every such device comes with it. It's neither different nor original.
Being neither different nor original seems like a plus when it comes to servers. Having a predictable, internally-consistent standard system would be best: easier management, easier configuration, predictable behavior between machines, and all that. Of course, which system would be better to standardize on is a matter of opinion.
Also, I think anyone trying to run any sort of serious server on a CHIP is using a nailfile where a screwdriver would be better-suited.
If it makes you any happier, ZFS is doable on a Linux-based SBC. I found a fair amount of documentation of it being done on the first generation of Raspberry Pi.
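A minimal sketch, assuming the distribution's ZFS packages (e.g. zfs-dkms plus the userland tools) are installed, and two spare disks showing up as /dev/sda and /dev/sdb (hypothetical device names):

sudo zpool create tank mirror /dev/sda /dev/sdb   # mirrored pool across the two disks
sudo zfs create -o compression=lz4 tank/data      # a compressed dataset
zpool status tank                                 # verify pool health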
I can't use ZFS on Linux because the place where I work doesn't allow it, as they are scared of having to support it (and they don't know how), and they're scared of Red Hat denying them support. On top of that, why would I use ZFS on Linux when I can have the real deal on any illumos or FreeBSD derivative (assuming they would let me)? Again, zero interest in running Linux. I like sleeping through my nights instead of sitting on a priority 1 crisis bridge with a bunch of managers yelling at me, and all because of problems on Linux that I wouldn't be having if I were running SmartOS.