gabeiscoding's comments

I sank a lot of time trying to get perfect 1-bit font rendering working under Docker (Debian Linux), but could not get it to match the precision of the default Arial font rendering on the Mac.

I have a side-by-side rasterized image showing the difference here:

https://github.com/gaberudy/epaper-calendar/blob/main/docs/f...

Linux insists on doing some FreeType font thickening, giving the output a randomly thick-lined look. If anyone knows tricks to disable this or otherwise influence the anti-aliasing behavior of the font rendering, let me know!
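
One workaround worth sketching out (I haven't wired this into my pipeline, and it assumes Pillow is installed in the container; the font path below is just the stock Debian DejaVu location) is to sidestep fontconfig entirely and rasterize the labels yourself in Pillow's bilevel mode, where FreeType hands back monochrome glyph masks with no anti-aliasing to thicken:

  # Sketch only: render text with Pillow in bilevel ("1") mode so FreeType
  # returns monochrome glyph masks with no anti-aliasing applied.
  from PIL import Image, ImageDraw, ImageFont

  # Placeholder font path/size (stock Debian DejaVu shown here)
  font = ImageFont.truetype(
      "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 24)

  img = Image.new("1", (400, 64), 1)   # 1-bit canvas, white background
  draw = ImageDraw.Draw(img)
  draw.fontmode = "1"                  # bilevel glyph masks (already the default on a "1" image)
  draw.text((4, 4), "Tuesday, March 4", font=font, fill=0)

  img.save("label.png")                # ready to blit to the e-paper framebuffer

It doesn't change the system-wide FreeType behavior, but for an e-paper calendar where every label goes through one code path, it at least takes the hinting and gamma guesswork out of the 1-bit output.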


Feel free to ask me any questions. I put enough documentation in here to make this easy to replicate with your own hardware.


I felt a loss when the On the Metal podcast series wrapped up. I figured these Oxide guys now have too much real work to do and won't be able to keep up this podcast, even though it is the best tech podcast I've listened to. Bryan is an unbeatable host. If you love retro-computing, or just the history of our industry in general, you will walk away having learned something from anything he talks about. His quick wit and deep tech knowledge keep you totally engaged and give you the feeling of living vicariously in the Silicon Valley that was about getting your hands dirty and building great tech products.

But now I can look forward to their leap over to "social audio": first Twitter Spaces (where I would sometimes chime in live) and now the Discord-hosted Oxide and Friends.

Most podcasts are sports commentary. These guys are full-contact in the game. I love it. Keep it up Bryan!


Wow, those are incredibly kind and inspiring words -- thank you! The good news is that these are all made available via RSS wherever you get podcasts.[0]

What I (we!) love about social audio is that it allows for many more voices -- both on the team here at Oxide and in the broader community. As a result, we've been able to get into some really deep detail with members of the team; I collected a few of my favorites in [1].

Thank you again for the kind words; I think you'll enjoy exploring the back catalog of Oxide and Friends!

[0] https://oxide-and-friends.transistor.fm/

[1] https://news.ycombinator.com/item?id=34430594


I missed the On the Metal podcast and hadn't noticed the Oxide and Friends format until a few weeks ago. It just ended without any hint as to what, if anything, to expect next. I really enjoy it when the conversation goes off whatever script there may be and someone can't/won't stop and dives ever deeper into a topic or pet peeve/past trauma. It may not have the highest production quality (looking at you, Twitter Spaces), but who cares? Oftentimes it feels like some random encounter in a hackspace (e.g. Bluetooth and Wi-Fi stop working as soon as someone mentions firmware bugs).


Thanks for the links Bryan!

May I suggest a small teaser trailer post to the old “On the Metal” feed? I’m still subscribed there, but wasn’t aware of the new channels.

Those were some incredible interviews! I’ve listened to a few of the episodes twice.


So glad you enjoyed "On the Metal"! Your suggestion is a great one, and we'll work on this over the next few weeks; stay tuned, and in the meantime, enjoy the "Oxide and Friends" back catalog!


Second that! I was a big fan of "On the Metal", and was bummed when it ended. Stumbled upon "Oxide and Friends" almost by chance, and was happy to discover it kept the same style (and even better, with more voices).

You made my commute a lot more enjoyable. Still going through the old catalog -- the episode about NeXT and Objective-C (S1E8 July 5, 2021 [1]) was excellent.

Keep up the great work Bryan!

[1] https://pca.st/ydtxspr6


> I felt a loss when the On the Metal podcast series wrapped up.

Big same. I was so happy to find the Oxide and Friends podcast that I went through the entire backlog in the last couple of months. It really helped solidify my thoughts on debuggability as a first-class property of a programming language/environment/system.

It's a great mix of tech history, personal accounts from industry, practical conversations about development and debugging, and current industry happenings.

A few standout episodes:

  Engineering Culture
  Engineering Incentives... and Misincentives
  The Rise and Fall of DEC


Just FYI, I can get the Oxide and Friends sessions in my podcast app (I use Podcast Addict on Android).


I am glad to hear things are going well, though still disappointed there is no decentralized Discord.

https://news.ycombinator.com/item?id=33788339


I created a desktop GUI tool [0] that used multiple cores for seam carving images of your choice, including masking parts of the image to keep or remove. The backend code parallelizing the work is from the CAIRE project [1].

It was ages ago, and it has since been archived on Google Code :)

[0] https://code.google.com/archive/p/seam-carving-gui/

[1] https://github.com/esimov/caire


Cool to see this popping up again. It always impresses people who haven't seen it before, and it's a cool algorithm to work through.

The original paper was discussed on Slashdot, and back at that time I was inspired to build a little GUI around an open source implementation of the algorithm to exercise my Qt skills.

It allows you to shrink or expand an image, "mask out" regions you don't want touched, etc.

Still available on Google Code archive:

https://code.google.com/archive/p/seam-carving-gui/
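
If you just want to see the core idea without digging into the GUI, here's a toy sketch (nothing like the real backend, just the textbook dynamic program): build an energy map, find the lowest-energy vertical seam, and remove it.

  # Toy seam carving: remove one lowest-energy vertical seam from an HxWx3 image.
  import numpy as np

  def remove_vertical_seam(img):
      gray = img.mean(axis=2)
      # Simple gradient-magnitude energy map
      energy = np.abs(np.gradient(gray, axis=0)) + np.abs(np.gradient(gray, axis=1))

      h, w = energy.shape
      cost = energy.copy()
      for y in range(1, h):
          left = np.roll(cost[y - 1], 1)    # cost of upper-left neighbor
          right = np.roll(cost[y - 1], -1)  # cost of upper-right neighbor
          left[0] = np.inf                  # no wrap-around at the edges
          right[-1] = np.inf
          cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)

      # Backtrack the cheapest seam from the bottom row up
      seam = np.zeros(h, dtype=int)
      seam[-1] = int(np.argmin(cost[-1]))
      for y in range(h - 2, -1, -1):
          x = seam[y + 1]
          lo, hi = max(0, x - 1), min(w, x + 2)
          seam[y] = lo + int(np.argmin(cost[y, lo:hi]))

      # Drop one pixel per row along the seam
      mask = np.ones((h, w), dtype=bool)
      mask[np.arange(h), seam] = False
      return img[mask].reshape(h, w - 1, 3)

Call it once per column you want to remove; the keep/remove masks in the GUI just bias the energy map up or down before the dynamic program runs.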


The author of this post wrote klib/khash and uses it in his very popular, CPU-intensive bioinformatics programs such as bwa (short-read alignment against the human reference genome).

I've been learning Rust recently. As a learning exercise, I compared the Robin Hood hashing of std::collections::HashMap in Rust to the klib/khash he mentions in this article, and then tried various hash functions to try to match his performance:

https://github.com/gaberudy/hash_test

No dice: his hash table is smaller and faster.

My next step is to implement his data structure and hashing functions directly in Rust and see if I can get near-C performance...


How does that compare to the dict implementation in Python 3.6? It's supposed to be damned fast, but I don't have the skills to make such comparisons.


The split keys and values arrays in Python 3.6 dictionaries make them more compact than dictionaries from earlier versions of Python, which makes them more cache-friendly. But Python dictionaries (and sets) use a variant of double hashing for collision resolution, which is less cache-friendly than Robin Hood hashing's linear probing. It's certainly possible to combine both techniques to come up with a hybrid dictionary with the best aspects of both split keys and Robin Hood hashing.
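
For anyone who hasn't run into Robin Hood hashing before, here is a toy sketch of the insertion rule (illustration only, not CPython's dict or khash; the helper and table layout are made up for the example): every entry remembers how far it has probed from its home slot, and a newcomer that has probed further than the resident entry steals the slot and pushes the resident along.

  # Toy Robin Hood insertion with linear probing (assumes the table never fills).
  def rh_insert(slots, key, value):
      """slots: list of None or (key, value, probe_distance) tuples."""
      n = len(slots)
      idx = hash(key) % n
      dist = 0
      while True:
          entry = slots[idx]
          if entry is None:                 # empty slot: take it
              slots[idx] = (key, value, dist)
              return
          if entry[0] == key:               # same key: update in place
              slots[idx] = (key, value, entry[2])
              return
          if entry[2] < dist:               # resident is "richer": swap and keep going
              slots[idx] = (key, value, dist)
              key, value, dist = entry
          idx = (idx + 1) % n               # linear probe to the next slot
          dist += 1

  slots = [None] * 8
  for k in ("spam", "eggs", "ham"):
      rh_insert(slots, k, len(k))

The payoff is that probe lengths stay tightly clustered, so lookups walk a short contiguous run of slots, which is where the cache advantage over double hashing comes from.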


Python, Ruby, and PHP hash tables are roughly 2x slower, with perl5's being the worst at maybe 4x slower.

"Damn fast" is only relative: it's damn fast compared to the previous implementations. Wikipedia even dares to say that the worst hash table of all, perl5's, is one of the very best. Bias all over.

SIMD-optimized hash tables, such as the Swiss tables tricks, have the potential to be much faster than khash. And khash is not cache-friendly at all, with its double hashing scheme.


Does it preserve insertion order?


Of course not. Hash table iteration should assume random order; otherwise it's a security risk or a performance nightmare.


A much better source for those with an interest in the science is Chris Mason's slides from his recent talk [1] at a genetics conference (AGBT) about this, which he shared on Twitter [2].

He's a great speaker and a cool guy, and he tackles some of the most interesting (at least to hear about) science in genomics.

[1] https://www.dropbox.com/s/sfg6rdmgxjwdpil/Mason_NEB_talk_AGB...

[2] https://twitter.com/mason_lab/status/964151387687972864


I wrote a GUI for another seam carving library back in 2009 [1], and it looks like, although it's in archive mode, you can still access the source as well as the Windows/Mac binaries. Just tested it and it still works!

Not as fancy as Photoshop, I'm sure, but it does have the ability to paint a mask of regions to keep/remove to aid the algorithm and get the desired result. Multi-threaded too!

[1] https://code.google.com/archive/p/seam-carving-gui/


Thanks for that. I used it way back when (mid-2010) to make it look like a 30' cliff jump was a 60' one. Really impressed my friends with it.


High praise indeed - you know the software is good when you can use it to lie to your friends.


Now that this thread is essentially dead, I can let out my little secret - I'm not the bad boy you think I am. I told them the truth eventually - about 30 seconds after showing them the doctored one. I hope this doesn't change your opinion of me.


That's cool! I tried it, and the app is really nice to use, but the picture still came out looking weird. See my other comment below.


While I think it's great to have Google putting their weight behind standardization efforts like the Global Alliance for Genomics and Health (GA4GH), I really don't get the need to replace VCF and BAM files with API calls.

Ultimately, the "hard part" of genomics is not big data requiring Spanner and Bigtable to get anything done. I actually wrote a blog post about this just this week:

http://blog.goldenhelix.com/grudy/genomic-data-is-big-data-b...

Both BAM and VCF files can be hosted on a plain HTTP file server and meaningfully queried through their BAI/TBI indexes. Visualization tools like our GenomeBrowse or the Broad's IGV can already read S3-hosted genomic files directly, without an API layer, and very efficiently (gzip-compressed blocks of binary data). So I see the translation of the exact same data into an API-only-accessible storage system, where I can't download the VCF and do quick and iterative analysis on it, as more of a downside than a plus.
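
To make that concrete, here's roughly what "queried through their indexes" looks like over plain HTTP (a sketch only -- the URL and byte offsets are made up, and in practice htslib/pysam or IGV do the BAI arithmetic for you):

  # Sketch: pull one region of a remotely hosted BAM with a plain HTTP range
  # request -- no API layer, just a file server that honors Range headers.
  import requests

  BAM_URL = "https://example-bucket.s3.amazonaws.com/sample.bam"  # hypothetical

  # Grab the index once; it's tiny compared to the BAM itself.
  bai = requests.get(BAM_URL + ".bai").content

  # Pretend the index told us the region of interest lives in this byte range
  # (in reality you decode the BAI bins/linear index to get these offsets).
  start, end = 1_048_576, 1_114_111                                # hypothetical

  resp = requests.get(BAM_URL, headers={"Range": f"bytes={start}-{end}"})
  assert resp.status_code == 206    # 206 Partial Content: the server honored the range
  bgzf_chunk = resp.content         # compressed BGZF blocks holding the reads for that region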

Disclaimer: I build variant interpretation software for NGS data at Golden Helix. Our customers are often small clinical labs whose data size and volume are not driving them to the cloud.


How do you think this compares against http://basepair.io?


Looks great, but I can't comment more as I haven't used it.

It looks to be solving the same problems as DNAnexus, Seven Bridges, BaseSpace, etc., as a way to wrap open source tools in more user-friendly ways.

But it's orchestrating the production of a smaller set of data that still needs the next steps of human interpretation, report writing, family-aware algorithms, and the most complex annotations (the problem space Golden Helix is in).

In other words, the automatable bits, which are not the hard part I mentioned in my blog post.


They are correct in calling out the de facto monopoly rents Illumina is extracting from the market, but sadly I don't share their wildly optimistic view that we are imminently due for a technological disruption that will restart the price plummet of whole genome sequencing.

Nanopores are nowhere near the throughput and accuracy of Illumina's sequencing-by-synthesis tech, and if there is a pathway to challenging Illumina's position, it will be extremely complex, iterative, and _long_.

Meanwhile, Illumina is amassing a billion-dollar war chest and adding its own complex and iterative improvements to its platform (two-color detection, longer and longer reads, higher cluster density), maintaining its market lead.

As alluring as the analogy to microprocessor manufacturing and Moore's law is, the messy stuff of biology, single-molecule chemical manipulation, and sensor detection is unlikely to obediently follow the same innovation curve.


For a few years, DNA sequencing significantly outpaced Moore's law. Then, Illumina beat the competition so badly that they folded.


I've got something for nanopore sequencing coming out in the next few months. Stay tuned.


Do you use a different nanopore than α-hemolysin or MspA?


solid-state


I've read some of your work; if it's related, this sounds pretty cool! Would love to hear more.

