
I use iBroadcast[0], a service dedicated to exactly this. It costs me a bit each year, but I've felt it's worth it. There are some differences from organising in iTunes, like the handling of compilation albums, that I'm not so fond of, but you can see how it works on the free tier.

The browser client only does 128 kbps streaming, but the mobile client can set the streaming quality (I have it at 256; the max is 320). I'm also working on my own PWA client using their API, which I've likewise set to 256 and which would work on both mobile and desktop.

You can also set the browser client to stream the original file directly, but browsers don't play most of my formats, like ALAC, so then it just doesn't play anything.

[0] https://ibroadcast.com/


Thanks for the pointer!

From the "outside" this looks like such a strange product. The landing page is very obscure and, together with the name of the service, I would automatically think this is a super old school product (being generous) or some sort of weird scam (being overly critical). There are no docs or pictures or any further description of what the product looks like, I guess the authors expect people to sign up to see (it is free! :-).

Other than that, I wonder how they cover their costs [0]. It seems free accounts have unlimited uploads. Anyway, I guess I'll have to give this a try to learn more about it.

EDIT: I found some pics by clicking through their Facebook page, which in turn links to a news page [1] (...).

EDIT: This product is fascinating. It seems they've been around for 12 years, have a bunch of loyal users, and support their product (?) via Reddit (at minimum they approve of the Reddit channel, since they link to it themselves!). I wish we could know more about the team behind it. Related: [2].

--

0: https://www.ibroadcast.com/premium/

1: https://ibroadcast.com/news/

2: https://www.reddit.com/r/ibroadcast/comments/1d1iaht/is_ibro...


Here's the PostgreSQL documentation about timestamptz:

> For timestamp with time zone, the internally stored value is always in UTC (Universal Coordinated Time, traditionally known as Greenwich Mean Time, GMT). An input value that has an explicit time zone specified is converted to UTC using the appropriate offset for that time zone. If no time zone is stated in the input string, then it is assumed to be in the time zone indicated by the system's TimeZone parameter, and is converted to UTC using the offset for the timezone zone.

> When a timestamp with time zone value is output, it is always converted from UTC to the current timezone zone, and displayed as local time in that zone. To see the time in another time zone, either change timezone or use the AT TIME ZONE construct (see Section 9.9.4).

To me it seems to state quite clearly that timestamptz is converted on write from the input offset to UTC and on read from UTC to whatever the connection timezone is. Can you elaborate on which part of this is wrong? Or maybe we're talking past each other?


That is, unfortunately, a lie. You can look at the postgres source, line 39:

https://doxygen.postgresql.org/datatype_2timestamp_8h_source...

Timestamp is a 64-bit count of microseconds since the epoch. It's a zone-less instant. There's no "UTC" in the stored data. Times are not "converted to UTC", because instants don't have timezones; there's nothing to convert to.

I'm guessing the problem is that someone heard "the epoch is 12am Jan 1 1970 UTC" and thought "we're converting this to UTC". That is false. These are also the epoch:

* 11pm Dec 31 1969 GMT-1

* 1am Jan 1 1970 GMT+1

* 2am Jan 1 1970 GMT+2

You get the picture. There's nothing special about which frame of reference you use. These are all equally valid expressions of the same instant in time.

So somebody wrote "we're converting to UTC" in the postgres documentation. The folks writing the JDBC driver read that and now they think OffsetDateTime is a reasonable mapping and Instant is not. Even though the stored value is an instant. And the only reason all of this works is that everyone in the universe uses UTC as the default session timezone.
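For what it's worth, here's a minimal JDBC sketch of the consequence (the connection string, credentials, and query are placeholders, and it assumes the pgJDBC driver is on the classpath): the driver hands back an OffsetDateTime, but since nothing zone-related was ever stored, collapsing it to an Instant loses nothing.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.time.Instant;
    import java.time.OffsetDateTime;

    public class TimestamptzDemo {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/test", "user", "pass");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT TIMESTAMPTZ '2025-03-01 00:00:00+05'")) {
                rs.next();
                // The driver's chosen mapping is OffsetDateTime, but the +05
                // from the literal is long gone; you get a normalized offset.
                OffsetDateTime odt = rs.getObject(1, OffsetDateTime.class);
                // No offset was ever stored, so this conversion is lossless:
                Instant instant = odt.toInstant();
                System.out.println(odt + " == " + instant);
            }
        }
    }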

To make it extra confusing, Oracle's TIMESTAMP WITH TIME ZONE (and possibly others') actually stores a timezone: [1am Jan 1 1970 GMT+1] <> [2am Jan 1 1970 GMT+2]. So OffsetDateTime makes sense there. And the generic JDBC documentation suggests that OffsetDateTime is the natural mapping for that type.

But Postgres TIMESTAMP WITH TIME ZONE is a totally different type from Oracle TIMESTAMP WITH TIME ZONE. In Postgres, [1am Jan 1 1970 GMT+1] == [2am Jan 1 1970 GMT+2].


You are thinking of UTC offsets as zones here, which is wrong. Yes, you can interpret an offset from the epoch in any UTC offset, and that's just a constant formatting operation. But interpreting a zoned datetime as an offset against a point in UTC (or UTC+/-X) is not.

You do not confidently know how far away 2025-03-01T00:00:00 America/New_York is from 1970-01-01T00:00:00+0000 until after that time. Even if you decide you're interpreting 1970-01-01T00:00:00+0000 as 1969-12-31T19:00-0500. Postgres assumes that 2025-03-01T00:00:00 America/New_York is the same as 2025-03-01T00:00:00-0500 and calculates the offset to that, but that transformation depends on mutable external state (NY state laws) that could change before that time passes.

If you get news of that updated state before March, you now have no way of applying it, as you have thrown away the information of where that seconds since epoch value came from.
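To put the same thing in java.time terms (just an illustrative sketch; the date and zone are examples, not anything Postgres does internally):

    import java.time.Instant;
    import java.time.ZoneId;
    import java.time.ZonedDateTime;

    public class FutureTime {
        public static void main(String[] args) {
            // Zone + wall time: resolved against tzdata rules as of today.
            ZonedDateTime future = ZonedDateTime.of(2025, 3, 1, 0, 0, 0, 0,
                    ZoneId.of("America/New_York"));

            // Collapsing to an instant bakes in today's assumption (-05:00).
            Instant frozen = future.toInstant();

            // If NY changed its rules before March, `future` would resolve
            // to a different instant on the new tzdata, but `frozen` could
            // never be corrected: the zone information is gone.
            System.out.println(future + " -> " + frozen);
        }
    }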


I'm not quite sure what your point is. Postgres doesn't store time zones. "The internally stored value is always in UTC" from the documentation is false. It's not stored in UTC or any other zone. "it is always converted from UTC to the current timezone zone" is also false. It is not stored in UTC.


This is pointless pedantry: expressing it as the number of seconds since a point that is defined in UTC is a conversion to UTC by anyone else's logic (including, clearly, the documentation writers'), even if there aren't bits in the database that say "this is UTC", and even if that point can be expressed with various UTC offsets.

The internal representation is just an integer; we all agree on that, and it's not some great revelation. The fact that the internal representation is just an integer, and that the business rules surrounding it say that integer is the time since 1970-01-01T00:00:00Z, is in fact the cause of the problem we are discussing here. The internal implementation prevents it being used as a timestamp with a time zone, which its name and its ability to accept IANA TZs at the query level (both in datetime literals and in features like AT TIME ZONE or connection timezones) strongly imply it should be able to do. It also means the type is flawed if used to store future times while expecting to get back what you stored. We complain about behaviours like MySQL's previous silent truncation all the time, documented as they may have been, so "read the source code and you'll see it's doing XYZ" is not relevant to a discussion of whether the interface it provides is good or bad.

Nor is the link you shared the full story for the source code, as you'd need to look at the implementation for parsing of datetime literals, conversion to that integer value, the implementation of AT TIME ZONE, etc.


This is not pedantry. It has real-world consequences. Based on the misleading text in the documentation, the folks writing the Postgres JDBC driver made a decision to map TIMESTAMPTZ to OffsetDateTime instead of Instant. Which caused some annoying bugs in my 1.5M line codebase that processes financial transactions. Which is why I'm a bit pissed about all this.

If you walk into a javascript job interview and answer "UTC" to the question "In what timezone is Date.now()?", you would get laughed at. I don't understand why Postgres people get a pass.

If it's an instant, treat it as an instant. And it is an instant.


https://jdbc.postgresql.org/documentation/query/#using-java-...

> note that all OffsetDateTime instances will have be in UTC (have offset 0)

Is this not effectively an Instant? Are you saying that the instant it represents can be straight up wrong? Or are you saying that because it uses OffsetDateTime, issues are being caused based on people assuming that it represents an input offset (when in reality any such information was lost at input time)?

Also, that page implies they did it that way to align with the JDBC spec, not because of the misleading documentation you describe.


Seeing this topic/documentation gives me a sense of deja vu: I think it's been frustrating and confusing a great many perfectly decent PostgreSQL-using developers for over 20 years now. :P


I agree, "timestamp with time zone" is a terribly misleading name and personally I don't use that type very much.


Relatedly, I just recently got IPv6 on my home connection and tried to set it up with my EdgeRouter X. It was impossible, even though I followed all the online instructions to the letter. I then installed OpenWrt on it and it worked like a charm, with IPv6 out of the box (I did customise my configuration later). I wrote a post about the process for anyone interested [0].

[0] https://blog.nytsoi.net/2024/09/19/edgerouter-x-openwrt


Yeah, trying to get it to work via the config tree is basically begging for pain and suffering. You need to create all the configuration from scratch via the CLI to have a chance of it working in a semi-sane way.

I was able to finally get it to work on one of my subnets, but then everything sort of just fell apart, because I have a segregated network aside from my main home network, and for the life of me I couldn't get it to work on two different subnets. Then throw in the whole issue of firewall rules, since the prefix my ISP assigns is dynamic; it changes every time the router reboots. I figured I'd have to write a little service to watch the prefixes and adjust the rules as needed, but it just seems like too much grief to deal with.

I left it until now, when I found that Android does some weird shenanigans with DNS, so it's back on my radar, but it's not something I'm particularly looking forward to struggling with again.


I actually had the same problem with firewalls and prefixes, since I want to direct traffic to my server and its address obviously depends on the prefix. It turns out OpenWrt has a feature for this too [0], meaning you can use a destination address like "::1234/-56" in your firewall rules.
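For illustration, a rule in /etc/config/firewall could look something like this (a sketch only; the rule name, ports, and host suffix are placeholders for a setup with a dynamic /56 from the ISP):

    config rule
        option name 'Allow-HTTP-to-server'
        option family 'ipv6'
        option src 'wan'
        option dest 'lan'
        # Negative prefix: ignore the first 56 (dynamic ISP prefix) bits
        # and match on the fixed ::1234 suffix.
        option dest_ip '::1234/-56'
        option proto 'tcp'
        option dest_port '80'
        option target 'ACCEPT'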

[0] https://openwrt.org/docs/guide-user/firewall/fw3_configurati...


It can also cause trouble if you store past events but do not store the user's local offset or timezone at the time of each event. If you later aggregate these events into a dataset of "events by hour", they may be grouped wrongly (from the user's perspective) if you convert them all to the user's _current_ timezone.
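A quick java.time sketch of that failure mode (the times and offsets are made up):

    import java.time.Instant;
    import java.time.ZoneOffset;

    public class HourBuckets {
        public static void main(String[] args) {
            // Event recorded at 23:30 local time while the user observed
            // UTC+2 (summer time), i.e. 21:30 UTC.
            Instant event = Instant.parse("2024-07-01T21:30:00Z");

            // The hour the user actually experienced:
            System.out.println(
                    event.atOffset(ZoneOffset.ofHours(2)).getHour()); // 23

            // Aggregating later with the user's *current* offset (UTC+1 in
            // winter) puts the same event in a different hour bucket:
            System.out.println(
                    event.atOffset(ZoneOffset.ofHours(1)).getHour()); // 22
        }
    }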


I wouldn't call myself a person with a mathematical background, but there are those people who believe it's just fine. [0] I don't have enough knowledge to debate that, but it would seem to disprove "basically nobody". Zero is a convention, like NaN or Inf are conventions.

A problem that Gleam has here is that the Erlang runtime does not have NaN or Inf in its float type (or its integer type, for that matter). It could be represented with an atom, but that would require an atom and a float having the same type in Gleam, which is not something the type system can do (by design). The operator could, in theory, return a Result(Float, DivisionByZeroError), but that would make using it very inconvenient. Thus zero was chosen, and there is an equivalent function in the stdlib that returns a result instead, if you wish to check for division by zero.
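To illustrate both behaviours (a sketch, assuming the current gleam/float API and that io.debug is available):

    import gleam/float
    import gleam/io

    pub fn main() {
      // The operator takes the zero convention:
      io.debug(1.0 /. 0.0) // -> 0.0

      // The stdlib function returns a Result if you want to catch it:
      io.debug(float.divide(1.0, 0.0)) // -> Error(Nil)
      io.debug(float.divide(1.0, 2.0)) // -> Ok(0.5)
    }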

[0] https://www.hillelwayne.com/post/divide-by-zero/


> but there are those people who believe it's just fine. [0]

Just fine mathematically, but Hillel does specify that he's not comfortable with the concept from a safety perspective. The whole piece is a defence against a particular type of criticism, but he leaves wide open the question of whether it's a good idea from a PL perspective.


Which part do you feel would be an issue? When you run `gleam compile`, it will automatically call the Erlang compiler to finish the job.

I find it very handy that the intermediate Erlang (or JS) files are available in the build directory. It lets you easily see what form your code will take when compiled.


Also prevents lock-in if you ever need to move away from Gleam.


I don't think it's the transpile part that would be the issue; it's the runtime aspect. If Gleam transpiles to Erlang/JavaScript, that's great, but once you run the program, you potentially have to deal with runtime issues specific to those environments, which you might not be familiar with.

It seems that Gleam is really useful for those who are already in either the Erlang or JavaScript ecosystem.


On the contrary, it's a great first BEAM language to learn because of its simplicity - both in terms of the grammar and in terms of its tooling/compiler.

For me personally, the JavaScript target is the least interesting bit - the BEAM/Erlang target is where it's at for backend work. The BEAM is fascinating and full of ideas that were once ahead of their time but are now really coming into their own, with compute performance having caught up.

Gleam is a strongly typed language, and unapologetically very functional. Error handling in general is quite different from what it would be in a typical stack-based language/VM. In my experience, the Erlang target doesn't make debugging any harder than you would expect for an exception-less language.


The JS target is also very interesting to me. I like Erlang fine, and Elixir's nascent type system is promising. But the frontend (and JS full-stack, for that matter) currently does not have a good alternative to TypeScript, and the ML type system is an especially good fit for it. Elm has too much reputational baggage, and ReScript/Reason/BuckleScript/whatever squandered its momentum and is floundering.


Very much so!

I've created Vleam[0] to convert an existing Vue webapp[1], and so far it's a joy.

[0]: https://github.com/vleam/vleam

[1]: https://blog.nestful.app/p/why-i-rewrote-nestful-in-gleam


Another layer of abstraction, another thing to go wrong, another thing to rot.


But you need a runtime. If Gleam had its own runtime, that would be another layer in a similar vein. Here Gleam is using Erlang (or a JS runtime), which is bound to be better supported and have a longer lifetime than something they cooked up themselves.

Besides, Gleam's original aim was basically "Erlang but static types", so the choice of Erlang as runtime was always there.


It's a reference to Jon Postel who wrote the following in RFC 761[0]:

    TCP implementations should follow a general principle of robustness:
    be conservative in what you do, be liberal in what you accept from
    others.
Postel's Law is also known as the Robustness principle. [1]

[0] https://datatracker.ietf.org/doc/html/rfc761#section-2.10

[1] https://en.wikipedia.org/wiki/Robustness_principle


I've always felt that this was a misguided principle, to be avoided when possible. When designing APIs, I think about this principle a lot.

My philosophy is more along the lines of "I will begrudgingly give you enough rope to hang yourself, but I won't give you enough to hang everybody else."


HTML parsing is the modern-ish layer-uplifted example of liberal acceptance.

I won't argue that this hasn't been a disaster for technologists, but there are many arguments that this was core to the success of HTML and consequently the web.

Which, yes, could be considered its own separate disaster, but here we are!


It makes sense in a "customer obsessed" way. The user agent tries to show content, tries to send requests and receive the response on behalf of the client (customer), and, ceteris paribus, it's better for the client if the system works even when there's some small error that can be worked around, right?

But of course this also leads to the tragedy of the anticommons: too many people have an effective "veto" (every shitty middlebox, every "so easy to use" 30-line library that got waaay too popular) and now contribute to the ossification of the stack.

What's the solution? Similarly careless adoption of new protocols, hoping for the best? Maybe putting an emphasis on provable correctness, and, if something does not conform to the relevant standard, not considering it "broken" for the purposes of the "if it ain't broken, don't touch it" principle?


When it comes to writing APIs I feel strongly that you should be incredibly strict.

1 != ‘1’

true != 1

true != ‘true’

undefined != false

undefined != null

etc

“Flexibility” in your API just means you are signing up for a maintenance burden for the lifetime of your API. You will also run into problems because you have to draw the line somewhere, and people will be frustrated/confused since your API is “flexible” but not as flexible as they want. Better to draw the line at complete strictness, IMHO. I dislike even optional fields and prefer null to be passed instead, except in special cases where null has a meaning (example: a search endpoint where you pass the fields you want to search on, and a field can have a null value).

I want people to be explicit about what they are doing/fetching when using an API I have written/maintained. It also encourages less sloppy clients.


Ironically, it leads to less robust systems in the long term.


> Postel's Law is also known as the Robustness principle.

Really? It seems like it's obviously just a description of how natural language works.† But in that case, there's an enforcement mechanism (not well understood) that causes everyone to be conservative in what they send.

We can observe, by the natural language 'analogy', that the consequence of following this principle is that you never have backwards compatibility. Otherwise things generally work.

† Notably, it has nothing to do with how math works, making it a strange choice for programming.


You may have strong opinions on anti-cheat software and they may be correct, but it is required for playing certain online multi-player games, and people want to play those games on Linux too (especially the Steam Deck, I would presume). Ergo, people want anti-cheat software on Linux.


Speaking from a purely customer perspective (I'm not a business person or startup founder), the big things that are missing from the landing page and/or signup flow are:

- Pricing. I don't know what this is going to cost me. It does not say if I'm entering a trial period or if it's free.

- Privacy policy. Who are you and what will you do with my data?

Also, I suppose this is a target market thing, but I've never heard of LINE so having it mentioned does not make me more interested. I guess it will work where that app is popular, though.


Thanks a lot! Your feedback is so on point and helpful. We will definitely work on the pricing, privacy, and TA parts. If it's not too much trouble, your signup would be a huge encouragement for us! It's completely free now!


What was the mobile Authy fiasco?

