ninjakeyboard's comments | Hacker News

I can see what you're saying. What solution is there, then? As the internet becomes more public, it also loses the freedom of, say, the '90s, and that may not be such a bad trade now that AI is in the toolkit of scammers targeting older or more vulnerable populations. It's very hard to say what effect things will have. Free markets were always the answer, but price fixing or other retaliatory measures against oligopolistic greed may now be employed (e.g. see Canada threatening to tax grocery chains for pricing too high).



My new rule for any centralized entity is no more than 5k or 5 hours on there. Even my bank account has been mostly emptied today. I'm a crypto futures day trader and have been doing it for a full year, 60 hours a week. Sums can start to explode in good market conditions - I just keep drawing funds out and spreading them elsewhere.


It hallucinates that you can use 4 energy per turn in Pokemon TCG and confidently tells you so. No idea where that would come from.


ChatGPT gets the rules of the Pokémon trading card game wrong. It will tell you, convincingly, that you can use 4 energy a turn. Not sure where it hallucinates this from. The actual rule is 1 per turn.


A few days ago I asked ChatGPT if “pannekake” and “kannepake” are anagrams of each other.

It correctly stated that they are, but when it went on to prove that this was the case, it generated a table of the frequencies of the individual letters in these two words, and the table looked like this.

    Letter | Frequency in | Frequency in
           | “pannekake”  | “kannepake”
    - - - - - - - - - - - - - - - - - - -
    a      | 2            | 2
    e      | 2            | 2
    k      | 2            | 2
    n      | 2            | 2
    p      | 2            | 2
This reminded me that yes indeed, AI just isn’t quite there yet. It got it right, but then it didn’t. It hallucinated the frequency count of the letter “p”, which occurs only once, not twice in each of those words.
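For what it's worth, a few lines of Python confirm the correct table (the "p" row should read 1, not 2):

```python
from collections import Counter

w1, w2 = "pannekake", "kannepake"
c1, c2 = Counter(w1), Counter(w2)

# Anagrams have identical letter multisets.
print(c1 == c2)          # True: the words are anagrams
print(c1["p"], c2["p"])  # 1 1: 'p' occurs once in each word, not twice
```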


Anything that has to do with individual words doesn't work well, but as I understand, this is an artifact of the tokenization process. E.g. pannekake is internally 4 tokens: pan-ne-k-ake. And I don't think that knowing which tokens correspond to which letter sequences is a part of the training data, so it has to infer that.
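A rough illustration of why that matters, using the token split from the comment above (the split itself is the commenter's guess, not a verified tokenization):

```python
# Hypothetical token split of "pannekake" from the parent comment.
tokens = ["pan", "ne", "k", "ake"]

# The model sees token IDs, not characters, so a per-letter fact like
# "how many p's?" has to be inferred rather than read off directly.
print(sum(tok.count("p") for tok in tokens))  # 1
print("".join(tokens))                        # pannekake
```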


Could it have been referencing Blastoise's Deluge ability? Jacob Van Wagner used it in the 2015 championship to use 4 water energy in one turn.


I just asked it, and it said you can attach 1 per turn. And then it continued something about using supporter cards to look for more energy cards, and trainer cards to switch them. (Which it also considers as using or playing those energy cards.) Not familiar with the actual rules, though. :)


Ah, I was using my friend's server, which has a slightly different model running - thanks. It's one of the Davinci models, I think? I don't know much - it's code oriented. So I guess it's not 'ChatGPT' but a GPT model he built a chat on.


Isn't it just that garbage went in, got weighted as a more reliable source than it should have been, and thus garbage came out? Good old GIGO... It's just that ChatGPT, as much as I love it, is amazing at imparting the impression that its shit don't stink.


My girlfriend knows how to play the sax and I know rust (and am a dj/producer!) - so I starred this and we plan to build it. Thanks for your contribution to our life!


Jira is very customizable in how it's used. It's sort of like saying that a wall with sticky notes is a bad development tool because of some anecdotal experience. It all depends on what you do with it. Maybe it's more like saying that a spreadsheet is a bad tool.

The only thing I resent jira/confluence etc for is dropping its markdown because it was too hard for the company to maintain :) But they're working on it again. Confluence is probably one of my favorite tools but I haven't used jira in ages as it's possible to get by with github these days.


Learning functional programming will make you a better developer in all languages IMO. This is the single biggest vector for understanding how to write maintainable code.

Similarly learning design patterns will probably make you a worse programmer but learning the heuristics behind them (eg composition over inheritance) will make you a better programmer.

Learning DDD will make you a better programmer by understanding how to use bounded contexts to manage complexity as software grows.


Elixir is decent and I've worked with it a fair amount in production systems... Mostly Rubyists seem to really click with it. And ruby idioms are all over it - you can taste its history and proximity to ruby's ecosystem. As a scala dev that ended up working with elixir for a couple years, my opinion is that a typesafe elixir-like language would really bring BEAM back into the mainstream. Akka is alright but it's shoehorned onto the JVM. BEAM is good as long as you don't need to do heavy computation, but lack of type-safety (need to use dialyzer?) means that shit breaks in prod that the compiler would have caught. And yes, you can mitigate this with boatloads of testing and data-validations with ecto or whatever. But every time we broke shit that a compiler would have caught I cringed.

It's a great path for rubyists to move to Elixir/BEAM and every rubyist should give it a whirl! I'm back working on scala and akka.


I want to echo Gleam [0] as a project to watch out for. It's still very early, but it's evolving quickly, and has a big focus on providing ergonomic tooling.

It's an ML inspired, statically typed language that compiles down to Erlang, and supports interop with the existing ecosystem. This means that you get access to ADTs, type inference, etc, while still being able to lean on OTP for your concurrency primitives. There's also examples of calling it from Elixir, so there's the option of falling back to statically typed Gleam for an especially gnarly piece of code, and calling it from your Elixir application [1]. I wouldn't necessarily recommend this for commercial apps yet, but Gleam today is about as usable as early-Elm was, in my opinion.

The project is also very welcoming to new contributors, and Louis (the language's creator) does a great job of curating a list of beginner friendly issues to tackle in the compiler. I've been spending my evenings learning Rust by adding onto the language, and it's been a ton of fun. If you want to help out, there's a fairly active IRC channel on Freenode, in #gleam-lang :)

[0] https://gleam.run

[1] https://dev.to/contact-stack/mixing-gleam-elixir-3fe3


I’m rooting for Gleam as well. I do wish it adopted the syntax styling of Ruby/Elixir/Crystal though. Either way, it is very compelling.


The older I get the grumpier runtime errors make me.

I want ReasonML (language!) and Erlang (OTP!) to have a baby, and I want it birthed by the Go runtime. (Go? Yeah, Go. I don't love the language, but I am a lover of low latency and garbage collection, what can I say?) Yes, there's Gleam, but if something's based on BEAM, the throughput generally won't impress. :-( Would seem a shame to do all that static typing, and then not reap the speed benefits.

Relatedly, I think there's a sweet spot for a language that accepts mutability inside of actors, but only allows immutable objects to be sent as messages, with an escape hatch available if needed. (Pony explored this space, would love to see it evolve.) Combine that with OTP for happy-path programming, and an ML so you catch most of your errors at compile time, and you could end up with great throughput, low latency and great ergonomics, all at the same time.
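A minimal sketch of that sweet spot in Python (class and message names are hypothetical): the actor mutates its own state freely on its own thread, but messages are frozen dataclasses, so a sender can't reach in and mutate anything.

```python
import queue
import threading
from dataclasses import dataclass

@dataclass(frozen=True)  # immutable message: attribute assignment raises FrozenInstanceError
class Deposit:
    amount: int

class CounterActor:
    """Owns mutable state; reachable only through immutable messages."""

    def __init__(self):
        self._mailbox = queue.Queue()  # thread-safe FIFO
        self._total = 0                # private, mutable state
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg: Deposit):
        self._mailbox.put(msg)

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg is None:                # shutdown sentinel
                break
            self._total += msg.amount      # mutation is safe: single consumer thread

    def stop(self) -> int:
        self._mailbox.put(None)
        self._thread.join()
        return self._total

actor = CounterActor()
for amount in (1, 2, 3):
    actor.send(Deposit(amount))
result = actor.stop()
print(result)  # 6
```

The queue plays the role of the mailbox; the "escape hatch" would be sending a mutable object anyway, which Python (unlike Pony) won't stop you from doing.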


> but if something's based on BEAM, the throughput generally won't impress

I'm not sure where you're getting that from - that's typically an area where it does well. It's bad at number crunching, but if the work is IO bound (say, like a web application backend) it offers consistently low latency with high throughput.


We're defining throughput differently. I'm talking about CPU utilization, i.e. non-IO-bounded work. Sorry for the ambiguity.


I would be very surprised that Go had lower latency than BEAM based languages, higher throughput/better CPU utilisation OK, lower latency??

Do you have a benchmark showing this?

As for Pony, in theory it should be great but it looks very complex..


Not sure where I implied Go would beat BEAM on latency, didn't intend to. On the contrary, BEAM's had decades poured into keeping the long tail under control, while with Go it's a work in progress.

I singled out Go because it's the only reasonably mainstream multithreaded (need this for actors), statically typed (need this for CPU throughput), garbage collected (need this for ergonomics) language out there with an emphasis on keeping latency under control, and as such would be the only sensible target I know of to host the language I proposed.


I'll say one thing. I accidentally forkbombed my running elixir system in prod (miscontacting an error reporting service triggered two more error reports, and the error reporting service 500'd during an outage), and it kept servicing user requests without much of a sweat.


I have assumed BEAM has similar latency to Go and is garbage collected?

Here are some benchmarks where Erlang beats Go in throughput:

https://timyang.net/programming/c-erlang-java-performance/ https://stressgrid.com/blog/benchmarking_go_vs_node_vs_elixi...


That first link is from 2009. A lot's changed since then, so I didn't read it.

And maybe I missed something in the second link, but Go showed very similar I/O performance to Elixir, while consuming a boatload less CPU doing it. That's what I'm after. Open to being told I missed something, though.


> Open to being told I missed something, though.

The BEAM keeps using the CPU even when there's no work to be done, to avoid context switches. So we can't compare CPU values directly.


> We can't compare about CPU values

Can you explain more why we can't compare? Looking at this chart...

https://stressgrid.com/blog/benchmarking_go_vs_node_vs_elixi...

...shows pretty clearly how the CPU utilization grows linearly (ignoring some sawtooth) as the load increases, plateaus once the load remains constant, and then comes down linearly as the load decreases on the other end. Looks like a very clear mapping between work done and CPU load to me.


It is explained in the blog post after that benchmark - https://stressgrid.com/blog/beam_cpu_usage/ .

Essentially, to optimise responsiveness the BEAM uses busy waiting, so the CPU usage reported by the operating system overstates the useful work actually being done.


"I think there's a sweet spot for a language that accepts mutability inside of actors, but only allows immutable objects to be sent as messages, with an escape hatch available if needed."

That's kind of what akka is on scala or java. Messages are immutable _BY CONVENTION_ but you can do whatever you want.


Isn't that the basis/point of the actor model? Actors can message each other and processing the message can trigger state mutation of the recipient, but they can't directly mutate each other.


All data is immutable on the BEAM with a few exceptions, so no mutation within actors.


No, that's not true. Actors in Elixir/Erlang mutate their state via tail-call recursion in the loop function.
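The pattern being described: the process never mutates a value in place; it recurses, carrying the next state as an argument. A rough Python analogue (Python has no tail-call optimization, so this is purely illustrative, with the mailbox simplified to a list):

```python
def loop(state, mailbox):
    """Each 'iteration' is a recursive call carrying the next state,
    mirroring an Erlang/Elixir receive loop. The old state value is
    never mutated, only replaced."""
    if not mailbox:
        return state
    msg, *rest = mailbox
    return loop(state + msg, rest)

print(loop(0, [1, 2, 3]))  # 6
```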


I said data is immutable on the BEAM, actors can update their state.


"I think there's a sweet spot for a language that accepts mutability inside of actors, but only allows immutable objects to be sent as messages"

Absolutely agree. I had great hopes an "Actor" type would be created in Swift, where every public function would be thread-safe, accept only "pure" structs / value objects, and which would automatically run on its own coroutine.

Unfortunately the concurrency story seems completely abandoned for this language.


We may get our wish when multicore finally drops for OCaml.


The Kotlin KTOR framework is the most promising new web stack I’ve seen. It doesn’t have the higher level runtime features of Erlang but it checks most of your other boxes.


I like Kotlin but I'd say a strength of BEAM is that it doesn't allow for infinite loops, which means coroutines can't block others. This is a fundamental strength.

It allows the runtime to schedule coroutines effectively - they can't block for more than a function call (recursion is how you do "infinite" loops).

I think a future competitor to BEAM languages would need this feature.


This is the kind of thing I was referring to by higher level runtime features in Erlang. If you really need this sort of thing then you should probably be looking at an Erlang stack but I think for a lot of projects a less exotic and also much more rigorously typed language is going to be more productive.


What do you mean "more productive"? You'll get your code out to prod way faster in a BEAM language, and in my experience the only remaining errors are relatively minor and easy to "wait to fix", because the BEAM will keep on keeping on with no user-facing effect (maybe your error logs get a bit polluted). Whole classes of errors are not even possible because of "copy-on-write" message passing. I recently fixed a bug that tripped during a race condition, entangled with a blocking call across two datacenters 1000 miles apart, in about one hour, because you can introspect literally everything in the VM with very little hassle, and IO writes are atomic (an IO write to screen will never be interrupted by another IO write to screen).

I call that productivity.


As someone that uses elixir for webapps and ocaml/reasonml for frontend work, I hope that https://gleam.run gets traction, seems like a nice BEAM language with types that is evolving quickly.


I would argue that if you're looking for a simpler way to build CRUD apps, just skip the middleman and use PostgREST. I've been playing with Elixir and Phoenix, and while they're pretty great, I like PostgREST much better. You eliminate an unnecessary (in most cases) abstraction, and leaning on PostgreSQL's grown-up authorization and authentication is invaluable for securing your app.

For non webapp stuff I've been happy using golang.


I'd love to hear your opinion on liveview if you've played with it at all. The two ways forward for the industry I see are something like hasura/postgrest with heavy js in the frontend, or something like liveview/blazor. I'd expect the more decoupled approach to win long term, but demos like this: https://github.com/moomerman/flappy-phoenix are extremely impressive.


I commented elsewhere, but I made a quick quirky game with full source code using LiveView.

[0]: https://hn.lddstudios.com/

[1]: https://github.com/ldd/hn_comments_game/

Not trying to push my views or sell anything, but I thought I would share it with people anyway.


This 100%.

Dialyzer helps a bit but is difficult to work with due to really cryptic errors. Also the workflow of having the typechecker run as a separate process not part of the compiler feels really cumbersome; it's easy not to notice you have a type error somewhere (often miles away from where the error originates!).


You're working with statically typed compiled languages tho. Once you try using a dynamic language you realize another editor is enough IMO. I use emacs for anything dynamically typed (including compiled languages like elixir) and intellij for scala/java.


Webstorm or PyCharm isn't as good as IntelliJ or ReSharper, but holy hell is it better than just a text editor, even emacs.

I really don't understand why so many programmers proudly proclaim that they do things the hard way and wear that as a badge of honor.


PhpStorm compared to vim is like an oceanside resort compared to a walk in the desert without a water bottle. I wouldn't be surprised if there are thousands of bugs PhpStorm has prevented or helped me discover.

JetBrains products are absolutely critical if you're using a dynamic, uncompiled language, and turning down the offer is professional misconduct, even if it requires buying a new computer with more RAM. I don't know who thought it was a good idea to pretend that a typo in one use of a variable is a legitimate expression of developer intent, but JetBrains saves your users from that hell.


That's an old, old trope. "Real men" do it the way that takes 3X as long and yields code with more bugs.


I feel like "tool use" follows roughly the same curve as a Gartner Hype Cycle, but without an upper bound on the right (as it implies a below-peak asymptote).

In the very beginning, less is often better, since the tool is prompting you with too many things you don't understand. Then you get past that point and you're massively more productive, since it's catching all your simple mistakes. Then you become disillusioned since it doesn't catch all mistakes, and you start learning in detail how it has failed you, and you just (╯°□°)╯︵ ┻━┻ the whole thing (the "real man" trough). And in the end you go back to sophisticated tools, as you realize a 70% solution can still give you magnitudes more productivity.


But the tooling on emacs or whatever is up to par. That was my original point - not that a plain text editor is a real man's way, just that it's good enough. PyCharm or whatnot isn't necessarily much better than what you can do in emacs, but it isn't portable to other languages or as flexible. A text editor like vi or emacs isn't much different from an IDE's functionality when looking at a dynamic language. It's a WHOLE DIFFERENT WORLD with Scala though (IMO - that's arguable, but refactorings etc. aren't at parity with IntelliJ), and I don't foresee myself ditching IntelliJ for Scala dev any time soon.

I remember showing a scala developer who was using sublime the "extract method" feature and some refactorings in intellij and he was like "HOW DOES IT KNOW ABOUT THE CODE THOUGH???" - the IDEs have great features, but they're less differentiating for dynamic languages as a lot of the OSS tools are just as good. Eg VS Code MS Python extension for example. It's just great.


It's miles better on Python and most javascript that I've touched (VSCode's ecosystem does tend to have more breadth, and if you're working on something that VSCode has plugins for but Intellij does not, yea - VSCode can be noticeably better for most purposes). Most commonly around stuff that requires better understanding of the structure of the language / project, like refactoring and accurately finding usages.

But yes, for many dynamic languages a fat IDE is less beneficial, especially for small-ish projects (anything where you can really "know" the whole system).

