
It's frightening how authoritarian the Left has become around the world. Ambiguous laws about 'hate' seem poised to protect the world from thought crimes by curtailing basic human rights.

In this context Musk is right and has the power to bring change. He will lose a lot of that power under a Harris presidency that has advertised its plans to continue the crusade against freedom of speech.


Wow, just wow. The gloves are off. I wouldn’t expect anything this naive on HN.

There is a difference between freedom of expression (freedom of opinion) and freedom of speech. The latter contains the former, but is much more than that, because freedom of expression stops where hate and discrimination begin.


The human right of free speech protects hate, disinformation, and a host of other horrible words one might say. That's because 'hate' and 'disinformation' are ambiguous terms used by tyrants to arrest dissidents. So long as someone isn't making true threats or inciting imminent lawlessness, humans have a right to their own thoughts and the expression of their ideas. Your definition of hate is far from universal and will only lead to tyranny.


How is preventing a color revolution "authoritarian"?

https://www.washingtonpost.com/technology/2023/01/08/brazil-...

https://www.globaltimes.cn/page/202112/1240540.shtml

You'd want the White House to do the same if China tried something similar with, say, TikTok ;)


*would lose a lot of that power.

I'm hoping for a different outcome, and that the world turns in a better direction than forcing "Democrats" to vote for a candidate who was never nominated.


What a waste of humanity's resources.


Amen. I said the same thing many months ago, but the true believers on HN who could not see beyond their own paychecks downvoted me like I was advocating for Satan.


By that argument, anything that an AI system could do today that a human does right now is an incredibly massive waste of our resources :-)


Honestly, I use Edge Copilot for almost all my searches now. Google is no longer looking and soon the majority of people will realize that.


I don't understand what you mean by looking in the sentence "Google is no longer looking". Do you mean they're no longer crawling websites and therefore blind to new content online?


Same. This has been a serious productivity booster.


I'm horrible at stats, but is this saying that if I have 5 jars of pennies and I guess the amount in each one, then take the average of all my guesses and the variance between them, I can adjust each guess to a more likely answer with this method?


Not necessarily "more likely" but "better" in some "loss" sense.

It could be "more likely" in the jars example where estimates may convey some relevant information for each other. But consider this example from wikipedia:

"Suppose we are to estimate three unrelated parameters, such as the US wheat yield for 1993, the number of spectators at the Wimbledon tennis tournament in 2001, and the weight of a randomly chosen candy bar from the supermarket. Suppose we have independent Gaussian measurements of each of these quantities. Stein's example now tells us that we can get a better estimate (on average) for the vector of three parameters by simultaneously using the three unrelated measurements."

https://en.wikipedia.org/wiki/Stein%27s_example#Example
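
If it helps, here's a rough simulation of that shrinkage for the jars. The jar counts and the noise level below are made up, and this uses the variant that shrinks every guess toward the mean of the guesses (it needs at least 4 jars):

    # Sketch of James-Stein shrinkage toward the grand mean, with hypothetical numbers.
    # Each "guess" is the true count plus Gaussian noise with a known standard deviation.
    import numpy as np

    rng = np.random.default_rng(0)
    theta = np.array([120.0, 340.0, 95.0, 510.0, 260.0])  # true pennies per jar (made up)
    sigma = 150.0                                          # assumed accuracy of a guess (made up)
    p = len(theta)

    def james_stein(x, sigma):
        # Shrink each component toward the mean of x ("positive part" variant).
        xbar = x.mean()
        ss = np.sum((x - xbar) ** 2)
        shrink = max(0.0, 1.0 - (len(x) - 3) * sigma**2 / ss)
        return xbar + shrink * (x - xbar)

    raw_err, js_err = 0.0, 0.0
    trials = 20000
    for _ in range(trials):
        x = theta + rng.normal(0.0, sigma, size=p)         # one round of noisy guesses
        raw_err += np.sum((x - theta) ** 2)
        js_err += np.sum((james_stein(x, sigma) - theta) ** 2)

    print("avg total squared error, raw guesses :", raw_err / trials)
    print("avg total squared error, James-Stein :", js_err / trials)

On average the shrunken guesses have lower total squared error than the raw guesses, even though any single jar's estimate can get worse.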


No, I don't think these problems are related.


Unless that's a false dichotomy. Obesity can be treated in less harmful ways. Companies can be regulated into including only a safe amount of sugar in drinks and foods that aren't considered sweets.


But when are those changes going to happen? Ozempic is available today.


You seem to be evaluating the LLM based on a single response rather than the whole "conversation." The user usually interacts with the LLM through 3-4 different responses to reach the right answer, which is valuable in itself. They're using both systems just as anyone would in a conversation.

I find LLMs useful for:

- Building bridges from familiar concepts to new ones.

- Checking my analysis and implementation for mistakes and gaps. This includes detecting subtle logic errors with static analysis.

- Condensing lengthy descriptions and complex conversations.

- Creating diagrams from verbal descriptions of flows.

- Finding design patterns to support my design, along with the basic structure that fits the chosen pattern.

- Writing unit tests and improving code coverage.

- Analyzing the credibility of information sources such as news stories and scientific studies.

- Generating original ideas and solutions to problems I may not have encountered before.

- Many more edge cases that help me quickly turn an idea into a concrete concept.

I have also used LLMs to entirely generate new tools and workflows, using languages I had barely touched before. This improved my knowledge of those languages and sped up my learning through practical examples.

Just as the printing press made calligraphy obsolete, LLMs will eventually make coding obsolete. Coding will be replaced by pseudo code and narrative that is independent of any framework or platform.

This does not mean that design and development will become obsolete; they will just become faster, without being hindered by the unnecessary barrier of coding.

Don't dismiss the value of this tool just because some marketers and regulators are using hype and fear to make money. LLMs can enhance your existing skill and make you more productive. They are not a crutch, they are a third leg.


> They are not a crutch, they are a third leg.

I don't think a third leg would make it easier to walk if you already have two legs. But it is a good way to see it, some would love a third leg, but I think until it gets better balanced most people will avoid it.


(removed)


Can you post enough about your local LLM setup for me to google/reproduce this?

There is so much out there for LLMs that parsing it all is a pain.


For audio generation I recommend Bark. I am getting 14 seconds of audio at about a third of ElevenLabs' quality in about 2 minutes.

This is happening on a Windows 10 Dell with 32 GB of RAM, an i5, and an Nvidia GeForce GTX 1050 with 4 GB of VRAM.
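
The Bark part is basically the quickstart from the suno-ai/bark README, if I'm remembering it right; on 4 GB of VRAM I believe you need the small-model switch, so double-check their docs for the exact flags:

    # Rough Bark quickstart; the env var is the low-VRAM switch as far as I recall.
    import os
    os.environ["SUNO_USE_SMALL_MODELS"] = "True"  # keep the models small enough for ~4 GB of VRAM
    from bark import SAMPLE_RATE, generate_audio, preload_models
    from scipy.io.wavfile import write as write_wav

    preload_models()                              # downloads and caches the weights on first run
    audio = generate_audio("Hello from a very modest GPU.")
    write_wav("bark_out.wav", SAMPLE_RATE, audio)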

I'm also able to run local LLMs decently thanks to llama.cpp and other libraries that can split models between RAM and VRAM. There are other tools that can help with this as well, including Ollama.
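
The RAM/VRAM split is easiest to see through llama.cpp's Python bindings (llama-cpp-python). The model path below is a placeholder for whatever quantized GGUF file you download, and n_gpu_layers is the knob that decides how much of the model lands on the GPU:

    # Sketch with llama-cpp-python: offload a few layers to the GPU, keep the rest in RAM.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/some-7b-model.Q4_K_M.gguf",  # placeholder: any quantized GGUF model
        n_gpu_layers=16,   # tune to whatever fits in 4 GB of VRAM; 0 = CPU only
        n_ctx=2048,
    )
    out = llm("Q: Name three uses of a local LLM. A:", max_tokens=128)
    print(out["choices"][0]["text"])

Ollama wraps the same idea behind a single command (ollama run <model>) if you'd rather not touch Python.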

I suggest subscribing to r/LocalLLaMA. I also suggest using Bing Copilot in Edge with access to the page you're viewing enabled. I often use it to find new GitHub libraries and to get the first steps for starting with a new framework.


https://today.ucsd.edu/story/a-new-replication-crisis-resear....

> Papers that cannot be replicated are cited 153 times more because their findings are interesting, according to a new UC San Diego study

> In psychology, only 39 percent of the 100 experiments successfully replicated.


Was this study replicated?

