Hacker News | GigabyteCoin's comments

>I'd wager more than half wouldn't know what a bot net is, or what an exit node is.

Hell, I'd wager that more than 90% of their users have no idea what a bot net is.

Right click->inspect element->delete node

Did this too. That eye is doing a disservice to that site: if I wasn't a web developer who knows how to remove it, I would not have finished reading the article and would have left the page. Annoying things chase people away; I guarantee that site gets fewer return visitors because of it.

Just because you "understand how to drive" doesn't mean you are protected from being killed while driving.

>Moto E image taken from its review on The Guardian

That link back to the Guardian doesn't give him any rights to use the image. In fact, he's probably making it easier for the rights holder to track him down now that he has linked to the page.

Not that I think the Guardian would care... but still. Don't use other people's images without permission.

My hope for the Bitcoin blockchain died a few months ago when I realized that all of the Bitcoin proponents on reddit were laughing at the re-design of an old Bitcoin mining website. One I used to mine on before mining became unprofitable for everybody but a few large-scale miners. "Too little, too late!" they all chanted in unison. "Mining websites are dead!" they all laughed. And then it hit me...

The fact that the fair distribution of the coin is dead means that the entire blockchain will die imho. Bitcoin's price really exploded when the mining scene was lucrative for everybody interested in the coin. It tapered off rather quickly once mining became impossible for all but the few.

Why would anybody buy into Bitcoin now that one of its original promises, decentralized mining, has been broken?

Why would anybody buy into Bitcoin when it means only fattening the pockets of a few (already rich) mining syndicates?

I certainly wouldn't, and I was a massive Bitcoin proponent for most of its life.

Many ideas that Bitcoin brought with it will live on, but the Bitcoin blockchain in its current implementation will "die" sooner rather than later imho.

The fact that one can transfer value from point A to point B for next to nothing and without government oversight is amazing. The fact that the majority of Bitcoins that will ever exist are held by a few individuals (and always will be while mining for the layman is dead) is not.

You view Bitcoin as an "investment", not a "tool"; therein lies your problem.

"Why would anybody buy into Bitcoin when it means only fattening the pockets of a few (already rich) mining syndicates?"

Replace "Bitcoin" with "USD" in the above sentence, and "mining syndicates" with "banks".

Nobody has to buy into the USD; it's already there, established. Bitcoin has to be better than the established norm in order to succeed. It is in some ways, don't get me wrong. But it still isn't perfect, and I personally think a better implementation will succeed in the future.

Namely, one where the majority of "little people" can mine the coin. Or one with a fair and equal distribution to at least hundreds of millions of people (preferably billions) before the coin is launched.

Those were some of the founder's original ideas: that anybody with a CPU could mine the coin. And then technology slowly took over...

I think the parent commenter's point was that Bitcoin was billed as a solution to the problem of the few holding most of the money, yet has appeared to fail in that regard, and continues to fail as mining is increasingly inaccessible to all but those who are sufficiently wealthy to afford warehouses full of state-of-the-art ASICs.

This isn't to say that Bitcoin hasn't succeeded on a lot of fronts. Just that one of the key fronts - equal ownership of the money supply - hasn't been quite as successful as others.

Most miners sell their bitcoins to buy more hardware and pay their electricity bills...

Why go through the trouble of running a mining rig instead of just buying the coins sold by miners in the market?

It is true that the majority of Bitcoins are held by a few early adopters (NB: not today's miners!). They will diversify out as time goes on. Wealth can come from skill, hard work, luck, politicking, etc... Rejecting Bitcoin because it enriches a few people in a manner that you find objectionable is... myopic.

Look around you, almost everything you do enriches one capitalist or another. Many intelligent people have tried finding a way around that, with limited to no success. Good luck.

>Rejecting Bitcoin because it enriches a few people in a manner that you find objectionable is... myopic.

That's not why I think Bitcoin is dead.

It's primarily the fact that nobody can mine it anymore, thus nobody will be interested in it. All of my friends who I introduced to Bitcoin (and were fanatics while they were mining) don't even talk about it anymore. They're all just steadily hoarding their coins, not using them for anything, in the hopes of cashing out millions of dollars once all of the later "suckers" come along and buy into the blockchain.

It was the "free money" aspect that really sold the idea to most people. Nobody cares about the "exchange money for Bitcoin just because Bitcoin is so much more fluid" aspect.

I proposed an altcoin that would increase the reward logarithmically as a result of increasing hash power.

Because otherwise there is excessive deflation and instability.
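For what it's worth, a minimal sketch of what a logarithmically scaling reward could look like (the base reward and reference hashrate here are illustrative assumptions, not any real coin's parameters):

```python
import math

def block_reward(network_hashrate: float,
                 base_reward: float = 50.0,
                 base_hashrate: float = 1e6) -> float:
    """Hypothetical block reward that grows logarithmically with
    total network hash power: more miners raise the reward, but
    with sharply diminishing returns."""
    if network_hashrate <= base_hashrate:
        return base_reward
    return base_reward * (1.0 + math.log(network_hashrate / base_hashrate))
```

Under this schedule a tenfold jump in hash power only roughly triples the reward, which is the damping effect being argued for.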

If by axe you mean an idea, instead of a tangible object, then yes.

As someone who prizes his high quality movie torrents, I can testify that files over 5GB in size don't stay adequately seeded for very long after their initial release.

It seems like an arbitrary limit, but an adequate one for the time being.

Just a personal gripe, but I hate it when websites don't work without the pointless www. in front of their domain.

I just texted a friend to visit https://lily.camera and he'll never see it, because it doesn't exist.


While I agree they should make www redirect, it's not pointless. If you serve subdomains, or plan on serving them, using www as your primary domain helps differentiate cookies.
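The cookie point, in a simplified sketch of RFC 6265 domain matching (hypothetical hostnames; real user agents apply more rules, e.g. public-suffix checks):

```python
def cookie_sent_to(cookie_domain: str, request_host: str) -> bool:
    """Simplified domain-match: a cookie scoped to `cookie_domain`
    is sent to that exact host and to every subdomain of it."""
    return (request_host == cookie_domain
            or request_host.endswith("." + cookie_domain))

# A cookie set on the apex domain leaks to every subdomain:
print(cookie_sent_to("example.com", "api.example.com"))      # True
# A cookie set on www stays on www:
print(cookie_sent_to("www.example.com", "api.example.com"))  # False
```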


> it's not pointless

It is entirely pointless to not redirect. There's no reason the URL posted shouldn't redirect to the www version.


Point taken. But I doubt this sales page will be implementing subdomains anytime soon, if ever.


lily.camera redirects


http does. https doesn't.


>Yeah, that's how you make friends. Certainly not into government spying but this is a ridiculous comparison.

I disagree. The wording might not be perfect, but the comparison is sound imho.

Potential friends aren't typically total strangers; you're usually at least considered acquaintances at first. And potential friends don't typically go "looking around" your home on the first visit, either. They're usually quite respectful of your privacy, from what I have gathered over the years.

They don't just get up and search for a bathroom, they'll ask you where it is.


How can there be experts on a subject that doesn't yet exist?


AI experts don't know how to create an artificial intelligence. AI researchers study how to solve various problems in CS that are traditionally performed by humans, but that humans don't solve by carrying out an algorithm by hand: natural language processing, machine learning, automated reasoning, search (e.g. chess).


There are at least two flavors of A.I. "Strong A.I." seeks to build something human-like that could pass Turing's imitation game test. The new movie Ex Machina explores this test. "Weak A.I." replicates just a single cognitive skill like game playing, pattern recognition, natural language, or something more trivial. Most recent A.I. work has been on the latter, or its theoretical background.


Actually, anything is possible in articles published by so-called tech journalists who have no idea of the fundamentals of the tech.


What do you mean it doesn't exist? I work in that field, it exists.


What do you mean it exists? How do you define AI? Can your project reason with you? Or is it simply an input-output type of program? Just because you use natural language with it instead of punch cards doesn't mean it is intelligent.

People tried to do AI from the '60s through the '90s; it is dubbed symbolic AI. It didn't work out, and there's a good chance it never will. Today, machine learning algos and a bunch of automated statistics are called "Artificial Intelligence". It's not intelligence at all. Intelligence implies something more than I/O computation.


I tend to agree that AI today is a long way from what the founders of the subject imagined – it's become something more like "Applied Computer Science". But what's now called "Artificial General Intelligence" isn't dead, and people are still working on it.

Also, it's trickier than you'd think to narrow down what counts as intelligence. There aren't really any hard lines between an I/O program and an intelligent agent, even though they seem pretty far apart.


Just because you don't see the hard lines doesn't mean they aren't there. We are deluding ourselves by avoiding a hard definition of intelligence so we can keep believing that we are creating AI when it's really nothing of the sort.


Just because you can't see unicorns doesn't mean they aren't there, but at some point you have to give up the search. It's fine to talk about how, broadly speaking, rats are more intelligent than ants, plants or microbes (which are basically I/O rules with a body), chimpanzees more so than rats, humans more so than chimps etc. But in general there's a ton of overlap and the qualities we associate with intelligence – memory, planning, self-awareness, tool use, whatever – are only loosely correlated continua.

There are a few more binary measures in intelligence research, such as the mirror test, but at best they're only a small piece of the puzzle. There's no sudden point where everything clicks into place.

Of course, if you have such a good definition of intelligence, feel free to enlighten me.


Well I would say that "intelligence" is learning and inference with causal models rather than just predictive or correlative models. You can then cash it all out into a few different branches of cognition, like perception (distinguishing/classifying which available causal models best match the feature data under observation), learning (taking observed feature data and using it to refine causal models for greater accuracy), inference (using causal models to make predictions under counterfactual conditions, which can include planning as a special case), and the occasional act of conceptual refinement/reduction (in which a model is found of how one model can predict the free parameters of another).


It's an interesting perspective, but the thing about this kind of definition is that it's very much focused on the mechanism of intelligence rather than the behaviour that the mechanism produces – which flies in the face of our intuition about what intelligence is, I think.

If we found out that one species of chimp learns sign languages through a causal model while another learns it through an associative one (for example) we wouldn't label one more or less intelligent, because it's the end result that matters – don't you think?

Likewise, arguably the ultimate goals of AI are behavioural (machines that can think/solve problems/communicate/create etc.), even if it's been relatively focused on mechanisms lately. Any particular kind of modelling is just a means to that end. Precisely what that end is is still a bit hard to pin down, though.


What do you mean by "associative model"? That doesn't map to anything I've heard of in cognitive science, statistics, machine learning, or Good Old-Fashioned AI.

But actually, I would expect different behaviors from an animal that learns language via a purely correlational and discriminative model (like most neural networks) versus a causal model. Causal models compress the empirical data better precisely because they're modelling sparse bones of reality rather than the abundant "meat" of correlated features. You should be able to generalize better and faster with a causal model than with a discriminative, correlative one.


I think I meant correlational, but it was really just a placeholder for "a different model". You could replace the chimp with some kind of alien whose thinking model is completely, well, alien to us – but still proves its intelligence by virtue of having a spaceship.

I'm not necessarily saying that different models lead to exactly the same behaviour. Clearly, chimps' models don't generalise as well as ours do and they don't have a language model that matches ours in capability, for example, which leads to different behaviour. But given that their behaviour is generally thought of as less intelligent as opposed to not intelligent at all, it seems like the mechanism itself is not the important thing.


You are repeating the same mistake as before. The difference is significant between admitting we don't know what those lines are and claiming that they don't exist.


It's not a mistake – I see the difference and I am claiming that those lines don't exist. There's plenty of evidence for that, including the continuous range of intelligent behaviour from plants to humans. It's just an empirical fact that there's no hard line.

Of course, that could be wrong – new evidence may come to light, after all. But even so, it doesn't make any sense to say that trying to understand and replicate intelligence is deluded, just because we don't know where that line is – because figuring out what intelligence is is exactly the problem that people are trying to solve. AI is one part of that, along with more empirical research in fields like cognitive science and biology.

Are people researching quantum gravity deluding themselves because they don't yet have a hard definition of quantum gravity? Figuring that out is exactly the point!


If you are going in the wrong direction, you won't get to your destination by going faster. You should stop and rethink and you won't do this unless you can admit that you might be wrong.

How much research had you done before you assertively proclaimed that those lines don't exist? Because it looks nothing like a smooth transition to me.


Face recognition, speech recognition, machine translation, text classification and more don't exist?


That's the punch line of a hyper-critical, yet insightful, joke that came out of one of the many AI downturns in past decades: as soon as an algo or implementation works, it isn't AI anymore, it's (fill-in-the-blank specialization). So AI is just the present set of algos that don't (yet?) work.

So that fuzzy logic, that's not AI anymore, that's a footnote in the EE control systems theory class, isn't it? And face recognition is a parallel processing assignment in an FPGA class, speech recognition an advanced section in a DSP theory class, etc.


Right, it's not yet at that point for face and speech recognition, but that certainly happened to game AI (Chess), constraint solving (Sudoku), and planning.


Experts on a subject that doesn't exist? Like professors of theology?


Theology exists.


