Hacker News | my50cents's comments

There is a rule that requires cloud providers like Alibaba Cloud to report vulnerabilities within 2 days. Alibaba violated this rule.

Note that the article is misleading: the rule doesn't require that the disclosure be made to the government first.


Your info is accurate, but I doubt anyone cares.


Would you mind citing the rule? A similar-sounding policy linked elsewhere doesn't seem to apply to this situation.


This is the law mentioned in the article; a link there says this is an application of the MIIT ruling that came into effect September 1st:

http://www.gov.cn/gongbao/content/2021/content_5641351.htm

Here is a machine translation of the relevant section that seems to agree with the GP:

>Article 7 Network product providers shall perform the following network product security vulnerability management obligations, ensure that security vulnerabilities in their products are repaired in a timely manner and reasonably disclosed, and guide and support product users in taking preventive measures:

>(1) After discovering or learning of security vulnerabilities in the network products they provide, they shall immediately take measures and organize verification of the vulnerabilities to assess the degree of harm and the scope of impact; for security vulnerabilities in upstream products or components, they shall immediately notify the relevant product provider.

>(2) The relevant vulnerability information shall be reported to the Ministry of Industry and Information Technology's cyber security threat and vulnerability information sharing platform within 2 days. The submission shall include the name, model, and version of the affected network product, along with the technical characteristics, harm, and scope of the vulnerability.

>(3) Remediation of network product security vulnerabilities shall be organized in a timely manner. Where product users (including downstream manufacturers) need to take measures such as software or firmware upgrades, the vulnerabilities and repair methods shall be promptly communicated to potentially affected product users, and the necessary technical support shall be provided.


If this is actually true, this article seems like borderline propaganda.


> Since the interior state of another being can only be understood through interaction, no objective answer is possible to the question of when an “it” becomes a “who”

A large language model can in theory be understood at an algorithmic level by reverse engineering. If the algorithm turned out to be a giant lookup table, it's an "it". If the algorithm contained an obvious model of self, it's a "who".


> an obvious model of self

How do you determine what's an "obvious" model of self?


> If the algorithm turned out to be a giant lookup table, it's an "it". If the algorithm contained an obvious model of self, it's a "who".

It seems to me that (at least on a naive understanding of those two possibilities) there is a large middle ground where a language model isn't a lookup table but doesn't have a model of self either.


We assume other people are "who" and not "it" because their behavior is predictably similar to our own, and so we model their experiences and perspectives based on our own subjective experience.

Occam's razor suggests that if we know other people have the same basic configuration as ourselves, then the subjective experiences of others will be more or less comparable to our own.

Additional assumptions are necessary to propose that Chalmersian/philosophical zombies could exist, having the same human hardware and behavior but lacking subjective experience. Given the principle of least complexity and the absence of evidence for alternative explanations, I think it's rational to assume the consciousness of other humans and certain animals with nearly 100% confidence.

That means something happening in the neocortex is causing consciousness. We know it's in the cortex because the history of injuries to, or absence of, other parts of the brain gives us empirical evidence. We know that hippocampus injury can leave a person unable to remember more than 5 minutes of their past, yet these individuals retain their personalities and the long-term memories from before the injury. This suggests that long-term memory is encoded in the synaptic structure of the neocortex and that consciousness is an emergent result of neocortical operation.

There might be some involvement of particular brain regions, the thalamus, or other structures, but it looks like the answer is part of whatever algorithm is encoded in the structure and processing of the neocortex.

It could be that consciousness doesn't require an explicit model of self. This is implied by ego death and other experiences reported by meditators and psychedelic users. The thing that is having a subjective experience could simply be a consequence of processing a particular configuration of information. The self concept seems to be a separate model that can be perceived as part of the process of awareness, but it appears to be a discrete thing.

To me that presents an interesting ethical question - if gpt-3 has a sense of self, then is it OK to subject it to the incoherent flashes of single moments of consciousness it undergoes each time you run it?

If it has no self concept, it might still have subjective experience, but it would have no persistent, contiguous experience. If it did have a self concept, then it would have a static past that informed the results of each run, as if you could isolate a single moment of awareness, then instantly reset the brain to a saved state and run it again. The output could be convincingly continuous, but the subjective experiences would be a myriad of similar but unrelated singularities of awareness.

I think gpt-3 lacks consciousness, and that a persistence mechanism is missing that plays a part in whatever is happening that causes awareness.

Hopefully, at some point, a mathematician or scientist will be able to identify and explain the process of consciousness so that we can be reasonably certain we're not subjecting entities to a really weird tortured existence.


Buy a portfolio of dividend-paying stocks with highly secure underlying businesses such as public utilities.

Hold on to a stock as long as the underlying business is still secure.

Trade out of a stock only if there is a more secure one with more dividends to trade into.

If market prices allow the dividends from your $600K portfolio to exceed your living expenses, then no more work for money.
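The coverage test in the last step can be sketched in a few lines. The $600K figure comes from the comment; the blended yield and the expense number are purely illustrative assumptions, not advice:

```python
# Sketch of the "dividends cover living expenses" check described above.
# The yield and expense figures are illustrative assumptions.

portfolio_value = 600_000          # dollars invested
dividend_yield = 0.045             # assumed blended yield for a utility-heavy portfolio
annual_living_expenses = 25_000    # assumed yearly spending

annual_dividends = portfolio_value * dividend_yield  # 27,000

if annual_dividends >= annual_living_expenses:
    print(f"Dividends of ${annual_dividends:,.0f} cover expenses: no more work for money")
else:
    shortfall = annual_living_expenses - annual_dividends
    print(f"Shortfall of ${shortfall:,.0f} per year")
```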


Yes. When I do that, I know where the pieces are on the board, but I don't have their visual properties such as shape or color in my mind.


I was not able to visualize images. When I first heard about aphantasia, I reasoned that since I could have vivid dreams, I must have the hardware to see.

Then I started training myself, first by visualizing basic 2D shapes, then letters, then 3D letters ... eventually I was able to conjure up complex images at will, just like in a vivid dream.

Some time later, it occurred to me that by doing this, I likely stored more data than necessary in my tiny little brain, so I spent more time to untrain myself, by deliberately ignoring images when they pop up.

Now I can no longer conjure up images. My dreams have become more abstract. It has become harder for me to remember faces. On the other hand, I have generally found life easier to process and understand. I feel wiser than my younger self.


Crypto is not hard to value. Valuation is the estimation of the present value of future free cash flows. Since most crypto tokens produce negligible free cash flows relative to their market cap, they have approximately zero intrinsic value relative to their market price.
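That definition of valuation can be sketched directly; the cash-flow forecast and discount rate below are illustrative assumptions:

```python
# Minimal discounted-cash-flow sketch: intrinsic value is the sum of
# expected future free cash flows, each discounted back to the present.

def present_value(free_cash_flows, discount_rate):
    """Sum of FCF_t / (1 + r)^t for t = 1..n."""
    return sum(
        cf / (1 + discount_rate) ** t
        for t, cf in enumerate(free_cash_flows, start=1)
    )

# An asset forecast to pay $100/year for 5 years, discounted at 8%:
print(round(present_value([100] * 5, 0.08), 2))  # 399.27

# A token producing zero free cash flow has zero DCF value:
print(present_value([0] * 5, 0.08))              # 0.0
```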


> Since most crypto tokens produce negligible free cash flows relative to their market cap, they have approximately zero intrinsic value relative to their market price.

What's the intrinsic value of stocks that don't distribute dividends in this model?


The simple answer is that if a stock is expected to never pay any dividends, its intrinsic value is zero.

Note that from shareholders' perspective, paying dividends is economically equivalent to a share buyback (modulo tax considerations), so there are exceptions.
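The dividend/buyback equivalence can be shown with toy numbers (taxes ignored, everything made up for illustration):

```python
# Toy illustration of why a cash dividend and a share buyback leave a
# shareholder with the same total value. All numbers are made up.

market_cap = 1_000.0      # firm value before returning cash
shares = 100.0
cash_returned = 100.0
price = market_cap / shares              # 10.0 per share

# Case 1: pay a dividend. The share price drops by the payout per share.
dividend_per_share = cash_returned / shares                    # 1.0
price_after_dividend = (market_cap - cash_returned) / shares   # 9.0
holder_value_dividend = price_after_dividend + dividend_per_share

# Case 2: buy back shares at the market price. Fewer shares, same price.
shares_bought = cash_returned / price                          # 10.0
price_after_buyback = (market_cap - cash_returned) / (shares - shares_bought)  # 10.0
holder_value_buyback = price_after_buyback   # a non-selling holder keeps one share

print(holder_value_dividend, holder_value_buyback)  # 10.0 10.0
```

Either way, a shareholder who doesn't sell ends up with $10 of value per original share: $9 of stock plus $1 of cash, or one share still worth $10.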


> The simple answer is that if a stock is expected to never pay any dividends, its intrinsic value is zero.

Which is obviously wrong, right? Any company that has liquid assets (and no debt) is at least worth the selling value of these assets, even if it doesn't pay dividends. So I'm not sure your model helps here.


I think in that case you would expect the company to return value to shareholders when liquidating its assets.


Exactly, so it doesn't have a value of 0 for these shareholders.


You don’t value the cash flows associated with the share. A share is not an abstract financial product. It is direct ownership of a company.

You value the company and then you can divide the price by the total number of shares if you want. Apart from some rare exceptions like Amazon in its first years, a company which doesn’t generate any cash-flows is soon to be an ex-company.

Dividends muddy the water a bit but you get the general idea.


> Valuation is the estimation of the present value of future free cash flows.

That’s only DCF valuation. It is not the be-all and end-all of valuation. DCF is a way to value a perpetuity. It makes sense for assets you can assimilate to a perpetuity (like a company). It makes no sense if you can’t. The easiest counterexamples are raw materials and currencies.
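The perpetuity point can be sketched numerically: a constant cash flow C discounted at rate r has closed-form value C / r, and a finite DCF sum converges to it as the horizon grows. The cash flow and rate below are illustrative assumptions:

```python
# A perpetuity paying C per year at discount rate r is worth C / r;
# a finite-horizon DCF sum approaches that value as the horizon grows.

def dcf(cash_flow, rate, years):
    """Present value of a constant cash flow over a finite horizon."""
    return sum(cash_flow / (1 + rate) ** t for t in range(1, years + 1))

c, r = 100.0, 0.08
perpetuity_value = c / r                 # 1250.0

print(round(dcf(c, r, 50), 2))           # 1223.35 -- most of the value in 50 years
print(round(dcf(c, r, 500), 2))          # 1250.0  -- converged to C / r
```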

